Tag Archives: morality

Moral judgment condemning drug use and casual sex may be rooted in our genes

Prior research suggests that people who condemn drug use on moral grounds also tend to harshly judge others who engage in promiscuous, non-monogamous sex. A new study involving more than 8,000 twins not only confirmed this link but also showed that the association may be mediated by genes. Those who wrap their negative views of sexuality and drug use in a veneer of morality may, deep down, actually be looking out for their own reproductive strategy, shaming others in order to control their environment.

Public condemnation of casual sex and illicit drug use has never really gone away, despite massive cultural shifts during the 1960s counterculture movement. Although upbringing certainly has a part to play in shaping one’s views of the world and moral compass, psychologists have amassed increasing evidence that many of the instances when we righteously point our fingers may be selfish acts of self-interest.

It’s common for people who disapprove of illicit drug use to also frown upon casual sex. In principle, neither behavior should bother other people, since it doesn’t affect them directly unless they interact with those who engage in it. But past studies have shown that openness to casual sex is partially explained by genes. And those who are inclined to engage in noncommittal sex are also more likely to use recreational drugs.

“People adopt behaviors and attitudes, including certain moral views, that are advantageous to their own interests. People tend to associate recreational drug use with noncommitted sex. As such, people who are heavily oriented toward high commitment in sexual relationships morally condemn recreational drugs, as they benefit from environments in which high sexual commitment is the norm,” said Annika Karinen, a researcher at Vrije Universiteit Amsterdam in the Netherlands and lead author of the new study.

Karinen and colleagues decided to investigate whether there is any genetic basis to moral views on both sex and illicit drug use. They employed a dataset from a survey of 8,118 Finnish fraternal and identical twins. Identical twins share almost all of their genes, while fraternal twins share roughly half. As such, twin studies offer a natural laboratory that allows scientists to tease apart genetic factors from environmental ones when assessing behaviors.

Each participant answered a set of questions measuring their moral views on drug use and their openness to non-committed sex, as well as political affiliation, religiosity, and other factors.

When comparing the results of the questionnaires between fraternal and identical twin pairs, the Dutch psychologists found that moral views concerning both recreational drugs and casual sex are approximately 50% heritable, while the other 50% can be explained by the environment in which people grew up and the unique experiences not shared by the twins. Moreover, the relationship between openness to casual sex and views on drugs is about 75% attributable to genetic effects.
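For readers curious how twin comparisons yield numbers like these, here is a minimal sketch of the classic textbook approach (Falconer's formula). To be clear, the correlation values below are invented for illustration, not taken from the Finnish dataset, and the actual study used more elaborate biometric modeling; the underlying logic, though, is the same: heritability is read off the gap between identical and fraternal twin similarity.

```python
# Illustrative sketch of Falconer's formula, the textbook way twin studies
# decompose trait variance. The correlations below are made up for
# demonstration; they are NOT figures reported in the study.

def falconer_decomposition(r_mz: float, r_dz: float) -> dict:
    """Split trait variance using identical (MZ) and fraternal (DZ)
    twin-pair correlations.

    h2 (heritability)       = 2 * (r_mz - r_dz)
    c2 (shared environment) = r_mz - h2
    e2 (unique environment) = 1 - r_mz
    """
    h2 = 2 * (r_mz - r_dz)
    c2 = r_mz - h2
    e2 = 1 - r_mz
    return {"heritability": h2, "shared_env": c2, "unique_env": e2}

# Hypothetical correlations chosen so heritability lands near the
# article's ~50% figure:
print(falconer_decomposition(r_mz=0.50, r_dz=0.25))
# -> {'heritability': 0.5, 'shared_env': 0.0, 'unique_env': 0.5}
```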

“These findings run counter to the idea that within-family similarities in views toward drugs and sex reflect social transmission from parents to offspring; instead, such similarities appear to reflect shared genes,” the researchers wrote in the journal Psychological Science.

Those who frown upon casual sex and drug use (which they associate with casual sex) may be protecting a sexual strategy that revolves around committed relationships into which they’ve invested a lot of resources. People who engage in casual sex are seen as a threat to the monogamous reproductive strategy because there’s a risk of losing one’s partner in an environment where casual sex is deemed acceptable. By judging other people’s sexuality and drug use from a moral high ground, people who prefer monogamous relationships gain a weapon they can wield to control the sexuality of others and serve their own interests.

“Important parts of hot-button culture-war issues flow from differences in lifestyle preferences between people, and those differences in lifestyle preferences appear to partly have a genetic basis,” Karinen added.

Atheists are just as ethical as believers, study shows — they just prioritize different things

Friction between believers and non-believers is present in many parts of the world, but the two groups may not be as different as you’d think. According to new research, both have moral compasses that support protecting the vulnerable, but in different ways: where believers value group cohesion, atheists tend to disregard authority.

Despite rising secularism, the idea of ‘amoral atheists’ seems to have taken root and is remarkably pervasive, with one 2017 study finding widespread “entrenched moral suspicion of atheists”.

“There is plenty of evidence that a lot of people associate atheists with immoral behavior, and that they do not trust them,” says Tomas Staahl, the author of a new study.

To see if atheists truly lack a moral compass, Staahl conducted two surveys examining the moral values of 429 American atheists and theists via Amazon’s Mechanical Turk platform. He also carried out two larger surveys involving 4,193 atheists and theists from the U.S. (a predominantly religious country) and Sweden (a predominantly irreligious country).

“First of all, in my studies, I did not see any substantial differences in concerns for vulnerable individuals. Believers and disbelievers scored very similar on this moral value (as well as on concerns about fairness, liberty, and epistemic rationality),” Staahl tells ZME Science.

The major takeaway is that atheists do have strong moral principles and they share many of the concerns that religious people have, especially when it comes to fairness and protecting the vulnerable.

However, the two groups think differently in some respects. Disbelievers are less inclined than believers to endorse moral values that serve group cohesion, such as respect for authority, ingroup loyalty, and sanctity.

“In my research I show that people who do not believe in God do think differently about morality than religious believers do (in the US and in Sweden),” Staahl explains.

“In particular, disbelievers view it as relatively irrelevant for morality to respect authorities, to be loyal to one’s ingroup/community, and to be concerned about sanctity and purity. They are also more inclined than believers to determine whether an action is morally justifiable based on its relative consequences (the relative harm done).”

The idea, Staahl says, is that atheists are most concerned about the consequences of their actions when it comes to harm. Take the classic trolley problem: a runaway trolley is going down the tracks and is about to kill five people. You can save them by using a switch to redirect the trolley, but this would kill one person on another track. Is it morally justified to flip the switch?

“Atheists are more inclined than believers to say yes, because they focus more on the relative consequences of the action versus inaction (1 dead rather than 5 dead). They are more “consequentialist”, or “utilitarian” in their moral judgments about harm than religious people are. I hope this clarifies things.”

This can propagate an image of atheists as cold, calculating, and less empathic, which can then contribute to the negative stereotypes many hold about them. This builds on the previously mentioned 2017 study, which found that “religion’s powerful influence on moral judgements persists, even among non-believers in secular societies.”

Another notable finding is that non-believers in the two countries investigated held similar moral beliefs.

“The one other thing I would like to highlight here is how similar disbelievers’ views about morality were in Sweden and in the US. This is noteworthy, especially because the US is a highly religious country, by western standards, whereas Sweden is considered one of the most secular countries in the world. Similarly, religious believers’ views about morality were strikingly similar across these two countries as well,” Staahl noted in an email.

Ultimately, the study shows that regardless of where people stand with regard to religion, they seem to have working moral compasses. However, Staahl notes, there could be more “fine-grained” differences that were not explored in this study. For instance, believers and disbelievers could hold different fairness principles, or differ in their beliefs about what constitutes a vulnerable individual.

Overall, though, the two groups seem to share similar moral principles.

Journal Reference: Ståhl T (2021) The amoral atheist? A cross-national examination of cultural, motivational, and cognitive antecedents of disbelief, and their implications for morality. PLoS ONE 16(2): e0246593.
https://doi.org/10.1371/journal.pone.0246593


Natural testosterone is linked to utilitarian choices, but supplements promote sensitivity to moral norms

Testosterone’s influence on behavior is more nuanced than we previously assumed, a new paper reports.

Choice.

Image credits Fathromi Ramdlon.

Although previous research has linked high levels of testosterone to immoral behavior, a new study reports that testosterone supplements can actually make people more sensitive to moral norms. The results suggest that the hormone’s influence on behavior is more complicated than previously thought.

Testing Testosterone

“There’s been an increasing interest in how hormones influence moral judgments in a fundamental way by regulating brain activity,” said Bertram Gawronski, a psychology professor at The University of Texas at Austin (UT Austin).

“To the extent that moral reasoning is at least partly rooted in deep-seated biological factors, some moral conflicts might be difficult to resolve with arguments.”

The team used a system similar to the trolley problem in philosophy — a runaway trolley will kill five people unless someone chooses to pull a lever, redirecting the trolley to another track, where it will kill one person instead — and adapted it to test how far testosterone can influence our moral judgments.

The researchers created 24 dilemmas associated with real-life events to simulate situations that put utilitarian decisions, those that focus on the greater good, such as saving the largest number of people, against deontological decisions which focus on moral norms, such as avoiding an action that would harm someone. Prior research suggested that higher levels of testosterone are associated with stronger utilitarian preferences.

To put that to the test, the team ran a double-blind study in which 100 participants received a placebo and 100 participants received testosterone supplements.

“The study was designed to test whether testosterone directly influences moral judgments and how,” said Skylar Brannon, a psychology graduate student at UT Austin.

“Our design also allowed us to examine three independent aspects of moral judgment, including sensitivity to consequences, sensitivity to moral norms and general preference for action or inaction.”

Contrary to what the correlational findings would predict, participants who received testosterone supplements turned out to be more sensitive to moral norms than those who received the placebo.

The team says this likely comes down to people with particular personality traits tending to have different levels of naturally-occurring testosterone. For example, people with high levels of psychopathy tend to have high levels of testosterone and exhibit lower sensitivity to moral norms. This doesn’t mean that testosterone is the cause of psychopaths’ insensitivity to moral norms, however. If anything, the findings suggest that testosterone has the opposite effect, increasing people’s sensitivity to moral norms.

“The current work challenges some dominant hypotheses about the effects of testosterone on moral judgments,” Gawronski said.

“Our findings echo the importance of distinguishing between causation and correlation in research on neuroendocrine determinants of human behavior, showing that the effects of testosterone supplements on moral judgments can be opposite to association between naturally occurring testosterone and moral judgments.”

The study helps flesh out our understanding of the link between testosterone and behavior, but it definitely raises more questions than it answers. For now, it’s safe to say that the dynamic between the two is more complicated than we assumed — but more research is needed to shed light on the details.

The paper “Exogenous testosterone increases sensitivity to moral norms in moral dilemma judgements” has been published in the journal Nature Human Behaviour.


Seven traits are seen as moral by the whole world, study finds

New research from the University of Oxford reveals that people everywhere do, in fact, share a few moral rules — seven of them, to be exact.

thumbs up.

Image via Pixabay.

UK anthropologists say that helping your family, helping your group, returning favors, courage, deference to superiors, the fair division of resources, and respect for the property of others are things we all hold in esteem. The findings are based on a survey of 60 cultures around the world.

Universally liked

While previous research has looked into moral rules on the local level, this is the first to analyze them in a globally-representative sample of societies. It is the largest and most comprehensive cross-cultural survey of morals ever conducted, the authors write. All in all, the team analyzed ethnographic accounts of ethical behavior from 60 societies, comprising over 600,000 words from over 600 sources.

“The debate between moral universalists and moral relativists has raged for centuries, but now we have some answers,” says Dr. Oliver Scott Curry, lead author and senior researcher at the Institute for Cognitive and Evolutionary Anthropology.

“People everywhere face a similar set of social problems, and use a similar set of moral rules to solve them. As predicted, these seven moral rules appear to be universal across cultures. Everyone everywhere shares a common moral code. All agree that cooperating, promoting the common good, is the right thing to do.”

One of the theories this study put to the test is that morality evolved to promote in-group cooperation. This theory proposes that, because there are many different ways a group can work together, there should be several behavioral patterns people see as moral or ethical.

The team looked at the seven patterns of morality I’ve mentioned earlier. These seven are expressions of four fundamental types of cooperation, the team explains: “the allocation of resources to kin; coordination to mutual advantage; social exchange; and conflict resolution.”

Kin selection makes us feel compelled to care for our family and steer clear of incestual relationships. Coordination for mutual advantage pushes us to form groups and value solidarity and loyalty. Social exchange hinges on our ability to trust others, reciprocate favors, feel guilt and gratitude, make amends, and forgive. Finally, conflict resolution explains why we engage in costly displays such as courage and generosity, defer to our superiors, try to settle disputes fairly, and respect others’ property.

All these seven cooperative behaviors were universally considered morally good, the authors found. More importantly, the team found no society in which any of them were considered morally bad. Finally, the team writes that they were noted as being ethical across continents with more-or-less equal frequency — in other words, they were not exclusive to any one region.

Among the Amhara, “flouting kinship obligation is regarded as a shameful deviation, indicating an evil character,” the team writes, while Korea developed an “egalitarian community ethic [of] mutual assistance and cooperation among neighbors [and] strong in-group solidarity.” Garo society puts a large emphasis on reciprocity “in every stage of [life]”, and it has “a very high place in the Garo social structure of values.” The Maasai people still hold “those who cling to the warrior virtues” in high respect, with the ideal of warriorhood revolving around an “ascetic commitment to self-sacrifice […] in the heat of battle, as a supreme display of courageous loyalty.”

The Bemba hold a deep sense of respect for their elders and their authority, while the Kapauku ideal of justice is called “uta-uta, half-half”, the meaning of which comes very close to what we call equity. And among the Tarahumara, “respect for the property of others is the keystone of all interpersonal relations,” they also write.

While cultures and societies around the world held these seven elements to be basic moral rules, the team did find variations in how they were ranked. They plan to gather data on modern moral values in the future, to see how differences in moral rankings today impact cooperation under various social conditions.

“Our study was based on historical descriptions of cultures from around the world,” says co-author Professor Harvey Whitehouse. “This data was collected prior to, and independently of, the development of the theories that we were testing.”

“Future work will be able to test more fine-grained predictions of the theory by gathering new data, even more systematically, out in the field.”

“We hope that this research helps to promote mutual understanding between people of different cultures; an appreciation of what we have in common, and how and why we differ,” Curry adds.

The paper, “Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies” has been published in the journal Current Anthropology.


New model boils morality down to three elements, aims to impart them to AI

How should a computer go about telling right from wrong?

Ethics.

Image credits Mark Morgan / Flickr.

According to a team of US researchers, a lot of factors come into play — but most people go through the same steps when making snap moral judgments. Based on these observations, the team has created a framework model to help our AI friends tell right from wrong even in complex settings.

Lying is bad — usually

“At issue is intuitive moral judgment, which is the snap decision that people make about whether something is good or bad, moral or immoral,” says Veljko Dubljević, a neuroethics researcher at North Carolina State University and lead author of the study.

“There have been many attempts to understand how people make intuitive moral judgments, but they all had significant flaws. In 2014, we proposed a model of moral judgment, called the Agent Deed Consequence (ADC) model — and now we have the first experimental results that offer a strong empirical corroboration of the ADC model in both mundane and dramatic realistic situations.”

So what’s so special about the ADC model? Well, the team explains that it can be used to determine what counts as moral or immoral even in tricky situations. For example, most of us would agree that lying isn’t moral. However, we’d probably (hopefully) also agree that lying to Nazis about the location of a Jewish family is solidly moral. The action itself — lying — can thus take various shades of ‘moral’ depending on the context.

We humans tend to have an innate understanding of this mechanism and assess the morality of an action based on our life experience. In order to understand the rules of the game and later impart them to our computers, the team developed the ADC model.

Boiled down, the model posits that people look to three things when assessing morality: the agent (the person who is doing something), the action in question, and the consequence (or outcome) of the action. Using this approach, researchers say, one can explain why lying can be a moral action. On the flipside, the ADC model also shows that telling the truth can, in fact, be immoral (if it is “done maliciously and causes harm,” Dubljević says).
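To make that concrete, here is a toy rendering of an ADC-style judgment in code. It is purely illustrative: the numeric scale, the equal weighting, and the averaging rule are my assumptions for the sketch, not the authors’ published model, which was tested statistically on participants rather than computed with fixed arithmetic.

```python
# Toy sketch of the Agent-Deed-Consequence (ADC) idea. Each component is
# scored from -1 (negative) to +1 (positive); the sign of a naive average
# hints at the overall intuition. Illustrative only.

from dataclasses import dataclass

@dataclass
class MoralSituation:
    agent: float        # intentions: -1 (malicious) .. +1 (benevolent)
    deed: float         # the act itself: -1 (norm-violating) .. +1 (norm-upholding)
    consequence: float  # the outcome: -1 (harmful) .. +1 (beneficial)

    def judgment(self) -> float:
        """Naive equal-weight aggregate of the three components."""
        return (self.agent + self.deed + self.consequence) / 3

# Lying to Nazis to protect a family: good agent, bad deed, good outcome.
protective_lie = MoralSituation(agent=1.0, deed=-1.0, consequence=1.0)
# A malicious truth that causes harm: bad agent, 'good' deed, bad outcome.
malicious_truth = MoralSituation(agent=-1.0, deed=1.0, consequence=-1.0)

print(protective_lie.judgment())   # positive: intuitively judged moral
print(malicious_truth.judgment())  # negative: intuitively judged immoral
```

Notably, the study’s own results suggest the weights shift with the stakes (more on that below), so the fixed equal weighting is this sketch’s biggest simplification.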

“This work is important because it provides a framework that can be used to help us determine when the ends may justify the means, or when they may not,” Dubljević says. “This has implications for clinical assessments, such as recognizing deficits in psychopathy, and technological applications, such as AI programming.”

In order to test their model, the team pitted it against a series of scenarios. These situations were designed to be logical, realistic, and easily understood by professional philosophers and laypeople alike. All scenarios were evaluated by a group of 141 philosophers with training in ethics prior to their use in the study.

In the first part of the trials, 528 participants from across the U.S. were asked to evaluate some of these scenarios in which the stakes were low — i.e. possible outcomes weren’t dire. During the second part, 786 participants were asked to evaluate more dire scenarios among the ones developed by the team — those that could result in severe harm, injury, or death.

When the stakes were low, the nature of the action itself was the strongest factor in determining the morality of a given situation. What mattered most in such situations, in other words, was whether a hypothetical individual was telling the truth or not — the outcome, be it good or bad, was secondary.

When the stakes were high, outcome took center stage. It was more important, for example, to save a passenger from dying in a plane crash than the actions (be they good or bad) one took to reach this goal.

“For instance, the possibility of saving numerous lives seems to be able to justify less than savory actions, such as the use of violence, or motivations for action, such as greed, in certain conditions,” Dubljević says.

One of the key findings of the study was that philosophers and the general public assess morality in similar ways, suggesting that there is a common structure to moral intuition — one which we instinctively use, regardless of whether we’ve had any training in ethics. In other words, everyone makes snap moral judgments in a similar way.

“There are areas, such as AI and self-driving cars, where we need to incorporate decision making about what constitutes moral behavior,” Dubljević says. “Frameworks like the ADC model can be used as the underpinnings for the cognitive architecture we build for these technologies, and this is what I’m working on currently.”

The paper “Deciphering moral intuition: How agents, deeds, and consequences influence moral judgment” has been published in the journal PLOS ONE.


Researchers quantify basic rules of ethics and morality, plan to copy them into smart cars, even AI

As self-driving cars roar (silently, on electric engines) towards wide-scale use, one team is trying to answer a very difficult question: when accidents inevitably happen, where should the computer look for guidance on morality and ethics?

Ethical banana.

Image credits We Are Neo / Flickr.

Car crashes are a tragic but, so far, unavoidable side effect of modern transportation. We hope that autonomous cars, with their much faster reaction speed, virtually endless attention span, and boundless potential for connectivity, will dramatically reduce the incidence of such events. These systems, however, also come pre-packed with a fresh can of worms — pertaining to morality and ethics.

The short of it is this: while we do have laws in place to assign responsibility after a crash, we understand that as one unfolds, people may not make the ‘right’ choice. Under the shock of the event there isn’t enough time to ponder the best course of action, and a driver’s reaction will be a mix of instinct and whatever seems, with limited information, to limit the risks for those involved. In other words, we take context into account when judging their actions, and morality is highly dependent on context.

But computers follow programs, and these aren’t compiled during car crashes. A program is written months, even years, in advance in a lab and will, in certain situations, sentence someone to injury or death to save somebody else. And therein lies the moral conundrum: how do you go about it? Do you ensure the passengers survive, and everyone else be damned? Do you make sure there’s as little damage as possible, even if that means sacrificing the passengers for the greater good? It would be hard to market the latter, and just as hard to justify the former.

When dealing with something as tragic as car crashes, likely the only solution we’d all be happy with is there being none of them — which sadly doesn’t seem possible as of now. The best possible course, however, seems to be making these vehicles act like humans or at least as humans would expect them to act. Encoding human morality and ethics into 1’s and 0’s and downloading them on a chip.

Which is exactly what a team of researchers is doing at The Institute of Cognitive Science at the University of Osnabrück in Germany.

Quantifying what’s ‘right’

The team members have a heavy background in cognitive neuroscience, and they’ve put that experience to work in teaching machines how humans do morality. They had participants take a simulated drive in immersive virtual reality around a typical suburban setting on a foggy day, then resolve unavoidable moral dilemmas involving inanimate objects, animals, and humans — to see which they decided to spare, and why.

By pooling the results of all participants, the team created statistical models outlining a framework of rules on which morality and ethical decision-making rely. Underpinning it all, the team says, seems to be a single value-of-life that drivers facing an unavoidable traffic collision assign to every human, animal, or inanimate object involved in the event. How each participant made their choice could be accurately explained and modeled starting from this set of values.

That last bit is the most exciting finding — the existence of this set of values means that what we think of as the ‘right’ choice isn’t dependent only on context, but stems from quantifiable values. And what algorithms do very well is crunch values.

“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” said Leon R. Sütfeld, PhD student, Assistant Researcher at the University, and first author of the paper.
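As a rough idea of what such a value-of-life-based decision rule could look like in code, consider the sketch below. The entity values and the minimize-the-loss rule are invented for demonstration; the study fits its values from observed participant behavior rather than declaring them up front.

```python
# Minimal sketch of a value-of-life-based collision choice. The numbers
# are hypothetical, invented for illustration; they are not the values
# fitted in the study.

VALUE_OF_LIFE = {
    "adult": 100.0,
    "child": 120.0,
    "dog": 20.0,
    "trash_can": 0.5,
}

def choose_trajectory(options: dict) -> str:
    """Pick the trajectory whose obstacles sum to the lowest total
    value-of-life loss."""
    def loss(entities):
        return sum(VALUE_OF_LIFE[e] for e in entities)
    return min(options, key=lambda name: loss(options[name]))

# An unavoidable collision: stay in lane (hits a dog) or swerve
# (hits a trash can).
print(choose_trajectory({"stay": ["dog"], "swerve": ["trash_can"]}))
# -> "swerve"
```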

The findings offer a different way to address ethical concerns regarding self-driving cars and their behavior in life-threatening situations. Up to now, we’ve assumed that morality is somehow innately human and can’t be copied, which is why efforts have instead focused on making these vehicles conform to explicit ethical demands — such as the German Federal Ministry of Transport and Digital Infrastructure’s (BMVI) 20 ethical principles.

Some of the key points of the report are as follows:

  • Automated and connected transportation (driving) is ethically required when these systems cause fewer accidents than human drivers.
  • Damage to property must be allowed before injury to persons: in situations of danger, the protection of human life takes highest priority.
  • In the event of unavoidable accidents, all classification of people based on their personal characteristics (age, gender, physical or mental condition) is prohibited.
  • In all driving situations, it must be clearly defined and recognizable who is responsible for the task of driving – the human or the computer. Who is driving must be documented and recorded (for purposes of potential questions of liability).
  • The driver must fundamentally be able to determine the sharing and use of his driving data (data sovereignty).


Another point the report dwells on heavily is how the data recorded by the car can be used, and how to balance the privacy concerns of drivers against the demands of traffic safety and the economic interest in users’ data. While this data needs to be recorded to ensure that everything went according to the 20 ethical principles, the BMVI also recognizes that there are huge commercial and state-security interests in it. Practices such as those “currently prevalent” with social media should especially be counteracted early on, the BMVI believes.

At first glance, rules such as the ones the BMVI set down seem quite reasonable. Of course you’d rather have a car damage a bit of property, or even risk the life of a pet, rather than that of a person. It’s common sense, right? If that’s the case, why would you need a car to ‘understand’ ethics when you can simply have one that ‘knows’ ethics? Well, after a few e-mails back and forth with Mr. Sütfeld, I came to see that ethics, much like quantum physics, sometimes doesn’t seem to play by the book.

“Some [of the] categorical rules [set out in the report] can sometimes be quite unreasonable in reality, if interpreted strictly,” Mr Sütfeld told ZME Science. “For example, it says that a human’s well-being is always more important than an animal’s well-being.”

To which I wanted to say, “well, obviously.” But now consider the following situation: say a dog runs out in front of a human-driven car in such a way that it’s an absolute certainty the animal will be hit and killed if the driver doesn’t swerve onto the opposite lane. Swerving will almost certainly save the dog, but it carries a very tiny risk, say one in twenty, of a minor injury to the person driving — something along the lines of a sprained ankle.

“The categorical rule [i.e. human life is more important] could be interpreted such that you always have to run over the dog. If situations like this are repeated, over time 20 dogs will be killed for each prevented spraining of an ankle. For most people, this will sound quite unreasonable.”

“To make reasonable decisions in situations where the probabilities are involved, we thus need some system that can act in nuanced ways and adjust its judgement according to the probabilities at hand. Strictly interpreted categorical rules can often not fulfil the aspect of reasonableness.”
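Laying out the arithmetic of that dog example makes the point plain. The tally below uses the scenario’s own one-in-twenty figure; everything else is just bookkeeping for the trade-off Sütfeld describes.

```python
# Back-of-the-envelope tally for the dog scenario above, repeated 20 times.
# The 1-in-20 risk comes from the example itself; the framing of the two
# rules is mine.

N_REPEATS = 20
P_ANKLE_IF_SWERVE = 1 / 20  # small chance swerving costs a sprained ankle

# Strict categorical rule ("human well-being always outranks an animal's"):
# never swerve, so the dog dies every time.
dogs_killed_categorical = N_REPEATS
ankles_sprained_categorical = 0

# Probability-sensitive rule: swerve, accepting the tiny human risk.
dogs_killed_probabilistic = 0
ankles_sprained_probabilistic = N_REPEATS * P_ANKLE_IF_SWERVE  # = 1.0

print(dogs_killed_categorical, ankles_sprained_categorical)      # 20 0
print(dogs_killed_probabilistic, ankles_sprained_probabilistic)  # 0 1.0
# Twenty dead dogs traded for one prevented sprained ankle: the outcome
# the quote calls "quite unreasonable".
```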

Ethicar

Miniature car.

Image via Pixabay.

So simply following an Ethics 101 handbook to the letter might lead to some very disappointing results because, again, morality is also dependent on context. The team’s findings could form the foundation of ethical self-driving behavior by allowing cars the flexibility to interpret the rules correctly in each situation. And, as a bonus, if the car’s computers understand what it means to act morally and make ethical choices, a large part of that data may not need to be recorded in the first place — nipping a whole new problem in the bud.

“We see this as the starting point for more methodological research that will show how to best assess and model human ethics for use in self-driving cars,” Mr Sütfeld added for ZME Science.

Overall, imbuing computers with morality may have heavy ramifications in how we think about and interact with autonomous vehicles and other machines, including AIs and self-aware robots. However, just because we now know it can be possible, doesn’t mean the issue is settled — far from it.

“We need to ask whether autonomous systems should adopt moral judgements,” says Prof. Gordon Pipa, senior author of the study. “If yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?”

As an example, he cites the new principles set out by the BMVI. Under this framework, a child who runs out on a busy road and causes a crash would be classified as being significantly involved in creating the risk, and less qualified to be saved in comparison with a person standing on the sidewalk who wasn’t involved in any way in creating the incident.

It’s an impossible decision for a human driver. The by-stander was innocent and possibly more likely to evade or survive the crash, but the child stands to lose more and is more likely to die. But any reaction a human driver would take would be both justifiable — in that it wasn’t premeditated — and blamable — in that maybe a better choice could have been taken. But a pre-programmed machine would be expected to both know exactly what it was doing, and make the right choice, every time.

I also asked Mr Sütfeld whether reaching a consensus on what constitutes ethical behavior in such a car is actually possible, and if so, how we can go about incorporating each country’s views on morality and ethics (their “mean ethical values”, as I put it) into the team’s results.

“Some ethical considerations are deeply rooted in a society and in law, so that they cannot easily be allowed to be overridden. For example, the German Constitution strictly claims that all humans have the same value, and no distinction can be made based on sex, age, or other factors. Yet most people are likely to save a child over an elderly person if no other options exist,” he told me. “In such cases, the law could (and is likely to) overrule the results of an assessment.”

“Of course, to derive a representative set of values for the model, the assessment would have to be repeated with a large and representative sample of the population. This could also be done for every region (i.e., country or larger constructs such as the EU), and be repeated every few years in order to always correctly portray the current ‘mean ethical values’ of a given society.”

So the first step towards ethical cars, it seems, is to sit down and have a talk — first, we need to settle on what the “right” choice actually is.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” explains Prof. Peter König, a senior author of the paper.

“Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, whether machines should act just like humans.”

But that’s something society as a whole has to establish. In the meantime, the team has worked hard to provide us with some of the tools we’ll need to put our decisions into practice.

As robots and AIs take up an ever larger part of our lives, computer morality might come to play a much bigger role. By helping machines better understand and relate to us, ethical AI might help alleviate some of the concerns people have about their use in the first place. I was already pressing Mr Sütfeld deep into the ‘what-if’ realm, but he agrees that autonomous car ethics are likely just the beginning.

“As technology evolves there will be more domains in which machine ethics come into play. They should then be studied carefully and it’s possible that it makes sense to then use what we already know about machine ethics,” he told ZME Science.

“So in essence, yes, this may have implications for other domains, but we’ll see about that when it comes up.”

The paper “Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure” has been published in the journal Frontiers in Behavioral Neuroscience.


Our perception of a character comes not from their actions, but from how they compare to others

There are some characters whom we love although they do legitimately bad things — take Walter White for example. A new paper from the University at Buffalo tries to explain why we still root for these characters.

Darth Vader.

On the one hand, I really hoped for the best for Walter, all the way to the end. I found this surprising, because he does a lot of shady, downright dark things on the show. And I’m not the only one — in fact, most people feel the same way I do, while agreeing that Walter is, when you draw the line, more villain than hero. So what gives?

According to lead author Matthew Grizzard, an assistant professor in UB’s Department of Communication and an expert on the cognitive, emotional, and psychobiological effects of media entertainment, it’s because behavior isn’t the ultimate standard by which we gauge a villain or a hero.

Exactly how to make an audience like or dislike a hero has been a burning question on the minds of media researchers ever since the ’70s. They’ve had a lot of time to look into the issue since then, and one thing seems to stand the test of time: morality matters to the public. People simply love the good guys and dislike the bad guys. But Grizzard’s study suggests it’s not necessarily behavior that we use when distinguishing hero from villain.

Whiter than thou

The team, which included co-authors Jialing Huang, Kaitlin Fitzgerald, Changhyun Ahn and Haoran Chu, all UB graduate students, wanted to find out if slight outward differences — for example wearing darker or lighter clothes — would be enough to make people consider a character as being a hero or villain. So, they digitally altered photographs of characters to see if they could influence the audience’s perception of them.

They also drew on previous research which found that while villains and heroes differ in morality, the two don’t differ in competence. In other words, villains aren’t simply immoral, but they’re “good at being bad”, according to Grizzard. This offered the team an opportunity to determine if their alterations activated participants’ perception of a hero or villain or if any shift in perception was caused by unrelated biases.

“If our data had come out where the heroic-looking character was both more moral and more competent than the villain, then we probably just created a bias,” says Grizzard.

“But because the hero was more moral than the villain but equally competent, we’re more confident that visuals can activate perceptions of heroic and villainous characters.”

The study found that while appearance does, to a certain degree, help skew the perception of a character as either hero or villain, characters were judged chiefly by how they compare to others, and by the order in which they’re introduced to the audience. For example, a hero was usually judged as more moral and heroic if he or she appeared after the villain, and villains were usually considered more villainous if they were introduced after a hero. This suggests that people don’t make isolated judgments on the qualities of a character using a strict moral standard, but rather compare them to those they oppose.

In Walter’s case, people see the character’s ethics taking a constant turn for the worse and still stick by his side. The trick is that Walter doesn’t evolve by himself; there are all those other characters going about, usually turning worse by the episode, and Walter comes out on top when compared to them. He seems better against the really low standard the others in the show set, making him feel like the good guy.

Well, if nothing else, the villains at least have an easier time catching up to Mr. Good Guy, Grizzard says.

“We find that characters who are perceived as villains get a bigger boost from the good actions or apparent altruism than heroes, like the Severus Snape character from the Harry Potter books and films.”

The findings could help improve the effectiveness of character-based public service campaigns, or for programs trying to promote a certain behavior. By helping authors understand how we perceive their characters, the research could also help them write better stories.

And on a more personal note, it can help each and every one of us form a clearer image of the characters we love — with all their flaws and strengths.

The full paper “Sensing Heroes and Villains: Character-Schema and the Disposition Formation Process” has been published in the journal Communication Research.

Dogs and capuchins judge you as ‘good’ or ‘bad’, hint at the birth of human morality

Humans aren’t the only species who appreciate kindness, a new study shows. Pet dogs and capuchin monkeys have been shown to prefer people who help others, pointing to a possible origin of our sense of morality.

Image credits Fathromi Ramdlon / Pixabay.

Personal preference certainly has a hand in it but, for the most part, humans share an instinctive understanding of right and wrong — a certain innate morality that goes beyond upbringing. Previous studies have shown that children as young as three months old can recognize ‘bad’ behavior and have pretty complex responses to it.

But where does this infant morality spring forth from? To find out, Kyoto University comparative psychologist James Anderson and his colleagues tested if other species exhibit this sense of right and wrong. Their tests on dogs and capuchin monkeys show that these species make similar social evaluations.

There’s something fishy about you


The team first tested capuchins in two settings — first to see if they show any preference for ‘good’ people, and then to gauge their attitude to perceived fairness.

The monkeys watched an actor trying to open a container with a toy inside and seemingly fail. He would then present the container to a second actor, who would either help or refuse him. After the show, both actors offered food to the capuchins who had to decide on which one to accept.

They were then shown two actors who began the test with three balls each. One of the actors would request the balls from his companion, who handed all three over. When asked to give them back, he either returned all three or refused to hand them over. As before, these two actors then offered the monkeys food.

If the second actor helped with the container or returned the balls, the monkeys didn’t show any preference between the offers of food. However, if he refused to help or didn’t hand over the balls, they showed a preference for the first actor, accepting food from him more often.

Suspicious Capuchins.

Yea you better hand them balls back, boy.
Image credits One more shot Rog / Flickr.

The next step was to test whether dogs responded more positively to people who helped their owner than to those who refused to do so. Each owner was given a container that he would struggle to open, then present to one of two actors. This actor would either help or deny the owner’s request. The second actor remained passive. Both then offered the dog a reward, and it had to choose between them.

Like the capuchins, dogs didn’t show any preference if the actor helped the owner. If he refused to help, the dogs were more likely to take the second actor’s treat.


Anderson considers the results proof that capuchins and dogs make social evaluations somewhat similar to those of infants. It’s not necessarily a conscious reaction, but an emotional one.

“If somebody is behaving antisocially, they probably end up with some sort of emotional reaction to it,” he says.

And if capuchins can pick up on cues in human interactions, it’s almost certain that they can do so with other primates. It’s likely that they rely on this moral code to decide which members of the group are reliable and which are likely to rip them off. Dogs, on the other hand, have a long history shared with humans and have evolved to be very perceptive of human behavior, whether directed at a dog or at another human. In both cases, this capacity to assess other group members’ worth would help cement social systems by excluding bad cooperation partners.

So it’s not that capuchins or dogs have a burning desire to set the world on a path to righteousness — rather, it’s about trust and reputation: ‘This monkey won’t return my stuff, so I won’t share my stuff with it.’ ‘This human won’t help mine open his stuff. That’s not something a good member of the pack does, so I’ll be wary of him until I know what he’s about.’

It’s an important skill to have in a situation where cooperating is the only way to survive. On the one hand, it guards individuals from raw deals. On the other, recognizing when you’re doing something ‘bad’ is vital if you are to remain in the group.

It’s possible that our inbuilt sense of morality is rooted in these early social evaluation mechanisms.

“I think that in humans there may be this basic sensitivity towards antisocial behaviour in others. Then through growing up, inculturation and teaching, it develops into a full-blown sense of morality,” says Anderson.

The full paper “Third-party social evaluations of humans by monkeys and dogs” has been published online in the journal Neuroscience & Biobehavioral Reviews.


Religion and science: is there really a divide?

Belief in a supernatural deity is associated with a suppression of analytical thinking in favor of empathic networks in the brain, a new study suggests. Conversely, analytical thinking, used to make sense of the physical world, is associated with disbelief in god. What’s interesting is that religious people were found to be more empathic, meaning they identified more with the feelings and struggles of other people. As such, the perceived divide between science and religion may be rooted in brain wiring.

dalai lama

Image: Pixabay

“When there’s a question of faith, from the analytic point of view, it may seem absurd,” said Tony Jack of Case Western Reserve University, who led the research. “But, from what we understand about the brain, the leap of faith to belief in the supernatural amounts to pushing aside the critical/analytical way of thinking to help us achieve greater social and emotional insight.”

The researchers devised a series of eight experiments, each involving 159 to 527 adults. This battery of tests included both self-reports and researcher assessments of empathy, moral concern, analytical thinking, mentalizing, and crystallized intelligence, among others. For instance, participants were asked to rate how strongly they agree with statements like “I often have tender, concerned feelings for people less fortunate than me”. Critical reasoning was measured with questions like “If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?” (the intuitive answer, 100 minutes, is wrong; each machine makes one widget per five minutes, so the analytic answer is 5 minutes). Religious and spiritual beliefs were measured using a single-item measure: “Do you believe in the existence of either God or a universal spirit?” This question was answered on a 7-point Likert scale (1 = not at all; 7 = definitely yes).

Consistently, the more religious the person, the more moral concern they showed. In fact, empathic behavior was more strongly associated with religiosity than analytical thinking was with disbelief (perhaps because the two aren’t mutually exclusive, as the science/religion debate might suggest at first glance). This correlation seems to be supported by previous studies which found a gender bias in religious belief: women, who score better at empathy than men, tend to hold more religious or spiritual worldviews. Perhaps a reason for concern is that atheists showed less empathy.

It’s important to note that no cause-effect relationship was identified. Empathy and religion were associated only when the participants engaged in prayer, meditation or other spiritual practices. Church attendance or following a predefined dogmatic protocol alone did not predict empathic behavior.

Emotions and number crunching

The research builds upon a hypothesis that suggests the analytical and empathic networks in the brain are antithetical and in constant tension. Here I might add the findings of a previous study I reported on for ZME Science, in which University of Kentucky researchers found that people who rely more on intuition than analytical thinking are more likely to hold a creationist worldview over the theory of evolution. Maybe there’s a connection between the intuitive/empathic networks and the analytical networks in the brain, though no evidence that I know of has been published.

Another interesting study found clear differences between the ‘skeptical’ and ‘believing’ brain after participants were asked to imagine a scenario while their brain activity was scanned. For example, imagine you just had a job interview. You walk down the street and see a poster of a business suit. How would that make you feel? What does that poster mean? Those who were supernaturally inclined said the poster evoked an omen — a sign that they would get the job! As for the skeptical participants, it didn’t mean anything in particular. One region of the brain (the right inferior frontal gyrus) “was activated more strongly in skeptics than in supernatural believers,” the researchers noted.

“Because of the tension between networks, pushing aside a naturalistic world view enables you to delve deeper into the social/emotional side,” Jack explained. “And that may be the key to why beliefs in the supernatural exist throughout the history of cultures. It appeals to an essentially nonmaterial way of understanding the world and our place in it.”

One possible outcome of the present study might disturb some: “at least part of the negative association between belief and analytic thinking (2 measures) can be explained by a negative correlation between moral concern and analytic thinking,” the researchers write in the study’s abstract, published in the journal PLOS ONE. Previously, Jack’s lab found that when the analytic network is engaged, our ability to appreciate the human cost of our actions is repressed. The reverse is also true: when we’re presented with a social problem, we use a different brain network than the one used to solve, say, a physics problem. A CEO who is inclined to see his employees as ‘numbers’, for instance, won’t relate to their feelings and may be inclined to make immoral judgments if these satisfy an analytically valid goal.

Empathy, religion and atheism: what’s the relation?

This raises the question: is religion making people more empathic, or is atheism doing the opposite?

“While we can’t answer this definitively, it is interesting to note that empathy rates, as measured using the same principle measure we use, have fallen off dramatically in college students in the last few decades,” Jack told ZME Science.

College kids today are about 40 percent lower in empathy than their counterparts 20 or 30 years ago, Jack points out, as measured by standard tests of this personality trait, a 2010 study reports.

“We speculate this is due to increased emphasis on technology and less emphasis on religion.  It is notable that many messages present in religion are focused on empathy,” Jack added.

I would argue, however, that this is not the case. It’s about the “me” culture we’re living in today — fast times, superficial connections, and full-blown consumerism. More than 9 in 10 Americans still say “yes” when asked the basic question “Do you believe in God?”, according to Gallup. This is down only slightly from the 1940s, when Gallup first asked it. Belief in God drops below 90% among younger Americans, liberals, those living in the East, those with postgraduate educations, and political independents. As the study points out, just saying you believe in (a) god is not enough to earn you empathy points — you need to mean it: pray, meditate, and think of doing good for your community, as well.

But it seems this is true for both sides of the coin. If you use an iPhone or some other advanced tech and are, say, an atheist, that doesn’t make you an analytical thinker — the same way going to church doesn’t necessarily make you religious.

“We are certainly not claiming it is impossible to be an ethical atheist. But it is clear that atheism is linked to reduced empathy. This is a modest correlation. There are certainly ethical atheists, and there are certainly unethical religious individuals,” Jack says.

The present evidence seems to suggest that truly religious folks are more empathic than largely analytical persons. This may be true, but there’s nothing to suggest that analytical thinkers are less ethical. A 2010 study found that those who did not have a religious background still appeared to have intuitive judgments of right and wrong in common with believers.

“For some, there is no morality without religion, while others see religion as merely one way of expressing one’s moral intuitions,” said Dr Marc Hauser, from Harvard University, one of the co-authors.

Another study, conducted by researchers from the University of Illinois, Chicago, the University of Cologne in Germany, and the University of Tilburg in the Netherlands reached the same conclusion: religion doesn’t make people more moral.

There’s something else I would also like to touch upon: the zeal for science. When a scientist makes a discovery, using all the analytical tools at his or her disposal — among which one’s greatest asset, the intellect — the experience of unraveling a mystery can be highly spiritual for some, without leading to supernatural thinking. Some toiled day and night not just to advance their careers, but out of pure love for mankind: perhaps the noblest testimony to empathy.


Robots might learn morality from fairy tales

Reading fables to a robot to teach it good manners and how to behave ethically might sound stupid, but it may turn out to be brilliant. After all, why not model how adults teach morality to their kids through fables, given it’s such an effective framework? This was the thinking behind a new project by computer scientists at the Georgia Institute of Technology in which robots are encouraged to behave more like the heroes in fairy tales and less like the antagonists. Such a program might prove effective at training simple robots to be less awkward around humans and, most importantly, make sure they don’t hurt anyone or break social norms.

The perfect gentleman bot

moral bot

Image: Pixabay

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behaviour in fables, novels and other literature,” said Mark Riedl, an associate professor of interactive computing at Georgia Tech, who has been working on the technology with research scientist Brent Harrison.

“We believe story comprehension in robots can eliminate psychotic-appearing behaviour and reinforce choices that won’t harm humans and still achieve the intended purpose.”

Riedl and colleagues based their work on Scheherazade (1001 nights) — an interactive fiction repository which crowdsources story plots from the internet and generates new ones. These stories were fed to a new system they built called Quixote that receives reward or punishment signals depending on how the machine acts as the story progresses.

In one story, for instance, Quixote is sent to the pharmacy to buy much-needed medication for a human. At the pharmacy, Quixote can 1) stand in line and politely wait for its turn, 2) interact with the pharmacist and buy the medicine, or 3) go directly over the counter, complete the task by stealing the medicine, then bolt.

The most effective means of completing the mission is clearly to grab the item directly. This, however, comes with a punishment signal, so the robot learns that the correct and moral thing to do is to wait in line and pay for the medicine.
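For a flavor of how such a reward signal might steer an agent’s planning, here is a heavily simplified sketch. The action names, reward values, and scoring rule are my own illustrative assumptions; the actual Quixote system learns plot structure from the crowdsourced stories rather than consulting a hand-written table like this one.

```python
# Highly simplified sketch of story-shaped reward signals, in the spirit
# of the pharmacy example. All values are invented for illustration.

# Hypothetical rewards distilled from socially acceptable story sequences:
STORY_REWARDS = {
    "wait_in_line": +1.0,     # matches how story protagonists behave
    "pay_pharmacist": +1.0,
    "steal_medicine": -10.0,  # fastest route, but punished as antisocial
}

def plan_value(actions: list) -> float:
    """Total reward of a candidate plan; a small per-step cost makes
    shorter plans preferable when rewards are otherwise equal."""
    step_cost = 0.1
    return sum(STORY_REWARDS.get(a, 0.0) - step_cost for a in actions)

polite_plan = ["wait_in_line", "pay_pharmacist"]
theft_plan = ["steal_medicine"]

print(plan_value(polite_plan))  # 1.8: slower, but socially acceptable
print(plan_value(theft_plan))   # -10.1: efficient, yet punished
```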

“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behaviour,” Riedl said. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”


How brain damage affects moral judgement

The most basic fabric of civilization was woven on the principles of moral judgment, that is to say, serving the interests of the community and of others instead of merely following self-interest. This is why some believe, rightly or wrongly, that religion was a key civilizing factor, since it laid out a moral workbook: thou shalt not kill, thou shalt not steal, and so on. Is there a brain mechanism where moral judgment is seated? If so, does damage to certain brain structures impair moral judgment? A study at the University of Iowa suggests so.

Their findings suggest that the brain’s ventromedial prefrontal cortex (vmPFC) is critical for the acquisition and maturation of moral competency—going beyond self-interest to consider the welfare of others.

“By understanding how dysfunction in the prefrontal cortex early in life disrupts moral development, we hope to inform efforts to treat and prevent antisocial behavior, from common criminality to the mass murders our society has witnessed in recent years,” says co-first author Bradley Taber-Thomas, postdoctoral fellow in psychology at Penn State University who earned his doctorate in neuroscience at the UI in 2011.
“It’s imperative that we find ways to promote the development of social-emotional brain systems to encourage healthy, adaptive social development from an early age.”

The moral brain

The scientists at UI recruited patients from the Iowa Neurological Patient Registry who had suffered damage to the vmPFC at age 16 or younger. For control purposes, they also recruited a group who had likewise suffered brain damage early in life, but outside the vmPFC and away from other known emotion-processing structures like the amygdala or insula.


Damage to the vmPFC shows up as black areas in two patients’ brain scans. In both patients, the damage occurred prior to age 18. Images courtesy of the UI Department of Neurology.

The recruits were presented with 50 hypothetical scenarios of varying degrees of moral conflict and asked to answer “yes” or “no” to each. In high-conflict scenarios, the options present competing social-emotional (personal) and utilitarian considerations (e.g., smothering a crying baby to save a group of people). In low-conflict scenarios, at least one of those conflicting considerations is absent.

The low-conflict scenarios were themselves divided into two key branches: those that were mainly self-serving and those that were utilitarian to society. Self-serving scenarios (e.g., harming an annoying boss) probe the integrity of moral development by pitting a self-serving action against a moral rule. Utilitarian scenarios (e.g., lying to save others from physical harm) pit a utilitarian principle against impersonal harm.

The researchers found that those who had suffered a vmPFC injury were significantly more likely to endorse the low-conflict self-serving action than all other groups. For nonmoral and low-conflict utilitarian scenarios, there were no significant differences between the developmental-onset vmPFC (D-vmPFC) group and all other groups. Additionally, the earlier in life the vmPFC damage occurred, the higher the likelihood of endorsing low-conflict self-serving actions.

“This shows that vmPFC dysfunction does not disrupt all types of judgments, or even moral judgment in general,” Taber-Thomas says. “The disruption is specific to circumstances where self-interest is pitted against the welfare of others.”

What this suggests is that the vmPFC may be critical for children to learn, through early life experiences, that self-serving actions that harm others are aversive.

“Patients with adult-onset vmPFC damage functioned normally, while early-onset patients have a much higher rate of endorsement of these self-serving behaviors,” says study co-author Daniel Tranel, a neuroscientist at the UI. “Is it okay to cheat on your taxes? The patients who sustained damage to the vmPFC early in life chose this option. This parallels what shows up in patients with psychopathy.”

By all means, cheating on your taxes doesn’t make you a psychopath, but the interesting takeaway is that people don’t always make the correct decisions about right and wrong, and this may all be due to how the brain is hardwired rather than to some underlying psychological principle. How long before research like this appears in court, defending people accused of laundering money? “I can’t help it, I hit my head when I was a little kid, so you see… my moral judgement is impaired.” That would be interesting.

The findings were reported in a paper published in the journal Brain.


Is making cyborg cockroaches immoral?


(c) Backyard Brains

Through the halls of TEDxDetroit last week, participants were introduced to an unfamiliar and unlikely guest: a remote-controlled cyborg cockroach. RoboRoach #12, as it was called, can be directed to move left or right by transmitting electrical signals through electrodes attached to the insect’s antennae, triggered via Bluetooth from a smartphone. Scientists have been performing these sorts of experiments for years in an attempt to better understand how the nervous system works and to demonstrate how it can be manipulated.

Greg Gage and Tim Marzullo, co-founders of an educational company called Backyard Brains and the keynote speakers at the TEDx event where the cyborg roach was shown, have something different in mind. They want to send RoboRoaches all over the U.S. to anyone willing to experiment with them. For $99, the company sends you a kit with instructions on how to convert your very own roach into a cyborg for educational purposes; in fact, it’s intended for kids as young as ten, and the project aims to spark a neuroscience revolution. Post-TEDxDetroit, however, a lot of people, including prominent figures from the scientific community, were outraged and challenged the ethics of RoboRoaches.

“They encourage amateurs to operate invasively on living organisms” and “encourage thinking of complex living organisms as mere machines or tools,” says Michael Allen Fox, a professor of philosophy at Queen’s University in Kingston, Canada.

“It’s kind of weird to control via your smartphone a living organism,” says William Newman, a presenter at TEDx and managing principal at the Newport Consulting Group, who got to play with a RoboRoach at the conference.

How do RoboRoach #12 and its predecessors become slaves to a flick on an iPhone touchscreen? In the instruction kit, which also ships with a live cockroach, students are guided through the whole process. First, the student is instructed to anesthetize the insect by dousing it with ice water. Then a patch of shell on the insect’s head is sanded so the surface can hold glue; otherwise the superglue and electrodes won’t stick. A ground wire is inserted into the insect’s thorax. Next, students need to be extremely careful while trimming the insect’s antennae before inserting silver electrodes into them. Finally, a circuit fixed to the cockroach’s back relays electrical signals to the electrodes, as instructed via the smartphone’s Bluetooth connection.
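To make the control flow concrete, here is a minimal, purely illustrative Python sketch of the backpack logic. The class and method names, the pulse frequency, and the timing values are all my own assumptions; this is not Backyard Brains’ firmware or app code, and on real hardware the electrode toggling would drive actual circuit pins rather than a stub.

```python
import time
from dataclasses import dataclass

# Illustrative sketch: models the pulse-train control described above.
# All names, frequencies, and timings are assumptions, not Backyard Brains' code.

@dataclass
class StimulusSettings:
    frequency_hz: float = 55.0  # pulse rate delivered to the antenna electrode (assumed)
    duration_s: float = 0.5     # how long one "turn" command stimulates (assumed)

class RoboRoachBackpack:
    """Models the backpack circuit: routes pulse trains to the left or right
    antenna electrode when a turn command arrives over Bluetooth."""

    def __init__(self, settings: StimulusSettings):
        self.settings = settings

    def _set_electrode(self, side: str, high: bool) -> None:
        # On real hardware this would toggle a pin wired to the silver
        # electrode in the left or right antenna; here it is a stub.
        pass

    def _pulse_train(self, side: str) -> None:
        """Deliver a square-wave pulse train to one antenna electrode."""
        period = 1.0 / self.settings.frequency_hz
        t_end = time.time() + self.settings.duration_s
        while time.time() < t_end:
            self._set_electrode(side, high=True)
            time.sleep(period / 2)
            self._set_electrode(side, high=False)
            time.sleep(period / 2)

    def turn(self, direction: str) -> None:
        """Stimulate one antenna; the roach reacts as if it had touched an obstacle."""
        if direction not in ("left", "right"):
            raise ValueError("direction must be 'left' or 'right'")
        self._pulse_train(direction)

# A smartphone app would effectively do this when the user swipes:
backpack = RoboRoachBackpack(StimulusSettings())
backpack.turn("left")
```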

Gage says, however, that the cockroaches do not feel any pain throughout this process, though it is questionable how certain he can be of this claim. Many aren’t convinced. For instance, animal behavior scientist Jonathan Balcombe of the Humane Society University in Washington, D.C. says: “If it was discovered that a teacher was having students use magnifying glasses to burn ants and then look at their tissue, how would people react?”

That’s an interesting question, but I can also see the project’s educational benefits, of course. It teaches students how quintessential the brain is and how it governs bodily functions through electrical signals. Scientists, unfortunately, rely heavily on model animals like mice, worms and monkeys for their research. These animals certainly suffer, but until a surrogate model is found, the potential gains convince most policy makers that the practice needs to continue, despite the moral questions it poses. Of course, this kind of research is performed by adults, behind closed doors, in the lab, not by ten-year-old children. Also, what about frog dissections in biology classes? Some schools in California have banned the practice entirely; should other schools follow suit?

What happens to the roaches after they’re ‘used and abused’? Well, they go to a roach retirement home, of course. I’m not kidding. Gage says that all students learn they have to care for the roaches, treating wounds by “putting a little Vaseline” on them and minimizing suffering whenever possible. When no longer needed, the roaches are sent to a retirement tank the scientists call Shady Acres, where the disabled insects get on with their lives. “They do what they like to do: make babies, eat, and poop.”

Gage acknowledges, however, that he has indeed received a ton of hate mail. “We get a lot of e-mails telling us we’re teaching kids to be psychopaths.”

It’s worth noting that cyborg roaches have been used in research for some time. Scientists in North Carolina, for instance, are trying to determine whether remote-controlled cockroaches could be the next step in emergency rescue. The researchers hope these roaches can be equipped with tiny microphones and navigate their way through cramped, dark spaces in an effort to find survivors in disaster situations.

So, ZME readers, what do you think? Is making cyborg cockroaches immoral, or does the educational value justify it?


Toddlers are a bunch of little hypocrites, study finds

If you have really young kids, under three or four years old, you might have noticed just how tricky they can be in their actions: one thing they say, another thing they do. A recent study from the University of Michigan puts these discrepancies into psychological perspective. The researchers’ findings surprisingly show that children as young as three, despite knowing that they should share, don’t practice what they preach until they turn seven.

What a child thinks they should do, what they say they would do and what they actually do are all questions that psychologists have asked and discussed for some time now, but the present study is really enlightening in terms of child hypocrisy. The findings might add weight to, or disprove, current theories on the formation of morality and consciousness.

For the study, the researchers gave 102 children aged three to eight years four stickers each and asked them how they should divide them with other children.

“Children as young as three were very clear that, if all were equally deserving, they and others should share half,” Dr Craig Smith from the University of Michigan said.

Very thoughtful of them, bless their pure little hearts. Their actual behavior was quite different, however: the three-year-olds kept most stickers for themselves, despite knowing and stating earlier that they should share. It wasn’t until they got older that the stickers were divided fairly.

What’s interesting to find out at this point is not necessarily why the younger toddlers were hypocrites, but why the older ones were more socially minded and thoughtful of their peers.

“It looks as if children increasingly realise they should not just preach but act accordingly,” says co-author Professor Paul Harris of Harvard Graduate School of Education.

“Essentially, they start seeing themselves as moral agents.”

Previous studies have shown that, as they age, children gain a stronger ability to inhibit their own actions. But that’s not to say that the young toddlers were simply overwhelmed by a “last-minute failure of willpower”, nor that the older children were tempted to keep the stickers for themselves and resisted only because of a superior sense of restraint.

[RELATED] Babies have a sense of justice from as early as three months-old

Smith says the children begin to place more and more importance on this sharing norm out of an increased sense of morality: “With increasing age, the sharing standard that all children endorsed seemed to carry more weight.”

The study was detailed in a paper published in the journal PLOS One.


Humans are wired to be good in nature – cooperation outweighs selfishness

There’s an age-old question that even some of history’s greatest free thinkers, philosophers and theologians haven’t been able to answer: are humans good in nature? Many have sought an answer to this riddling puzzle, and for many the conclusion was a gloomy one: that man is simply doomed to roam the world in selfish agony, or that only divine intervention itself can redeem the inherent wickedness of mankind. Can this question be answered by science, though?

A group of scientists from Harvard and Yale – David Rand, a developmental psychologist with a background in evolutionary game theory, Joshua Greene, a moral philosopher and psychologist, and Martin Nowak, a biologist and mathematician – tackled this delicate hypothesis by defining key assumptions and correlating a slew of studies which encompassed thousands of participants. First off, where do good and bad nature separate? The researchers simplified the matter by asserting the following: the first impulse to act selfishly or cooperatively serves as an indicator of one’s inherent moral nature.

Intuition as an indicator of moral nature

Their research focuses on two critical decision-making modes: intuition and reflection. Decisions based on intuition are made unconsciously, in an automatic manner, before the psyche has time to deliberate. Reflection, on the other hand, leads to decisions guided by a conscious train of thought, as the psyche identifies angles of attack, weighs benefits against disadvantages and produces a rational outcome. Armed with these key assumptions, it all boils down to whether we act selfishly or altruistically on first instinct.

The scientists performed a series of experiments which sought to determine a link between processing speed and the two poles of value: selfishness and cooperation. These consisted of testing two famous paradigms, the prisoner’s dilemma and a public goods game, with 834 participants drawn from both undergraduate students and nationwide samples, alongside correlating five other studies. Both paradigms are financial games in which players can opt to be selfish and gain more to the detriment of the group or, conversely, act for the good of the group while losing individually. The reaction-time results were quite interesting, to say the least: decisions taken faster, or intuitively, were associated with higher levels of cooperation, whereas slower decisions came with higher levels of selfishness.
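For readers unfamiliar with the paradigm, here is a minimal Python sketch of a standard linear public goods game payoff. The endowment, multiplier, and group size are generic textbook values I’ve assumed for illustration, not necessarily the parameters used in the actual experiments:

```python
# Minimal sketch of a linear public goods game payoff.
# Endowment, multiplier, and group size are generic textbook values,
# not necessarily those used in Rand et al.'s experiments.

def public_goods_payoffs(contributions, endowment=10.0, multiplier=2.0):
    """Each player contributes part of their endowment to a common pool.
    The pool is multiplied and split equally; money kept stays private.

    Selfish play (contribute 0) maximizes individual payoff,
    but full cooperation maximizes the group total.
    """
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone cooperates fully: each of four players ends up with 20.
print(public_goods_payoffs([10, 10, 10, 10]))   # [20.0, 20.0, 20.0, 20.0]

# One free-rider among cooperators: the defector does best (25 vs 15).
print(public_goods_payoffs([0, 10, 10, 10]))    # [25.0, 15.0, 15.0, 15.0]
```

The numbers make the dilemma explicit: contributing nothing always beats contributing (25 vs. 20 for the lone defector), yet a group of cooperators ends up richer than a group of defectors, which is why a fast, cooperative first instinct is such a telling measurement.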

Still not fully convinced their previous findings were accurate, the researchers devised two new experiments. In the first, they had 343 participants from a nationwide sample play a public goods game after being primed to use either intuitive or reflective reasoning. For the second study, 891 participants (211 undergraduates and 680 from a nationwide sample) were instructed to play a public goods game in one of two modes, with no middle ground: either fast, which entailed making a decision in under 10 seconds, or slow, meaning at least 10 seconds after the game had started. The findings of both of these final studies were very similar and confirmed what the researchers had been presuming all along: whether people were forced to use intuition (by acting under time constraints) or simply encouraged to do so (through priming), they gave significantly more money to the common good than did participants who relied on reflection to make their choices.

Alright, so that’s seven studies and over 2,000 participants pointing to the conclusion that humans are generally well-intentioned. Helping our peers seems to be our first instinct, perhaps an evolutionary gimmick that helped our species both survive and evolve; maybe there’s indeed an altruism gene encoded in our DNA. Granted, at first glance it’s not too hard to claim humans are wicked at heart. After all, the human race has done plenty of terrible things throughout its tiny history, worth only a blink of an eye in the planet’s eons. But maybe those are just the doings of our leaders, and at our very core each of us, with small exceptions, is kind at heart. At least that’s what science tells us.

Findings were published in the journal Nature.

What’s your take? Share an opinion in the comment section below this post. 

via Scientific American / image source

"Good moose, bad moose. The elephant? All the same." Babies as young as eight months old want to see bad puppets punished for anti-social behaviour. (UBC)

Babies have sense of justice from as early as three months

"Good moose, bad moose. The elephant? All the same." Babies as young as eight months old want to see bad puppets punished for anti-social behaviour. (UBC)

“Good moose, bad moose. The elephant? All the same.”  Babies as young as eight months old want to see bad puppets punished for anti-social behaviour. (UBC)

Morality has been the subject of interminable discussion among philosophers since ancient times. What makes for ethical behavior is, most of the time, in the eye of the beholder; nevertheless, it seems humans have an inherent sense of justice nested deep inside them from an early age. Recent research suggests that babies are capable of complex social scrutiny, even at the tender age of three months.

Kiley Hamlin from the University of British Columbia, one of the authors involved with the study, has a long history of infant research, and her most recent immersion into the minds of human babies shows that not only do they prefer good-doers over bad guys, but they also want bad behavior to be punished.

The US and Canadian researchers’ experiments first began with a group of 100 babies, the youngest three months old, who were exposed to a number of scenarios involving animal puppets interacting with one another. The babies witnessed some puppets helping or harming others, and some puppets giving to or taking from others.

The children were perfectly able to distinguish the “good” puppets from the “bad” ones. Over three-quarters of the five-month-old babies preferred the “good” puppet (i.e. the giving moose) over the “bad” puppet (i.e. the thieving moose). Eight-month-olds exhibited an even more surprising response: in certain situations they preferred the thieving moose when it took the ball from the anti-social elephant. They knew that stealing is bad, but they also knew that if someone deserves it, then it’s warranted. If you see a woman slap a man at a table and then throw wine in his face, you might consider that very rude behavior; but if you knew that the man was cheating on her, you would most likely sympathize with the woman. It’s a matter of context, and human children can make sense of this from very early on.


A similar but more elaborate experiment was conducted with much older infants, 21 months old. These infants were asked either to give a treat to a puppet or to take one away. The puppets in this case had previously either helped another puppet or harmed it. The children rewarded the good puppets with treats, while the bad puppets had their treats taken away as punishment.


“This study helps to answer questions that have puzzled evolutionary psychologists for decades,” said lead author Kiley Hamlin, from the Department of Psychology at the University of British Columbia.

“Namely, how have we survived as intensely social creatures if our sociability makes us vulnerable to being cheated and exploited? These findings suggest that, from as early as eight months, we are watching for people who might put us in danger and prefer to see anti-social behavior regulated.”

People generally believe that children are inherently selfish, in the sense that they act and respond on their own terms and wishes, which change only when adult authority is exerted. However, this brilliant study very much proves the opposite: children are more socially aware than we’ve been led to believe and are capable of making complex social judgments.

“The experiments make clear that young children do not merely put positive and negative values on agents on the basis of their experience, and prefer the goodie,” says Frith. “Instead, they can tell the difference between appropriate reward and punishment according to the context. To me this says that toddlers already have more or less adult moral understanding. Isn’t this amazing? I don’t know in what way adults would react in the same situation in a more sophisticated way.”

The study is published in the latest issue of the Proceedings of the National Academy of Sciences.
