Tag Archives: ethics

Would it be ethical (and even feasible) to issue COVID-19 immunity licenses?

Most of the world is eyeing the next stage of the pandemic — which means restarting as much of the economy as possible without risking a rising new tide of infections.

In this scenario, one possible approach would be to give people who have had the disease and defeated it a certificate of immunity, allowing them to operate normally, as they would presumably be immune to the disease.

This idea, which has already been discussed in Germany and the UK among other countries, would open up a major can of worms and needs to be analyzed extremely carefully if we ever want to implement it.

Immunity certifications could make it easier for people to go out into the world again, but they could also deepen inequality.

An unparalleled idea

Medical certifications are not exactly a new idea. Certificates are a common part of infection control strategies, especially for documenting childhood vaccination. Also, many countries require visitors to show a yellow fever vaccination certificate. But this is a different matter altogether.

For starters, COVID-19 is not a vaccine-preventable disease (at least not yet). This means that any immunity must come from a prior infection — an absolute novelty in medical certification. Secondly, a license would likely only apply to selected occupations (such as factory or construction workers, teachers, or public transit operators), and thirdly, a lot of civil liberties could hinge on it. For instance, the freedoms of association, worship, work, and travel might be temporarily suspended for most citizens but allowed for those with an immunity passport.

This treads into some delicate legal issues, not to mention very complex ethical issues.

But before we get into those, let’s first look at the feasibility aspect.

Is immunity even guaranteed?

Our understanding of SARS-CoV-2 is, unfortunately, still rudimentary. We still don’t know how much immunity the infection confers, whether everyone develops it, how long it would last, and what type of antibodies indicate it.

Antibody tests have proliferated in the past few weeks, but we still don’t have a fully-proven antibody test that works for everyone. There are many unknowns about the way immunity is generated against COVID-19 and how tests could detect the antibodies responsible for this immunity.

Immunity-based licenses can only be introduced if serology testing is accurate, so we’re not even sure whether immunity passports are a realistic possibility. Ideally, we’d first get a much clearer understanding of these issues, and only then deliberate the social and ethical implications of this approach.

The ethics of immunity passports

In a viewpoint published in the medical journal JAMA, two researchers argue that immunity licenses should not be evaluated against a baseline of normalcy (i.e., unrestricted free movement), but rather against the alternative of enforcing strict public health restrictions for many months. From an ethical perspective, this is extremely important because the discussion becomes not whether these immunity passports risk exacerbating inequality, but rather whether this accentuated inequality is better than the one produced by a stricter quarantine.

“The ethics of COVID-19 immunity licenses can be assessed with respect to 3 fundamental ethical values: the maximization of benefit; priority to the least advantaged; and treating people equally. These values can be consistent with a well-designed implementation of immunity licenses,” write Govind Persad and Ezekiel J. Emanuel in JAMA.

The first point is fairly straightforward. Why would you issue these certificates in the first place? Because you’d have a group of people who are allowed to work, travel, worship, and so on, without risking an increase in the number of cases. This helps the economy and improves the quality of life of people who have demonstrably recovered from COVID-19. But inequitable access to testing is likely to plague vulnerable communities.

People in poorer communities may be less likely to get tested in the first place, in which case they will be less likely to get an immunity passport, perpetuating a state of poverty.

In a separate JAMA Viewpoint, two other researchers, Mark A. Hall and David M. Studdert write that certifying those who are immune may discriminate against those who aren’t.

“Even when differentiating is legal, it can still be unfair. Certifying those who are fit may stigmatize those who are not. There is ample historical evidence that tying advantage to fitness can amplify existing socioeconomic disparities. At the extreme, critics warn that excessive immunity advantages could create an Orwellian or dystopian social apartheid. Those are serious concerns, but the picture is more nuanced.”

While immunity licensing can be ethical in principle, it could also stigmatize people. Licenses could split communities in two, branding those without immunity much like the yellow stars the Nazis forced Jews to wear. Those “yellow stars” (and, more importantly, the attitudes surrounding them) showed just how easily classification can lead to discrimination.

Nevertheless, Persad and Emanuel argue that immunity-based licenses do not violate equal treatment because the factors used to grant a license are not discriminatory, like race or religion, but instead grounded in relevant evidence.

Perverse incentives

Furthermore, if we are creating an incentive for people to become immune, aren’t we creating an incentive for people to become sick? If you’re healthy and/or reckless, you may think you can defeat the disease easily and therefore feel incentivized to catch it in order to get an immunity certificate. The behavior is reminiscent of the “pox parties” some vaccine-opposed parents hold for their children, but COVID-19 is far more dangerous and unpredictable than chickenpox.

Then, as you are creating an incentive for some people to get sick, you are also creating a disincentive for people to follow strict social distancing rules. People who are conscientious and follow the best medical advice and don’t get sick will feel disadvantaged by the system, and rightfully so.

Ultimately, these perverse incentives will need to be carefully weighed to ensure that immunity certifications don’t cause more problems than they solve.

Practical problems

Even if an idea is good, it doesn’t necessarily mean it will be applied well — and this is very much the case here. The benefits of immunity licenses could encourage forgery, illegal markets, or fraud by unethical physicians or testing facilities. This could lead to a stealthy increase in the number of infections, which is exactly what we want to avoid.

These problems underscore the need for careful implementation through strategies like anti-counterfeiting designs, cryptographic or biometric features, and reliable chains of verification for tests.
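To make the “cryptographic features” idea concrete, here is a minimal, purely hypothetical sketch of how a testing authority could digitally sign a certificate so that forgeries become detectable. The field names and identifiers are invented for illustration, and this is not a scheme actually proposed for COVID-19; it simply shows the kind of verification chain the viewpoint alludes to.

```python
# Hypothetical illustration only; requires the third-party 'cryptography' package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()          # held by the testing authority
certificate = json.dumps({
    "holder_id": "example-123",                    # made-up identifier
    "test": "SARS-CoV-2 IgG serology",
    "result": "antibodies detected",
    "issued": "2020-05-01",
}).encode()
signature = issuer_key.sign(certificate)           # issued alongside the certificate

# Anyone holding the issuer's public key can check the certificate wasn't forged.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, certificate)
    print("certificate is authentic")
except InvalidSignature:
    print("certificate was tampered with or forged")
```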

Ultimately, the implementation strategy of this will be extremely important, and careful consideration needs to be given to every potential shortcoming of the approach — an immunity certification is not something that should be granted with ease.

The bottom line

It’s extremely important, then, to address the inequalities and stigmatization that immunity passports could produce, but it’s also important to consider that refusing immunity licensing can lead to discriminatory treatment of its own. Without an official immunity licensing system, researchers argue, many businesses and individuals might opt for an unregulated one, with lower accuracy and higher potential for economic and social fallout.

In many ways, we are going through an unprecedented period, and we may have to deploy unprecedented solutions to keep society going in the year(s) until a vaccine is developed.

This is an opportunity to address and reduce inequalities, not to introduce another form of discrimination, the researchers conclude.

New tweets: ten species of bustling songbirds discovered on Indonesian islands

An expedition off the coast of Sulawesi has come upon ten new songbird species. It’s a rare discovery that highlights once again the thriving Indonesian biodiversity — but also the threats this biodiversity faces.

The Wakatobi white-eye. Image credits: Seán Kelly.

Deep seas, unique birds

Although birds are among the most-studied groups of animals in the world, new species are rarely discovered. Maybe it’s because we’ve already found most of them, or maybe because birds are easier to spot than other creatures, but either way, identifying a new bird species is a rare event.

In the past two decades, an average of just six new bird species have been described every year. But 2020 is already different.

The expedition was carried out from late 2013 to early 2014, when a team led by Frank Rheindt at the National University of Singapore visited three small, little-explored islands off the coast of Sulawesi. The team tried to focus on the areas where they thought they were most likely to find new species. They analyzed geological trends that would have influenced the likelihood of finding birds, zooming in on one particular aspect: how deep the water around the islands is.

Taliabu Myzomela, one of the newly-identified species, carefully watching its surroundings. Image credits: James Eaton / Birdtour Asia.

Sea depth is a surprisingly important factor in determining how distinct an island’s biodiversity is. As the Earth has undergone over 20 glacial periods in the past 2 million years, sea levels have repeatedly risen and dropped, connecting and disconnecting islands from other areas. Islands surrounded by shallow waters would have had periods of connection with the mainland or other islands, producing gene flow between populations, which slows down the emergence of endemic creatures.

But islands which are surrounded by seas deeper than 120 meters would have remained isolated throughout this period, increasing the likelihood of unique species.

This was exactly the case with Peleng and Taliabu, two of the surveyed islands. In addition, these islands have rarely been explored by biologists, making them an excellent target.

Hill forest in Peleng. Image credits: Philippe Verbelen.

The researchers’ efforts were rewarded: 10 new species were identified, 9 of them on Peleng and Taliabu.

Two of the newly discovered animals are leaf warblers — small, insect-eating songbirds. Others include a type of honeyeater that feeds on nectar and fruit, the Peleng fantail (which, as the name implies, fans its tail feathers when it is alarmed), and two flycatchers. It’s a fairly diverse group, most of which were discovered in the islands’ highlands, over 1,000 meters (about 3,300 feet) high.

Problems already

As is so often the case, threats to these new species have already been identified. It already seems like a trope: we’ve found some new species, but they’re at risk. In this case, rampant deforestation on the islands is threatening the survival of the birds. Logging is the main cause of deforestation, although forest fires (exacerbated by climate change) also play a role.

It’s an important reminder that life needs to be protected — even life that we haven’t discovered yet.

Thousands of species have been described in recent years, but most researchers agree that thousands more still remain undescribed. Although Sulawesi has been populated by archaic hominins since before the time of Homo sapiens, its zoology still has surprises to offer.

Holotype of one of the newly-described species. Credits: Rheindt et al (2020) / Science.

This study, just like many others analyzing species of birds, leaves behind another pressing ethical question.

Specimen-collecting expeditions of this sort involve, as the name implies, collecting specimens — killing them. In this case, nets were placed at strategic points on the islands, and whichever unfortunate birds flew into them were collected and sent to the lab for later analysis.

Establishing that an animal is a new species cannot be done without this analysis — and yet, it involves killing specimens from a population that may very well be threatened. This has been done for centuries, but the ethics of it are being debated more and more in recent times.

Does the end goal of conservation and study justify this process?

The study was published in Science.


Seven traits are seen as moral by the whole world, study finds

New research from the University of Oxford reveals that people everywhere do, in fact, share a few moral rules — seven of them, to be exact.

thumbs up.

Image via Pixabay.

UK anthropologists say that helping your family, helping your group, returning favors, courage, deference to superiors, the fair division of resources, and respect for the property of others are things we all hold in esteem. The findings are based on a survey of 60 cultures around the world.

Universally liked

While previous research has looked into moral rules on the local level, this is the first to analyze them in a globally-representative sample of societies. It is the largest and most comprehensive cross-cultural survey of morals ever conducted, the authors write. All in all, the team analyzed ethnographic accounts of ethical behavior from 60 societies, comprising over 600,000 words from over 600 sources.

“The debate between moral universalists and moral relativists has raged for centuries, but now we have some answers,” says Dr. Oliver Scott Curry, lead author and senior researcher at the Institute for Cognitive and Evolutionary Anthropology.

“People everywhere face a similar set of social problems, and use a similar set of moral rules to solve them. As predicted, these seven moral rules appear to be universal across cultures. Everyone everywhere shares a common moral code. All agree that cooperating, promoting the common good, is the right thing to do.”

One of the theories this study put to the test is that morality evolved to promote in-group cooperation. This theory proposes that, because there are many different ways a group can work together, there should be several behavioral patterns people see as moral or ethical.

The team looked at the seven patterns of morality I’ve mentioned earlier. These seven are expressions of four fundamental types of cooperation, the team explains: “the allocation of resources to kin; coordination to mutual advantage; social exchange; and conflict resolution.”

Kin selection makes us feel compelled to care for our family and steer clear of incestual relationships. Coordination for mutual advantage pushes us to form groups and value solidarity and loyalty. Social exchange hinges on our ability to trust others, reciprocate favors, feel guilt and gratitude, make amends, and forgive. Finally, conflict resolution explains why we engage in costly displays such as courage and generosity, defer to our superiors, try to settle disputes fairly, and respect others’ property.

All seven of these cooperative behaviors were universally considered morally good, the authors found. More importantly, the team found no society in which any of them were considered morally bad. Finally, the team writes that the behaviors were noted as being ethical across continents with more-or-less equal frequency — in other words, they were not exclusive to any one region.

Among the Amhara, “flouting kinship obligation is regarded as a shameful deviation, indicating an evil character,” the team writes, while Korea developed an “egalitarian community ethic [of] mutual assistance and cooperation among neighbors [and] strong in-group solidarity.” Garo society puts a large emphasis on reciprocity “in every stage of [life]” and it has “a very high place in the Garo social structure of values.” The Maasai people still hold “those who cling to the warrior virtues” in high respect, with the ideal of warriorhood revolving around “ascetic commitment to self-sacrifice […] in the heat of battle, as a supreme display of courageous loyalty.”

The Bemba hold a deep sense of respect for their elders and their authority, while the Kapauku ideal of justice is called “uta-uta, half-half”, the meaning of which comes very close to what we call equity. And among the Tarahumara, “respect for the property of others is the keystone of all interpersonal relations,” they also write.

While cultures and societies around the world held these seven elements to be basic moral rules, the team did find variations in how they were ranked. The team plans to gather data on modern moral values in the future, to see how differences in moral rankings today impact cooperation under various social conditions.

“Our study was based on historical descriptions of cultures from around the world,” says co-author Professor Harvey Whitehouse. “This data was collected prior to, and independently of, the development of the theories that we were testing.”

“Future work will be able to test more fine-grained predictions of the theory by gathering new data, even more systematically, out in the field.”

“We hope that this research helps to promote mutual understanding between people of different cultures; an appreciation of what we have in common, and how and why we differ,” Curry adds.

The paper, “Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies” has been published in the journal Current Anthropology.


New model boils morality down to three elements, aims to impart them to AI

How should a computer go about telling right from wrong?

Ethics.

Image credits Mark Morgan / Flickr.

According to a team of US researchers, a lot of factors come into play — but most people go through the same steps when making snap moral judgments. Based on these observations, the team has created a framework model to help our AI friends tell right from wrong even in complex settings.

Lying is bad — usually

“At issue is intuitive moral judgment, which is the snap decision that people make about whether something is good or bad, moral or immoral,” says Veljko Dubljević, a neuroethics researcher at North Carolina State University and lead author of the study.

“There have been many attempts to understand how people make intuitive moral judgments, but they all had significant flaws. In 2014, we proposed a model of moral judgment, called the Agent Deed Consequence (ADC) model — and now we have the first experimental results that offer a strong empirical corroboration of the ADC model in both mundane and dramatic realistic situations.”

So what’s so special about the ADC model? Well, the team explains that it can be used to determine what counts as moral or immoral even in tricky situations. For example, most of us would agree that lying isn’t moral. However, we’d probably (hopefully) also agree that lying to Nazis about the location of a Jewish family is solidly moral. The action itself — lying — can thus take various shades of ‘moral’ depending on the context.

We humans tend to have an innate understanding of this mechanism and assess the morality of an action based on our life experience. In order to understand the rules of the game and later impart them to our computers, the team developed the ADC model.

Boiled down, the model posits that people look to three things when assessing morality: the agent (the person who is doing something), the action in question, and the consequence (or outcome) of the action. Using this approach, researchers say, one can explain why lying can be a moral action. On the flipside, the ADC model also shows that telling the truth can, in fact, be immoral (if it is “done maliciously and causes harm,” Dubljević says).
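As a rough illustration of the idea (not the researchers’ actual formalism), you could imagine a toy scoring function that weighs the three components differently depending on the stakes. The weights and scales below are invented purely for the example:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    agent: float        # perceived intention, -1 (malicious) .. +1 (benevolent)
    deed: float         # the act itself, -1 (e.g. lying) .. +1 (e.g. truth-telling)
    consequence: float  # outcome, -1 (harm) .. +1 (benefit)

def adc_judgment(s: Scenario, high_stakes: bool) -> float:
    """Toy intuition score: positive ~ 'moral', negative ~ 'immoral'.
    The weights are made up; the study only suggests the deed dominates
    in low-stakes cases and the consequence dominates in high-stakes ones."""
    if high_stakes:
        w_agent, w_deed, w_cons = 0.2, 0.3, 0.5
    else:
        w_agent, w_deed, w_cons = 0.2, 0.5, 0.3
    return w_agent * s.agent + w_deed * s.deed + w_cons * s.consequence

# Lying to protect a family from harm: bad deed, good intent, good outcome.
print(adc_judgment(Scenario(agent=0.9, deed=-0.8, consequence=0.9), high_stakes=True))
```

Under these made-up weights the high-stakes lie still comes out with a positive score, which is the intuition the model is meant to capture.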

“This work is important because it provides a framework that can be used to help us determine when the ends may justify the means, or when they may not,” Dubljević says. “This has implications for clinical assessments, such as recognizing deficits in psychopathy, and technological applications, such as AI programming.”

In order to test their model, the team pitted it against a series of scenarios. These situations were designed to be logical, realistic, and easily understood by professional philosophers and laymen alike, the team explains. All scenarios were evaluated by a group of 141 philosophers with training in ethics prior to their use in the study.

In the first part of the trials, 528 participants from across the U.S. were asked to evaluate some of these scenarios in which the stakes were low — i.e. possible outcomes weren’t dire. During the second part, 786 participants were asked to evaluate more dire scenarios among the ones developed by the team — those that could result in severe harm, injury, or death.

When the stakes were low, the nature of the action itself was the strongest factor in determining the morality of a given situation. What mattered most in such situations, in other words, was whether a hypothetical individual was telling the truth or not — the outcome, be it good or bad, was secondary.

When the stakes were high, the outcome took center stage. It was more important, for example, to save a passenger from dying in a plane crash than to worry about the actions (be they good or bad) one took to reach this goal.

“For instance, the possibility of saving numerous lives seems to be able to justify less than savory actions, such as the use of violence, or motivations for action, such as greed, in certain conditions,” Dubljević says.

One of the key findings of the study was that philosophers and the general public assess morality in similar ways, suggesting that there is a common structure to moral intuition — one which we instinctively use, regardless of whether we’ve had any training in ethics. In other words, everyone makes snap moral judgments in a similar way.

“There are areas, such as AI and self-driving cars, where we need to incorporate decision making about what constitutes moral behavior,” Dubljević says. “Frameworks like the ADC model can be used as the underpinnings for the cognitive architecture we build for these technologies, and this is what I’m working on currently.”

The paper “Deciphering moral intuition: How agents, deeds, and consequences influence moral judgment” has been published in the journal PLOS ONE.


Researchers quantify basic rules of ethics and morality, plan to copy them into smart cars, even AI

As self-driving cars roar (silently, on electric engines) towards wide-scale use, one team is trying to answer a very difficult question: when accidents inevitably happen, where should the computer look for morality and ethics?

Ethical banana.

Image credits We Are Neo / Flickr.

Car crashes are a tragic but so far unavoidable side effect of modern transportation. We hope that autonomous cars, with their much faster reaction speed, virtually endless attention span, and boundless potential for connectivity, will dramatically reduce the incidence of such events. These systems, however, also come pre-packed with a fresh can of worms pertaining to morality and ethics.

The short of it is this: while we do have laws in place to assign responsibility after a crash, we understand that as it unfolds people may not make the ‘right’ choice. Under the shock of the event there isn’t enough time to ponder the best course of action, and a driver’s reaction will be a mix between an instinctual response and what seems, with limited information, to limit the risks for those involved. In other words, we take context into account when judging their actions, and morality is highly dependent on context.

But computers follow programs, and these aren’t compiled during car crashes. A program is written months or years in advance in a lab and will, in certain situations, sentence someone to injury or death to save somebody else. And therein lies the moral conundrum: how do you go about it? Do you ensure the passengers survive and everyone else be damned? Do you make sure there’s as little damage as possible, even if that means sacrificing the passengers for the greater good? It would be hard to market the latter, and just as hard to justify the former.

When dealing with something as tragic as car crashes, likely the only solution we’d all be happy with is for there to be no crashes at all, which sadly doesn’t seem possible for now. The best possible course, however, seems to be making these vehicles act like humans, or at least as humans would expect them to act: encoding human morality and ethics into 1s and 0s and downloading them onto a chip.

Which is exactly what a team of researchers is doing at The Institute of Cognitive Science at the University of Osnabrück in Germany.

Quantifying what’s ‘right’

The team has a heavy background in cognitive neuroscience and has put that experience to work in teaching machines how humans do morality. They had participants take a simulated drive in immersive virtual reality around a typical suburban setting on a foggy day, and then resolve unavoidable moral dilemmas involving inanimate objects, animals, and humans, to see which ones they decided to spare, and why.

By pooling the results of all participants, the team created statistical models outlining a framework of rules on which morality and ethical decision-making rely. Underpinning it all, the team says, seems to be a single value of life that drivers facing an unavoidable traffic collision assign to every human, animal, or inanimate object involved in the event. How each participant made their choice could be accurately explained and modeled by starting from this set of values.

That last bit is the most exciting finding — the existence of this set of values means that what we think of as the ‘right’ choice isn’t dependent only on context, but stems from quantifiable values. And what algorithms do very well is crunch values.

“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” said Leon R. Sütfeld, PhD student, Assistant Researcher at the University, and first author of the paper.
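To give a flavor of how such a model could be turned into code, here is a minimal sketch. The value-of-life figures below are made up for illustration; the study derives them from participants’ actual choices rather than hard-coding them like this.

```python
# Hypothetical value-of-life figures, for illustration only.
VALUE_OF_LIFE = {"adult": 1.0, "child": 1.2, "dog": 0.3, "deer": 0.2, "trash_can": 0.01}

def expected_loss(hits):
    """Sum of value-of-life weighted by each entity's probability of being hit."""
    return sum(VALUE_OF_LIFE[entity] * p_hit for entity, p_hit in hits)

def choose_trajectory(options):
    """Pick the swerve/brake option that minimizes the expected loss."""
    return min(options, key=lambda opt: expected_loss(opt["hits"]))

options = [
    {"name": "stay in lane", "hits": [("dog", 1.0)]},
    {"name": "swerve left", "hits": [("adult", 0.05), ("trash_can", 1.0)]},
]
print(choose_trajectory(options)["name"])  # -> "swerve left" under these numbers
```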

The findings offer a different way to address ethics concerns regarding self-driving cars and their behavior in life-threatening situations. Up to now, we’ve considered that morality is somehow innately human and that it can’t be copied, as shown by efforts to ensure these vehicles conform to ethical demands — such as the German Federal Ministry of Transport and Digital Infrastructure’s (BMVI) 20 ethical principles.

Some of the key points of the report are as follows:

  • Automated and connected transportation (driving) is ethically required when these systems cause fewer accidents than human drivers.
  • Damage to property must be allowed before injury to persons: in situations of danger, the protection of human life takes highest priority.
  • In the event of unavoidable accidents, all classification of people based on their personal characteristics (age, gender, physical or mental condition) is prohibited.
  • In all driving situations, it must be clearly defined and recognizable who is responsible for the task of driving – the human or the computer. Who is driving must be documented and recorded (for purposes of potential questions of liability).
  • The driver must fundamentally be able to determine the sharing and use of his driving data (data sovereignty).


Another point that the report dwells on heavily is how data recorded by the car can be used, and how to balance the privacy concerns of drivers with the demands of traffic safety and the economic interest in the user’s data. While this data needs to be recorded to ensure that everything went according to the 20 ethical principles, the BMVI also recognizes that there are huge commercial and state security interests in this data. Practices such as those “currently prevalent” with social media should especially be counteracted early on, the BMVI believes.

At first glance, rules such as the ones the BMVI set down seem quite reasonable. Of course you’d rather have a car damage a bit of property, or even risk the life of a pet, rather than that of a person. It’s common sense, right? If that’s the case, why would you need a car to ‘understand’ ethics when you can simply have one that ‘knows’ ethics? Well, after a few e-mails back and forth with Mr. Sütfeld, I came to see that ethics, much like quantum physics, sometimes doesn’t seem to play by the book.

“Some [of the] categorical rules [set out in the report] can sometimes be quite unreasonable in reality, if interpreted strictly,” Mr Sütfeld told ZME Science. “For example, it says that a human’s well-being is always more important than an animal’s well-being.”

To which I wanted to say, “well, obviously.” But now consider the following situation: say a dog runs out in front of a car in such a way that it is absolutely certain to be hit and killed unless the driver swerves onto the opposite lane. Swerving would save the dog, but carries a very tiny risk, say one in twenty, of a minor injury to the person driving, something along the lines of a sprained ankle.

“The categorical rule [i.e. human life is more important] could be interpreted such that you always have to run over the dog. If situations like this are repeated, over time 20 dogs will be killed for each prevented spraining of an ankle. For most people, this will sound quite unreasonable.”

“To make reasonable decisions in situations where the probabilities are involved, we thus need some system that can act in nuanced ways and adjust its judgement according to the probabilities at hand. Strictly interpreted categorical rules can often not fulfil the aspect of reasonableness.”
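The arithmetic behind that claim is easy to sketch. Assuming the made-up costs below (they are illustrative, not values from the study), a value-based rule would accept the one-in-twenty risk of a sprained ankle, while a strict categorical rule always sacrifices the dog:

```python
# Illustrative numbers only: swerving saves the dog but carries a 1-in-20
# chance of a sprained ankle; not swerving kills the dog for certain.
P_SPRAIN = 1 / 20
COST_SPRAIN = 1.0      # hypothetical "cost" of a sprained ankle
COST_DOG = 0.3         # hypothetical cost assigned to a dog's death

def expected_cost(swerve: bool) -> float:
    return P_SPRAIN * COST_SPRAIN if swerve else COST_DOG

print(expected_cost(swerve=True))   # 0.05 -> swerving has the lower expected cost
print(expected_cost(swerve=False))  # 0.30 -> the categorical rule picks this anyway
# Repeated many times, never swerving kills roughly 20 dogs per ankle sprain avoided.
```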

Ethicar

Miniature car.

Image via Pixabay.

So simply following Ethics Handbook 101 to the letter might lead to some very disappointing results because, again, morality is also dependent on context. The team’s findings could be the foundation of ensuring ethical self-driving behavior by allowing cars the flexibility to interpret the rules correctly in each situation. And, as a bonus, if the car’s computers understand what it means to act morally and make ethical choices, a large part of that data may not need to be recorded in the first place, nipping a whole new problem in the bud.

“We see this as the starting point for more methodological research that will show how to best assess and model human ethics for use in self-driving cars,” Mr Sütfeld added for ZME Science.

Overall, imbuing computers with morality may have heavy ramifications in how we think about and interact with autonomous vehicles and other machines, including AIs and self-aware robots. However, just because we now know it can be possible, doesn’t mean the issue is settled — far from it.

“We need to ask whether autonomous systems should adopt moral judgements,” says Prof. Gordon Pipa, senior author of the study. “If yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?”

As an example, he cites the new principles set out by the BMVI. Under this framework, a child who runs out on a busy road and causes a crash would be classified as being significantly involved in creating the risk, and less qualified to be saved in comparison with a person standing on the sidewalk who wasn’t involved in any way in creating the incident.

It’s an impossible decision for a human driver. The by-stander was innocent and possibly more likely to evade or survive the crash, but the child stands to lose more and is more likely to die. But any reaction a human driver would take would be both justifiable — in that it wasn’t premeditated — and blamable — in that maybe a better choice could have been taken. But a pre-programmed machine would be expected to both know exactly what it was doing, and make the right choice, every time.

I also asked Mr Sütfeld if reaching a consensus on what constitutes ethical behavior in such a car is actually possible, and if so, how we can go about incorporating each country’s views on morality and ethics (their “mean ethical values,” as I put it) into the team’s results.

“Some ethical considerations are deeply rooted in a society and in law, so that they cannot easily be allowed to be overridden. For example, the German Constitution strictly claims that all humans have the same value, and no distinction can be made based on sex, age, or other factors. Yet most people are likely to save a child over an elderly person if no other options exist,” he told me. “In such cases, the law could (and is likely to) overrule the results of an assessment.”

“Of course, to derive a representative set of values for the model, the assessment would have to be repeated with a large and representative sample of the population. This could also be done for every region (i.e., country or larger constructs such as the EU), and be repeated every few years in order to always correctly portrait the current „mean ethical values“ of a given society.”

So the first step towards ethical cars, it seems, is to sit down and have a talk: we need to settle on what the “right” choice actually is.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” explains Prof. Peter König, a senior author of the paper.

“Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, whether machines should act just like humans.”

But that’s something society as a whole has to establish. In the meantime, the team has worked hard to provide us with some of the tools we’ll need to put our decisions into practice.

As robots and AIs become a larger part of our lives, computer morality might come to play a much bigger role as well. By helping them better understand and relate to us, ethical AI might help alleviate some of the concerns people have about their use in the first place. I was already pressing Mr Sütfeld deep into the ‘what-if’ realm, but he agrees autonomous car ethics are likely just the beginning.

“As technology evolves there will be more domains in which machine ethics come into play. They should then be studied carefully and it’s possible that it makes sense to then use what we already know about machine ethics,” he told ZME Science.

“So in essence, yes, this may have implications for other domains, but we’ll see about that when it comes up.”

The paper “Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure” has been published in the journal Frontiers in Behavioral Neuroscience.


Our perception of a character comes not from their actions, but from how they compare to others

There are some characters whom we love although they do legitimately bad things — take Walter White for example. A new paper from the University at Buffalo tries to explain why we still root for these characters.

Darth Vader.

On the one hand, I really hoped for the best for Walter, all the way to the end, which I found surprising because he does a lot of shady, even downright dark things on the show. And I’m not the only one; in fact, most people feel the same way I do, while agreeing that Walter is, when you draw the line, more villain than hero. So what gives?

According to lead author Matthew Grizzard, an assistant professor in UB’s Department of Communication and an expert on the cognitive, emotional, and psychobiological effects of media entertainment, it’s because behavior isn’t the sole standard by which we gauge a villain or a hero.

Exactly how to make an audience like or dislike a hero has been a burning question on the minds of media researchers ever since the ’70s. They’ve had a lot of time to look into the issue since then, and one thing seems to stand the test of time: morality matters to the public. People simply love the good guys and dislike the bad guys. But Grizzard’s study suggests it’s not necessarily behavior that we use when making a distinction between the hero and the villain.

Whiter than thou

The team, which included co-authors Jialing Huang, Kaitlin Fitzgerald, Changhyun Ahn and Haoran Chu, all UB graduate students, wanted to find out if slight outward differences — for example wearing darker or lighter clothes — would be enough to make people consider a character as being a hero or villain. So, they digitally altered photographs of characters to see if they could influence the audience’s perception of them.

They also drew on previous research which found that while villains and heroes differ in morality, the two don’t differ in competence. In other words, villains aren’t simply immoral, but they’re “good at being bad”, according to Grizzard. This offered the team an opportunity to determine if their alterations activated participants’ perception of a hero or villain or if any shift in perception was caused by unrelated biases.

“If our data had come out where the heroic-looking character was both more moral and more competent than the villain, then we probably just created a bias,” says Grizzard.

“But because the hero was more moral than the villain but equally competent, we’re more confident that visuals can activate perceptions of heroic and villainous characters.”

The study found that while appearance does, to a certain degree, help skew perception of a character as either a hero or a villain, characters were judged chiefly by how they compare to the others, and by the order in which they’re introduced to the audience. For example, a hero was usually judged as being more moral and heroic if he or she appeared after the villain, and villains were usually considered to be more villainous if they were introduced after a hero. This suggests that people don’t make isolated judgments on the qualities of a character using a strict moral standard, but rather judge by comparing them with those they oppose.

In Walter’s case, people see the character’s ethics taking a constant turn for the worse and still stick by his side. The trick is that Walter doesn’t evolve by himself — there are all those other characters around, usually growing worse by the episode, and Walter comes out on top when compared to them. He seems better against the really low standard the others in the show set, making him feel like the good guy.

Well, if nothing else, the villains at least have an easier time catching up to Mr. Good Guy, Grizzard says.

“We find that characters who are perceived as villains get a bigger boost from the good actions or apparent altruism than heroes, like the Severus Snape character from the Harry Potter books and films.”

The findings could help improve the effectiveness of character-based public service campaigns, or for programs trying to promote a certain behavior. By helping authors understand how we perceive their characters, the research could also help them write better stories.

And on a more personal note, it can help each and every one of us form a clearer image of the characters we love — with all their flaws and strengths.

The full paper “Sensing Heroes and Villains: Character-Schema and the Disposition Formation Process” has been published in the journal Communication Research.


No, Baby Boomers don’t work harder than X-ers, Y-ers or Millennials. Work is just as hard for everyone

miners

Credit: Public Domain

If you’re under thirty, chances are your parents gave you a long talk about what real hard work means. While we can’t speak for everyone here, the science is pretty clear — there seems to be no difference in work ethic across generations.

There’s no denying that we can find perceptual gaps in attitudes or thinking between every generation. “Don’t trust anyone over 30” was the motto of the hippie counterculture, today’s baby boomers.

These ideological rifts between generations naturally lead to misunderstandings, which may translate to the workplace. You’ll hear human resource managers say that baby boomers are more goal-oriented and competitive, while Generation Y-ers and Millennials are more technology-savvy and better at problem solving and teamwork. Is this narrative actually rooted in reality?

Keith Zabel of Wayne State University did not delve too much into ideological differences across generations but instead focused on studying work ethic. He and his colleagues analyzed 77 different studies comprising 105 distinct measurements of work ethic to verify whether the popular stereotype of harder-working generations is true.

“Given that Baby Boomers, Generation X, and Millennial generations will continue working together for decades, it is of vital importance to determine whether generational differences exist in the Protestant work ethic (PWE) endorsement, an important enabler of twenty-first-century skill development,” the authors of the paper wrote in the Journal of Business and Psychology.

The Protestant work ethic is an interdisciplinary concept which holds that hard work, discipline, and frugality result from the values espoused by the Protestant faith. That’s in contrast to the focus on religious attendance, confession, and ceremonial sacrament in the Roman Catholic tradition. Some Americans might know this as the Puritan work ethic, owing to its prevalence among the Puritans.

Max Weber, an eminent German philosopher and sociologist, famously argued at the start of the 20th century that the PWE, characterized by a disdain of leisure activities and a strong belief in the importance of hard work, was largely responsible for the prosperous economic boom in Europe and the United States at the turn of the last century. Yet Zabel’s analysis revealed no significant differences in work ethic among different generations.

“The finding that generational differences in the Protestant work ethic do not exist suggests that organizational initiatives aimed at changing talent management strategies and targeting them for the ‘very different’ millennial generation may be unwarranted and not a value added activity,” Zabel said in a news release.

“Human resource-related organizational interventions aimed at building 21st century skills should therefore not be concerned with generational differences in Protestant work ethic as part of the intervention.”

 


Artificial Intelligence: teaching a robot to have human values

artificial-intelligence

Image: Pixabay, Geralt.

Artificial Intelligence. To most of us, that brings up images and short clips from movies where AI dominates Earth and enslaves us poor humans. Put away those connotations for a moment. AI in its purest sense, where programs evolve and improve themselves, has been very interesting to watch. Google recently showcased a striking example: they plugged a program into a video game, and in a matter of hours it had taught itself to play, and a few hours later it could play better than any human. Although this is slightly frightening, it shows how powerful the technology is getting.

Driverless cars are an ever-growing presence in the news. Most big tech companies have caught on; Apple recently hired a leader in AI, probably for its own car, and of course Google has logged thousands of kilometres of testing with its own driverless car. This raises a lot of machine ethics issues:

Machine ethics

Suppose a self-driving car gets itself into a catastrophic situation where it can either plow into a group of ten people or crash into a wall, killing the driver in the process. What now? Or how should it trade off a small probability of human injury against the near-certain probability of damaging very costly property? The list goes on.

Legal issues

Suppose self-driving cars manage to cut the number of car accidents in half. Great news, right? Rather than (for example) 40,000 accidents, we now have 20,000. However, the car manufacturer now faces 20,000 lawsuits. Should legal questions about AI be treated differently from the current laws that shape our actions?

Autonomous weapons

What about extremely powerful weapons? Should we ever entrust such weapons to AI? We would have to implement and hardwire certain humanitarian laws. In my opinion, a program in control of weapons may make fewer rash decisions when it comes to warfare, if everything is calculated. However, giving a program human values and telling it how to act is easier said than done.

Another issue that we must address is the AI’s ‘will’ for self-improvement. This feature is what makes AI so powerful: its ability to make itself better at carrying out its task and to become more efficient. However, this raises a few questions. A robot wanting to improve its ability to achieve a set of human goals may upgrade its hardware and software, generate a better world model, and so on. In other words, would it develop a sense of self-preservation? Unlimited resource acquisition? You can see where I’m heading with this. Not to mention: how can we guarantee AI keeps its goals when it ‘self-evolves’?

Unarguably, we will want to instil some values – in particular ethical values (e.g. kindness, mercy). How rigidly should these AIs adhere to those ethical values? And what even are the ethical values we want to give them? It is clear that as a planet we have a multitude of cultures and beliefs. Who should get to decide that, and when?

And what if the AI realises that its world model turns out to be quite different from reality? Or say it is given the task of eradicating a certain disease from a country: after that has been achieved, will it be able to extrapolate its goals or redefine them somehow? For more on this, take a look at what has been called an “ontological crisis”.

Although all of these issues seem chilling, and yes, they have parallels with some Hollywood films, the potential benefits are mind-blowing, and make it possible to envision a world without disease or poverty. Imagine if a super-computer loaded with a friendly AI managed to eradicate Ebola, or to remove disease from the poorest areas of the world by thinking of an ingenious solution. However far away that may be, we should start thinking of an answer to these questions, which will have to come from us, society, and not from companies and industry.

Facebook conducted psychological experiments on its users

Facebook being unethical – again

I think at this point it’s safe to say that ethics isn’t necessarily one of Facebook‘s concerns, and this study shows it once again. What am I talking about? A covert experiment which influenced the emotions of 600,000 people, without asking for permission.

The entire situation is starting to become one big Monty Python sketch. Was permission for the study granted? At first, the answer was ‘yes’, but it quickly changed to ‘no’. Then it became ‘maybe’, but the final call was still ‘no’. Initially, they said it was funded by the US military, but then that statement was retracted without any further explanation. It’s easy to understand why this caused an uproar and massive discussions about the study’s lack of ethics. This pretty much sums it up:

“What many of us feared is already a reality: Facebook is using us as lab rats, and not just to figure out which ads we’ll respond to but actually change our emotions,” wrote Animalnewyork.com in a blog post on Friday morning.

Cornell University and the head of the study quickly washed their hands of the whole thing, saying that they had nothing to do with gathering the data or running the experiment; they merely interpreted the results:

“Because the research was conducted independently by Facebook and Professor Hancock had access only to results – and not to any individual, identifiable data at any time – Cornell University’s Institutional Review Board concluded that he was not directly engaged in human research,” Cornell University said in a statement.

The social experiment

Facebook’s CEO, Mark Zuckerberg, hasn’t replied to the heavy accusations brought against the company.

What the experiment actually did was this: for one week, researchers changed the content of the news feeds of a random sample of over 600,000 Facebook users. For one group of users they removed content that contained positive words; for another group they removed content that contained negative words. The point was to see whether this biased way of presenting things had any effect on users’ emotions. Interestingly enough, it did: news presented in a more positive way had a more positive impact, and vice versa. The problem was that the users didn’t know they were participating in any research, so, just like with Twitter, it can be argued that the way the data for this study was gathered is unethical.

Scientifically, it can clearly be said that the study has significant value. The number of people involved is absolutely huge (quite possibly the largest sample size ever used in a psychological study), so the results carry high statistical significance. However, the effect was one of the smallest ever published, so the results, while detectable, are extremely small.
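To see why a huge sample can make a tiny effect “significant”, consider this toy simulation. The numbers are invented for illustration and are not the study’s actual figures:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300_000                                          # roughly Facebook-scale group sizes
control = rng.normal(loc=5.00, scale=2.0, size=n)    # e.g. % positive words in posts
treated = rng.normal(loc=5.02, scale=2.0, size=n)    # a tiny 0.02-point shift

t, p = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt((treated.var() + control.var()) / 2)
print(f"p = {p:.2g}, Cohen's d = {cohens_d:.4f}")
# With hundreds of thousands of users, even a ~0.01 standard-deviation effect
# comes out "statistically significant", which is the point made above.
```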

But the problem with this study isn’t that the results were small; it’s that, once again, Facebook didn’t get the participants’ approval. Informed consent to participate in a study has been at the core of research ethics since WWII, and this simply goes against those principles.

“It’s completely unacceptable for the terms of service to force everybody on Facebook to participate in experiments,” said Kate Crawford, visiting professor at MIT’s Center for Civic Media and principal researcher at Microsoft Research.

Facebook said that the study was conducted anonymously, so researchers could not learn the names of the research subjects, but the fact remains that the company attempted to manipulate the feelings of some of its users without consent. It also doesn’t seem to care, since it didn’t even bother to clarify the situation. Gotta love Facebook!


Is making cyborg cockroaches immoral?

Backyard Brains

(c) Backyard Brains

Through the halls of TEDxDetroit last week, participants were introduced to an unfamiliar and unlikely guest: a remote-controlled cyborg cockroach. RoboRoach #12, as it was called, can be directed to move either left or right by transmitting electrical signals through electrodes attached to the insect’s antennae, triggered via Bluetooth from a smartphone. Scientists have been doing these sorts of experiments for years now in an attempt to better understand how the nervous system works and to demonstrate how it can be manipulated.

Greg Gage and Tim Marzullo – co-founders of an educational company called Backyard Brains and the keynote speakers at the TEDx event where the cyborg roach was shown – have something different in mind. They want to send RoboRoaches all over the U.S. to anyone who’d be willing to experiment with them. For $99, the company sends you a kit with instructions on how to convert your very own roach into a cyborg for educational purposes – in fact, it’s intended for kids as young as ten years old, and the project’s aim is to spark a neuroscience revolution. Post-TEDxDetroit, however, a lot of people, including prominent figures from the scientific community, were outraged and challenged the ethics of RoboRoaches.

“They encourage amateurs to operate invasively on living organisms” and “encourage thinking of complex living organisms as mere machines or tools,” says Michael Allen Fox, a professor of philosophy at Queen’s University in Kingston, Canada.

“It’s kind of weird to control via your smartphone a living organism,” says William Newman, a presenter at TEDx and managing principal at the Newport Consulting Group, who got to play with a RoboRoach at the conference.

How do RoboRoach #12 and its predecessors become slaves to a flick on an iPhone touchscreen? In the instruction kit, which also ships with a live cockroach, students are guided through the whole process. First, the student is instructed to anesthetize the insect by dousing it with ice water. Then a patch of the insect’s head shell is sanded so that it becomes adhesive; otherwise the superglue and electrodes won’t stick. A ground wire is inserted into the insect’s thorax. Next, students need to be extremely careful while trimming the insect’s antennae before inserting silver electrodes into them. Finally, a circuit fixed to the cockroach’s back relays electrical signals to the electrodes, as instructed via a smartphone’s Bluetooth.

Gage says, however, that the cockroaches do not feel any pain throughout this process, though it is questionable how certain he can be of this claim. Many aren’t convinced. For instance, animal behavior scientist Jonathan Balcombe of the Humane Society University in Washington, D.C. says: “if it was discovered that a teacher was having students use magnifying glasses to burn ants and then look at their tissue, how would people react?”

That’s an interesting question, but I can also see the educational benefits, of course. The exercise teaches students how quintessential the brain is and how it governs bodily functions through electrical signals. Scientists, unfortunately, rely heavily on model animals like mice, worms, and monkeys for their research. These animals certainly suffer, but until a surrogate model is found, the potential gain convinces most policymakers that this practice needs to continue, despite the moral questions it poses. Of course, this kind of research is performed by adults, behind closed doors, in the lab, not by ten-year-old children. Also, what about frog dissections in biology classes? Some schools in California have banned the practice entirely; should other schools follow suit?

What happens to the roaches after they’re ‘used and abused’? Well, they go to a roach retirement home, of course. I’m not kidding. Gage says that all students learn they have to care for the roaches: treating wounds by “putting a little Vaseline” on them, and minimizing suffering whenever possible. When no longer needed, the roaches are sent to a retirement tank the scientists call Shady Acres, where disabled insects go on with their lives. “They do what they like to do: make babies, eat, and poop.”

Gage acknowledges, however, that he has indeed received a ton of hate mail. “We get a lot of e-mails telling us we’re teaching kids to be psychopaths.”

It’s worth noting that cyborg roaches have been used in research for some time. Scientists in North Carolina, for instance, are trying to determine whether remote-controlled cockroaches could be the next step in emergency rescue. The researchers hope that these roaches can be equipped with tiny microphones and navigate their way through cramped, dark spaces in an effort to find survivors in disaster situations.

So, ZME readers, what do you think? Should Cyber