

Feedback, not evidence, makes us confident we’re right — even when we’re not

We tend to look only at the most recent feedback when gauging our own competence, a new paper reports. The findings can help explain why people and groups stick to their beliefs even in the face of overwhelming evidence to the contrary.

Image credits Mohamed Hassan.

A team of researchers from the University of California, Berkeley thinks that feedback — rather than hard evidence — is what makes people feel certain of their beliefs when learning something new, or when trying to make a decision. In other words, people’s beliefs tend to be reinforced by the positive or negative reactions they receive in response to an opinion, task, or interaction, not by logic, reasoning, or data.

“Yes but you see, I’m right”

“If you think you know a lot about something, even though you don’t, you’re less likely to be curious enough to explore the topic further, and will fail to learn how little you know,” said study lead author Louis Marti, a Ph.D. student in psychology at UC Berkeley.

“If you use a crazy theory to make a correct prediction a couple of times, you can get stuck in that belief and may not be as interested in gathering more information,” adds study senior author Celeste Kidd, an assistant professor of psychology at UC Berkeley.

This dynamic is very pervasive, the team writes, playing out in every area of our lives — from how we interact with family, friends, or coworkers, to our consumption of news, social media, and the echo chambers that form around us. It’s actually quite bad news, as this feedback-based reinforcement pattern has a profound effect on how we handle and integrate new information into our belief systems. It’s especially active in the case of information that challenges our worldview, and can limit our intellectual horizons, the team explains.

It can also help explain why some people are easily duped by charlatans.

For the study, the team worked with over 500 adult subjects recruited through Amazon’s Mechanical Turk crowd-sourcing platform. Participants were placed in front of a computer screen displaying different combinations of colored shapes, and asked to identify which shapes qualify as a “Daxxy”.

If you don’t know what a Daxxy is, fret not — that was the whole point. Daxxies are make-believe objects that the team pulled out of a top hat somewhere, specifically for this experiment. Participants weren’t told what a Daxxy is, nor were they given any clues as to its defining characteristics. The experiment was designed to force the participants to make blind guesses, and to track how their choices evolved over time.

In the end, the researchers used these patterns of choice to see what influences people’s confidence in their knowledge or beliefs while learning.

Participants were told whether they picked right or wrong on each try, but not why their answer was correct or not. After each guess, they reported whether or not they were certain of their answer. By the end of the experiment, the team reports, a clear trend had emerged: the subjects consistently based their certainty on whether they had correctly identified a Daxxy during the last four or five guesses, rather than on all the information they had gathered throughout the trial.

“What we found interesting is that they could get the first 19 guesses in a row wrong, but if they got the last five right, they felt very confident,” Marti said. “It’s not that they weren’t paying attention, they were learning what a Daxxy was, but they weren’t using most of what they learned to inform their certainty.”

By contrast, Marti says, learners should base their certainty on observations made throughout the learning process — but not discount feedback either.

“If your goal is to arrive at the truth, the strategy of using your most recent feedback, rather than all of the data you’ve accumulated, is not a great tactic,” he said.
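
To make the contrast concrete, here is a minimal sketch (our own illustration in Python, not the study's actual model) of the two certainty strategies, applied to the 19-wrong-then-5-right pattern Marti describes:

```python
def certainty_full_history(feedback):
    """Certainty as the fraction of correct answers across all trials."""
    return sum(feedback) / len(feedback)

def certainty_recent(feedback, window=5):
    """Certainty based only on the last few trials, as participants did."""
    recent = feedback[-window:]
    return sum(recent) / len(recent)

# Nineteen wrong guesses (0) followed by five right ones (1):
feedback = [0] * 19 + [1] * 5

print(certainty_full_history(feedback))  # ~0.21 -- warranted caution
print(certainty_recent(feedback))        # 1.0  -- unwarranted confidence
```

The window size of five is just a stand-in for the "last four or five guesses" the study observed.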

The paper “Certainty Is Primarily Determined by Past Performance During Concept Learning” has been published in the journal Open Mind.


Endowing AI with confidence and doubt will make it more useful, paper argues

Hard-wiring AIs with confidence and self-doubt could help them better perform their tasks while recognizing when they need help or supervision, a team of researchers believes.

Image credits Tero Vesalainen.

Confidence — that thing we all wish we had at parties but can thankfully be substituted with alcohol. Having confidence in one’s own abilities is generally considered to be a good thing, although, as a certain presidency demonstrates, too much of it and you annoy the whole planet. Which is an important point to discuss, given that we’re toying around with creating actual minds in the form of AI. So would confidence, and its mirror twin doubt, prove of any use to a thinking machine?

That’s the question a team of researchers led by Dylan Hadfield-Menell from the University of California, Berkeley set out to answer. We already know part of the answer — we know what happens when machines get over-confident, he says. A perfect example is Facebook’s newsfeed algorithms. They were designed to suggest articles and posts matching people’s interests, based on what they click on or share. But by following those instructions to the letter, they ended up filling some feeds with nothing but fake news. A sprinkling of self-doubt would have been a great boon in this case.

“If Facebook had this thinking, we might not have had such a problem with fake news,” says Hadfield-Menell.

The team believes the answer lies in human oversight. Instead of showing every article or post the algorithm thinks a Facebook user wants to see, a more uncertain system would be prompted to defer to a human referee in case a link smells fishy.
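
In code, that deferral policy is simple. Here is a minimal sketch, with a made-up scorer and hypothetical function names (not Facebook's or the paper's actual system):

```python
def route_post(post, score_interest, threshold=0.9):
    """Surface a post automatically only when the model is confident;
    otherwise hand it off to a human referee."""
    label, confidence = score_interest(post)
    if confidence >= threshold:
        return label                  # act autonomously
    return "needs_human_review"       # defer to human oversight

# A stand-in scorer, purely for demonstration:
def toy_scorer(post):
    confidence = 0.95 if "cats" in post else 0.4
    return "show", confidence

print(route_post("10 cats doing science", toy_scorer))     # show
print(route_post("SHOCKING miracle cure!!!", toy_scorer))  # needs_human_review
```

The interesting design question, which the off-switch experiment below tackles, is where to set that threshold.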

But knowing that doubt can help make our machines better at what they do isn’t the same as knowing how, and how much of it, should be implemented. So the team set up an experiment to determine how a robot’s sense of its own usefulness could be used in the creation of artificial intelligence.

The off-switch

The team designed a mathematical model of a human-robot interaction they call the “off-switch game” to see how a machine’s confidence levels would impact its interaction with us. Less a game per se and more a simulation, it basically consists of a robot with an off switch that is given a task to do. A human overseeing the robot can press this button at any time to stop it, but the robot, in turn, can choose to disable the switch if it so desires.

Unsurprisingly, when the machine was given a high degree of confidence, it would never allow the human player to switch it off. In contrast, a robot endowed with low confidence would always allow the player to shut it down, even if it was performing its task perfectly.

Hadfield-Menell believes this is a good indication that we shouldn’t make AIs too “insecure”. For example, if you task your autonomous car with taking the kids to school in the morning, it should never let a child take control. In this case, the AI should be confident that its own ability is greater than the children’s and refuse to relinquish control. But if you were in the car and told it to stop, it should comply. The best robots, he adds, will be those that can best balance these two extremes.
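
As a rough illustration of that balance, here is a toy simulation in the spirit of the off-switch game, using made-up numbers and a much simpler setup than the paper's formal model. The robot's action is good with some probability, the human overseer judges it correctly with some other probability, and "confidence" is simply the chance that the robot ignores the off switch:

```python
import random

def play_round(confidence, robot_acc, human_acc):
    """One round of a toy off-switch game: +1 for a good action carried
    out, -1 for a bad one, 0 if the robot lets itself be switched off."""
    action_is_good = random.random() < robot_acc
    # The human reads the action correctly with probability human_acc and
    # presses the off switch whenever the action looks bad to them.
    human_is_right = random.random() < human_acc
    presses_stop = (not action_is_good) if human_is_right else action_is_good
    if presses_stop:
        if random.random() < confidence:
            return 1 if action_is_good else -1  # robot disables the switch
        return 0                                # robot defers and shuts down
    return 1 if action_is_good else -1

def average_payoff(confidence, robot_acc, human_acc, rounds=100_000):
    return sum(play_round(confidence, robot_acc, human_acc)
               for _ in range(rounds)) / rounds

# Confidence pays when the robot is the better judge; obedience pays
# when the human is.
for robot_acc, human_acc in [(0.9, 0.6), (0.6, 0.9)]:
    for confidence in (0.0, 0.5, 1.0):
        payoff = average_payoff(confidence, robot_acc, human_acc)
        print(f"robot {robot_acc}, human {human_acc}, "
              f"confidence {confidence}: payoff {payoff:+.2f}")
```

In this stripped-down version, the best strategy sits at one extreme or the other, depending on who is the more reliable judge; in the paper's richer model, where the robot treats the human's button press as information about how well it is doing, intermediate levels of uncertainty become valuable.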

While the idea of a robot refusing a command to stop or shut down might seem a bit scary or far-fetched (and has been debated at large in the past), context is everything. Humans are fallible too, and you wouldn’t want a robotic firefighter to abandon a rescue or stop putting out a fire because someone ordered it to by mistake. Or a robotic nurse to stop treating a delirious patient who orders it to shut down. Calibrating this confidence is a key part of AI operation, and something we’ll have to consider before putting people and AIs side by side in the real world.

The issue is wider than simple confidence, however. As machines will be expected to make more and more decisions that directly impact human safety, it’s important that we put a solid ethical framework in place sooner rather than later, according to Hadfield-Menell. Next, he plans to see how a robot’s decision-making changes with access to more information regarding its own usefulness — for example, how a coffee-pot robot’s behavior might change in the morning if it knows that’s when it’s most useful. Ultimately, he wants his research to help create AIs that are more predictable and make decisions that are more intuitive to us humans.

The full paper “The Off-Switch Game” has been published on the preprint server arXiv.

Public is skeptical of all research tied to a company, new study shows

A new study has revealed that at least when it comes to health risks or medicine, most people don’t believe studies associated with an industrial partner, even one with a good reputation.

No one really loves corporations; still, they play a vital role in society — and in science. But at what cost? Image credits: takomabibelot.

In the past couple of years, we’ve seen a disturbing trend of anti-intellectualism. People don’t believe the experts, don’t want science, and often take their news and information from clickbait Facebook posts or articles. Science isn’t quick to react and scientists rarely aim to grab your attention with catchy headlines, so this problem is likely to stick with us for a long time. However, if there is one thing scientists are good at, it’s figuring stuff out — and they recently showed that one of the mechanisms eroding trust in science is partnership with industry.

It doesn’t take a genius to realize that most people dislike big companies, but the way this dislike carries over onto science is still not properly explored. Many health studies have a corporate partner or involve some kind of drug or treatment method developed by a corporation; how impactful are these associations?

“People have a hard time seeing research related to health risks as legitimate if done with a corporate partner,” said John Besley, lead author and an associate professor who studies the public’s perception of science. “This initial study was meant to understand the scope of the problem. Our long-term goal though is to develop a set of principles so that quality research that’s tied to a company will be better perceived by the public.”

In Besley’s study, participants were randomly assigned to evaluate one of 15 scenarios involving various partnerships between scientists from a university, a government agency, a non-governmental organization, and a large food company. Basically, participants were presented with the same study on genetically modified foods and trans fats, but with different partnerships listed for its authors.

The results clearly showed that people tended to dislike and distrust the science when the food company was involved. In fact, 77 percent of participants had something negative to say about this association and questioned the quality of the results. By contrast, only 28 percent of participants said something negative when a corporate partner wasn’t present. Additional partners, even reliable ones such as the Centers for Disease Control and Prevention, didn’t change these figures significantly.

What this tells us is pretty simple: even if you do quality science, there’s a good chance people won’t believe you because you got money from a company. This is understandable to some extent, and you’d be tempted to say “OK, scientists simply shouldn’t partner up with corporations” and leave it at that. But then… where are you supposed to get funding from? In the US, the funding leash is getting shorter and shorter, and there’s virtually no branch of science that isn’t getting significant funding from industry. Much of the science happening today is also transdisciplinary and benefits from having multiple actors involved. The study explains:

“University scientists conducting research on topics of potential health concern often want to partner with a range of actors, including government entities, non-governmental organizations, and private enterprises. Such partnerships can provide access to needed resources, including funding. However, those who observe the results of such partnerships may judge those results based on who is involved.”

So you’re stuck between a rock and a hard place — either risk the public not believing in your research or just never get the money you need in the first place. It’s a challenging time to be a researcher.

“Ultimately, the hope is to find some way to ensure quality research isn’t rejected just because of who is involved,” Besley said. “But for now, it looks like it may take a lot of work by scientists who want to use corporate resources for their studies to convince others that such ties aren’t affecting the quality of their research.”

Journal Reference: John C. Besley, Aaron M. McCright, Nagwan R. Zahry, Kevin C. Elliott, Norbert E. Kaminski, Joseph D. Martin — “Perceived conflict of interest in health science partnerships.” PLOS ONE. https://doi.org/10.1371/journal.pone.0175643

Arousal makes us more confident in what we perceive, study finds

A new study found that even imperceptible changes in our state of arousal can influence the confidence we have in our visual experiences.

Image credits Nan Palmero / Flickr.

A team from University College London has found that subtle increases in arousal — even ones too slight to register consciously — affected how confident participants felt about what they were seeing when asked to complete a simple task.

The team asked 29 volunteers to watch a cloud of moving dots on a screen, decide whether the dots were moving to the left or to the right, then rate how confident they were in their answer. Without the volunteers knowing, some of the trials started with a disgusted face appearing on the screen — too briefly for the participants to consciously perceive it.

But their unconscious did pick up on the image, causing their heart rate to increase and their pupils to dilate. The team found that even when the dots were made noisier and harder to make out, participants in this aroused state maintained their confidence in the answers they were giving.

“Typically when we see something, we have insight not only into what it is that we’ve seen, but also how clearly we’ve seen it,” explains lead author Micah Allen from the UCL Institute of Neurology.

“If the picture is clouded or obscured, our feeling of confidence in what we’ve seen is lessened. This ability to accurately appraise our own experiences is an important part of our everyday lives.”

Previously, Allen explains, researchers have viewed the brain as “a scientist or statistician” that evaluates the quality of our experiences and, based on this, gives us our feeling of confidence. The study challenges this view by tying our confidence to our physical state.

“Our results suggest that subtle, unconscious changes in the physiological state of our bodies impact how we perceive uncertainty. Interestingly, we found that not only did confidence correlate with how fast a participant’s heart beat on each trial, but that artificially increasing arousal actually caused participants to act as if they were blind to the quality of their visual experiences,” said Allen.

He added that the findings suggest our capacity for conscious introspection depends far more on the body’s state than previously assumed. Professor Geraint Rees, Dean of the UCL Faculty of Life Sciences and co-author of the paper, believes the findings could help us understand people struggling with depression. Because anxiety and depression alter the body’s state of arousal, patients suffering from these conditions might perceive the world as more (or less) certain than it really is.

The full paper “Unexpected arousal modulates the influence of sensory noise on confidence” has been published in the journal eLife.