Tag Archives: bias

The food industry is skewing research, but we’re onto them now

The food industry could be actively working against public health by influencing the results of studies in its favor.

Image credits Stefan Divily.

New research reports that 13.4% of the nutrition studies it analyzed disclosed ties to the food industry. Studies in which the industry was involved were more likely to produce results favorable to its interests, the team adds, raising questions about the merits of these findings.

Harmburger

“This study found that the food industry is commonly involved in published research from leading nutrition journals. Where the food industry is involved, research findings are nearly six times more likely to be favourable to their interests than when there is no food industry involvement,” the authors note.

It’s not uncommon for industry to become involved with research — after all, companies have a direct stake in furthering knowledge in their field of activity. This involvement can range from offering funding to assigning employees to research teams for support or active research.

The current paper shows that, at least in the food industry, such involvement is actively skewing nutrition research. It is possible, the team reports, that this puts public health at risk, as corporate interests can start dictating what findings see the light of day, where, and in what form. Such findings are worrying since corporations are notorious for putting profits above everything else, including truth or the common good.

In order to get a better idea of just how extensive the industry’s influence on food-related research is, the team — led by Gary Sacks of Deakin University in Melbourne, Australia — analyzed all papers published in the top 10 peer-reviewed academic journals related to diet or nutrition. They looked at which papers had ties to the industry, such as funding from food companies or affiliated organizations, and then at whether the findings favored industry interests.

Roughly 13.4% of the articles had some level of industry involvement, with some journals bearing more of the blame than others. The authors explain that studies with industry involvement were over five times more likely to favor industry interests than a random sample of studies without such involvement (55.6% vs. 9.7%).

Such figures offer a pretty big warning sign that industry involvement could promote research bias or help push an agenda at the expense of quality science (such as the neglect of topics that are important for public health but go against industrial interests). The authors suggest several mechanisms that could be employed to preserve the quality of nutrition research.

The paper “The characteristics and extent of food industry involvement in peer-reviewed research articles from 10 leading nutrition-related journals in 2018” has been published in the journal PLOS ONE.

Politically incorrect language can seem sincere, but only if you’re saying what the audience wants to hear

Everyone prefers politically correct language sometimes, a new study reports. Where we differ is whom we use it with, and how we perceive it depending on the group it’s being applied to.

Image credits Rudy and Peter Skitterians.

The concept of political correctness doesn’t get a lot of love in the online environment, so much so that it’s often invoked to imply a lack of authenticity in those who use it. But it’s also a very divisive term; what others would see as dishonesty and sweet-talking, I would often just chalk up to being nice in conversation.

But a new study shows that, in fact, we’re all inclined to use politically correct language; we just apply it to different people. We tend to see it as compassionate when it’s applied to groups we support or care for, and as disingenuous when it’s addressed to other groups. Overall, however, we all tend to view people who use politically correct language as warmer, but less authentic, and thus less likely to hold true to a particular view or idea.

Speeches and stones may break my bones

Such language is often used in a (genuine or disingenuous) attempt to appear more sensitive to the feelings of other people, especially those perceived to be socially disadvantaged. One example would be saying “Happy Holidays” instead of “Merry Christmas” in recognition that not everyone holds Christian or religious beliefs.

On paper, it all sounds great — I think all of us here agree that being considerate of others is a good thing. In practice, as you may know from discussions on various social media groups, the term is thrown about as a shorthand for censorship or socially-sanctioned limitations on free speech.

So there’s obviously a disconnect, but where? The team carried out a series of experiments totaling roughly 5,000 participants to examine this issue, reporting that, broadly speaking, such language can make us seem less sincere by making our speech appear more strategic and calculated.

The first experiment asked participants to review a written speech and imagine a senator delivering it to an audience. Half the participants received a speech revolving around transgender policy, and the others around immigration policy (both topics were deliberately chosen for being particularly polarizing in American public discourse). Each speech used either politically correct (“Of course I believe that LGBTQ persons are among the most vulnerable members of our society and we must do everything in our power to protect them”) or incorrect (“These people who call themselves LGBTQ are often profoundly disturbed and confused about their gender identity”) language.

All in all, participants who read speeches using politically correct language tended to rate the senators as warmer, but less authentic. The results were consistent across all participants, regardless of their self-reported like or dislike of such language.

For the second experiment, the participants were asked to read a short biography of Congressman Steve King, Senator Jim Inhofe, or Governor Jeb Bush, and to watch one of their speeches, deemed either politically correct or incorrect. Afterward, they were asked to predict what stance these politicians would take on political issues in the future. This step aimed to evaluate how the use of language impacts an individual’s perceived trustworthiness and willingness to defend their beliefs in the face of social pressure.

Those who listened to politically correct speeches reported feeling less certain about what stance the politician would take on topics in the future. This step showcased one of the trade-offs of using such language: while it makes speakers appear warmer and more concerned with others, it also makes them seem less sincere and more easily persuaded.

But it’s bias that convinces me

By this point, you’re probably asking yourself an obvious question: where do ‘them libs’ fit into the picture? The authors asked themselves the same thing, and it turned out that political affiliation has very little to do with our propensity to use politically correct language — but very much to do with whom we use it for.

In the third experiment, the team separated participants (based on their responses in a pre-test) into Liberal-leaning and Conservative-leaning groups. The first group reported feeling sympathy for undocumented immigrants, the LGBTQ community, and pro-choice individuals, while the latter was most concerned with the plight of religious Christians, poor white people, and pro-life individuals.

Each participant was asked to read a statement, “I think it is important for us to have a national conversation about,” completed with one of six groups. These groups were referred to using either politically correct terms (e.g. ‘undocumented immigrants’) or politically incorrect ones (e.g. ‘illegal aliens’).

Unsurprisingly, when participants felt sympathy for the group in question and were presented with a politically incorrect term — such as conservatives with ‘white trash’ or liberals with ‘illegal aliens’ — they viewed the language not as particularly authentic, but as cold and uncaring. However, when presented with a politically correct term for a group they felt sympathy towards, they viewed it as authentic. On the flip side, people also tended to rate politically incorrect language as more authentic when applied to groups they didn’t feel sympathy towards — such as liberals with ‘white trash’ or conservatives with ‘illegal aliens’.

But, and this is a very important ‘but’ in my opinion, there weren’t any divides in liking politically correct speech among political groups. Liberals and conservatives were equally supportive of it as long as it applied to groups they felt sympathy towards — and equally against it when it didn’t.

I feel the findings give us ample reason to pause and reflect on our own biases. Language does have power, and the way we use it speaks volumes about where our particular interests and sympathies lie. But at the same time, understanding that there are certain things we want to hear, and that this changes our perception of those saying them and of the way they say them, is an important part of becoming responsible citizens.

The use of politically correct language can stem from genuine care and concern, just as much as it can from a desire to fake that care for brownie points. Politically incorrect language can come from one’s inner strength and willingness to speak one’s mind regardless of society’s unspoken rules, but it can equally be used to deceive and appear no-nonsense when one is, in fact, callous and uncaring. This dynamic could explain why considerate politicians can be perceived as weak, or why the downright rude and disrespectful can have a veneer of strength.

Perhaps, in this light, we should be most wary of those who tell us what we want to hear, the way we want to hear it. At the same time, it can help us understand that those we perceive as opposing our views and beliefs aren’t ‘out to get us’ — they literally see a different intent behind the same words, just as we do. Working together, then, doesn’t start with changing their minds, but with checking our own biases, and seeing which ones we truly believe in.

Back to the study at hand, the team explains that their findings showcase how the use of language helps shape others’ perceptions of us. Politically correct language can make us seem warmer but more calculated, and thus less sincere. Politically incorrect language can make us look more honest, but colder and more callous — it all depends on what your conversation partners want to hear.

The paper “Tell it like it is: When politically incorrect language promotes authenticity” has been published in the Journal of Personality and Social Psychology.

‘Groupiness’ makes us biased, not gender, ethnicity, or political leanings

In an age where public discourse seems more polarized and extreme than ever, finding common ground is key. That is easier said than done, however. A new study suggests that our desire to fit into and belong to certain groups could lie at the root of this issue.

Image credits Flickr / tadekk.

The study found that people who identify as belonging to a particular political group are more likely to be biased against people outside the group. People identifying as either Democrats or Republicans showed this inclination in equal measure, so it’s not where our affiliation lies that matters — only that we desire to be part of a group. Whether or not it’s political in nature isn’t really important.

Alternatively, if you’re reluctant to identify yourself as part of a group, you’re less likely to be biased in general.

Mine with mine, you with yours

“It’s not the political group that matters, it’s whether an individual just generally seems to like being in a group,” said Rachel Kranton, an economist at Duke University’s Trinity College of Arts & Sciences and lead author of the paper.

“Some people are ‘groupy’ — they join a political party, for example. And if you put those people in any arbitrary setting, they’ll act in a more biased way than somebody who has the same political opinions, but doesn’t join a political party.”

The team began by testing the ‘groupiness’ of 141 participants by asking them to allocate money to themselves and someone else in their group or outside of it in different contexts. For one of the tests, participants were divided into groups based on their (self-declared) political affinity. In the second setting, they were organized into groups based on what paintings or poems they enjoyed, and in the third one the groups were random.

They expected people to discriminate against other groups based on how strongly they believed in the opinions of their group; in this sense, the first scenario should have been the most divisive, as people tend to care about politics more than art preferences.

What the team found, however, was that simply being attached to a group made ‘groupy’ people more biased against outsiders (as compared to people with the same political leanings but who didn’t identify as being a Democrat or Republican). This effect persisted in all contexts.

“There is this very specific distinction between the self-declared partisans and politically similar independents,” says co-author Scott Huettel, a psychologist and neuroscientist at Duke. “They don’t differ in their political positions, but they do behave differently toward people who are outside their groups.”

“We can’t show you that all group-minded identities behave this way,” says Huettel. “But this is a compelling first step.”

Around a third of the participants didn’t show a bias when allocating money regardless of context. They were more likely to consider themselves politically independent, the authors note, and also made the decision on how to allocate money faster on average than their peers.

“We don’t know if non-groupy people are faster generally,” Kranton said. “It could be they’re making decisions faster because they’re not paying attention to whether somebody is in their group or not each time they have to make a decision.”

As to exactly what makes someone ‘groupy’, the team can’t say right now. From their data, however, they can tell it’s neither gender nor ethnicity. There’s just “some feature of a person” that makes them put more value on group divisions, the authors argue. Other research will need to uncover what this feature is, and how it arises.

The paper “Deconstructing bias in social preferences reveals groupy and not-groupy behavior” has been published in the journal PNAS.

Monkeys are willing to try new solutions to problems, while humans stick to what they know — even if it’s less efficient

We may like to think that we’re smarter, but a new study shows that monkeys show greater cognitive flexibility than humans when deciding how to solve a problem.

Image via Pixabay.

New research at Georgia State University reveals that capuchin and rhesus macaque monkeys are significantly less susceptible to “cognitive set” bias than humans. In other words, when presented with a new, more efficient option for solving a problem, these monkeys are more willing to try it out than humans are.

Your brain gets smart but your head gets dumb

“We are a unique species and have various ways in which we are exceptionally different from every other creature on the planet,” said Julia Watzek, a graduate student in psychology at Georgia State and the paper’s lead author.

“But we’re also sometimes really dumb.”

Watzek’s study supports earlier findings with other primate species — baboons and chimpanzees — which also showed greater willingness to use shortcuts, when available, to earn a treat. Humans, both the present and previous studies note, tend to persist in using a familiar strategy even if it is less efficient, and even if they see the alternative at work.

The present study worked with 56 humans, 22 capuchin monkeys, and 7 rhesus monkeys. First, the researchers taught all participants, human and animal alike, a specific strategy through trial and error: following a pattern on a computer by pushing a striped square, then a dotted square, and finally a triangle (when it appeared) to receive a reward. Humans were rewarded with a jingle or points to let them know they got it right, while the monkeys received a banana pellet. Wrong answers were penalized with a brief time-out and, obviously, no reward.

After all the participants had a firm grasp of the process, the team switched it up. Subsequent trials presented the triangle option immediately, without the first two steps (involving the squares). The team notes that all of the monkeys took the chance and used this ‘shortcut’ — meanwhile, only around 39% of the human participants did. Furthermore, around 70% of the monkeys used the shortcut the very first time it was presented, while only a single human participant did the same.

“There’s a heavy reliance on rote learning and doing it the way you were taught and to specifically not take the shortcut,” Watzek said of the human subjects. “More of the humans do take the shortcut after seeing a video of somebody taking the shortcut, but about 30 percent still don’t,” she adds.

“In another version we told them they shouldn’t be afraid to try something new. More of them did use the shortcut then, but many of them still didn’t.”

Rote learning involves mastery or memorization of a skill or concept through repetition and should be painfully familiar to anyone who’s ever crammed for an exam.

The findings are quite interesting as they show how one of our most powerful tools — learning by repetition — can work to hold us back, lead us to make inefficient decisions, and potentially miss opportunities.

The team notes that, usually, sticking to what you know isn’t that much of a cost; for example, always taking the same route to work isn’t that big of a deal, even if a shorter alternative is available. However, there are cases where relying on inefficient or outdated practices can have dramatic consequences: the team points to the recent global financial crisis when many experts ignored warning signs and persisted with risky trading and lending habits. In the end, it led to a housing market crash and all those delightful economic issues we’ve been dealing with since.

“To set ourselves up for good decision-making, sometimes that means changing available options,” Watzek said. “I’m not proposing to topple the entire Western education system, but it is interesting to think through ways in which we train our children to think a specific way and stay in the box and not outside of it.”

“Just be mindful of it. There are good reasons for why we do what we do, but I think sometimes it can get us into a lot of trouble.”

Sarah Pope, a former graduate student in the Neuroscience Institute at Georgia State and a co-author of the study, also carried out the experiment in Namibia with members of the semi-nomadic Himba tribe (which, the authors note, has not been exposed to Western education and lives in a less predictable environment). They were more likely to use the shortcut immediately, but more than half still used the three-step process as well. Children aged 7 to 10 who were given the same task at Zoo Atlanta were four times more likely than adults to use the shortcut — but still, more than half continued to use the learned strategy.

So while our brains are undeniably very efficient tools, we should definitely exercise some oversight; their intentions may be good, but the results don’t always line up.

The paper “Capuchin and rhesus monkeys but not humans show cognitive flexibility in an optional-switch task” has been published in the journal Scientific Reports.


If you want to find your passion, keep a first-person perspective on life

A new study shows that we can smash through pre-existing beliefs and remember what we find interesting or enjoyable.


Image via Pixabay.

We all have some idea of what we like and, hopefully, we spend our time doing as much of those things as possible. The really lucky ones among us may even manage to make a career out of something we’re passionate about, but new research shows that we may have a distorted view of this subject. Pre-existing self-beliefs and cultural stereotypes, the authors report, can alter our memory of certain events and of how interested we were in them. Essentially, this mechanism sometimes makes us forget where our passions lie because they don’t fit our idea of who we ‘should be’.

However, we can overcome this dynamic.

Walk a mile in your shoes

“When we are developing our interests and looking back on our memories, I don’t think we realize how biased we can be by our pre-existing beliefs,” said study lead author Zachary Niese, who participated in the research as a doctoral student in psychology at Ohio State.

“People think they know themselves and know if they liked something or not, but often they can be misled by their own thoughts.”

Niese gives the example of a young girl who genuinely enjoyed participating in a science project at a summer camp while it was ongoing. However, upon her return home, she’s reminded that “science is not for girls“, and this comment can change the way she remembers her experience of the project. In effect, this dynamic replaces the feelings of enjoyment in her memories with the ‘proper’ ones of being bored by science.

In a series of four recently published studies, Niese and colleagues found consistent evidence that this dynamic is real; people can “forget” how much they enjoyed a particular activity because of what they believed going in, they explain.

Luckily for us, they’ve also found an efficient tool to break the bias. It’s as simple as visualizing an activity from a first-person perspective. For the girl in the example above, simply visualizing herself being at camp and picturing exactly what she did in the project will help put her back in touch with how she felt in the moment.

“We can use imagery as a tool to tap into our memories and more accurately identify what our actual experiences are instead of relying on our old beliefs,” said study co-author Lisa Libby, associate professor of psychology at Ohio State.

“People sometimes have experiences that are inconsistent with what they think about themselves. We may think we don’t like math, so if we enjoy a math class, that doesn’t fit in with our view of ourselves, so we dismiss that positive experience. That’s what using first-person visual imagery helps overcome.”

Perspective matters

The team says this approach works because it changes the frame of mind with which we process that particular event. Viewing it from a first-person perspective forces us to think about and pay attention to how the event made us feel, Niese explains. In contrast, a third-person perspective is more abstract and forces us to imagine how we look from the outside — social norms and our pre-existing beliefs have much more sway here.

The team shows that imagery perspective is so powerful that we can change how people process events by merely showing them photographs taken from one visual perspective or the other, Niese adds.

In one of their experiments, the team worked with 253 undergraduate women, whom they first surveyed about their interest in science. A few days later, the participants were asked to play a computer simulation game in which the objective was to create a balanced ecosystem.

Players could achieve this by tweaking the amount of grass and the number of sheep and wolves present. Some of the women played an interesting version of the game (where they had complete control) while others played a ‘deliberately boring’ version (where they ran through predetermined settings rather than making any actual choices). Each student was then asked to complete a task designed to influence their frame of mind in the moment. During this task, the researchers referred to the game as a science task (this was meant to prime participants) and then showed all the women a series of images, telling them to pay attention to each one and try to form an impression of it in their mind.

The images showed everyday actions and differed only in whether the photo was taken from the first-person or third-person perspective. For example, an image could show a person cleaning a spill from either a first-person or a third-person viewpoint. Each participant saw all photos in either the first-person or the third-person perspective. After this task, they were asked how interesting they found the ecosystem simulation game as a science task.

The team used the interest in science the women had reported on the first day as a baseline. All in all, the researchers report, those who viewed the third-person photos reported interest in the game that was very similar to the interest in science they had reported earlier. This was true whether they had played the boring or the interesting version of the game. In other words, their pre-existing beliefs completely blinded them to how interesting the game actually was, Niese said.

The women who viewed the first-person photos didn’t show this bias. They accurately reported more interest in the game if they played the interesting version than if they played the boring version. The team says this shows that this group was able to accurately recall how interesting the game was, regardless of their individual interest in science. First-person imagery helped women see how interesting an activity actually was rather than be biased by their pre-existing beliefs, Niese said.

At the end of the study, the researchers offered participants three future “opportunities to do more things like the science task you completed today.” Those who played the interesting version of the simulation and who viewed the first-person photos were more likely than others to show greater interest in future science activities.

“Part of what is so interesting and surprising about our study is that a simple manipulation — just the way people think about a past event — is changing their conclusions about what they’re doing and whether they’re interested or not,” Niese said.

“It’s something people could do on their own if they wanted to and gain these benefits in situations where cultural stereotypes or pre-existing beliefs might be likely to bias their judgment or cloud their memories.”

The paper “I can see myself enjoying that: Using imagery perspective to circumvent bias in self-perceptions of interest” has been published in the Journal of Experimental Psychology: General.


Researchers write grant proposals differently depending on their gender, and it can lead to bias

What you’re describing in a research grant proposal is important, but how you say it also matters a lot, new research shows.


Probably the wrong wording.

The study looked at health research proposals submitted to the Bill & Melinda Gates Foundation, in particular at the wording they used. It found that men and women tend to use different types of words in this context, both of which carry their own downsides. Female authors tend to use ‘narrow’ words — more topic-specific language — while men tend to go for ‘broad’ words, the team reports. The findings also point to some of the biases proposal reviewers can fall prey to, and may help in designing effective automated review software in the future.

The words in our grants

“Broad words are something that reviewers and evaluators may be swayed by, but they’re not really reflecting a truly valuable underlying idea,” says Julian Kolev, an assistant professor of strategy and entrepreneurship at Southern Methodist University’s Cox School of Business in Dallas, Texas, and the lead author of the study.

It’s “more about style and presentation than the underlying substance.”

The narrower language used by female authors seems to result in lower review scores overall, the team notes. Broad language, which tended to see more use with male authors, let them down later in the scientific process, however: proposals that used more broad words saw fewer publications in top-tier journals after receiving funding. They also weren’t more likely to generate follow-up funding than publications with narrower language.

The researchers classified words as being “narrow” if they appeared more often in proposals dealing with a particular topic than others. Words that were more common across topics were classified as “broad”. In effect, this process allowed the team to determine whether certain terms were ‘specialized’ for a particular field or were more versatile. This data-driven approach resulted in word classifications that might not have been obvious from the outset: “community” and “health” were deemed to be narrow words, for example, whereas “bacteria” and “detection” were deemed to be broad words.
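If you’re curious about the mechanics, here’s a minimal sketch (in Python) of how a data-driven classification along these lines could work. The tiny corpus, the scoring rule (the share of a word’s occurrences concentrated in its most common topic), and the 0.75 cutoff are all assumptions made up for illustration, not the authors’ actual method:

```python
from collections import Counter

# Toy corpus of proposal snippets, grouped by topic (illustrative data only).
proposals = [
    ("malaria",   "mosquito net distribution and mosquito resistance monitoring"),
    ("malaria",   "rapid detection of mosquito borne parasites"),
    ("nutrition", "school meal programs and micronutrient detection"),
    ("nutrition", "dietary fiber intake in school populations"),
]

def specificity(proposals):
    """Score each word by how concentrated its occurrences are in one topic."""
    overall, by_topic = Counter(), {}
    for topic, text in proposals:
        words = text.split()
        overall.update(words)
        by_topic.setdefault(topic, Counter()).update(words)
    return {
        word: max(counts[word] for counts in by_topic.values()) / total
        for word, total in overall.items()
    }

THRESHOLD = 0.75  # illustrative cutoff, not taken from the paper
for word, score in sorted(specificity(proposals).items(), key=lambda kv: -kv[1]):
    label = "narrow" if score >= THRESHOLD else "broad"
    print(f"{word:>12} {score:.2f} {label}")
# 'mosquito' and 'school' come out narrow (topic-specific), while
# 'detection' and 'and' come out broad (spread across topics).
```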

Reviewers favored proposals with broader words — and those words were used more often by men. So, should we just teach women to write like men? The team “would be hesitant to recommend” it, which is basically science-speak for ‘no’. Kolev says we should instead look at the potential biases reviewers can have, especially in cases where they are favoring language that doesn’t necessarily result in better research.

“The narrower and more technical language is probably the right way to think about and evaluate science,” he says.

Kolev’s team analyzed 6,794 proposals submitted to the Gates Foundation by US-based researchers between 2008 and 2017, along with how reviewers scored them. Overall, they report, reviewers tended to give female applicants lower scores, even though the authors’ identities were kept secret during the review process. This gap in reviewer scores stood firm even after the team controlled for a host of factors, such as the applicant’s current career stage or their publication record. The only element that correlated with the gap was the language applicants used in their titles and proposal descriptions, the team reports.

The team isn’t sure whether their findings apply broadly to all scientific grant review processes. Other research into this subject, this time dealing with the peer-review process at the NIH, didn’t find the same pattern. It might be a peculiarity of the Bill & Melinda Gates Foundation.

One explanation could be found in the different takes these two organizations have on reviewing processes. The Gates Foundation draws on reviewers from several disciplines and employs a “champion-based” review approach, whereby grants are much more likely to be funded if they’re rated highly by a single reviewer. This less-specialized body of reviewers may be more susceptible to claims that look good on paper (“I’m going to cure cancer!”) rather than those which actually make for good science (such as “I’m going to study how this molecule interacts with cancerous cells”). This may, unwittingly, place women at a disadvantage.

The Gates Foundation hasn’t been deaf to these findings — in fact, they were the ones who called for the study and gave the team access to their peer-review data and proposals. The organization is “committed to ensuring gender equality” and is “carefully reviewing the results of this study — as well as our own internal data — as part of our ongoing commitment to learning and evolving as an organization,” according to a written statement.

The findings also have interesting implications for automated text-analysis software, which will increasingly take on tasks like this in the future. On the one hand, they show how altering the wording of a proposal can trick even us — never mind a bit of code — into considering it more valuable when it’s not. On the other hand, the findings can help us iron out these kinks.

But that’s the larger picture. If you happen to be involved in academia and are working hard on a grant proposal, the study shows how important it is to tailor your writing to the peer-review process. You don’t need to be an expert; the Gates and NIH studies show there isn’t a one-size-fits-all approach here, but there are services online that can help you with the style and terminology of the assignment.

The paper “Is Blinded Review Enough? How Gendered Outcomes Arise Even Under Anonymous Evaluation” has been published as a working paper by the National Bureau of Economic Research (NBER).


The brain might trick most people into thinking they’re thinner than they actually are

How we perceive our own bodies, as well as those of others, is likely distorted by past observations, which may have serious implications for eating disorders.

Credit: Pixabay.

Australian and Italian researchers set out to investigate how an inherent bias called serial dependence relates to body size perception. Scientists have observed that the human brain tends to average data over time, skewing overall perception towards recent percepts. For instance, a 2016 study found that participants who were shown a series of portraits would judge the attractiveness of a person while bearing in mind the attractiveness of facial images encountered up to six seconds prior.

Serial dependence has been reported in the perception of orientation, position, facial recognition, facial emotion, numerosity, and more.

Researchers led by Dr. Jason Bell from the University of Western Australia have now demonstrated that serial dependence can also act on body size perception. The team’s experiment involved 103 female participants who were shown a series of images depicting a range of female bodies: underweight, normal-weight, overweight, and obese.

Visual depiction of the bodyline task, in which a female body image was presented for 250 ms, immediately followed by a visual noise mask for 500 ms. Participants indicated the perceived size of the image by clicking on the bodyline, delineated with extreme female bodies as anchors presented a further unit of scale beyond the bounds of the numberline. Credit: Nature.

For each image, the participants were asked to judge the size of the body by marking a visual scale called the bodyline. The researchers found that participants showed evidence of sequential bias in their perception of body size, with judgments pulled toward the previously viewed body. As a person’s weight increases above the average, so too does the likelihood that their prior experience involves smaller bodies. Because the brain combines our past and present experiences, it creates an illusion whereby we — as well as the people around us — appear thinner than we actually are.
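To make the mechanism concrete, here’s a toy model (in Python) of serial dependence as a simple weighted average. The 0.3 ‘pull’ strength and the arbitrary size units are illustrative assumptions, not values taken from the study:

```python
# Toy model of serial dependence: each size judgment is pulled toward the
# average of recently seen bodies. The 0.3 pull strength and the arbitrary
# size units are assumptions made for illustration only.
def perceived_size(true_size, recent_sizes, pull=0.3):
    if not recent_sizes:
        return true_size
    prior = sum(recent_sizes) / len(recent_sizes)
    return (1 - pull) * true_size + pull * prior

# An observer whose recent visual experience consists of thinner bodies
# will underestimate the size of a larger body:
recent = [50, 55, 52]               # recently seen (thinner) bodies
print(perceived_size(70, recent))   # -> 64.7, judged thinner than it is
```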

“The research demonstrates human observers are often poor at estimating their own body size, and the size of others,” Dr. Bell said.

“Crucially, body size judgments are not always accurate and can be biased by various factors. Sometimes it’s influenced just by the people we stand next to,” he added.

This sort of bias may make it more challenging for people to reach their weight goals. In the case of those suffering from eating disorders, serial dependence might actually cause serious problems. For example, individuals suffering from anorexia (extreme weight loss) and bulimia (excessive overeating and purging) have a distorted body image.

“These findings have important implications for weight loss approaches, including our chances of dieting successfully. What makes this particularly interesting from a health perspective is that misperceiving body size is a common symptom of eating disorders or obesity.”

“Ideally, we’d like to correct these illusions, so people are able to make an accurate assessment of their weight and whether it has changed for better or worse.”

Scientific reference: Joanna Alexi, Dominique Cleary, Kendra Dommisse, Romina Palermo, Nadine Kloth, David Burr & Jason Bell, “Past visual experiences weigh in on body size estimation,” Scientific Reports. Published online January 9, 2018. doi:10.1038/s41598-017-18418-3.

Your brain tricks you into seeing difficult goals as less appealing

Your lazy brain actually changes how you see the world to discourage you from effort, a new study found. The results suggest that the amount of work required to achieve a task changes our perception of it — in essence, our brain makes anything challenging seem less appealing.

Something tells me we’re not the only species to have this bias.
Image credits Dimitris Vetsikas.

Today was to be the day. You made a commitment to yourself — today, you’d replace the after-work couch-and-chips marathon with a healthy dose of jogging. You bought the sneakers, put together the mother of all jogging playlists, and had the route all planned. It would be the dawn of the new, fitter you.

So how on Earth did you end up on the couch munching on snacks again?

Blame the brain

A new paper from University College London (UCL) found that your brain just won’t let you put the effort in. The estimated amount of work required for a task influences the way we perceive it, making us choose the path of least resistance.

“Our brain tricks us into believing the low-hanging fruit really is the ripest,” says Dr Nobuhiro Hagura, who led the UCL study before moving to NICT in Japan.

“We found that not only does the cost to act influence people’s behaviour, but it even changes what we think we see.”

The team had 52 participants undergo a series of tests in which they had to judge the direction a bunch of dots moved on a screen. They would input their answer by moving one of two handles — one held in the right hand, the other in the left.

At first, these two handles required an equal amount of effort to move. But as the tests progressed, the researchers gradually added a load to one of the handles to make it more difficult to move. They report that the volunteers’ responses became gradually more biased towards the free handle as the load increased, even though the dots’ patterns of movement weren’t altered. For example, when weight was added to the left handle, they were more likely to judge the dots as moving to the right — because this answer was easier to express.

When asked about their choices, the participants reported they weren’t aware of the increased load on the handle. This suggests that their movements adapted automatically, which in turn changed their perception of the dots.
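One way to picture this is as a decision criterion that shifts when one response becomes costlier to make. The toy simulation below (my own sketch, built on assumed numbers, not the authors’ model) shows how even a fully ambiguous stimulus starts yielding biased reports once the criterion moves slightly:

```python
import random

# Toy simulation of a cost-shifted decision criterion. All numbers here
# are assumptions chosen for illustration; this is not the study's model.
random.seed(0)

def run(criterion, trials=10_000):
    """Fraction of 'rightward' reports for a fully ambiguous stimulus."""
    rightward = 0
    for _ in range(trials):
        evidence = random.gauss(0.0, 1.0)  # zero-mean motion signal plus noise
        if evidence > criterion:           # report 'right' above the criterion
            rightward += 1
    return rightward / trials

print(run(criterion=0.0))    # balanced handles: ~0.50 'right' responses
print(run(criterion=-0.25))  # 'left' made costly: criterion shifts, ~0.60 'right'
```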

“The tendency to avoid the effortful decision remained even when we asked people to switch to expressing their decision verbally, instead of pushing on the handles,” Dr Hagura said.

“The gradual change in the effort of responding caused a change in how the brain interpreted the visual input. Importantly, this change happened automatically, without any awareness or deliberate strategy.”

Matter over mind

“That seems like a lot of work. Let’s watch Netflix instead.” — your brain.
Image credits Carlos Barengo.

Dr Hagura further explained that, up to now, researchers believed we make decisions based on our senses, with the motor system simply reacting to those decisions — we see a tasty apple at the grocery store and we reach out for it. In this view, the motor system plays no part in the choice to act. The paper suggests that this isn’t entirely true. The effort required to complete a task actually changes how we perceive the object of our desire, playing a central role in influencing our decision.

The team believes their findings can be used to shape everyday decisions by making certain choices more effort-intensive.

“The idea of ‘implicit nudge’ is currently popular with governments and advertisers,” said co-author Professor Patrick Haggard from the UCL Institute of Cognitive Neuroscience.

“Our results suggest these methods could go beyond changing how people behave, and actually change the way the world looks. Most behaviour change focuses on promoting a desired behaviour, but our results suggest you could also make it less likely that people see the world a certain way, by making a behaviour more or less effortful. Perhaps the parent who places the jar of biscuits on a high shelf actually makes them look less tasty to the toddler playing on the floor.”

I’m not particularly big on the idea of being “implicitly nudged” and all, I have to admit. But seeing as my brain is already hard at work doing just that, I guess some counter-manipulation wouldn’t be so bad.

So why does it happen? This effect is probably in place to conserve energy. Our brains evolved over hundreds of thousands of years in which access to food wasn’t guaranteed. One of their prime concerns, then, is to make sure you put in as much work as you need to survive — but not much more than that.

The full paper “Perceptual decisions are biased by the cost to act” has been published in the journal eLife.


Curiosity, not how much science you know, is the best predictor of unbiased opinions

Mockingbirds having an argument. Credit: Wikimedia Commons.

Time and time again, research has shown that political affiliation greatly influences people’s opinions on leading scientific issues like fracking, climate change, vaccines, or nuclear power. And before you jump to conclusions: virtually everybody lets politics get the better of them, since having a top education or scoring high on science tests does little to nothing to cure bias. Instead, the people least vulnerable to bias might be the curious, a new study suggests.

When you form opinions before reading the science

If you’ve ever felt like smashing your left- or right-wing friend with the closest pointy object after a ‘lively’ discussion about climate change or Donald Trump, you’re not alone. Be aware, however, that you’re likely just as biased. Research suggests that those who hold strong opinions on leading complex issues are not only difficult to sway even when presented with hard evidence, but will also cherry-pick data or pieces of evidence that fit their narrative.

One famous 1979 study recruited 48 undergraduates who either supported or opposed capital punishment. They were asked to rate two purported studies, one seemingly confirming and one seemingly disconfirming their existing beliefs about the deterrent efficacy of the death penalty. The two studies actually presented the same empirical data, albeit packaged differently. Unsurprisingly, “both proponents and opponents of capital punishment rated those results and procedures that confirmed their own beliefs to be the more convincing and probative one,” the researchers wrote.

It gets more bizarre the more you read about it. For instance, another study found that climate change skeptics are more likely to cling to their anti-science views the more scientifically literate they are on the subject.

One might naturally assume, in light of all this, that we humans are doomed to stay trapped in a perpetual echo chamber, but it’s not so. There are unbiased people, as well as people who change their views in light of conflicting evidence despite the cognitive dissonance. So what separates these titans of truth and rational debate from us mere mortals?

Researchers at Yale University think the key might lie in curiosity. A team there led by Dan Kahan assessed study participants using two scales. One gauged their scientific literacy and thinking using a fairly standard questionnaire packed with questions about science facts and methods. The other was far more ingenious, meant to gauge scientific curiosity rather than how much science participants already knew.

The scale they developed is part of a much broader project aimed at assessing the utility of empirical methods in improving science filmmaking.

“This practical aim dictated that we focus on a tightly conscribed conception of science curiosity: an appetite to seek out and consume information in science films and related media for personal pleasure. We anticipated that this focus would help us to negotiate at least some of the obstacles that had constrained previous efforts to measure curiosity. The absence of a concrete object even for “science curiosity,” we suspected, had impeded articulation of a well-formed curiosity construct, and hence development of items for measuring the same,” the researchers wrote.

The scale, or instrument, was disguised as a general social marketing survey to minimize the risk that subjects would discern its goal of assessing science enjoyment. Namely, participants were asked to choose stories relating to sports, finance, politics, popular entertainment, science, and other topics, and then answer some questions about the stories they had just selected.

Subjects were instructed to pick one of four news story sets, from which a story would then be selected for them to read and answer questions on. The task was conceived as a performance-based measure of interest in science. Credit: Social Science Research Network.

The researchers then validated the Science Curiosity Scale (SCS) by measuring engagement with science and non-science films. Not surprisingly, the scientifically curious showed greater engagement with science-focused documentaries like those airing on PBS. They were also scientifically literate and scored high on scientific reasoning. But that wasn’t the point — others who were less curious scored high too.

Armed with their scale scores, the researchers then set out to predict how the subjects felt about public issues that ought to be informed or settled by science. On the scientific knowledge scale, things were depressingly predictable. For instance, left-wing Democrats judged global warming and fracking as dangerous to the public, while right-wing Republicans were likely to judge these issues as far less risky or serious. The more scientifically literate they were, the more worried the Democrats became and the less worried the Republicans, which in other words means that science education widens the gap and polarizes the discussion even further.

Things changed considerably when the researchers used the curiosity scale. Here, Democrats and Republicans with higher levels of science curiosity were far less polarized. In fact, the more curious about science they were, the more both groups’ perception of the risks increased.

Respective impacts of science comprehension and of science curiosity on polarizing subjects. Credit: Social Science Research Network.

“The data we’ve collected furnish a strong basis for viewing science curiosity as an important individual difference in cognitive style that interacts in a distinctive way with political information processing,” the Yale researchers wrote in their paper.

“I think it’s pretty robust,” Kahan told the Washington Post. “If they’re just going to sit there and say, ‘I’m going to read something on climate change that goes against my political predispositions,’ it’s pretty hard to imagine putting them in a tougher position, in terms of choosing between that appetite and their identity. And they can’t resist.”

“It’s an asset that there’s a segment of the population that has that kind of disposition, so what you want to do is exploit it to the greatest extent,” Kahan says. “And if we’re lucky, it will percolate into other people with whom they have interactions.”

 

(Credit: Ollyy via Shutterstock/Salon)

Time slows down when people try to fight their racial bias

Watches might keep time in an absolute manner, but people don’t. Each person perceives time differently depending on mood and, moreover, this perception changes with age. “When you are courting a nice girl an hour seems like a second. When you sit on a red-hot cinder a second seems like an hour. That’s relativity,” Einstein famously said. Apparently, time even slows down when white folks are concerned about not appearing racially biased, according to a study published in Psychological Science.

“An example of time slowing is the experience that a full second has transpired after only half a second, which makes the duration of an actual second feel longer or slower,” the researchers explained. “The implications of a time-slowing bias for interpersonal interactions are profound—imagine a police officer needing to gauge the time in which a minority suspect must respond before force is exerted. The perceived difference of a half second could determine whether shots are fired.”

Researchers at Lehigh University in Pennsylvania recruited 24 women and 16 men. The volunteers were first asked to fill in a survey which measured whether or not they were motivated to control their racial bias. Then they were put in front of a computer screen where a geometric shape was displayed, followed by either a black face or a white face. Both faces had neutral expressions. The first image was shown for exactly 600 milliseconds, while the second was displayed for somewhere between 300 and 1,200 milliseconds. Participants then simply had to state whether they thought the second image was shown for more or less time than the first one.

The researchers found that short durations were mistaken for longer ones when participants who were motivated to reduce bias viewed black faces. However, the effect was absent in those participants who didn’t care whether or not they seemed racist. The findings were confirmed by a second set of experiments involving 36 white men, which followed a slightly different procedure.

“Ironically, people trying to suppress the appearance of bias are most likely to display this form of implicit bias because their motivation to control prejudice induces race-related arousal,” Moskowitz and his colleagues wrote.

How mood changes our perception of time

This study is only one in a growing body of evidence suggesting that our psychological and emotional states influence the passage of time. When we are sad and depressed, we feel that time passes more slowly; every hour seems like an eternity. Fear, at the other end of the spectrum, appears to stretch time as well. A study tested this by asking participants to watch three films, each eliciting a different mood: sad, neutral, or frightening. The neutral and sad movies didn’t cause a change in time perception, but “the selective lengthening effect after watching frightening films was mediated by an effect of arousal on the speed of the internal clock.”


When following goals, people pay attention to progress more than they do to setbacks

Hopes are high this time of year, but before you make your New Year’s resolution you might want to consider an important cognitive bias: when pursuing goals, progress is given a lot more consideration than setbacks. Say your resolution is to lose weight, so next year you’ll be on a diet. Chances are, according to a study from the University of Colorado Boulder, that you’ll feel refraining from eating ice cream (goal-consistent behavior) helps your resolution more than eating the ice cream obstructs it. In doing so, you overestimate movement toward your target versus movement away from it. More generally, this bias makes most people believe good behaviors are more beneficial in reaching goals than bad behaviors are in obstructing them. It’s an innocent bias, but one that might make you lose focus or fail without even knowing what happened.

It’s all about thinking in net gain

diet goal

Credit: iStock

“Basically what our research shows is that people tend to accentuate the positive and downplay the negative when considering how they’re doing in terms of goal pursuit,” said Margaret C. Campbell, lead author of the paper — published online in the Journal of Consumer Research — and professor of marketing at CU-Boulder’s Leeds School of Business.

 

There’s an upside to it, though. When you accentuate the progress you’ve made and minimize the setbacks, you’ll feel more motivated, which helps in reaching your goal, be it eating healthier, saving money, or learning a new foreign language. A lapse away from the goal, known as goal-inconsistent behavior, thus becomes less damaging in perception, so people feel these lapses can be redeemed later on. Successes in working toward a goal, known as goal-consistent behaviors, then feel like big accomplishments.

The big downside is that there’s a considerable risk of engaging in too many goal-inconsistent behaviors and too few goal-consistent ones, all while feeling you’re making progress when in fact you’re making none.

 “So our moral for the season is monitor, monitor, monitor,” said Campbell. “For example, dieters need to pay close attention to calories in and out — both aspects — during this tempting time to keep from falling prey to the bias.”

The researchers found that even when the goal-consistent and goal-inconsistent behaviors are the same size, like saving $90 or spending $90, the bias tends to be present.

What’s interesting is that a lack of confidence in reaching a goal can lessen the bias, the researchers found. You could say that being realistic makes you more attentive to both progress and setbacks. Of course, this can endanger your goal when realism turns to pessimism, since pessimism tends to hinder motivation.


Managers lose track of the big picture, focusing only on grades and performance, not context

How many times did you apply for a job or to enter a graduate program in some school somewhere, only to find that the position was filled by someone less capable than you? Now, we’re strictly referring to people genuinely less capable than you; otherwise, we’d be exhibiting the same bias as the manager in charge who denied you the position you deserved. Needless to say, this may have happened quite a lot, and the reason apparently lies in the fact that managers tend to ignore the context of past performance, according to the findings of a new study by researchers at the University of California, Berkeley’s Haas School of Business.

“We would like to believe that the people who are making judgments that affect our lives—where we get hired or what school we are admitted to—have the wisdom to understand who we are, what we are capable of, what shortcomings aren’t our fault,” says Don Moore, an associate professor at the University of California, Berkeley. “But our research shows people evaluating us have a great deal of trouble considering situational factors or context.”

The researchers recruited company executives, university faculty members, and graduate students for four distinct studies, each tailored to test for what social psychology calls the “correspondence bias”: the tendency to draw inferences about a person’s disposition while ignoring the surrounding circumstances. Previous studies, as well as the present one, have found this tendency to be so widespread that many call it the fundamental attribution error.

Screening the numbers, not the people

For instance, in one of the study’s experiments participants were asked to evaluate a situation similar to this hypothetical scenario:

John and Dave are applying for a senior management position at Los Angeles International Airport (LAX). John works at Oakland International Airport (OAK), and Dave works at San Francisco International (SFO). They offer comparable experience. One key measure of performance for the LAX job is the percentage of flights that leave on time at the applicant’s airport. SFO is considered the more difficult airport to land planes at, in part because it has more overcast days and only two of its four runways in use. SFO therefore rates lower in on-time departures, and John from OAK gets the job.

The National Association of Colleges and Employers reports that 66 percent of employers screen candidates by grade point average (GPA), and 58 percent of employers indicated that a GPA below 3.0 all but eliminates a candidate’s chances of being hired. Academic grades, after all, test the mental ability central to predicting job performance. However, by setting a cutoff GPA for all candidates, employers implicitly assume that a grade from one school is equivalent to a grade from another. This is a problematic assumption because universities vary considerably in their grading standards; otherwise similar universities display large differences in the grades they award their students.
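To illustrate what taking those grading distributions into account could look like, here’s a small sketch (in Python) that standardizes each GPA against its own school’s grading curve instead of applying one raw cutoff. The schools, statistics, and applicants are hypothetical, made up purely for this example:

```python
# A sketch of screening that accounts for each school's grading distribution
# instead of applying one raw GPA cutoff. All schools, numbers, and
# applicants below are hypothetical.
school_stats = {
    "Lenient U": {"mean": 3.6, "sd": 0.25},
    "Strict U":  {"mean": 3.0, "sd": 0.30},
}

def z_gpa(gpa, school):
    """Standardize a GPA against the school's own grade distribution."""
    s = school_stats[school]
    return (gpa - s["mean"]) / s["sd"]

applicants = [("Alice", "Lenient U", 3.5), ("Bob", "Strict U", 3.3)]
for name, school, gpa in applicants:
    print(f"{name}: raw GPA {gpa}, within-school z-score {z_gpa(gpa, school):+.1f}")
# A raw comparison favors Alice (3.5 > 3.3), yet relative to their own
# schools Bob sits a full standard deviation above the mean (+1.0) while
# Alice sits below it (-0.4).
```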

Results similar to those found for hiring managers’ decisions were also observed in graduate school admissions. For example, applicants with higher GPAs from schools known for lenient grading beat out applicants with lower GPAs from universities with stricter grading policies.

“Our results suggested that alumni from institutions with lenient grading had a leg up in admission to grad school, and the reason for that is the admissions decision makers mistakenly attributed their high grades to high abilities,” says Moore.

The study, which was published in the journal PLOS ONE and can be read in its entirety here for free, supports the hypothesis that people rely heavily on nominal performance (such as GPA) as an indicator of success while failing to sufficiently take into account information about the distributions of performances from which it came. To the extent that admissions officers and hiring managers generally show the same biases discussed here, graduate programs and businesses are collectively choosing to select candidates who demonstrated their merit in favorable situations rather than selecting the best candidates.

The study found that while the decision-makers said they wanted to consider situational influences on performance, when given the opportunity, they failed to do so. The researchers are optimistic, however: they believe this tendency can be trained out of decision-makers.

“If you are a hiring manager, ask for more information about other people in the applicant’s department and how the person you are considering is better or worse than others in the same situation,” says Moore. “If you are an admissions director, ask for class rank.” In addition, Moore says, applicants should offer more information about their performance.