Tag Archives: fake news

All is loud on the eastern front: Ukraine is getting bombarded with fake videos

Everyone’s eyes are set on battered Ukraine right now after Russian President Vladimir Putin gave the order to invade the Eastern European country in the early hours of Thursday. According to American sources, Russian forces began their attack with an onslaught of over 100 missiles from both land and sea, along with 75 fixed-wing bombers, targeting arms and ammunition depots, air defense systems, military headquarters, airports, and other strategically-significant targets.

But as if the bombs raining down from the sky weren’t enough, terrified Ukrainian civilians have to deal with another type of bombing: fake news through the airwaves.

These include false videos shared on social media and messaging apps like Telegram, a popular instant messaging service in Eastern Europe, meant to sow confusion about the reality on the ground.

Both sides are employing the usual wartime propaganda, but the Kremlin seems to be more active and effective at spreading false information.

Much of this fake footage is posted by anonymous social media users, who could be either agents directly connected to the Kremlin or internet trolls who get off on sowing chaos and racking up thousands of likes. The information war is now in full swing, with propaganda operations mustered well before the war started, as Russia filmed and shared staged provocations meant to paint Ukraine as the aggressor to the Russian public.

Fake: Russian paratroopers

One of the most widely shared videos featured what looked like hundreds of Russian soldiers parachuting over Ukraine. The video, however, is from 2016 and shows a Russian military exercise. That didn’t stop it from racking up over 22 million views during its first day making the rounds on Twitter and TikTok.

Like other fake videos, it has been picked up by legitimate international news outlets, which should have known better than to publish unverified footage.

https://twitter.com/Shubham_RSS_/status/1496812048868515847

Video game posing as wartime footage

One widely shared video since the conflict began supposedly shows a live Russian attack on Ukraine, with a jet dodging heavy antiaircraft fire. But the footage is actually from Arma III, a realistic video game. This particular fake video was shared over 25,000 times before it was taken down from Facebook and Twitter, although it keeps popping up from various sources.

Another viral video shows Ukraine firing anti-aircraft missiles into the night. But in reality, it is animated footage from the video game War Thunder.

It’s not the first time that a video game has posed as genuine wartime footage. In 2018, Russian Channel One TV aired a program praising the country’s military action in Syria. The program used gun-sight footage of a truck being attacked by Russian forces, but the images were, yet again, from Arma III.

Russian plane downed over Ukraine was actually shot down over Libya in 2011

A video shared on Facebook on February 24 shows a plane falling from the sky and bursting into flame, with the headline “REPORTED AS: Ukrainian army shot down a Russian jet.” The video later made the rounds on YouTube and Twitter, where it generated hundreds of thousands of views.

Although the footage is real, it was captured in Benghazi, Libya, more than a decade ago. “Libyan rebels shot down a warplane that was bombing their eastern stronghold Benghazi on Saturday, as the opposition accused Moammar Gadhafi’s government of defying calls for an immediate cease-fire,” the Associated Press reported at the time.

This destroyed jet has nothing to do with the current conflict in Ukraine.

Another fake image that went viral shows a Russian jet at the exact moment it is destroyed. It’s a spectacular image, which explains why it was shared thousands of times. But by now you’ve spotted the pattern: this is actually an old picture from a 2017 airshow accident.

Fact or fiction: more challenging than ever

Although the internet can be a great tool for fact-checking, the reality is that most people exposed to emotionally appealing content fail to do their own research. Seeing hundreds of soldiers parachuting from the sky is a shocking image, and it’s understandable that people feel the urge to share such footage with the world. But that’s exactly the behavior that nefarious agents are banking on, looking to dupe unwitting social media users into sharing falsehoods.

This is why it’s important to think critically and assess whether the information in front of you is accurate and comes from a credible source. It always helps to take just a few moments and Google something before sharing it.

Unfortunately, the current online environment leaves a lot of room for sketchy sources to fill the void. For Ukrainians, this problem is exacerbated by internet outages across several parts of the country. Some of these outages are caused by shelling, airstrikes, and other damage to critical internet infrastructure, while others are part of a concerted effort by Russian forces to disrupt communications and sow panic.

In order to protect yourself from misinformation, it’s good to remember one thing about social media posts: they’re designed to get a reaction, especially the viral kind. Although footage showing violence and bloodshed is nerve-wracking and tempting to post online for others to share the outrage, it’s better to calm down for a second and wonder: am I just being duped here?

Researchers develop a new way to tackle fake news — and it’s aimed at the stock market

Fake news is written to confuse and manipulate public opinion. As such, its intent is always to deceive. But the outcome of twisting facts is, arguably, most evident in financial markets, where there’s always money to be made by shifting people’s trust. Share prices, after all, are as much a product of demand as they are of financial fundamentals.

Researchers at the University of Göttingen, University of Frankfurt, and the Jožef Stefan Institute in Ljubljana, Slovenia, have developed a new framework that, they hope, will help us identify such content. Since malevolent actors can tailor content to appear genuine, through avoiding incriminating terms, for example, the team focused on other aspects of the text.

No swindlin’

“Here we look at other aspects of the text that makes up the message, such as the comprehensibility of the language and the mood that the text conveys,” says Professor Jan Muntermann from the University of Göttingen, co-author of the paper describing the approach.

The authors used machine learning for the task. The algorithm was tasked with creating analytical models that can flag suspicious messages based on characteristics other than their wording. In very broad strokes, it operates similarly to a spam filter.

However, there are important differences. Today’s spam filters, for example, can be circumvented simply by removing incriminating words, so there is a constant back-and-forth between fraudsters and the systems meant to keep them at bay. To counteract this, the team tested an approach that uses several overlapping detection models to increase both the system’s accuracy (its ability to tell fake news apart from valid information) and its robustness (its ability to see through attempts to disguise fake news). Even if flagged words are removed from a piece of text, they explain, the algorithm can still identify it as fake news based on other linguistic features.
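The paper doesn’t publish its code, so the snippet below is only a rough sketch of the general idea under stated assumptions: several overlapping classifiers judging a text by style (word length, sentence length, exclamation marks, a toy “hype” lexicon standing in for mood) rather than by specific keywords. The feature list, lexicon, and example texts are all invented for illustration, not taken from the study.

```python
# A toy sketch of the general approach (NOT the authors' actual model): several
# overlapping classifiers that judge a press release by style rather than by
# specific words, so deleting a few trigger words doesn't fool all of them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def style_features(text):
    """Word-independent cues: word length, sentence length, exclamations, 'hype'."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_word_len = np.mean([len(w) for w in words]) if words else 0.0
    avg_sent_len = len(words) / len(sentences) if sentences else 0.0
    hype_lexicon = ("soar", "surge", "guaranteed", "breakthrough")  # placeholder mood proxy
    hype = sum(text.lower().count(w) for w in hype_lexicon)
    return [avg_word_len, avg_sent_len, text.count("!"), hype, len(words)]

# Tiny invented corpus; a real system would train on a labeled set of financial news.
texts = [
    "Shares are guaranteed to soar!!! An unbelievable breakthrough!",
    "The company reported quarterly revenue of $2.1 billion, up 3% year over year.",
]
labels = [1, 0]  # 1 = suspicious, 0 = legitimate

X = np.array([style_features(t) for t in texts])
ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression()),
    ("rf", RandomForestClassifier(n_estimators=50)),
    ("svm", SVC()),
])  # hard majority vote: a message must slip past most models to go undetected
ensemble.fit(X, labels)
print(ensemble.predict(X))
```

The overlapping-models idea is what gives the hoped-for robustness: stripping a few incriminating words might fool one stylistic model, but it is much harder to fool all of them at once.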

“This puts scammers into a dilemma. They can only avoid detection if they change the mood of the text so that it is negative, for instance,” explains Dr Michael Siering. “But then they would miss their target of inducing investors to buy certain stocks.”

The main intended purpose of this system is to identify attempts to manipulate the corporate news ecosystem in order to influence stock prices — which can lead to major monetary losses for a lot of people. The authors envision a system where their approach can be used as a type of market watchdog, which would flag such attempts at market manipulation and lead to a temporary suspension in the trading of affected stocks. Alternatively, it could potentially become a source of evidence for criminal prosecutions in the future.

Either way, the implementation of such a system would go a long way towards improving public and corporate confidence in the stock market. Normally this wouldn’t really be relevant news for us here, but seeing as retail (i.e. us common Joes and Janes) now comprises an estimated 10% of stock trading, by volume, in the US, I’m certain at least some of you partake as well.

It would be extremely interesting to see how such a system would impact the evolution of the “meme stocks” we’ve seen recently. Although the largest of these undeniably enjoyed major grassroots support, there were definitely a lot of pieces trying to sway public opinion both for and against them. Would a system such as the one detailed here help boost retail confidence in meme stocks, in particular? Or would it stifle their growth by dampening the hype around them? Given that the framework has already been trialed and the results published, I think it’s a safe bet to say that we’re going to find out in the future.

The crowd can do as good a job spotting fake news as professional fact-checkers — if you group up enough people

New research suggests that relatively small, politically balanced groups of laymen could do a reliable job of fact-checking news for a fraction of today’s cost.

Image credits Gerd Altmann.

A study from MIT researchers reports that crowdsourced fact-checking may not actually be a bad idea. Groups of normal, everyday readers can be virtually as effective as professional fact-checkers, it explains, at assessing the veracity of news from the headline and lead sentences of an article. This approach, the team explains, could help address our current misinformation problem by increasing the number of fact-checkers available to curate content at lower prices than currently possible.

Power to the people

“One problem with fact-checking is that there is just way too much content for professional fact-checkers to be able to cover, especially within a reasonable time frame,” says Jennifer Allen, a Ph.D. student at the MIT Sloan School of Management and co-author of a newly published paper detailing the study.

Let’s face it — we’re all on social media, and we’ve all seen some blatant disinformation out there — disinformation that people were throwing likes and retweets at, just to add insult to injury. Calls for platforms to better moderate content have been raised again and again. Steering clear of the question of where exactly moderation ends and manipulation or censoring begins, one practical issue blocking such efforts is sheer work volume. There is a lot of content out in the online world, and more is published every day. By contrast, professional fact-checkers are few and far between, and the job doesn’t come with particularly high praise or high pay, so not many people are lining up to become one.

With that in mind, the authors wanted to determine whether unprofessional fact-checkers could help stymie the flow of bad news. It turns out they can — if you lump enough of them together. According to the findings, crowdsourced judgments from relatively small, politically balanced groups of normal readers can be virtually as accurate as those from professional fact-checkers.

The study examined over 200 news pieces that Facebook’s algorithms flagged as requiring further scrutiny. They were flagged either due to their content, due to the speed and scale they were being shared at, or for covering topics such as health. The participants, 1,128 U.S. residents, were recruited through Amazon’s Mechanical Turk platform.

“We found it to be encouraging,” says Allen. “The average rating of a crowd of 10 to 15 people correlated as well with the fact-checkers’ judgments as the fact-checkers correlated with each other. This helps with the scalability problem because these raters were regular people without fact-checking training, and they just read the headlines and lead sentences without spending the time to do any research.”

Participants were shown the headline and lead sentence of 20 news stories and were asked to rate them along seven dimensions: how “accurate,” “true,” “reliable,” “trustworthy,” “objective,” and “unbiased” they were, and how much they “describ[ed] an event that actually happened”. These ratings were pooled to generate an overall score for each story.

These scores were then compared to the verdicts of three professional fact-checkers, who evaluated all 207 stories involved in the study after researching each. Although the ratings these three produced were highly correlated with each other, they didn’t see eye to eye on everything — which, according to the team, is par for the course when studying fact-checking. More to the point, all three fact-checkers agreed on the verdict for 49% of the stories; two of the three agreed (with the third disagreeing) for 42% of the stories; and all three disagreed for 9% of the stories.

When the regular reader participants were sorted into groups with equal numbers of Democrats and Republicans, the average ratings were highly correlated with those of the professional fact-checkers. When these balanced groups were expanded to include between 12 and 20 participants, their ratings were as strongly correlated with those of the fact-checkers as the fact-checkers’ were with each other. In essence, these groups matched the performance of the fact-checkers, the authors explain. Participants were also asked to complete a political knowledge test and a test of their tendency to think analytically.
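As a toy illustration of that pooling-and-correlation logic — with random placeholder numbers, not the study’s data — the calculation looks roughly like this:

```python
# Illustrative sketch only: average each crowd member's seven ratings, average
# across the crowd, then compare with the professional fact-checkers' scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_stories, crowd_size, n_dims = 207, 15, 7

crowd = rng.integers(1, 8, size=(n_stories, crowd_size, n_dims))  # 1-7 Likert ratings
experts = rng.uniform(1, 7, size=(n_stories, 3))                  # 3 fact-checkers

crowd_score = crowd.mean(axis=2).mean(axis=1)   # one pooled score per story
expert_score = experts.mean(axis=1)

crowd_vs_experts, _ = pearsonr(crowd_score, expert_score)
expert_vs_expert, _ = pearsonr(experts[:, 0], experts[:, 1])
print(f"crowd-expert r = {crowd_vs_experts:.2f}, expert-expert r = {expert_vs_expert:.2f}")
# With real ratings, the study found the first correlation roughly matches the second.
```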

Overall, the ratings of people who were better informed about civic issues and engaged in more analytical thinking were more closely aligned with the fact-checkers.

Judging from these findings, the authors explain, crowdsourcing could allow fact-checking to be deployed on a wide scale for cheap. They estimate that the cost of having news verified in this way comes out to roughly $0.90 per story. This doesn’t mean that the system is ready to implement, or that it could fix the issue completely by itself. Mechanisms have to be set in place to ensure that such a system can’t be tampered with by partisans, for example.

“We haven’t yet tested this in an environment where anyone can opt in,” Allen notes. “Platforms shouldn’t necessarily expect that other crowdsourcing strategies would produce equally positive results.”

“Most people don’t care about politics and care enough to try to influence things,” says David Rand, a professor at MIT Sloan and senior co-author of the study. “But the concern is that if you let people rate any content they want, then the only people doing it will be the ones who want to game the system. Still, to me, a bigger concern than being swamped by zealots is the problem that no one would do it. It is a classic public goods problem: Society at large benefits from people identifying misinformation, but why should users bother to invest the time and effort to give ratings?”

The paper “Scaling up fact-checking using the wisdom of crowds” has been published in the journal Science Advances.

‘Pre-bunking’ is an effective tool against fake news, browser game shows

Bad News comes bearing good news. The game about propaganda and disinformation, that is.

Bad News Screenshot.

Image credits DROG.

An online game that puts players in the role of propaganda producers can help them spot disinformation in real life, a new study reports. The game, christened Bad News, was effective in increasing players’ “psychological resistance” to fake news.

‘Alternative truth’

“Research suggests that fake news spreads faster and deeper than the truth, so combating disinformation after-the-fact can be like fighting a losing battle,” said Dr. Sander van der Linden, Director of the Cambridge Social Decision-Making Lab.

Researchers at the University of Cambridge helped develop and launch the browser-based video game back in February of 2018, in collaboration with Dutch media collective DROG and design agency Gusmanson. Since then, thousands of people have played the game — which takes about fifteen minutes from start to finish — with many, yours truly included, submitting their data to be used for this study.

In Bad News, your job is to sow anger and fear by creatively tweaking news and manipulating social media. Throughout the game, you’ll find yourself deploying Twitter bots, photoshopping ‘evidence’, and churning out conspiracy theories to attract followers. It’s quite a good game, and a pretty eye-opening one at that, because you have to walk a very thin line. On the one hand, you want as many people as possible to start following and believing you; on the other hand, you need to rein yourself in somewhat to protect your “credibility score”.

What the team wanted to determine is whether the game can help people spot fake news and disinformation in real life. The results suggest it can.

“We wanted to see if we could preemptively debunk, or ‘pre-bunk’, fake news by exposing people to a weak dose of the methods used to create and spread disinformation, so they have a better understanding of how they might be deceived,” Dr. van der Linden explains. “This is a version of what psychologists call ‘inoculation theory’, with our game working like a psychological vaccination.”

Players were asked to rate the reliability of a series of headlines and tweets before and after gameplay, with each participant randomly allocated a mixture of real news (the control items) and fake news (the “treatment” items). The team reports that the fake news items were perceived as 21% less reliable after playing the game, while Bad News had no impact on how participants ranked real news in terms of reliability.
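To make the arithmetic concrete, here is a tiny sketch of that pre/post comparison with invented ratings (not the study’s data):

```python
# Invented numbers for illustration only: compute the drop in mean perceived
# reliability of fake headlines from pre-game to post-game ratings.
import numpy as np

pre  = np.array([4.8, 5.1, 4.6, 5.0])  # mean reliability ratings before playing
post = np.array([3.7, 4.1, 3.6, 4.0])  # the same fake headlines rated after playing

reduction = (pre.mean() - post.mean()) / pre.mean()
print(f"perceived reliability of fake news dropped by {reduction:.0%}")  # ~21% here
```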

There are six “badges” people can earn in the game for the six most common strategies used by fake news producers today: impersonation; conspiracy; polarisation; discrediting sources; trolling; emotionally provocative content. In-game questions measuring the game’s impact were issued for four of these badges (to limit bandwidth usage). From pre- to post-gameplay, the results show that Bad News:

  • Reduced perceived reliability of the fake headlines and tweets by 24% for the disinformation tactic of “impersonation” — i.e. the mimicking of trusted personalities on social media.
  • Reduced perceived reliability of “polarisation” — i.e. the use of highly-polarizing, emotionally-provocative headlines — by about 10%.
  • Reduced perceived reliability of “discrediting” — i.e. attacking a legitimate source with accusations of bias — by 19%.
  • Reduced perceived reliability of “conspiracy” — i.e. the spreading of false narratives blaming secretive groups for world events — by 20%.

Those who were most susceptible to fake news headlines at the outset of the study benefited most from this “inoculation”, the team adds.

“We find that just fifteen minutes of gameplay has a moderate effect, but a practically meaningful one when scaled across thousands of people worldwide, if we think in terms of building societal resistance to fake news,” adds van der Linden.

“We are shifting the target from ideas to tactics,” says Jon Roozenbeek, study co-author also from Cambridge University. “By doing this, we are hoping to create what you might call a general ‘vaccine’ against fake news, rather than trying to counter each specific conspiracy or falsehood.”

The team worked with the UK Foreign Office to translate the game into nine different languages including German, Serbian, Polish, and Greek. They’ve also developed a “junior version” of the game aimed at children aged 8-10 which is available in ten different languages so far. The goal is to “develop a simple and engaging way to establish media literacy at a relatively early age”, Roozenbeek explains, and then see how long the effects last.

Still, so far, the data isn’t conclusive. The major limitation of this dataset is that it used a self-selecting sample, namely those who came across the game online and opted to play. As such, the results are skewed toward younger, male, liberal, and more educated demographics. Even with this limitation, the team says, controlling for these characteristics showed that the game was almost equally effective across age, education, gender, and political persuasion. Part of that comes down to the fact that Bad News has an ideological balance built in, the team explains: players can choose to create fake news from both the left and the right of the political spectrum.

“Our platform offers early evidence of a way to start building blanket protection against deception, by training people to be more attuned to the techniques that underpin most fake news,” Roozenbeek concludes.

You can try the game out here.

The paper “Fake news game confers psychological resistance against online misinformation” has been published in the journal Nature.

AI is so good at inventing stories that its creators had to shut it down to avoid ‘fake news’

Credit: Pixabay.

Researchers have designed an artificial intelligence algorithm that can effortlessly write plausible stories. It’s so good that OpenAI — the institute which built it — has decided not to release the full model to the open-source community, over fears that the technology could be used for nefarious purposes like spreading fake news.

Founded in 2015, OpenAI is a non-profit research organization that was created to develop an artificial general intelligence that is available to everyone. Several Silicon Valley heavyweights are behind the project, including LinkedIn founder Reid Hoffman and Tesla CEO Elon Musk.

For some time, OpenAI has been working on a natural language processing algorithm that can produce natural-sounding text. The latest version of the algorithm, called GPT-2, was trained on text from more than 8 million web pages shared on Reddit with a “karma” score of 3 or higher. Starting from nothing but a headline, the algorithm is capable of creating a whole new story, making up attributions and quotes that are disturbingly compelling. It can be used for anything from writing news stories to helping with essays and other pieces of text.

Here are some examples of GPT-2 in action, in which the model made up a whole story starting from an initial passage written by a human.

SYSTEM PROMPT (HUMAN-WRITTEN)

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.

SYSTEM PROMPT (HUMAN-WRITTEN)

Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)

The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night. The battle lasted for hours until two of the largest Orcs attempted to overwhelm Aragorn. When they finally stopped, they lay defeated and lifeless for miles and miles.

“I take nothing,” said Aragorn. “But I give my word, at my peril and mine, that I will never forget this day of horror. None of us will forget. Ever!”

“I’ll never forget it!” cried Gimli, who had been in the thick of the battle but hadn’t taken part in it. One of the wounded orcs he had carried off, he was the only one of the survivors who remained uninjured. “We’ll keep the memory of that day of evil, and the war with it, alive as long as we live, my friends!”

“Then we’ll keep it alive as long as we live,” added Legolas. “And we won’t forget the first great battle of the night, even if we may have forgotten the final defeat.”

“I agree,” Gandalf said, “but we will all remember it as the last battle in Middle-earth, and the first great battle of the new day.”

Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.

The Two Rings were returned to Rivendell. Frodo and Sam woke up alone in their room, and Frodo found a note on his pillow. He opened it and read:

May the Power of the Ring be with you always, and may its light never fade. I am not sure if it matters which of the two rings we accept this day but, as you asked me, I have chosen mine. I am sorry to leave you, Frodo, but know that we are very close to the end, and that you are with us forever. May this letter find you safely in Rivendell; and if it does not, then I will accept the ring in your stead. If by any chance you find or give this letter to the enemy, may they learn the strength of the ring and may the Two Rings never be broken!

The generated text certainly has its flaws and is not entirely comprehensible, but it’s a very powerful demonstration nonetheless. So powerful that OpenAI decided to close access to the open source community.
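OpenAI initially published only a smaller version of GPT-2, and later released larger checkpoints publicly. For readers who want to see this kind of generation first-hand, a minimal sketch using Hugging Face’s transformers library (assuming the library and a public GPT-2 checkpoint are available) would look something like this:

```python
# A minimal sketch: prompting a publicly released GPT-2 checkpoint via the
# Hugging Face `transformers` library; output varies from run to run.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampling reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = ("A train carriage containing controlled nuclear materials "
          "was stolen in Cincinnati today. Its whereabouts are unknown.")
result = generator(prompt, max_length=120, num_return_sequences=1)
print(result[0]["generated_text"])  # smaller checkpoints ramble more than the full model
```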

“We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” said Jack Clark, policy director at OpenAI, speaking to the BBC.

Of course, a lot of people were not happy, to say the least. After all, the research institute is called OpenAI, not ClosedAI.

https://twitter.com/AnimaAnandkumar/status/1096209990916833280

OpenAI says that its research should be used to launch a debate about whether such algorithms should be allowed for news writing and other applications. Meanwhile, OpenAI is certainly not the only research group working on similar technology, which puts the effectiveness of OpenAI’s decision into question. After all, it’s only a matter of time — perhaps just months — before the same results are independently replicated elsewhere.

“We’re not at a stage yet where we’re saying, this is a danger,” OpenAI’s research director Dario Amodei said. “We’re trying to make people aware of these issues and start a conversation.”

“It’s not a matter of whether nefarious actors will utilise AI to create convincing fake news articles and deepfakes, they will,” Brandie Nonnecke, director of Berkeley’s CITRIS Policy Lab told the BBC.

“Platforms must recognise their role in mitigating its reach and impact. The era of platforms claiming immunity from liability over the distribution of content is over. Platforms must engage in evaluations of how their systems will be manipulated and build in transparent and accountable mechanisms for identifying and mitigating the spread of maliciously fake content.”

This video that doesn’t feature Barack Obama can teach us a lot about fake news

Sitting before the stars and the stripes, in a fancy office, former president Barack Obama has an important announcement to make… except this isn’t actually Barack Obama.

Seeing is not always believing

“We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time — even if they would never say those things,” says ‘Obama,’ his lips moving in perfect sync with his words as they become increasingly bizarre. “So, for instance, they could have me say things like, I don’t know, [Black Panther’s] Killmonger was right! Or, Ben Carson is in the sunken place! Or, how ‘bout this: simply, President Trump is a total and complete dipshit.”

If you’re still not clear on what’s going on: Oscar-winning filmmaker Jordan Peele and BuzzFeed CEO Jonah Peretti, the filmmaker’s brother-in-law, have created a Public Service Announcement-style video to make people aware of what fake news is already capable of. Aided by sleek technological advancements and an unimaginable amount of data at their disposal, the “baddies” can create an incredibly realistic and manipulative narrative.

Sure, you might say that if you look close enough, you can see the image trickery. You could argue that a careful listener would realize that’s not Obama’s voice. But think about it this way: how many of us are truly paying attention when we’re browsing social media? How many of us lend a critical eye to every video we see? And if this was created by a duo of people just as a stunt, imagine what an army of people with vast resources and concrete objectives would be able to do.

What’s even more awesome — or, perhaps, scarier — is that the fakery was built using readily available software: Adobe After Effects, which you can rent for about $20 a month, and FakeApp, an artificial intelligence program that made headlines earlier this year when it was used to transplant actor Nicolas Cage’s face into several movies in which he hadn’t appeared.

Here’s the plan

Okay, so what can we actually do? Fret not: ZME Science has you covered; we have a plan. Think about April Fools’ Day. Remember that feeling that every video you see and every kooky article you read on that day is potentially a prank? Remember everyone being just a little bit more careful on that day? How about we take that approach and use it every day? Just live your life like every day is April Fools’!

But on a more serious note, we all need to employ a bit more critical thinking when browsing the web. We need to ask ourselves questions like:

  • Who posted this, and how reliable are they? Have they posted similar content? What do they usually post? When you find yourself on a website you don’t know, browse their previous articles and see if you can find a theme. Then think about what that theme implies.
  • Why did they post it? Are they a media outlet? Are they independent, or do they follow an agenda? Again, look for themes and patterns.
  • Are they transparent? When they say things, do they quote sources? Or do they tend to use blanket statements, without ever making it clear what the sources are?
  • Is this something that’s real, or is it just something I want to believe? We all tend to read the articles that agree with our views — that’s how we’re built. But it’s easy to fall into the trap of thinking something’s right just because it agrees with your beliefs.
  • Can I double check it? If I search for the story, will I find it on other reliable outlets?
  • Can I see the big picture? Is this the right context, or are things taken out of context?
  • Lastly, is this something I should share? Do I believe this is reliable and want my friends to believe it as well?

These are challenging times, but we are the gatekeepers of real information — both to ourselves, and to our circle of friends.

This fun online game lets you play a propaganda master — and it’s a fake news vaccine

A new game developed by Cambridge scientists lets us take the role of an aspiring propagandist — you decide how to manipulate the public, use a Twitter bot army, and create a loyal, misled following. The game is free to play online, it’s simple, fun, and best of all, it enables you to deal with propaganda when you actually encounter it.

Play the bad news game here

Catchy lies

There’s a reason why people say we live in a post-truth world. If you’ve been keeping an eye on the news, you’ve probably seen stories of pro-Russian Twitter bots, fake news, and the ever-present propaganda — and we fall for these more often than we’d like to think.

When President Trump was still a candidate, a story went viral claiming that the Pope had endorsed him; it racked up some 960,000 Facebook engagements. A quick investigation revealed that the story was completely fake and traced it back to a small town in Macedonia called Veles — where no fewer than 140 fake news sites are based. But the debunking didn’t reach half as many people as the original lie, and even if it had, the seeds of doubt were already planted.

For the past few years, researchers have been frantically looking for a way to inoculate people against fake news. Now, Cambridge researchers have found a way to do just that: by playing a game. I’ve played it a couple of times already. It’s fun, addictive, and extremely educational.

The fake news game

The earliest stages of the game. You’re just a small fry with no followers. But we’ll move up soon enough.

The game has you fill the shoes of an aspiring propaganda master. Through a series of branching choices, the player stokes anger, mistrust, and fear in the public by manipulating digital news and social media. You start a website, build a loyal Twitter following (ahem, bots are welcome), and publish polarizing falsehoods. The goal is to attract as many followers as possible while also maintaining a high “credibility” score.

But the real goal of the game is to understand how fake news works. The game takes advantage of a simple psychological trick: if someone tells you how something works, you might not want to take in the information. But if someone practically shows you the proverbial sausage factory, the inner workings of online misinformation, you’re much more likely to take it in. Better yet, the game is catchy, so you want to play it more and learn more.

Now we’re going places.

In order to test how well it works, researchers conducted a pilot study with teenagers. They found that those who played the game were much less likely to be tricked by fake news. No one really wants to eat the sausage after seeing how it’s made.

In psychology, this process is called inoculation.

Play the bad news game here

A disinformation vaccine

Like a vaccine, psychological inoculation renders you immune (or almost immune) to the effects of fake news.

“A biological vaccine administers a small dose of the disease to build immunity. Similarly, inoculation theory suggests that exposure to a weak or demystified version of an argument makes it easier to refute when confronted with more persuasive claims,” says Dr. Sander van der Linden, Director of Cambridge University’s Social Decision-Making Lab.

“If you know what it is like to walk in the shoes of someone who is actively trying to deceive you, it should increase your ability to spot and resist the techniques of deceit. We want to help grow ‘mental antibodies’ that can provide some immunity against the rapid spread of misinformation.”

The game and the subsequent study drew from existing research on online disinformation, taking cues from actual conspiracy theories.

“You don’t have to be a master spin doctor to create effective disinformation. Anyone can start a site and artificially amplify it through twitter bots, for example. But recognising and resisting fake news doesn’t require a PhD in media studies either,” says Jon Roozenbeek, a researcher from Cambridge’s Department of Slavonic Studies and one of the game’s designers.

“We aren’t trying to drastically change behavior, but instead trigger a simple thought process to help foster critical and informed news consumption.”

The study, The Fake News Game: Actively Inoculating Against the Risk of Misinformation, has been published in the Journal of Risk Research. You can read it in full, for free.

Facebook bans “fake news” from advertising

A lie gets halfway around the world before the truth has a chance to get its pants on — and nowhere is that truer than on Facebook.

Post-truth has taken the world by storm. We’re dealing with fake news, alternative facts — whatever you want to call it. Never has information been more readily available in all imaginable forms, but like a perverted Garden of Eden, the web of lies creeps around every corner, swamping information and degrading it.

Just take the already classic fake story that Pope Francis endorsed Trump (then still a candidate). By November 8, the story had picked up 960,000 Facebook engagements, according to BuzzFeed. Pope Francis had to publicly deny these claims, but the denial was shared at least ten times less. Basically, the lie prevailed against the truth, and that’s what most people read. Nor is it an isolated story. Fueled greatly by the White House administration, these so-called alternative facts (let’s call them lies, shall we?) have risen to prominence, especially on social media. It took Facebook a while to adapt to the new context, but now the tech giant is taking some serious steps to fight fake news.

In a blog post, Facebook said Pages that “repeatedly share stories marked as false” by third-party fact-checkers will be banned from buying ads. Like always, Facebook wasn’t very explicit about what they mean by “repeatedly” or who the third-party fact-checkers will be. Also, the ban is not permanent. Still, while it’s not the toughest approach, it’s understandable that Facebook wants to tread lightly.

The idea of preventing these pages from advertising will likely be quite effective. Most of the time, these pages have a website behind them that makes money, so they invest in Facebook advertising in the hope of making even more money by creating viral, fake stories. This is where the update aims to strike.

“This update will help to reduce the distribution of false news which will keep Pages that spread false news from making money. We’ve found instances of Pages using Facebook ads to build their audiences in order to distribute false news more broadly. Now, if a Page repeatedly shares stories that have been marked as false by third-party fact-checkers, they will no longer be able to buy ads on Facebook. If Pages stop sharing false news, they may be eligible to start running ads again.”

“Today’s update helps to disrupt the economic incentives and curb the spread of false news, which is another step towards building a more informed community on Facebook.”

It remains to be seen whether this approach will be successful or not. Facebook will likely take small, incremental steps and assess how things go before moving on to bigger things. That’s the way the wheel must turn when you have over one billion users.

Facebook has been cracking down on fake stories since last fall. Facebook users can flag stories as ‘fake’, and these are then sent to the third-party partners for fact-checking. So far, this has had only a mild effect on the news sphere. The lies are still there, and people are still buying them.

Perhaps more importantly, we have to change, not just Facebook. Too often, we place too much trust in social media, buying everything we see there. Oftentimes, we no longer get the news from reputable sources, but just read some random headline from a random Facebook page and take it as a given. Simply put, that just won’t do. Read the original source. Do a quick fact check on Google. Use critical thinking, and only share after you’re convinced it’s true. If not for yourself, then at least for your Facebook friends. You are the gatekeeper of their information, and you have a responsibility. Facebook must change and it must improve — but at the end of the day, so do we.

Google’s top result for “cure for cancer” says carrot juice is the cure

Nowadays, people take a lot of their information from the internet, but what do you do when the internet is lying?

Your alarm bells should be ringing based on that title alone.

Taking a crack at pseudoscience

If you go to Google and search for [cure for cancer], the top organic result (after you go past the ads) is a website called Cancer Tutor. It basically says that carrot and beetroot juice are the best cures for cancer, and that medicine and science are just wrong. Here’s one of the many gems on this website:

“The general public is so brainwashed they think Nature is too stupid to be a medical doctor. Yet scientists only understand about 3 percent of human DNA after studying it since 1953. This means scientists still don’t have a clue what 97 percent of all human DNA does.”

You probably get the point by now. It’s yet another fear-mongering pseudoscience website. They also advertise some products and “treatments”, so they want to profit from deceiving people — it fits the profile quite well. But this isn’t about debunking another one of these websites; the problem here is that this is the first result. If something ranks that high, then it has to be trustworthy, right?

Well… no, not at all.

How Google works

About a third of all Google searches end with a click on the first result. Since some 12,000 people Google “cure for cancer” every month, that alone funnels roughly 4,000 visitors a month toward that bogus page. In fact, according to SimilarWeb, Cancer Tutor, the bogus website, gets over 500,000 hits a month — all while saying things like this:

“Many people have cured their newly diagnosed cancer by using a very healthy diet and drinking a quart of carrot juice (with a little beet juice mixed in) every day. That is all they did.”

The thing is, Google (just like Facebook and Bing) only recently started cracking down on pseudoscience or blatantly fake websites. If something shows up, even if it’s the first result or even if it’s shared by millions of people — it doesn’t mean it’s true.

No one outside Google really knows exactly how the ranking algorithm works, but we have some good pointers. Being knowledgeable about a topic can help. Having a coherent website also helps. Getting links from other websites definitely helps. But Google assesses all of that automatically, through a “robot,” and for the robot, it can be quite difficult to tell what’s knowledgeable and what’s not. It looks at the words, sees a lot of them relating to cancer and medicine, and might give the page a green light — it might even rank it high. More recently (earlier this year), Google announced it would penalize inaccurate and offensive results, but this is obviously not working properly yet.

This is obviously extra-important for sensitive queries such as this one. Google’s Gary Illyes, the go-to person for fighting spam, admits it’s important to look into this type of issue, but doesn’t mention any concrete solution.

The only good thing is that Google also includes a panel on the right side, featuring actual medical information.

Facebook is also taking similar steps, but their algorithm is also still flawed. For instance, a satire website from Romania was wrongly penalized because Facebook couldn’t tell it was satire and thought it was a misleading news outlet.

Fake news

So where does this leave us? We’re living in a post-truth world where facts don’t matter as much as they used to, and are often replaced by feelings or simply by loud shouts. You can convince people that carrot juice cures cancer, that climate change isn’t real, or even that the planet is flat. All you need to do is repeat it many times in a form that people can buy and you’re good to go. But how do we combat these lies?

An ever-growing number of people get their information from the internet, so that’s clearly one of the places where we have to start. As mentioned above, the two internet giants (Facebook and Google) are both trying to fight fake and misleading outlets, but that’s a slow process, and like the heads of a hydra, such articles will always find a new way to emerge. So while this can work (and is necessary) in the short run, in the long run we need a more sustainable solution. There’s also the risk of these tweaks going too far and sliding into censorship, and that’s definitely not the way to go. As study after study has shown, the most effective tool against indoctrination is critical thinking — and that’s something neither Google nor Facebook can do for us. We have to do it ourselves.

Contradicting fake stories / conspiracy theories on social media just doesn’t do anything. It may be counterproductive

Instead of trying to convince someone that their crackpot ideas are, you know, crackpot, it might be best to smile and move on.

The flat Earth conspiracy theory has gained surprising popularity on social media.

A world of tribes

We know it all too well — it’s strange to say, but our website and Facebook page are often flooded with pseudoscience and conspiracy theories. We do our best to explain the science and set things in order, but more often than not, the discussion derails and things start to spiral uncontrollably. It’s a situation where very little can be done.

If you’ve ventured down this rabbit hole, you probably know just how frustrating it can be. Not only do some people not listen to even the most basic of arguments (i.e. that the Earth is a globe), but it seems that the harder you try, the more you feed them. Well, a new study reports that systematic debunking on social media just doesn’t do anything. It may even do more harm than good.

“Debunking posts stimulate negative comments and do not reach ‘conspiracists’, causing the opposite reaction to what was intended,” explains Fabiana Zollo, author of the paper and research fellow at Ca’ Foscari University of Venice.

To test this, Zollo analyzed likes and comments on 83 scientific Facebook pages, 330 conspiracy pages, and 66 Facebook pages aimed at debunking conspiracy theories. In total, she analyzed over 50,000 Facebook posts. What she found is not surprising, especially considering recent global events.

Two different worlds

There are two different worlds coexisting on Facebook, but they don’t really interact with each other. Users fall into one category or the other, and after they choose a narrative, they stick to it. In other words, they create an echo chamber for themselves. The study reads:

“Users online tend to focus on specific narratives and select information adhering to their system of beliefs. Such a polarized environment might foster the proliferation of false claims. Indeed, misinformation is pervasive and really difficult to correct.”

What researchers found was that when a dissenting opinion emerges, it’s just ignored; users almost never interact with it. This was confirmed in the data. But something else emerged: after such a dissenting opinion was ignored, overall activity on these conspiracy pages increased. So not only did the debunking not help, but it stirred spirits even more.

If this is the case, then it means any debunking strategy will have underwhelming results. Anecdotally, I can confirm this, and other journalists are taking note too: The Washington Post’s Caitlin Dewey decided to suspend her weekly debunking column at The Intersect.

The results are also consistent with the so-called inoculation theory, under which exposure to repeated, mild attacks can make people more resistant to changing their existing beliefs. So if you do mount any debunking “attacks”, you might want to make them go all the way.

What works

If hard facts don’t work and can be even counterproductive, then what works? Basically, it’s all about building bridges to other tribes. If we want to stop misinformation, we don’t just need to have our facts straight, we also need to share them the right way. It’s not a simple task, but in a world where divides are growing bigger than ever, it’s certainly worth it.

“A more open and smoother approach, which promotes a culture of humility aiming at demolish walls and barriers between tribes, could represent a first step to contrast misinformation spreading and its persistence online,” the study concludes.

However, Facebook and Google are studying specific solutions to reduce the impact and visibility of such conspiracy theories or pseudoscience. The effects of such strategies were not studied here.

The thing is, we all have a responsibility here. We all create our own social media experience, we draw our own social circles. It’s tempting to only select people that say things we want to hear, but that’s really not the way to go. It’s important to not shut our own doors and employ critical thinking to judge what we’re seeing. Just because it fits the narrative you want to hear doesn’t make it right — think about that before you become entrenched in one camp or the other.

Our Facebook page was not involved in the study, which is a bit sad.

Journal Reference: Fabiana Zollo , Alessandro Bessi, Michela Del Vicario, Antonio Scala, Guido Caldarelli, Louis Shekhtman, Shlomo Havlin, Walter Quattrociocchi — Debunking in a world of tribes. https://doi.org/10.1371/journal.pone.0181821

Study analyzes why some people are so sure they’re right — even when evidence shows they’re wrong

After he won the election, President Donald Trump said that at least three million votes were illegally cast against him. The statement came out of nowhere and has since been refuted by numerous fact-checkers, but Trump just wouldn’t accept it. He called for a special commission to investigate, and try as they might, no one was able to find any evidence. Despite a barrage of facts and obvious truths, he clings to his idea. This is what scientists call dogmatist behavior.

Image via Flickr.

Dogmatists are people who assert their opinions in an unduly positive and arrogant manner, with little regard for reality. They have a dogma and they stick to it, no matter what. They don’t need to be religious, though many are. You probably know at least one person like this — someone who has their mind set on something and won’t change it no matter what. Two newly published studies examined what makes these people tick and why they refuse to accept reality.

A lack of critical thinking

While the studies had several differences between them, they both found the same major underlying factor: critical thinking, or rather, a lack of it. Higher levels of critical thinking are associated with lower levels of dogmatism. The reverse also stands: the more dogmatic you are, the less likely you are to employ critical thinking. It makes a lot of sense; actually, you could say that critical thinking, which is the objective analysis of facts to form a judgment, rejects dogmatism by definition.

You could also say that dogmatists, these people who just won’t change their opinion, aren’t really thinking about what they’re saying.

“It suggests that religious individuals may cling to certain beliefs, especially those which seem at odds with analytic reasoning, because those beliefs resonate with their moral sentiments,” said Jared Friedman, a PhD student in organizational behavior and co-author of the studies.

Of course, that isn’t the only aspect worth considering. Rigidity is also a major factor, and this also makes a lot of sense. The more rigid you are, the less likely you are to consider other people’s opinions. This goes hand in hand with empathy.

“Emotional resonance helps religious people to feel more certain–the more moral correctness they see in something, the more it affirms their thinking,” said Anthony Jack, associate professor of philosophy and co-author of the research. “In contrast, moral concerns make nonreligious people feel less certain.”

But empathy, as researchers found out, is a double-edged sword.

The pen is mightier than the sword

If you think about terrorists, for instance, you’d probably be surprised to see that within their own bubble, they’re highly empathic. They have a moral compass which tells them that what they’re doing is good, but their moral compass is not tuned to reality. They’re so empathic with “their own” people that they just don’t see anyone else as people, and this can have devastating consequences.

But this also tells us something: how to communicate with them. Friedman and Jack say that if you want to get something through to these people, you have to understand what makes them tick and use that as a bridge. For instance, if you want to talk to a religious dogmatist, use his sense of morality because that’s what he guides himself with. If you’re talking to a non-religious dogmatist, try to use logic. If you’re reaching out to empathic people, empathy matters more than facts.

This is easily visible in today’s world, especially in the US. Jack comments:

“With all this talk about fake news, the Trump administration, by emotionally resonating with people, appeals to members of its base while ignoring facts.”

It obviously works. Many believe Trump, despite the same lack of evidence. Basically, he’s pushed his dogmatism to others. He became the Dogmatist-in-Chief.

Two brain avenues

The studies support what previous work had already suggested: that the brain has two networks — one for empathy and one for analytic thinking. In healthy, thinking people, these networks alternate, with the brain engaging whichever is most suitable to the situation: when you’re having fun with friends, it’s empathy time; when you’re taking a test, analytic thinking takes over. But obviously, this doesn’t always work properly — if we’re judging our voting system with our feelings, something is awry. This is what the studies ultimately conclude: that in dogmatists, these networks are out of balance. In religious dogmatists, empathy rules; in non-religious dogmatists, the analytic network rules — but again, with little or no critical thinking.

Researchers say the study has broad applicability. After all, dogmatists can take a position on any number of things, from politics or religion to eating habits or racism.

There are several ways to look at it. In many regards, the brain can be (metaphorically) viewed as a muscle. If you don’t train it through critical thinking, then you’re at the mercy of whoever screams the loudest in your ear. If you do, then you’ll start to think for yourself more and your brain will settle into a healthy pattern. Another way of looking at this issue is as a problem to be fixed: I hope it’s safe to say we don’t want a society of dogmatists, and we need to start healthy conversations — and now we know how.

For instance, if a person is against abortion on religious grounds, you might want to avoid taking the “reason” route. Yes, you could explain that the Bible or any other holy book doesn’t mention abortion, or that at one-month-old, we’re not really talking about a human being. Those are reasonable arguments, but they might not get you very far. Instead, you might want to explain that no woman is happy when she has an abortion. She is doing it because she feels the alternative is much worse, and this is a way of reducing suffering. The main reasons why women have abortions are that they feel it will have a negative impact on their lives or that they simply can’t afford it. Raising a baby in poverty, insecurity, and perhaps violence is certainly something that nobody wants.

At the end of the day, we’re living in a world where divides of opinion are getting stronger and stronger, and picking the right side doesn’t really matter that much. What matters is finding a way to bridge both sides and walking the right path together. This is what researchers hope their studies will help us do.

Journal Reference: Jared Parker Friedman and Anthony Ian Jack — What Makes You So Sure? Dogmatism, Fundamentalism, Analytic Thinking, Perspective Taking and Moral Concern in the Religious and Nonreligious.

One in five bots sharing fake news during France’s presidential election were also involved in the United States’

In the wake of France’s recent presidential elections, a study reveals there might be a “black market” of political bots lurking under the surface. Some of the bot accounts which spread disinformation during these elections were also involved in the 2016 U.S. presidential race.

French Election Louvre.

A young boy waves a French flag at The Louvre shortly after Macron’s victory was announced.
Image credits Lorie Shaull.

In the 10 days before French voters cast their ballots, Twitter was abuzz with activity. By combing through some 17 million messages tweeted during that time, Dr. Emilio Ferrara, a research assistant professor at the USC Computer Science Department, reports that a sizable subset can be traced back to Twitter bot accounts hell-bent on spreading a package of false documents dubbed the Macron Leaks — a stash of falsified and doctored documents, photos, and correspondence supposedly coming from Macron and his campaign staff.

Bots have it, we share it

This smear campaign started in the afternoon of April 30th and was spewing some 300 tweets per minute in the days just prior to the election, the team reports. Ferrara's team used "machine learning techniques and cognitive behavioral modeling" to sift through the 17 million tweets and separate human from bot accounts. Those tweets belonged to two million accounts, of which Ferrara says some 350,000 were bots wholly dedicated to the MacronLeaks: their job consisted of tweeting links to alt-right news media organizations hyping the fake leaks, links to online archives of the documents, or URLs pointing to well-known far-right fake news websites.
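As a rough illustration of how this kind of human-versus-bot separation can work, here is a minimal sketch of a classifier trained on simple account-level signals. This is not the study's actual method, which the authors describe only as machine learning and cognitive behavioral modeling; the features, toy data, and example accounts below are assumptions made purely for illustration.

# A minimal, hypothetical sketch of bot-vs-human account classification.
# The features and toy data are assumptions, not the study's real inputs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Per-account features: [tweets per day, account age in days,
#                        followers-to-following ratio, fraction of tweets that are retweets]
X = np.array([
    [8,   2400, 1.20, 0.30],   # long-lived, low-volume account (human-like)
    [450,   12, 0.01, 0.98],   # brand-new, high-volume, retweet-only account (bot-like)
    [15,  1100, 0.80, 0.45],
    [600,    5, 0.02, 0.99],
])
y = np.array([0, 1, 0, 1])     # 0 = human, 1 = bot

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Score a new, unseen account; with these toy numbers it is expected to be flagged as a likely bot
print(model.predict([[300, 30, 0.05, 0.95]]))

In a real system the features would be far richer (posting-time patterns, content similarity, network structure), but the basic idea of scoring accounts from behavioral signals is the same.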

Ferrara says he tracked these documents back to an "email dump" posted in a thread on 4chan, a popular imageboard, two days before the election proper. From there, they were picked up by alt-right activist Jack Posobiec and even WikiLeaks, which helped the disinformation campaign gain a lot of exposure. So although bots were heavily implicated, their role was mainly to dupe a few people into believing the contents of the papers.

These people, in turn, would do the heavy lifting of disseminating the information. For example, the now-deleted bot account @jewishhotjean started off with only 46 followers two days before the election but jumped to 14,033 after just 39 retweets. Another suspended account, @yhesum, went from 21 to 9,476 followers after 291 retweets.

But perhaps most troubling is that about one in five of the bots involved in the French election were also active in the U.S. presidential race, suggesting “the possible existence of a black-market for reusable political disinformation bots,” Ferrara wrote in the study.

Luckily, however, French users didn't bite much into the MacronLeaks. Although the false documents made headlines for several days in the country, they didn't have any significant effect on voting trends: French voters overwhelmingly elected Macron over nationalist opponent Marine Le Pen.

Algorithmic politics

A lot of people did bite, however. But these users had no say or vote in the matter and were "mostly foreigners belonging to the alt-right Twitter community," the study notes. That's why findings such as the ones Ferrara's team describes here are so important: they show just how easily duped we can be, and how powerful a simple bot armed with fake news can become.

"The adoption of automated devices such as social bots in the context of disinformation campaigns is particularly concerning," Ferrara writes, "because there is the potential to reach a critical mass large enough to dominate the public discourse and alter public opinion."

“This could steer the public’s attention away from facts and redirecting it toward manufactured, planted information.”

The paper doesn't discuss politics, but the reuse of propaganda bots from the U.S. election seems to confirm some of the patterns security researchers and policy analysts have found regarding Russia's involvement in America's latest presidential race.

As Macron has been very critical of the Kremlin, lambasting Russia's state-run media both before and shortly after winning the presidency, the motive seems to be there. Seeing the same groups of bot accounts involving themselves in similar issues through similar patterns is a pretty big smoking gun. But at the end of the day, attributing anything to anyone beyond a shadow of a doubt is almost impossible in cyberspace.

Still, it's not all bad. Ferrara notes that out of the 15 most active anti-Macron bots, Twitter deleted 4, suspended 7, and quarantined 2. But there's nothing stopping someone from making a new account and going through it all over again, so we should try to improve our chances and educate people to spot fake news for what it is: simple propaganda.

The study “Disinformation and Social Bot Operations in the Run up to the 2017 French Presidential Election” can be read here.

Teaching school children to sniff out bogus medical claims works

Researchers taught thousands of Ugandan school children, some as young as ten years old, how to think critically and sniff out bogus health claims. When assessed with various tests, twice as many children who received the "recipe" for sniffing out medical falsehoods achieved a passing grade compared to a control group of children who were given no such instruction. A similar trend was reported for parents as well, suggesting "bullshit detection" can become an acquired skill, more important than ever in a so-called "post-truth" era.

chalkboard

Credit: Pixabay.

A good bullshit detector starts early

The experiment was led by Andy Oxman, research director at the Norwegian Institute of Public Health. Oxman became inspired to school young people in the ways of critical thinking after visiting his 10-year-old son's class almost twenty years ago. Back then, Oxman told the class that some teenagers had found that red M&Ms make you feel good and help you write and draw more quickly. There were also some side effects, though: a little stomach pain, and dizziness when standing up too quickly.

Oxman challenged the school kids to an experiment meant to validate or disprove these findings. The class was divided into a couple of working groups, each with a full bag of M&Ms at their disposal. Although no particular instructions were offered, the children were clever enough to notice that a) they had to try out each different-colored M&M to see what effect it produced, and b) no such test would be completely fair if they could see the color of the M&M they were eating. Essentially, they had discovered the utility of "blinding" in science before anyone had the chance to teach them what it meant.

By the end of the experiment, most of the children reported little to no difference in the effects of differently colored M&Ms and even questioned the teenagers' method itself. Oxman had been disappointed by previous attempts to instill critical thinking in adults, and this was before Facebook even existed, let alone the "fake news" craze. This episode, however, suggested that if children were given the means to spot bullshit, they could become immunized against it. But could he prove it?

Many years later, Oxman got the chance to test this hypothesis in a huge trial involving no fewer than 10,000 school children from 120 primary schools in Uganda. Oxman and colleagues adapted concepts from a popular book called Testing Treatments (available as a free download), which explains in plain English the concepts people need to grasp in order to separate garbage from genuine health advice. Eventually, the team settled on six key points a person would need to understand in order to think critically about medical treatments. As reported by Vox, these are:

  1. Just because a treatment is popular or old does not mean it’s beneficial or safe.
  2. New, brand-name, or more expensive treatments may not be better than older ones.
  3. Treatments usually come with both harms and benefits.
  4. Beware of conflicts of interest — they can lead to misleading claims about treatments.
  5. Personal experiences, expert opinions, and anecdotes aren’t a reliable basis for assessing the effects of most treatments.
  6. Instead, health claims should be based on high-quality, randomized controlled trials.

According to a Ugandan urban myth, cow dung helps burns heal faster. It does not. Credit: Wikimedia Commons.

Armed with a solid template and fun exercises, the team, made up of researchers from Uganda, Kenya, Rwanda, Norway, and England, ran its program across more than a hundred schools in Uganda. The researchers also put together a guidebook for teachers and cartoon-filled textbooks for children. The stories presented in the textbooks were adapted to fit the local context, including myth busting. For instance, many locals recommend cow dung as the best treatment for burns (it isn't). Some of the myths covered are particularly anti-scientific and menacing. One that circulated widely suggested immunization was somehow linked to infertility; as a result, parents stopped allowing their children to be vaccinated, with grave consequences. Another myth caused people to replace antiretroviral therapies for HIV with herbal supplements.

To see how all of this work impacted children's critical thinking, 10,000 fifth-graders, mostly ages 10 to 12, participated in a trial from June to September 2016. Half of the school children had been schooled in detecting bogus medical claims, while the other half hadn't and acted as the control group.

The average score on the test for the kids schooled by Oxman and colleagues was 62.4 percent, compared to only 43.1 percent for the control group. Perhaps more importantly, twice as many kids from the intervention schools achieved a passing score as those from the control group. The schooling may even have seeded Uganda's future critical-thinking elite: about one-fifth of the kids schooled by the researchers mastered the key concepts (more than 20 of 24 answers correct), compared to only 1% of the control group.

When the researchers tried the same thing on parents, they saw similar results. Instead of a course, the parents listened to a dedicated podcast about critical thinking in a medical context. Twice as many parents who listened to the podcast series passed a test on their understanding of key health concepts compared with parents in the control group, as reported in two studies published in the Lancet (1 and 2).

Though it has its limitations, the sheer scale of the study suggests that early inoculation with critical thinking works! In today's age of shameless, blatant lying in the public space, it's nice to hear that simple education can actually work.

EU-Funded fake news spotting tool gets better and better

Journalism may get an extra boost from fact-checking algorithms. Image via Public Domain Pictures.

The Pope endorses Donald Trump! Or does he? Vaccines cause autism! No, they don't (really, they don't). Every day we're bombarded with information and news, much of which is simply not true. Fake news has become a part of our lives, and many such stories are compelling enough to make people believe them and, oftentimes, share them on social media. The world is still scrambling to adapt to this new situation, and a definitive way to combat fake news quickly and efficiently has yet to emerge.

Fact checking on steroids

With that in mind, the EU started a new project called Pheme, after a Greek goddess. The Pheme project brings together IT experts and universities to devise technologies that could help journalists find and verify online claims. It’s very difficult for artificial intelligence to detect satire, irony, and propaganda, but Pheme has reportedly been making significant advancements in this area.

Unverified content is dominant and prolific in social media messages, Pheme scientists say. While big data typically presents challenges of volume, variety, and velocity, social media adds a fourth: establishing veracity. The Pheme project aims to analyze content in real time and determine how accurate the claims made in it are.

Fact checking is an often overlooked aspect of modern journalism. It takes a lot of time, it doesn't add anything "spicy" to media content, and you rarely hear about the people doing it. Therefore, many media outlets are doing something else instead: making stuff up. Half-truths and misinformation (or, as the White House prefers to call them these days, alternative facts) have been running rampant on Facebook and Twitter, with millions of users spreading them without bothering to check whether they're real or not. Yes, users also carry part of the blame here.

Researchers want to find an algorithmic solution to this human problem. They hope to do this by analyzing language use and the spread of information through the network, as well as the overall context of the information itself. Basically, they want to build a real-time lie detector for social media, flagging hoaxes and myths before they manage to go viral, much like an antibiotic taken as the symptoms start to set in. Ideally, we'd have a vaccine for this; the vaccine, in this case, is education and convincing people to fact-check things before believing them, which is either not happening or will take a very long time. A faster solution is needed, and Pheme could be part of that solution.
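To give a rough sense of what "analyzing language use and spread" could look like in code, here is a minimal sketch of a claim-scoring classifier. It is not Pheme's actual pipeline; the feature set, toy data, and scores are assumptions chosen purely for illustration.

# Illustrative sketch only, not Pheme's actual system. It combines simple
# linguistic cues with propagation features to score how trustworthy a claim looks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-claim features: [hedging words per 100 tokens, exclamation marks,
#                      shares per hour, fraction of sharers with verified accounts,
#                      source-domain reputation score from 0 to 1]
X_train = np.array([
    [0.5, 0,  12.0, 0.30, 0.90],   # sober wording, slow spread, reputable source
    [4.0, 3, 250.0, 0.02, 0.10],   # sensational wording, explosive spread, shady source
    [1.0, 1,  40.0, 0.15, 0.70],
    [5.5, 4, 300.0, 0.01, 0.05],
])
y_train = np.array([1, 0, 1, 0])   # 1 = likely accurate, 0 = likely false or unverified

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_claim = np.array([[3.5, 2, 180.0, 0.03, 0.20]])
print("Estimated probability the claim is accurate:", clf.predict_proba(new_claim)[0][1])

In a real system, these hand-picked features would be replaced by learned representations of the text and of the way a rumor propagates through the network, updated as new shares and replies come in.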

The project is named after Pheme, the Greek goddess of fame. Image credits: Luis García.

They focus on two scenarios: lies about diseases and healthcare, which can be especially dangerous, and information used and published by journalists. Pheme addresses speculation, controversy, misinformation, and disinformation, in what can only be described as a broad, ambitious attempt. If this works out, as cliche as it sounds, it has the potential to revolutionize how we receive information and change the world forever.

Pheme will not only focus on analyzing news and stories, but it will also try to identify… memes. They coined the term phemes to designate memes which are enhanced with truthfulness information. Helping the spread of such phemes could not only make your coworkers laugh, but also help propagate truthfulness instead of misinformation.

The cool thing is that they will release all of this as an open-source algorithm, to be used by journalists worldwide. The project will also reportedly develop a free-to-use platform where anybody can filter and verify media claims through an interactive, intuitive dashboard.

Of course, Pheme is not the only project of this type. Facebook and Google are working on their own fake news detectors, as are several other tech giants and research institutes. The acuteness of the fake news problem is impossible to ignore, and its impact on the world should not be underestimated. The stakes have never been higher.

‘Psychological vaccination’ counters effects of fake news. Critical thinking is still a thing

Taking a hint from vaccines, a group of researchers has devised an immunization method against "fake news". In a typical vaccine, a piece of the virus that causes the illness is inoculated in a weakened or dormant form to build up the body's resistance. Likewise, inoculating people against so-called "alternative news" by exposing them to a small dose of misinformation, along with an explanation of the tactics behind it, can stop myths and plain lies from propagating.

The hoax pandemic

The study, carried out by researchers from the University of Cambridge in the UK and Yale and George Mason University in the US, couldn't come at a better time. It follows hot on the heels of one of the most divisive presidential campaigns in U.S. history, partly fueled by disinformation at the hands of fake news outlets that purposely propagate hoaxes and lies.

Fake stories like those claiming that 'Pope Francis and actor Denzel Washington had endorsed Donald Trump' or that 'Protesters at anti-Trump rallies in Austin, Texas, were "bused in"' have been shared hundreds of thousands of times on Facebook and went viral. The most outlandish fake news story must be the "Pizzagate" scandal: a viral hoax suggesting that hacked emails of John Podesta, the Clinton campaign chair, contained coded messages referring to human trafficking. These codes supposedly connected a number of restaurants and Democratic Party members with a fabricated child-sex ring. The story scandalized the public and was shared thousands of times. On December 4, 2016, Edgar Maddison Welch, a 28-year-old man from Salisbury, North Carolina, barged into the Comet restaurant, one of the places "exposed" by the conspiracy theory, with an assault rifle and fired three shots inside. No one was injured and Welch was later arrested. The man claimed that he wanted to investigate for himself and gave up when he found there were no minors held captive inside the restaurant. It's easy to imagine how all of this could have gone horribly wrong and people could have gotten killed.

The impact of fake news is incalculable at this point, but its power to influence people’s opinions is real and clear. A previous study from Stanford found that ‘up to a frightening 80% of surveyed US middle school students can’t tell the difference between fake news and actual news stories.’

Some of the blame for this wretched state of affairs, which undermines democracy, has been pinned on Facebook. In response, the social network recently introduced a "fake news" flag-and-warn system: if a story is flagged as fake, it is reviewed by a trusted partner, and users are warned that the shared article might be bogus. Such measures are welcome and necessary but not nearly enough, because everyone shares the blame.

The ‘vaccine’

There will be some who believe a partisan audience can't be swayed to reason no matter what you do. However, the present study suggests there are ways to improve the public's critical thinking and shield communities against the viral spread of fake news through social networks.

“Misinformation can be sticky, spreading and replicating like a virus,” says lead author Dr. Sander van der Linden, a social psychologist from the University of Cambridge and Director of the Cambridge Social Decision-Making Lab. “We wanted to see if we could find a ‘vaccine’ by pre-emptively exposing people to a small amount of the type of misinformation they might experience. A warning that helps preserve the facts. The idea is to provide a cognitive repertoire that helps build up resistance to misinformation, so the next time people come across it they are less susceptible.”

For their study, the researchers recruited 2,000 participants spanning the US spectrum of age, education, gender, and politics. The team chose climate change as the broad topic of fake news, a polarizing subject for many Americans despite the overwhelming scientific consensus. Climate change is often the target of deliberate misinformation at the hands of vested interests who funnel millions into such campaigns. Stances on climate change are also influenced by political affiliation, making it a good subject for probing fake news.

First, the researchers selected an actual fake news story to work with. Various falsehoods that had gone viral on the internet were rated for familiarity and persuasiveness. The winner was the assertion that "there is no consensus among scientists" that CO2 released by human activities will cause climate change. The Oregon Global Warming Petition Project claims it has a petition signed by 31,000 American scientists who support this claim. As debunked by Skeptical Science, "30,000 scientists and science graduates listed on the OISM petition represent a tiny fraction (0.3%) of all science graduates. More importantly, the OISM list only contains 39 scientists who specialise in climate science." In reality, among climate scientists, a.k.a. the experts, "97.1% endorsed the consensus position that humans are causing global warming."

That's about it for the fake news story. What the researchers did next was split the participants into various groups to test different fake news exposure scenarios. Each participant's perceived level of scientific agreement on climate change was assessed throughout the study.

Credit: Skeptical Science.

Some participants were shown only the fact about the climate change consensus, in pie-chart form similar to the image above. This group reported a 20-percentage-point increase in perceived scientific agreement. Another group was only shown a screenshot from the Oregon Global Warming Petition Project website. These participants' belief in a scientific consensus dropped by 9 percentage points.

Strikingly, when participants were presented with the factual climate change consensus pie-chart followed by the erroneous Oregon petition, the two stories neutralized each other, resulting in only a 0.5-percentage-point difference in perceived scientific agreement. In other words, faced with conflicting information, these people remained undecided about which side to choose. In many instances, a perceived state of polarization or debate is very lucrative for certain interests, like the oil and gas or tobacco industries, but that's another story.

"It's uncomfortable to think that misinformation is so potent in our society," says van der Linden. "A lot of people's attitudes toward [climate change] aren't very firm. They are aware there is a debate going on, but aren't necessarily sure what to believe. Conflicting messages can leave them feeling back at square one."

The good news is that fake news can be rendered benign if preemptive measures are taken. To illustrate this point, the researchers gave two random groups in the study a ‘vaccine’ which could either be:

  1. a general warning that “some politically-motivated groups use misleading tactics to try and convince the public that there is a lot of disagreement among scientists” or
  2. a detailed inoculation that explains why the Oregon petition is bogus. For instance, participants were informed that fewer than 1% of signatories have backgrounds in climate science, or that some of the signatures are fraudulent, such as those of Charles Darwin and members of the Spice Girls.

The general inoculation produced a 6.5-percentage-point shift towards acceptance of the climate consensus, despite exposure to the fake news. The detailed inoculation resulted in a 13-percentage-point increase. That's still only about two-thirds of the 20-point effect experienced by participants who had only seen the factually correct consensus pie-chart, but much better than neutralization.
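For reference, here is the arithmetic behind that "two-thirds" figure, using only the percentage-point shifts quoted above (no additional data from the study):

# Percentage-point shifts in perceived scientific agreement, as quoted above
facts_only = 20.0            # consensus pie-chart alone
fake_only = -9.0             # Oregon petition screenshot alone
both = 0.5                   # facts followed by fake news: near-complete neutralization
general_inoculation = 6.5    # general warning plus fake news
detailed_inoculation = 13.0  # detailed debunk plus fake news

print(detailed_inoculation / facts_only)  # 0.65, roughly two-thirds of the facts-only effect
print(general_inoculation / facts_only)   # 0.325, roughly one-third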

The researchers were also careful to analyze the effects of fake news as a function of political affiliation. The findings suggest that for both Democrats and Independents, the fake news neutralized the factual message; for Republicans, the fake news overrode the facts by 9 percentage points.

Following inoculation, however, the positive effects (about two-thirds of the effect caused by the facts alone) were preserved across all parties.

“What’s striking is that, on average, we found no backfire effect to inoculation messages among groups predisposed to reject climate science, they didn’t seem to retreat into conspiracy theories,” van der Linden said.

“There will always be people completely resistant to change, but we tend to find there is room for most people to change their minds, even just a little.”


We’re trusting a lot of fake news because we’re abysmal at weeding it out, study finds

We have more information at our fingertips than previous generations absorbed in a lifetime, but we're doing a very poor job of filtering actual news from the shadier info released upon social media. Even students, the most technically capable and internet-literate people out there, are largely unable to make this distinction, a new study found.

Image credits Oberholster Venita / Pixabay.

You've probably run into a few bogus pieces of news on your Facebook adventures at one point or another. And you may be doing a good job of dodging them for the most part. You also probably believe that everyone else can draw on the same level of news-savviness as you. You'd be wrong.

A new study led by Sam Wineburg from Stanford University found that up to a frightening 80% of surveyed US middle school students can't tell the difference between fake news and actual news stories. An even higher percentage had no qualms about taking information from anonymous Imgur posts at face value, as reliable fact. Even worse, we believe we're doing a good job of weeding out the bad content from the rest just because we can get to it.

“Many people assume that because young people are fluent in social media they are equally perceptive about what they find there,” said lead researcher Sam Wineburg from Stanford University.

“Our work shows the opposite to be true.”

Fake news can take many shapes. Sponsored or advertising content, information from dubious sources, even straight-up fabrications that go viral all qualify. For example, there was the story that an FBI agent directly involved in the Hillary Clinton email investigation was found dead in his apartment. Or that Pope Francis is all for Trump being president. Both stories had less truth in them than there's bagel in a bagel's hole.

Traditionally, this job of weeding out fake news was done by editors or journalists themselves through the obscure ancient practice of “fact-checking.” Since those times, social media has largely taken over the role of dedicated news agencies, and anyone can post whatever they want. To their credit, companies such as Facebook or Google are working to de-monetize or actively ban this kind of content following the electoral news disaster. But the sheer percentage found gullible by the study points to a deeper issue in how suppliers and consumers in today’s media world interact. And with 62% of US adults getting the majority of their news from social media, it’s very important we understand just how much of a problem it is. The Stanford researchers themselves admit to being “shocked” by the results.

“In every case and at every level, we were taken aback by students’ lack of preparation,” they write.

The survey totaled 7,804 students from middle school to college levels in 12 US states. The researchers gave each participant a range of activities to perform based on their educational levels. One task presented to high school students had them rate the trustworthiness of an Imgur photo showing deformed daisies. The headline read “Fukushima Nuclear Flowers: Not much more to say, this is what happens when flowers get nuclear birth defects.” And we’ve talked about this picture before.

“[…] people started to freak out all over the internet that these plants suffered mutations due to the devastating nuclear incident from 2011 in Fukushima, Japan. According to the photographer @san_kaido, the radiation level near the daisies was measured at 0.5 μSv/h at 1m above the ground, which in fact is not much higher than the normal values,” Alexandra wrote.

“In other words, no reason to freak out.”

Everything at face value, please!

Only 20% of students thought the photo (posted anonymously) was a little dubious. But double that, 40% of students, considered the photo "strong evidence" that the region around Fukushima is hazardous.

“We asked students, ‘Does this photograph provide proof that the kind of nuclear disaster caused these aberrations in nature?’ And we found that over 80 percent of the high school students that we gave this to had an extremely difficult time making that determination,” Wineburg told NPR.

“They didn’t ask where it came from. They didn’t verify it. They simply accepted the picture as fact.”

Another task had middle-school students sift through the Slate homepage and decide whether each piece of content was news or an ad. They were pretty good at pointing out traditional ads, such as banners, but more than 80% of the 203 students thought that a native finance ad, labeled as "sponsored content," was a piece of actual news.

Participants also had difficulty telling credible sources from shadier ones, and most ignored cues such as the authentication tick on verified Facebook and Twitter accounts. In one task, 30 percent of the students argued that a fake Fox News account was more credible than the verified one because it used better graphics. Even college students had a hard time identifying the political views of candidates based on Google searches.

Before algorithms are put in place to take this kind of news out of our feeds, the researchers say we need to focus on better educating ourselves on the issue.

“What we see is a rash of fake news going on that people pass on without thinking,” Wineburg told NPR. “And we really can’t blame young people because we’ve never taught them to do otherwise.”

“The kinds of duties that used to be the responsibility of editors, of librarians now fall on the shoulders of anyone who uses a screen to become informed about the world,” he added.

“And so the response is not to take away these rights from ordinary citizens but to teach them how to thoughtfully engage in information seeking and evaluating in a cacophonous democracy.”

There’s also evidence that people can start remembering fake facts as real — an effect which could extend to fake news, as well.

The report “Evaluating Information: The Cornerstone of Civic Online Reasoning” is still awaiting publication and is yet to pass peer-review, so as always take it with a grain of salt. You can read it in full here.

No, global temperatures aren’t “plunging” – fake news is acting up

Scientists were shocked to see the House of Representatives Committee on Science, Space, and Technology tweeting a Breitbart News article with the headline "Global Temperatures Plunge. Icy Silence From Climate Alarmists." This is disturbing for several reasons. For starters, it shows the House taking a partisan position on climate change, a scientific issue on which there is a virtual consensus. Secondly, it propagates the views of Breitbart News, the alt-right mouthpiece known for racist, misogynistic, and overall misleading articles. Perhaps most importantly, it's just bad science and fake news.

There is a virtual consensus on humanity's impact on climate. This graphic by John Cook, from "Consensus on Consensus" by Cook et al. (2016), uses pie charts to illustrate the results of seven climate consensus studies by Naomi Oreskes, Peter Doran, William Anderegg, Bart Verheggen, Ed Maibach, J. Stuart Carlton, and John Cook.

Cracking down on fake science

They say a lie can travel halfway around the world while the truth is still putting its pants on. Unfortunately, this is too often the case. Articles like these are lazy, false, and dangerous, but they tend to get picked up a lot, especially on social media. It all started with David Rose of the Daily Mail last week, who stated that global land temperatures have plunged by more than 1°C since the middle of this year. But this is not only false and misleading, it's also pointless:

  • for starters, this claim relies only on satellite data, which only goes back to 1978. Surface thermometers, going back to 1800, put the drop into perspective: it is a drop, but not a record one.
  • secondly, the claim conveniently discusses only land temperatures; when you consider both ocean and land temperatures (so, essentially, global temperatures), the drop isn't so significant.
  • also, this event is completely normal when an El Niño event transitions to La Niña. Every serious climate scientist knew and expected this, and to claim that this was surprising or somehow related to long-term global shifts is irresponsible.
  • lastly, this was greatly exaggerated and taken out of the global context – it was made to seem as if this (again, completely expected change) took everyone by surprise and meant that global warming isn’t happening.

It’s basically cherrypicking on steroids.

“This is an astounding example of cherry-picking the data,” said Kerry Emanuel, a professor at MIT. “Global land temperatures fluctuate significantly from one month to the next, and the article in question appears to have cherry-picked a drop on global land temperatures (not including the ocean, which covers 70% of the globe) for a single month.”

Of course, climate change denier groups loved it and were all over it, exaggerating and misrepresenting things even more. The Breitbart article largely quotes the Global Warming Policy Foundation, located in the United Kingdom. As it turns out, they're a group of climate change denial advocates funded by anonymous donors, although one major financier has been revealed: Sir Michael Hintze, a hedge fund founder who is a prominent member of England's Brexit and climate denial cabal.

Scientists across the world quickly replied to the news.

“They’re not serious articles,” said Adam Sobel, a Columbia University climate scientist. “They paint it as though it’s an argument between Breitbart and Buzzfeed when it’s an argument between a snarky Breitbart blogger and the entire world’s scientific community, and the overwhelming body of scientific evidence.”

He went on to explain how articles like this can make a seemingly convincing case while ignoring very crucial aspects of climate science.

“The temperature goes up for a couple of years and we have the largest year on record, then it goes down and it reaches a level that’s still well above 20th-century historical averages,” he said. “That in no way disproves anything about the causes of the long-term temperature trends.”

Politics interfering with science

The planet is getting much hotter, and we're seeing signs of this year after year. To put things into perspective, even with this drop in December, Earth keeps getting significantly hotter: 2014, 2015, and 2016 were each, in turn, the hottest year on record, as Michael Mann, a climate scientist at Penn State University, explains:

“Three consecutive record-breaking warm years, something we’ve never seen before, and a reminder of the profound and deleterious impact that our profligate burning of fossil fuels is having on the planet,” he told the Guardian.

U.S. Senator Bernie Sanders of Vermont speaking at a town meeting at the Phoenix Convention Center in Phoenix, Arizona. Image credits: Gage Skidmore.

As if it weren't troubling enough that big media outlets are picking up articles like this, an official institution sharing one is simply unacceptable. As Mann says, it can only be seen as a deliberate effort to fool the public.

“For anyone, least of all the House committee on science, to at this particular moment be promoting fake news aimed at fooling the public into thinking otherwise, can only be interpreted as a deliberate effort to distract and fool the public.”

The best thing to come out of this is the response from Senator Bernie Sanders of Vermont.


The Trump administration is not even in office yet, and we're already seeing ripples of its effect. Donald Trump has often spoken against climate change, claiming it to be a "Chinese invention." His staff is also composed mostly of climate change deniers and people with a long history of promoting fossil fuel energy at the expense of clean renewables.

[NOW READ] Here’s how Trump might bring the U.S. back to the climate ‘dark ages’