Twitter deleted one of the president’s posts and temporarily blocked his account for publishing the personal details of a journalist, while Facebook deleted one of Trump’s posts for falsely saying flu is more lethal than COVID-19.
It’s been a bizarre week even by Trump standards. After the announcement that both the President and the First Lady tested positive for coronavirus, we witnessed a series of conflicting and confusing accounts, followed by a small parade and a controversial discharge from Walter Reed.
But even this wasn’t the end of it. A barrage of all-caps semi-cryptic tweets followed from the President, including one call to “REPEAL SECTION 230!!!” — a law that protects social media companies like Twitter and Facebook from being liable for the content posted on their platforms.
But what triggered a response from social media companies was a false claim from the President, who tweeted that the US had “learned to live with” flu season, “just like we are learning to live with Covid, in most populations far less lethal!!!”
So far, COVID-19 has killed more people than the past five flu seasons combined, despite unprecedented protection measures that have drastically limited the spread of the virus — and still, there is no end in sight to the pandemic without a vaccine. We don’t know exactly how lethal the novel coronavirus is, but it is much more dangerous than the flu. According to Johns Hopkins University, it could be about 10 times more dangerous, but it’s also more contagious, so it’s hard to say just how much worse it is — by no means, however, is it less dangerous.
Twitter hid the message and posted a warning about “spreading misleading and potentially harmful information,” while Facebook removed the post altogether. Facebook spokesperson Andy Stone also confirmed that the company removed the post for COVID-19 misinformation.
In a separate incident, the president posted a tweet quoting a New York Post article that praised him as an “invincible hero”. The tweet also included the journalist’s personal email address, which prompted Twitter to delete the post, as sharing other people’s private information is prohibited on the platform.
Facebook, for its part, announced more stringent measures: the social network will ban all accounts, pages, and groups that represent or promote QAnon. QAnon is a sprawling, far-right movement with a long history of baseless conspiracy theories. For instance, QAnon members believe that a cabal of Satan-worshiping pedophiles running a global child sex-trafficking ring is plotting against President Donald Trump, and that Trump is planning a day of reckoning. Trump has previously praised the group, saying that they “like me very much” and that they “love America.” Meanwhile, the FBI has deemed QAnon a potential domestic terrorist threat.
Facebook communities that promote distrust in ‘the establishment’ and official health guidelines are more effective than reliable health groups at reaching and engaging with undecided individuals, a new study reports.
The study was carried out at George Washington University and used a special tool built to track vaccine discussions on Facebook during the 2019 measles outbreak. This “battleground” map reveals the broad dynamics of how distrust in established guidelines is fomented on social media. The authors caution that this distrust can come to dominate public discourse in the future, which would pose a major block against immunization efforts for COVID-19 and future outbreaks.
In strangers on the Internet we trust
“There is a new world war online surrounding trust in health expertise and science, particularly with misinformation about COVID-19, but also distrust in big pharmaceuticals and governments,” says Professor Neil Johnson, lead author of the paper.
“Nobody knew what the field of battle looked like, though, so we set to find out.”
The team examined several Facebook communities totaling almost 100 million individual users. These groups, they explain, formed a dynamic and highly-interconnected network that spanned across national borders and cultures.
Among these groups, three ‘camps’ were identified: pro-vaccination, anti-vaccination, and those of “undecided” individuals (for example, parenting groups which discussed vaccines but didn’t lean either way). The team started with a certain community and would then find another one that had strong links to it, repeating the process until they reached a better understanding of the overall relationships forming among the communities.
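The mapping procedure described above is essentially snowball sampling over a network of communities. Here is a minimal sketch of the idea in Python; the community names and link strengths are entirely invented for illustration (the real study worked from actual Facebook page links):

```python
# Toy snowball crawl over a network of communities. All names and link
# strengths below are hypothetical, purely to illustrate the method.
from collections import deque

# Symmetric link strengths between communities (e.g. shared members/activity)
links = {
    "parenting_tips":  {"vaccine_choice": 8, "natural_health": 5},
    "vaccine_choice":  {"parenting_tips": 8, "anti_vax_hub": 9},
    "natural_health":  {"parenting_tips": 5, "anti_vax_hub": 4},
    "anti_vax_hub":    {"vaccine_choice": 9, "natural_health": 4, "pro_vax_science": 2},
    "pro_vax_science": {"anti_vax_hub": 2},
}

def snowball(start, min_strength=3):
    """Starting from one community, repeatedly follow strong links
    until no new strongly-connected communities turn up."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor, strength in links.get(node, {}).items():
            if strength >= min_strength and neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(snowball("parenting_tips")))
# → ['anti_vax_hub', 'natural_health', 'parenting_tips', 'vaccine_choice']
```

Starting from an undecided “parenting” community, the crawl pulls in every community reachable through sufficiently strong links, while weakly-linked communities never enter the map — which is how a battleground map like the study’s ends up tracing clusters of engagement rather than isolated pages.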
They report that overall, there are fewer individuals who agree with anti-vaccination sentiments than with pro-vaccination ones on Facebook, but there are almost three times as many anti-vaccination communities on the platform as pro-vaccination ones.
Anti-vaccination users utilize these groups to engage with undecided communities, while pro-vaccination users keep largely to themselves, focusing their efforts on countering the larger anti-vaccination groups. This leaves the smaller splinter groups pretty much free to operate with impunity.
Furthermore, while the pro-vaccination camp understandably sticks to one creed (“vaccines work and they’re safe”), their opponents can have their pick of narratives and use this to engage with the undecided. These range from safety concerns and individual choice to conspiracy theories, which they tailor to the particular community they’re addressing at the time.
The team notes that individuals in the undecided communities tended not to sit idly, but were actively engaging with the vaccine content. “The undecided clusters have the highest growth of new out-links [i.e. they’re actively engaging with the other two groups] followed by anti-vaccination clusters,” the paper reads.
“We thought we would see major public health entities and state-run health departments at the center of this online battle, but we found the opposite. They were fighting off to one side, in the wrong place,” Dr. Johnson said.
Social media often works to amplify and equalize information, the team explains, meaning it makes it readily accessible but also gives different opinions the appearance of being equally worth considering (they’re not).
The team proposes several strategies to better combat the spread of misinformation on social media such as influencing the heterogeneity of individual communities (making them more diverse) to delay radicalization and decrease their growth, as well as manipulating the links between communities in order to prevent the spread of negative views.
“Instead of playing whack-a-mole with a global network of communities that consume and produce (mis)information, public health agencies, social media platforms and governments can use a map like ours and an entirely new set of strategies to identify where the largest theaters of online activity are and engage and neutralize those communities peddling in misinformation so harmful to the public,” Dr. Johnson said.
The paper “The online competition between pro- and anti-vaccination views” has been published in the journal Nature.
As the COVID-19 pandemic continues, it becomes abundantly clear that conspiracy theories and misinformation are almost as prone to spreading through the public as the virus itself. Thus far during this global crisis, misinformation has ranged from “coronavirus is just the flu/no worse than the flu” to claims that drinking water every 15 minutes will “flush out” the virus, or that eating cloves of garlic protects against COVID-19. The latter led to a woman being hospitalised in China after eating 1.5 kg of raw garlic.
Whilst conspiracy theories may be slightly less dangerous than pure misinformation, they are no less insidious. Some ‘theories’ that have circulated thus far range from COVID-19 being a “bioweapon” that was “created in a lab” — either genetically engineered or incubated in bat test subjects, in the US or in China, depending on who you believe — to it being a “population control scheme” devised by Bill Gates of Microsoft.
What is very clear is that the “disease vector” responsible for the spread of misinformation and conspiracy is almost certainly social media and, to a wider extent, the internet itself. It is perhaps ironic, then, that the most widespread conspiracy theory — and the one that the most people seem to be lending credibility to — is that 5G, the next generation of mobile internet connection that promises faster upload/download speeds through the use of a wider radio spectrum, is either responsible for the illness that is being blamed on COVID-19 or is somehow facilitating the spread.
In fact, news reports this week indicate that some people are taking this fallacious connection so seriously that they are attacking 5G towers and workers. Just this morning, Birmingham Live in the UK reported that a 5G mast had been set on fire, whilst a video circulating on Twitter shows protesters in Hong Kong tearing down masts.
Whilst it would be easy, and perhaps convenient, to claim this as a new phenomenon, the adoption of COVID-19 as the proof of the “dangers” of 5G is just the latest step in a long smear campaign designed to induce fear about its introduction.
The trepidation around 5G can be traced much further back, beyond its inception, beyond the creation of the internet even. The fear of 5G arises from our fundamental and long-standing misunderstanding about radiation. More specifically about what electromagnetic radiation is, and the difference between ionizing and non-ionizing radiation.
But before tackling the long history of irrational radiation fear, we should take a look at some extant claims and demonstrate how easy they are to dismiss.
Tracking down Patient Zero
Whilst it would be pretty much impossible to track down the first person who connected 5G and COVID-19, it’s far more feasible to separate out some of the most common claims and analyse them. The first 5G/Coronavirus claim that I personally came across was the idea that there actually is “no virus” and that all the symptoms are a result of 5G networks, so let’s consider that claim to be our “Patient Zero.”
In the above screenshot, it’s clear that the roll-out of different forms of communication are being linked to the prevalence of certain viruses. It would be pretty easy to start any kind of debunking by pointing out that everything we know about the viral theory of disease transmission would have to be wrong to accommodate this conspiracy theory. Thus, before we even start, there’s a wealth of evidence — enough to build the foundation of our entire understanding of disease and medicine — to demonstrate this claim is nonsense on toast.
But, where’s the fun in that? Instead, let’s pick apart the claim bit by bit.
Firstly, the suggestion that radio waves were introduced in 1916 is laughable and clearly demonstrates that the people spreading this conspiracy have zero idea what electromagnetic radiation is.
Radio waves are simply low-frequency, long-wavelength, electromagnetic radiation — less energetic than infrared. In fact, they carry with them less energy than the visible light we use to see everything around us.
It should be clear then that if radio waves are responsible for a viral disease, the largest contributor to epidemics should be sunlight.
Radio waves didn’t “emerge” in 1916. In fact, the static that you can hear on an untuned radio partly consists of radio waves that date back to shortly after the Big Bang, emerging from the Cosmic Microwave Background (CMB) that permeates the entire Universe. The Earth also receives a great deal of electromagnetic radiation from the Sun in the form of radio waves. Thus, any technological developments that utilised radio waves simply added to those natural sources.
Looking past that, there is also an issue with the dates offered in this widely circulated social media post. The first commercial radio transmitters and receivers were developed between 1895 and 1896 by Guglielmo Marconi, with radio in wide use by 1900 — well before the 1918 flu pandemic, which lasted until 1920.
The “evidence” put forward by the conspiracy theorists then takes a break of nearly a century until the supposed introduction of 3G in 2003, which is linked to the spread of SARS. The thing is, there were lots of pandemics in this intervening time — Asian flu in 1957 and Hong Kong flu in 1968 for example. These are ignored because they don’t fit the conspiracy theorists’ narrative.
As for 3G, well, its rollout took a protracted period of time. Whilst it was indeed serving Europe in 2003, 3G wasn’t rolled out in Asia until 2006. It took until 2007 to get 3G operational in 40 countries, and it wasn’t introduced in Africa until 2012. The SARS outbreak was first identified in China in 2002 — four years before 3G reached Asia — and was brought under control in July 2003. There was another smaller outbreak in 2004, again in China, still two years before 3G’s introduction there.
The roll-out of 4G was much tighter, taking roughly from 2008 to 2010 to implement. The Swine flu pandemic began in Mexico in 2009 and was over by August 2010. That means that for Swine flu there is at least some correlation, far more than can be attributed to 3G and SARS, which barely overlap at all.
We also have to ignore that coronaviruses, such as those that cause COVID-19, SARS, and MERS, are very different from influenza strains: they can often cause radically different symptoms and most certainly have very different incubation periods. If these ailments had the same root cause — i.e. low-frequency radiation — we should expect them to be similar.
With all these cases, even if you discount the fact that every epidemiologist, doctor, and scientist who works in virology must be “in” on the conspiracy, the strands of “evidence” presented are very easy to dismiss on the basis of a well-known logical fallacy that scientists are always at pains to avoid: a maxim that passed into infamy when a doctor ignored its principles and started a movement that has cost lives across the globe.
Correlation does not equal causation.
The mere fact that two events are correlated does not mean that they are causally linked. A causal link between events has to be established by evidence. To demonstrate this, one only has to see how easy it is to link events like these epidemics to something else unrelated, especially when you omit and distort data. For example, can we really be sure that the American thrash band Metallica aren’t responsible for the epidemics blamed on 5G and other radio wave-based systems?
Picture what follows as a deranged tweet:
“In 1986, Metallica released their masterpiece “Master of the Puppets.” In the same year, America suffered its largest flu epidemic since 1968!
2003, Metallica release the panned “St. Anger” album — SARS happens!
2008, they release “Death Magnetic” shortly after MERS strikes!
And in 2019 the band release “Helping Hands…Live & Acoustic at the Masonic” thus sparking the COVID-19 pandemic and simultaneously proving the Masons were behind this all along!”
What I did there was manufacture a correlation using the “barndoor effect”, better known as the Texas sharpshooter fallacy. Rather than aiming at a target painted on a barn door, I fired a few random shots into it and then painted a target around the bullet holes. Being a crack shot is easy when you cheat. And this is exactly what the people pushing this conspiracy are doing.
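If you want to see the barn-door trick in miniature, here is a small, hypothetical Python experiment (not from the article): generate many unrelated random series and keep only the one that best matches a fixed “outbreak” series. With enough candidates to choose from, an impressive-looking correlation is guaranteed.

```python
# Demonstration of painting the target around the bullet holes: among many
# random, unrelated series, at least one will correlate strongly with any
# fixed dataset purely by chance. All numbers here are random noise.
import random

random.seed(42)

def correlation(xs, ys):
    """Plain Pearson correlation coefficient, implemented from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

outbreaks = [random.random() for _ in range(10)]  # our fixed "epidemic" data

# 1,000 unrelated random series: album releases, mast installations, anything
candidates = [[random.random() for _ in range(10)] for _ in range(1000)]
best = max(abs(correlation(c, outbreaks)) for c in candidates)
print(f"best correlation found among pure noise: {best:.2f}")
```

The “best” candidate will correlate strongly with the outbreak data despite being pure noise; a conspiracy theorist simply reports that one series and discards the other 999, exactly as the 3G/SARS timeline does with the pandemics that don’t fit.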
One of the key issues that still motivates the anti-vax movement is the rise in autism cases and how this seems to correlate with the introduction of the MMR vaccination. The connection was initially drawn by Andrew Wakefield in a 1998 paper published in The Lancet and later retracted. Wakefield himself was struck off the medical register for the unethical procedures he used to obtain his results, but he has been embraced as a hero by the anti-vax — and some would say the anti-science — movement.
All it takes to launch a conspiracy theory based on correlation is a willingness to distort and ignore data, and to bury the fact you have no actual causal evidence.
Let’s bring the viral theory of disease back into play, and look at a slightly toned-down suggestion, the idea that 5G could be weakening our immune systems.
Understanding Non-ionizing Radiation
Again, the main evidence that has been presented for 5G facilitating the spread of COVID-19 has been the correlation between its rollout, the areas of the world in which it is most used, and the timing and location of COVID-19 outbreaks. We can dismiss this by noting, once again, that correlation doesn’t equal causation. So what about the suggested mechanisms by which 5G is supposedly weakening our immune systems?
The idea that 5G weakens the immune system is very similar to claims of electromagnetic hypersensitivity (EHS) in which mild to severe symptoms are connected to exposure to electromagnetic fields (EMF). At the moment the World Health Organisation (WHO) does not consider the symptoms of EHS to be related to exposure to EMF. Likewise, there is no clinical evidence to suggest that 5G can cause harm or weaken the immune system.
Firstly, the human immune system cannot be “weakened” against COVID-19, for the simple reason that this strain of coronavirus is new: we have no immune response to it in the first place. That is what makes it so dangerous; none of our immune systems contain antibodies against this virus yet.
Secondly, the radio waves that form the basis of 5G are non-ionizing. This essentially means that they don’t have the requisite energy to strip electrons from atoms. This is unlike high-frequency electromagnetic radiation like X-rays or gamma-rays which do have the energy to ionize atoms and thus, damage cells.
When electrons are stripped from atoms, those atoms become ionized. This can be a problem in our bodies because the surface (or valence) electrons of an atom determine how it bonds with other atoms. A change in this respect can alter how proteins fold within the body. This might not sound too extreme, but the way a protein folds determines how it functions. Thus, exposure to ionizing radiation can lead to all sorts of nasty effects, including cancer and, yes, weakened immune systems.
Again, radio waves don’t have enough energy to do this, but you may well be asking: what if we’ve been exposed to a lot of radio waves? Surely then there will collectively be enough energy to cause ionization?
The simple answer to this is no. Fortunately, that isn’t how ionization works.
Imagine the valence electron as a rubber duck and the atom to which it is attached as a metal bucket. We start to fill the bucket by pouring water into it — analogous to bombarding our electron and atom with radio waves.
Now in the real world, the water lifts the duck off the bottom of the bucket, and eventually, it spills out. Ionization doesn’t work like this, though. With ionization, the electron doesn’t spill out unless the photons that make up these radio waves individually contain enough energy to make it do so. It doesn’t matter how many photons there are.
Albert Einstein was the first to explain this phenomenon whilst investigating the photoelectric effect. When light hits the surface of a metal, electrons are given off, but lowering the frequency of the light below a certain threshold cuts off the flow of electrons entirely. Surprisingly, lowering the intensity of the light does not stop electrons from being released; it just reduces how many escape.
So for example, a low-frequency light with a high-intensity shining on the surface of a metal will not cause electrons to flow. Yet a high-frequency, low-intensity light will.
Re-running our bucket experiment, this is like saying the duck stays at the bottom of the bucket unless the water is at the correct temperature to make it rise. No matter how much water pours in, that duck ain’t budging. Bring the temperature of the water up, though, and the duck pops out at random: it could take a single drop of water to do it, or it could take a monsoon.
If it seems like this doesn’t make sense, well, yeah. It’s quantum physics. If it confused and terrified Einstein, why should it be comfortable and easy for us to understand?
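To put rough numbers on all this, here is a back-of-the-envelope sketch (my own arithmetic, not from the article; the 3.5 GHz figure is a typical mid-band 5G frequency, assumed for illustration):

```python
# How does the energy of a single 5G photon compare with the energy
# needed to ionize an atom? Photon energy E = h * f.
PLANCK = 6.626e-34   # Planck's constant, in joule-seconds
EV = 1.602e-19       # one electronvolt, in joules

f_5g = 3.5e9         # assumed mid-band 5G frequency, ~3.5 GHz
photon_energy_ev = PLANCK * f_5g / EV   # energy of one 5G photon, in eV

ionization_hydrogen_ev = 13.6           # energy to ionize a hydrogen atom

print(f"one 5G photon:   {photon_energy_ev:.1e} eV")
print(f"energy needed:   {ionization_hydrogen_ev} eV")
print(f"shortfall:      ~{ionization_hydrogen_ev / photon_energy_ev:.0e}x")
```

A single 5G photon falls short of the energy needed to ionize even a hydrogen atom by a factor of roughly a million, and as the bucket analogy shows, piling on more photons doesn’t close that gap.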
Finally, we come to the idea that COVID-19 can somehow utilise 5G signals as a method of transport or even communication.
The Daily Star — the UK’s number one purveyor of pseudo-scientific junk — this week ran an article suggesting that “viruses can talk to each other” and thus make active decisions about who to infect, the implication being that 5G signals are used to do this. Full Fact, the UK’s fact-checking website, links this bizarre claim to a 2011 paper which suggested bacteria can communicate via electromagnetic signals — an idea that is thoroughly disputed and, as you probably noticed, refers to bacteria, not viruses.
The Full Fact article also points out that COVID-19 is spreading in areas of the world with little to no 5G coverage. One of the worst-hit countries is Iran, a country with no 5G networks.
This element of the COVID-19/5G conspiracy really goes to the heart of why we need to step on this “theory” hard and fast. We know how COVID-19 spreads and limiting that spread is vital.
The novel coronavirus moves through contact with small droplets produced when those infected with the virus cough, sneeze, or exhale. Smashing down 5G towers will do nothing to limit the spread. What will limit it is getting people to self-isolate, practice social distancing, and maintain good hygiene when they can’t. Wearing protective gear such as masks and gloves has also been shown to have some positive effects.
To get people to do that, we must show them that conspiracy theories like those listed — and, I hope, thoroughly debunked — above are nonsense, ensuring in turn that they are listening to good information and not outdated, irrational fears about “radiation.”
Bad News comes bearing good news. The game about propaganda and disinformation, that is.
Image credits DROG.
An online game that puts players in the role of propaganda producers can help them spot disinformation in real life, a new study reports. The game, christened Bad News, was effective in increasing players’ “psychological resistance” to fake news.
“Research suggests that fake news spreads faster and deeper than the truth, so combating disinformation after-the-fact can be like fighting a losing battle,” said Dr. Sander van der Linden, Director of the Cambridge Social Decision-Making Lab.
Researchers at the University of Cambridge helped develop and launch the browser-based video game back in February of 2018, in collaboration with Dutch media collective DROG and design agency Gusmanson. Since then, thousands of people have played the game — which takes about fifteen minutes from start to finish — with many, yours truly included, submitting their data to be used for this study.
In Bad News, your job is to sow anger and fear by creatively tweaking news and manipulating social media. Throughout the game, you’ll find yourself calling on Twitter bots, photoshopping ‘evidence’, and churning out conspiracy theories to attract followers. It’s quite a good game, and a pretty eye-opening one at that, because you have to walk a very thin line: on the one hand, you want as many people as possible to start following and believing you; on the other, you need to rein yourself in somewhat to protect your “credibility score”.
What the team wanted to determine is whether the game can help people spot fake news and disinformation in real life. The results suggest it can.
“We wanted to see if we could preemptively debunk, or ‘pre-bunk’, fake news by exposing people to a weak dose of the methods used to create and spread disinformation, so they have a better understanding of how they might be deceived,” Dr. van der Linden explains. “This is a version of what psychologists call ‘inoculation theory’, with our game working like a psychological vaccination.”
Players were asked to rate the reliability of a series of headlines and tweets before and after gameplay. Participants were randomly allocated a mixture of real news (the control group) and fake news (the “treatment” group). The team reports that members of the treatment group were 21% less likely to perceive fake news headlines as reliable after playing the game. Bad News had no impact on how these participants ranked real news in terms of reliability.
There are six “badges” people can earn in the game for the six most common strategies used by fake news producers today: impersonation; conspiracy; polarisation; discrediting sources; trolling; emotionally provocative content. In-game questions measuring the game’s impact were issued for four of these badges (to limit bandwidth usage). From pre- to post-gameplay, the results show that Bad News:
Reduced perceived reliability of the fake headlines and tweets by 24% for the disinformation tactic of “impersonation” — i.e. the mimicking of trusted personalities on social media.
Reduced perceived reliability of “polarisation” — i.e. the use of highly-polarizing, emotionally-provocative headlines — by about 10%.
Reduced perceived reliability of “discrediting” — i.e. attacking a legitimate source with accusations of bias — by 19%.
Reduced perceived reliability of “conspiracy” — i.e. the spreading of false narratives blaming secretive groups for world events — by 20%.
Those who were most susceptible to fake news headlines at the outset of the study benefited most from this “inoculation”, the team adds.
“We find that just fifteen minutes of gameplay has a moderate effect, but a practically meaningful one when scaled across thousands of people worldwide, if we think in terms of building societal resistance to fake news,” adds van der Linden.
“We are shifting the target from ideas to tactics,” says Jon Roozenbeek, study co-author also from Cambridge University. “By doing this, we are hoping to create what you might call a general ‘vaccine’ against fake news, rather than trying to counter each specific conspiracy or falsehood.”
The team worked with the UK Foreign Office to translate the game into nine different languages including German, Serbian, Polish, and Greek. They’ve also developed a “junior version” of the game aimed at children aged 8-10 which is available in ten different languages so far. The goal is to “develop a simple and engaging way to establish media literacy at a relatively early age”, Roozenbeek explains, and then see how long the effects last.
Still, so far, the data isn’t conclusive. The major limitation of this dataset is that it used a self-selecting sample, namely those who came across the game online and opted to play. As such the results are skewed toward younger, male, liberal, and more educated demographics. Even with this limitation, the team says controlling for various characteristics showed that the game was almost equally effective across age, education, gender, and political persuasion. Part of that comes down to the fact that Bad News has an ideological balance built in, the team explains: players can choose to create fake news from the left and right of the political spectrum.
“Our platform offers early evidence of a way to start building blanket protection against deception, by training people to be more attuned to the techniques that underpin most fake news,” Roozenbeek concludes.
If you’re reading this, there’s a very good chance you tried to combat some science misinformation at some point. Whether it’s an antivaxxer friend, that climate change-denying uncle, or just some internet comment, disinformation has become so pervasive that it’s impossible to avoid — and if you’ve tried to talk them out of it, you know just how insanely difficult it can get.
Now, in a new study, researchers describe some evidence-based strategies to combat the misinformation.
Transparency is a must, researchers say.
Misinformation is hardly a new thing: it artificially generated backlash against climate change science, ironically, just as the scientific community was reaching a consensus on the issue.
“Nowhere has the impact of scientific misinformation been more profound than on the issue of climate change in the United States,” researchers write in the study.
That might seem like a contradiction, but it is actually the result of a carefully planned strategy. An organized network, funded by organizations with a lot of money invested in the fossil fuel industry, devised a campaign to slow down the transition to a low-carbon economy, especially by eroding public confidence in climate change. The result was a large-scale misinformation campaign which was wildly successful, says Justin Farrell, of the Yale School of Forestry & Environmental Studies (F&ES).
In a new paper, Farrell and colleagues shed new light on this misinformation, and describe four evidence-based strategies to combat it:
Public inoculation: A growing body of research shows that our individual perceptions are strongly influenced by our culture — the set of pre-existing ideologies and values we hold. However, there is more and more evidence showing that we can use this to “inoculate” against misinformation. Inoculation essentially means exposing people to common misleading arguments, along with their refutations, before they encounter them in the wild — fittingly, like a “vaccine”. This strategy can be very effective if done quickly, before misinformation spreads, and if more attention is placed on the sources of misinformation.
Legal strategies: It’s well-known that fossil fuel companies such as ExxonMobil have systematically downplayed the risks of their products; industry leaders knowingly misled the public. As a response, several cities and states have sued these companies. While this is a lengthy process, it can help shed new light on how these companies lied to the public and to political leaders.
Political mechanisms: Like public opinion, political opinion has also been swayed — but the exact way in which this happens remains difficult to assess. For instance, the authors identify a case in which the energy company Entergy Corporation paid actors who posed as grassroots supporters of a controversial power plant in New Orleans — but this got little attention in the media, and it’s unclear how politicians were swayed by this manipulation campaign. Placing cases like this under the spotlight, and discussing political candidates’ views on climate change more openly, could encourage the election of responsible leaders into office. Politicians’ financial connections to fossil fuel companies also need far more scrutiny — which leads us to the last point.
Financial transparency: “Follow the money” is generally a pretty good plan. The number of campaigns that promote science misinformation — funded through donor-directed foundations that shield the contributors’ identities from the public — has grown dramatically in the past few years, topping $100 million. This is done precisely to make it difficult to learn who is behind a disinformation campaign and to spread haze around the money trail. How often have we heard that scientists are “lying about climate change to get research money” — when the reality couldn’t be further from the truth? The authors call for new legislation to improve funding transparency.
“Ultimately we have to get to the root of the problem, which is the huge imbalance in spending between climate change opponents and those lobbying for new solutions,” said Farrell. “Those interests will always be there, of course, but I’m hopeful that as we learn more about these dynamics things will start to change. I just hope it’s not too late.”
While the study was focused on climate change, similar strategies can be used to address all sorts of misinformation — from antivaxxing and homeopathy to astrology and conspiracy theories.