Tag Archives: journalism

Journalists are blamed for exaggerating, but a new study finds they tend to temper, not exaggerate, scientific claims

The trope of the journalist writing in all caps and exaggerating or misrepresenting scientific findings is well established. But while examples like these do exist (and they deserve to be called out), journalists in general are pretty careful when communicating scientific findings — that's exactly what a new scientific study found.

Image credit: Flickr / WCN

In science, absolute certainty is often hard to achieve. Uncertainty is part of the process and doesn't mean that a theory is wrong. Scientists have developed specialized, often opaque ways to discuss scientific uncertainty, but outside the scientific community, these methods and the associated terminology can be confusing and lead to incorrect conclusions.

This is especially true for science journalists, who have to report regularly on scientific research and can sometimes ignore or fail to understand scientific uncertainty. However, this is the exception rather than the rule, according to a new study. Researchers found that journalists are overall careful when communicating science.

“I feel like when we talk about the potential of journalists exaggerating claims, it’s always these extreme cases,” David Jurgens, assistant professor at the University of Michigan, said in a statement. “We wanted to see if there was a difference when we lined up what the scientist said and what the journalist said for the same paper.”

Understanding science communication

Science journalists play an important role in shaping public understanding of science, brokering information between the scientific community and the general public. Their judgment and expertise allow the public to engage with new scientific findings, and the way they frame those findings influences public opinion — because let's face it, when's the last time you read an actual scientific study?

Many studies have looked at journalism amid scientific uncertainty over the years. Last year, a group of US researchers looked at how journalists communicated preprints in the early days of the Covid-19 pandemic, finding that half of the stories analyzed from digital media outlets contained framing devices emphasizing uncertainty.

In a new study, David Jurgens and Jiaxin Pei from the University of Michigan looked at how journalists communicate scientific uncertainty and whether scientific claims get exaggerated along the way. They also wanted to explore how science claims in the news differ between well-respected and less rigorous publications.

“Our findings suggest that journalists are actually pretty careful when reporting science,” Pei said in a statement, highlighting the skills needed to translate science to a general audience. “Journalists have a hard job. It’s nice to see that they really are trying to contextualize and temper scientific conclusions within the broader space.”

The researchers gathered news data from Altmetric, a company that tracks mentions of scientific papers in news stories. They collected about 129,000 news articles that mentioned scientific papers. Within those stories, they analyzed sentences that included words such as "conclude" to see how journalists were stating the papers' claims.

They established certainty levels for more than 1,500 scientific findings. Jurgens and Pei then built a computer model to see if they could replicate the certainty levels that human readers identified. The model correlated strongly with human assessments of how certain a claim was. It is not perfect, but it's good enough to capture what's going on. Overall, journalists stayed true to the level of certainty presented in the studies they covered, the researchers note.
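To make the approach concrete, here is a minimal, hypothetical sketch in Python of cue-based certainty scoring. The cue-word lists and the scoring rule are my own assumptions for illustration; the study's actual model was trained on human annotations, not this word-counting heuristic.

# Illustrative only: the study used a trained model, not this heuristic.
import re

HEDGES = {"may", "might", "could", "suggests", "suggest", "possibly",
          "appears", "preliminary", "potentially"}
BOOSTERS = {"proves", "prove", "demonstrates", "confirms",
            "definitely", "clearly"}
REPORTING_CUES = {"conclude", "concludes", "concluded", "find",
                  "finds", "found", "show", "shows", "demonstrate"}

def claim_sentences(text):
    """Keep only sentences that report a finding, e.g. contain 'conclude'."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if REPORTING_CUES & {w.lower().strip(".,") for w in s.split()}]

def certainty_score(sentence):
    """Crude score: booster words raise certainty, hedges lower it."""
    words = {w.lower().strip(".,") for w in sentence.split()}
    return len(words & BOOSTERS) - len(words & HEDGES)

news = "The researchers conclude the drug may reduce symptoms."
paper = "We conclude that the treatment clearly reduces symptoms."
for s in claim_sentences(news) + claim_sentences(paper):
    print(certainty_score(s), "-", s)

A real system would then compare such scores between a paper's own claim sentences and the news sentences describing them, rather than scoring them in isolation.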

However, Pei said the journalists' work can get trickier when the quality of the journal is taken into account. News writers tend to report the same levels of certainty no matter where a study was published, he argued. This can be problematic for the audience, Pei said, as journal impact factor is one indicator of research quality (though that, too, is a matter of debate).

The researchers believe their work is a big step forward in understanding and quantifying how uncertainty is communicated in scientific news. To further help reporters and scientists, they created software that helps to calculate the uncertainty in research and reporting. This could also benefit the general audience, they argue, providing a “calming effect” to some degree.

The study was published in the Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.

Can AI replace newsroom journalists?

It’s no secret that journalism is one of the most fragile industries in the world right now. After years where many publishers faced bankruptcy, layoffs, and downsizing, then came the coronavirus crisis — for many newsrooms, this was the final nail in the coffin.

Alas, even more problems are on the way for publishers.

Late last month, Microsoft fired around 50 journalists in the US and another 27 in the UK who were previously employed to curate content from outlets to spotlight on the MSN homepage. Their jobs were replaced by automated systems that can find interesting news, change headlines, and select pictures without human intervention.

“Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, redeployment in others. These decisions are not the result of the current pandemic,” an MSN spokesperson said in a statement.

While it can be demoralizing for anyone to feel obsolete, we shouldn’t call the coroner on journalism just yet.

Some of the sacked journalists warned that artificial intelligence may not be fully familiar with strict editorial guidelines. What’s more, it could end up letting through stories that might not be appropriate.

Lo and behold, this is exactly what happened with an MSN story this week, after the AI mixed up the photos of two mixed-race members of British pop group Little Mix.

The story was about Little Mix singer Jade Thirlwall’s experience with racism. However, the AI used a picture of Thirlwall’s bandmate Leigh-Anne Pinnock to illustrate it. It didn’t take long for Thirlwall to notice, posting on Instagram where she wrote:

“@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group.”

She added: “This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke … It offends me that you couldn’t differentiate the two women of colour out of four members of a group … DO BETTER!”

By the looks of it, Thirlwall seemed unaware that the confusion was caused by an automated system. It's possible the error was due to mislabelled pictures provided by wire services, although there's no way to tell for sure, because MSN has offered little detail beyond a formal apology.

“As soon as we became aware of this issue, we immediately took action to resolve it and have replaced the incorrect image,” Microsoft told The Guardian.

Are we entering the age of robot journalism?

My fellow (human) colleagues might rejoice at this news, but really, this sort of mix-up happens all the time in newsrooms — even the best of them. For instance, the BBC had to issue a formal apology after one of its editors used footage of LeBron James to illustrate the death of fellow basketball star Kobe Bryant.

And while some might believe that curating content is an entirely different matter from crafting content from scratch, think again. The Washington Post has invested considerably in AI content generation, producing a bot called Heliograf that writes stories about local news that the staff didn’t have the resources to cover.

The Associated Press uses a similar system. Such robots are based on natural language generation software that processes information and transforms it into news copy: they scan data from selected sources, select an article template from a range of preprogrammed options, then add specific details such as location, date, and the people involved.

For instance, the following short news story, which appeared in the Wolverhampton paper the Express and Star, was written by AP's robot.

The latest figures reveal that 56.5 per cent of the 3,476 babies born across the area in 2016 have parents who were not married or in a civil partnership when the birth was registered. That’s a slight increase on the previous year.

Marriage or a same-sex civil partnership is the family setting for 43.5 per cent of children.

The figures mean that parents in Wolverhampton are less likely to be married before having children than the average UK couple. Nationwide, 52.3 per cent of babies have parents in a legally recognised relationship.

The figures on births, released by the Office for National Statistics, show that in 2016, 34 per cent of babies were registered by parents who are listed as living together but not married or in a civil partnership.
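To illustrate the template-fill approach described above, here is a minimal, hypothetical sketch in Python. The template wording, field names, and the prior-year figure are my own inventions for illustration, not AP's actual software.

# Toy template-fill news generator, in the spirit of NLG news bots.
def pick_direction(current, previous):
    """Choose wording based on how the figure compares year on year."""
    return "increase" if current > previous else "decrease"

story_template = (
    "The latest figures reveal that {pct} per cent of the {births} babies "
    "born across {area} in {year} have parents who were not married or in "
    "a civil partnership when the birth was registered. That's a slight "
    "{direction} on the previous year."
)

data = {  # in a real pipeline, these values come from a statistics release
    "area": "the area",
    "year": 2016,
    "births": "3,476",
    "pct": 56.5,
    "direction": pick_direction(56.5, 55.9),  # 55.9 is a made-up prior-year figure
}

print(story_template.format(**data))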

Unlike humans, robots never tire and can produce thousands of such stories per day. There's a silver lining for us journalists, though — we may have a future yet.

While robots shine when reporting simple, linear stories such as football scores, medal tallies, company profits, and just about anything where the numbers alone tell the story, they are still poor at nuanced language and analysis. Can you imagine reading an opinion piece written by a robot? Would you trust a robot to write an essay, for that matter? Not really? I thought so, too.

A similar argument can be made for education. Customized learning is one of the main areas where AI is set to have a significant impact. It used to be unthinkable to imagine one-on-one tutoring for each and every student, for any subject, but now artificial intelligence promises to deliver. For instance, one US-based company called Content Technologies Inc is leveraging deep learning to 'publish' customized books — decades-old textbooks automatically revamped into smart, relevant learning guides.

But that doesn't mean human teachers can be scrapped entirely. Teachers will still have to help students develop non-cognitive skills such as confidence and creativity, which are difficult, if not impossible, to learn from a machine. Simply put, there's no substitute for good mentors and guides.

Humans are still much better than AIs at reasoning and storytelling — arguably the most important journalistic qualities.

Personally, I hope that ZME readers appreciate the fact that there are real humans who care and put great thought into crafting our stories. We’re not done just yet, so until our robot overlords are ready to take over, perhaps you can stand us a while longer.

EU-funded fake news spotting tool gets better and better

Journalism may get an extra boost from fact-checking algorithms. Image via Public Domain Pictures.

The Pope endorses Donald Trump! Or does he? Vaccines cause autism! No, they don't (really, they don't). Every day we're bombarded with information and news, much of which is simply not true. Fake news has become a part of our lives, and many such stories are compelling enough to make people believe them and, oftentimes, share them on social media. The world is still scrambling to adapt to this new situation, and a definitive way to combat fake news quickly and efficiently has yet to emerge.

Fact checking on steroids

With that in mind, the EU started a new project called Pheme, after a Greek goddess. The Pheme project brings together IT experts and universities to devise technologies that could help journalists find and verify online claims. It’s very difficult for artificial intelligence to detect satire, irony, and propaganda, but Pheme has reportedly been making significant advancements in this area.

Unverified content is dominant and prolific in social media messages, Pheme scientists say. While big data typically presents challenges in information volume, variety, and velocity, social media presents a fourth: establishing veracity. The Pheme project aims to analyze content in real time and determine how accurate its claims are.

Fact-checking is an often overlooked aspect of modern journalism. It takes a lot of time, it doesn't add anything "spicy" to media content, and you rarely hear about the people doing it. As a result, many media outlets do something else entirely: make stuff up. Half-truths and misinformation (or, as the White House prefers to call them these days, alternative facts) have been running rampant on Facebook and Twitter, with millions of users spreading them without bothering to check whether they're real. Yes, users carry a part of the blame here, too.

Well, researchers want to find an algorithmic solution to this human problem. They hope to do so by analyzing language use and the spread of information through the network, as well as the overall context of the information itself. Basically, they want to build a real-time lie detector for social media, flagging hoaxes and myths before they go viral — much like an antibiotic taken as the symptoms start to set in. Ideally, we'd have a vaccine for this — but the vaccine, in this case, is education: convincing people to fact-check things before believing them, which is either not happening or will take a very long time. A faster solution is needed, and Pheme could be one part of it.
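As a toy illustration of the kind of signals such a detector might combine, consider the sketch below. The features, weights, and thresholds are entirely made up for illustration; Pheme's real models rely on trained classifiers over language and network-propagation data.

# Toy rumour scorer combining linguistic and propagation signals.
# Features and weights are invented for illustration only.
HEDGE_WORDS = {"allegedly", "reportedly", "unconfirmed", "rumor", "claims"}

def suspicion_score(post_text, num_sources, shares_per_minute):
    """Return a 0-1 score; higher means more likely to be a rumour."""
    words = set(post_text.lower().split())
    score = 0.0
    score += 0.3 * len(words & HEDGE_WORDS)          # hedging language
    score += 0.3 if "?" in post_text else 0.0        # questioning tone
    score += 0.2 if num_sources == 0 else 0.0        # no corroborating sources
    score += 0.2 if shares_per_minute > 50 else 0.0  # suspiciously viral
    return min(score, 1.0)

post = "BREAKING: Pope allegedly endorses candidate?"
print(suspicion_score(post, num_sources=0, shares_per_minute=120))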

The project is named after Pheme, the Greek goddess of fame. Image credits: Luis García.

They focus on two scenarios: lies about diseases and healthcare, which can be especially dangerous, and information used and published by journalists. Pheme addresses speculation, controversy, misinformation, and disinformation, in what can only be described as a broad, ambitious attempt. If this works out, as cliche as it sounds, it has the potential to revolutionize how we receive information and change the world forever.

Pheme will not only analyze news and stories, but it will also try to identify… memes. The team coined the term phemes to describe memes enhanced with truthfulness information. Helping such phemes spread could not only make your coworkers laugh, but also help propagate truthfulness instead of misinformation.
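One way to picture a pheme as a data structure is a claim bundled with a verdict and supporting evidence. This is a hypothetical sketch; the field names and the placeholder URL are my own, not the project's actual format.

# Hypothetical sketch of a "pheme": a meme annotated with veracity data.
from dataclasses import dataclass

@dataclass
class Pheme:
    content: str        # the meme or claim being shared
    verdict: str        # e.g. "true", "false", "unverified"
    confidence: float   # how sure the verification system is (0 to 1)
    evidence_url: str   # link to the fact-check or source

vaccine_pheme = Pheme(
    content="Vaccines cause autism",
    verdict="false",
    confidence=0.99,
    evidence_url="https://example.org/fact-check",  # placeholder URL
)
print(vaccine_pheme)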

The cool thing is that the team will release all of this as open-source software for journalists worldwide. The project will also reportedly develop a free-to-use platform where anybody can filter and verify media claims through an interactive, intuitive dashboard.

Of course, Pheme is not the only project of this type. Facebook and Google are working on their own fake news detectors, as are several other tech giants and research institutes. The acuteness of the fake news problem is impossible to ignore, and its impact on the world should not be underestimated. The stakes have never been higher.