It’s getting harder and harder to tell fact from half-truth from fiction. Entire websites are devoted to misinforming readers and defaming individuals, communities and groups. Their social-media accounts amplify the lies so successfully that one can scarcely hear the opposing voices of reason.
Fake news spreads fast too. A video uploaded in the US can find viewers in India, Nigeria and Brazil in under a minute. Platforms that host fake news seem either unable or unwilling to address the issue. And almost no one’s immune to receiving doctored graphics and images online.
Fact-checking organisations have had their hands full this year. At Global Fact 8, the world’s largest annual fact-checking conference, held virtually this year in October, the focus was on sharing resources, protecting the mental health of fact-checkers, and supporting specialised new roles. Here are some pointers from the sessions, to help you with the next is-this-odd-or-is-this-just-untrue forward you see on Twitter, Facebook, YouTube or WhatsApp.
The problem is global, so is the solution: Misinformation is now an organised industry, with money funnelled into pushing a skewed view of what’s happening and why, and teams of writers and graphic designers on the payroll. So fact-checkers around the world, from the Philippines and Brazil to South Africa, India and Norway, now face the same challenges. Claire Wardle, who leads strategy and research at the New York fact-checking organisation First Draft, delivered the keynote address at Global Fact 8. “Information’s first-responders” will benefit from adopting global best practices, she said. Teams in different parts of the world must “collaborate as much as possible to avoid duplication”. And checkers must collect their own data and build their own archives rather than rely on social-media companies that tend to withhold key information.
The villain is not just your link-forwarding uncle, it’s often the platform too: At a panel titled What does the fact-checking community want from YouTube?, Brandi Geurkink, senior manager of advocacy at the non-profit Mozilla Foundation (which also runs the Firefox browser), shared results of an in-house investigation involving more than 37,000 volunteers across 191 countries. It showed that “71% of the videos that YouTube viewers deemed regrettable were ones recommended by the site’s own algorithm”, Geurkink said. Not only does this violate the platform’s own content policies, YouTube also offers no clarity on why some videos that promote disinformation and misinformation are removed while others are retained.
Platforms and networks aren’t doing enough: In November, Meta announced a new third-party fact-checking mentorship programme with $450,000 in funding. It is aimed at improving processes, sharing resources and helping organisations in more regions tackle harmful trends. But critics point out that its own policies are often at odds with its new initiative.
Fact-checking is taking its toll on mental health: “Many of you spend all of your days looking at the worst parts of the internet,” said Wardle, addressing fact-checkers at the conference. Numerous studies have shown that investigators who are constantly exposed to hate speech, malicious content and doctored images end up having trouble recognising legitimate news after a while. “Even if you’re a fact checker, even if you’re a journalist, even if you’re a researcher, it starts to mess with your ability to evaluate truth,” Wardle said. The cynicism can ultimately affect mental and emotional health. Organisations must also acknowledge how the job affects those who monitor disinformation that targets their own communities, Wardle said.
Non-English speakers are more vulnerable: The Mozilla Foundation study found that in countries where English was not the primary language, YouTube users were 60% more likely to be recommended videos that they would consider disturbing. Policy improvements are typically rolled out in Anglophone nations. Users and policymakers around the world must demand more transparency from hosting platforms, Geurkink said.
The pandemic made it worse, but there’s hope: In April last year, a single rumour claimed more than 700 lives in Iran. Word spread through local text networks that drinking methanol could kill the coronavirus, prompting people to ingest the toxic chemical. Fact-checkers everywhere have found that a plague of misinformation built up as the virus spread around the world. In Latin America, the Lupa Colectiva (lupa is “magnifying glass” in Spanish) now focuses solely on public-health fact-checks for Spanish speakers. Investigation groups globally are attempting to “inoculate” the public against malicious content. “If you show people examples of misinformation, they will be better equipped to spot it and question it,” said Wardle, describing a process she calls “prebunking”.
The climate crisis is the next fact-checking frontier: Misinformation about climate change is polluting social media feeds and message groups, often with help from corporations and even governments. The International Fact-Checking Network, a coalition of more than 100 organisations around the world, has partnered with Facebook to disburse $800,000 worth of grants to climate groups and environmental non-profits to curb misleading information and greenwashing over the coming year. Artificial intelligence is being roped in too. As the COP26 summit unfolded, Eco-Bot.Net, an AI-powered site, exposed, in real time, false claims and misinterpretations by speakers, and showcased data on polluting sectors such as energy and aviation.
We’re learning who tends to share fake news and why: A 2021 study by Duke University’s Fuqua School of Business suggests that impulsive people with conservative politics are more likely to share fake-news stories, even when they know the story may be suspect or outright false. Individuals who are less conscientious or cautious in real life are also more likely to amplify information they do not believe to be true. And a 2021 study by researchers from City University of New York, Temple University in Philadelphia, and the University of Houston, indicated that users with verified social-media handles are more likely to spread disinformation and less likely to recall or delete a debunked post. The study was based on the activities of 5,000 users of the Chinese social media site Weibo. Look through your own feed and you’ll realise the theory applies outside China too.