The harmful impact of dis- and misinformation online has been amplified by recent high-profile events, such as the COVID-19 pandemic and Russia’s invasion of Ukraine. Tackling this problem is therefore of growing importance for governments, tech firms, mainstream media and even consumers. Speaking during the recent Westminster Media Forum policy conference ‘Next Steps for Tackling Fake News and Improving Media Literacy,’ Claire Gill, partner at law firm Carter-Ruck, observed the particular difficulty of detecting disinformation in relation to the Russia-Ukraine conflict, “where the source of the fake news is not social media but state-controlled press, so whether we can rely on social media or mainstream press to illuminate the truth really depends on where you’re from.”
Achieving this in a way that retains fundamental principles around free expression is a major challenge. Gill noted: “Those measures have to be counterbalanced against the right to freedom of expression and they have to be proportionate to the harm that is being caused.”
During the online event, a session was dedicated to analyzing how fake news and online misinformation can be tackled in a non-authoritarian way that does not impact free expression.
Hazel Baker, head of digital newsgathering and verification at Reuters, highlighted a new verification service that the news agency has developed to debunk misinformation, Reuters Fact Check. The service was created at the start of 2020, partly in anticipation of the US election cycle, an event expected to be a hotbed of fake news.
Baker explained that the team had to carefully define the terms of the service – in other words, which claims it would analyze. This is because “we encounter many pieces of content per day and many hundreds of claims per day, and if we don’t prioritize how we’re identifying which claims to address, we are leaving ourselves open to accusations of bias and unfair treatment.”
This is decided based on three main factors:
- Relevance: The claim should be topical and in the public interest.
- Reach: The claim should be gaining traction or have the clear potential to go viral.
- Impact: The claim could cause harm or negative consequences if not addressed.
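As a rough illustration only – not Reuters’ actual tooling – the sketch below shows how a fact-checking desk might score and rank incoming claims against these three factors. The equal weighting, field names and capacity limit are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    relevance: float  # topicality / public interest, scored 0-1
    reach: float      # current traction or clear viral potential, scored 0-1
    impact: float     # potential harm if left unaddressed, scored 0-1

def priority(claim: Claim) -> float:
    # Equal weighting is an assumption; a real desk would tune or override this.
    return (claim.relevance + claim.reach + claim.impact) / 3

def triage(claims: list[Claim], capacity: int = 10) -> list[Claim]:
    """Return the highest-priority claims the team has capacity to check."""
    return sorted(claims, key=priority, reverse=True)[:capacity]

if __name__ == "__main__":
    queue = [
        Claim("Viral video claims to show event X", relevance=0.9, reach=0.7, impact=0.8),
        Claim("Celebrity rumour Y", relevance=0.3, reach=0.9, impact=0.2),
    ]
    for claim in triage(queue, capacity=1):
        print(claim.text)  # prints the event-X claim, which scores highest
```

In practice such a score would guide editors rather than act as an automatic gate, but it makes the prioritization criteria explicit and auditable – which is precisely the defence against accusations of bias that Baker describes.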
Baker added that, as part of providing this service, Reuters had signed up to the International Fact-Checking Network (IFCN) Code of Principles, which comprises:
- A commitment to non-partisanship and fairness.
- A commitment to standards and transparency of sources.
- A commitment to transparency of funding and organization.
- A commitment to standards and transparency of methodology.
- A commitment to an open and honest corrections policy.
She then explained how the fact-checking process works once content is selected. The first stage is to assess how the claims came to be. Baker noted that “it is really rare for us to encounter entirely fabricated claims; normally, there is a grain of truth in the center.” Therefore, it is crucial to understand “how the facts may have been distorted or exaggerated to reach something misleading or misinformation.”
The content is then reviewed against a wide range of primary sources, including officials, scientists, academics and others with first-hand knowledge of the situation. “Pulling all this together, we can form robust conclusions and offer some context,” said Baker.
"It is really rare for us to encounter entirely fabricated claims; normally there is a grain of truth in the center"
Reuters then publishes these fact checks on its website, which provides an opportunity “to show readers and wider audience some media literacy skills that we use as journalists but can be used much more widely.” These techniques include reverse image search tools, corroborated imagery and corroborating accounts.
Following Baker’s presentation, Rebecca Skippage, disinformation editor at the BBC, outlined the steps the BBC takes to combat surging disinformation. She observed that disinformation journalism is ultimately about getting “accurate and impartial information to everybody.”
In addition to traditional journalistic techniques, such as using sources and fact-checking, Skippage said tackling this issue involves spotting trends in terms of what people are talking about – “it’s very audience-focused.”
She explained that the BBC looks to do three things with disinformation content online: debunk and verify it, join the dots to show how disinformation networks operate, and show its real-world impact.
Skippage noted that the disinformation team’s work has been particularly vital in the past few years due to a raft of conspiracy theories and disinformation during the COVID-19 pandemic, “which became a threat to health and even to human life.”
The current conflict between Russia and Ukraine is now the BBC disinformation team’s primary focus. In the weeks leading up to the invasion, much of this work centered on checking “false justifications for military action.” These included Kremlin claims of the need to “denazify” Ukraine. Following the invasion, there has been a plethora of videos uploaded to social media “purporting to tell what was going on.”
Verifying such content as quickly as possible is therefore critical, and it requires a significant amount of collaboration – between fact-checking teams, open-source intelligence experts, disinformation experts, and language and regional experts.
Among the technologies used to help in this endeavor are geolocation tools to “geolocate the footage, confirm accents, terrain and the types of military vehicles.”
As with Reuters, Skippage explained that the BBC uses its work on disinformation to improve media literacy: “so we show our working out and we explain the tools and techniques to empower our audience to spot, avoid and protect themselves and their networks from disinformation in the future.”
The next speaker at the event was Katy Minshall, UK head of public policy at Twitter, who highlighted recent initiatives the social media firm had taken to help prevent the spread of misinformation on its platform. She stated that improving media literacy on Twitter is especially vital because “headlines, global memes and powerful movements all start on Twitter.” Therefore, “as a platform, we have to do all we can to ensure those conversations are healthy; we’re encouraging critical thinking and media literacy.”
However, she emphasized that any solutions must not “undermine any aspects” of the discussions taking place on the public platform.
Minshall then discussed a series of ‘nudges’ that Twitter has introduced in recent years to reduce the impact of fake news in a light-touch, educational manner. One of these aims to address the fact that many people share articles on Twitter based on their headlines alone, without reading them first. When users attempt to share such an article, a prompt appears stating ‘Headlines don’t tell the full story’ and inviting them to read it. This technique resulted in a 40% increase in people reading the article and an 11% decrease in users subsequently sharing it. Similar initiatives have been put in place for other challenges on the platform, such as abusive replies to tweets and providing links to official sources of information regarding COVID-19.
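To make the mechanism concrete, here is a minimal, hypothetical sketch of such a pre-share check – the data structures, prompt trigger and wording are assumptions for illustration, not Twitter’s actual implementation or API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ShareAttempt:
    user_id: str
    article_url: str
    opened_urls: set = field(default_factory=set)  # links this user has actually opened

def pre_share_nudge(attempt: ShareAttempt) -> Optional[str]:
    """Return a nudge prompt if the user is sharing an article they haven't opened."""
    if attempt.article_url not in attempt.opened_urls:
        # Light-touch friction: prompt rather than block the share.
        return "Headlines don't tell the full story. Want to read the article first?"
    return None

# Example: the user shares a link they never opened, so the prompt is shown.
attempt = ShareAttempt(user_id="u1", article_url="https://example.com/story")
print(pre_share_nudge(attempt))
```

The point, as Minshall describes it, is friction rather than prohibition: the user can still share after seeing the prompt, but has been encouraged to pause and think first.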
“It’s a good illustration of if you can put some friction in the system, encourage people to pause and think before they tweet, it can have a really important benefit from a media literacy perspective,” said Minshall.
"If you can put some friction in the system, encourage people to pause and think before they tweet, it can have a really important benefit from a media literacy perspective”
The importance of open data in helping find new approaches to combatting online misinformation was also emphasized by Minshall during her presentation. “When we think about media literacy and how we address some of the challenges in the long-term, what’s fundamental is that data is made available to experts and researchers across the board so that we can collectively enhance our understanding of the challenges, what works and what doesn’t work,” she commented.
The role of artificial intelligence (AI) and machine learning in reducing the harm of disinformation formed the next part of the discussion. This was delivered by Lyric Jain, CEO of Logically, a company whose mission is to “flatten the curve of harm caused by information threats,” particularly in relation to critical areas like health and national security.
He pointed out that “one of the biggest challenges in the current iteration of mis and disinformation online is the speed at which content and campaigns can go viral and reach a lot of people.”
Therefore, these “information threats” must be identified quickly to prevent significant harm from being caused. Jain said interventions need to be launched within 30 minutes to one hour of the content going live “to ensure they are most effective in a disinformation context.” This fast intelligence can be achieved with a combination of human expertise and technology to “boil down millions of pieces of content to a few salient threats that an organization or individual needs to identify.”
He explained that the AI technology rapidly checks the content against data such as historic misinformation, the track record of the authors and publishers behind it, and whether the same information has been shared across other platforms. Jain noted that “mis and disinformation rarely live on one platform and is siloed. They are almost always cross-platform.”
Once a likely case of disinformation has been identified, a range of countermeasures can be employed, depending on the severity of the disinformation – these range from publishing a fact check or nudge to escalating it to law enforcement.
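Logically has not published its pipeline in detail, but the description above suggests a flow along these lines: combine signals – similarity to known misinformation, the track record of the source, and cross-platform spread – into a severity estimate, then map that severity to a countermeasure. The sketch below is an assumed illustration of that flow, with made-up weights and thresholds, not the company’s implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    MONITOR = "keep monitoring"
    FACT_CHECK = "publish a fact check or nudge"
    ESCALATE = "escalate to the platform or law enforcement"

@dataclass
class Signals:
    similarity_to_known_misinfo: float  # 0-1, match against historic misinformation
    source_risk: float                  # 0-1, track record of the authors and publishers
    cross_platform_spread: float        # 0-1, presence of the same claim on other platforms

def severity(s: Signals) -> float:
    # Illustrative weights; a real system would learn or calibrate these.
    return (0.5 * s.similarity_to_known_misinfo
            + 0.3 * s.source_risk
            + 0.2 * s.cross_platform_spread)

def countermeasure(s: Signals) -> Action:
    score = severity(s)
    if score > 0.8:
        return Action.ESCALATE
    if score > 0.5:
        return Action.FACT_CHECK
    return Action.MONITOR

print(countermeasure(Signals(0.9, 0.7, 0.8)))  # -> Action.ESCALATE (severity 0.82)
```

The human expertise Jain mentions would sit around such a scoring step, since the 30-minute-to-one-hour window he cites leaves little room for purely manual triage.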
The final speaker in the session was Dr Rebecca Helm, senior lecturer in law and director of the Evidence-Based Justice Lab at the University of Exeter. She discussed research into the effectiveness of various approaches to tackling misinformation. Any method used must not impinge on legal protections for free speech and needs to pass three criteria – legality, necessity and proportionality. The issue should therefore be tackled in the “least intrusive” way.
However, Helm believes governments and tech firms often struggle to achieve proportionality, “and sometimes freedom of speech can be used to justify a really hands-off approach and approaches that aren’t always effective.” She added that freedom of expression is actually damaged “if others are being actively and intentionally misled by that information.”
Discussing the merits of these ‘nudges,’ such as information corrections on social media sites, Helm observed that this method “works really well when people are actively seeking out information.” However, it is far less effective for users with an entrenched viewpoint, particularly “when it is dictated by group membership.” For example, while the supposed link between the MMR vaccine and autism has been debunked and many information corrections have been issued, a significant number of people still believe in the association.
In fact, “seeing something that challenges the viewpoint that people have is actually pushing them to be more committed towards that initial viewpoint.”
Another problem with these information correction techniques is that they can draw more attention to the disinformation content. “When it’s polarizing, that increases commitment and engagement to it in the opposite direction that we want from the information correction,” said Helm.
Similarly, attempts to block or remove misinformation can draw more attention to the content “in whatever form it is left in.”
Therefore, Helm believes that in some high-stakes situations where people have politically ingrained viewpoints, we must consider “imposing greater infringements on free speech.” These include issuing criminal sanctions for malicious actors who deliberately spread misinformation.
The insights provided during the virtual session highlighted the significant difficulties in combatting the growing prevalence of online misinformation in a way that does not impact freedom of expression. Providing more education and directing users to reliable sources is critical, enabling them to better understand how to detect misinformation and verify the content they view. However, in an era of polarization exacerbated by the rise of social media, such approaches are not always enough. Whether more draconian measures are required in certain instances is a debate likely to continue over the coming months and years.