AI and Misinformation
Online misinformation has tangible impacts on elections and communities, but an AI-fueled democratic disaster has largely been avoided in 2024.
Big Tech has rolled out various tools for users to assess claims and access credible information.
More interventions from the public and private sector are needed to combat online mis- and disinformation.
Voters in more than 60 countries, home to nearly half the world's population, are heading to the polls in 2024. With voters relying heavily on online platforms to debate and obtain information, tech companies are rolling out novel tools and programmes to fight misinformation. Online platforms have characteristics that can be problematic in election cycles if left unaddressed: uncertain user identities, the speed of communication, filter bubbles, and incentives that reward clicks. Rapid advances in frontier technologies like AI exacerbate the potential for harm, as online platforms are increasingly saturated with high-quality synthetic content designed to mislead.
Prominent candidates have already leveraged AI to mislead voters, and synthetic content such as deepfakes continues to flood social media. While this surfeit of mis- and disinformation has not led to the democratic collapse once feared, the January 6th insurrection at the US Capitol and the attempted coup d'état in Brazil in 2023 demonstrate how the online dissemination of misleading content can lead to real-world harm. When misinformation is left to fester and the electorate has not been given the tools to interrogate what it sees online, the effects spill over into the physical world.
In response to these threats, industry has been developing strategies to mitigate the effects of synthetic falsehoods while surfacing authoritative information. These actions aim to recover some public confidence, demonstrate social responsibility, and ensure compliance with emerging laws around the world.
In the eyes of many policymakers and stakeholders, the last 10 years have seen social media platforms become vehicles for misinformation at scale. Tech companies are making important efforts to address these challenges, while civil society, academics, and the general public continue to push for more proactivity and transparency. One way big tech can build trust is by expanding tactics already deployed.
Mis- and disinformation often spread faster than facts. When the populace is inundated with false information, literacy skills are critical for determining a claim's veracity. Google, Moonshot, Jigsaw, and other partners have developed an approach in the European Union to build information literacy skills through a process called prebunking, which aims to inoculate people against misleading information by teaching them how to identify and resist it. When deceptive claims spread, an electorate with strong information literacy skills is better equipped to counter manipulation. This collaborative effort to upskill communities across the EU is much needed and warrants expansion.
In an era when distrust in news sources is high globally, there is a need for easy access to high-quality information from authoritative sources. During the EU's parliamentary elections, YouTube ran a marketing campaign in every member state consisting of three promos: one encouraging users to register to vote, one connecting people to YouTube's Hit Pause media literacy campaign, and one linking to the election results.
By highlighting authoritative resources for election information and reinforcing the skills needed to engage critically with media, YouTube complied with EU regulations and demonstrated social responsibility. The campaign shows how tech companies can position themselves when large numbers of users search for consequential information: not as arbiters of truth, but as trustworthy sources of high-fidelity knowledge. Broadening these promos as further elections take place would be a welcome expansion.
Accompanying the populist resurgence of white nationalism in 2016, "fake news" spread like wildfire. Misinformation proved a powerful tool, polarising the environments in which the election of Donald Trump in the United States and the Brexit vote in the United Kingdom took place. In response, organisations in a number of countries formed coalitions to safeguard the information environment. The Election Coalitions Playbook, published by Katie Harbath of Anchor Change in collaboration with Google, spotlights election coalitions in France, Brazil, Argentina, Mexico, Nigeria, and the Philippines.
The playbook suggests that creating country-led coalitions of news organisations, fact-checkers, and community groups is a powerful way to combat misinformation. Collaboratively identifying and debunking false claims allowed for broader amplification across a network of news outlets serving millions of voters. In France, for example, CrossCheck 2017 brought together more than 30 media organisations to fact-check misinformation during elections. A tech giant like Google supporting third-party projects that identify successful endeavours is an example of the positive impact big tech can have beyond its own platforms.
This work also demonstrates the impact of civil society organisations (CSOs) on the issue. Where governments have been unable to regulate speech online and platforms have self-regulated inadequately, CSOs play a vital role, fact-checking claims and addressing misinformation's broader impact on democracy.
We have also seen new contextual tools from large online platforms and search engines. These tools attach trusted information to content which may be misconstrued or misrepresented. As false information continues to spread, these tools empower users to better understand what they’re seeing online.
Community Notes on X are affixed to potentially misleading posts. Contributors can leave notes on posts, and if a sufficient number of contributors with differing viewpoints rate a note as helpful, it is shown publicly. This collaborative approach to contextualising potential misinformation also notifies users when a post they have liked receives a Community Note.
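X's production ranking is more sophisticated than this, but the core idea of requiring agreement from contributors who usually disagree can be sketched briefly. The Python below is a hypothetical simplification for illustration only: the Rating structure, the viewpoint_cluster field, and the thresholds are all assumptions, not X's actual implementation.

```python
# A deliberately simplified illustration of "agreement across viewpoints":
# a note is surfaced only when raters from more than one viewpoint cluster
# independently find it helpful. All fields and thresholds are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Rating:
    contributor_id: str
    viewpoint_cluster: str  # e.g. a cluster label inferred from past rating behaviour
    helpful: bool

def note_is_shown(ratings: list[Rating],
                  min_ratings: int = 5,
                  min_clusters_agreeing: int = 2,
                  helpful_share: float = 0.7) -> bool:
    """Return True only if enough raters, across multiple viewpoint clusters,
    rate the note as helpful."""
    if len(ratings) < min_ratings:
        return False

    by_cluster: dict[str, list[bool]] = defaultdict(list)
    for r in ratings:
        by_cluster[r.viewpoint_cluster].append(r.helpful)

    clusters_agreeing = sum(
        1 for votes in by_cluster.values()
        if sum(votes) / len(votes) >= helpful_share
    )
    return clusters_agreeing >= min_clusters_agreeing

# Example: the note surfaces because raters in both clusters find it helpful.
ratings = [Rating("a", "cluster_1", True), Rating("b", "cluster_1", True),
           Rating("c", "cluster_1", True), Rating("d", "cluster_2", True),
           Rating("e", "cluster_2", True)]
print(note_is_shown(ratings))  # True
```

The point of the design, even in this toy form, is that popularity within a single like-minded group is not enough; visibility requires cross-group consensus.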
Meta has rolled out AI labelling on Facebook and Instagram, reflecting a similar understanding that providing context is an effective way to address manipulated media. Its labels now read "AI Info" across its apps, allowing people to find out how a piece of content was made.
Google has a suite of information and visual literacy tools to help users understand what they see in their search results. About this Result and About this Image are two products giving users context about the content they’re seeing, the organisation that published it, whether it has been made with Google AI, and where it has been used before.
Each of these is a potent asset for users engaging with media. During election periods, when there is a vested interest in manipulating the information a polity receives, contextual tools can help users reach better-informed conclusions. Expanding them beyond the English-speaking world and the West would be of great benefit to the global population.
Online platforms will continue to be a valuable place for voters to access information about elections. This is not to say that tech companies have done everything right (we have recently seen how an inadequate response to disinformation precipitated the ban of X in Brazil), but rather to show that useful products empowering users already exist on some of the biggest platforms. Nonetheless, robust responses to the challenges of misinformation in the broader tech space must be refined and expanded.
As many politicians and bad actors across the world continue to sow doubt through misinformation, big tech has an important role in prominently surfacing high quality information. Though an AI-fueled armageddon during the year of elections seems to have been avoided so far, democratic challenges will continue beyond 2024. Big tech is developing important innovations, but there is still much work to be done.
If you’re interested in learning more about Adapt’s work in this space, send us an email at team@weadapt.io
Luke Coleman is a manager at Adapt where he works on consumer trust & safety, policy analysis, regulatory tracking, and information literacy for our clients. He received his BA in Philosophy, Politics, and Economics from the University of Pennsylvania and a Fulbright Grant from the U.S. Department of State.