
During Britain’s most recent election campaign, commuters poured out of the Westminster Underground on a soggy June evening, staring down at their phones as they crossed Parliament Square. Screens flickered with news alerts. A brief video clip of an irate politician discussing immigration was making the rounds on social media and group chats.
The problem was that nobody seemed certain the video was authentic.
By midnight it had received hundreds of thousands of views. A few hours later, journalists concluded that it had probably been produced by artificial intelligence: synthetic audio pieced together from previous speeches. But the correction never travelled as far as the clip itself.
| Category | Information |
|---|---|
| Topic | AI Influence in UK Elections |
| Research Institution | Centre for Emerging Technology and Security (CETaS) |
| Host Organization | The Alan Turing Institute |
| Key Concern | AI-driven disinformation and erosion of voter trust |
| Notable Incidents | Deepfake audio of political figures, AI-generated campaign imagery |
| Identified Threat | Bot amplification, fake news sites, and “liar’s dividend” confusion |
| Election Studied | UK General Election 2024 |
| Key Finding | AI did not change results but damaged the information ecosystem |
| Reference Website | https://cetas.turing.ac.uk |
Scattered throughout the campaign season, such moments exposed a subtle but unsettling aspect of contemporary elections. In 2024, artificial intelligence did not significantly affect the outcome of the British election. However, it subtly altered the election-related environment, changing the flow of information, voters’ perceptions of it, and—possibly most significantly—the degree to which they trusted what they saw.
Election researchers discovered relatively few instances of viral AI disinformation: a little more than a dozen noteworthy cases spread widely online. The actual volume seemed low compared with early fears of a “tsunami of deepfakes.”
However, it was more difficult to quantify the damage.
AI-generated content functions more like background noise than a political earthquake. It blurs edges. It erodes certainty. A phony audio clip here, an altered image there, a flurry of automated accounts amplifying divisive claims. On their own, these pieces might seem insignificant. Combined, they alter the context in which voters form their opinions.
Walk through a newsroom during the campaign and you would find editors continuously refreshing social media dashboards, watching rumors rise and fall. Some posts came from real activists. Others came from bots, accounts that propagated the same political talking points by posting dozens of messages every hour.
It became challenging to distinguish between the two.
These campaigns have surprisingly straightforward mechanics. Massive amounts of text that sound authoritative and convincing can be produced by artificial intelligence. An AI system can generate hundreds of slightly different social media posts supporting a specific policy position when given the correct prompts.
Social media algorithms then take care of the rest.
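The scale described above comes from simple combinatorics. As a purely illustrative sketch (the fragments, function name, and numbers below are invented for this example; real operations use large language models rather than fixed templates), a handful of interchangeable text fragments is enough to assemble many near-duplicate posts:

```python
import itertools
import random

# Hypothetical fragment pools; real campaigns would draw on an
# AI model's paraphrases rather than hand-written templates.
openers = ["Finally, someone said it:", "This matters:", "Worth a read:"]
claims = ["Policy X will help local businesses",
          "Policy X protects our schools",
          "Policy X is common sense"]
closers = ["Share widely.", "Agree?", "#Election"]

def generate_variants(n, seed=0):
    """Return n distinct posts assembled from the fragment pools."""
    rng = random.Random(seed)
    combos = list(itertools.product(openers, claims, closers))
    rng.shuffle(combos)
    return [" ".join(parts) for parts in combos[:n]]

posts = generate_variants(9)
print(len(posts))  # 9 distinct posts drawn from 27 possible combinations
```

Three pools of three fragments already yield 27 variants; a model that paraphrases each fragment multiplies that figure into the hundreds, which is the effect the paragraph above describes.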
Content that evokes strong feelings, such as fear, anger, or outrage, spreads more widely. Posts are amplified by automated networks, creating the illusion that some viewpoints are far more common than they actually are. A casual viewer may believe they are seeing popular opinion as they scroll through a feed.
Sometimes they are. Often they are not.
Analysts discovered networks of accounts spreading divisive political narratives during the UK campaign, including anti-immigration imagery produced by artificial intelligence. The images, which featured dramatic scenes that suggested social unrest and bearded men wearing headbands, were crude but effective. They were intended more to provoke than to persuade.
Additionally, provocation spreads.
It’s easy to think of these campaigns as the product of highly developed foreign intelligence agencies. Sometimes they are. However, many come from closer to home. Researchers found that some of the AI content that went viral during the election originated from online trolls or domestic activists who were trying to get attention or advance their ideologies.
After all, technology makes it easier to get started.
It is now possible to produce persuasive political messages on a large scale with just a laptop and a few software tools. This democratization of power, according to some analysts, may lead to much more chaotic elections in the future. A team of strategists and a sizable campaign budget are no longer necessary for persuasion.
Sometimes all you need is an algorithm and patience.
Then there is the phenomenon known as the “liar’s dividend.” Once people learn that deepfakes exist, authentic content becomes suspect. Real videos are dismissed as fake. Synthetic recordings are defended as genuine.
Strangely, the truth starts to feel negotiable.
The UK election supplied several examples. In some cases, voters shared real videos while claiming they were AI-generated propaganda. In others, clearly manipulated content was defended as authentic. The outcome was confusion rather than persuasion.
As this develops, it’s difficult to ignore a more profound change in political discourse. Persuasion has always been a part of elections. The tools of previous decades included speeches in packed halls, television commercials, and campaign posters.
The battlefield is different today.
The campaign trail now runs through social media feeds and recommendation engines. Algorithms determine which stories appear first, which videos take off, and which political messages reach which audiences. Artificial intelligence is not only creating content; it is shaping how that content is distributed.
The impact is mild and occasionally undetectable.
Deepfake pornography targeting female politicians was among the election’s most unsettling incidents. Even though the videos were entirely AI-generated, they spread widely enough to cause real distress. Those targeted described the experience as frightening and humiliating, yet another reminder of how easily technological tools can be turned into weapons.
Ultimately, the outcome of the British election was largely unaffected. Voters continued to base their choices on well-known issues like immigration, public services, and the economy. The democratic apparatus remained in operation.
However, the atmosphere of the campaign felt different.
Suspicion, doubt, and the question of whether a viral post was real or fake were all more prevalent. The true legacy of artificial intelligence in politics may not be the dramatic manipulation of votes but the gradual erosion of trust in the information voters rely on.
Perhaps voters will adapt. Humans have adjusted to disruptive technologies before: radio, television, social media. Over time, societies acquire new habits of skepticism and verification.
Nevertheless, there was a persistent feeling of unease as one stood in a packed train car during election week and observed commuters scrolling through streams of political content.
Algorithms were also running campaigns somewhere behind those screens.
