AI-Generated Images and Deepfakes Had Little Effect on 2024 Elections

AI-generated images shared by President-elect Donald Trump.

Recent research has revealed that generative AI and deepfakes had little impact on the 2024 U.S. election, largely because the images are simply not good enough yet.

Over the last year, there has been widespread concern that generative AI would wreak havoc on the 2024 election.

In response to these fears, California passed a law making it illegal to create deepfakes related to the 2024 election — the toughest law on AI-generated political content in the U.S. yet.

Meanwhile, OpenAI blocked hundreds of thousands of requests to generate DALL-E images of candidates in the month before the 2024 U.S. presidential election.

However, according to a report in The Financial Times, a slew of recent research has indicated that AI-generated misinformation had little to no impact on this year’s global elections and deepfake fears were overblown. This is largely because the technology is still not advanced enough and AI-generated images are just not as realistic as photos.


According to the publication, the Alan Turing Institute found only 27 viral pieces of AI-generated content during the U.K., French, and E.U. elections this summer. Another study revealed that only one in 20 Brits recognized any of the most shared political deepfakes around the election.

In the U.S., the News Literacy Project documented nearly 1,000 examples of misinformation about the presidential election, but only 6% were linked to generative AI. TikTok also reportedly saw no increase in removals of AI-generated content as voting day approached in the U.S.

An analysis by The Financial Times found that mentions of terms like “deepfake” or “AI-generated” in the Community Notes fact-checking system on X (the platform formerly known as Twitter) were more correlated with the release of new image generation models than major elections.

Interestingly, an analysis by the Institute for Strategic Dialogue found that social media users were more likely to wrongly assume real images were AI-generated than the other way around. Overall, users showed healthy skepticism, and fake media can still be debunked through tools like Google reverse image search or official communications channels.

“We’ve had Photoshop for ages, and we still largely trust photos,” Felix Simon, a researcher at Oxford University’s Reuters Institute for the Study of Journalism, tells The Financial Times.
