How to Spot AI Fake News and What Policymakers Can Do to Help

Bonisiwe Shabane

The rise of AI has flooded the internet with election disinformation.

[Image: a deceptive AI-generated photo of former President Trump and Vice President Kamala Harris.]

Earlier this year, New Hampshire voters received a phone message that sounded like President Joe Biden, discouraging them from voting in the state’s primary election. The voice on the line, however, was not really Biden’s – it was a robocall created with artificial intelligence (AI) to deceptively mimic the president. The rise of AI has made it easier than ever to create fake images, phony videos and doctored audio recordings that look and sound real. With an election fast approaching, the emerging technology threatens to flood the internet with disinformation, potentially shaping public opinion, trust and behavior in our democracy.

“Democracies depend on informed citizens and residents who participate as fully as possible and express their opinions and their needs through the ballot box,” said Mindy Romero, director of the Center for Inclusive Democracy... “The concern is that decreasing trust in democratic institutions can interfere with electoral processes, foster instability and polarization, and become a tool for foreign interference in politics.”

Romero recently hosted a webinar – titled Elections in the Age of AI – in which experts discussed how to identify AI-generated disinformation and how policymakers can regulate the emerging technology. The panel included David Evan Harris, Chancellor’s Public Scholar at UC Berkeley; Mekela Panditharatne, counsel for the Brennan Center’s Elections & Government Program; and Jonathan Mehta Stein, executive director of California Common Cause.

Everyone knows Mark Twain said that, but our certitude about the matter points once again to a troubling reality about our world: there has never been a time when more misinformation, disinformation and outright... This is made even worse by the fact that most of us are far more confident in our ability to know when somebody is trying to dupe us than we have any right to...

According to the Institute for Public Relations’ 4th Annual Disinformation in Society report, the vast majority of Americans (78%) express confidence in their ability to recognize “news or information that misrepresents reality or is... Four out of five Americans say they “sometimes” go to other sources to check the veracity of information, while only 20% admit they “rarely” or “never” take the trouble. (What might constitute “other sources,” given the polarization of today’s media, raises important questions in itself.)

What’s worrisome, too, is that the world of artificial intelligence is changing daily — maybe hourly — and the bad actors determined to trick an unsuspecting public are getting better at what they do... “Rapid advances in artificial intelligence, including deepfakes, generated text and images, are making it harder to tell what’s real, true, and accurate. This is occurring during a time of deep political polarization when Americans’ trust in the mass media is at a record low, and millions are turning to alternative sources, such as social media, where...

“But is there a way to use these technologies to restore trust, connect with our audiences and ethically achieve our goals faster? How should we react when others use these technologies to influence our interests?”

The White House just dropped a flamethrower on the Fake News Media: a new public database that catalogs the avalanche of lies, deliberate distortions, and manufactured hoaxes churned out by activist “journalists” and their... Now live at wh.gov/mediabias, the site lays bare the offending “journalists” and their outlets alongside the actual facts they attempted to bury, twist, or invent. Fully sortable and routinely updated, it ensures no hoax, no anonymously “sourced” fan fiction, and no partisan smear gets memory-holed again. Coupled with the White House Rapid Response account on X, the Trump Administration is pushing back in real time to ensure the American people get the unfiltered truth — no ideological filter, no corporate...

AI-generated newscasts are getting harder to spot — and they're flooding your feed. Here's how to avoid falling for the fakes. In one TikTok video, a reporter stands in front of a traditional red Royal Mail pillar box, with British flags fluttering in the background and a microphone in hand. He asks a female passerby who she plans to vote for in the upcoming election. "Reform," the woman replies. "I just want to feel British again, innit."

A user comments below: "I wonder how much they paid her to say that." But this scene never happened. The interview is entirely fake. The reporter doesn't exist — he was generated by artificial intelligence. And if you look closely, there's a subtle clue: a faint watermark in the corner bearing the word "Veo," the signature of Google DeepMind's powerful new video-generation tool. This 8-second video isn't an isolated case.

From TikTok to Telegram, synthetic newscasts — AI-generated videos that mimic the look and feel of real news segments — are flooding social feeds. They borrow the visual language of journalism: field reporting, on-screen graphics, authoritative delivery. However, they're often completely fabricated, designed to provoke outrage, manipulate opinion or simply go viral.

Searching for an apartment online, applying for a loan, going through airport security, or looking up a question on a search engine – you might not think anything of these exchanges other than that... Avoiding AI in our quotidian activities feels impossible nowadays, especially when it is now used by public and private organizations to make decisions about us in hiring, housing, welfare, budgeting, and other high-stakes areas. While proponents of AI boast about the technology's efficiency, the decisions it makes about us are often incontestable and discriminatory, and they infringe on our civil rights.

However, inequity and injustice from artificial intelligence need not be our status quo. Senator Ed Markey and Congresswoman Yvette Clarke have just re-introduced the AI Civil Rights Act of 2025, which would help ensure AI developers and deployers do not violate our civil rights. The ACLU strongly urges Congress to pass this bill so we can prevent AI systems from undermining the equal opportunity that civil rights laws secured decades ago. The AI Civil Rights Act shores up existing civil rights laws so that their protections apply to artificial intelligence. Whether you look at the Civil Rights Act of 1964, the Fair Housing Act, the Voting Rights Act, the Americans with Disabilities Act, or a multitude of other civil rights statutes, current civil... In many cases, individuals may not even know AI was used, deployers may not be aware of its discriminatory impact, and developers may not have tested the AI model for discriminatory harms.

By covering AI harms in several consequential areas (employment, education, housing, utilities, health care, financial services, insurance, criminal justice, identity verification, and government welfare benefits), the AI Civil Rights Act provides interlocking...

With national elections looming in the United States, concerns about misinformation are sharper than ever, and advances in artificial intelligence (AI) have made distinguishing genuine news sites from fake ones even more challenging. AI programs, especially Large Language Models (LLMs), which are trained on vast data sets to produce fluent text, have automated many aspects of fake news generation. The new instant video generator Sora, which produces highly detailed, Hollywood-quality clips, further raises concerns about the easy spread of fake footage. Virginia Tech experts explore three facets of the AI-fueled spread of fake news sites and the efforts to combat them.

Walid Saad on how technology helps generate, and identify, fake news

“The ability to create websites that host fake news or fake information has been around since the inception of the Internet, and they pre-date the AI revolution,” said Walid Saad, engineering and machine learning... “With the advent of AI, it became easier to sift through large amounts of information and create ‘believable’ stories and articles. Specifically, LLMs made it more accessible for bad actors to generate what appears to be accurate information. This AI-assisted refinement of how the information is presented makes such fake sites more dangerous.

“The websites keep operating as long as people are feeding from them. If misinformation is being widely shared on social networks, the individuals behind the fake sites will be motivated to continue spreading the misinformation.”

The Rise of AI-Generated Fake News: A Threat to Democratic Discourse

The advent of artificial intelligence, particularly Large Language Models (LLMs), has revolutionized the creation and dissemination of information. While offering immense potential benefits, this technological advancement has also amplified the spread of misinformation, posing a significant threat to democratic processes, especially during election cycles. With the 2024 elections approaching in the United States and other major democracies, concerns about the proliferation of AI-generated fake news have reached a fever pitch. The ability of these advanced algorithms to generate human-quality text, coupled with tools like Sora that can produce realistic video footage, makes distinguishing genuine news from fabricated content increasingly difficult.

The Mechanics of AI-Driven Disinformation

As Walid Saad, an engineering and machine learning expert at Virginia Tech, explains, the creation of fake news websites predates the AI revolution. However, AI, particularly LLMs, has drastically simplified the process of generating seemingly credible articles and stories by automating the sifting of vast datasets and the crafting of convincing narratives. This AI-assisted refinement makes fake news sites more insidious and persuasive. The continuous operation of these websites is fueled by the engagement they receive: as long as misinformation is shared widely on social media platforms, the individuals behind these operations will continue their deceptive practices.

Combating AI-Powered Fake News: A Multifaceted Approach

Advances in generative AI mean fake images, videos, audio and bots are now everywhere, but studies have revealed the best ways to tell if something is real.

[Image caption: Many AI-generated images look realistic until you take a closer look. Did you notice that this image was created by artificial intelligence?]

It can be difficult to spot AI-generated images, video, audio and text at a time when technological advances are making them increasingly indistinguishable from much human-created content, leaving us open to manipulation by disinformation. But by knowing the current state of the AI technologies used to create misinformation, and the range of telltale signs that what you are looking at might be fake, you can help protect yourself...

World leaders are concerned. According to a report by the World Economic Forum, misinformation and disinformation may “radically disrupt electoral processes in several economies over the next two years”, while easier access to AI tools “have already enabled...

Is digital technology really swaying voters and undermining democracy?
