The Concerning Rise of AI Content in Politics

Bonisiwe Shabane

A deeply offensive AI-generated video, depicting a bizarre version of Gaza, Palestine, was recently shared by President Donald Trump on social media. The video — posted to Trump’s Truth Social and Instagram accounts — depicted Israeli Prime Minister Benjamin Netanyahu, Trump sidekick and billionaire Elon Musk and the president himself sunbathing in a resort-style iteration of...

Today, it has become easy for the general public to create content with malicious intent. Using low-cost or free AI tools from companies such as Google and OpenAI, a simple text prompt is enough to generate realistic media designed to deceive audiences on social media. Right-wing extremists have been using AI-generated content to promote harmful ideologies and propaganda online, and the accessibility of these tools allows users to spread misinformation quickly.

For instance, AI-generated images of Trump cuddling cats and ducks went viral on X and other social media platforms after he and Vice President J.D. Vance falsely promoted offensive claims about Haitian immigrants in Ohio eating pets. These posts gained millions of views and thousands of clicks. Some were plainly racist, such as an AI-generated image of Trump running through a field with cats under each arm as two shirtless Black men chase him.

Artificial intelligence chatbots are very good at changing people’s political opinions, according to a study published Thursday, and are particularly persuasive when they use inaccurate information. The researchers used a crowd-sourcing website to find nearly 77,000 people to participate in the study and paid them to interact with various AI chatbots, including some using AI models from OpenAI, Meta and...

The researchers asked for people’s views on a variety of political topics, such as taxes and immigration, and then, regardless of whether the participant was conservative or liberal, a chatbot tried to change their... The researchers found not only that the AI chatbots often succeeded, but also that some persuasion strategies worked better than others. “Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues,” lead author Kobi Hackenburg, a doctoral student at the University of Oxford, said in a statement about the study. The study is part of a growing body of research into how AI could affect politics and democracy, and it comes as politicians, foreign governments and others are trying to figure out how they...

A massive study of political persuasion shows AIs have, at best, a weak effect. Roughly two years ago, Sam Altman tweeted that AI systems would be capable of superhuman persuasion well before achieving general intelligence—a prediction that raised concerns about the influence AI could have over democratic elections.

To see if conversational large language models can really sway political views of the public, scientists at the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and many other institutions performed by far the... It turned out political AI chatbots fell far short of superhuman persuasiveness, but the study raises some more nuanced issues about our interactions with AI. The public debate about the impact AI has on politics has largely revolved around notions drawn from dystopian sci-fi. Large language models have access to essentially every fact and story ever published about any issue or candidate. They have processed information from books on psychology, negotiations, and human manipulation. They can rely on absurdly high computing power in huge data centers worldwide.

On top of that, they can often access tons of personal information about individual users thanks to hundreds upon hundreds of online interactions at their disposal. Talking to a powerful AI system is basically interacting with an intelligence that knows everything about everything, as well as almost everything about you. When viewed this way, LLMs can indeed appear kind of scary. The goal of this new gargantuan AI persuasiveness study was to break such scary visions down into their constituent pieces and see if they actually hold water.

Co-hosts Archon Fung and Stephen Richer look back at the last five months of headlines as they celebrate the twentieth episode of Terms of Engagement. Archon Fung and Stephen Richer are joined by Michelle Feldman, political director at Mobile Voting, a nonprofit, nonpartisan initiative working to make voting easier with expanded access to mobile voting.

Archon Fung and Stephen Richer discuss whether fusion voting expands representation and strengthens smaller parties—or whether it muddies party lines and confuses voters.

Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement from connection to action. Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful. Our new Reboot Democracy Workshop Series replaces lectures with hands-on sessions that teach the practical “how-to’s” of AI-enhanced engagement. Together with leading practitioners and partners at InnovateUS and the Allen Lab at Harvard, we’ll explore how AI can help institutions tap the collective intelligence of our communities more efficiently and effectively.

Content made with generative artificial intelligence has been used in American politics.

Generative AI has increased the efficiency with which political candidates were able to raise money by analyzing donor data and identifying possible donors and target audiences.[1] A Democratic consultant working for Dean Phillips has admitted to using AI to generate a robocall that used Joe Biden's voice to discourage voter participation.[2] In April 2023, the Republican National Committee released an attack ad made entirely with AI-generated images depicting a dystopian future under Joe Biden's re-election.[3] In August 2024, The Atlantic noted that AI slop was becoming associated with the political right in the United States, who were using it for shitposting and engagement farming on social media, with the...

AIs are equally persuasive whether they’re telling the truth or lying. People conversing with chatbots about politics find those that dole out facts more persuasive than other bots, such as those that tell good stories.

But these informative bots are also prone to lying. Laundry-listing facts rarely changes hearts and minds – unless a bot is doing the persuading. Briefly chatting with an AI moved potential voters in three countries toward their less preferred candidate, researchers report December 4 in Nature. That finding held true even in the lead-up to the contentious 2024 presidential election between Donald Trump and Kamala Harris, with pro-Trump bots pushing Harris voters in his direction, and vice versa. The most persuasive bots don’t need to tell the best story or cater to a person’s individual beliefs, researchers report in a related paper in Science. Instead, they simply dole out the most information.

But those bloviating bots also dole out the most misinformation.

Earlier this year, X (formerly Twitter) filed a federal lawsuit against Minnesota challenging the state’s new law banning political deepfakes in the weeks leading up to elections. X argues the law violates the First Amendment and conflicts with federal protections for online platforms. The case is already being described as a pivotal test for whether voters will be shielded from the next wave of AI-driven deception or left defenseless against fabricated realities in one of the most... The lawsuit arrives amid a broader storm.

As of September 2025, more than two dozen states have introduced or passed laws to restrict or require disclosure of political deepfakes. The bans in Minnesota and Texas prohibit AI-generated political impersonations during sensitive pre-election periods. Others mandate disclaimers if candidates or campaigns use AI to produce content. But these laws face constitutional challenges at every turn, and it’s unclear whether they’ll survive judicial scrutiny. At the very same time, AI misinformation is exploding. A new study found that the rate of false claims produced by popular chatbots nearly doubled in the past year, jumping from 18 percent to 35 percent of outputs.

And AI-powered impersonation scams have surged by nearly 150 percent in 2025, according to cybersecurity experts, with fraudsters now able to clone a loved one’s voice, generate a fake video call, or create a... All of this points to a single reality: we are entering an era where truth is negotiable, facts are contested, and the line between reality and fiction is blurring in ways that threaten both... Which is why Professor Wes Henricksen’s new book, In Fraud We Trust, feels less like an academic treatise and more like a survival manual for democracy.

Propagandists are pragmatists and innovators. Political marketing is a game in which the cutting edge can be the margin between victory and defeat. Generative Artificial Intelligence (GenAI) features prominently for those in the political marketing space as they add new tools to their strategic kit. However, given generative AI’s novelty, much of the conversation about its use in digital politicking is speculative.

Observers are taking stock of the roles generative artificial intelligence is already playing in U.S. politics and the way it may impact highly contested elections in 2024 and in years to come. Amid policymakers’ and the public’s concerns, there is an urgent need for empirical research on how generative AI is used for the purposes of political communication and corresponding efforts to manipulate public opinion. To better understand major trends and common concerns – such as generative AI’s role in the rapid production of disinformation, the enabling of hyper-targeted political messaging, and the misrepresentation of political figures via synthetic... These interviews were conducted between January and April 2024 with campaign consultants from both major political parties, vendors of political generative AI tools, a political candidate utilizing generative AI for her campaign, a digital... Who is using generative AI in the political space?

How are they using generative AI in the political space?
