Politicians Fear Growing Use of AI-Generated Content in Politics
A deeply offensive AI-generated video, depicting a bizarre version of Gaza, Palestine, was recently shared by President Donald Trump on social media. The video — posted to Trump’s Truth Social and Instagram accounts — depicted Israeli Prime Minister Benjamin Netanyahu, Trump sidekick and billionaire Elon Musk, and the president himself sunbathing in a resort-style reimagining of Gaza.

Today, it has become easy for the general public to create content with malicious intent. With low-cost or free AI tools from companies such as Google and OpenAI, a simple text prompt is all it takes to generate realistic media designed to deceive audiences on social media. Right-wing extremists have been using AI-generated content to promote harmful ideologies and propaganda online, and the accessibility of these tools allows users to spread misinformation quickly.
For instance, AI-generated images of Trump cuddling cats and ducks went viral on X and other social media platforms after he and Vice President J.D. Vance falsely promoted offensive claims about Haitian immigrants in Ohio eating pets. These posts gained millions of views and thousands of clicks. Some were clearly racist, such as an AI-generated image of Trump running through a field with cats under each arm as two shirtless Black men chase him.

As artificial intelligence technologies make their way into political ads and campaigning, Americans are expressing growing concern. But they’re not just worried about the impact of deepfakes and deceptive content on elections — they also fear how the government might use the fight against misinformation to restrict free speech.
In a recently released FIRE poll of registered American voters, conducted by Morning Consult, one concern stood out: government regulation itself. Nearly half of respondents (45%) said they are “extremely” or “very” concerned that government regulation of election-related AI content could be abused to suppress criticism of elected officials. That’s a powerful signal: while Americans see the risks posed by AI, they don’t trust government regulators to police political expression fairly. When asked to choose between protecting free speech in politics and stopping deceptive content, a plurality (47%) said protecting free speech in politics is more important, even if that means allowing some deceptive content. Just 37% prioritized stopping deceptive content, even at the expense of limiting speech that would otherwise be protected by the First Amendment. These sentiments are held across the political spectrum, but are stronger among Independents and Republicans than among Democrats.
This isn’t just a preference — it’s a principled stand in favor of the core freedoms the First Amendment exists to protect. Political speech lies at the heart of those freedoms, and Americans clearly recognize that government attempts to police what can or can’t be said pose a far greater threat to democracy than deceptive content itself.

The chilling effects are already measurable. About 28% of voters said they’d be less likely to share content on social media (all content, not just AI-generated or AI-altered material) if the government began regulating AI-generated or AI-altered content. That may not sound dramatic at first glance, but it’s a larger share than the average voter turnout during the last midterm primaries. As our political culture is increasingly shaped online, discouraging speech — even unintentionally — can have real consequences for public discourse.
The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen. Now, another wave of technology is transforming how voters learn about elections—only faster, at larger scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini are becoming the new vessels (and sometimes arbiters) of political information. Our research suggests their influence is already rippling through our democracy. LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined.
As the 2026 midterms near, more than half of Americans now have access to AI tools that can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to synthesize voter opinions. These models may appear neutral: politically unbiased, merely summarizing facts from different sources found in their training data or on the internet. But they operate as black boxes, designed and trained in ways users can’t see, and researchers are still working to answer the question of whose opinions LLMs actually reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole.
And we don’t yet know the extent of that influence.

Politicians on both sides of the aisle are increasingly voicing alarm over the rapid rise of AI-generated videos, images, and text in political campaigns. With tools available to create hyperrealistic deepfakes, many lawmakers worry that voters may soon be unable to distinguish authentic content from manipulated material—a challenge to the very basis of informed democratic decision-making. Several U.S. senators have publicly voiced concerns that such artificial content could distort public discourse, mislead voters, and undermine trust in institutions. Some of the most cited examples include AI-generated videos showing real politicians saying things they never said, or depicting them in fabricated scenarios.
The trend is gaining traction not only among political operators but also among adversaries seeking to sow confusion or influence electoral outcomes. The debate over how to respond is already dividing lawmakers. On one side are calls for stronger regulation—such as watermarking AI-generated political media or banning deceptive content that lacks proper disclosure. On the other are arguments rooted in free-speech protections that warn against overbroad constraints on political expression and satire. The regulatory balance remains unsettled. For countries like India and other emerging democracies, the development is a clear signal: AI-driven content manipulation will very likely become a key front in future election cycles.
Effective responses will involve regulatory clarity, media-literacy programs for citizens, platform accountability, and technological tools for detection and attribution.

The explosive rise of generative AI is already transforming journalism, finance, and medicine, but it could also have a disruptive influence on politics. For example, asking a chatbot how to navigate a complicated bureaucracy or to help draft a letter to an elected official could bolster civic engagement. However, that same technology—with its potential to produce disinformation and misinformation at scale—threatens to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. This essay analyzes the scope of the threat in each of these spheres and discusses potential guardrails against these misuses, including neural networks used to identify generated content, self-regulation by generative-AI platforms, and greater...

Just a month after its introduction, ChatGPT, the generative artificial intelligence (AI) chatbot, hit 100 million monthly users, making it the fastest-growing application in history.
For context, it took the video-streaming service Netflix, now a household name, three and a half years to reach one million monthly users. But unlike Netflix, the meteoric rise of ChatGPT and its potential for good or ill sparked considerable debate. Would students be able to use, or rather misuse, the tool for research or writing? Would it put journalists and coders out of business? Would it “hijack democracy,” as one New York Times op-ed put it, by enabling mass phony inputs that could distort democratic representation? And most fundamentally (and apocalyptically), could advances in artificial intelligence actually pose an existential threat?

Sarah Kreps is the John L. Wetherill Professor in the Department of Government, adjunct professor of law, and director of the Tech Policy Institute at Cornell University. Doug Kriner is the Clinton Rossiter Professor in American Institutions in the Department of Government at Cornell University.

New technologies raise new questions and concerns of different magnitudes and urgency. For example, the fear that generative AI—artificial intelligence capable of producing new content—poses an existential threat is neither plausibly imminent nor necessarily plausible. Nick Bostrom’s paperclip scenario, in which a machine programmed to optimize paperclip production eliminates everything standing in the way of that goal, is not on the verge of becoming reality. The employment consequences of generative AI will ultimately be difficult to adjudicate, since economies are complex and it is hard to isolate the net effect of AI-instigated job losses against industry gains.
Yet the potential consequences for democracy are immediate and severe. Generative AI threatens three central pillars of democratic governance: representation, accountability, and, ultimately, the most important currency in a political system—trust.

Propagandists are pragmatists and innovators. Political marketing is a game in which the cutting edge can be the margin between victory and defeat. Generative artificial intelligence (GenAI) features prominently for those in the political-marketing space as they add new tools to their strategic kits. But given the technology’s novelty, much of the conversation about its use in digital politicking remains speculative. Observers are taking stock of the roles generative AI is already playing in U.S. politics and the ways it may shape highly contested elections in 2024 and in years to come.

Amid policymakers’ and the public’s concerns, there is an urgent need for empirical research on how generative AI is used for political communication and in corresponding efforts to manipulate public opinion. To better understand major trends and common concerns – such as generative AI’s role in the rapid production of disinformation, the enabling of hyper-targeted political messaging, and the misrepresentation of political figures via synthetic media – researchers conducted interviews between January and April 2024 with campaign consultants from both major political parties, vendors of political generative-AI tools, a political candidate using generative AI for her campaign, and a digital... Who is using generative AI in the political space, and how are they using it?
The use of artificial intelligence in political campaigns and messaging is ramping up. Already in the 2024 presidential race, AI has been used to create fake robocalls and news stories and to generate campaign speeches and fundraising emails. Its use in political messaging has raised several alarms among experts, as there are currently no federal rules governing AI-generated content in political material.

Peter Loge is the director of the GW School of Media and Public Affairs. Loge has nearly 30 years of experience in politics and communications, including a presidential appointment at the Food and Drug Administration and senior positions for Sen. Edward Kennedy and three members of the U.S. House of Representatives. He currently leads the Project on Ethics in Political Communication at the GW School of Media and Public Affairs and continues to advise advocates and organizations. Loge is an expert in communications and political strategy.

Loge says AI is being used in a number of ways in political campaigns right now, and the use of this emerging technology can ultimately undermine public trust. “Campaigns are using artificial intelligence to predict where voters are, what they care about and how to reach them, but they’re also writing fundraising emails, generating first drafts of scripts, first drafts of speeches...”

“There’s a lot of ethical concerns with AI in campaigns.
The basic rule of thumb is, there aren’t AI ethics that are different from everybody else’s ethics. You have a set of ethics. In a campaign, you should aim to persuade and inform, not deceive and divide. That’s true with AI, with mail, with television, with speeches,” Loge explains. “A lot of the questions we’re asking about AI are the same questions we’ve asked about rhetoric and persuasion for thousands of years.”

Standing in front of the U.S.
flag and dressed as Uncle Sam, Taylor Swift proudly proclaims that you should vote for Joe Biden for president. Then, in a nearly identical image circulated by former President Trump himself, she wants you to vote for Donald Trump. Both the images and the purported sentiments are fabricated, the output of a generative AI tool used for creating and manipulating images. In fact, shortly after Donald Trump circulated his version of the image, and in response to the fear of spreading misinformation, the real Taylor Swift posted a real endorsement to her Instagram account, for Kamala Harris. Generative AI is a powerful tool, both in elections and more generally in people’s personal, professional, and social lives. In response, policymakers across the U.S. have begun weighing rules for its use in elections.