The Case for Campaign AI (Politico)

Bonisiwe Shabane

Journalists across the US launch 'News Not Slop' campaign seeking AI safeguards. Journalists at Politico in the US have won a ruling over the rollout of AI tools as a nationwide News Not Slop campaign was launched. The journalists at Axel Springer-owned Politico and E&E News (Politico’s energy and environment brand for professionals) in the US, represented by the PEN Guild, went to arbitration in a dispute with management over the... The dispute centred on a “Live Summaries” feature used during the 2024 Democratic Convention and a Capitol AI Report-Builder tool available for Politico Pro subscribers. According to the PEN Guild, the arbitrator found: “Live Summaries were posted in prime ‘above-the-scroll’ homepage placements without human editing, outside the normal content system, and were not corrected despite containing factual errors, missing...

The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows.

Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information. Our research suggests their influence is already rippling through our democracy. LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections.

Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to understand how to synthesize voter opinions. These models may appear neutral—politically unbiased, and merely summarizing facts from different sources found in their training data or on the internet. At the same time, they operate as black boxes, designed and trained in ways users can’t see. Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence.

For Immediate Release: Dec. 1, 2025. Media Contact: Jen Sheehan, jen@nyguild.org; 610-573-0740

On the heels of a groundbreaking arbitration win at POLITICO, NewsGuild-CWA members launch a national campaign and a week of action on AI. WASHINGTON, D.C. – Unionized journalists across the country have become increasingly concerned about artificial intelligence, especially how the evolving technology is eroding the public’s trust in the news. “News, Not Slop” – a reference to the term for low-quality, surface-level digital content generated by AI – is a new NewsGuild-CWA campaign, launching today, to raise awareness about AI and its consequences and... The NewsGuild-CWA represents 27,000 members across North America at major media companies including The New York Times, Los Angeles Times, Reuters, Business Insider, POLITICO, ProPublica and more.

Central to the campaign is TNG-CWA’s Demands for Ethical AI in Journalism, which outlines the demands of unionized journalists to protect their jobs and the quality of their work.

Two years ago, Americans anxious about the forthcoming 2024 presidential election were considering the malevolent force of a new election influencer: artificial intelligence. Over the past several years, we have seen plenty of warning signs from elections worldwide demonstrating how AI can be used to propagate misinformation and alter the political landscape, whether by trolls on social... AI is poised to play a more volatile role than ever before in America’s next federal election in 2026. We can already see how different groups of political actors are approaching AI. Professional campaigners are using AI to accelerate the traditional tactics of electioneering; organizers are using it to reinvent how movements are built; and citizens are using it both to express themselves and amplify their...

Because there are so few rules around AI’s role in politics, and so little prospect of regulatory action, there is no oversight of these activities and no safeguards against the dramatic potential impacts for... Campaigners—messengers, ad buyers, fundraisers, and strategists—are focused on efficiency and optimization. To them, AI is a way to augment or even replace expensive humans who traditionally perform tasks like personalizing emails, texting donation solicitations, and deciding what platforms and audiences to target. This is an incremental evolution of the computerization of campaigning that has been underway for decades. For example, the progressive campaign infrastructure group Tech for Campaigns claims it used AI in the 2024 cycle to reduce the time spent drafting fundraising solicitations by one-third. If AI is working well here, you won’t notice the difference between an annoying campaign solicitation written by a human staffer and an annoying one written by AI.

But AI is scaling these capabilities, which is likely to make them even more ubiquitous. This will make the biggest difference for challengers to incumbents in safe seats, who see AI as both a tactically useful tool and an attention-grabbing way to get their race into the headlines. Jason Palmer, the little-known Democratic primary challenger to Joe Biden, successfully won the American Samoa primary while extensively leveraging AI avatars for campaigning. The use of artificial intelligence in political campaigns and messaging is ramping up. Already in the 2024 presidential race, AI is being used to create fake robocalls and news stories and to generate campaign speeches and fundraising emails. The use of AI in political messaging has raised several alarms among experts, as there are currently no federal rules governing the use of AI-generated content in political material.

Peter Loge is the director of the GW School of Media and Public Affairs. Loge has nearly 30 years of experience in politics and communications, including a presidential appointment at the Food and Drug Administration and senior positions for Sen. Edward Kennedy and three members of the U.S. House of Representatives. He currently leads the Project on Ethics in Political Communication at the GW School of Media and Public Affairs and continues to advise advocates and organizations. Loge is an expert in communications and political strategy.

Loge says AI is being used in a number of ways for political campaigns right now, and the use of this emerging technology can ultimately undermine public trust. “Campaigns are using artificial intelligence to predict where voters are, what they care about and how to reach them, but they’re also writing fundraising emails, generating first drafts of scripts, first drafts of speeches... “There’s a lot of ethical concerns with AI in campaigns. The basic rule of thumb is, there aren’t AI ethics that are different from everybody else’s ethics. You have a set of ethics. In a campaign, you should aim to persuade and inform, not deceive and divide.

That’s true with AI, with mail, with television, with speeches,” Loge explains. “A lot of the questions we’re asking about AI are the same questions we’ve asked about rhetoric and persuasion for thousands of years.”

Artificial intelligence is increasingly transforming the landscape of political campaigning, reshaping how candidates communicate with voters and manage their operations. According to political analysts, AI-driven tools are being used to analyze voter data, optimize messaging, and even generate campaign materials that resonate with specific demographics. This technology enables political teams to predict voter sentiment and behavior more accurately, giving candidates a significant edge in highly competitive races. However, the rise of “Campaign AI” is also raising ethical concerns.

Critics argue that the use of AI in elections could lead to voter manipulation, misinformation, and loss of transparency in campaign communication. As algorithms become capable of creating hyper-personalized content, voters may be targeted with messages designed to exploit their emotions or biases. Regulators and watchdogs are therefore urging greater oversight to ensure AI tools are used responsibly in democratic processes. Campaign strategists, on the other hand, see AI as an essential modernization of political outreach. They emphasize that AI can help campaigns reach younger, digitally native voters and improve inclusivity by identifying and engaging underrepresented groups. By automating repetitive tasks such as social media monitoring or data analysis, AI allows human staff to focus on creativity and genuine voter engagement.

As the 2026 election cycle approaches, the debate over AI in political campaigns is intensifying. Many experts believe that while AI can enhance campaign efficiency and voter connection, its misuse could threaten public trust in democracy itself. Striking the right balance between innovation and integrity will determine whether “Campaign AI” becomes a force for political progress or a new frontier of digital manipulation.

Every shift in campaign technology, from television to social media, has raised questions about creativity, authenticity, and trust. AI is the next chapter in that story. It’s encouraging to see thoughtful reporting like this POLITICO piece on how AI is changing campaign communication.

I said it in the article, and I’ll say it again here: AI should be in every campaign’s toolbox. It’s not about replacing creativity or authenticity. It’s about using new tools responsibly to work smarter, faster, and more efficiently. At the American Association of Political Consultants (AAPC), we’re studying how voters respond to AI-generated content and helping our members navigate both the opportunities and the risks. The question isn’t whether campaigns should use AI. It’s how to use it ethically, effectively, and in ways that strengthen rather than undermine democratic communication.

https://lnkd.in/ecrhgtgt
