Release: Standing Up to Protect Journalism From AI Slop

Bonisiwe Shabane

For Immediate Release: Dec. 1, 2025
Media Contact: Jen Sheehan, jen@nyguild.org; 610-573-0740

On the heels of a groundbreaking arbitration win at POLITICO, NewsGuild-CWA members launch a national campaign and a week of action on AI.

WASHINGTON, D.C. – Unionized journalists across the country have become increasingly concerned about artificial intelligence, especially how the evolving technology is eroding the public’s trust in the news. “News, Not Slop” – a reference to the term for low-quality, surface-level digital content generated by AI – is a new NewsGuild-CWA campaign, launching today, to raise awareness about AI and its consequences and how unionized journalists are fighting to ensure that journalism for humans is led by humans. The NewsGuild-CWA represents 27,000 members across North America at major media companies including The New York Times, Los Angeles Times, Reuters, Business Insider, POLITICO, ProPublica and more.


Central to the campaign is TNG-CWA’s Demands for Ethical AI in Journalism, which outlines the demands of unionized journalists to protect their jobs and the quality of their work. “We’ve seen countless examples of media companies’ haphazard implementation of AI in our newsrooms and the damage it causes to the credibility of the news industry,” said Ariel Wittenberg, environmental reporter at POLITICO. “That’s why we’re taking action this week to say, in the clearest possible terms: News, not slop.” The campaign launch comes as PEN Guild announced it won its arbitration case against POLITICO management over the company’s unilateral introduction of artificial intelligence tools that bypassed negotiated safeguards and undermined core journalistic standards. PEN Guild represents nearly 260 journalists at POLITICO and E&E News. The case marks one of the nation’s first major labor-arbitration rulings addressing the impact of AI on journalists’ work, setting an important precedent for the entire U.S. news industry.

A bill aimed at regulating the use of journalistic content by AI developers has been introduced to the U.S. Senate. Reporters Without Borders (RSF) welcomes this first step towards recognizing media rights in the face of AI but urges lawmakers to address several weaknesses in the text. On July 11, 2024, in Washington, DC, a bipartisan group of senators introduced a bill to protect journalists and artists against the unauthorized use of their works by AI models.

The bill, titled the "Content Origin Protection and Integrity from Edited and Deepfaked Media Act" (COPIED Act), also aims to facilitate the authentication of AI-generated content through the development of appropriate technical standards. “Bilateral partnerships concluded in recent months between media outlets and AI providers are neither a desirable nor viable solution. They pose a threat to the independence and pluralism of journalism as well as the sustainability of media excluded from negotiations. It is essential to develop a protection regime covering all journalistic content, and this bill is a first step in that direction. However, the text needs to be strengthened in several key areas, particularly regarding authenticity standards. RSF calls on American legislators to take our recommendations into account so they can pass a groundbreaking law that safeguards journalistic content as AI evolves.”

Towards better protection of journalistic content in the U.S.

Currently, the fair use doctrine allows any journalistic content to be used for training AI models without any permission or compensation. The COPIED Act, supported by several media industry players including major trade associations News/Media Alliance and the National Newspaper Association, would be a significant advancement in recognizing the rights of content owners.

Last week at NAB Show New York, new data was unveiled showing that an overwhelming majority of Americans are concerned about how artificial intelligence and Big Tech dominance are threatening the survival of local news. The national survey, released alongside a panel on “The Future of News: AI, New Revenues and Risks, and the Policy Response,” found three-quarters of Americans are concerned about AI stealing or reproducing local news stories. An overwhelming 77% support Congress passing a law that would make it illegal for AI to steal or reproduce journalism and local news stories that are published online without permission or compensation.

These concerns come at a moment when trusted journalism is more vital than ever. The survey results show only 26% trust information produced by AI, while 68% say it is not trustworthy. Local broadcasters provide fact-based news, emergency alerts and community connection. But they do so under regulations written before the rise of streaming, social media and AI. That outdated framework leaves stations unable to invest, grow or compete on equal footing with global tech platforms that profit from broadcasters’ content without producing any of their own. As Hearst Television Executive Vice President and NAB Television Board Chair Nick Radziul said during the panel discussion, “Big tech is so outsized and so disproportionately large, their influence and actions as gatekeepers to...”

Americans agree. According to the survey results, 72% believe the federal government should place guardrails on AI to protect consumers. The public is clear in its desire for trustworthy news and AI oversight. Americans want local journalism to survive. Policymakers must work to ensure the proliferation of AI doesn’t come at the cost of the local broadcast news Americans trust most.

Three U.S. senators introduced a bill that aims to rein in the rise and use of AI-generated content and deepfakes by protecting the work of artists, songwriters and journalists.

The Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act was introduced to the Senate Friday morning. The bill is a bipartisan effort authored by Sen. Marsha Blackburn (R-Tenn.), Sen. Maria Cantwell (D-Wash.) and Sen. Martin Heinrich (D-N.M.), according to a press alert issued by Blackburn’s office. The COPIED Act would, if enacted, create transparency standards through the National Institute of Standards and Technology (NIST) to set guidelines for “content provenance information, watermarking, and synthetic content detection,” according to the press alert.

The bill would also prohibit the unauthorized use of creative or journalistic content to train AI models or create AI content. The Federal Trade Commission and state attorneys general would gain the authority to enforce these guidelines, and individuals who had their legally created content used by AI to create new content without their... The bill would further prohibit internet platforms, search engines and social media companies from tampering with or removing content provenance information.

Three years after chatbots went mainstream, low-effort AI text has chewed through the internet. Legal filings, music, and yes, journalism, are getting hit with prose that reads confident but collapses under scrutiny. The risk isn't abstract.

It's in your inbox. Editors are getting pitches that look polished on the surface, but the work behind them is stitched together, cribbed, or stuffed with quotes that never happened. A Toronto editor recently traced a string of slick pitches back to a writer using the name "Victoria Goldiee." A quick search suggested real bylines. Then the red flags started stacking up: stilted phrasing in emails, borrowed structure, and quotes that fell apart on verification. Designer Young Huh, quoted in one of those pieces, flatly denied ever speaking to the writer. Other editors said the drafts leaned too heavily on existing work.

When confronted, the writer abruptly hung up. Several outlets, including The Guardian and Dwell, later removed her work. It wasn't an isolated fluke. Publications as established as Wired and Quartz have been fooled by similar tactics. The original reporting on this episode appeared in The Local.

The Rise of AI Slop: A Threat to Online Information Quality

The initial panic surrounding AI-generated misinformation has subsided as advancements in chatbot technology have reduced instances of blatant hallucinations. However, a new, more insidious threat has emerged: AI slop. This term refers to the deluge of low-quality, often meaningless content generated by AI, flooding the internet with text, images, videos, and even entire websites. Slop isn’t designed to deceive; rather, its purpose is often to exploit algorithms for profit or manipulate public perception through sheer volume. From fabricated events like the non-existent Dublin Halloween parade to misleadingly advertised experiences like the underwhelming Willy Wonka event in Glasgow, slop is seeping into both the digital and physical realms. The nature of AI slop is multifaceted.

It can manifest as “careless speech,” characterized by subtle inaccuracies and biased information presented with undue confidence. Unlike deliberate disinformation, careless speech doesn’t aim to lie but rather to persuade, mirroring the concept of “bullshitting.” This makes it particularly difficult to detect, as it often contains grains of truth or omits... The authoritative tone of AI-generated content further complicates the issue, potentially leading users to accept flawed information at face value. The dangers of careless speech are not immediate but cumulative, potentially leading to the homogenization of information and the erosion of truth over time. The proliferation of slop is fueled by the ease and low cost of AI content generation. Major platforms like YouTube, Facebook, and Instagram are embracing AI tools, allowing users to create AI-generated content with minimal effort.

This raises concerns about the future of online discourse, where algorithmic feeds may prioritize readily available slop over genuine human connection and valuable content. The internet risks becoming a vast digital trough filled with unappetizing, yet readily consumed, information. One of the most pressing concerns is the phenomenon of recursion. As AI-generated content floods the internet, it becomes part of the training data for future AI models. This creates a feedback loop where low-quality information is perpetually recycled, leading to a gradual decline in the overall quality and reliability of online information. This process is akin to environmental pollution, where the accumulation of waste degrades the overall ecosystem.

In the case of AI slop, the forest of online information becomes littered with digital debris, making it increasingly difficult to navigate and find genuine value.
