Why Shouldn’t AI Be Used for News Articles?

Bonisiwe Shabane

With artificial intelligence transforming content creation, many wonder, “Why shouldn’t AI be used for news articles?” While AI offers speed and efficiency, it lacks the critical thinking, ethical responsibility, and investigative skills essential for credible journalism. This article explores the risks of AI-generated news, its limitations in accuracy and ethics, and why human oversight remains crucial. AI is increasingly being adopted across newsroom workflows, and while it can help speed up routine processes, it lacks the human ability to analyze context, verify sources, and exercise ethical judgment, making it risky for serious news reporting. Despite these benefits, AI’s inherent limitations make it unreliable for high-quality journalism.

Generative AI has sparked a tremendous backlash across the internet, as the early promise of the technology has been overshadowed by the wide range of problems it has introduced. Here are some of the reasons the public is pushing back against AI in the arts. Large language models (LLMs) such as ChatGPT and image generators like Midjourney and DALL-E have introduced a new copyright conundrum and provoked multiple lawsuits alleging copyright infringement. It’s true that no artist was asked whether their work could be used to train these models. But even if the courts rule in favor of the machines, the practical application of the technology doesn’t seem worth the cost. Generative AI is incredibly energy-intensive, surprisingly labor-intensive, and requires constant input, in the form of annotation, from human workers to keep it functional, lest it spiral into hallucinatory nonsense.

The popularization of the term “slop” for AI output follows a centuries-long pattern in which new tools flood the zone, audiences adapt, and some of tomorrow’s art emerges from today’s excess. Spam, fluff, clickbait, churnalism, kitsch, slop: these are all ways to describe mass-produced, low-quality content. The last term is reserved for the newest variety, which comes from artificial intelligence.

Though references to AI slop date back at least to 2022, a poet and technologist who writes under the name “deepfates” popularized it two years later as “the term for unwanted AI generated content.” Shortly afterward, developer Simon Willison shared the concept in a blog post: “Not all AI-generated content is slop,” he wrote. “But if it’s mindlessly generated and thrust upon someone who didn’t ask for it, slop is the perfect term.” Today slop’s pejorative bite is increasingly aimed at all things AI. Much of AI-generated output is indeed slop, but by indiscriminately dismissing all of it, we risk missing out on the minority of creations that are keepers. Last year, a figure generated by artificial intelligence (AI) made it past peer review despite featuring gibberish text and a rat with enormous testes [1].

The scientific community took to social media to pan the paper, leading the journal concerned to retract it [2]. Yet scientists continue to turn to bots to make cover art and figures, and professional illustrators are expressing concern over the hasty adoption of AI. According to Merriam-Webster, technophobia is defined as “fear or dislike of advanced technology or complex devices and especially computers.” When grocery stores and retail chains implemented self-checkout machines, it was the technophobes who pushed back.

Following the groundbreaking release of ChatGPT in 2022, it was the technophobes who warned that “AI will take our jobs.” Despite the generalized fear of AI’s potential to replace humans, the emergence of AI image generators has drawn especially strong resistance from artists. Is this another wave of technophobic pushback, or are artists expressing valid concern about their livelihoods? To explore this, we must ask: what defines art? Plato developed the idea of art as “mimesis,” or imitation when translated from Greek. He believed art was a form of representation and should be valued according to the degree to which it replicated its subject. During the Romantic movement, artwork was meant to evoke an emotional response from its audience.

In the 18th century, Immanuel Kant argued that art should be judged not by aesthetic beauty but by its formal qualities, an idea that became a key factor as art grew more abstract in the 20th century. Joshua Issa, in his blog post on Medium.com, argues that when we consider art as a sensory, creative, and interpretive work, art may then be defined as something intentionally created to provide a sensory, creative, and interpretive experience. Considering Issa’s definition of art, along with the three elements that help define it, it is difficult to make a compelling argument for generative AI art to be considered a true form of art. The emergence of AI-generated art, powered by sophisticated machine learning models like diffusion models (e.g., Stable Diffusion, Midjourney, DALL-E 2) and generative adversarial networks (GANs), has sparked fervent debate within the art and technology communities. While the technology demonstrates impressive capabilities in generating visually compelling outputs from textual prompts, a critical examination reveals several significant issues that warrant careful consideration before uncritically embracing ‘AI art.’

Data Provenance and Copyright Infringement: One of the most contentious aspects of AI art stems from the data used to train these models. Most commercially available AI art generators rely on massive datasets scraped from the internet, often without the explicit consent or knowledge of the original artists. These datasets frequently include copyrighted artwork, photographs, and other visual materials. The training process involves the AI learning to recognize patterns, styles, and compositions present in the training data. This learning process can effectively involve the AI memorizing and reproducing elements of copyrighted works, leading to potential copyright infringement.

The legal precedent surrounding this issue is still developing, but the ethical implications are already clear. The issue is compounded by the ‘black box’ nature of many AI models. It’s often impossible to definitively trace the provenance of specific visual elements within a generated image back to its original source within the training dataset. This opacity makes it difficult to prove copyright infringement, even when stylistic similarities are evident. Efforts are being made to develop techniques like watermarking and data poisoning to mitigate these problems, but the issue remains a significant obstacle.
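To make the provenance problem concrete, the sketch below shows a naive nearest-neighbor check in pixel space. It is a toy illustration only: the data is synthetic, nothing here reflects how any real generator or detection tool works, and the image sizes and thresholds are arbitrary assumptions. The point is that such a check can flag near-verbatim memorization (a generated image that is almost a copy of a training image), but stylistic borrowing rarely produces pixel-level similarity, which is part of why tracing visual elements back to their sources is so hard.

```python
# Toy illustration of the provenance-tracing problem (synthetic data only).
# A pixel-space nearest-neighbor search can catch near-duplicates of training
# images, but style borrowing does not show up as small pixel distances.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a training set of 1,000 small images, flattened to vectors.
train = rng.random((1000, 64 * 64 * 3)).astype(np.float32)

# A "generated" image that happens to be a lightly perturbed copy of one
# training image -- the memorization case discussed above.
generated = train[123] + rng.normal(0, 0.01, train.shape[1]).astype(np.float32)

def nearest_neighbor(query: np.ndarray, corpus: np.ndarray):
    """Return the index and L2 distance of the closest corpus vector."""
    dists = np.linalg.norm(corpus - query, axis=1)
    idx = int(np.argmin(dists))
    return idx, float(dists[idx])

idx, dist = nearest_neighbor(generated, train)
print(f"closest training image: #{idx}, distance {dist:.2f}")
# A very small distance suggests near-verbatim memorization; a large distance
# proves nothing either way, because style is not captured by pixel distance.
```

In practice, researchers compare learned embeddings rather than raw pixels and search over millions of images, which makes such checks expensive and still inconclusive where only stylistic similarity is at issue.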


Whenever I discuss tech ethics, the question I am asked about most often is the use of generative AI for artistic purposes. Previously, I’ve discussed the case of Jason M. Allen and his first-place AI-generated entry at the Colorado State Fair. However, a great deal has happened since then in terms of technological innovation, wider public adoption, and legal wrangling. Marvel Studios, for example, was recently accused of using AI to generate posters for its Fantastic Four film after viewers noticed some strange “choices” in the images (you be the judge). But Marvel is not alone; numerous other creators have been caught in the crosshairs.

Is all the outrage justified? What is actually at stake? Why are people so up in arms? Let’s consider some related concerns. Many arguments against gen-AI art start by asserting that AI is inherently incapable of producing art because it lacks human creativity or some other essentially human quality. But we should be clear about what we mean.

As I have previously discussed, there are over 20 different theories of consciousness in the academic world, but there are very good reasons to accept that these algorithms are not conscious minds. Ultimately, generative AI is a tool for humans to use, just like a camera, a paintbrush, or a chisel. Just like those tools, it will not work without human input, and whatever output is accepted as “finished,” “complete,” or even “satisfactory” depends on what the human wanted to achieve. If critics of AI art are going to charge that a person cannot make art with it “because they typed a few buttons,” then why can a photographer make art by clicking a shutter? This isn’t to suggest that anyone who uses gen-AI instantly becomes an artist, but neither does anyone with a camera become a photographer. In other words, critics need to explain why some types of art can utilize technology while others cannot.

But, in a similar vein, some critics charge that AI cannot produce art because it is incapable of understanding the human emotional qualities that are a necessary component of artistic expression: AI cannot understand or replicate the emotional intention behind art. First, it is important to note that alongside generative AI there is a whole field of affective computing devoted to getting computers and AI to understand human emotions. There is no obvious reason why insights from affective computing could not build emotional understanding into an algorithmic model and have that influence the output. It is also known that AI-generated art can produce emotional responses in humans of the kind we might expect any art form to produce. Anyone who has seen the “priceless pancake” video on the internet can probably appreciate the level of emotional intuitiveness involved.

If artworks are supposed to induce certain emotional responses in the audience, a clear argument needs to be made for why AI is incapable of communicating the desired emotional effect. Critics may also charge that because generative AI is trained on the images of other artists, it cannot truly be creative. But creativity is an ambiguous concept. While gen-AI models do take their cues from the inputs they are given, it is worth noting that they are not completely deterministic, nor do they simply reproduce the works they have been trained on. There is always room within the statistical mesh of relationships a model forms to produce something new; generative AI is capable of creating novelty out of what has come before. Ultimately, whether something is creative or not depends on what we “see” in the work.

There is also a sense that gen-AI cannot produce art because it rests on the intellectual theft or plagiarism of pre-existing works. But we should be careful to separate economic and aesthetic concerns. I wonder how critics would feel about a model trained entirely on artworks in the public domain, or about an artist who trains a model to produce new works using only their own artworks as training data. Would a lack of copyright concerns in these cases still preclude the idea that such models could produce (or at least contribute to) real works of art?
