Addressing the Harms of AI-Generated Inauthentic Content

Bonisiwe Shabane

Nature Machine Intelligence volume 5, pages 679–680 (2023)

Generative AI tools lower the cost of generating false but credible content at scale [1], defeating the already weak moderation defenses of social media platforms. Using inauthentic accounts and other tricks to exploit algorithmic and socio-cognitive vulnerabilities, bad actors can monetize false and harmful content, commit fraud and manipulate opinions for political gain [2,3]. Generative AI tools such as ChatGPT make it easy to create large volumes of convincing but false social media profiles and content. An inauthentic influence campaign can even tailor narratives to a particular community. For example, through health disinformation, a foreign adversary could make an entire population more vulnerable to a future pandemic [4].

Image-generation models can create fake profile pictures that are indistinguishable from real people. Occasionally, AI-generated images and text can still be spotted through glitches. For example, some generative adversarial network (GAN)-generated profiles can be detected from the positions of the eyes. Similarly, ChatGPT-generated content has been flagged from text patterns such as "as an AI model…", an obvious signature that the ChatGPT API has been used to automatically create and post content. Although such glitches can now be found everywhere, they likely reveal only the tip of the iceberg. Before our lab developed tools to detect social bots 10 years ago [5], there was little awareness of bot manipulation.
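The phrase-based flagging described above can be illustrated with a minimal sketch. The phrase list below is hypothetical, and real detection systems combine many weak signals rather than relying on string matching alone:

```python
import re

# Hypothetical examples of self-disclosure phrases that have been used to
# flag machine-generated posts; not an exhaustive or authoritative list.
AI_DISCLOSURE_PATTERNS = [
    r"\bas an ai (language )?model\b",
    r"\bi cannot fulfill that request\b",
    r"\bas of my last knowledge update\b",
]

def flags_ai_disclosure(text: str) -> bool:
    """Return True if the text contains a known AI self-disclosure phrase."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in AI_DISCLOSURE_PATTERNS)
```

For example, `flags_ai_disclosure("As an AI model, I cannot browse the web.")` returns `True`, while an ordinary post returns `False`. Accounts that evade such signatures remain invisible to this kind of check, which is precisely the "tip of the iceberg" problem.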

Similarly, we currently have little awareness of the volume of inauthentic behaviour supported by AI.

Jul 30, 2024 | Brad Smith - Vice Chair & President

AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly used for fraud, abuse, and manipulation, especially to target kids and seniors. While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud.

In short, we need new laws to help stop bad actors from using deepfakes to defraud seniors or abuse children. While we and others have rightfully focused on deepfakes used in election interference, the broad role they play in these other types of crime and abuse needs equal attention. Fortunately, members of Congress have proposed a range of legislation that would go a long way toward addressing the issue, the Administration is focused on the problem, and groups like AARP and NCMEC are deeply… One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans. We don't have all the solutions, or perfect ones, but we want to contribute to and accelerate action.

That's why today we're publishing 42 pages on what has grounded our understanding of the challenge, as well as a comprehensive set of ideas, including endorsements of the hard work and policies of others. Below is the foreword I've written to what we're publishing.

AI-generated content is changing the way people inform each other. From automated journalism to AI chatbots, the speed of content creation has surged with this technological advance. But the same advance has a darker side: AI-generated content brings misinformation, bias, legal concerns, and even security threats.

Let's take a look at these issues and how they affect people and society. AI content generation relies on huge datasets, and a model will copy and magnify any inaccuracies, wrong citations, or biases present in that data. A salient point is that AI models draw much of their training data from dubious online sources. In contrast to a human researcher, an AI lacks a mechanism for determining whether a source is reputable. Consequently, unverified AI-generated content can quickly spread across social media and news websites, eroding public trust in digital content.
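The missing "is this source reputable?" mechanism can be made concrete with a toy sketch. The domain allowlist here is invented for illustration; real provenance checking involves far more than a fixed list:

```python
from urllib.parse import urlparse

# Invented allowlist for illustration only; real source-credibility
# assessment cannot be reduced to a static set of domains.
REPUTABLE_DOMAINS = {"nature.com", "who.int", "nih.gov"}

def is_reputable(url: str) -> bool:
    """Return True if the URL's host is an allowlisted domain or a subdomain of one."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in REPUTABLE_DOMAINS)
```

Here `is_reputable("https://www.nature.com/articles/x")` returns `True` and an unknown blog returns `False`. A human researcher performs this filtering implicitly; a generative model trained on scraped web text performs no such step at all.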

Bias is another major concern in AI-generated content. Because training data reflects prevalent human biases, models reproduce those biases in their outputs, and over time biased outputs reinforce and amplify stereotypes, creating new social divides. AI-written job descriptions or hiring tools may unconsciously favour certain demographics over others. Similarly, AI-generated news coverage may be one-sided on an issue, widening divides in people's minds.
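One common way hiring-tool bias of this kind is quantified is the demographic-parity gap: the difference in selection rates between applicant groups. A minimal sketch, with invented example data:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns selection rate per group."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        hits[group] += int(selected)
    return {g: hits[g] / totals[g] for g in totals}

# Invented example: a screening tool's decisions by applicant group.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(decisions)          # group A: 0.75, group B: 0.25
gap = max(rates.values()) - min(rates.values())  # demographic-parity gap: 0.5
```

A large gap does not by itself prove unfairness, but auditing tools with checks like this is one of the stringent-oversight measures the paragraph above calls for.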

Without stringent oversight, AI is going to perpetuate rather than remove systemic biases.
