Safeguarding Authenticity for Mitigating the Harms of Generative AI

Bonisiwe Shabane

Chandigarh College of Engineering and Technology, Chandigarh, India

As transformer-based AI, exemplified by ChatGPT, continues to permeate various domains, concerns regarding authenticity and explainability are on the rise. It is crucial to implement robust detection, verification, and explainability mechanisms to counteract the potential harms stemming from AI-generated inauthentic content and scientific discoveries. These dangers include the spread of disinformation, misinformation, and the possibility of generating unreproducible research results. Urgent action is needed to establish and uphold ethical standards, fostering trust and transparency in AI applications. By prioritizing these efforts, this paper aims to harness the transformative power of AI for the advancement of science and society while mitigating its negative repercussions.

Additionally, fostering collaboration among technologists, policymakers, and domain experts is essential to develop comprehensive solutions that balance innovation with responsibility. This collaborative approach will facilitate the creation of effective regulations and frameworks to safeguard data authenticity in the age of AI, promoting a climate of trust and accountability in the digital landscape.

Keywords: Machine Learning, Generative AI

The rapid advancement of generative artificial intelligence (AI) technologies, exemplified by transformer-based models like ChatGPT, has revolutionized content creation across diverse domains, from text generation to image synthesis and beyond. While these AI systems demonstrate remarkable abilities in producing content that mimics human-like creativity [1], they also raise significant concerns regarding authenticity and the potential for misuse. In recent years, the proliferation of AI-generated content has brought to the forefront the urgent need to safeguard authenticity and mitigate the potential harms associated with the dissemination of deceptive or misleading information.

The capacity of AI algorithms to create highly convincing and seemingly genuine content poses profound challenges to the integrity of digital information, threatening to exacerbate problems such as misinformation, disinformation, and the...

Nature Machine Intelligence volume 5, pages 679–680 (2023)

Generative AI tools lower the cost of generating false but credible content at scale [1], defeating the already weak moderation defenses of social media platforms. Using inauthentic accounts and other tricks to exploit algorithmic and socio-cognitive vulnerabilities, bad actors can monetize false and harmful content, commit fraud and manipulate opinions for political gains [2,3]. Generative AI tools such as ChatGPT make it easier to create large volumes of false (but convincing) social media profiles and content. Narratives can even be tailored to a particular community by an inauthentic influence campaign. For example, through health disinformation, a foreign adversary can make an entire population more vulnerable to a future pandemic [4].

Image-generation models can create fake profile pictures that are indistinguishable from real people. It is occasionally possible to spot certain AI-generated images and text through glitches. For example, some generative adversarial network (GAN)-generated profiles can be detected from the positions of the eyes. Similarly, ChatGPT-generated content has been flagged from text patterns such as “as an AI model…” These are obvious signatures that the ChatGPT API has been used to automatically create and post content. But even as these glitches can now be found everywhere, they reveal what is likely to be only the tip of an iceberg. Before our lab developed tools to detect social bots 10 years ago [5], there was little awareness of bot manipulation.
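The "obvious signature" approach described above can be sketched as a simple pattern match. The phrase list and function below are hypothetical illustrations, not a real detection tool: production detectors are classifier-based and far more robust, and absence of a match proves nothing.

```python
import re

# Assumed, illustrative list of self-disclosure phrases that sometimes
# leak from unedited LLM output into posted content.
SIGNATURE_PATTERNS = [
    r"\bas an ai (language )?model\b",
    r"\bi cannot fulfill (this|that) request\b",
    r"\bregenerate response\b",
]

def has_llm_signature(text: str) -> bool:
    """Return True if the text contains a known LLM boilerplate phrase."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SIGNATURE_PATTERNS)
```

Such phrase lists catch only the most careless automated posting; as the passage notes, these visible glitches are likely the tip of an iceberg.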

Similarly, we currently have little awareness of the volume of inauthentic behaviour supported by AI.

Partha Konwar is Associate Project Officer (Communications) at the United Nations Educational, Scientific and Cultural Organization Mahatma Gandhi Institute of Education for Peace and Sustainable Development (UNESCO MGIEP). The widespread adoption of generative artificial intelligence (AI) platforms like ChatGPT and DALL-E following the COVID-19 pandemic has transformed many facets of our digital lives. Using natural language generation (NLG) and large language models (LLMs), generative AI has become an efficient productivity tool for creating a wide range of content (articles, visuals, reports, videos and voiceovers) without explicit instruction.

Much like seasoning a dish with salt, AI enhances productivity but requires careful control. The emergence of generative AI has undoubtedly increased output, effectiveness and creativity. It also carries significant hazards, especially regarding information integrity and human rights, since AI systems are being increasingly incorporated into digital platforms.

The Risks to Information Integrity Posed by Generative AI

Realistic AI-generated or -mediated content can be highly believable, hard to detect and rapidly spread. When such content conveys false or misleading information, it can deepen trust deficits.
