The Impact and Opportunities of Generative AI in Fact-Checking
Generative AI, a technology that can create text, images, and other forms of media, is changing many jobs, especially in the field of information verification.
This article discusses how generative AI is being used in fact-checking, the benefits and challenges it brings, and the importance of human expertise in this process. Fact-checking is a practice where individuals or organizations verify claims made in media, politics, and other sectors to ensure they are true and reliable. Fact-checkers help maintain the quality of information being shared and combat misinformation that can mislead the public. Generative AI has become popular in many workplaces, especially in large companies that want to use this technology for efficiency. It can take basic information and use it to create human-like text or analyze data. AI models like ChatGPT can help create content faster, but they are also known to make mistakes or produce unreliable information.
Generative AI can assist fact-checkers in several ways. Editing: AI tools can help refine and improve the quality of fact-checking documents before they are published. They can identify grammatical errors and suggest better ways to present information.

At the 2025 Milton Wolf Seminar, panel discussions tackled one of the urgent questions of the digital age: how can truth be verified in a world where its boundaries are increasingly blurred? Against a backdrop of widespread disinformation, increasing polarisation and declining public trust, the very notion of truth itself has become contested. In such an environment, fact-checking faces a dual challenge: not only is it harder to agree on what qualifies as truth, but disinformation now spreads with unprecedented speed and scale, outpacing traditional methods of...
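The editing assistance described earlier can be illustrated with a toy sketch. Real newsroom tools typically call an LLM for this kind of review; the rule-based checks below (sentence length and a small list of confusable word pairs, both invented for this example) only illustrate the kind of feedback such tools produce on a draft before publication.

```python
import re

# A few easily confused word pairs to flag for a human editor.
# (Illustrative list only; a real tool would use a far richer model.)
CONFUSABLES = {"affect": "effect", "principal": "principle"}

def review_draft(text, max_words=30):
    """Return a list of human-readable issues found in a draft."""
    issues = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, sentence in enumerate(sentences, start=1):
        words = sentence.split()
        if len(words) > max_words:
            issues.append(f"sentence {i}: {len(words)} words; consider splitting")
        for w in words:
            bare = w.lower().strip(".,;:!?")
            if bare in CONFUSABLES:
                issues.append(f"sentence {i}: check '{bare}' vs '{CONFUSABLES[bare]}'")
    return issues
```

The point of the sketch is the workflow, not the rules: the tool surfaces candidate problems, and the human fact-checker decides what to change.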
Amid these challenges, large language models (LLMs) have emerged as both a source of the problem and, paradoxically, a potential part of the solution. LLMs are advanced generative artificial intelligence (AI) systems trained on large amounts of internet data to generate human-like output. On the one hand, LLMs can produce convincing falsehoods rapidly and at scale, exacerbating the spread of disinformation. On the other hand, their advanced capabilities might also be harnessed to detect, counter and even stop disinformation. This raises a critical question: could generative AI, despite its risks, become an ally in the fight for truth? The 2025 Milton Wolf Seminar placed a strong emphasis on the dangers posed by AI, such as how it can produce disinformation and mislead individuals.
This blog post explores an alternative angle. Instead of viewing AI only through the lens of risk, it asks whether this technology might also serve as part of the solution. I will first discuss the evolution of fact-checking and how it has adapted to the changing information ecosystem. Next, I will examine the challenges fact-checkers face today, especially the scale and speed of disinformation. I will then turn to LLMs, to consider whether this technology can help support the fact-checking process. Finally, I will reflect on how LLMs might strengthen the ongoing fight for truth.
Disinformation itself is not new, but social media has profoundly transformed how quickly and widely it spreads. Platforms such as Facebook and X (formerly Twitter) have redefined how citizens consume information. While democratising access, they have also created unregulated and minimally controlled spaces where disinformation can rapidly proliferate (Wittenberg & Berinsky, 2020). Unlike traditional journalism, where media professionals served as gatekeepers and information had to pass through institutional filters before reaching the public, social media platforms allow anyone to publish and share content instantly, without any... This has given rise to retroactive gatekeeping: a form of fact-checking that involves verifying the accuracy of claims after they have already begun circulating online (Singer, 2023).

Generative AI tools are reshaping the information environment in ways most audiences never see.
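The retroactive gatekeeping described above often starts with a claim-matching step: checking whether a newly circulating claim resembles something already fact-checked. Production systems use dense text embeddings for this; the sketch below substitutes plain Jaccard token overlap to stay self-contained, and the stored claims and verdicts are invented placeholders.

```python
import string

def tokens(text):
    """Lowercased, punctuation-stripped word set."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def jaccard(a, b):
    sa, sb = tokens(a), tokens(b)
    return len(sa & sb) / len(sa | sb)

# Invented mini-database of previously fact-checked claims.
FACT_CHECKS = {
    "5g towers spread the virus": "False",
    "voting machines deleted millions of votes": "False",
}

def lookup(claim, threshold=0.4):
    """Return (matched claim, verdict), or None if nothing is similar enough."""
    best = max(FACT_CHECKS, key=lambda c: jaccard(claim, c))
    return (best, FACT_CHECKS[best]) if jaccard(claim, best) >= threshold else None
```

A match routes the claim to an existing verdict instead of a fresh investigation, which is how fact-checkers cope with the speed and scale problem the seminar highlighted.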
From the data that trains them to the labour that maintains them, their inner workings raise urgent questions for journalism and democratic accountability. Our world is in the midst of a disruption triggered by the development of Artificial Intelligence (AI). Companies selling AI tools have become the most valuable corporations in modern times, worth trillions of dollars – more than the GDPs of most countries. They are becoming a pervasive influence on social, commercial, and political life, and shaking up industries. The media industry is among those facing new kinds of challenges due to the rise of AI. The practice and delivery of journalism, a vital component of functioning, healthy democracies, is changing in ways that are not obvious to its consumers.
To understand the impact of AI on our information environment and its political consequences requires a basic understanding of what Generative AI is and how it works. We need to “lift the bonnet” on what will increasingly power the information we receive and consume.

Judge Victoria Kolakowski sensed something was wrong with Exhibit 6C. Submitted by the plaintiffs in a California housing dispute, the video showed a witness whose voice was disjointed and monotone, her face fuzzy and lacking emotion. Every few seconds, the witness would twitch and repeat her expressions.
Kolakowski, who serves on California’s Alameda County Superior Court, soon realized why: The video had been produced using generative artificial intelligence. Though the video claimed to feature a real witness — who had appeared in another, authentic piece of evidence — Exhibit 6C was an AI “deepfake,” Kolakowski said. The case, Mendones v. Cushman & Wakefield, Inc., appears to be one of the first instances in which a suspected deepfake was submitted as purportedly authentic evidence in court and detected — a sign, judges and legal experts... Citing the plaintiffs’ use of AI-generated material masquerading as real evidence, Kolakowski dismissed the case on Sept. 9.
The plaintiffs sought reconsideration of her decision, arguing the judge suspected but failed to prove that the evidence was AI-generated. Judge Kolakowski denied their request for reconsideration on Nov. 6. The plaintiffs did not respond to a request for comment.

Generative AI (Gen AI), exemplified by ChatGPT, has recently witnessed a remarkable surge in popularity. This cutting-edge technology demonstrates an exceptional ability to produce human-like responses and engage in natural language conversations guided by context-appropriate prompts. However, its integration into education has become a subject of ongoing debate. This review examines the challenges of using Gen AI like ChatGPT in education and offers effective strategies. To retrieve the relevant literature, a search of reputable databases was conducted, resulting in the inclusion of twenty-two publications.
Using Atlas.ti, the analysis identified six primary challenges, with plagiarism as the most prevalent issue, closely followed by responsibility and accountability challenges. Concerns were also raised about privacy, data protection, safety, and security risks, as well as discrimination and bias. Additionally, there were challenges concerning the loss of soft skills and the risks of the digital divide. To address these challenges, a number of strategies were identified and subjected to critical evaluation to assess their practicality. Most of them were practical and aligned with ethical and pedagogical theories. Among the prevalent concepts, “ChatGPT” emerged as the most frequent, followed by “AI,” “student,” “research,” and “education,” highlighting a growing trend in educational discourse.
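The review performed its term-frequency analysis with Atlas.ti; the same kind of counting can be sketched in a few lines. The two "abstracts" below are invented placeholders, not text from the reviewed publications, and the vocabulary list simply mirrors the concepts the review reports.

```python
from collections import Counter
import string

def term_frequencies(documents, vocabulary):
    """Count occurrences of vocabulary terms across a list of documents."""
    counts = Counter()
    for doc in documents:
        for word in doc.lower().split():
            word = word.strip(string.punctuation)
            if word in vocabulary:
                counts[word] += 1
    return counts

docs = [
    "ChatGPT raises plagiarism concerns for every student.",
    "AI and ChatGPT reshape education research and student assessment.",
]
freqs = term_frequencies(docs, {"chatgpt", "ai", "student", "research", "education"})
# freqs["chatgpt"] == 2 for these two sample documents
```

At the scale of twenty-two publications the principle is identical: tally how often each concept appears, then rank the tallies to see which terms dominate the discourse.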
Moreover, close collaboration was evident among the leading countries, all forming a single cluster led by the United States. This comprehensive review provides implications, recommendations, and future prospects concerning the use of generative AI in education.

Keywords: Gen AI, ChatGPT, Education, Challenges, Solutions, Theory, Authors’ perspective, UNESCO

Artificial Intelligence (AI) refers to the field where machines or computer programs are designed to perform tasks that typically require human intellect, such as language processing, learning, problem-solving, and decision-making (Dalalah & Dalalah, 2023). Within AI, Gen AI constitutes a subset designed to produce new content, such as text, images, audio, or other data formats, often in a creative or human-like fashion. At the forefront of AI research and development stands OpenAI, a research company dedicated to advancing AI technology (Yilmaz & Yilmaz, 2023).
Among OpenAI’s notable achievements is ChatGPT (Yilmaz & Yilmaz, 2023), a prominent member of the generative pre-trained transformer (GPT) model family and the largest publicly accessible language model (Dave, Athaluri & Singh, 2023) through...