Tackling Falsehoods with Generative AI: A Systematic Cross-Topic ...
The rapid advancement of generative artificial intelligence (AI) has introduced both opportunities and challenges in the fight against misinformation. This scoping review synthesizes recent empirical studies to explore the dual role of generative AI, particularly large language models (LLMs), in the generation, detection, mitigation, and impact of misinformation. Analyzing 24 empirical studies, our review suggests that LLMs can generate highly convincing misinformation, often exploiting the cognitive biases and ideological leanings of their audiences, while also demonstrating the ability to detect false claims and ... Mitigation efforts show mixed results, with personalized corrections proving effective but safeguards inconsistently applied. Additionally, exposure to AI-generated misinformation was found to reduce trust and influence decision-making. This review underscores the need for standardized evaluation metrics, interdisciplinary collaboration, and stronger regulatory measures to ensure the responsible use of generative AI in the information ecosystem.
In recent years, the proliferation of misinformation has emerged as one of the most pressing challenges facing contemporary society (Lewandowsky 2023; Swire-Thompson and Lazer 2020). While definitions vary, misinformation is broadly understood as false or misleading information shared without malicious intent (Ireton and Posetti 2018), and, more broadly, as an umbrella term that includes all forms of false or ... 2022). In the health and science context, misinformation typically refers to content that contradicts expert consensus (Nan et al. 2023; Swire-Thompson and Lazer 2020; Vraga and Bode 2020).
Its societal impact is particularly acute in domains such as health, where it has been linked to vaccine hesitancy, resistance to public health measures, and confusion during crises like the COVID-19 pandemic (Nan et al. 2022a). Compounding this challenge is the rise of generative artificial intelligence (AI), a rapidly evolving class of technologies capable of producing coherent, human-like text, images, audio, and video. Powered by large language models (LLMs) such as OpenAI's GPT series and Google's Bard, generative AI systems now function not merely as information retrieval tools but as autonomous content creators (Cascella et al. 2023). Their fluency, scale, and adaptability offer unprecedented opportunities, and risks, for how information is generated, consumed, and trusted.
While generative AI holds promise for combating misinformation through tools for detection and correction, it also introduces new vectors for harm. One critical concern is the phenomenon of AI "hallucinations": instances where LLMs confidently produce factually inaccurate responses (Bandara 2024). Such content, when delivered with persuasive language and without disclaimers, can mislead users who interpret AI outputs as authoritative or objective, a tendency rooted in the "machine heuristic" (Sundar 2008). Moreover, generative AI may be intentionally exploited by bad actors to fabricate convincing disinformation, synthetic media, or counterfeit scientific reports (Kim et al. 2024).
Generative artificial intelligence (GenAI) represents a pivotal development in the contemporary information ecosystem.
Large language and image models now enable the rapid and scalable creation of (hyper)realistic yet synthetic content. As these models become more accessible and sophisticated, so too do their capacities to distort public discourse, manipulate perceptions, and undermine trust in democratic institutions. At the same time, these technologies offer promising tools for detection, resilience building, and possibly countering falsehoods. As such, there is rapid global and cross-disciplinary interest in understanding how AI-driven tools have added another dimension to the existing challenge of disinformation. This Special Section brings together timely and original scholarship on this challenge, with the aim of exploring the multifaceted role of AI, including how it can both contribute to, as well as potentially provide ...

López-Borrull, A., & Lopezosa, C. (2025). Mapping the Impact of Generative AI on Disinformation: Insights from a Scoping Review. Publications, 13(3), 33. https://doi.org/10.3390/publications13030033