arXiv 2503.18293: Fact-Checking AI-Generated News Reports: Can LLMs Catch...

Bonisiwe Shabane




Xinyu Wang, Wenbo Zhang, Sai Koneru, Hangzhi Guo, Bonam Mingole, S. Shyam Sundar, Sarah Rajtmajer, Amulya Yadav. [Have LLMs Reopened the Pandora’s Box of AI-Generated Fake News?](https://aclanthology.org/2025.naacl-long.142/) (Wang et al., NAACL 2025).

Scientific Reports (2025), early access: the publisher notes that this is an unedited version of the manuscript, provided to give early access to its findings; it will undergo further editing before final publication, and errors affecting the content may be present.

Mainstream media, with its broad reach, plays a central role in shaping public opinion and thus warrants close scrutiny. Subtle forms of media bias, such as selective fact presentation and tone, can meaningfully influence public attitudes even when reporting remains factually accurate. Although such effects have been widely studied by scholars of framing, much of the existing research focuses on specific topics and relies on manually constructed or pre-existing frames, limiting both scalability and... Here we introduce a novel framework that leverages large language models (LLMs) to generate synthetic news articles by systematically varying the selection and tone of the content while holding factual accuracy and other features... We evaluate the impact of these alternative framings in a large, pre-registered randomized experiment (N = 2,141), and find that selective presentation of accurate information can significantly shift individuals’ policy views and emotional responses...

These effects are consistently stronger for negative than for positive framings, and are more pronounced among individuals who report being less informed about the topic. Our findings demonstrate the persuasive power of subtle bias in mainstream news, as well as the value of LLMs as tools for scalable, controlled investigation of media effects. All survey data, experiment materials, and analysis code required to replicate the results are available on the project’s OSF page (https://osf.io/9g7sq/?view_only=189a8b9b2f644433bdbfcd4fc5c63ffe).
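The experimental design described in the abstract can be sketched as a small enumeration: hold a fixed pool of accurate facts constant, then cross every non-empty subset of facts (selection) with every tone (framing), and render each condition as an LLM prompt. The sketch below is purely illustrative; the fact pool, tone labels, and function names are assumptions, not taken from the paper.

```python
from itertools import product

# Illustrative fact pool and tone set (hypothetical, not from the study).
FACTS = [
    "The policy reduced emissions by 12% over five years.",
    "Implementation costs exceeded the initial budget by 8%.",
    "Independent audits confirmed the reported figures.",
]
TONES = ["neutral", "positive", "negative"]

def build_prompt(selected_facts, tone):
    """Compose an LLM instruction that rewrites the same facts under a given tone,
    holding factual content fixed (the paper's key control)."""
    facts_block = "\n".join(f"- {f}" for f in selected_facts)
    return (
        f"Write a short news paragraph using ONLY the facts below, "
        f"with a {tone} tone. Do not add, remove, or alter any fact.\n"
        f"{facts_block}"
    )

def framing_conditions(facts, tones):
    """Enumerate every framing condition: each non-empty fact subset x each tone."""
    subsets = [
        [f for f, keep in zip(facts, mask) if keep]
        for mask in product([True, False], repeat=len(facts))
        if any(mask)  # drop the empty selection
    ]
    return [(subset, tone) for subset in subsets for tone in tones]

conditions = framing_conditions(FACTS, TONES)
print(len(conditions))  # prints 21: (2^3 - 1) subsets x 3 tones
```

Each `(subset, tone)` pair would then be passed through `build_prompt` to an LLM to produce one synthetic article per condition, so that any difference in reader response is attributable to selection and tone rather than to factual content.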

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15175). The launch of ChatGPT at the end of November 2022 triggered a general reflection on its benefits for supporting fact-checking workflows and practices. Between the excitement over the availability of AI systems that no longer require mastery of programming skills and the exploration of a new field of experimentation, academics and professionals foresaw the benefits of...

Critics have raised concerns about the fairness of the data used to train Large Language Models (LLMs), including the risk of artificial hallucinations and the proliferation of machine-generated content that could spread misinformation. Given the ethical challenges LLMs pose, how can professional fact-checking mitigate these risks? This narrative literature review explores the current state of LLMs in the context of fact-checking practice, highlighting three complementary mitigation strategies related to education, ethics, and professional practice.
