Disinformation Detection: An Evolving Challenge in the Age of LLMs
In a recent submission to the arXiv* server, researchers comprehensively examined the detection of large language model (LLM)-generated misinformation.

*Important notice: arXiv publishes preliminary scientific reports that are not peer-reviewed and, therefore, should not be regarded as definitive, used to guide development decisions, or treated as established information in the field of artificial intelligence.

The emergence of LLMs, including models such as the Chat Generative Pre-trained Transformer (ChatGPT) and Meta's Llama, has marked a significant milestone in computational social science (CSS). While LLMs have opened doors to extensive studies of human language and behavior, concerns about their potential misuse for disinformation have arisen. As these models become increasingly capable of generating convincing, human-like content, the risk of their exploitation to create misleading information at scale becomes evident.
Recent research has highlighted this concern, noting that AI-generated disinformation is both inexpensive to produce and highly effective. In text generation, there has been a transition from small language models (SLMs) to LLMs with billions of parameters, resulting in significant advances. Models such as the Language Model for Dialogue Applications (LaMDA), BLOOM, the Pathways Language Model (PaLM), and the generative pre-trained transformer (GPT) family have demonstrated the ability to produce human-level responses. However, the format of input prompts can influence performance, and advanced prompt engineering techniques are crucial for guiding LLMs toward more accurate, higher-quality responses.

Before the rise of LLMs, disinformation detection centered primarily on SLMs such as bidirectional encoder representations from transformers (BERT), GPT-2, and the text-to-text transfer transformer (T5). Deep learning has played a pivotal role in detecting disinformation, with models such as the hybrid deep model CSI and FakeBERT employing neural networks to identify textual features indicative of disinformation, as illustrated in the sketch below.
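To make the SLM-era approach concrete, the following is a minimal sketch, not the paper's code, of fine-tuning a BERT-style classifier for disinformation detection in the spirit of detectors like FakeBERT. The checkpoint name, toy training examples, label convention (1 = disinformation), and hyperparameters are illustrative assumptions.

```python
# Hedged sketch: fine-tune a BERT-style sequence classifier on labelled news text.
# Dataset, labels, and hyperparameters are placeholders, not from the paper.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class NewsDataset(Dataset):
    """Wraps (text, label) pairs; label 1 = disinformation, 0 = legitimate."""
    def __init__(self, texts, labels, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        return {"input_ids": self.enc["input_ids"][i],
                "attention_mask": self.enc["attention_mask"][i],
                "labels": self.labels[i]}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy examples standing in for a real labelled corpus.
train_texts = ["Scientists confirm the new vaccine passed phase-3 trials.",
               "Secret document proves the moon landing was staged."]
train_labels = [0, 1]
train_ds = NewsDataset(train_texts, train_labels, tokenizer)

args = TrainingArguments(output_dir="bert-disinfo", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

In practice such detectors are trained on large labelled corpora; the point of the sketch is that they learn surface-level textual features of disinformation, which is exactly what LLM-generated text may no longer exhibit.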
The introduction of LLMs, with their vast parameter counts, has significantly complicated disinformation detection, given their ability to produce natural, human-like text. This shift raises critical questions about the effectiveness of existing detection methods designed around SLMs.
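One way to probe that question is to evaluate an SLM-era detector separately on human-written and LLM-generated disinformation. The sketch below assumes a fine-tuned checkpoint like the one produced above, plus two hypothetical held-out test files (`test_human.json`, `test_llm.json`) storing `{"text": ..., "label": 0 or 1}` records; none of these names come from the paper.

```python
# Hedged sketch: compare detector accuracy on human-written vs. LLM-generated text.
# Checkpoint path, file names, and data format are assumptions for illustration.
import json
from transformers import pipeline

# Assumed local checkpoint from a fine-tuning step like the one sketched above.
detector = pipeline("text-classification", model="bert-disinfo", truncation=True)

def accuracy(examples):
    """examples: list of {"text": str, "label": int} dicts (1 = disinformation)."""
    correct = 0
    for ex in examples:
        pred = detector(ex["text"])[0]["label"]   # e.g. "LABEL_1" by default
        pred_id = int(pred.split("_")[-1])
        correct += int(pred_id == ex["label"])
    return correct / len(examples)

with open("test_human.json") as f:
    human_written = json.load(f)
with open("test_llm.json") as f:
    llm_generated = json.load(f)

print("accuracy on human-written disinformation:", accuracy(human_written))
print("accuracy on LLM-generated disinformation:", accuracy(llm_generated))
# A large gap between the two numbers would support the concern raised above.
```

A marked drop on the LLM-generated split would indicate that features learned from human-written disinformation do not transfer, which is the core worry the researchers examine.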