Veracity: An Online, Open-Source Fact-Checking Solution
The proliferation of misinformation poses a significant threat to society, exacerbated by the capabilities of generative AI. This demo paper introduces Veracity, an open-source AI system designed to empower individuals to combat misinformation through transparent and accessible fact-checking. Veracity leverages the synergy between Large Language Models (LLMs) and web retrieval agents to analyze user-submitted claims and provide grounded veracity assessments with intuitive explanations. Key features include multilingual support, numerical scoring of claim veracity, and an interactive interface inspired by familiar messaging applications. This paper will showcase Veracity’s ability to not only detect misinformation but also explain its reasoning, fostering media literacy and promoting a more informed society.

Experts have rated the dissemination of misinformation and disinformation as the #1 risk the world faces (Torkington, 2024).
This risk has only increased with the proliferation and advancement of generative AI (Bowen et al., 2024; Pelrine et al., 2023b). Responses to misinformation have up to now been largely centred around platform moderation. As large-scale social media platforms actively eliminate their content moderation teams (Horvath et al., 2025), they pass to the user the personal and social responsibility of assessing the reliability of claims and making well-grounded decisions in a landscape of uncertain information.
In the absence of strong platform-based approaches, solutions that support and empower individuals with tools to validate the information they encounter become essential in dampening the societally corrosive effects of misinformation. Misinformation is particularly dangerous when it influences public health and democratic processes, as seen in the spread of vaccine-related disinformation and politically motivated claims about censorship, both of which have been shown to exacerbate... With the rollback of content moderation efforts and increasing concerns over algorithmic bias on social media platforms, independent, reliable fact-checking tools are more necessary than ever. A promising solution in this area is an AI Steward that helps people fact-check and filter out manipulative and fake information. In fact, AI can outperform human fact-checkers in both accuracy (Wei et al., 2024; Zhou et al., 2024) and helpfulness (Zhou et al., 2024). Although there is rapid progress in improving the accuracy of such systems (Tian et al., 2024; Wei et al., 2024; Ram et al., 2024), there is much less research on how to make a high-accuracy system into a helpful and trustworthy one that users can rely on (Augenstein et al., 2024).

Our AI-powered open-source solution, Veracity, deploys large language models (LLMs) working with web retrieval agents to provide any member of the public with an efficient and grounded analysis of how factual their input text is. Moreover, through open-sourcing our platform, we hope to provide a test-bed for the research community to design effective fact-checking strategies.

AskVeracity is an agentic AI system that verifies factual claims through a combination of NLP techniques and large language models. A streamlined web application, it analyzes claims to determine their truthfulness through evidence gathering and analysis, supporting efforts in misinformation detection. The system gathers and analyzes evidence from multiple sources to provide transparent and explainable verdicts.
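One way to realize the numerical veracity scoring mentioned above is to map categorical verdicts onto a fixed scale. The sketch below is illustrative only: the label set and score values are assumptions, not the scale Veracity or AskVeracity actually uses.

```python
# Illustrative sketch: mapping categorical fact-check verdicts to a
# numeric veracity score in [0, 1]. The labels and values here are
# assumptions for demonstration, not the systems' actual scale.

VERDICT_SCORES = {
    "true": 1.0,
    "mostly true": 0.75,
    "mixed": 0.5,
    "mostly false": 0.25,
    "false": 0.0,
}

def veracity_score(verdict: str) -> float:
    """Return a score in [0, 1]; unrecognized labels map to 0.5 (uncertain)."""
    return VERDICT_SCORES.get(verdict.strip().lower(), 0.5)

print(veracity_score("Mostly True"))  # 0.75
```

A numeric score like this lets the interface present uncertainty gradually rather than forcing a binary true/false verdict on the user.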
AskVeracity is built with a modular architecture. For local development, API keys can be supplied via Streamlit secrets (Option 1, recommended); for deployment, push the code to a Hugging Face repository or upload files directly through the web interface.
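The Streamlit-secrets option above typically means placing keys in a local `.streamlit/secrets.toml` file. The key name below is an illustrative assumption, not necessarily the one AskVeracity expects:

```toml
# .streamlit/secrets.toml — keep this file out of version control.
# Key name is illustrative; substitute whatever the app reads.
OPENAI_API_KEY = "sk-..."
```

In application code such values are read with Streamlit's `st.secrets` accessor (e.g. `st.secrets["OPENAI_API_KEY"]`); when deployed to a hosted platform, secrets are usually configured through that platform's settings instead of a checked-in file.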
In today’s digital landscape, misinformation spreads at unprecedented speeds. The ease with which false information can propagate through social media platforms, news aggregators, and messaging apps has created an urgent need for effective fact-checking solutions. To address this challenge, we have developed AskVeracity, an AI-powered misinformation detection and fact-checking application designed to verify recent news and factual claims by gathering and analyzing evidence from multiple sources in real time. This blog post will walk through the architecture, implementation, and effectiveness of the AskVeracity system, providing insights into how modern AI techniques can be applied to combat misinformation. AskVeracity is a fact-checking and misinformation detection system that analyzes claims to determine their truthfulness through evidence gathering and analysis. Built with a focus on transparency and reliability, the application aims to support broader efforts in countering misinformation.
AskVeracity follows an agentic architecture based on the ReAct (Reasoning + Acting) framework. The system is built around a central agent that orchestrates individual tools to perform the fact-checking process. The following diagram illustrates the system architecture.
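The ReAct loop described above can be sketched as follows. Everything here is a simplified illustration: the tool names (`search_evidence`, `classify_verdict`), the stopping rule, and the placeholder logic are assumptions for demonstration, not AskVeracity's actual implementation.

```python
# Minimal sketch of a ReAct-style (Reason -> Act -> Observe) fact-checking
# loop. Tool names and logic are illustrative assumptions, not the real code.

def search_evidence(claim):
    # Placeholder for a web-retrieval tool; a real system would query
    # search APIs and return snippets with their sources.
    return [{"source": "example.org", "text": f"Reported evidence about: {claim}"}]

def classify_verdict(claim, evidence):
    # Placeholder for an LLM call that weighs the gathered evidence.
    if not evidence:
        return {"verdict": "insufficient evidence", "confidence": 0.0}
    return {"verdict": "unverified", "confidence": 0.5}

def react_fact_check(claim, max_steps=3):
    """Central agent: reason about the next action, act, observe, repeat."""
    evidence = []
    for _ in range(max_steps):
        # Reason: if evidence is thin, gather more; otherwise stop and classify.
        if len(evidence) < 2:
            evidence.extend(search_evidence(claim))  # Act + Observe
        else:
            break
    result = classify_verdict(claim, evidence)
    result["evidence"] = evidence
    return result

print(react_fact_check("The Eiffel Tower is in Berlin."))
```

The key design point is that the agent, not a fixed pipeline, decides when enough evidence has been gathered, which is what distinguishes an agentic architecture from a single retrieval-then-classify pass.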