AskVeracity: An Agentic Fact-Checking System for Misinformation
A streamlined web application that analyzes claims to determine their truthfulness through evidence gathering and analysis, supporting efforts in misinformation detection. AskVeracity is an agentic AI system that verifies factual claims through a combination of NLP techniques and large language models. The system gathers and analyzes evidence from multiple sources to provide transparent and explainable verdicts, and is built with a modular architecture. For local development, the recommended setup is to supply API keys via Streamlit secrets; for deployment, push the code to a Hugging Face repository or upload the files directly through the web interface.
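The Streamlit secrets setup mentioned above typically means placing API keys in a `.streamlit/secrets.toml` file. A minimal sketch follows; the key names are illustrative assumptions, not necessarily the ones AskVeracity actually reads:

```toml
# .streamlit/secrets.toml — illustrative only; the key names
# AskVeracity expects may differ.
OPENAI_API_KEY = "sk-..."
NEWS_API_KEY = "your-news-api-key"
```

In app code, these values are then available via `st.secrets["OPENAI_API_KEY"]`, which keeps credentials out of the repository.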
In today’s digital landscape, misinformation spreads at unprecedented speed. The ease with which false information propagates through social media platforms, news aggregators, and messaging apps has created an urgent need for effective fact-checking solutions. To address this challenge, we have developed AskVeracity – an AI-powered misinformation detection and fact-checking application designed to verify recent news and factual claims by gathering and analyzing evidence from multiple sources in real time. This blog post walks through the architecture, implementation, and effectiveness of the AskVeracity system, providing insights into how modern AI techniques can be applied to combat misinformation. AskVeracity is a fact-checking and misinformation detection system that analyzes claims to determine their truthfulness through evidence gathering and analysis. Built with a focus on transparency and reliability, the application aims to support broader efforts in countering misinformation.
AskVeracity follows an agentic architecture based on the ReAct (Reasoning + Acting) framework. The system is built around a central agent that orchestrates individual tools to perform the fact-checking process. The following diagram illustrates the system architecture.
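The ReAct pattern described above – an agent that alternates between reasoning about which tool to use and acting by invoking it – can be sketched in plain Python. This is an illustrative stub, not AskVeracity's actual implementation: in the real system an LLM chooses actions and the tools are genuine NLP components.

```python
# Minimal ReAct-style agent loop (illustrative sketch; AskVeracity's
# orchestration uses an LLM to select and sequence tools).
def react_agent(claim, tools, steps=("extract", "retrieve", "classify", "verdict")):
    """Alternate reasoning (choosing a tool) and acting (running it)."""
    observations = []
    for step in steps:
        tool = tools[step]                 # Reason: pick the tool for this step.
        result = tool(claim, observations) # Act: run it and observe the output.
        observations.append((step, result))
    return observations[-1][1]             # Final tool returns the verdict.

# Stub tools standing in for real NLP components.
tools = {
    "extract":  lambda claim, obs: claim.strip(),
    "retrieve": lambda claim, obs: ["source A supports the claim"],
    "classify": lambda claim, obs: "support",
    "verdict":  lambda claim, obs: "True (supported by retrieved evidence)",
}

print(react_agent("The Eiffel Tower is in Paris.", tools))
```

The value of this structure is that each tool stays independently testable while the agent decides, step by step, what to do next based on accumulated observations.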
Agentic Fact-Checking System Architecture refers to computational frameworks where autonomous or semi-autonomous agents orchestrate information retrieval, evidence evaluation, reasoning, and verdict explanation to assess the veracity of claims. Recent work conceptualizes "agentic" systems as those that decompose, coordinate, and dynamically adapt fact-checking workflows across modular components, supporting scalable, transparent, and often interactive operations in complex, real-world misinformation settings (Miranda et al., 2019).
Agentic fact-checking architectures are organized into pipelines composed of modular, sequential components that reflect the multi-step workflow of human fact-checkers. A canonical design includes:

Claim → [Retrieval] → [Ranking] → [NLI] → Verdict
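The canonical pipeline above can be sketched as a chain of stages. Each stage here is a stub standing in for a real component (a search API for retrieval, a reranker for ranking, an NLI model for stance classification); the aggregation rule is an illustrative assumption, not a specific system's method:

```python
# Sketch of the Claim -> Retrieval -> Ranking -> NLI -> Verdict pipeline.
def retrieve(claim):
    # Fetch candidate evidence passages (stubbed; real systems query search APIs).
    return ["passage 1", "passage 2", "passage 3"]

def rank(claim, passages, k=2):
    # Keep the k passages most relevant to the claim (stub: keep the first k).
    return passages[:k]

def nli(claim, passage):
    # Classify the claim/passage pair as entailment, contradiction, or neutral
    # (stubbed; real systems run a natural language inference model).
    return "entailment"

def verdict(labels):
    # Aggregate per-passage stance labels into a final verdict (majority rule).
    if labels.count("entailment") > labels.count("contradiction"):
        return "True"
    if labels.count("contradiction") > labels.count("entailment"):
        return "False"
    return "Uncertain"

def fact_check(claim):
    passages = rank(claim, retrieve(claim))
    labels = [nli(claim, p) for p in passages]
    return verdict(labels)

print(fact_check("Example claim"))  # -> "True" with these stubs
```

Because the stages are modular, any one of them can be swapped out (for example, a different retriever or NLI model) without touching the rest of the pipeline – the property the "modular, sequential components" framing is pointing at.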
The proliferation of misinformation poses a significant threat to society, exacerbated by the capabilities of generative AI. This demo paper introduces Veracity, an open-source AI system designed to empower individuals to combat misinformation through transparent and accessible fact-checking. Veracity leverages the synergy between Large Language Models (LLMs) and web retrieval agents to analyze user-submitted claims and provide grounded veracity assessments with intuitive explanations. Key features include multilingual support, numerical scoring of claim veracity, and an interactive interface inspired by familiar messaging applications. This paper will showcase Veracity’s ability to not only detect misinformation but also explain its reasoning, fostering media literacy and promoting a more informed society.
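The numerical scoring of claim veracity mentioned above could be computed by aggregating per-evidence stance labels into a single score. The formula below is a hedged illustration of the idea, not Veracity's published scoring method:

```python
# Illustrative veracity score: maps evidence stances to [-1, 1], where
# 1 means all evidence supports the claim and -1 means all refutes it.
# This is NOT Veracity's actual scoring formula.
def veracity_score(stances):
    """stances: list of 'support', 'refute', or 'neutral' labels."""
    if not stances:
        return 0.0
    support = stances.count("support")
    refute = stances.count("refute")
    return (support - refute) / len(stances)

print(veracity_score(["support", "support", "refute", "neutral"]))  # 0.25
```

A score near zero then signals either conflicting or mostly neutral evidence, which is exactly the case where an intuitive explanation matters most to the user.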
Experts have rated the dissemination of misinformation and disinformation as the #1 risk the world faces (Torkington, 2024). This risk has only increased with the proliferation and advancement of generative AI (Bowen et al., 2024; Pelrine et al., 2023b). Responses to misinformation have up to now been largely centred around platform moderation. As large-scale social media platforms actively eliminate their content moderation teams (Horvath et al., 2025), they pass to the user the personal and social responsibility of assessing the reliability of claims and making well-grounded decisions in a landscape of uncertain information. In the absence of strong platform-based approaches, solutions that support and empower individuals with tools to validate the information they encounter become essential in dampening the societally corrosive effects of misinformation. Misinformation is particularly dangerous when it influences public health and democratic processes, as seen in the spread of vaccine-related disinformation and politically motivated claims about censorship. With the rollback of content moderation efforts and increasing concerns over algorithmic bias on social media platforms, independent, reliable fact-checking tools are more necessary than ever. A promising solution in this area is an AI steward that helps people fact-check and filter out manipulative and fake information. In fact, AI can outperform human fact-checkers in both accuracy (Wei et al., 2024; Zhou et al., 2024) and helpfulness (Zhou et al., 2024). Although there is rapid progress in improving the accuracy of such systems (Tian et al., 2024; Wei et al., 2024; Ram et al., 2024), there is much less research on how to turn a high-accuracy system into a helpful and trustworthy one that users can rely on (Augenstein et al., 2024). Our AI-powered open-source solution, Veracity, deploys large language models (LLMs) working with web retrieval agents to provide any member of the public with an efficient and grounded analysis of how factual their input text is. Moreover, by open-sourcing our platform, we hope to provide a test-bed for the research community to design effective fact-checking strategies.