Fact-Checking and Misinformation Detection Tools on GitHub
AskVeracity is a streamlined web application that analyzes claims to determine their truthfulness through evidence gathering and analysis, supporting efforts in misinformation detection. It is an agentic AI system that verifies factual claims through a combination of NLP techniques and large language models, gathering and analyzing evidence from multiple sources to provide transparent and explainable verdicts. AskVeracity is built with a modular architecture. For local development, the recommended option is to supply API keys via Streamlit secrets; for deployment, push the code to a Hugging Face repository or upload files directly through the web interface.
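The Streamlit-secrets option mentioned above typically means placing keys in a `.streamlit/secrets.toml` file that the app reads at runtime via `st.secrets`. The key names below are illustrative assumptions, not taken from the AskVeracity repository; check its README for the actual names:

```toml
# .streamlit/secrets.toml — read at runtime, e.g. st.secrets["OPENAI_API_KEY"]
# Key names here are hypothetical examples.
OPENAI_API_KEY = "sk-..."
NEWS_API_KEY = "..."
```

On Hugging Face Spaces, the same values are usually provided as repository secrets rather than committed in this file.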
The MuMiN dataset is a challenging misinformation benchmark for automatic misinformation detection models. The dataset is structured as a heterogeneous graph and features 21,565,018 tweets and 1,986,354 users, belonging to 26,048 Twitter threads, discussing 12,914 fact-checked claims from 115 fact-checking organisations in 41 different languages, spanning a... The dataset has three different sizes and features two graph classification tasks: See Getting Started for a quickstart as well as an in-depth tutorial, including the building and training of multiple misinformation classifiers on MuMiN. We have created a tutorial which takes you through the dataset as well as shows how one could create several kinds of misinformation classifiers on the dataset. The tutorial can be found here.
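A heterogeneous graph like MuMiN's can be modelled as typed node and edge sets. The minimal sketch below uses plain Python (not the official `mumin` package) with invented toy data, just to show the shape of a claim–tweet–user graph and one simple structural feature a classifier might use:

```python
# Toy heterogeneous graph in the spirit of MuMiN: three node types
# (claim, tweet, user) and typed edges between them. All data here is
# invented for illustration; the real dataset is loaded via the mumin package.
from collections import defaultdict

# Node stores, keyed by node type
nodes = {
    "claim": {"c1": {"label": "misinformation"}, "c2": {"label": "factual"}},
    "tweet": {"t1": {}, "t2": {}, "t3": {}},
    "user":  {"u1": {}, "u2": {}},
}

# Typed edges: (source_type, relation, target_type) -> list of (src, dst) pairs
edges = {
    ("tweet", "discusses", "claim"): [("t1", "c1"), ("t2", "c1"), ("t3", "c2")],
    ("user", "posted", "tweet"):     [("u1", "t1"), ("u1", "t2"), ("u2", "t3")],
}

def tweets_per_claim(edges):
    """One simple structural feature: how many tweets discuss each claim."""
    counts = defaultdict(int)
    for _src, dst in edges[("tweet", "discusses", "claim")]:
        counts[dst] += 1
    return dict(counts)

print(tweets_per_claim(edges))  # {'c1': 2, 'c2': 1}
```

Graph libraries such as DGL or PyTorch Geometric represent the same structure with typed node/edge tensors, which is how the two graph classification tasks are typically approached.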
See the leaderboard for a list of the best performing models. For new submissions, please email ryan.mcconville@bristol.ac.uk.
Haonan Li, Xudong Han, Hao Wang, Yuxia Wang, Minghan Wang, Rui Xing, Yilin Geng, Zenan Zhai, Preslav Nakov, Timothy Baldwin. [Loki: An Open-Source Tool for Fact Verification](https://aclanthology.org/2025.coling-demos.4/) (Li et al., COLING 2025). ACL materials are Copyright © 1963–2025 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research.
Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License. The ACL Anthology is managed and built by the ACL Anthology team of volunteers. Site last built on 27 November 2025 at 10:42 UTC with commit 542848d.

Related GitHub repositories include:

- FacTool: Factuality Detection in Generative AI
- Links to conference/journal publications in automated fact-checking (resources for the TACL22/EMNLP23 paper)
- A paper list of misinformation research using (multi-modal) large language models, i.e., (M)LLMs
- OSINT resources and tools by country, structured for fact-checkers and digital profilers
- A development environment for Meedan Check, a collaborative media annotation platform

We introduce Loki, an open-source tool designed to address the growing problem of misinformation. Loki adopts a human-centered approach, striking a balance between the quality of fact-checking and the cost of human involvement. It decomposes the fact-checking task into a five-step pipeline: breaking down long texts into individual claims, assessing their check-worthiness, generating queries, retrieving evidence, and verifying the claims. Instead of fully automating the claim verification process, Loki provides essential information at each step to assist human judgment, especially for general users such as journalists and content moderators.
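The five-step pipeline described above can be sketched as a chain of plain functions. Everything below is an illustrative stub with toy heuristics, not Loki's actual code or API:

```python
# Illustrative sketch of a five-step fact-checking pipeline in the style
# described above. All functions are stubs with toy logic, not Loki's code.

def decompose(text: str) -> list[str]:
    """Step 1: break a long text into individual claim sentences."""
    return [s.strip() for s in text.split(".") if s.strip()]

def is_checkworthy(claim: str) -> bool:
    """Step 2: keep only claims worth checking (toy heuristic: contains a digit)."""
    return any(ch.isdigit() for ch in claim)

def generate_queries(claim: str) -> list[str]:
    """Step 3: turn a claim into search queries."""
    return [claim, f"fact check {claim}"]

def retrieve_evidence(query: str) -> list[str]:
    """Step 4: fetch evidence (stubbed; a real system would call a search API)."""
    return [f"snippet for: {query}"]

def verify(claim: str, evidence: list[str]) -> dict:
    """Step 5: surface the claim plus evidence for human judgment."""
    return {"claim": claim, "evidence": evidence, "verdict": "needs human review"}

def pipeline(text: str) -> list[dict]:
    results = []
    for claim in decompose(text):
        if not is_checkworthy(claim):
            continue
        evidence = [snip for q in generate_queries(claim)
                    for snip in retrieve_evidence(q)]
        results.append(verify(claim, evidence))
    return results

out = pipeline("The tower is 330 m tall. I like it.")
print(out[0]["claim"])  # only the numeric claim survives the check-worthiness filter
```

The key design point, per the abstract, is that step 5 deliberately stops short of an automated verdict and hands the assembled evidence to a human.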
Moreover, it has been optimized for latency, robustness, and cost efficiency at a commercially usable level. Loki is released under an MIT license and is available on GitHub (https://github.com/Libr-AI/OpenFactVerification). We also provide a video presenting the system and its capabilities (https://www.youtube.com/watch?v=L_3Dp41Lk_k).

Loki: An Open-Source Tool for Fact Verification. Haonan Li (LibrAI, MBZUAI), Xudong Han (LibrAI, MBZUAI), Hao Wang (LibrAI), Yuxia Wang (LibrAI, MBZUAI), Minghan Wang (Monash University), Rui Xing (MBZUAI, The University of Melbourne), Yilin Geng (LibrAI, The University of Melbourne), Zenan Zhai (LibrAI), Preslav Nakov (MBZUAI), Timothy Baldwin (LibrAI, MBZUAI, The University of Melbourne).

In today’s digital landscape, the rapid spread of misinformation has become a significant societal problem, with far-reaching consequences for politics, public health, and social stability (Pan et al., 2023; Augenstein et al., 2023). With the rise of online platforms, users are exposed to large volumes of information, often without the ability to assess its accuracy.
While manual fact-checking is reliable, it is labor-intensive, time-consuming, and often requires domain expertise, creating a gap in which misinformation can spread unchecked and cause harm before being addressed. To address this problem, automated fact-checking systems have been proposed, but they have mostly focused on full automation, which can negatively impact quality.

Misinformation detection is the study of identifying documents that contain falsified information in the form of text, images, videos, and other media. The research explores approaches for detecting misinformation in social media posts, news articles, videos, and related multimedia data. Selected publications:

- Cheema, G.S., Hakimov, S., Sittar, A., Müller-Budack, E., Otto, C. and Ewerth, R. (2022). MM-Claims: A Dataset for Multimodal Claim Detection in Social Media. Findings of NAACL 2022. PDF, Git repo
- Müller-Budack, E., Theiner, J., Diering, S., Idahl, M., Hakimov, S. and Ewerth, R. (2021). Multimodal news analytics using measures of cross-modal entity and context consistency. International Journal of Multimedia Information Retrieval, pp. 1–15. PDF, Git repo
- Cheema, G.S., Hakimov, S., Müller-Budack, E. and Ewerth, R. (2021). On the Role of Images for Analyzing Claims in Social Media. Proceedings of the CLEOPATRA workshop, co-located with The Web Conference (WWW). PDF, Git repo
- Cheema, G.S., Hakimov, S. and Ewerth, R. (2020). TIB’s Visual Analytics Group at MediaEval’20: Detecting Fake News on Corona Virus and 5G Conspiracy. MediaEval workshop, FakeNews task. PDF, Git repo

Related GitHub repositories include:

- Towards Robust Fact-Checking: A Multi-Agent System with Advanced Evidence Retrieval
- Scripts and workflows for translating fact-checking datasets and automating claim classification using large language models (LLMs)
- Code associated with the NAACL 2025 paper "COVE: COntext and VEracity prediction for out-of-context images"

Tathya (तथ्य, "truth") is an agentic fact-checking system that verifies claims using multiple sources, including Google Search, DuckDuckGo, Wikidata, and news APIs.
It provides structured analysis with confidence scores, detailed explanations, and transparent source attribution through a modern Streamlit interface and FastAPI backend.

debunkr.org Dashboard is a browser extension that helps you analyze suspicious content on the web using AI-powered analysis: highlight text on any website, right-click, and let its AI analyze it for bias, manipulation, and power structures.

Several news outlets across Brazil are participating in the Codesinfo project to produce digital tools to fight disinformation. Brazil’s Institute for the Development of Journalism (Projor, for its initials in Portuguese) launched the second phase of the Innovation Fund to Combat Disinformation (Codesinfo), focused on the dissemination of five open-source digital tools.
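Combining verdicts from several independent sources into a single confidence score, as multi-source systems like Tathya describe, can be sketched as a weighted vote. The source names and reliability weights below are invented for illustration, not taken from Tathya:

```python
# Toy confidence aggregation across evidence sources. Source names and
# reliability weights are hypothetical, not taken from any real system.

def aggregate(verdicts: dict) -> tuple:
    """verdicts maps source -> (label, source_weight).
    Returns the winning label and its share of the total weight."""
    totals = {}
    for label, weight in verdicts.values():
        totals[label] = totals.get(label, 0.0) + weight
    best = max(totals, key=totals.get)
    confidence = totals[best] / sum(totals.values())
    return best, round(confidence, 2)

verdicts = {
    "web_search": ("supported", 0.5),
    "wikidata":   ("supported", 0.3),
    "news_api":   ("refuted",   0.2),
}
print(aggregate(verdicts))  # ('supported', 0.8)
```

Reporting the per-source labels alongside the aggregate is what makes the attribution transparent: a user can see which source dissented and why the score is not 1.0.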
Developed by Brazilian media outlets in late 2024, the solutions are available free of charge to any journalistic organization inside or outside Brazil. To reach international media outlets, Francisco Belda, Projor’s director of operations and coordinator of Codesinfo, said the Codesinfo website is being translated into English and Spanish. “We believe that the five tools strengthen civic journalism in general,” Belda told LatAm Journalism Review (LJR). “This is due to their role in valuing the concept of authorship (Quem Disse? tool), fact-checking (Check-up), scientific evidence in environmental and climate change coverage (Capí chatbot), production of short videos based on textual reports (Mosaico) and in the provision of updated contextual information (Xarta).” Capí is an artificial intelligence chatbot developed by Ambiental Media that was launched in beta in November 2024.
The tool’s purpose is to provide clear, up-to-date, and reliable answers to users’ questions on climate issues.