[2410.15669] Learning to Generate and Evaluate Fact-Checking Explanations with Transformers
Darius Feher, Abdullah Khered, Hao Zhang, Riza Batista-Navarro, Viktor Schlegel
Research output: Contribution to journal › Article › peer-review

Abstract: In an era increasingly dominated by digital platforms, the spread of misinformation poses a significant challenge, highlighting the need for solutions capable of assessing information veracity. Our research contributes to the field of Explainable Artificial Intelligence (XAI) by developing transformer-based fact-checking models that contextualise and justify their decisions by generating human-accessible explanations. Importantly, we also develop models for the automatic evaluation of explanations for fact-checking verdicts across different dimensions such as (self-)contradiction, hallucination, convincingness and overall quality.
By introducing human-centred evaluation methods and developing specialised datasets, we emphasise the need to align Artificial Intelligence (AI)-generated explanations with human judgements. This approach not only advances theoretical knowledge in XAI but also has practical implications, enhancing the transparency and reliability of, and users' trust in, AI-driven fact-checking systems. Furthermore, the development of our metric learning models is a first step towards increasing efficiency and reducing reliance on extensive manual assessment. In our experiments, the best-performing generative model achieved a Recall-Oriented Understudy for Gisting Evaluation-1 (ROUGE-1) score of 47.77, demonstrating superior performance in generating fact-checking explanations, particularly when provided with high-quality evidence. Additionally, the best-performing metric learning model showed a moderately strong correlation with human judgements on objective dimensions such as (self-)contradiction and hallucination, achieving a Matthews Correlation Coefficient (MCC) of around 0.7.
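The two headline metrics above can be sketched concretely. The following is an illustrative, dependency-free implementation of ROUGE-1 (unigram-overlap F1, without stemming or other preprocessing) and of the MCC from binary confusion counts; it is not the evaluation code used in the paper, and the token counts passed to `mcc` below are made-up numbers for demonstration only.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between a generated explanation
    and a reference explanation (no stemming, whitespace tokenisation)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from a binary confusion matrix,
    e.g. predicted vs. human-annotated hallucination labels."""
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / den if den else 0.0

print(rouge1_f1("the cat sat on the mat", "the cat is on the mat"))
print(mcc(45, 40, 10, 5))  # hypothetical counts; yields an MCC near 0.7
```

Unlike accuracy, MCC accounts for all four confusion-matrix cells, which makes it a more informative summary when label distributions for dimensions like hallucination are imbalanced.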
[Generating Fact Checking Explanations](https://aclanthology.org/2020.acl-main.656/) (Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein; ACL 2020)
arXiv:2410.15669v1 [cs.CL] 21 Oct 2024