Generating Fact Checking Explanations

Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein

Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, the language used in claims and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating justifications for verdicts on claims. This paper provides the first study of how such explanations can be generated automatically from available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact-checking system. A manual evaluation further suggests that the informativeness, coverage and overall quality of the generated explanations also improve in the multi-task model.
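The joint set-up described above can be sketched as a shared encoder feeding two heads — one scoring sentences for the explanation, one classifying veracity — whose losses are summed into a single objective. The toy numpy illustration below is a sketch under assumed shapes and random weights, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical tiny setup: n_sent sentence embeddings from a shared encoder.
n_sent, dim, n_labels = 6, 8, 3
H = rng.normal(size=(n_sent, dim))        # shared sentence representations
w_expl = rng.normal(size=dim)             # head 1: sentence-selection scores
W_ver = rng.normal(size=(n_labels, dim))  # head 2: veracity classifier

# Targets: which sentences belong in the justification, and the verdict label.
expl_target = np.array([1, 0, 1, 0, 0, 1], dtype=float)
verdict = 2

# Head 1: binary cross-entropy over per-sentence selection probabilities.
p_sent = 1 / (1 + np.exp(-(H @ w_expl)))
loss_expl = -np.mean(expl_target * np.log(p_sent)
                     + (1 - expl_target) * np.log(1 - p_sent))

# Head 2: cross-entropy over veracity labels from a pooled representation.
p_ver = softmax(W_ver @ H.mean(axis=0))
loss_ver = -np.log(p_ver[verdict])

# Joint objective: optimise both at once rather than training separately.
loss = loss_expl + loss_ver
```

Minimising `loss` updates the shared encoder with gradients from both tasks, which is what allows each objective to inform the other.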



[Generating Fact Checking Explanations](https://aclanthology.org/2020.acl-main.656/) (Atanasova et al., ACL 2020)



Fact-checking systems have become important tools to verify fake and misleading news. These systems become more trustworthy when human-readable explanations accompany the veracity labels. However, manually collecting these explanations is expensive and time-consuming.

Recent work has used extractive summarization to select a sufficient subset of the most important facts from the ruling comments (RCs) of a professional journalist to obtain fact-checking explanations. However, these explanations lack fluency and sentence coherence. In this work, we present an iterative edit-based algorithm that uses only phrase-level edits to perform unsupervised post-editing of disconnected RCs. To regulate our editing algorithm, we use a scoring function with components including fluency and semantic preservation. In addition, we show the applicability of our approach in a completely unsupervised setting. We experiment with two benchmark datasets, namely LIAR-PLUS and PubHealth.

We show that our model generates explanations that are fluent, readable, non-redundant, and cover the important information for the fact check.
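The iterative post-editing loop described above can be sketched as greedy hill-climbing over candidate phrase-level edits, accepted when they improve a weighted score combining fluency and semantic preservation. The scoring components below are toy proxies (a real system would use a language model for fluency and embeddings for semantic similarity), and all function names and weights are illustrative assumptions, not the paper's implementation:

```python
import random

def fluency(text):
    # Toy proxy: prefer sentences near a target length
    # (a real system would score with a language model).
    return 1.0 / (1.0 + abs(len(text.split()) - 12))

def semantic_preservation(text, source):
    # Toy proxy: word overlap with the source ruling comments
    # (a real system would compare sentence embeddings).
    t, s = set(text.split()), set(source.split())
    return len(t & s) / max(len(s), 1)

def score(text, source, w_flu=0.5, w_sem=0.5):
    return w_flu * fluency(text) + w_sem * semantic_preservation(text, source)

def iterative_edit(source, steps=50, seed=0):
    rng = random.Random(seed)
    best, best_score = source, score(source, source)
    for _ in range(steps):
        words = best.split()
        if len(words) < 3:
            break
        # Candidate phrase-level edit: delete a short random span.
        i = rng.randrange(len(words))
        j = min(len(words), i + rng.randint(1, 2))
        cand = " ".join(words[:i] + words[j:])
        if score(cand, source) > best_score:  # greedy accept
            best, best_score = cand, score(cand, source)
    return best
```

Because a candidate is accepted only when its score strictly improves, the loop never returns text that scores worse than the input; extending the candidate set with reordering and insertion edits follows the same pattern.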



