Generating Fact Checking Explanations (ACL Anthology)
Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein [Generating Fact Checking Explanations](https://aclanthology.org/2020.acl-main.656/) (Atanasova et al., ACL 2020) ACL materials are Copyright © 1963–2025 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License.
The ACL Anthology is managed and built by the ACL Anthology team of volunteers. Site last built on 27 November 2025 at 10:42 UTC with commit 542848d.
Semantics: Textual Inference and Other Areas of Semantics (Long Paper). © 2020 Association for Computational Linguistics.

Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is an understanding of how to automate the most elaborate part of the process: generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and of how this task can be modelled jointly with veracity prediction.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.

When a potentially viral news item is rapidly or indiscriminately published by a news outlet, the responsibility of verifying its truthfulness is often passed on to the audience. To alleviate this problem, independent teams of professional fact checkers manually verify the veracity and credibility of common or particularly check-worthy statements circulating the web. However, these teams have limited resources to perform manual fact checks, creating a need for automating the fact checking process. The current research landscape in automated fact checking comprises systems that estimate the veracity of claims based on available metadata and evidence pages. Datasets like LIAR Wang (2017) and the multi-domain dataset MultiFC Augenstein et al.
(2019) provide real-world benchmarks for evaluation. There are also larger-scale artificial datasets, e.g., the FEVER dataset Thorne et al. (2018) based on Wikipedia articles. As evident from the performance of state-of-the-art methods on both real-world data (0.492 macro F1 score, Augenstein et al. (2019)) and artificial data (68.46 FEVER score, i.e., label accuracy conditioned on the evidence provided for 'supported' and 'refuted' claims, Stammbach and Neumann (2019)), the task of automating fact checking remains a significant challenge. A prevalent component of existing fact checking systems is a stance detection or textual entailment model that predicts whether a piece of evidence contradicts or supports a claim Ma et al.
(2018); Mohtarami et al. (2018); Xu et al. (2018). Existing research, however, rarely attempts to directly optimise the selection of relevant evidence, i.e., the self-sufficient explanation for predicting the veracity label Thorne et al. (2018); Stammbach and Neumann (2019). On the other hand, Alhindi et al.
(2018) have reported a significant performance improvement of over 10% macro F1 score when the system is provided with a short human explanation of the veracity label. Still, there have been no attempts to produce such explanations automatically, and automating the most elaborate part of the process, producing the justification for the veracity prediction, is an understudied problem. In the field of NLP as a whole, both explainability and interpretability methods have gained importance recently, because most state-of-the-art models are large, neural black-box models. Interpretability, on the one hand, provides an overview of the inner workings of a trained model, such that a user could, in principle, follow the same reasoning to arrive at predictions for new instances. However, with the increasing number of neural units in published state-of-the-art models, it becomes infeasible for users to track all decisions a model makes. Explainability, on the other hand, deals with providing local explanations for single data points, which either highlight the most salient parts of the input or are generated textual explanations for a particular prediction.
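The joint set-up argued for above, optimising veracity prediction and explanation generation together, amounts to minimising a combination of the two task losses. The sketch below illustrates this in plain Python; the cross-entropy loss forms and the mixing weight `alpha` are assumptions for the illustration, not the paper's exact training objective.

```python
import math

def veracity_loss(class_probs, gold_label):
    # cross-entropy of the veracity head: -log p(gold label)
    return -math.log(class_probs[gold_label])

def explanation_loss(token_probs, gold_tokens):
    # mean negative log-likelihood of the gold justification tokens
    nll = sum(-math.log(p[t]) for p, t in zip(token_probs, gold_tokens))
    return nll / len(gold_tokens)

def joint_loss(class_probs, gold_label, token_probs, gold_tokens, alpha=0.5):
    # multi-task objective: weighted sum of the two single-task losses
    return (alpha * veracity_loss(class_probs, gold_label)
            + (1 - alpha) * explanation_loss(token_probs, gold_tokens))
```

In a neural implementation both heads would share an encoder, so minimising the combined loss updates the shared parameters with training signal from both tasks at once, which is what distinguishes the multi-task model from training the two objectives separately.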
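The FEVER score cited earlier can be made concrete: a claim counts as correct only if the predicted label matches the gold label and, for 'supported' and 'refuted' claims, at least one complete gold evidence set is contained in the retrieved evidence. The record layout below is a hypothetical simplification of the official scorer, for illustration only.

```python
def fever_score(records):
    """records: dicts with 'pred_label', 'gold_label',
    'pred_evidence' (a set of evidence ids) and
    'gold_evidence_sets' (a list of sets of evidence ids)."""
    correct = 0
    for r in records:
        if r["pred_label"] != r["gold_label"]:
            continue  # wrong label: never counted
        if r["gold_label"] == "NOT ENOUGH INFO":
            correct += 1  # no evidence requirement for NEI claims
        elif any(gold <= r["pred_evidence"] for gold in r["gold_evidence_sets"]):
            correct += 1  # some complete gold evidence set was retrieved
    return correct / len(records)
```

This conditioning on evidence is what makes the FEVER score stricter than plain label accuracy: a correct label supported by the wrong evidence earns no credit.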
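As a toy illustration of the "most salient parts of the input" idea above, the extractive explainer below ranks evidence sentences by word overlap with the claim and returns the top-k as an explanation. Real systems learn this ranking (or generate the explanation text directly); the overlap heuristic is purely an assumption of this sketch.

```python
def extractive_explanation(claim, evidence_sentences, k=2):
    # score each candidate sentence by its word overlap with the claim
    claim_words = set(claim.lower().split())
    def overlap(sentence):
        return len(claim_words & set(sentence.lower().split()))
    # keep the k highest-overlap sentences as the explanation
    ranked = sorted(evidence_sentences, key=overlap, reverse=True)
    return ranked[:k]
```

For example, for the claim "the moon landing was faked", a sentence sharing the words "the moon landing was" would outrank an unrelated sentence and be selected as the explanation.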
Keywords: generating explanations, automated fact checking, predicting claims, generating justifications

Related: [Evaluating Evidence Attribution in Generated Fact Checking Explanations](https://aclanthology.org/2025.naacl-long.282/) (Xing et al., NAACL 2025)