Generating Fact Checking Explanations

Bonisiwe Shabane

Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein [Generating Fact Checking Explanations](https://aclanthology.org/2020.acl-main.656/) (Atanasova et al., ACL 2020) ACL materials are Copyright © 1963–2025 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License.

The ACL Anthology is managed and built by the ACL Anthology team of volunteers. Site last built on 27 November 2025 at 10:42 UTC with commit 542848d.

Semantics: Textual Inference and Other Areas of Semantics (Long Paper). © 2020 Association for Computational Linguistics.

Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process: generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction.

Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage, and overall quality of the generated explanations are also improved in the multi-task model.
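The joint optimisation described above can be sketched as a weighted sum of the two task losses: cross-entropy over the veracity classes plus token-level cross-entropy over the generated explanation. A minimal NumPy sketch, not the authors' implementation; the function names and the weighting parameter `lam` are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, target):
    # negative log-likelihood of the target class under the softmax
    return -np.log(softmax(logits)[target])

def joint_loss(veracity_logits, veracity_label,
               expl_token_logits, expl_token_ids, lam=1.0):
    """Multi-task loss: veracity classification + explanation
    generation (mean token-level cross-entropy), combined with
    an illustrative weight `lam`."""
    l_veracity = cross_entropy(veracity_logits, veracity_label)
    l_explanation = np.mean([cross_entropy(tok_logits, tok_id)
                             for tok_logits, tok_id
                             in zip(expl_token_logits, expl_token_ids)])
    return l_veracity + lam * l_explanation
```

Training on this single scalar lets gradients from the explanation objective flow into the shared encoder, which is what "optimising both objectives at the same time" amounts to in practice.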


Fighting misinformation is a challenging, yet crucial, task. Despite the growing number of experts involved in manual fact-checking, this activity is time-consuming and cannot keep up with the ever-increasing amount of fake news produced daily. Hence, automating this process is necessary to help curb misinformation. Thus far, researchers have mainly focused on claim veracity classification.

In this paper, instead, we address the generation of justifications (textual explanations of why a claim is classified as either true or false) and benchmark it with novel datasets and advanced baselines. In particular, we focus on summarization approaches over unstructured knowledge (i.e. news articles), and we experiment with several extractive and abstractive strategies. We employed two datasets with different styles and structures in order to assess the generalizability of our findings. Results show that, in justification production, summarization benefits from the claim information; in particular, a claim-driven extractive step improves abstractive summarization performance. Finally, we show that although cross-dataset experiments suffer from performance degradation, a single model trained on a combination of the two datasets retains style information in an efficient manner.
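The claim-driven extractive step can be illustrated with a simple lexical ranker: score each article sentence by similarity to the claim and keep only the top-k before handing them to an abstractive summarizer. A minimal bag-of-words sketch under that assumption; the paper's actual models are neural, so this cosine-over-token-counts scorer is purely illustrative:

```python
from collections import Counter
import math

def _bow(text):
    # crude bag-of-words vector: lowercase token counts
    return Counter(text.lower().split())

def _cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def claim_driven_extract(claim, sentences, k=3):
    """Rank article sentences by bag-of-words cosine similarity
    to the claim; return the top-k in original article order."""
    claim_vec = _bow(claim)
    top = sorted(range(len(sentences)),
                 key=lambda i: _cosine(claim_vec, _bow(sentences[i])),
                 reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]
```

Conditioning the extractive step on the claim like this is what distinguishes claim-driven extraction from generic summarization: sentences irrelevant to the claim are filtered out before the abstractive model ever sees them.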

Daniel Russo, Serra Sinem Tekiroğlu, Marco Guerini [Benchmarking the Generation of Fact Checking Explanations](https://aclanthology.org/2023.tacl-1.71/) (Russo et al., TACL 2023)

