Generating Fact Checking Explanations

University of Copenhagen


Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process – generating justifications for verdicts on claims.

This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.

[Generating Fact Checking Explanations](https://aclanthology.org/2020.acl-main.656/) (Atanasova et al., ACL 2020). ACL materials are Copyright © 1963–2025 ACL; other materials are copyrighted by their respective copyright holders.
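The multi-task setup described above can be sketched as a weighted sum of the two training objectives: one loss for the veracity verdict and one for the explanation. The snippet below is a minimal illustration only, not the paper's actual implementation; the `alpha` mixing weight and the averaging over explanation positions are hypothetical assumptions.

```python
import math

def cross_entropy(probs, gold):
    """Negative log-likelihood of the gold label under a probability list."""
    return -math.log(probs[gold])

def joint_loss(veracity_probs, gold_label,
               explanation_probs, gold_selections,
               alpha=0.5):
    """Combine the veracity loss and the explanation loss into one objective.

    alpha is a hypothetical mixing weight between the two tasks;
    the exact weighting used in the paper is not specified here.
    """
    l_veracity = cross_entropy(veracity_probs, gold_label)
    # Explanation loss: average cross-entropy over the gold selections
    # (e.g. which evidence sentences belong in the justification).
    l_explanation = sum(
        cross_entropy(p, g) for p, g in zip(explanation_probs, gold_selections)
    ) / len(gold_selections)
    return alpha * l_veracity + (1 - alpha) * l_explanation
```

Training both heads against `joint_loss`, rather than each head against its own loss in isolation, is what distinguishes the multi-task model from the separately trained baselines.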

Materials prior to 2016 are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License; permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License. The ACL Anthology is managed and built by the ACL Anthology team of volunteers.


ACL 2020 long paper, Semantics: Textual Inference and Other Areas of Semantics track. © 2020 Association for Computational Linguistics.
