Generating Fluent Fact Checking Explanations With Unsupervised Post-Editing

Bonisiwe Shabane

Fact-checking systems have become important tools to verify fake and misleading news.

These systems become more trustworthy when human-readable explanations accompany the veracity labels. However, manual collection of these explanations is expensive and time-consuming. Recent work has used extractive summarization to select a sufficient subset of the most important facts from the ruling comments (RCs) of a professional journalist to obtain fact-checking explanations. However, these explanations lack fluency and sentence coherence. In this work, we present an iterative edit-based algorithm that uses only phrase-level edits to perform unsupervised post-editing of disconnected RCs. To regulate our editing algorithm, we use a scoring function with components including fluency and semantic preservation.
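The core idea, an iterative edit loop guided by a scoring function, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`fluency_score`, `similarity_score`, `propose_edit`, `post_edit`) and the toy scoring proxies are assumptions; the actual system uses language-model-based fluency and embedding-based semantic preservation scores, and operates on phrase-level rather than word-level edits.

```python
import random

def fluency_score(text):
    # Toy proxy for fluency; the paper would use a language-model-based
    # score (e.g. lower perplexity => higher fluency).
    return 1.0 / (1.0 + text.count("  "))

def similarity_score(text, source):
    # Toy proxy for semantic preservation: word overlap with the source
    # ruling comments. A real system would compare sentence embeddings.
    src, out = set(source.split()), set(text.split())
    return len(src & out) / max(len(src), 1)

def combined_score(text, source, alpha=0.5):
    # Weighted combination of fluency and semantic preservation,
    # mirroring the multi-component scoring function described above.
    return alpha * fluency_score(text) + (1 - alpha) * similarity_score(text, source)

def propose_edit(text, rng):
    # Toy edit operations (delete or reorder); the paper edits at the
    # phrase level with operations such as insertion, deletion, and reordering.
    words = text.split()
    if len(words) < 2:
        return text
    i = rng.randrange(len(words))
    if rng.choice(["delete", "reorder"]) == "delete":
        words.pop(i)
    else:
        j = rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

def post_edit(source, steps=200, seed=0):
    # Greedy hill-climbing: keep a candidate edit only if it improves
    # the combined score. The paper's search procedure may differ.
    rng = random.Random(seed)
    best, best_score = source, combined_score(source, source)
    for _ in range(steps):
        candidate = propose_edit(best, rng)
        score = combined_score(candidate, source)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

Because the loop only accepts score-improving candidates, the edited explanation never scores worse than the input under the combined objective.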

In addition, we show the applicability of our approach in a completely unsupervised setting. We experiment with two benchmark datasets, namely LIAR-PLUS and PubHealth, and show that our model generates explanations that are fluent, readable, non-redundant, and cover important information for the fact check. Due to the increased cost and execution time of the complex annotation task, and following related work that manually evaluates fact-checking explanations (Atanasova et al. 2020b) and machine-generated summaries (Liu and Lapata 2019), the manual evaluation is performed on a subset of the generated explanations.

Jolly, Shailza, Pepa Atanasova, and Isabelle Augenstein. “Generating fluent fact checking explanations with unsupervised post-editing.” Information 13.10 (2022): 500.

The aim is to generate more readable, fluent explanations for fact-check veracity labels from ruling comments (RCs), creating a coherent story while preserving the information important for fact-checking. RCs are in-depth explanations for predicted veracity labels written by human fact-checkers. The approach edits the explanation only after it has been completely generated (post-editing), motivated by the need for accurate, scalable, and explainable automatic fact-checking systems.
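The "preserving the information important for fact-checking" requirement can be illustrated as a similarity gate between the source RCs and the edited explanation. This sketch uses bag-of-words cosine similarity as a self-contained stand-in; the function names and the threshold are illustrative assumptions, and the paper's actual semantic-preservation component would rely on learned sentence embeddings rather than word counts.

```python
from collections import Counter
from math import sqrt

def bow_cosine(a: str, b: str) -> float:
    # Cosine similarity between bag-of-words vectors. A stand-in for
    # embedding-based similarity, used here only to show the shape of
    # a semantic-preservation check.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def preserves_meaning(source: str, edited: str, threshold: float = 0.6) -> bool:
    # Reject edited explanations that drift too far from the content
    # of the original ruling comments (threshold is illustrative).
    return bow_cosine(source, edited) >= threshold
```

A gate like this would sit inside the editing loop, discarding candidate edits that make the explanation more fluent at the cost of losing fact-check-relevant content.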

Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein. [Generating Fact Checking Explanations](https://aclanthology.org/2020.acl-main.656/) (Atanasova et al., ACL 2020).


