Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
Jolly, Shailza, Pepa Atanasova, and Isabelle Augenstein. "Generating fluent fact checking explanations with unsupervised post-editing." Information 13.10 (2022): 500.

The goal is to generate more readable, fluent explanations for fact-check veracity labels from ruling comments (RCs), creating a coherent story while preserving the information important for fact-checking. RCs are in-depth explanations for predicted veracity labels given by human fact-checkers; here, the explanation is edited after it has been completely generated.

Related work: Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. [Generating Fact Checking Explanations](https://aclanthology.org/2020.acl-main.656/) (Atanasova et al., ACL 2020).
Fact-checking systems have become important tools to verify fake and misleading news. These systems become more trustworthy when human-readable explanations accompany the veracity labels. However, manual collection of such explanations is expensive and time-consuming. Recent works frame explanation generation as extractive summarization and propose to automatically select a sufficient subset of the most important facts from the ruling comments (RCs) of a professional journalist to obtain fact-checking explanations. However, these explanations lack fluency and sentence coherence.
In this work, we present an iterative edit-based algorithm that uses only phrase-level edits to perform unsupervised post-editing of disconnected RCs. To regulate our editing algorithm, we use a scoring function with components including fluency and semantic preservation. In addition, we show the applicability of our approach in a completely unsupervised setting. We experiment with two benchmark datasets, LIAR-PLUS and PubHealth. We show that our model generates explanations that are fluent, readable, non-redundant, and cover important information for the fact check.

In today's era of social media, the spread of news is a click away, regardless of whether it is fake or real.
However, the quick propagation of fake news has repercussions on people's lives. To alleviate these consequences, independent teams of professional fact-checkers manually verify the veracity and credibility of news, which is time- and labor-intensive, making the process expensive and less scalable. Therefore, the need for accurate, scalable, and explainable automatic fact-checking systems is inevitable (Kotonya and Toni, 2020a). Current automatic fact-checking systems perform veracity prediction for given claims based on evidence documents (Thorne et al. (2018); Augenstein et al. (2019), inter alia), or based on long lists of supporting ruling comments (RCs; Wang (2017); Alhindi et al.
(2018)). RCs are in-depth explanations for predicted veracity labels, but their sizable content makes them challenging to read and thus not useful as explanations for human readers. Recent work (Atanasova et al., 2020c; Kotonya and Toni, 2020b) has thus proposed to use automatic summarization to select a subset of sentences from long RCs and use them as short layman explanations. However, a purely extractive approach (Atanasova et al., 2020c) means sentences are cherry-picked from different parts of the corresponding RCs, and as a result, explanations are often disjoint and non-fluent. While a sequence-to-sequence model trained on parallel data can partially alleviate these problems, as Kotonya and Toni (2020b) propose, it requires large amounts of annotated data and compute. Therefore, in this work, we focus on unsupervised post-editing of explanations extracted from RCs.
In recent studies, researchers have addressed unsupervised post-editing to generate paraphrases (Liu et al., 2020) and to perform sentence simplification (Kumar et al., 2020). However, these approaches operate on short single sentences and perform exhaustive word-level edits, or a combination of word- and phrase-level edits, which limits their applicability to longer inputs with multiple sentences, e.g., veracity explanations. Hence, we present a novel iterative edit-based algorithm that performs three edit operations (insertion, deletion, reorder), all at the phrase level. Fig. 1 illustrates how each post-editing step contributes to creating candidate explanations that are more concise, readable, and fluent, and that form a coherent story. Our proposed method finds the best post-edited explanation candidate according to a scoring function that ensures the quality of explanations in terms of fluency, readability, semantic preservation, and conciseness (§3.2.2).
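To make the search procedure concrete, the following is a minimal, illustrative sketch of an iterative phrase-level editing loop. The paper scores candidates with model-based fluency, semantic-preservation, and length components; as stand-ins, this toy `score` uses token overlap with the source (semantic-preservation proxy) and the type-token ratio (conciseness proxy), and the hill-climbing acceptance rule is an assumption of this sketch, not the paper's exact search.

```python
import random

def score(phrases, source_tokens):
    """Toy scoring function: overlap with the source text (semantic
    preservation proxy) times the type-token ratio (conciseness proxy).
    Illustrative stand-ins for the paper's model-based components."""
    tokens = [t for p in phrases for t in p.split()]
    if not tokens:
        return 0.0
    overlap = len(set(tokens) & set(source_tokens)) / len(set(source_tokens))
    conciseness = len(set(tokens)) / len(tokens)
    return overlap * conciseness

def iterative_edit(phrases, n_steps=200, seed=0):
    """Hill-climbing search over phrase-level insert/delete/reorder edits.
    (The paper's actual search procedure and operators differ in detail.)"""
    rng = random.Random(seed)
    source_tokens = [t for p in phrases for t in p.split()]
    best = list(phrases)
    best_score = score(best, source_tokens)
    for _ in range(n_steps):
        cand = list(best)
        op = rng.choice(["insert", "delete", "reorder"])
        if op == "delete" and len(cand) > 1:
            cand.pop(rng.randrange(len(cand)))
        elif op == "reorder" and len(cand) > 1:
            i, j = rng.sample(range(len(cand)), 2)
            cand[i], cand[j] = cand[j], cand[i]
        elif op == "insert":
            # re-insert a phrase drawn from the source at a random position
            cand.insert(rng.randrange(len(cand) + 1), rng.choice(phrases))
        if score(cand, source_tokens) > best_score:  # keep only improving edits
            best, best_score = cand, score(cand, source_tokens)
    return best
```

With this toy scorer, deleting a redundant phrase raises the type-token ratio without hurting overlap, so redundant extractions tend to be pruned, mirroring how the real scoring function rewards concise candidates.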
To ensure that the sentences of the candidate explanations are grammatically correct, we also perform grammar checking (§3.2.4). As a second step, we apply paraphrasing to further improve the conciseness and human readability of the explanations (§3.2.5).
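The overall flow described above (phrase-level post-editing, then grammar checking, then paraphrasing) can be wired together as below. This is a hypothetical composition only: `edit_fn`, `grammar_fn`, and `paraphrase_fn` are placeholder callables standing in for the models the paper uses in §3.2.2–§3.2.5, which this sketch does not assume.

```python
def post_edit_pipeline(extracted_sentences, edit_fn, grammar_fn, paraphrase_fn):
    """Hypothetical wiring of the three post-processing stages: iterative
    phrase-level editing of the extracted RC sentences, per-sentence
    grammar correction, then a paraphrasing pass over the joined result."""
    edited = edit_fn(extracted_sentences)        # iterative phrase-level edits
    corrected = [grammar_fn(s) for s in edited]  # sentence-level grammar check
    return paraphrase_fn(" ".join(corrected))    # final paraphrasing pass

# With identity stages, the pipeline simply joins the extracted sentences:
explanation = post_edit_pipeline(
    ["the claim is false .", "records show otherwise ."],
    edit_fn=lambda sents: sents,
    grammar_fn=lambda s: s,
    paraphrase_fn=lambda s: s,
)
```

Passing the stages in as callables keeps the sketch implementation-agnostic: each can be swapped for a learned component without changing the pipeline shape.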