franciellevargas/SELFAR: The Sentence-Level Factual Reasoning (SELFAR) for Explainable Fact-Checking
Most existing fact-checking systems are unable to explain their decisions by providing relevant rationales (justifications) for their predictions. This lack of transparency poses significant risks, such as unexpected biases, which may increase political polarization due to limitations in impartiality. To address this critical gap, we introduce a new method to improve explainable fact-checking. SEntence-Level FActual Reasoning (SELFAR) relies on fact extraction and verification, predicting news source reliability and the factuality (veracity) of news articles or claims at the sentence level, and generating post-hoc explanations using SHAP and LIME. Our experiments show that unreliable news stories consist predominantly of subjective statements, in contrast to reliable ones. Consequently, predicting unreliable news articles at the sentence level by analyzing impartiality and subjectivity is a promising approach for fact extraction and for improving explainable fact-checking.
Furthermore, LIME outperforms SHAP in explaining predictions on reliability. Additionally, while zero-shot prompts provide highly readable explanations and achieve an accuracy of 0.71 in predicting factuality, their tendency to hallucinate remains a challenge. Lastly, we present the first study on explainable fact-checking in the Portuguese language. We propose SELFAR to enhance explainable fact-checking. SELFAR encompasses three main tasks: Fact Extraction (FE), Fact Verification (FV), and Explanation Generation (EG), as shown in the figure below. Please cite our paper if you use SELFAR:
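The three-stage pipeline above can be sketched in a few lines. This is a minimal, hypothetical illustration of how FE, FV, and EG fit together; the sentence splitter, the subjective-cue list, and the rule-based verifier below are toy stand-ins, not the trained models from the paper.

```python
# Toy sketch of the FE -> FV -> EG pipeline. The cue list and the
# rule-based scorer are illustrative assumptions, not SELFAR's models.
import re

SUBJECTIVE_CUES = {"outrageous", "terrible", "amazing", "disgraceful", "shocking"}

def fact_extraction(article: str) -> list[str]:
    """FE: split an article into candidate sentences to verify."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", article) if s.strip()]

def fact_verification(sentence: str) -> str:
    """FV: label a sentence (toy rule: subjective cues -> 'unreliable')."""
    tokens = {t.lower().strip(".,!?") for t in sentence.split()}
    return "unreliable" if tokens & SUBJECTIVE_CUES else "reliable"

def explanation_generation(sentence: str, label: str) -> str:
    """EG: produce a post-hoc rationale for the predicted label."""
    cues = sorted({t.lower().strip(".,!?") for t in sentence.split()} & SUBJECTIVE_CUES)
    if label == "unreliable":
        return f"Flagged as unreliable due to subjective cue(s): {', '.join(cues)}."
    return "No subjective cues detected; sentence reads as factual."

article = "The minister signed the bill on Monday. This outrageous decision is shocking."
for sent in fact_extraction(article):
    label = fact_verification(sent)
    print(label, "->", explanation_generation(sent, label))
```

The sentence-level granularity is the key design choice: rather than assigning one label to a whole article, each extracted sentence is verified and explained on its own.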
Francielle Vargas, Isadora Salles, Diego Alves, Ameeta Agrawal, Thiago A. S. Pardo, and Fabrício Benevenuto. [Improving Explainable Fact-Checking via Sentence-Level Factual Reasoning](https://aclanthology.org/2024.fever-1.23/) (Vargas et al., FEVER 2024). ACL materials are Copyright © 1963–2025 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License.
Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License. Excited to share “SELFAR: Sentence-Level Factual Reasoning”, a benchmark for explainable fact-checking in Portuguese! This project was funded by Google and is the result of an inspiring collaboration with my colleagues Ameeta Agrawal (Portland State University), Diego Alves (Saarland University), and Fabrício Benevenuto (Federal University of Minas Gerais).
SELFAR covers the entire fact-checking pipeline (claim extraction and verification), media bias analysis, and explainability evaluation using SHAP, LIME, and zero-shot prompting. It is the first such resource in Portuguese, promoting responsible AI with interpretable models and open-source data and code for reproducibility. We found that unreliable news is more biased, that LIME outperforms SHAP, and that zero-shot models achieve 71% accuracy for claim verification but still struggle with hallucinations. Repository: https://lnkd.in/dHN7fQk5 Check our paper, published at the FEVER workshop at EMNLP 2024: https://lnkd.in/dz-M3i-N
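To make the explainability evaluation concrete, the sketch below shows a simplified perturbation-based token attribution in the spirit of LIME: mask each token in turn and measure how the classifier's score changes. The `unreliability_score` function is a hypothetical stand-in for a trained sentence-level classifier (real LIME additionally fits a local linear surrogate over many perturbed samples).

```python
# Simplified LIME-style attribution via single-token occlusion.
# The scorer is a toy assumption, not the classifier used in SELFAR.
def unreliability_score(text: str) -> float:
    """Toy scorer: fraction of tokens that are subjective cues."""
    cues = {"outrageous", "shocking", "disgraceful"}
    tokens = [t.lower().strip(".,!?") for t in text.split()]
    return sum(t in cues for t in tokens) / max(len(tokens), 1)

def token_attributions(sentence: str) -> list[tuple[str, float]]:
    """Importance of each token = score drop when that token is removed."""
    tokens = sentence.split()
    base = unreliability_score(sentence)
    attributions = []
    for i, tok in enumerate(tokens):
        masked = " ".join(tokens[:i] + tokens[i + 1:])
        attributions.append((tok, base - unreliability_score(masked)))
    return attributions

for tok, weight in token_attributions("This outrageous decision is shocking."):
    print(f"{tok:12s} {weight:+.3f}")
```

Tokens with positive weights push the sentence toward "unreliable"; neutral tokens receive negative or near-zero weights because removing them only shortens the sentence.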