SMAITE
Preventing the Spread of Misinformation with AI-generated Text Explanations

State-of-the-art research on fact verification focuses mostly on the capability to identify misleading claims. However, providing context that explains why exactly a claim was judged to be false is a vital prerequisite for end-users to place trust in autonomous systems. SMAITE takes a different direction in explainable fact verification research, grounding explainability in the specific task of verifying claims. Instead of merely predicting whether a claim is true or not, we are developing a fact verification system underpinned by deep learning-based generative language models, which produce explanations that contextualise each prediction. In addition, we are developing models for automatically evaluating the quality of the generated explanations.
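
To illustrate the general idea (and not the SMAITE system itself), the minimal sketch below shows how an off-the-shelf generative language model could be prompted to return both a verdict and a short explanation for a claim, given a piece of retrieved evidence. The model choice, prompt wording, claim, and evidence snippet are purely illustrative assumptions.

# Illustrative sketch only: prompting a generative model to verify a claim
# against evidence and explain its verdict. Model name and prompt format
# are assumptions, not the project's actual pipeline.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

claim = "Drinking bleach cures viral infections."
evidence = (
    "Health authorities warn that ingesting bleach is toxic "
    "and has no antiviral benefit."
)

prompt = (
    f"Claim: {claim}\n"
    f"Evidence: {evidence}\n"
    "Is the claim supported or refuted by the evidence? "
    "Answer with a verdict and a one-sentence explanation."
)

# The generated text should contain a verdict plus a brief explanation
# grounded in the supplied evidence.
result = generator(prompt, max_new_tokens=64)[0]["generated_text"]
print(result)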

Relevant Links

Project Details

Funder(s)

  • AI4Media (Horizon 2020 Centre of Excellence)

Lead(s)

Researcher(s)