Explainable and Robust Automatic Fact Checking

ExplainYourself aims to develop explainable automatic fact-checking methods using machine learning to enhance transparency and user trust through diverse, accurate explanations of model predictions.

Grant
€ 1.498.616
2023

Project details

Introduction

ExplainYourself proposes to study explainable automatic fact checking, the task of automatically predicting the veracity of textual claims using machine learning (ML) methods, while also producing explanations of how the model arrived at its prediction.
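
As a rough illustration of the task's input and output only (not the project's method), the sketch below takes a claim and retrieved evidence and returns a veracity label together with a natural-language explanation. The `check_claim` heuristic, the `Verdict` container, and the example claim are hypothetical placeholders standing in for a learned model.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Verdict:
    label: str        # e.g. "SUPPORTED", "REFUTED", "NOT ENOUGH INFO"
    explanation: str  # natural-language justification of the label


def check_claim(claim: str, evidence: List[str]) -> Verdict:
    """Toy stand-in for a learned fact-checking model: a real system would
    encode claim and evidence jointly and generate the explanation."""
    claim_words = set(claim.lower().split())
    matching = [doc for doc in evidence if claim_words & set(doc.lower().split())]
    if matching:
        return Verdict("SUPPORTED", f"The claim is consistent with: {matching[0]}")
    return Verdict("NOT ENOUGH INFO", "No retrieved evidence mentions the claim.")


verdict = check_claim(
    "Copenhagen is the capital of Denmark",
    ["Copenhagen is the capital and most populous city of Denmark."],
)
print(verdict.label, "-", verdict.explanation)
```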

Challenges in Current Methods

Automatic fact-checking methods often rely on opaque deep neural network models whose inner workings cannot easily be explained. Especially for a complex task such as fact checking, this opacity hinders wider adoption, as users cannot tell when the models' predictions can be trusted.

Existing explainable ML methods partly address this by reducing explanation generation to highlighting the right rationale, i.e. the parts of the input most responsible for the prediction. While a good first step, such highlights do not fully explain how an ML model arrived at its prediction.
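
To make this concrete, the following sketch shows what rationale-style explanation amounts to: selecting the most salient input tokens. The `extract_rationale` helper and the hard-coded saliency scores are illustrative assumptions; a real method would derive such scores from the model, e.g. via gradients, attention, or input perturbation.

```python
from typing import Dict, List


def extract_rationale(tokens: List[str], saliency: Dict[str, float], k: int = 2) -> List[str]:
    """Return the k tokens with the highest saliency (attribution) scores."""
    return sorted(tokens, key=lambda t: saliency.get(t, 0.0), reverse=True)[:k]


tokens = "vaccines cause severe side effects in most patients".split()
# Hypothetical attribution scores for illustration only.
saliency = {"vaccines": 0.9, "severe": 0.7, "most": 0.6, "cause": 0.4}
print(extract_rationale(tokens, saliency))  # ['vaccines', 'severe']
```

A highlight like this tells the user which words mattered, but not how the model combined them with evidence to reach its verdict, which is the gap the project targets.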

Complexity of Fact Checking

For knowledge-intensive natural language understanding (NLU) tasks such as fact checking, an ML model needs to retrieve the right evidence and then learn complex relationships between the claim, multiple evidence documents, and commonsense knowledge. No current explainability method aims to illuminate this highly complex process.
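
A minimal sketch of this retrieve-then-reason structure, assuming scikit-learn is available: candidate evidence documents are ranked by TF-IDF similarity to the claim before a learned model (omitted here) would reason over claim and evidence jointly. The corpus, claim, and `retrieve` function are illustrative and not part of the project.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Copenhagen is the capital and most populous city of Denmark.",
    "The European Research Council funds frontier research.",
    "Aarhus is the second-largest city in Denmark.",
]


def retrieve(claim, documents, k=2):
    """Rank documents by TF-IDF cosine similarity to the claim and keep the top k."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    claim_vector = vectorizer.transform([claim])
    scores = cosine_similarity(claim_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, documents), reverse=True)
    return [doc for _, doc in ranked[:k]]


claim = "Copenhagen is the capital of Denmark"
evidence = retrieve(claim, corpus)
print(evidence)  # a learned model would then reason jointly over claim and evidence
```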

In addition, existing approaches cannot produce diverse explanations geared towards users with different information needs.

Proposed Innovations

ExplainYourself departs radically from existing work by proposing methods for explainable fact checking that more accurately reflect how fact-checking models make decisions and that are useful to diverse groups of end users.

Future Applications

It is expected that these innovations will apply to explanation generation for other knowledge-intensive NLU tasks, such as:

  1. Question answering
  2. Entity linking

To achieve this, ExplainYourself builds on my pioneering work on explainable fact checking as well as my interdisciplinary expertise.

Financial details & Timeline

Financial details

Grant amount: € 1.498.616
Total project budget: € 1.498.616

Timeline

Start date: 1 September 2023
End date: 31 August 2028
Grant year: 2023

Partners & Locations

Project partners

  • KOBENHAVNS UNIVERSITET (coordinator)

Country

Denmark

Similar projects within the European Research Council

ERC Starting...

Interactive and Explainable Human-Centered AutoML

ixAutoML aims to enhance trust and interactivity in automated machine learning by integrating human insights and explanations, fostering democratization and efficiency in ML applications.

€ 1.459.763
ERC Starting...

Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI

Develop adaptive and interactive methods to enhance user understanding of AI agents' behavior in sequential decision-making contexts, improving transparency and user interaction.

€ 1.470.250
ERC Starting...

Uniting Statistical Testing and Machine Learning for Safe Predictions

The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications.

€ 1.500.000
ERC Starting...

Controlling Large Language Models

Develop a framework to understand and control large language models, addressing biases and flaws to ensure safe and responsible AI adoption.

€ 1.500.000
ERC Starting...

Machine learning in science and society: A dangerous toy?

This project evaluates the epistemic strengths and risks of deep learning models as "toy models" to enhance understanding and trust in their application across science and society.

€ 1.500.000

Similar projects from other funding schemes

Mkb-innovati...

eXplainable AI in Personalized Mental Healthcare

This project develops an innovative AI platform that involves users in improving algorithms through feedback loops, aimed at transparency and reliability in mental healthcare.

€ 350.000
Mkb-innovati...

Feasibility study of an AI fact-check module

The project investigates the feasibility of an AI module that can extract claims from content and verify their accuracy and relevance against diverse data sources.

€ 20.000
Mkb-innovati...

Feasibility study of an online tool for applying Targeted Maximum Likelihood Estimation (TMLE)

Researchable B.V. is developing a SaaS solution that uses TMLE to make the invisible layer of AI computations visible through Explainable AI (XAI), providing better insight into predictions.

€ 20.000
Mkb-innovati...

HURL

This project investigates the feasibility of Explainable Reinforcement Learning (URL) to give end users insight into algorithmic decisions within a simulated trading environment.

€ 19.200
Mkb-innovati...

Project Hominis

The project focuses on developing an ethical AI system for natural language processing that minimizes bias and manages technical, economic, and regulatory risks.

€ 20.000