Explainable and Robust Automatic Fact Checking
ExplainYourself aims to develop explainable automatic fact-checking methods using machine learning to enhance transparency and user trust through diverse, accurate explanations of model predictions.
Project details
Introduction
ExplainYourself proposes to study explainable automatic fact checking, the task of automatically predicting the veracity of textual claims using machine learning (ML) methods, while also producing explanations about how the model arrived at the prediction.
Challenges in Current Methods
Automatic fact checking methods often use opaque deep neural network models, whose inner workings cannot easily be explained. Especially for complex tasks such as automatic fact checking, this hinders greater adoption, as it is unclear to users when the models' predictions can be trusted.
Existing explainable ML methods partly overcome this by reducing the task of explanation generation to highlighting the right rationale. While a good first step, this does not fully explain how an ML model arrived at a prediction.
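To make the idea of rationale highlighting concrete, below is a minimal sketch of occlusion-based saliency for claim verification: each evidence token is scored by how much the verdict probability drops when that token is removed, and the highest-scoring tokens form the highlighted rationale. The `verdict_probability` argument and the toy word-overlap scorer are hypothetical stand-ins for a trained fact-checking model, not components of ExplainYourself.

```python
# Minimal sketch of occlusion-based rationale highlighting for fact checking.
# `verdict_probability` is a hypothetical stand-in for a trained claim-verification
# model returning P(verdict = SUPPORTED | claim, evidence); swap in a real model.

from typing import Callable, List, Tuple


def occlusion_rationale(
    claim: str,
    evidence_tokens: List[str],
    verdict_probability: Callable[[str, List[str]], float],
    top_k: int = 5,
) -> List[Tuple[str, float]]:
    """Rank evidence tokens by how much removing them lowers the verdict probability."""
    base = verdict_probability(claim, evidence_tokens)
    saliency = []
    for i, token in enumerate(evidence_tokens):
        occluded = evidence_tokens[:i] + evidence_tokens[i + 1:]
        drop = base - verdict_probability(claim, occluded)
        saliency.append((token, drop))
    # Tokens whose removal hurts the prediction most form the highlighted rationale.
    return sorted(saliency, key=lambda pair: pair[1], reverse=True)[:top_k]


if __name__ == "__main__":
    # Toy scorer (hypothetical): fraction of evidence tokens that also appear in the claim.
    def toy_scorer(claim: str, evidence: List[str]) -> float:
        claim_words = set(claim.lower().split())
        return sum(tok.lower() in claim_words for tok in evidence) / (len(evidence) or 1)

    evidence = "The Eiffel Tower is located in Paris France".split()
    print(occlusion_rationale("The Eiffel Tower is in Paris", evidence, toy_scorer))
```

As the paragraph above notes, such token-level highlights show what the model relied on, but not how it combined the claim, the evidence, and background knowledge into a verdict.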
Complexity of Fact Checking
For knowledge-intensive natural language understanding (NLU) tasks such as fact checking, an ML model needs to learn complex relationships between the claim, multiple evidence documents, and common sense knowledge, in addition to retrieving the right evidence. There is currently no explainability method that aims to illuminate this highly complex process.
In addition, existing approaches are unable to produce diverse explanations, geared towards users with different information needs.
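For concreteness, the sketch below illustrates the retrieve-then-verify structure such systems typically have: evidence passages are retrieved for a claim and then passed, together with the claim, to a verdict classifier. The tiny corpus, the TF-IDF retriever, and the `classify_verdict` stub are illustrative assumptions, not parts of ExplainYourself; a real system would use learned retrieval and a trained neural verifier over the claim-evidence pair.

```python
# Minimal sketch of a retrieve-then-verify fact-checking pipeline.
# The corpus, TF-IDF retriever, and `classify_verdict` are illustrative placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CORPUS = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    "Copenhagen is the capital and most populous city of Denmark.",
    "The Great Wall of China is not visible from the Moon with the naked eye.",
]


def retrieve_evidence(claim: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus sentences most similar to the claim (TF-IDF cosine similarity)."""
    vectorizer = TfidfVectorizer().fit(corpus + [claim])
    scores = cosine_similarity(vectorizer.transform([claim]), vectorizer.transform(corpus))[0]
    ranked = sorted(zip(scores, corpus), reverse=True)
    return [doc for _, doc in ranked[:k]]


def classify_verdict(claim: str, evidence: list[str]) -> str:
    """Hypothetical verdict model: a real system would jointly encode claim and evidence."""
    evidence_words = set(" ".join(evidence).lower().split())
    overlap = sum(word in evidence_words for word in claim.lower().split())
    return "SUPPORTED" if overlap >= 3 else "NOT ENOUGH INFO"


if __name__ == "__main__":
    claim = "The Eiffel Tower is located in Paris"
    evidence = retrieve_evidence(claim, CORPUS)
    print(evidence)
    print(classify_verdict(claim, evidence))
```

Explaining such a system means accounting for both stages: which evidence was retrieved and why, and how the claim and evidence jointly led to the verdict.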
Proposed Innovations
ExplainYourself radically departs from existing work in proposing methods for explainable fact checking that more accurately reflect how fact checking models make decisions, and are useful to diverse groups of end users.
Future Applications
It is expected that these innovations will apply to explanation generation for other knowledge-intensive NLU tasks, such as:
- Question answering
- Entity linking
To achieve this, ExplainYourself builds on my pioneering work on explainable fact checking as well as my interdisciplinary expertise.
Financial details & Timeline
Financial details
- Grant amount: € 1.498.616
- Total project budget: € 1.498.616
Timeline
- Start date: 1 September 2023
- End date: 31 August 2028
- Grant year: 2023
Partners & Locations
Project partners
- KOBENHAVNS UNIVERSITET (coordinator)
Country(ies)
- Denmark
Similar projects within the European Research Council
Project | Scheme | Amount | Year
---|---|---|---
Interactive and Explainable Human-Centered AutoML | ERC Starting... | € 1.459.763 | 2022
Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI | ERC Starting... | € 1.470.250 | 2023
Uniting Statistical Testing and Machine Learning for Safe Predictions | ERC Starting... | € 1.500.000 | 2024
Controlling Large Language Models | ERC Starting... | € 1.500.000 | 2024
Machine learning in science and society: A dangerous toy? | ERC Starting... | € 1.500.000 | 2025
Interactive and Explainable Human-Centered AutoML
ixAutoML aims to enhance trust and interactivity in automated machine learning by integrating human insights and explanations, fostering democratization and efficiency in ML applications.
Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI
Develop adaptive and interactive methods to enhance user understanding of AI agents' behavior in sequential decision-making contexts, improving transparency and user interaction.
Uniting Statistical Testing and Machine Learning for Safe Predictions
The project aims to enhance the interpretability and reliability of machine learning predictions by integrating statistical methods to establish robust error bounds and ensure safe deployment in real-world applications.
Controlling Large Language Models
Develop a framework to understand and control large language models, addressing biases and flaws to ensure safe and responsible AI adoption.
Machine learning in science and society: A dangerous toy?
This project evaluates the epistemic strengths and risks of deep learning models as "toy models" to enhance understanding and trust in their application across science and society.
Similar projects from other funding schemes
Project | Scheme | Amount | Year
---|---|---|---
eXplainable AI in Personalized Mental Healthcare | Mkb-innovati... | € 350.000 | 2022
Onderzoek haalbaarheid AI factcheck module | Mkb-innovati... | € 20.000 | 2023
Haalbaarheidsonderzoek online tool voor toepassing Targeted Maximum Likelihood Estimation (TMLE) | Mkb-innovati... | € 20.000 | 2020
HURL | Mkb-innovati... | € 19.200 | 2022
Project Hominis | Mkb-innovati... | € 20.000 | 2022
eXplainable AI in Personalized Mental Healthcare
This project develops an innovative AI platform that involves users in improving algorithms through feedback loops, aimed at transparency and reliability in mental healthcare.
Onderzoek haalbaarheid AI factcheck module
The project investigates the feasibility of an AI module that can extract claims from content and verify their accuracy and relevance against a range of data sources.
Haalbaarheidsonderzoek online tool voor toepassing Targeted Maximum Likelihood Estimation (TMLE)
Researchable B.V. is developing a SaaS solution that uses TMLE to make the invisible layer of AI computations visible through Explainable AI (XAI), providing better insight into predictions.
HURL
This project investigates the feasibility of Explainable Reinforcement Learning (URL) to give end users insight into algorithmic decisions within a simulated trading environment.
Project Hominis
The project focuses on developing an ethical AI system for natural language processing that minimizes bias and manages technical, economic, and regulatory risks.