Causal Argumentative Learning Assistant
CArLA aims to create a transparent and interactive platform for causal discovery in AI, enhancing understanding and trust in high-stakes domains like healthcare and finance.
Project details
Introduction
Causal AI is widely perceived as crucial in the current AI landscape because it captures causal effects amongst features in data, rather than mere correlations.
Importance of Causal Discovery
Causal discovery is an important aspect of machine learning, as it paves the way towards Causal AI. It amounts to extracting causal graphs from data, i.e. graphs encoding the structure of causal relations amongst the features.
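As a purely illustrative sketch, not part of CArLA and with all variable names hypothetical, the snippet below shows the kind of statistical signal that causal discovery builds on: in a chain X → Y → Z, X and Z are strongly correlated yet conditionally independent given Y, so a causal graph needs no direct X → Z edge. Constraint-based discovery algorithms (e.g. PC) run such conditional-independence tests systematically to recover graph structure; CArLA's focus is on making the resulting graphs transparent, explainable, interactive and contestable, rather than on the tests themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical ground-truth causal chain: X -> Y -> Z.
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
z = -1.5 * y + rng.normal(size=n)

def corr(a, b):
    """Pearson correlation between two 1-D samples."""
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, given):
    """Correlation of a and b after linearly regressing `given` out of both."""
    res_a = a - np.polyval(np.polyfit(given, a, 1), given)
    res_b = b - np.polyval(np.polyfit(given, b, 1), given)
    return corr(res_a, res_b)

# X and Z look strongly (anti-)correlated ...
print(f"corr(X, Z)     = {corr(x, z):+.3f}")
# ... but are approximately independent once Y is accounted for,
# so a causal graph needs no direct X -> Z edge.
print(f"corr(X, Z | Y) = {partial_corr(x, z, y):+.3f}")
```

Running it prints a strong (negative) correlation on the first line and a value near zero on the second, which is exactly the independence pattern a discovery algorithm would use to rule out a direct X → Z edge.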
Current Challenges
There is a gap in the state of the art in AI as regards causal discovery: most approaches are black boxes that are hard to understand and explain, and they cannot engage domain experts to integrate, and possibly contest, the learnt graphs when these are misaligned with human knowledge and values.
Objectives of CArLA
CArLA aims to develop a platform for causal discovery that is:
- Transparent
- Explainable
- Interactive
- Contestable
This platform will be based upon an existing principled methodology and prototype, as well as XAI techniques, developed within the ERC Advanced Grant project ADIX.
Benefits of the Platform
CArLA’s platform will support users in understanding the causal discovery process, drawing on both data and domain experts, while allowing them to influence it. Any developer or user of applications in high-stakes domains will benefit from using this uniquely trustworthy platform.
Demonstrators and Commercialization
CArLA aims to develop two demonstrators of the beneficial use of the platform in the high-stakes domains of healthcare and finance, to pave the way to commercialization with a spinout.
Financial details & Timeline
Financial details
Grant amount | € 150.000 |
Total project budget | € 150.000 |
Timeline
Start date | 1-1-2025 |
End date | 30-6-2026 |
Grant year | 2025 |
Partners & Locations
Project partners
- IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE (lead partner)
Country(ies)
Similar projects within the European Research Council
Project | Scheme | Amount | Year |
---|---|---|---|
Building the First Automated Causal Discovery Platform | ERC Proof of... | € 150.000 | 2022 |
Interactive and Explainable Human-Centered AutoML | ERC Starting... | € 1.459.763 | 2022 |
Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI | ERC Starting... | € 1.470.250 | 2023 |
Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching | ERC Proof of... | € 150.000 | 2024 |
Towards an Artificial Cognitive Science | ERC Starting... | € 1.496.000 | 2024 |
Building the First Automated Causal Discovery Platform
AutoCD aims to develop an automated causal discovery software to enhance expert productivity and accessibility, while validating its commercial potential with industry partners.
Interactive and Explainable Human-Centered AutoML
ixAutoML aims to enhance trust and interactivity in automated machine learning by integrating human insights and explanations, fostering democratization and efficiency in ML applications.
Conveying Agent Behavior to People: A User-Centered Approach to Explainable AI
Develop adaptive and interactive methods to enhance user understanding of AI agents' behavior in sequential decision-making contexts, improving transparency and user interaction.
Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching
Act.AI aims to enhance AI fairness and compliance with the EU AI Act by providing a versatile, plug-and-play tool for continuous bias monitoring across various data types and industries.
Towards an Artificial Cognitive Science
This project aims to establish a new field of artificial cognitive science by applying cognitive psychology to enhance the learning and decision-making of advanced AI models.
Similar projects from other schemes
Project | Scheme | Amount | Year |
---|---|---|---|
eXplainable AI in Personalized Mental Healthcare | Mkb-innovati... | € 350.000 | 2022 |
Counterfactual Assessment and Valuation for Awareness Architecture | EIC Pathfinder | € 3.132.460 | 2022 |
Context aware learning system | Mkb-innovati... | € 19.824 | 2020 |
Feasibility Study AI Sales Coach | Mkb-innovati... | € 20.000 | 2020 |
Feasibility Study of AIPerLearn (AI-Powered Personalized Learning) | Mkb-innovati... | € 20.000 | 2023 |
eXplainable AI in Personalized Mental Healthcare
This project develops an innovative AI platform that involves users in improving algorithms via feedback loops, focusing on transparency and reliability in mental healthcare.
Counterfactual Assessment and Valuation for Awareness Architecture
The CAVAA project aims to develop a computational architecture for awareness in biological and technological systems, enhancing user experience through explainability and adaptability in various applications.
Context aware learning system
AIROC investigates the feasibility of an AI-driven learning platform that connects companies and education for a personalised and interactive learning experience.
Feasibility Study AI Sales Coach
The project investigates the feasibility of an AI Sales Coach to support salespeople in increasing their sales volume.
Feasibility Study of AIPerLearn (AI-Powered Personalized Learning)
STARK Learning investigates the application and training of AI models to automate the development of personalised teaching materials and to safeguard their quality and validation.