Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching

Act.AI aims to enhance AI fairness and compliance with the EU AI Act by providing a versatile, plug-and-play tool for continuous bias monitoring across various data types and industries.

Grant
€ 150.000
2024

Project details

Introduction

The vision behind Act.AI is to use statistical matching to audit and mitigate bias in Artificial Intelligence (AI) models. AI adoption has been growing rapidly across industries, from financial services to healthcare, education, and job recruitment.

Concerns About AI Fairness

However, as AI algorithms have become increasingly sophisticated and pervasive in decision-making processes, concerns have arisen about their fairness and compliance with regulations. In particular, the EU AI Act requires that providers of AI for high-risk applications, such as employment, credit, or healthcare, identify and address discrimination by their algorithms against particular demographic groups.

Challenges for AI Startups

Ensuring compliance with the Act can be challenging, particularly for AI startups that may not have the resources or expertise to fully understand and implement the Act's requirements.

Addressing Disconnects

To address existing disconnects between the capabilities of AI fairness toolkits and current practitioner needs, the Act.AI tool can be integrated into any AI workflow in a plug-and-play fashion to continuously monitor and improve its fairness.
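
As an illustration of what such a plug-and-play integration could look like, the sketch below wraps an already trained model in a hypothetical monitor that flags large gaps in positive-prediction rates between demographic groups. The class name FairnessMonitor, its parameters, and the demographic-parity check are illustrative assumptions, not Act.AI's actual API.

```python
# Hypothetical sketch of a plug-and-play fairness monitor; the class name,
# threshold parameter, and demographic-parity check are illustrative
# assumptions and do not reflect Act.AI's actual interface.
import numpy as np


class FairnessMonitor:
    """Wraps a fitted model and tracks group-level prediction gaps per batch."""

    def __init__(self, model, max_gap=0.1):
        self.model = model
        self.max_gap = max_gap   # largest tolerated gap in positive-prediction rates
        self.history = []        # one recorded gap per scored batch

    def predict(self, X, sensitive):
        """Score a batch as usual, then record the positive rate per group."""
        y_pred = self.model.predict(X)
        sensitive = np.asarray(sensitive)
        rates = {g: float(np.mean(y_pred[sensitive == g])) for g in np.unique(sensitive)}
        gap = max(rates.values()) - min(rates.values())
        self.history.append(gap)
        if gap > self.max_gap:
            print(f"demographic parity gap {gap:.3f} exceeds {self.max_gap}")
        return y_pred


# Usage (assuming `trained_model` exposes a scikit-learn-style predict method):
# monitor = FairnessMonitor(trained_model)
# predictions = monitor.predict(X_batch, sensitive=gender_batch)
```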

Key Features of Act.AI

A key aspect of Act.AI is the ability to operate with different types of data, including:

  • Tabular data
  • Images
  • Text

It functions in a variety of contexts, such as:

  1. Binary classification
  2. Multiclass classification
  3. Regression

Additionally, it can match datasets across different domains, including out-of-distribution data, even when these datasets have different numbers of variables or features.
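
As a rough illustration of the statistical matching idea on tabular data, the sketch below pairs each record from a protected group with its nearest neighbour in a reference group and compares model outcomes on the matched pairs. The function name and the nearest-neighbour matching choice are illustrative assumptions and do not describe Act.AI's actual algorithm.

```python
# A minimal sketch of statistical matching for bias auditing on tabular data,
# assuming NumPy arrays and scikit-learn; the function name and the
# nearest-neighbour matching choice are illustrative, not Act.AI's method.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def matched_outcome_gap(X_protected, y_protected, X_reference, y_reference):
    """Mean outcome difference between protected records and their closest matches."""
    nn = NearestNeighbors(n_neighbors=1).fit(X_reference)
    _, idx = nn.kneighbors(X_protected)       # nearest reference record per protected record
    matched_y = y_reference[idx.ravel()]      # outcomes of those matched records
    return float(np.mean(y_protected - matched_y))


# Synthetic example: a gap far from zero suggests the model scores otherwise
# similar individuals from the two groups differently.
rng = np.random.default_rng(0)
X_a, y_a = rng.normal(size=(100, 5)), rng.uniform(size=100)
X_b, y_b = rng.normal(size=(120, 5)), rng.uniform(size=120)
print(matched_outcome_gap(X_a, y_a, X_b, y_b))
```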

Stakeholder Integration

To ensure the usability of Act.AI, the project will integrate feedback from relevant stakeholders in two immediate target markets:

  • Financial services
  • Healthcare

Financial details & Timeline

Financial details

Grant amount: € 150.000
Total project budget: € 150.000

Timeline

Start date: 1-6-2024
End date: 30-11-2025
Grant year: 2024

Partners & Locations

Project partners

  • BCAM - BASQUE CENTER FOR APPLIED MATHEMATICS (lead partner)

Country(ies)

Spain

Similar projects within the European Research Council

ERC STG

MANUNKIND: Determinants and Dynamics of Collaborative Exploitation

This project aims to develop a game theoretic framework to analyze the psychological and strategic dynamics of collaborative exploitation, informing policies to combat modern slavery.

€ 1.497.749
ERC STG

Elucidating the phenotypic convergence of proliferation reduction under growth-induced pressure

The UnderPressure project aims to investigate how mechanical constraints from 3D crowding affect cell proliferation and signaling in various organisms, with potential applications in reducing cancer chemoresistance.

€ 1.498.280
ERC STG

Uncovering the mechanisms of action of an antiviral bacterium

This project aims to uncover the mechanisms behind Wolbachia's antiviral protection in insects and develop tools for studying symbiont gene function.

€ 1.500.000
ERC STG

The Ethics of Loneliness and Sociability

This project aims to develop a normative theory of loneliness by analyzing ethical responsibilities of individuals and societies to prevent and alleviate loneliness, establishing a new philosophical sub-field.

€ 1.025.860

Similar projects from other schemes

ERC STG

Measuring and Mitigating Risks of AI-driven Information Targeting

This project aims to assess the risks of AI-driven information targeting on individuals, algorithms, and platforms, and propose protective measures through innovative measurement methodologies.

€ 1.499.953
ERC COG

Biases in Administrative Service Encounters: Transitioning from Human to Artificial Intelligence

This project aims to analyze communicative biases in public service encounters to assess the impact of transitioning from human to AI agents, enhancing service delivery while safeguarding democratic legitimacy.

€ 1.954.746
EIC Accelerator

Quality Assurance for AI

GISKARD is developing an open-source SaaS platform for automated AI quality testing to address ethical biases and prediction errors, aiming to lead in compliance with the EU AI Act.

€ 2.499.999
MIT Feasibility

Combating unequal distribution in financial choices

The project focuses on combating inequality by deploying AI and big data to identify biases and improve access to resources.

€ 20.000