Measuring and Mitigating Risks of AI-driven Information Targeting

This project aims to assess the risks of AI-driven information targeting at the individual, algorithmic, and platform levels, and to propose protective measures through innovative measurement methodologies.

Grant: € 1.499.953 (2022)

Project details

Introduction

We are witnessing a massive shift in the way people consume information. In the past, people played an active role in selecting the news they read. More recently, information has begun to appear in people's social media feeds as a byproduct of their social relations.

Current Trends

At present, a new shift is being driven by the emergence of online advertising platforms, where third parties can pay to show specific information to particular groups of people through targeted ads. These targeting technologies are powered by AI-driven algorithms.

Risks of AI-Driven Targeting

Using these technologies to promote information, rather than the products they were initially designed for, opens the way for self-interested groups to exploit users' personal data in order to manipulate them. European institutions recognize these risks, and many fear the technology could be weaponized to engineer polarization or manipulate voters.

Project Goals

The goal of this project is to study the risks of AI-driven information targeting at three levels:

  1. Human-level: Under which conditions targeted information can influence an individual's beliefs.
  2. Algorithmic-level: Under which conditions AI-driven targeting algorithms can exploit people's vulnerabilities.
  3. Platform-level: Whether targeting technologies lead to biases in the quality of information that different groups of people receive and assimilate.

Then, we will use this understanding to propose protection mechanisms for platforms, regulators, and users.

Methodology

The key asset of this proposal is a novel measurement methodology that enables randomized controlled trials on social media, allowing a rigorous and realistic evaluation of risks. The methodology builds on advances in multiple disciplines and takes advantage of our recent breakthrough in designing independent auditing systems for social media advertising.
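To make the randomized-controlled-trial logic concrete, the sketch below simulates how random assignment of participants to an "exposed to targeted information" group and a control group lets the effect on a belief measure be estimated by a simple comparison of group means. This is a hedged illustration on simulated data, not the project's measurement methodology; the effect size, outcome scale, and all names (belief_score, exposed) are assumptions introduced for the example.

    # Minimal sketch: with random assignment, the difference in mean outcomes
    # between exposed and non-exposed groups estimates the average treatment
    # effect of targeted information on a belief score. All data are simulated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)
    n_participants = 2000

    # Randomly assign participants to see (True) or not see (False) the
    # targeted information.
    exposed = rng.integers(0, 2, size=n_participants).astype(bool)

    # Simulated post-exposure belief scores (0-100 scale): a common baseline
    # plus an assumed +3-point shift for exposed participants.
    belief_score = rng.normal(loc=50, scale=10, size=n_participants)
    belief_score[exposed] += 3.0

    # Randomization makes the group-mean difference an unbiased estimate of
    # the average treatment effect; a t-test gauges its significance.
    ate = belief_score[exposed].mean() - belief_score[~exposed].mean()
    t_stat, p_value = stats.ttest_ind(belief_score[exposed],
                                      belief_score[~exposed])
    print(f"Estimated average treatment effect: {ate:.2f} points "
          f"(p = {p_value:.4f})")

In an actual trial the belief scores would come from participant surveys rather than a simulated distribution, but the estimation logic stays the same.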

Conclusion

Successful execution will provide a solid foundation for sustainable targeting technologies and for healthy information targeting.

Financial details & Timeline

Financial details

Grant amount: € 1.499.953
Total project budget: € 1.499.953

Timeline

Start date: 1-10-2022
End date: 30-9-2027
Grant year: 2022

Partners & Locations

Project partners

  • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (lead partner)
  • CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS

Country(ies)

France

Comparable projects within the European Research Council

ERC STG

MANUNKIND: Determinants and Dynamics of Collaborative Exploitation

This project aims to develop a game theoretic framework to analyze the psychological and strategic dynamics of collaborative exploitation, informing policies to combat modern slavery.

€ 1.497.749
ERC STG

Elucidating the phenotypic convergence of proliferation reduction under growth-induced pressure

The UnderPressure project aims to investigate how mechanical constraints from 3D crowding affect cell proliferation and signaling in various organisms, with potential applications in reducing cancer chemoresistance.

€ 1.498.280
ERC STG

The Ethics of Loneliness and Sociability

This project aims to develop a normative theory of loneliness by analyzing ethical responsibilities of individuals and societies to prevent and alleviate loneliness, establishing a new philosophical sub-field.

€ 1.025.860
ERC STG

Uncovering the mechanisms of action of an antiviral bacterium

This project aims to uncover the mechanisms behind Wolbachia's antiviral protection in insects and develop tools for studying symbiont gene function.

€ 1.500.000

Comparable projects from other funding schemes

ERC POC

FARE_AUDIT: Fake News Recommendations - an Auditing System of Differential Tracking and Search Engine Results

FARE_AUDIT develops a privacy-protecting tool to audit search engines, aiming to enhance public awareness and empower users to identify and mitigate disinformation online.

€ 150.000
ERC COG

Enhancing Protections through the Collective Auditing of Algorithmic Personalization

The project aims to develop mathematical foundations for auditing algorithmic personalization systems while ensuring privacy, autonomy, and positive social impact.

€ 1.741.309
ERC ADG

VIrtual GuardIan AngeLs for the post-truth Information Age

The VIGILIA project aims to develop AI-driven tools to detect cognitive biases in information processing, mitigating the effects of misinformation and enhancing trust in society.

€ 2.490.000