Measuring and Mitigating Risks of AI-driven Information Targeting
This project aims to assess the risks of AI-driven information targeting at the level of individuals, algorithms, and platforms, and to propose protective measures through an innovative measurement methodology.
Project details
Introduction
We are witnessing a massive shift in how people consume information. In the past, people played an active role in selecting the news they read. More recently, information has begun to appear in people's social media feeds as a byproduct of their social relations.
Current Trends
At present, we are seeing a new shift brought about by the emergence of online advertising platforms, where third parties can pay to show specific information to particular groups of people through targeted ads. These targeting technologies are powered by AI-driven algorithms.
Risks of AI-Driven Targeting
Using these technologies to promote information, rather than the products they were initially designed for, opens the way for self-interested groups to exploit people's personal data in order to manipulate them. European institutions recognize these risks, and many fear that the technology will be weaponized to engineer polarization or manipulate voters.
Project Goals
The goal of this project is to study the risks of AI-driven information targeting at three levels:
- Human level: under which conditions targeted information can influence an individual's beliefs.
- Algorithmic level: under which conditions AI-driven targeting algorithms can exploit people's vulnerabilities.
- Platform level: whether targeting technologies lead to biases in the quality of information that different groups of people receive and assimilate.
Then, we will use this understanding to propose protection mechanisms for platforms, regulators, and users.
Methodology
This proposal's key asset is the novel measurement methodology I propose, which enables a rigorous and realistic evaluation of these risks through randomized controlled trials on social media. The methodology builds on advances in multiple disciplines and takes advantage of our recent breakthrough in designing independent auditing systems for social media advertising.
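To make the idea of randomized controlled trials in this setting concrete, the sketch below is a purely illustrative, minimal example of the underlying experimental logic: users are randomly assigned to either receive a piece of targeted information or not, and the effect on a measured outcome (for instance, a belief score) is estimated afterwards. All function names, group sizes, and outcome values are hypothetical and are not part of the project's actual measurement system.

```python
# Illustrative sketch only: a minimal randomized controlled trial of the kind
# the methodology enables. Values and names are hypothetical.
import random
import statistics


def assign_groups(user_ids, seed=42):
    """Randomly split users into a treatment and a control group."""
    rng = random.Random(seed)
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # treatment, control


def average_treatment_effect(treated_outcomes, control_outcomes):
    """Difference in mean outcomes with a simple pooled standard error."""
    ate = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
    se = (statistics.variance(treated_outcomes) / len(treated_outcomes)
          + statistics.variance(control_outcomes) / len(control_outcomes)) ** 0.5
    return ate, se


if __name__ == "__main__":
    users = range(1000)
    treatment, control = assign_groups(users)
    # Hypothetical outcome measurements collected after exposure.
    rng = random.Random(0)
    treated_outcomes = [rng.gauss(0.55, 0.1) for _ in treatment]
    control_outcomes = [rng.gauss(0.50, 0.1) for _ in control]
    ate, se = average_treatment_effect(treated_outcomes, control_outcomes)
    print(f"Estimated effect: {ate:.3f} (SE {se:.3f})")
```

The random assignment is what allows the estimated difference to be read as a causal effect of the targeted information rather than a correlate of who was targeted; the project's contribution lies in making such assignments and measurements possible on real advertising platforms.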
Conclusion
Successful execution will provide a solid foundation for sustainable technologies that ensure healthy information targeting.
Financial details & Timeline
Financial details
Grant amount | € 1.499.953 |
Total project budget | € 1.499.953 |
Timeline
Start date | 1-10-2022 |
End date | 30-9-2027 |
Grant year | 2022 |
Partners & Locations
Project partners
- INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (coordinator)
- CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Country(ies)
Similar projects within the European Research Council
Project | Scheme | Amount | Year |
---|---|---|---|
MANUNKIND: Determinants and Dynamics of Collaborative Exploitation. This project aims to develop a game theoretic framework to analyze the psychological and strategic dynamics of collaborative exploitation, informing policies to combat modern slavery. | ERC STG | € 1.497.749 | 2022 |
Elucidating the phenotypic convergence of proliferation reduction under growth-induced pressure. The UnderPressure project aims to investigate how mechanical constraints from 3D crowding affect cell proliferation and signaling in various organisms, with potential applications in reducing cancer chemoresistance. | ERC STG | € 1.498.280 | 2022 |
The Ethics of Loneliness and Sociability. This project aims to develop a normative theory of loneliness by analyzing ethical responsibilities of individuals and societies to prevent and alleviate loneliness, establishing a new philosophical sub-field. | ERC STG | € 1.025.860 | 2023 |
Uncovering the mechanisms of action of an antiviral bacterium. This project aims to uncover the mechanisms behind Wolbachia's antiviral protection in insects and develop tools for studying symbiont gene function. | ERC STG | € 1.500.000 | 2023 |
Similar projects from other schemes
Project | Scheme | Amount | Year |
---|---|---|---|
FARE_AUDIT: Fake News Recommendations - an Auditing System of Differential Tracking and Search Engine Results. FARE_AUDIT develops a privacy-protecting tool to audit search engines, aiming to enhance public awareness and empower users to identify and mitigate disinformation online. | ERC POC | € 150.000 | 2022 |
Enhancing Protections through the Collective Auditing of Algorithmic Personalization. The project aims to develop mathematical foundations for auditing algorithmic personalization systems while ensuring privacy, autonomy, and positive social impact. | ERC COG | € 1.741.309 | 2024 |
VIrtual GuardIan AngeLs for the post-truth Information Age. The VIGILIA project aims to develop AI-driven tools to detect cognitive biases in information processing, mitigating the effects of misinformation and enhancing trust in society. | ERC ADG | € 2.490.000 | 2024 |