BiasScore
BiasScore offers an innovative solution for identifying gender bias in written content and ensuring that the content is inclusive.
Project details
Introduction
How can someone verify whether their writing is gender-neutral and inclusive, and whether it meets the requirements of a given target group? Consider, for example, an employee responsible for marketing and communication across a company in the Netherlands.
The Importance of Gender-Neutral Writing
After that employee finishes a series of articles for publication, it is essential to ensure that the writing is gender-neutral and free from possible implicit bias.
Problems with Current Analysis
However, asking colleagues to review the texts guarantees nothing, since every reviewer carries their own biases, often unconsciously and each to a different degree.
Lack of Information
An individual, or even an entire company, may lack the internal data and knowledge needed to accurately understand how different genders experience events, texts, and speech.
Bias in Data
Most datasets are built from a male perspective, based on male data, or generated by men. Because data is collected inconsistently across gender groups, or not at all, gender bias ends up embedded in almost every form of written content.
Example from Journalism
Even in journalism, there is no check on whether an article is written from a male or a female perspective.
Market-Specific Problems
Examples of market-specific problems include a lack of diversity in training data, difficulty identifying subtle biases, and ethical considerations such as privacy and data security.
Innovation
Our solution enables media companies, educational institutions, and many other organizations to prove that an article or another form of content is gender-inclusive.
Choice for the Reader
This means potential readers can choose not to read content that does not carry our Stamp of Approval for gender inclusivity.
The Solution's Linguistic Approach
The innovation of the solution lies in its linguistic approach to identifying gender bias in written content: it not only flags individual words but also examines descriptive writing and tone.
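To make this concrete, the following is a minimal sketch, in Python, of a lexicon-based check of the kind such a tool might start from. It is not the actual BiasScore method, which this document does not disclose; the word lists, the bias_score function, and the scoring formula are all illustrative assumptions. The only step beyond single-word flagging here is counting gender-coded adjectives as a crude proxy for descriptive tone.

```python
# Minimal sketch of a lexicon-based gender-bias check. Illustrative only:
# NOT the BiasScore method, which is not disclosed in this document.
# All word lists below are small placeholder assumptions.
import re
from collections import Counter

GENDERED_TERMS = {
    "masculine": {"he", "him", "his", "man", "men", "chairman", "spokesman"},
    "feminine": {"she", "her", "hers", "woman", "women", "chairwoman"},
}
# Gender-coded adjectives serve as a crude proxy for "descriptive writing
# and tone" beyond plain word flagging; a real system would need far more.
CODED_ADJECTIVES = {
    "masculine": {"ambitious", "assertive", "dominant", "competitive"},
    "feminine": {"supportive", "nurturing", "collaborative", "loyal"},
}

def bias_score(text: str) -> dict:
    """Count gendered terms and coded adjectives; return a naive skew in [-1, 1]."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for gender, terms in GENDERED_TERMS.items():
            if token in terms:
                counts[f"{gender}_terms"] += 1
        for gender, adjectives in CODED_ADJECTIVES.items():
            if token in adjectives:
                counts[f"{gender}_adjectives"] += 1
    masculine = counts["masculine_terms"] + counts["masculine_adjectives"]
    feminine = counts["feminine_terms"] + counts["feminine_adjectives"]
    total = masculine + feminine
    skew = 0.0 if total == 0 else (masculine - feminine) / total
    return {"counts": dict(counts), "skew": skew}  # skew: +1 masculine, -1 feminine

if __name__ == "__main__":
    sample = "He is an ambitious, competitive chairman; she is supportive."
    print(bias_score(sample))  # skew > 0: the sample leans masculine
```

A production system would of course need part-of-speech tagging or contextual language models to judge tone in context rather than matching isolated words, which is precisely where the project's claimed innovation lies.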
Customizability and Improvement
The solution can also be customized to meet the specific needs of different industries and organizations, and it can improve over time as it is used.
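One way such per-industry customization and "improvement over time" could be structured is sketched below, again purely as an assumption; every name, word list, and function here is hypothetical and not the BiasScore design.

```python
# Illustrative only: a hypothetical structure for industry-specific
# customization and feedback-driven improvement of a bias lexicon.
BASE_LEXICON = {
    "masculine": {"assertive", "competitive"},
    "feminine": {"supportive", "nurturing"},
}

# Industry-specific additions, e.g. gender-coded jargon common in job ads.
INDUSTRY_OVERRIDES = {
    "recruitment": {"masculine": {"rockstar", "ninja"}},
}

def customize(base, industry):
    """Merge industry-specific terms into a copy of the base lexicon."""
    merged = {gender: set(terms) for gender, terms in base.items()}
    for gender, extra in INDUSTRY_OVERRIDES.get(industry, {}).items():
        merged.setdefault(gender, set()).update(extra)
    return merged

def learn_from_feedback(lexicon, term, gender):
    """'Improve over time': add a term that users flagged as gender-coded."""
    lexicon.setdefault(gender, set()).add(term)

if __name__ == "__main__":
    lexicon = customize(BASE_LEXICON, "recruitment")
    learn_from_feedback(lexicon, "dominant", "masculine")
    print(sorted(lexicon["masculine"]))
```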
Scalable Solution
BiasScore provides a scalable solution to a complex problem that has historically been difficult to address.
Financial details & Timeline
Financial details
Grant amount | € 20.000 |
Total project budget | € 60.000 |
Timeline
Start date | 1-8-2023 |
End date | 31-7-2024 |
Grant year | 2023 |
Partners & Locations
Project partners
- Stichting Fe/male Switch (lead partner)
Country(ies)
Similar projects within MIT Haalbaarheid
Project | Scheme | Amount | Year |
---|---|---|---|
Bias Neutraliser: CorTexter is developing deep-learning software to detect and neutralize unintended bias in recruitment texts, promoting equal opportunities for job seekers. | Mkb-innovati... | € 20.000 | 2021 |
Bias Score for Recruitment software: BCR is developing a tool to identify and reduce bias in recruitment software, promoting inclusivity and helping select the best candidates. | Mkb-innovati... | € 20.000 | 2022 |
Equilo – Automatic Gender Impact Consultancy: The project is developing a platform that uses AI and Big Data to give organizations fast, cost-effective advice on meeting the gender targets of the SDGs. | Mkb-innovati... | € 19.360 | 2020 |
Generiek linguïstisch AI-voorspellingsmodel voor eerlijke HR-besluitvorming: Seedlink is developing a generic AI prediction model for HR decision-making, aimed at improving accuracy and fairness without client-specific data and accessible to smaller companies. | Mkb-innovati... | € 20.000 | 2020 |
Project Hominis: The project focuses on developing an ethical AI system for natural language processing that minimizes bias and manages technical, economic, and regulatory risks. | Mkb-innovati... | € 20.000 | 2022 |
Similar projects from other schemes
Project | Scheme | Amount | Year |
---|---|---|---|
Developing Bias Auditing and Mitigation Tools for Self-Assessment of AI Conformity with the EU AI Act through Statistical Matching: Act.AI aims to enhance AI fairness and compliance with the EU AI Act by providing a versatile, plug-and-play tool for continuous bias monitoring across various data types and industries. | ERC Proof of... | € 150.000 | 2024 |
Mapping and Matching Content Diversity and Bias in EU Online Social Networks: PolarScopEU aims to develop a tool for measuring and mapping online political polarization in Greece, Portugal, and Spain, enhancing awareness of biases and improving understanding of political content. | ERC Proof of... | € 150.000 | 2024 |
Measuring and Mitigating Risks of AI-driven Information Targeting: This project aims to assess the risks of AI-driven information targeting on individuals, algorithms, and platforms, and propose protective measures through innovative measurement methodologies. | ERC Starting... | € 1.499.953 | 2022 |
Novel diffuse Optical method to combat skin color bias in non-invasive optical biomarker sensing devices such as pulse oximeters: NOBIAS aims to develop a groundbreaking bias-free optical biomarker sensing technology using multilayer TDDOS to enhance accuracy and eliminate skin color bias in medical devices. | ERC Starting... | € 1.582.349 | 2025 |
Diving into Data Diversity for Fair and Robust Natural Language Processing: DataDivers aims to create a framework for measuring data diversity in NLP datasets to enhance model fairness and robustness through empirical and theoretical insights. | ERC Starting... | € 1.500.000 | 2025 |