
Germany highlights discrimination risks of AI

August 30, 2023

Fears are growing that artificial intelligence could amplify structural racism and other forms of discrimination. Germany's anti-discrimination commissioner wants to change that.

Experts are concerned that AI could replicate biases found in its input data or apply prejudices of its own (Image: Alexander Limbach/Zoonar/picture alliance)

"AI makes many things easier — unfortunately also discrimination." That is how Ferda Ataman, the German government's independent Anti-Discrimination Commissioner, assessed the potential of artificial intelligence (AI) at a press conference in Berlin on Wednesday morning.

She was there to present a long-awaited expert report intended to better protect people against possible discrimination by self-learning algorithmic decision-making (ADM) systems. Ataman's report cited many examples of this type of AI already in use today: job application procedures, bank loans, insurance, and the allocation of state benefits such as social welfare.

How AI reproduces prejudices

"Here, probability statements are made on the basis of sweeping group characteristics," the anti-discrimination officer said. "What appears objective at first glance can automatically reproduce prejudices and stereotypes. Under no circumstances should we underestimate the dangers of digital discrimination."

Anti-Discrimination Officer Ferda Ataman is calling for better controls of AI (Image: Bernd von Jutrczenka/dpa/picture alliance)

The dystopian stories are well-documented. In 2019, more than 20,000 people in the Netherlands experienced what the use of supposedly incorruptible technology can lead to: They were wrongly ordered to pay back child benefits under threat of heavy fines. A discriminatory algorithm in the software was partly responsible, and people with dual citizenship were particularly affected.

To prevent such cases, Ataman is demanding that companies work transparently: she wants companies that employ AI to disclose what data they use and how their systems work. The expert report, written by legal scholar Indra Spiecker and her colleague Emanuel V. Towfigh, describes AI-based systems as a "black box." For those affected, it is practically impossible to trace why they were disadvantaged.

Video: Is artificial intelligence helpful or harmful? (02:20)

"A specific phenomenon of the use of ADM systems is that their potential for discrimination may already be inherent in the system itself," the report states. The cause could be a data set that is faulty, unsuitable for the intended purpose, or distorted.

Potential discriminatory feature: the postal code

What this means is illustrated in the report with typical examples: "The characteristic of the postal code, for example, which is not discriminatory in itself, becomes a proxy for the prohibited discriminatory characteristic of origin, because, for example, many migrants live in a certain city district for historical reasons."

This can have negative consequences for the people living there. If, for instance, they apply for loans, they may be classified as financial risks unlikely to repay their debts. Experts call this "discrimination through statistics": attributing characteristics to individuals based on the actual or assumed average values of a group they belong to.
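The postal-code example can also be sketched in code. The following is a minimal illustration, assuming entirely synthetic data and a hypothetical lender threshold: a logistic regression never sees the protected attribute, only the district, yet its approvals split sharply along group lines because district and group membership are correlated.

```python
# Minimal sketch of proxy discrimination: the model is trained only on a
# postal-code district, never on the protected attribute, yet it reproduces
# the group disparity. Synthetic data and thresholds throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, size=n)            # protected attribute (hidden from model)
# District correlates with group: each group mostly lives in "its" district.
district = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical repayment labels carry a gap between districts.
repaid = rng.random(n) < np.where(district == 0, 0.85, 0.60)

# The lender's model uses only the postal-code district as a feature.
X = district.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, repaid)

# Approve applicants whose predicted repayment probability clears a
# hypothetical threshold of 0.75.
approved = model.predict_proba(X)[:, 1] >= 0.75

# Approval rates split by the protected attribute the model never saw:
for g in (0, 1):
    print(f"group {g}: approval rate = {approved[group == g].mean():.2f}")
```

Because the disparity emerges without the protected attribute ever appearing as a feature, simply removing sensitive fields from the input data is not, by itself, a safeguard.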

Video: Fact check: How to spot AI images? (08:21)

Ataman calls for arbitration body

For these and other cases, the Federal Anti-Discrimination Commissioner wants to set up a conciliation office in her agency, and is also calling for the General Equal Treatment Act (AGG) to be supplemented by a mandatory conciliation procedure.

To illustrate the urgent need for action, she cited further cautionary examples from other countries: In the US, an incorrectly programmed algorithm behind Apple's credit card systematically discriminated against women when granting credit. In Australia, an error in an AI-based decision-making system forced hundreds of thousands of people to pay back welfare benefits to which they were in fact entitled.

Ataman's conclusion from the report is simple: "Digitization is the future. But it must not become a nightmare. People must be able to trust that they will not be discriminated against by AI." And that they can defend themselves if it does happen. That's why, she said, clear and comprehensible rules are needed.

This article was originally written in German.

