Bundestag study: Broad discrimination through AI is often difficult to understand

Source: Heise.de, added 25 Nov 2020

Those affected often cannot understand “algorithmic unequal treatment” and how it came about, or can do so only with difficulty. This is the conclusion of the Office for Technology Assessment at the German Bundestag (TAB) in a study on possible discrimination through algorithmic decision-making systems (AES), machine learning and other forms of artificial intelligence (AI).

Cause of the unequal treatment

Algorithms are an essential element of digital applications, state Alma Kolleck and Carsten Orwat in the study published on Tuesday. The program routines determine the best route for a planned trip, a hopefully suitable partner on a dating platform, or one's own creditworthiness. They also support medical diagnoses, for example. A large part of such decision-making and scoring functions goes unnoticed by many.

The authors use examples to illustrate the extensive areas of life in which AES are already at work. In doing so, they show how challenging it can be to discover the cause of a particular unequal treatment. According to the analysis, the first case, from medical care, makes clear how algorithmic fallacies can arise when important information is not stored in the system.

Built-in discrimination

For example, an AES in a clinic calculated a lower risk of death for chronically and multiply ill patients than for those suffering only from pneumonia, and accordingly saw less need for inpatient treatment. The fallacy is explained by the fact that the AES “was trained with data in which chronically and multiply ill people had received intensive medical treatment and therefore had a low mortality rate”.
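The mechanism can be sketched in a few lines of code (a purely hypothetical illustration with made-up numbers, not the clinic's actual system): when the training outcomes already reflect the effect of intensive treatment, a risk score derived from them ranks exactly those patients as low risk who only survived because they were treated.

```python
# Hypothetical sketch of the training-data fallacy described in the study.
# The records are invented; chronically/multiply ill patients received intensive
# care in the past, so few deaths were recorded for them.

historical = [
    # (has_chronic_illness, died)
    (True, False), (True, False), (True, False), (True, True),
    (False, True), (False, True), (False, False), (False, False),
]

def learned_risk(records, chronic):
    """Mortality rate observed in the treatment-confounded training data."""
    outcomes = [died for has_chronic, died in records if has_chronic == chronic]
    return sum(outcomes) / len(outcomes)

print("chronically/multiply ill:", learned_risk(historical, chronic=True))   # 0.25
print("pneumonia only:          ", learned_risk(historical, chronic=False))  # 0.50
# A score built on these rates deprioritises inpatient treatment for the very
# patients whose low recorded mortality is an effect of that treatment.
```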

The Austrian labor market service (AMS) has been using software since 2018 that assesses job seekers on the basis of their socio-demographic data with regard to their proximity to the labor market and assigns them to groups, the authors explain. The published calculation rule “shows that the female gender per se” leads to a deduction. Care obligations, which are only counted for women, worsen the result further.
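Structurally, such a published calculation rule is an additive score with group-specific deductions. The sketch below uses invented weights purely for illustration; they are not the actual AMS coefficients.

```python
# Hypothetical additive scoring rule in the style described for the AMS software.
# All weights are made up for illustration and are NOT the published AMS values.

def labor_market_score(female: bool, care_obligations: bool, over_50: bool) -> float:
    """Toy 'proximity to the labor market' score; higher means better prospects."""
    score = 1.0
    if over_50:
        score -= 0.2          # illustrative deduction for older job seekers
    if female:
        score -= 0.1          # deduction for female gender per se
        if care_obligations:
            score -= 0.15     # care obligations counted only for women
    return score

# Two otherwise identical job seekers land in different groups:
print(labor_market_score(female=False, care_obligations=True, over_50=False))  # 1.0
print(labor_market_score(female=True,  care_obligations=True, over_50=False))  # 0.75
```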

Data collected pre-digitally

The researchers also examine the Compas risk assessment system. It assesses the likelihood of recidivism and comes into play in the USA, for example, when deciding whether to suspend a sentence on probation or how much bail to set. In the high-risk group, the prediction turned out to be wrong for many African-American offenders: 45 percent of them remained law-abiding, while this figure among whites was only 24 percent.
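The figures quoted describe the share of high-risk classifications that turned out to be wrong in each group. A short sketch of how such a comparison is computed (the records below are fabricated for illustration and are not ProPublica's or the study's data):

```python
# Sketch: share of "high risk" labels that turned out to be wrong, per group.
# A wrong high-risk label is a person classified high risk who remained law-abiding.
# The records are made up for illustration only.

records = [
    # (group, predicted_high_risk, reoffended)
    ("black", True, False), ("black", True, False), ("black", True, True),
    ("black", True, True),  ("black", False, False),
    ("white", True, False), ("white", True, True),  ("white", True, True),
    ("white", True, True),  ("white", False, False),
]

def wrong_high_risk_share(records, group):
    """Among people in `group` labelled high risk, the fraction who did not reoffend."""
    high_risk = [reoffended for g, pred, reoffended in records if g == group and pred]
    return sum(1 for reoffended in high_risk if not reoffended) / len(high_risk)

for group in ("black", "white"):
    print(f"{group}: {wrong_high_risk_share(records, group):.0%} of high-risk labels were wrong")
```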

The fourth case also concerns unequal treatment based on skin color: comparative studies of different facial recognition systems showed that common commercial AES in the USA most frequently misidentified female and dark-skinned faces. The reasons for biased results often lie in the “pre-digital” world, the authors trace.

Justifying the evaluation

The question of whether a specific unequal treatment is actually unjustified and thus discriminatory is answered very controversially within society and in jurisprudence. The answer depends, for example, on the social framework conditions of the deployment and on how the results are put into practice. If, for example, people have not received a loan because of their address, or women looking for work are generally regarded as an obstacle on the labor market, “the underlying statistical generalizations at least appear to be in need of justification and may conflict with basic social values or laws,” the study says.

Technology-neutral legal bases against discrimination already exist, such as the General Equal Treatment Act, the personal rights enshrined in the Basic Law and the General Data Protection Regulation (GDPR). The latter prohibits fully automated decisions with legal effect on persons and stipulates information obligations.

Labeling obligation

The scientists point to proposals for minimizing the risk of discrimination by AES. The debate focuses on more transparency and control, an evaluation of the technology and uniform regulation. A labeling requirement could help make the use of AES clear to those affected. A “risk-adapted assessment” could evaluate the social consequences ex ante and, depending on how critical an application is, establish various control measures. Collective legal protection through representative actions is also conceivable.

In the research project “KI Testing & Auditing” (ExamAI), headed by the Gesellschaft für Informatik, those involved are already investigating a number of the questions raised. Among other things, they are to explore what procedures might look like that enable a “controllable, understandable and fair” use of AI.

(kbe)

Read the full article at Heise.de

media: Heise.de  
keywords: Software  Sound  
