Debate about AI: Manufacturers should be liable even without fault of their own

Source: Heise.de, added 11 November 2020


The difficult question of liability for damage caused by the use of artificial intelligence was the focus of the forum “It wasn’t me – it was the computer. Who is liable for the use of adaptive robots?”, hosted by the Weizenbaum Institute for the Networked Society.

In his technical introduction, Rainer Rehak, research assistant at the institute, pointed out the problem that at the end of configuration, AI developers often do not know why their AI has arrived at a certain result or decision, even if they understood the components of the system beforehand. “Such a black box can also exist in non-AI systems,” he said. “Ultimately, machine learning has little to do with human learning.”

Considerate behavior is difficult for AI to implement

The machine is not an actor but remains a tool. It follows that responsible and considerate behavior within the meaning of Section 1 of the Road Traffic Act is very difficult for an AI to implement. In contrast to hardware capacity, which grows according to Moore’s Law, the technical performance of such systems increases only linearly.

The IT law professor and director of the Weizenbaum Institute, Herbert Zech, advocated strict liability on the part of manufacturers for high-risk AI. “We should introduce it because it internalizes risks and provides the right incentives to reduce them,” he said. Robotics, the ability to learn, and networking are risk drivers. A legislative proposal by the EU Parliament from October 2020 also goes in this direction. Zech read the parliament’s resolution as meaning that fault should be presumed for damage caused by high-risk AI. This corresponds to a reversal of the burden of proof under EU product liability and parallels the no-fault liability of motor vehicle owners.

As a reason for his recommendation, he named the purposes of liability and its steering effects. “If you are only liable when you are at fault, as a manufacturer you will merely behave in such a way that you are not liable. But if you are liable in any case, you weigh up whether taking on the obligation is worthwhile. Risks should only be taken when they are worth it.” This also creates an incentive to develop technologies more safely. His idea of introducing a statutory AI accident insurance that replaces liability, similar to occupational disability insurance, is still a long way off. Zech hopes that this would promote the technology and its acceptance, as well as avoid problems of proof.

Supervision for compliance with fundamental rights

In her lecture, the legal philosopher Kirsten Bock addressed seven ethical principles for the use of AI that are currently being discussed by experts in the EU. Bock represents the supervisory authorities of the German federal states in working groups of the European Data Protection Board (EDSA). First and foremost, supervision of the technology is necessary to ensure compliance with fundamental and human rights. One has to consider which authorities should exercise that supervision and what powers of intervention they need. Further criteria are technical robustness, data governance and data protection, as well as transparency.

The principle of non-discrimination by AI systems has provided a lot of material for discussion in recent years. In many cases, the data fed into such systems led to unfair decisions without this being intended. To safeguard “social and ecological well-being”, Bock thought it worth considering appointing sustainability officers in companies, following the example of data protection officers. She named accountability as the seventh and final principle, which is essentially covered by product liability and contract law.

Autonomous wheelchair instead of autonomous car

She then spoke of the fundamental problem that companies are geared exclusively towards maximizing profits. Society must therefore consider how to regulate and reduce the resulting risks for the benefit of the common good. “Why are we only talking about the autonomous car and not about the autonomous wheelchair? It is less commercially interesting, but would be more valuable to society.”

At the end of the forum, the three experts commented on the SPD’s recommendation to introduce “tentative regulation” for AI. “I don’t have the courage to take a position. The AI inquiry reports are too cautious; they would do better to give a direction,” said Rehak. Zech explained that regulation can only react; one has to weigh up in which areas AI is not wanted at all. Bock called the SPD’s formulation “trivial”: “We are always feeling our way. What I miss is the question: How do we want to live in the future? Do we want more AI? Young people in particular should decide that.”

(mho)

Read the full article at Heise.de

