Artificial Intelligence: Unpredictable Risks and Attack Methods

Source: Heise.de, added 7 January 2021


The advantages of artificial intelligence (AI) are considerable, states the European IT security agency ENISA. AI can help with security and the fight against online crime; instruments such as “intelligent firewalls” are one example. At the same time, in its new report on AI threats, the agency also highlights major problems that the “emerging technology” brings with it for IT security.

Particularly in safety-critical applications that rely on automated decision-making, such as autonomous vehicles, intelligent manufacturing and e-health, its use can “expose individuals and organizations to unpredictable risks”, ENISA warns in the publication. The technology opens up potentially new attack methods, and there is concern that it undermines data protection. ENISA’s mandate is to support the EU and its member states on questions of network security and in the analysis of security problems.

According to the analysis, artificial intelligence passes through many steps in the supply chain and requires large amounts of data to function efficiently. The EU must therefore take “more targeted and appropriate security measures” to mitigate the threats identified. Above all, the use of the technology in sectors such as health, automotive and finance must be carefully examined in advance. The complexity and breadth of the challenges call for an EU ecosystem for safe and trustworthy AI that encompasses all elements of the supply chain.

A race

ENISA speaks of a “race” that is “of particular importance” for the EU in view of its long-term strategic goals for AI. In addition to a common understanding of the risks, an “AI toolbox with specific remedial measures” for various actors is required. Furthermore, it is crucial to secure “the diverse assets of the AI ecosystem and the life cycle” of relevant applications across borders and sectors.

In the field of IT security, the technology could jeopardize, for example, the integrity, confidentiality and authenticity of IT systems, as well as other properties such as non-repudiation, availability and robustness, according to the report. Attackers are also likely to aim at undermining the required transparency, explainability and accountability of AI solutions. Even poor data quality or “distorted” sets of input data could lead to algorithmic decisions that “wrongly classify people and exclude them from certain services or withhold their rights”.

Impact on fundamental freedoms

In general, according to the authors, AI systems and applications can significantly limit human control over personal data and thus draw conclusions about people that have a direct impact on their fundamental freedoms. This could happen because the machine’s results deviate from the results expected by humans or do not meet assumptions.

The threat taxonomy developed in the report begins with malicious activities and deliberate abuse. Such practices could be designed, for example, to change or destroy the underlying IT systems, infrastructures and networks. The scenarios range from unlawful eavesdropping on data communication and the takeover of processes, through physical attacks on hardware, for example after unauthorized access to facilities, to unintended damage such as the destruction of or damage to systems.

“KI made in Europe” is to become a quality mark

ENISA likewise lists malfunctions of hardware or software among the threats, as well as the “unexpected interruption” or quality reduction of a service. An accident or a natural disaster “causing great damage or loss of life” must also be reckoned with. Last but not least, the agency counts legal action by third parties, aimed at prohibiting acts or claiming damages, among the conceivable threats. Last year, Europol had already highlighted the potential for misuse of AI in the area of crime.

ENISA recommends identifying existing gaps in future research directions relating to AI and IT security as soon as possible. It is already apparent that further work is required in the areas of automatic formal verification and validation, explainability, and new types of security techniques to ward off emerging AI threats. Only in this way can trustworthy AI algorithms and solutions be created in Europe that improve industrial and safety-related processes as well as the competitiveness of the internal market. The brand “KI made in Europe” must stand as a seal of approval for ethical, safe and state-of-the-art systems in order to be recognized worldwide.

(ds)

Read the full article at Heise.de

