EU AI Regulation

The AI Regulation (COM(2021) 206 final), also known as the "Artificial Intelligence Act" ("AIA"), is, as far as can be seen, the first comprehensive regulation of artificial intelligence ("AI") worldwide. It imposes obligations on providers, users, importers, distributors and operators of AI and applies regardless of where the provider is based (i.e. worldwide). It covers both stand-alone software and embedded software. The scope of the AI Regulation extends to the placing on the market, the putting into service and the use of all AI systems.

No exemptions from the GDPR

It should be noted at the outset that the AI Regulation contains almost exclusively restrictions and regulatory requirements for the use of AI, e.g. the requirement for CE marking.

One major hope for an AI regulation, namely rules broadly permitting the use of data for AI training or machine learning, is not fulfilled by the AI Regulation. Such permissions are almost entirely absent. Only the use of special categories of personal data (e.g. health data) is somewhat simplified. Whether data may be used for AI training therefore remains subject to the other rules, in particular the GDPR, as soon as personal data is involved.

Definition of AI

The definition of "artificial intelligence" or "AI system" according to the terminology of the AI Regulation is currently still very broad. According to this, artificial intelligence is

  • Software that has been developed with

    • machine learning approaches and/or

    • logic and knowledge-based approaches and/or

    • statistical approaches

  • and which can produce, for a set of human-defined goals, results (e.g. predictions, recommendations or support for specific decision-making)

  • that influence the environment.

A definition of this breadth risks also covering "classic software" that is not artificial intelligence in the narrower sense (see, for example, TensorFlow). Even a simple pocket calculator could be considered logic-based within the meaning of the above definition; the same applies, for example, to "decision support software" in the medical field.

Due to the widespread criticism of the definition, it is to be expected that the draft AI Regulation will be amended in this regard; initial concrete considerations are already underway.

Risk classes

The AI Regulation distinguishes between three groups of artificial intelligence:

  • prohibited AI

  • high-risk AI

  • simple AI

Prohibited AI includes, in particular, AI that

  • uses subliminal techniques beyond human perception that can cause physical or psychological harm to a person

  • exploits the weaknesses of certain groups of people due to their age or disability

  • evaluates the trustworthiness of natural persons by monitoring their social behavior over a certain period of time, where this is done by the state

  • performs real-time biometric identification in publicly accessible spaces (apart from certain areas of law enforcement)

High-risk AI includes artificial intelligence in the area of

  • medical devices

  • critical (transportation) infrastructure

  • evaluation of evidence, certificates or creditworthiness

  • access to education or career paths, job applications, personnel management, and safety components of products

Numerous other areas covered as high-risk AI can be found in annexes to the AI Regulation.

"Simple AI" is all other artificial intelligence that is not prohibited or high-risk AI.

Regulatory requirements

The AI Regulation then regulates different requirements depending on the AI:

For example, the following requirements apply to all AI systems:

  • Labeling/information obligation (e.g. chatbots)

  • Provision of "sandboxes" by the member states for testing purposes

The following requirements, among others, apply to high-risk AI:

  • Conformity assessment procedure by a so-called Notified Body

  • Subsequent affixing of the CE marking

  • Registration in an EU database

  • Obligation to report detected defects

  • Post-market monitoring and market surveillance

  • Quality management system, technical documentation, automatic logging

  • Consideration of cybersecurity aspects

  • Human supervision of the AI

Sanctions

In the event of violations, fines of up to 6% of annual global turnover or up to EUR 30 million may be imposed.
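As a rough worked illustration, the interplay of the two caps can be sketched as follows. This is a simplified sketch, not legal advice: it assumes the draft's reading that for companies the higher of the two amounts forms the upper limit, and the turnover figures are purely hypothetical.

```python
# Hypothetical illustration of the maximum fine cap for the most serious
# violations under the draft AI Regulation: up to EUR 30 million or up to
# 6% of annual global turnover (assuming the draft's "whichever is higher"
# reading for companies). This is the upper limit, not the fine imposed.

def max_fine_cap(annual_global_turnover_eur: float) -> float:
    """Return the upper limit of a possible fine in EUR."""
    return max(30_000_000.0, 0.06 * annual_global_turnover_eur)

# A company with EUR 1 billion turnover: 6% = EUR 60 million > EUR 30 million,
# so the turnover-based cap applies.
print(max_fine_cap(1_000_000_000))  # 60000000.0

# A company with EUR 100 million turnover: 6% = EUR 6 million < EUR 30 million,
# so the fixed EUR 30 million cap applies.
print(max_fine_cap(100_000_000))  # 30000000.0
```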

AI regulatory sandboxes

The AI Regulation also introduces so-called "AI regulatory sandboxes" ("real-world laboratories"). Personal data that was lawfully collected for other purposes may be processed in these sandboxes for the development and testing of an AI system if, in particular,

  • there is a significant public interest in the AI (e.g. in health research)

  • it is not possible to use other data sets, the processing has no adverse impact on data subjects, and the personal data is subsequently deleted

  • there is a functionally separate, isolated data processing environment


This article is part of the overview of the current changes to the EU Data Strategy and the New Legislative Framework. Please note that the legislative proposal is currently a draft (even though it is marked "final"). It is therefore not yet applicable law, and changes may still be made during the legislative process. Given the relatively short "transitional periods", however, it is already worth taking a look at the upcoming law.

Date: 2. Nov 2022