EU AI Regulation
Which AI systems and AI software are covered?
The AI Regulation obliges providers, users, importers, distributors and operators of AI systems to comply with and implement various obligations. These rules apply regardless of where the provider is established, i.e. worldwide.
The AI Regulation applies to AI in the form of stand-alone software. However, it also applies to AI in the form of embedded software.
The AI Regulation now contains a legal definition of AI systems, which also covers AI software. The English text of the regulation, currently the only language version available, contains the following provision:
'artificial intelligence system' (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Annex I, referred to in this definition, lists:
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
The attempt at a definition is to be welcomed. However, numerous difficulties of delimitation and classification will arise: under the above definition, the interpretive issues merely shift from the term "AI software" to other terms, such as "machine learning".
Risk-based approach and requirements depending on the risk of AI
The AI Regulation is part of the EU's New Legislative Framework (NLF). In this respect, there are numerous regulations that are linked to the risk posed by an AI with regard to humans. A distinction must therefore be made between three "risk levels" in particular:
Level 1: Basic requirements for every AI
The AI Regulation contains numerous general requirements for AI. The following points are examples:
The use of AI towards consumers must be recognizable to the consumer. This applies, for example, to AI-based interactions via chatbots.
There is a labeling obligation for deep fakes generated by AI. Deep fakes are, for example, AI-generated, realistic videos of people showing actions or statements that those people never actually performed or made. (Irrespective of the labeling requirement, the admissibility of such content must also be assessed, e.g. with regard to the personal rights of the persons depicted.)
In addition, an instrument from the field of IT security is envisaged: so-called regulatory sandboxes. These are to be made available by the Member States, among other things to allow the operation of AI systems to be tested. Small companies and start-ups are to be given preferential access to the sandboxes. Liability for testing within a sandbox remains with the sandbox participants.
Level 2: High-risk AI
Providers of high-risk AI must, among other things:
carry out a conformity assessment procedure (at a low risk level, the provider itself can carry this out; at higher risk levels, an external Notified Body must be involved);
affix a CE marking after the conformity assessment has been carried out;
register the AI in an EU database;
report defects to the authorities;
monitor the AI on the market (post-market monitoring and market surveillance);
set up a quality management system;
prepare technical documentation;
provide for automatic logging;
ensure accuracy, robustness and cybersecurity.
Furthermore, human supervision of the AI must be possible.
Level 3: Prohibited AI
There are also certain areas in which AI is prohibited. These include, in particular:
The use of subliminal techniques beyond human perception;
Exploiting the weaknesses of certain groups of people due to their age or disability;
Determining the reliability of natural persons by monitoring their social behavior over a certain period of time, if this is done by the public authorities;
Real-time systems for biometric recognition in public spaces;
Infringements and sanctions
AI supervisory authorities are also to be set up at Member State level (i.e. in each Member State in accordance with their respective national law). These AI supervisory authorities will be granted extensive technical powers in some cases. For example, comprehensive access to training data is to be made possible via an API.
Breaches of the AI Regulation are to be punishable by fines of up to EUR 30 million or up to 6% of annual global turnover, whichever is higher.
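As a simple illustration of the fine cap in the first draft (up to EUR 30 million or up to 6% of annual global turnover, with the higher amount forming the ceiling), the calculation can be sketched as follows; the function name and structure are illustrative only, not part of the regulation:

```python
# Illustrative sketch of the maximum fine under the first draft of the
# AI Regulation: EUR 30 million or 6% of annual global turnover,
# whichever is higher. All names here are hypothetical.

FLAT_CAP_EUR = 30_000_000
TURNOVER_RATE = 0.06  # 6% of annual global turnover

def max_fine(annual_global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given annual turnover."""
    return max(FLAT_CAP_EUR, TURNOVER_RATE * annual_global_turnover_eur)

# For a company with EUR 1 billion turnover, the 6% cap dominates:
print(max_fine(1_000_000_000))  # 60000000.0
# For a small company, the flat EUR 30 million cap applies:
print(max_fine(10_000_000))     # 30000000.0
```

The turnover-based cap only exceeds the flat cap once annual global turnover passes EUR 500 million.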
Summary and further development
The above description relates to the first draft of the AI Regulation, which will certainly still change in detail. In this respect, further developments should be closely monitored. It is also important to take the foreseeable legal framework into account now when designing products. Particularly for manufacturers who have previously manufactured outside of conformity assessment procedures, the new approach to conformity assessment procedures, the involvement of notified bodies and the affixing of CE markings, including the necessary technical documentation, could mean that extensive new process structures are required in companies. Other companies, on the other hand, e.g. medical device manufacturers, will be able to "dock" the new regulations to existing processes within the company, even if this means additional work.
The above brief presentation is a summary given by the author on July 6, 2021 as part of the Tübingen Innovation Days 2021 at Cyber Valley Tübingen and Technologiepark Tübingen-Reutlingen.