European AI Act - EU Parliament adopts world's first multinational regulation of artificial intelligence

March 2024


On 13 March 2024, the European Parliament adopted the AI Act by a large majority: 523 members voted in favour of the draft, which had been debated the day before. The vote followed contentious trilogue negotiations, in which a compromise was reached in December 2023. With the Act, the EU claims a pioneering role in setting a global standard for the regulation of artificial intelligence. The comprehensive AI Act is built on a risk-based model that attaches different legal consequences to AI systems depending on their risk classification.

What companies can expect now

The risk-based model establishes several categories to which AI systems will be assigned in the future:

  • Unacceptable risk: AI systems in this category will be banned, and existing systems will have to be withdrawn from the EU market. This applies in particular to manipulative AI systems designed to impair informed decision-making. It also covers AI systems that perform biometric categorisation based on sensitive characteristics or that build facial-recognition databases by scraping facial images from the internet or from CCTV footage.

  • High-risk systems: These tools are subject to extensive compliance requirements. This applies not only to companies that develop such systems, but also to those that distribute or deploy them, including companies that make such AI systems available to their employees to facilitate their work.

  • Lower-risk systems: AI systems that do not fall into the above categories are still regulated under the AI Act. Among other things, they are subject to transparency requirements.

The introduction of these comprehensive new rules has many companies asking what specifically will change for them. The AI Act imposes numerous obligations on companies that provide their employees with AI systems for their work. Just a few of them are highlighted here:

  • Transparency obligations: When AI systems are used, their use must be disclosed to the persons, such as employees or applicants, who are exposed to the content they generate.

  • Training obligations in relation to high-risk AI: Companies are obliged to ensure that employees who exercise control over high-risk AI have sufficient skills and receive training.

  • Monitoring obligations: When using high-risk AI, companies are also obliged to check whether the systems are being used in accordance with the instructions for use drawn up for them.

The formal adoption of the AI Act by the Council and the Parliament's corrigendum procedure, in which linguistic and legal experts carry out a final review, are still pending. Following its publication in the Official Journal of the EU, the AI Act will enter into force and its obligations will apply in stages. Prohibited AI systems must be taken off the market within six months of the AI Act becoming effective.

Companies that already use AI systems today are well advised to review them for compliance with the requirements of the AI Act. Introducing internal guidelines on the use of these systems is also a good way to give employees a clear framework for action. Finally, companies should already be planning AI training measures for their employees.

We advise companies worldwide on the introduction of AI systems and all of the associated legal aspects.


Get in touch with us. We provide you with individual support in dealing with the challenges you face as an employer under employment law.