The first two sets of provisions of the EU’s Artificial Intelligence Act (‘AI Act’) have taken effect this year: 1) the provisions on prohibited AI practices; and 2) the provisions on general-purpose AI models. Many companies may be surprised to find themselves within the scope of the Act, having initially assumed it applies only to providers and distributors of AI products. In reality, it is their role as employers that brings most companies within the Act’s remit and imposes obligations on them.
The AI Act may impose various obligations on employers, depending on the roles they assume. For most employers, this includes at least the role of a deployer (a natural or legal person using an AI system under its authority in a professional context), as more and more organisations adopt AI-powered tools and programmes for use by their staff.
As an employer, make sure to:
1) review the AI systems in use in your workplace;
2) determine which risk categories and requirements apply to them;
3) ensure that your staff has a sufficient level of AI literacy; and
4) inform workers and their representatives where high-risk AI systems are used.
Each of these steps is discussed in more detail below.
As the AI Act begins to apply, it is important to review the AI systems in use in the workplace to assess whether their deployment imposes obligations on the employer. Examples include AI-powered writing assistants, document management systems, HR platforms, recruitment tools, or workforce scheduling software.
The AI Act categorises AI systems into four risk levels, each with corresponding requirements:
Unacceptable Risk: AI practices that are prohibited outright, such as social scoring and, notably for employers, emotion recognition systems in the workplace (except for medical or safety reasons). These prohibitions have applied since 2 February 2025.
High Risk: AI systems that pose serious risks to health, safety or fundamental rights in critical areas. High-risk AI systems are defined in Chapter III of, and Annex III to, the AI Act. In the employment context, they include AI systems that are
1) intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates; and
2) intended to be used to make decisions affecting terms of work-related relationships, such as promotions, terminations, task allocation based on behaviour or traits, or performance monitoring.
The rules on high-risk AI systems will apply from 2 August 2026. The AI Act sets obligations for deployers of high-risk AI systems, which are discussed in more detail below.
Limited Risk: AI systems that pose limited risks are subject to transparency obligations. These include, for example, chatbots and AI-generated content, where it is important that users are aware they are interacting with AI. The transparency obligations outlined in Article 50 of the AI Act will apply from 2 August 2026 and are shared between providers and deployers.
Minimal Risk: AI systems with minimal or no risk, such as AI-enabled video games or spam filters. The AI Act does not impose rules on these systems.
After reviewing the AI systems in use in the workplace, determine which requirements apply to you based on this assessment. If any AI system is deployed, regardless of its risk level, you must ensure that your staff has a sufficient level of AI literacy; this obligation has applied since 2 February 2025.
High-risk AI systems create additional obligations, as the responsibilities are shared between providers, deployers and other parties. The deployer obligations for high-risk AI systems are set out in Article 26 of the AI Act and include:
1) taking appropriate technical and organisational measures to ensure the system is used in accordance with its instructions for use;
2) assigning human oversight to natural persons who have the necessary competence, training and authority;
3) ensuring that input data under the deployer’s control is relevant and sufficiently representative in view of the system’s intended purpose;
4) monitoring the operation of the system and keeping the logs it automatically generates, to the extent they are under the deployer’s control; and
5) informing workers and their representatives before putting a high-risk AI system into use at the workplace.
Employers are obliged to take measures to ensure that staff and others operating AI systems on their behalf have a sufficient level of AI literacy. When defining these measures, it is important to consider the staff’s technical knowledge, experience, education and training, as well as the context in which the AI systems are used and the persons affected by their use.
This creates a new obligation for internal training, which could be delivered in much the same way as existing data protection, code of conduct or anti-bribery training. Training should be tailored to staff roles and the risk levels of the AI systems in use – for example, high-risk HR systems create different needs than AI writing assistants.
The obligation to ensure AI literacy is also connected to the deployer’s obligation to take appropriate technical and organisational measures to ensure that high-risk AI systems are used in accordance with their instructions for use.
Employers must inform workers and their representatives before high-risk AI systems are put into use. This obligation arises from several provisions of the AI Act and is also linked to the general transparency requirements in Article 50. Informing workers may require specific procedures as defined under national employment law.
The AI Act imposes strict penalties for non-compliance, and enforcement is overseen by both EU-level bodies and the authorities of each Member State. Non-compliance with the Act’s provisions exposes companies to significant financial risks, including:
1) fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices;
2) fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher, for non-compliance with most other obligations, including those of deployers of high-risk AI systems; and
3) fines of up to EUR 7.5 million or 1% of total worldwide annual turnover, whichever is higher, for supplying incorrect, incomplete or misleading information to authorities.
For example, for a company with a total worldwide annual turnover of EUR 1 billion, the maximum fine for a prohibited AI practice would be EUR 70 million, as 7% of turnover exceeds the EUR 35 million threshold.
In addition to the AI Act, employers must consider other legal frameworks governing AI use in the workplace. One of the most relevant is the General Data Protection Regulation (GDPR), particularly Article 22, which restricts decisions based solely on automated processing – including profiling – that produce legal effects concerning individuals or similarly significantly affect them.
In the context of employment, decisions made solely by automated means – e.g. hiring, promotion or termination decisions taken without meaningful human involvement – almost always fall under this restriction. There are only three exceptions to this rule:
1) the decision is necessary for entering into, or performing, a contract between the data subject and the controller;
2) the decision is authorised by Union or Member State law, with appropriate safeguards; or
3) the decision is based on the data subject’s explicit consent.
Even when exceptions apply, employers must implement appropriate safeguards, including the right to obtain human intervention, to express a point of view, and to contest the decision.
Moreover, many AI services used in the workplace are provided by vendors located outside the European Economic Area (EEA). In such cases, Chapter V of the GDPR applies, governing international transfers of personal data. Employers must ensure that appropriate safeguards are in place, such as standard contractual clauses or adequacy decisions.
Finally, national laws may impose additional requirements on the processing of employee data. If such provisions have been adopted under Article 88 of the GDPR, breaches may result in administrative fines of up to EUR 20 million or 4% of total worldwide annual turnover, whichever is higher.
The use of AI in the workplace can boost efficiency and productivity by streamlining processes, automating routine tasks, and improving decision-making. As organisations embrace AI technologies, it is crucial to comply with the requirements set by the AI Act. Bird & Bird offers tailored guidance to help you navigate the new obligations and make effective use of AI in the workplace. You can find more guidance on the EU AI Act in Bird & Bird’s AI Guide, available here.