Employers and artificial intelligence: How to ensure the compliant use of AI in the workplace

Written By

Riku Rauhanen

Senior Associate
Finland

I am a Senior Associate in our Commercial and Privacy & Data Protection groups in Helsinki, where I advise our local and international clients on data protection, other data regulation, and commercial contracts.

The first two sets of provisions of the EU's Artificial Intelligence Act ('AI Act') have taken effect this year: 1) the provisions on prohibited AI practices; and 2) the provisions on general-purpose AI models. Many companies may be surprised to find themselves within the scope of the Act, having initially assumed it applies only to providers and distributors of AI products. In reality, it is the role of the employer that brings most companies within the Act's remit and imposes obligations on them.

The AI Act may impose various obligations on employers, depending on the roles they assume. For most employers, this includes at least the role of a deployer (a natural or legal person putting an AI system into professional use), as more and more organisations adopt AI-powered tools and programmes for use by their staff. 

As an employer, make sure to:

Review the AI systems in use in the workplace 

As the AI Act begins to apply, it is important to review the AI systems in use in the workplace to assess whether their deployment imposes obligations on the employer. Examples include AI-powered writing assistants, document management systems, HR platforms, recruitment tools, or workforce scheduling software. 

The AI Act categorises AI systems into four risk levels, each with corresponding requirements (a simple inventory sketch follows the list):

  • Unacceptable Risk: AI systems considered a clear threat to the safety, livelihoods and rights of people are prohibited. Article 5 of the Act outlines eight specific banned practices, including emotion recognition in workplaces and educational institutions. It also prohibits biometric categorisation to deduce certain protected characteristics such as political opinions, trade union membership, religious or philosophical beliefs, race, sex life or sexual orientation. These prohibitions have applied since 2 February 2025.
  • High Risk: AI systems that pose serious risks to health, safety or fundamental rights in critical areas. AI systems classified as high-risk are defined in Chapter III and the annexes of the AI Act. In the employment context, these include AI systems that are 

    1) intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates; and 

    2) intended to be used to make decisions affecting terms of work-related relationships, such as promotions, terminations, task allocation based on behaviour or traits, or performance monitoring.

    Most rules on high-risk AI systems will apply from 2 August 2026. The AI Act sets obligations for deployers of high-risk AI systems, which are outlined in more detail below. 

  • Limited Risk: AI systems that pose limited risks are subject to transparency obligations. These include, for example, chatbots and AI systems that generate content, where it is important that users are aware they are interacting with AI or viewing AI-generated material. The transparency obligations, outlined in Article 50 of the AI Act, will apply from 2 August 2026 and are shared between providers and deployers.

  • Minimal Risk: AI systems with minimal or no risk, such as AI-enabled video games or spam filters. The AI Act does not impose rules on these systems.
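
To make the review step concrete, the sketch below shows one possible shape for an internal register of AI systems and their risk tiers. It is a minimal illustration under our own assumptions: the AISystem record, the tier names and the example entries are invented for this sketch, not terminology or tooling mandated by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, Article 5
    HIGH = "high"                  # Chapter III and Annex III systems
    LIMITED = "limited"            # transparency obligations, Article 50
    MINIMAL = "minimal"            # no AI Act obligations

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    risk_tier: RiskTier

# Hypothetical example entries; a real register would come from an internal survey.
inventory = [
    AISystem("CV screening tool", "ExampleVendor", "analyse and filter job applications", RiskTier.HIGH),
    AISystem("Office chatbot", "ExampleVendor", "answer routine staff questions", RiskTier.LIMITED),
    AISystem("Spam filter", "ExampleVendor", "filter inbound email", RiskTier.MINIMAL),
]

for system in inventory:
    if system.risk_tier is RiskTier.UNACCEPTABLE:
        print(f"{system.name}: prohibited practice - discontinue use")
    elif system.risk_tier is RiskTier.HIGH:
        print(f"{system.name}: deployer obligations under Article 26 apply")
    elif system.risk_tier is RiskTier.LIMITED:
        print(f"{system.name}: transparency obligations under Article 50 apply")
```

Such a register is only a starting point: each entry still needs to be mapped to the obligations discussed in the next section.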

Identify the requirements that apply to you

After conducting a review of the AI systems in use in the workplace, determine which requirements apply to you based on this assessment. If any AI system is deployed, you must ensure your staff has a sufficient level of AI literacy. 

High-risk AI systems create additional obligations, as the responsibilities are shared between providers, deployers and other parties. The deployer obligations for high-risk AI systems are set out in Article 26 of the AI Act and include the following (a checklist sketch follows the list):

  • Taking technical and organisational measures to ensure that AI systems are used as instructed
  • Ensuring human oversight
  • Ensuring that input data is relevant and sufficiently representative
  • Monitoring AI system operation and reporting risks to providers, distributors or the market surveillance authority
  • Retaining AI system-generated logs for at least six months
  • Notifying workers and their representatives about the use of high-risk AI systems
  • Conducting a data protection impact assessment
  • Informing affected persons about AI-assisted decision-making
  • Cooperating with authorities
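
As a rough illustration of how these items might be tracked internally, the sketch below models the Article 26 checklist as simple flags and surfaces open gaps. The field names and the 183-day approximation of the six-month log-retention floor are our own simplifications for this sketch, not an official compliance tool.

```python
from datetime import timedelta

# Hypothetical compliance record for one high-risk AI system; field names are ours.
checklist = {
    "used_according_to_instructions": True,
    "human_oversight_assigned": True,
    "input_data_reviewed": False,
    "operation_monitored_and_risks_reported": True,
    "workers_and_representatives_notified": True,
    "dpia_conducted": False,
    "affected_persons_informed": True,
}

# Logs must be retained for at least six months, approximated here as 183 days.
log_retention = timedelta(days=120)      # hypothetical current retention setting
MIN_LOG_RETENTION = timedelta(days=183)

gaps = [item for item, done in checklist.items() if not done]
if log_retention < MIN_LOG_RETENTION:
    gaps.append("log_retention_below_six_months")

print("Open compliance gaps:", ", ".join(gaps) if gaps else "none")
```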

Ensure staff AI literacy 

Employers are obliged to take measures to ensure that staff and others operating AI systems on their behalf have a sufficient level of AI literacy. When defining these measures, it is important to consider the staff's technical knowledge, experience, education and training, as well as the context in which the AI systems are used and the persons affected by their use.

This creates a new obligation for internal training, which could be delivered in a similar way to existing data protection, code of conduct or anti-bribery training. Training should be tailored to staff roles and the risk levels of the AI systems in use – for example, high-risk HR systems create different needs than AI writing assistants. 

The obligation to ensure AI literacy can also be connected to the obligation to take technical and organisational measures to ensure that high-risk AI systems are used in accordance with the instructions for use.

Inform workers and their representatives if they will be subject to the use of a high-risk AI system

Employers must inform workers and their representatives if high-risk AI systems will be used. This obligation arises from several provisions of the AI Act and is also linked to the general transparency requirements in Article 50. Informing workers may require specific procedures under national employment law.

Why AI Act compliance matters 

The AI Act imposes strict penalties for non-compliance, and enforcement is overseen by both EU-level bodies and the authorities of each Member State. Non-compliance with the Act's provisions exposes companies to significant financial risks, including the following (a worked example follows the list):

  • Up to EUR 35 million or 7% of global annual turnover, whichever is higher, for breaches involving prohibited AI practices
  • Up to EUR 15 million or 3% of global annual turnover, whichever is higher, for breaches of deployer obligations (Article 26) or transparency obligations (Article 50)
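
To show how these caps scale with company size, here is a minimal worked example: for an undertaking, the maximum fine is the higher of the fixed amount and the turnover-based percentage. The turnover figure below is invented for illustration.

```python
def max_fine(fixed_cap_eur: int, turnover_pct: float, annual_turnover_eur: int) -> float:
    """Return the higher of the fixed cap and the percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical worldwide annual turnover: EUR 2 billion

# Prohibited AI practices: up to EUR 35 million or 7% of turnover.
print(max_fine(35_000_000, 0.07, turnover))  # 140000000.0, i.e. EUR 140 million

# Breaches of deployer or transparency obligations: up to EUR 15 million or 3% of turnover.
print(max_fine(15_000_000, 0.03, turnover))  # 60000000.0, i.e. EUR 60 million
```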

Other applicable legislation on the use of AI

In addition to the AI Act, employers must consider other legal frameworks governing AI use in the workplace. One of the most relevant is the General Data Protection Regulation (GDPR), particularly Article 22, which restricts decisions based solely on automated processing – including profiling – that produce legal effects concerning individuals or similarly significantly affect them.

In the context of employment, decisions made solely by automated means – e.g. AI-driven hiring, promotion or termination decisions taken without meaningful human involvement – almost always fall under this restriction. There are only three exceptions to this rule:

  1. The decision is necessary for entering into or performing a contract between the data subject and the controller

  2. The decision is authorised by Union or Member State law, with appropriate safeguards

  3. The decision is based on the data subject’s explicit consent

Even when exceptions apply, employers must implement appropriate safeguards, including the right to obtain human intervention, to express a point of view, and to contest the decision.

Moreover, many AI services used in the workplace are provided by vendors located outside the European Economic Area (EEA). In such cases, Chapter V of the GDPR applies, governing international transfers of personal data. Employers must ensure that appropriate safeguards are in place, such as standard contractual clauses or adequacy decisions.

Finally, national laws may impose additional requirements on the processing of employee data. If such provisions have been adopted under Article 88 of the GDPR, breaches may result in administrative fines of up to EUR 20 million or 4% of global annual turnover, whichever is higher. 

The use of AI in the workplace can boost efficiency and productivity by streamlining processes, automating routine tasks, and improving decision-making. As organisations embrace AI technologies, it is crucial to comply with the requirements set by the AI Act. Bird & Bird offers tailored guidance to help you navigate the new obligations and make effective use of AI in the workplace. You can find more guidance on the EU AI Act in Bird & Bird's AI Guide, available here.
