AI Act in progress: interview with Kai Zenner

Written By

Feyo Sickinghe

Of Counsel
Netherlands

I am a Principal Regulatory Counsel in our Regulatory & Public Affairs practice in the Netherlands and Brussels. I focus on tech, comms and digital markets regulation, drawing on in-depth business knowledge and extensive experience in TMT and public administration.

This article contains a debrief of an interview on the state of play of the AI Act with Kai Zenner, assistant to Axel Voss, Member of the European Parliament for the Group of the European People's Party (Christian Democrats).

The interview was conducted by Feyo Sickinghe during the American Conference Institute’s AI Law, Ethics, Safety and Compliance conference in Washington D.C.

What do you see as the main accomplishments of the AI Act?

The AI Act can be seen as the second generation of digital laws in Europe; the first generation began in 2010 with the Digital Agenda for Europe. For the AI Act, European policy makers opted for a principle-based and transparent approach involving stakeholders from industry, society and academia, close to the policy-making style of the UK and the US. The EU has initiated public-private partnerships, such as the regulatory sandboxes, the scientific panel, the advisory forum and a cooperation mechanism for stakeholders with the AI Office, to create a future-oriented and flexible regulatory framework. The AI Act took inspiration from the work carried out by the OECD and focuses on the entire value chain.

In terms of deepfake transparency, there are concerns about whether the AI Act is already outdated. OpenAI has created a text watermarking tool that could help ChatGPT comply with the AI Act, which mandates that AI-generated content be marked as such by August 2026. Despite having had the tool ready for a year, OpenAI has not yet released it, fearing the loss of users. Do we need further regulation of deepfakes?

I was involved in regulating deepfake applications in the context of the drafting of the GDPR, but the result was already outdated when the GDPR came into force in 2018. During the AI Act negotiations in 2023, we could not come to a common approach with regard to labelling, watermarking and other possible techniques. That is why Article 50 of the AI Act contains a principle-based approach to the detection and labelling of artificially generated or manipulated content, to be complemented with a code of practice. The Commission will set ground rules; the industry will be invited to share best practices. I expect more specific rules for deepfakes complementing the AI Act will be introduced. Article 50 will become applicable in August 2026. In the meantime, companies like OpenAI may still decide not to introduce a text watermarking tool.

The AI Act contains provisions for the protection of copyrighted materials. Is the framework sufficient to deal with the current market challenges?

On a political level, lengthy discussions were held on whether the Copyright Directive needed to be changed or whether a new regulation needed to be introduced. The co-legislators opted for inserting provisions on foundation models into the AI Act, with reference to the text and data mining exemption in the Copyright Directive, creating obligations to provide detailed summaries of the input data used. As with deepfakes, no in-depth discussions between the co-legislators took place due to lack of time. The Commission will present a template in the autumn of this year specifying what a detailed summary should look like. This is a good example of the AI Act being complemented with secondary legislation, codes of practice and conduct, guidelines and templates. As a result, companies will only have an overarching view of their obligations once these are in place in the course of 2027. The Act itself can be changed through implementing and delegated acts. There will never be a final text; it will be constantly evolving.

In your view, what are the challenges in terms of the implementation of the AI Act?

First, the AI Act creates an overly complex governance structure between the relevant actors. Unfortunately, we could not agree with the co-legislators on a one-stop-shop mechanism. Coordination between competent authorities will be a challenge. Second, there is a strong need for AI experts and talent in the institutions, the industry and the notified bodies. This will take a lot of time and money. The lack of human and financial resources could even result in the AI Act not being successful in the end.

Third, the AI Act aims to be an exclusive regulation. However, the Commission is considering an AI regulation for the financial and employment sectors, as well as changes to the Copyright Directive. This may also happen in the agricultural sector. I expect a battle of competences between the European institutions and regulators, since the AI Act took away their previous powers and responsibilities. This clearly needs to be avoided. There are already overlaps between the AI Act and the Platform-to-Business Regulation, the Digital Services Act and the GDPR, with unclear relationships between them. We need to keep the number of additional AI regulations to a minimum and clean up the existing overlaps. Most companies have Data Protection Officers (DPOs); they now also need an AI Officer and a Cybersecurity Officer (CISO). Over time, the cost of compliance will increase.

Where should companies begin addressing these challenges?

In the Bruegel sheet, we have identified 116 existing European digital laws. Each of these laws is of importance to the market. To start, companies are advised to use the Bruegel sheet as a reference point for a scoping assessment and to have the outcome validated or co-drafted with an external legal advisor. Most companies will also fall within the scope of European cyber regulations such as the NIS2 Directive and the forthcoming Cyber Resilience Act. I think there is a clear risk of overregulation; even the European Commission seems not to have a complete overview. This leads to legal uncertainty for companies and does not help Europe in terms of digital competitiveness.

For more information, please contact Feyo Sickinghe 

EU AI Act Guide – now ready to download! 

To guide you through the EU AI Act, our multi-disciplinary global team of AI experts has launched our EU AI Act Guide, which summarises key aspects of the new regulation and highlights the most important actions organisations should take in seeking to comply with it. Serving a similar purpose to our GDPR Guide, our EU AI Act Guide is divided into thematic sections, with a speed-read summary and a list of suggested priority action points.

To access the guide, click here.

This article was originally published in our monthly Connected newsletter. To sign up to receive future newsletters with the latest Regulatory & Public Affairs news and updates, see below:

TO SUBSCRIBE TO OUR CONNECTED NEWSLETTER CLICK HERE
