As the EU’s landmark Artificial Intelligence Act (AI Act) moves from legislative text to implementation, the first half of 2025 has brought a series of notable developments. These include updates to the implementation timeline, delays to the Code of Practice for General-Purpose AI (GPAI), and a public consultation on high-risk AI systems. While progress is being made, the path to full enforcement remains complex, and there are increasing calls for a pause in AI Act implementation.
In force since August 2024, the AI Act is the world’s first comprehensive legal framework for AI. It categorises AI systems by risk and imposes corresponding obligations, with heightened requirements for high-risk and GPAI systems. A recent briefing by the European Parliamentary Research Service (EPRS) outlines the phased implementation:

- 2 February 2025: prohibitions on AI practices posing unacceptable risk, along with AI literacy obligations, began to apply.
- 2 August 2025: governance rules and obligations for GPAI models become applicable.
- 2 August 2026: most remaining provisions, including obligations for high-risk AI systems listed in Annex III, take effect.
- 2 August 2027: obligations apply to high-risk AI systems embedded in products already covered by EU product-safety legislation (Annex I).
Despite this structured timeline, implementation is encountering delays. The Commission has indicated that some key deliverables may be postponed, raising questions about enforcement readiness and legal clarity.
One of the more contentious issues is the delay in the Code of Practice for GPAI models. Initially expected by 2 May 2025, the Code is intended to offer voluntary but influential guidance for developers of foundation models.
Reports suggest that the delay is due to differing views among stakeholders regarding the Code’s scope and enforceability. While some advocate for a more flexible approach, others are calling for stronger safeguards, particularly in areas such as transparency, data governance and systemic risk.
Kai Zenner, Head of Office and Digital Policy Adviser to MEP Axel Voss (EPP), noted during the recent GPAI Paris Summit that the delay reflects broader tensions between fostering innovation and ensuring accountability. The Code is now tentatively expected after the summer recess.
In parallel, the Commission launched a public consultation on high-risk AI systems in May 2025. This initiative seeks feedback on how to operationalise the AI Act’s high-risk classification, particularly in sectors like healthcare, education, and law enforcement.
The consultation addresses several critical questions about how the high-risk classification should work in practice and what obligations follow from it for providers and deployers.
Stakeholders have until 18 July 2025 to respond. The consultation is seen as a crucial step in translating the Act’s broad principles into actionable guidance.
Adding to the already complex implementation landscape, recent signals from Brussels suggest that the Commission may be open to revisiting or softening certain aspects of the AI Act to support innovation and competitiveness. The shift comes amid transatlantic tensions, with the U.S. administration urging the EU to ease regulatory burdens, and Commission President Ursula von der Leyen increasingly framing AI as a tool for restoring Europe’s economic strength.
In this context, Poland, which holds the rotating Presidency of the Council of the EU until the end of June, has reportedly proposed postponing the application of rules for GPAI models if the Code of Practice (CoP) is not finalised in time. This proposal reflects growing concern among Member States about the practical feasibility of enforcing GPAI-related obligations without clear guidance and operational enforcement structures.
The second half of 2025 will be critical in determining the AI Act’s trajectory. Key developments to watch include the publication of the GPAI Code of Practice, now expected after the summer recess; the entry into application of GPAI obligations on 2 August 2025; the outcome of the high-risk consultation that closes on 18 July; and any Council or Commission moves to postpone or soften parts of the Act.
For businesses, the message remains clear: despite delays and ongoing discussions, the AI Act’s core obligations are advancing. Organisations developing or deploying AI in the EU—particularly in high-risk areas or involving GPAI—should continue to assess their compliance strategies.