In December of last year, the Chairs of the Working Groups released the First Draft of the Code of Practice on Transparency of AI-Generated Content. While this document serves as a preliminary draft for consultation, it provides the first tangible roadmap for how the transparency obligations under Article 50 of the AI Act will be operationalised.
In the broader context of the Artificial Intelligence Act (Regulation (EU) 2024/1689, “AI Act”), Article 50 may act as the compliance baseline for the AI economy. While it does not impose the heavy conformity assessments of “High-Risk” AI or the stringent prohibitions of Article 5, its scope is significantly broader. In essence, Article 50 establishes transparency hygiene rules for four specific scenarios:
AI systems interacting with natural persons (e.g., chatbots);
the technical marking of synthetic content;
emotion recognition and biometric categorisation systems; and
the visible labelling of "deep fakes" and public interest text.
However, in our advisory practice, we sometimes observe that Article 50 can be simultaneously overestimated and underestimated, depending on where a company sits in the value chain. The publication of the Code of Practice on Transparency of AI-Generated Content ("Draft Code") is the perfect moment to correct these misconceptions. It forces us to look beyond the legal text and understand the distinct compliance burdens for three key actors: the Model Providers, the System Providers, and the Deployers.
Before turning to the specific details of the Draft Code, it is worth clarifying the significance of Article 50 for the respective actors along the AI value chain.
For the developers of foundation models (like LLMs or image diffusion models), the regulatory spotlight is naturally fixed on Article 53. This article governs General-Purpose AI (GPAI) models and imposes copyright-related and documentation duties. Some Model Providers might assume that if they master the Article 53 hurdles, they are in the clear. That can easily lead to overlooking the immense technical implications of the labelling obligation in Article 50(2). This provision mandates that outputs be marked as artificially generated. The Draft Code clarifies that Model Providers must implement these marking techniques (like watermarking) at the model level before placing it on the market.
The second group, the System Providers, creates the “blindest spot” in the market. These are companies that neither train their own base models nor modify third-party models but instead integrate existing models (e.g., via API) into user-facing systems. This group will likely not qualify as “Model Providers”, meaning the heavy GPAI rules of Article 53 will not apply to them. Consequently, they often believe they have no significant obligations under the AI Act. They forget that they are still Providers of a GPAI system. As such, they are fully liable under Article 50(2) to ensure that the content generated by their system is marked.
The third category, the Deployers, encompasses most businesses – from marketing agencies to law firms – that use AI tools to generate content. Deployers are largely spared the heavy technical engineering of watermarking. Instead, they face the omnipresent “visible” obligations. For example, under Article 50(4), they must label deep fakes and public interest text. While these duties are technically less invasive than watermarking, they carry high reputational risk.
Against this backdrop of varying risks, the Draft Code may serve as a preliminary unifying technical manual – subject to the final version of the Code of Practice. It is not a top-down decree from the European Commission; rather, it is the result of a massive multi-stakeholder exercise involving hundreds of participants. From a legal perspective, the final Code of Practice will be technically voluntary. Signatories commit to the measures to demonstrate compliance, but they are not legally forced to sign. However, for companies, the final Code of Practice will offer a significant legal benefit: the presumption of conformity. Companies that sign the final Code of Practice and implement its measures will be presumed to be compliant with the obligations under Article 50.
In practice, this creates a strong gravitational pull. While the final Code of Practice will acknowledge that providers can demonstrate compliance through “alternative means”, choosing a path that deviates from it introduces significant regulatory uncertainty. In an enforcement scenario, market surveillance authorities will inevitably use the final Code of Practice as the benchmark for what constitutes “state of the art”. If a company chooses an alternative technical solution, the burden of proof rests entirely on that company to show that its solution is at least as effective as the measures of the Code of Practice. For most legal departments, sticking to the Code of Practice will be the only commercially viable risk strategy.
Crucially, whether a company signs the Code of Practice or not, it will still need an internal compliance regime – either to (i) implement the Code of Practice internally, or (ii) demonstrate that it follows equivalent measures that achieve the same outcomes. In other words, “not signing” does not avoid the work – it typically shifts the burden from “follow the blueprint” to “prove your alternative blueprint is equally good”.
For the Model and System Providers identified above, the Draft Code may deliver a crucial reality check: the search for a single technical “silver bullet” to fulfil Article 50(2) may be over. The Draft Code explicitly acknowledges that “no single active marking technique suffices” to meet the legal requirements of robustness and reliability. Consequently, it mandates a “multi-layered approach.”
According to the Draft Code, providers will very likely not be able to rely on a single method to be compliant. Instead, its wording suggests implementing a combination of techniques that function as a “defence-in-depth” mechanism (a minimal sketch follows the list below):
Metadata Embedding (The Standard): The Draft Code stipulates embedding provenance information (e.g., using C2PA standards) into the file. While this is the industry standard, it is fragile; metadata is easily stripped when a user takes a screenshot or uploads the file to certain social platforms.
Interwoven Watermarking (The Hardening): To counter metadata loss, the Draft Code also envisages implementing imperceptible watermarking that is “interwoven” with the content. This mark must be robust enough to withstand common transformations such as compression or cropping.
Fingerprinting (The Fallback): Where active marking fails or is insufficient, the Draft Code suggests “fingerprinting” or logging as a safety net to identify content ex-post.
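To make the layering concrete, the following Python sketch shows how a provider-side pipeline could combine a provenance record (layer 1) with a fingerprint log (layer 3), using only the standard library. The function names, record fields, and model identifier are illustrative assumptions, not terminology from the Draft Code or the C2PA specification; a real implementation would embed an actual C2PA manifest in the asset and use modality-specific watermarking and perceptual hashing.

```python
import hashlib
import json
from datetime import datetime, timezone


def provenance_record(content: bytes, model_id: str) -> dict:
    """Layer 1: a provenance record carried alongside the asset.

    Real deployments would embed a C2PA manifest in the file itself;
    this stand-alone dict merely illustrates the kind of information
    the Draft Code expects to travel with the content."""
    return {
        "generator": model_id,
        "ai_generated": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }


def fingerprint(content: bytes) -> str:
    """Layer 3 fallback: a hash logged at generation time so content can be
    re-identified ex post even if metadata and watermarks are stripped.
    (Production systems would typically use a perceptual hash that survives
    re-encoding, not a raw SHA-256.)"""
    return hashlib.sha256(content).hexdigest()


# Layer 2 (imperceptible, "interwoven" watermarking) is model- and
# modality-specific and cannot be illustrated with the standard library alone.

if __name__ == "__main__":
    asset = b"...synthetic image bytes..."
    record = provenance_record(asset, model_id="example-diffusion-v1")
    registry = {fingerprint(asset): record}  # provider-side generation log
    print(json.dumps(record, indent=2))
```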
For companies developing LLMs, Measure 1.2.1 is particularly relevant: the Draft Code recognises that watermarking text is notoriously difficult without degrading the quality or utility of the output (a “robotic”-sounding text). Therefore, for text, the Draft Code permits a pragmatic alternative: “Provenance Certificates”. Instead of embedding a watermark into the words themselves, Providers can issue a digitally signed manifest that formally guarantees the origin of the content.
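As a rough illustration of the idea behind such certificates, the sketch below signs a provenance manifest with an Ed25519 key using the Python cryptography package (pip install cryptography). The manifest fields and the verification flow are assumptions made for illustration; the Draft Code does not prescribe a specific format or signature scheme.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative manifest: field names are assumptions, not Draft Code terminology.
manifest = json.dumps(
    {
        "content_id": "press-release-2025-11-18",
        "generator": "example-llm",
        "ai_generated": True,
    },
    sort_keys=True,
).encode()

provider_key = Ed25519PrivateKey.generate()  # held by the model provider
certificate = provider_key.sign(manifest)    # the "provenance certificate"

# Anyone holding the provider's public key can verify the origin of the text
# without any watermark being embedded in the words themselves.
public_key = provider_key.public_key()
try:
    public_key.verify(certificate, manifest)
    print("manifest verified: content origin confirmed")
except InvalidSignature:
    print("verification failed: manifest or signature was altered")
```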
However, the multi-layered approach is significantly more complex in multimodal and composite workflows – where text, image, audio, and video outputs (often from different models) are merged into a single asset.
That is why the Draft Code’s level of detail matters: regulators may not accept “we use watermarks” as a blanket statement. They may expect evidence of where and how content is marked, that the marking survives common transformations, and how this is tested, monitored, and documented in practice.
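A simple way to generate such evidence is a recurring test harness that applies typical transformations to marked content and logs whether the mark is still detected. The sketch below is a bare-bones assumption of what such a harness could look like; the dummy detector and transformations stand in for the provider's actual detection tooling and for real operations such as re-encoding, cropping, or screenshotting.

```python
import json
from datetime import datetime, timezone


def robustness_log(mark_detector, content: bytes, transformations: dict) -> list:
    """Apply each transformation and record whether the marking is still
    detectable, producing one audit-ready log entry per test case."""
    results = []
    for name, transform in transformations.items():
        results.append({
            "transformation": name,
            "mark_detected": bool(mark_detector(transform(content))),
            "tested_at": datetime.now(timezone.utc).isoformat(),
        })
    return results


# Dummy stand-ins: a real suite would re-encode, crop, or screenshot the asset
# and call the provider's actual detection tool instead of a byte search.
MARK = b"<ai-mark>"
detector = lambda data: MARK in data
transforms = {
    "identity": lambda d: d,
    "truncation": lambda d: d[: len(d) // 2],     # simulates lossy editing
    "mark-stripping": lambda d: d.replace(MARK, b""),  # simulates removal
}

log = robustness_log(detector, MARK + b"synthetic asset bytes", transforms)
print(json.dumps(log, indent=2))
```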
To give an overview, the Draft Code further proposes the following additional measures:
To enable downstream providers of AI systems to meet their own obligations, upstream model providers are expected to ensure that their models include content marking ‘by design’ before the models are placed on the market—making this a key dependency for downstream system providers (Measure 1.4);
It goes beyond simply ‘adding a mark’ and emphasises the need to preserve markings and prevent their removal, including safeguards against tampering and contractual restrictions in the terms of use (Measure 1.5);
It reframes deployer disclosure as a built‑in product feature by requiring default, interface‑level options that allow perceptible labels for deepfakes to be embedded directly at the point of generation (Measure 1.7);
It anticipates external verification by requiring free detection tools and confidence scores for users and third parties (Measure 2.1); and
It grounds all of this in an audit‑ready compliance framework that may be shared with market surveillance authorities (Measure 4.1).
While Section 1 deals with the invisible technical layer for providers, Section 2 of the Draft Code addresses the deployers – the companies and individuals actually creating and using the content.
As a result, the practical scope can extend well beyond “classic AI companies”. For instance, any organisation communicating externally may be addressed – such as media and entertainment businesses, advertisers, agencies, brands with always-on social channels, and corporate comms teams. The Draft Code may therefore be best read as a communications-governance framework, not merely a narrow “labelling task”.
This section operationalises Article 50(4), which mandates visible labelling for deep fakes and text published to inform the public on matters of public interest. To avoid a fragmented landscape of different warning labels, the Draft Code proposes a “Common Icon”. Until an EU-wide interactive symbol is finalised, the Draft Code suggests an interim solution: a visual label containing the acronym “AI” (or the local language equivalent, e.g., “KI” in Germany or “IA” in France).
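For static assets, implementing the interim label can be as simple as stamping the acronym into the image at the point of generation. The following Python sketch uses Pillow (pip install Pillow); the corner placement, size, and colours are assumptions for illustration, since the Draft Code only requires the label to be perceptible and consistently placed.

```python
from PIL import Image, ImageDraw, ImageFont  # pip install Pillow


def add_interim_ai_label(image: Image.Image, text: str = "AI") -> Image.Image:
    """Stamp a permanently visible label into a corner of a static image.

    Size, colour, and bottom-right placement are illustrative assumptions;
    local-language equivalents (e.g. "KI", "IA") can be passed via `text`."""
    draw = ImageDraw.Draw(image)
    width, height = image.size
    box = (width - 70, height - 34, width - 10, height - 10)  # bottom-right
    draw.rectangle(box, fill=(0, 0, 0))
    draw.text((box[0] + 8, box[1] + 4), text,
              fill=(255, 255, 255), font=ImageFont.load_default())
    return image


if __name__ == "__main__":
    img = Image.new("RGB", (640, 360), (120, 120, 120))  # placeholder asset
    add_interim_ai_label(img).save("labelled_output.png")
```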
The Draft Code introduces a two-tier taxonomy between “Fully AI-generated” content and “AI-assisted” content:
Fully AI-generated: Content created autonomously and fully by the system without human-authored authentic content (e.g., images).
AI-assisted: Content with mixed human and AI involvement, where AI-assisted content generation or modification affects meaning, factual accuracy, emotional tone, or other elements that may falsely appear authentic or truthful (such as object removal or face/voice replacement).
The Draft Code envisages a common taxonomy that provides a harmonised terminology to consistently identify content falling under Article 50(4) and to classify the degree of artificial manipulation—for example whether content is fully AI‑generated or AI‑assisted—for the purpose of determining disclosure obligations.
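In a content workflow, this taxonomy can be operationalised as a classification step that feeds the labelling decision. The sketch below is a deliberately simplified, assumed mapping from the taxonomy to the Article 50(4) disclosure duty; the class names and the decision logic are illustrative only and no substitute for a case-by-case legal assessment.

```python
from enum import Enum


class ContentClass(Enum):
    FULLY_AI_GENERATED = "fully AI-generated"
    AI_ASSISTED = "AI-assisted"
    AUTHENTIC = "authentic"  # outside the two-tier taxonomy


def requires_article_50_4_label(classification: ContentClass,
                                is_deep_fake: bool,
                                is_public_interest_text: bool) -> bool:
    """Simplified decision helper: the visible disclosure duty attaches to
    deep fakes and public-interest text, whether the content is fully
    AI-generated or AI-assisted in a meaning-altering way."""
    if classification is ContentClass.AUTHENTIC:
        return False
    return is_deep_fake or is_public_interest_text


# Example: an AI-assisted face replacement in a video is a deep fake.
print(requires_article_50_4_label(ContentClass.AI_ASSISTED,
                                  is_deep_fake=True,
                                  is_public_interest_text=False))  # True
```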
The label may also be read as a commercial signal in other legal areas. If a company labels a work as “fully AI-generated”, third parties could interpret this as indicating limited human creative contribution, which – under European copyright concepts – may affect whether copyright protection exists at all. If protection (and third-party rights) is absent, the practical consequence is that the content may be treated as (effectively) free to reuse by others, including competitors, platforms, and downstream publishers – making the classification commercially sensitive. Conversely, “AI-assisted” leaves more room for human authorship and editorial control arguments.
For deep fakes (content creating a resemblance to real persons/events, etc.), the Draft Code mandates clear disclosure. It differentiates by modality:
Real-Time Video (e.g., Livestreams): Requires a continuous, non-intrusive icon plus a disclaimer at the start of the exposure.
Static Content: Requires a permanently visible icon placed “consistently” (e.g., in the corner).
The Draft Code provides a range of additional measures for deepfake labelling:
It sets clear expectations for the design and placement of the interim icon—requiring visibility at first exposure, context‑appropriate positioning, and alignment with the taxonomy (Measure 4.2);
It introduces a light reporting mechanism that enables users and authorities to report, assess, and correct missing or incorrect labels in cooperation with relevant third parties (Measure 2.3);
It requires non‑intrusive disclosures for artistic and fictional works while still safeguarding third‑party rights, including personality and privacy rights (Measure 4.3);
And it narrows the exception for ‘human review/editorial responsibility’ for public‑interest text by effectively requiring a documented editorial workflow with identified responsible persons, rather than a mere assertion that a human review occurred (Measure 5).
A core message of the Draft Code is that both providers and deployers may need to put internal guidelines in place.
Firstly, to meet the marking/labelling duties and to ensure these duties are met consistently across modalities, channels, and user journeys (e.g., UI design, content workflows, and distribution patterns).
Secondly, Article 50’s marking and disclosure obligations sit at the intersection of multiple legal issues which also need to be considered:
Disclosures should prevent misleading impressions (e.g., “authentic”, implied endorsements, “human-made” claims);
Notices shall be clear and understandable, which may require meeting the standards applicable to vulnerable users (e.g., minors, persons with disabilities);
Transparency should inform context without unduly constraining artistic choices (freedom of expression);
Implementation must align with personality rights, intellectual property, and other EU rules such as the Digital Services Act and the General Data Protection Regulation (where provenance signals and logs may constitute personal data).
Article 50 compliance may therefore be a cross-functional programme – legal, product, UX, engineering, and comms – not a simple “add a label” exercise.
Digesting this Draft Code, it is crucial to remember the lesson of the recent GPAI Guidelines: In that process, the final version introduced entirely new structural concepts – such as the “AI Lifecycle” – that were virtually absent from earlier drafts, forcing companies to pivot their compliance strategies at the last minute. Stakeholders should therefore treat this document as a directional signal rather than a destination. The chairs have explicitly stated that this draft is “high-level” and that “insufficient time” prevented detailed proposals on all issues. Significant changes are not just possible; they are expected.
The consultation window is extremely tight: written feedback is due by January. Current planning suggests a second draft around March, with the final version expected in June 2026. This condensed timeline suggests that the Commission is under immense pressure to finalise the Code of Practice before the statutory deadline. Regardless of when the Code of Practice is finalised, the statutory clock is ticking. The transparency obligations under Article 50 will become fully applicable 24 months after the AI Act’s entry into force – i.e., in August 2026. While recent discussions suggest a limited six-month grace period for systems already on the market by that date, this buffer will likely not apply to new systems released after the deadline.
A key dynamic to watch will be the list of signatories. Just as with the GPAI Code of Practice, we expect many of the major global AI providers to be the first to sign, particularly regarding the provider obligations under Article 50(2). Their participation will effectively set the technical standard. Once the “Big Tech” providers agree on a specific watermarking or metadata standard (e.g., C2PA), it will likely become the de facto market requirement, making it difficult for smaller players to deviate.