UK signs first legally-binding international treaty governing the safe use of AI: Our analysis

Written By

Kate Deniston

Professional Support Lawyer
UK

As the knowledge lawyer in our Tech Transactions team, I play a key role in keeping both clients and colleagues at the forefront of tech-related legal and market developments.

Louise Lanzkron

Dispute Resolution Knowledge & Development Lawyer
UK

As the knowledge and development lawyer in our International Dispute Resolution team in London, I play a key role in keeping my colleagues at the forefront of legal developments, trends and case law – covering litigation and international arbitration – for the benefit of our clients.

On Thursday 5 September 2024, the UK signed a new legally-binding international treaty governing the safe use of AI. Other signatories included the US and EU.

Officially known as the “Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law” (the “AI Convention”), the AI Convention aims to “ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law” (Article 1).  

Its main focus is to protect human rights, democracy and the rule of law from the risks posed by AI by providing an international legal standard of obligations and principles to be followed by states across the world. For example, to protect democracy, Article 5 requires signatory states to adopt measures to ensure that AI systems are not used to undermine democratic institutions and processes. Article 4 requires signatory states to ensure that AI systems are used in accordance with that state’s international and domestic human rights law. Other provisions seek to ensure that the use of AI respects human dignity, equality and privacy.  

The AI Convention text was adopted by the Council of Europe on 17 May 2024, having taken two years to draft. It was written by the 46 Council of Europe member states (of which the UK is one), the EU, and 11 non-member states including Australia, Japan and the US. 

Each signatory state is expected to adopt or maintain measures to give effect to the requirements in the AI Convention. 

The treaty forms part of the new regulations, pledges and agreements being developed by governments across the world to regulate the risks arising from rapid advancements in AI. It follows in the footsteps of Biden’s Executive Order on AI (October 2023), the Bletchley Declaration (November 2023), the US and UK safety institute collaboration (April 2024) and the King’s speech announcement that the UK government plans to introduce AI legislation on the most powerful AI models (July 2024). Many of the principles in the AI Convention chime with concepts in the EU AI Act which came into force on 1 August 2024, such as transparency for AI-generated content, oversight requirements and accountability. Whilst China is not a signatory to the AI Convention, it has introduced its own AI-related measures and also signed the Bletchley Declaration. 

Our Analysis

The signing of the AI Convention has been hailed by human rights supporters and proponents of the global governance of AI as a landmark achievement.  
Whilst the adoption of the AI Convention is a welcome development, there are some issues that those in the ‘AI lifecycle’ should be aware of:

1. Scope

The principles and obligations in the AI Convention apply both to public authorities (including private actors acting on their behalf) and to private actors.

However, under Article 3 (scope), signatory states have a choice as to how the AI Convention applies to private actors. They must choose whether to apply the principles and obligations directly to the activities of private actors or to take “other appropriate measures” instead.

The provision is likely drafted this way to cater for the differences between signatories’ legal systems across the world. But it could lead to discrepancies in how the AI Convention is applied to private actors in different signatory states, which could cause confusion for private companies operating on a global scale. In addition, “public authority” is not defined, probably for similar reasons, and this could cause issues when applying the principles of the AI Convention in practice.

2. Broad principles rather than specific requirements

The AI Convention sets out broad principles rather than specific requirements. This is to allow signatory states to interpret the AI Convention in accordance with their own legal, political and social traditions (see Articles 7-13). However, it also means that the national regulation transposing the AI Convention is likely to vary widely between jurisdictions.

3. Vague compliance structure

The compliance mechanism is vague. Compliance reporting is required (Article 23), but there are no strict enforcement criteria, so the effectiveness and impact of the AI Convention could be limited.

4. Remedies

The AI Convention does require signatories to provide remedies for breaches of human rights in relation to its obligations and principles, and to ensure that a body is in place for persons to lodge complaints. However, no specific remedies (such as fines) are prescribed, and any remedies legislated for at a national level could vary widely between jurisdictions.
