This article discusses legislative and regulatory developments relating to the procurement of artificial intelligence (“AI”) solutions in the UK public sector, recent commercial developments, risks posed by AI technologies, and practical steps that contracting authorities can take to mitigate these risks.
Publication of the WEF guidance and the UK Guidelines
In June 2020, the World Economic Forum Centre for the Fourth Industrial Revolution Global Network published guidance and a corresponding toolkit, “AI Procurement in a Box”, for the procurement of AI solutions in the public sector. The UK Government, which had co-created this guidance, simultaneously published UK-specific guidelines (the “Guidelines”), including a section with AI-specific considerations for contracting authorities.
The UK Guidelines and AI-specific considerations
At the time, the Guidelines were published as part of the UK Government’s strategy to become a global leader in AI and data, and were intended to support contracting authorities to “engage effectively with innovative suppliers or to develop the right AI-specific criteria and terms and conditions that allow effective and ethical deployment of AI technologies”. The Guidelines largely mirrored the principles and considerations outlined in the World Economic Forum’s version of the guidance. However, they were a useful supplement, as the UK’s current public procurement legislation is silent on the use of AI technologies.
The Guidelines were split into two sections: the top 10 considerations for the procurement process, and AI-specific considerations at each stage of that process. The top 10 considerations were:
“1. Include your procurement within a strategy for AI adoption.
2. Make decisions in a diverse multidisciplinary team.
3. Conduct a data assessment before starting your procurement process.
4. Assess the benefits and risks of AI deployment.
5. Engage effectively with the market from the outset.
6. Establish the right route to market and focus on the challenge rather than a specific solution.
7. Develop a plan for governance and information assurance.
8. Avoid Black Box algorithms and vendor lock-in.
9. Focus on the need to address technical and ethical limitations of AI deployment during your evaluation.
10. Consider the lifecycle management of the AI system.”
The Guidelines then set out AI-specific considerations at each stage of the procurement process.
The Guidelines noted that they “mostly refer(red) to the use of machine learning. Machine learning is a subset of AI, and refers to the development of digital systems that improve their performance on a given task over time through experience. Machine learning is the most widely-used form of AI, and has contributed to innovations like self-driving cars, speech recognition and machine translation.” Perhaps unsurprisingly, the Guidelines did not anticipate, or account for, the rapid, large-scale developments and innovations in the field shortly thereafter, namely the launch of generative AI solutions such as ChatGPT in November 2022.
Generative AI, according to the “Generative AI Framework for HMG” guidance published in January 2024 (discussed further below), is “a form of Artificial Intelligence (AI) - a broad field which aims to use computers to emulate the products of human intelligence - or to build capabilities which go beyond human intelligence.
Unlike previous forms of AI, generative AI produces new content, such as images, text or music. It is this capability, particularly the ability to generate language, which has captured the public imagination, and creates potential applications within government.”
This subsequently led to multiple publications and UK Government initiatives to understand, regulate and ensure the safe use of AI more generally, including, but not limited to:
The UK’s new Procurement Act 2023 (the “Act”) received Royal Assent on 26 October 2023 and will come into force in October 2024. The key objectives of the Act are to localise, increase flexibility and consolidate the public procurement regime in the UK (other than in Scotland, which has decided not to be subject to the new Act). While it does not largely deviate from the current EU-based public procurement regime, it remains silent on the procurement of AI technologies; in the case of AI procurement, the Act will therefore need to be read alongside current and future guidance. For further details on the new Procurement Act, see our recently published article here.
The EU AI Act is also in the process of implementation: it obtained political agreement on 9 December 2023 and, following further procedures in the coming months, is anticipated to come into force in mid-2026. The EU AI Act (which will not come into force in the UK, following the UK’s departure from the EU) introduces into EU member states a tiered, risk-based approach to the classification of different types of AI, ranging from those used in facial recognition systems, to generative AI, to “simpler” systems which do not pose a risk of harm or discrimination. These categories (ranging from “prohibited” practices to minimal risk) carry corresponding transparency, reporting and management obligations to ensure the AI system can be deployed safely and securely. The EU AI Act is nonetheless worth consideration for UK-based contracting authorities, as suppliers are likely to adapt their business practices to accommodate their EU customers, their supply chain and other entities within their organisation.
While the use of artificial intelligence in the public sector may carry specific risks, implementing the right safeguards does not necessarily call for novel strategies. Indeed, guidance published in 2019 by the Alan Turing Institute, together with the Office for Artificial Intelligence, remains relevant. This guidance, “Managing your artificial intelligence project”, sets out the main risks posed by AI and relevant mitigations to address them:
“Managing risk in your AI project”

| Risk | How to mitigate |
| --- | --- |
| Project shows signs of bias or discrimination | Make sure your model is fair, explainable, and you have a process for monitoring unexpected or biased outputs |
| Data use is not compliant with legislation, guidance or the government organisation’s public narrative | Consult guidance on preparing your data for AI (i.e. preparing data for AI to ensure it is secure and unbiased) |
| Security protocols are not in place to make sure you maintain confidentiality and uphold data integrity | Build a data catalogue to define the security protocols required |
| You cannot access data or it is of poor quality | Map the datasets you will use at an early stage, both within and outside your government organisation. It is then useful to assess the data against criteria for a combination of accuracy, completeness, uniqueness, relevancy, sufficiency, timeliness, representativeness, validity or consistency |
| You cannot integrate the model | Include engineers early in the building of the AI model to make sure any code developed is production-ready |
| There is no accountability framework for the model | Establish a clear responsibility record to define who has accountability for the different areas of the AI model |
The Guidelines mentioned above are also transferable to newer types of AI, such as generative AI. One of the most recent sets of guidance, the “Generative AI Framework for HMG”, published in January 2024, provides a similar narrative. In a similar fashion to the Guidelines, it defines 10 principles to “guide the safe, responsible and effective use of generative AI in government organisations”:
“Principle 1: You know what generative AI is and what its limitations are.
Principle 2: You use generative AI lawfully, ethically and responsibly.
Principle 3: You know how to keep generative AI tools secure.
Principle 4: You have meaningful human control at the right stage.
Principle 5: You understand how to manage the full generative AI lifecycle.
Principle 6: You use the right tool for the job.
Principle 7: You are open and collaborative.
Principle 8: You work with commercial colleagues from the start.
Principle 9: You have the skills and expertise needed to build and use generative AI.
Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place.”
The Framework also discusses “Buying generative AI”, where the Guidelines are cited as providing “a summary of best practice when buying AI technologies in government”. This section largely reiterates principles and considerations from the Guidelines, but nonetheless notes the following practical recommendations:
In summary, contracting authorities should keep the UK Guidelines outlined at the beginning of this article in active contemplation, alongside other risks that are not specific to AI-related procurement.
While there are a growing number of resources to help keep up with developments in AI, alongside the imminent implementation of new legislation across the UK (and internationally), most of these resources emphasise that contracting authorities should “treat AI technology like you would any other technological solution, and use it in appropriate situations. Every service is different, and many of the decisions you make about technology will be unique to your service.” In other words, it is crucial for contracting authorities to adopt a principled, risk-based approach to remain compliant with current and upcoming legislation, while ensuring that such approaches are sufficiently flexible for policies and processes to be updated as AI projects evolve over the course of the contract.