In May 2022, most of us were unaware of the potential of generative Artificial Intelligence (AI) tools. We’d seen AI-generated content in presentations, but the presentations always seemed to recycle the same examples. The overall impression was that AI content generation was hard and expensive.
Much has changed in the short time since then. In June 2022, GitHub launched Copilot, allowing software developers to incorporate AI-generated code into their projects. Image creation was next, with Midjourney’s open beta in July, Stable Diffusion in August and DALL-E 2’s public release in September. Language generation using large language models (LLMs) wasn’t far behind: ChatGPT launched in November 2022 based on GPT-3.5, and GPT-4 was released in March 2023.
Unsurprisingly, organisations are moving quickly to harness this potential, whether as users or developers of such tools. Many are trying to understand what this technology really is, what it can and can’t do, and how it might be useful to them.
If life is moving fast for generative AI technology, the legal landscape is moving just as fast. November 2022 saw a US class action against GitHub Copilot claiming that its training process had breached open source licence terms. January 2023 then saw a US class action against three AI image generators alleging copyright violations. This was followed shortly by proceedings brought by Getty Images in the UK and US against the creators of Stable Diffusion. Between them, these lawsuits raise questions about the use of copyright-protected training data to train AI systems and the relationship, in copyright terms, between the training data and the outputs of generative AI systems.
European data protection authorities have also recently started to look at some generative AI providers. In March 2023, the Italian data protection authority (Garante) blocked ChatGPT’s processing of personal data (effectively blocking the service in Italy) until the provider complied with certain remediation measures required by the authority. In April 2023, the Spanish data protection authority (AEPD) initiated its own investigation, and other data protection authorities are likely to follow: the European Data Protection Board (EDPB) has since launched a task force on ChatGPT. These authorities are concerned with the use of personal data in AI systems, including to train them, and in particular with questions around lawful processing, transparency, data subject rights and data minimisation.
Aside from copyright infringement and data protection, many organisations are asking questions about the retention and reuse of inputs and outputs by generative AI tool providers, the accuracy and ownership of outputs, the scope of open source licence terms, the protection of confidential information, terms of use, and their general liability exposure both as users and as developers of generative AI technology. And that’s all before we get to emerging regulatory frameworks for AI such as the EU’s draft AI Act and sector-specific regulations and codes of conduct.
How do you balance these risks with the massive opportunity presented by generative AI? The starting point must be understanding the potential risks, balancing them against the opportunities and developing appropriate policies and guidelines.
With the publicity surrounding generative AI and free, easy access to a number of high-profile tools, some employees will inevitably start to experiment, often those in creative roles, developing software or devising marketing campaigns. Without a generative AI policy in place, the business is exposed to potentially unquantified and unmanaged risks. It also leaves the opportunity to harness the potential of generative AI to chance.
The simplest policy would be a total ban on the use of generative AI in the organisation, combined with blocking access to generative AI tool providers. For some companies operating in highly regulated sectors, the potential risks of using generative AI may simply be too high. For others, generative AI may pose a threat to their industry, prompting a public anti-generative AI stance. Still others may want to buy some breathing space to better assess and understand the risks and formulate better-informed policy or guidelines.
But what if an organisation wants a policy or guidelines that allow the business to start using generative AI in a controlled way? The key lesson we have taken from working with clients on generative AI policies is that there is no one-size-fits-all approach. The nature and extent of the risks from generative AI tools vary depending on the context.
Relevant factors include, for example, the sector in which the organisation operates, whether it is a user or a developer of generative AI tools, the intended use cases, and whether inputs or outputs involve confidential information or personal data.
Developing and refining a generative AI policy or guidelines is an iterative process requiring input from a range of stakeholders. It needs to (i) align with the organisation’s overall strategy for generative AI and (ii) be kept under review as the legal landscape for generative AI tools develops.
Generative AI has burst onto the scene, and there is a lot to think about. Engage with it, and the opportunities are huge.
And speak to Bird & Bird to help you understand the risks and develop and refine your policy and guidelines.