The rapid advancement of generative AI means that it is high on the agenda in Australia and globally. Like any disruptive technology, generative AI has unleashed a wave of opportunities and challenges for individuals and businesses in the retail industry.
As we continue to witness the integration of AI in the retail sector, we see businesses capitalising on the use of AI including in:
But the opportunities need to be balanced against the risks, including copyright infringement issues and data protection concerns (as addressed further below).
Taking one subsector of retail as an example, the State of Fashion 2024 survey of global fashion executives published by McKinsey found that 73% of respondents said generative AI will be an important priority for their businesses in 2024. However, only 5% of those surveyed believed their businesses had the capability to fully leverage generative AI, which suggests fashion companies are not yet capturing its value in the creative process.[i]
In this article, we explore the gap between opportunities and risks posed by AI in the Australian retail sector. We highlight some of the key legal and regulatory issues applicable to AI (including generative AI), being:
While this article focuses on the use of generative AI in Australia, for a global perspective, see our firm's extensive coverage on the issue here.
AI (including generative AI) is regulated in Australia, but not with AI-specific legislation. Instead, it is governed by existing legislation (including consumer, data protection, competition and copyright law).
The Australian Government’s position is generally that it supports the safe and responsible deployment and adoption of AI across the government and private sector. However, it does not yet have an overarching national AI strategy in place.
That said, the Australian Government is taking steps which indicate that it is seriously considering how best to regulate AI in Australia, including:
There is also, of course, a keen interest in Australia in keeping track of international developments on the regulation of AI globally, including Europe’s AI Act and the Digital Services Act.
There are myriad issues involving AI and intellectual property law. Our discussion below focuses on generative AI and the IP implications of using large data sets for training and ownership of AI-generated works.
Use of training data for AI models
Content creators are becoming increasingly concerned about potential misuse of their content by generative AI tools, particularly when those tools access, copy or “scrape” online content for training purposes. Where retail businesses are deploying AI tools, there are risks around what training data has been used to train the AI model. For example, if it is data about a competitor’s product or service offering, has this data been obtained by the AI tool through legitimate means? How can a business protect itself from being on the receiving end of an infringement claim?
While there are currently no cases before Australian courts where third parties have alleged IP infringement by AI systems, the influx of court proceedings overseas, namely in the United States, United Kingdom and China (which we reported on here) provide some insight into how Australian courts may be expected to grapple with these issues in the near future.
Broadly, the cases have involved artists, companies or software developers suing AI providers for copyright infringement on the basis of:
In most of these cases, AI companies have attempted to argue that the use of copyright works to train AI falls under a “fair use” exception under copyright law. It may be some time before we see jurisprudence emerging from courts on whether this defence is tenable. However, such jurisprudence will have limited application in Australia, where there is no “fair use” defence to copyright infringement. In its place is the much narrower concept of “fair dealing”, under which a use can be exempt from infringement if certain factors are satisfied and the primary purpose of the use is research or study, criticism or review, parody or satire, reporting the news, reproduction for professional advice or judicial proceedings, or enabling a person with a disability to access material. It is difficult to envisage a situation where training AI tools with unauthorised third party IP falls into one of these exceptions.
In addition to copyright infringement, there is also the issue of potentially infringing an author’s moral rights in the unauthorised reproduction of copyright work for AI-training purposes, i.e. does the AI training data correctly attribute the author of the work, avoid false attribution and, if taking parts of a work, respect the integrity of an author’s work?
As employees increasingly have access to, and the propensity to use, AI tools in the course of their work, employers and businesses are forced to confront the issues these tools create. By way of example, a marketing team may deploy AI tools to create campaigns for a new product launch, or designers may use AI-generated images to create a base product from which they develop their designs. As a general rule in Australia, copyright works created in the course of employment are owned by the employer. The situation becomes muddy around an employer’s potential exposure to an infringement claim by a third party if the AI tool used by the employee has used that third party’s work in its training data. Employers should ensure they have sufficient controls around their employees’ use of AI tools and pay close attention to the indemnity clauses of the AI tools they have permitted employees to use, as many of these tools (particularly the free ones) disclaim responsibility for IP infringement.
Ownership of AI-generated IP
There is a spectrum of AI involvement in the creation of IP: at one end, AI involvement is minimal and humans are using AI as a tool to assist in the development of an invention or creation of a work. At the other end of the spectrum, there is minimal human involvement. Somewhere in between is AI responding to prompts generated by humans. Where the lines between AI and human-creation are blurred, there is a question about whether humans have made sufficient contributions to be considered an ‘inventor’ in the case of a patent, or an author in the case of a copyright work.
Currently in Australia, it is not possible for AI to be considered the author or owner of a copyright work. Under the Copyright Act 1968 (Cth), an “author” must be a qualified person at the time the work was made, namely an Australian citizen or resident or a body corporate incorporated under Commonwealth or state laws, and they must have exerted “independent intellectual effort” in creating the work. Copyright ownership is therefore connected with the concept of authorship, into which AI does not neatly fit.
The question of the named inventor of a patent application involving AI has been conclusively decided (at least for now). In 2021, a single judge of the Federal Court of Australia allowed an AI system, DABUS, to be named as the sole inventor on a patent application. However, on appeal to a five-judge bench of the Full Federal Court, the decision was overturned (reported here) and special leave to appeal to the High Court was refused (reported here). The position in Australia is now in line with other jurisdictions (save for South Africa, where patents are not examined before grant) where patent applications have been rejected because DABUS was named as the sole inventor: the European Patent Office, New Zealand, the United States and the United Kingdom (reported here). This is good news for businesses where AI has been used as a tool in the inventive process: provided that the invention meets other patentability requirements, patents for products or methods involving AI can be obtained if a human inventor is listed. However, this may not be as straightforward in practice, as there are usually multiple actors where AI is involved: the person who developed the training algorithm, the person who presented the prompt, the AI model developer and the owner of the data could all potentially have a claim to inventorship.
On 13 February 2024, the USPTO released draft guidance for AI-assisted inventions, setting out how it proposes to analyse inventorship issues. The guidance explains that while AI-assisted inventions are not categorically unpatentable, the inventorship analysis should focus on the natural persons who provided a “significant contribution” to the invention. IP Australia is yet to follow suit in terms of releasing any guidance material. However, it did publish an exploratory paper that set out a series of “provocations” in July 2023, highlighting the complexity of developing an AI framework for IP rights. What is clear for businesses deploying or developing AI tools, for now, is the crucial need to keep a paper trail of all human involvement and processes, in the event of an ownership or inventorship challenge.
Global regulatory environment
The swift rise of generative AI has also given rise to new challenges from a privacy perspective, with privacy regulators around the world already showing a high degree of interest in ChatGPT and its privacy implications.
In March 2023, the Italian data protection authority temporarily banned ChatGPT in Italy and opened an investigation into the privacy practices of OpenAI, citing the following reasons in a public statement:
The temporary ban was subsequently lifted, in April 2023, subject to a number of conditions imposed by the Italian data protection authority, after OpenAI expressed willingness to put in place concrete measures to protect individual privacy.
ChatGPT has since been the subject of various other investigations by privacy regulators across Europe and elsewhere and the European Data Protection Board has also established a dedicated task force on ChatGPT.
Australian compliance requirements
So what are some of the key things that you should be thinking about when assessing the use of a generative AI system through the lens of the Australian Privacy Principles (APPs)?
We would suggest turning your mind to the following five areas: transparency, collection, use and disclosure, integrity and individual rights.
From a transparency perspective, ask whether any collection, use and disclosure of personal information by the generative AI system is disclosed in your privacy policy.
From a collection perspective, ask:
In terms of use and disclosure, ask:
In terms of integrity, ask:
And, finally, from an individual rights perspective, ask whether there is a mechanism for dealing with requests by individuals for access to, or correction of, personal information held in the AI system.
Regulator activity in Australia
We are yet to see any case law in Australia, or Privacy Commissioner investigations, concerning the application of the APPs to ChatGPT or other generative AI systems.
However, in July 2023, it was announced by the Digital Platform Regulators Forum, of which the OAIC is a member, that its strategic priorities for the 2023-2024 financial year will include a new focus on understanding and assessing the benefits, risks and harms of generative AI.
While not specific to generative AI, recent decisions by the Privacy Commissioner and the Administrative Appeals Tribunal have involved consideration of privacy law issues arising in relation to AI systems generally, in the context of facial recognition tools which involve the use of machine learning algorithms.
In particular, in late 2021, the Privacy Commissioner found that 7-Eleven had breached the privacy of its customers by collecting biometric information through a facial recognition tool and that Clearview AI had breached the privacy of Australians by scraping their biometric information from the web, and disclosing it, through a similar tool.
Clearview AI sought a review of this decision in the Administrative Appeals Tribunal and the Tribunal found, in May 2023, that Clearview AI had collected sensitive information about individuals without consent and, consequently, had not taken reasonable steps to implement practices, procedures and systems to ensure compliance with the APPs.
Proposed reforms to the Privacy Act
Automated decision making is a topic that is being addressed as part of the Privacy Act Review.
In particular, the Australian Government has agreed to proposals that:
The Australian Government has also relevantly agreed ‘in-principle’ to a proposal that there be a requirement to provide individuals with information about targeting, including clear information about the use of algorithms and profiling to recommend content (Proposal 20.9).
[i] Business of Fashion and McKinsey, State of Fashion 2024 Survey, reported in The State of Fashion 2024 report | McKinsey, accessed 30.05.24.