The creative industry is facing a radical new challenge: the rise of machine-created artwork. All of a sudden, technology has advanced to the point where bespoke "art" of genuine aesthetic interest (or at least commercial utility) can be conjured up by a computer – on demand, instantly and for free. The commercial impact on content creators and traditional artists is obvious. But what does the law have to say? Who is the AI artist? And are these machines borrowing too much from existing copyright works?
"Robot painting a picture of a courtroom" generated using DALL-E 2
Artificial Intelligence text-to-image generators create digital images based on natural language text inputs. DALL-E 2 (a leading image generation engine) suggests asking it to paint "an astronaut lounging in a tropical resort in space in a photorealistic style" or "a bowl of soup that looks like a monster spray-painted on a wall". We asked it to paint the legal portrait above. Other text-to-image AI platforms include Midjourney, Google's Imagen and Parti, and Microsoft's NUWA-Infinity.
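To illustrate how simple these tools are to drive, the snippet below sketches what a text-to-image request can look like in code. It is a minimal, illustrative example only: it assumes the OpenAI Python SDK is installed and an API key is available in the environment, and it reuses the caption of the image above as the prompt; other platforms expose broadly similar interfaces.

```python
# Minimal, illustrative text-to-image request.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.images.generate(
    prompt="Robot painting a picture of a courtroom",  # the natural language text input
    n=1,                                               # number of images to return
    size="1024x1024",
)

print(response.data[0].url)  # URL of the generated image
```

As the example shows, the user's entire input is a single line of text; everything else is left to the model.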
These products rose to prominence in 2022, largely due to the influence of Stable Diffusion, created by the company Stability AI. Images generated by text-to-image AIs have already started to be used commercially, including on the covers of The Economist and Cosmopolitan magazines.
However, AI-generated images present novel legal challenges and uncertainties. The UK government's 2020 to 2022 consultation on Artificial Intelligence and Intellectual Property "agree[d] that the current approach to computer-generated works is unclear"1, noting concerns about whether these images would meet the threshold of originality required to benefit from copyright protection and how authorship (and therefore ownership) would be attributed. There are also concerns about whether using copyright images to "train" AI models amounts to copyright infringement. Differing approaches internationally and a lack of reciprocity between countries over granting protection to AI-generated images create further uncertainties.
Copyright subsists in "original…artistic works."2 Successive cases have defined originality with reference to the expression of human creativity in the work (not in the idea itself), being the author's "own intellectual creation"3 which "reflects the personality of its author, as an expression of his free and creative choices."4
It is uncertain whether AI images have the necessary quality of "originality" to benefit from copyright protection. A person entering a text prompt is not making the sorts of choices that would usually express human creativity in creating an image (composition, colour, shape and so on); those elements are determined by the AI. At the same time, current AI models do not appear to express creativity in the way a human does, instead generating images through methods that are more random and mechanical.
Whether a work will qualify for copyright protection further depends on the status of the author: for an individual, their nationality, citizenship or residence; for a company, its country of incorporation.5
Only natural or legal persons can be authors of copyright works. This was demonstrated in the intriguing US case of Naruto v. Slater (aka the "monkey selfie copyright dispute") over the copyright status of "selfies" taken by macaques using equipment set up by photographer David Slater. PETA, acting on behalf of one of the macaques, sued Slater for using the images, arguing that copyright belonged to the monkey that took them. Finding in Slater's favour, the Court of Appeals affirmed that animals do not have legal standing to own copyright or sue for infringement under US law. An AI "author" may face similar issues of legal standing in relation to copyright works.
For artistic works, an author must have played a substantial role in fixing the work in some material form. A person who merely conceives an idea and directs someone else to create the artwork will not be an author of the work.6 AI text-to-image generation seems to have moved beyond cases where a computer is being used as the mere tool of a human author.7 The input of a text prompt looks more like the insubstantial role of conceiving an idea and directing another (the AI) to create it. Yet an AI is not within the class of persons who can own copyright. Does this mean the AI-generated image has no legal author and therefore does not attain copyright protection at all?
Seemingly anticipating this, the UK created a special category of computer-generated works in section 9(3) of the Copyright, Designs and Patents Act 1988 (CDPA), being "those works generated by a computer in circumstances such that there is no human author". The author of such works is taken to be the person "by whom the arrangements necessary for the creation of the work are undertaken".
However, there have been very few cases exploring authorship under section 9(3). One case8 concerning the authorship of composite frames in a video game found that the creators of the game (those who devised its appearance and wrote the program) were the authors of the successive frames of the game, not the player. The player's input was not artistic in nature, nor did it amount to the arrangements necessary for the creation of the frame images.
This suggests the author and first owner of copyright would be the human creators of the AI rather than the user inputting the text descriptions, though an argument could also be made for the person setting the parameters for image generation. UK courts may also be willing to look past the originality question when considering authorship under section 9(3).
To generate a "new" image, an AI must first be trained to understand text inputs and know what an image matching that text looks like. In basic terms, AI models are "trained" to do this by processing existing images to identify patterns which set how much weight is given to certain parameters in the model.
Huge amounts of high-quality data are required. Many images (even those generally available to view online) will be protected by copyright. Owners of copyright images often require licences to be purchased to use their images for commercial purposes. Whether the use of these images in AI training without the consent of the copyright owner amounts to infringement is uncertain, and several legal complaints have already been raised.
On 17 January 2023, Getty Images (an image licensing company) announced it was commencing UK legal action for copyright infringement against Stability AI, alleging stock images owned or represented by Getty have been scraped from its website and processed by Stability AI without consent (see https://newsroom.gettyimages.com/en/getty-images/getty-images-statement). In the US, several artists have launched a similar claim against Stability AI and Midjourney, alleging the companies have used their copyright artwork to train AI models without permission.
Time will tell whether the US doctrine of "fair use" or similar exceptions available under UK or EU law (such as "fair dealing") may apply. Should the above claims (or any similar claims in future) make it to court, they will be important test cases for defining the limits of using copyright images in AI training data without permission from (and payment to) the copyright owner.
Perhaps anticipating this issue, the UK government has recently announced plans to permit parties to mine datasets and use them to train AI models – without this being considered an infringement of copyright. However, the law is not yet in place and would not prevent copyright owners from charging for access to their images if they choose. See Dentons' earlier reporting on this.
A related concern is that existing artworks have been used to "train" AI models in such a way that the software can produce works that mimic the style and output of particular artists. This is naturally a concern for artists, since an artist's "style" is unlikely to be protectable by copyright in most jurisdictions. In the UK, depending on the context in which works are sold, there may be scope for arguing that distribution of such works amounts to passing off.
The UK approach under section 9(3) CDPA is also not matched, or necessarily recognised, by other jurisdictions (notably the EU and US), since current international copyright treaties do not require countries to recognise, or grant reciprocal protection to, works with this type of "authorship".
See, for example, section 313.2 of the Compendium of [US] Copyright Office Practices:
"…the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author. The crucial question is whether the 'work' is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine."
Equally, an AI image generated in a territory that does not recognise copyright in such images may not be granted protection in the UK. International copyright treaties rely on reciprocal protection being granted between countries, which the UK may be unable to provide where the image is not protected in its country of origin.
Differing international approaches to whether using images for AI training without permission amounts to copyright infringement may also lead to a fractured environment where certain AI training practices are permitted in some jurisdictions but not others. This would be a difficult situation to navigate when vast amounts of data may be scraped from many different sources online.
This leaves users of AI text-to-image generators with questions about how to ensure ownership of copyright, tackle infringement and use the images internationally. These questions should ideally be considered before the AI images are put into use.
It is important to ensure that the licence permitting use of the AI text-to-image tool addresses copyright appropriately, since this may govern first ownership of copyright in generated images and the rights afforded to users.
Where AI images are being used as part of projects involving or commissioned by third parties, contracts with those third parties should address the legal status of AI images and the issues outlined above.
Where AI images are to be used internationally, it would be wise to consider how local laws treat AI images as part of a wider strategy to tackle infringement.
The ongoing legal uncertainty around infringement via AI training practices may also create concerns for users of AI text-to-image generators, given the growing backlash from artists and media organisations. If the data used to train certain AI models is found to have been used unlawfully, this may cast further doubt on the legal status of the generated images.
There are many other steps to consider depending on how a particular project will use AI-generated images.