Defamed by machine
Brian Hood, Mayor of Hepburn Shire Council in Australia, is said to be considering legal action against OpenAI over false information produced by its artificial intelligence chatbot, ChatGPT.
When asked about Mr Hood, ChatGPT falsely reported that he had been imprisoned for bribery while working at a subsidiary of Australia’s national bank. In fact, Mr Hood was the whistleblower who alerted officials and journalists to the bribery, and he was not among those subsequently arrested or charged.
Mr Hood’s experience and potential lawsuit raise interesting questions of liability when it comes to generative AI and defamatory content composed by a machine.
Defamation and libel laws vary around the world, and the jurisdiction where action is pursued needs careful consideration. In the pre-ChatGPT world of 2012, Google was sued for defamation in Germany by Bettina Wulff, wife of Germany’s former president, for displaying disparaging autocomplete suggestions when users input her name.
Google’s autocomplete is not nearly as sophisticated as ChatGPT (not least because its function is rather different); however, it is a form of AI that uses a combination of natural language processing and predictive algorithms to generate suggestions for search queries based on a user’s input.
The outcome of the action brought by Bettina Wulff is unclear, but the facts of the case illustrate why the choice of cause of action and of jurisdiction for such claims requires careful thought.
For example, in the US, a defamation claim like Bettina Wulff’s would likely struggle because Google’s autocomplete function bases its output on content provided by other users. Under section 230 of the US Communications Decency Act 1996, providers and users of an “interactive computer service” are shielded from liability for third-party content that they publish, making actions of this nature difficult to pursue.
Elsewhere in Europe though, a case brought on similar grounds to Bettina Wulff’s succeeded, with the Court of Milan ruling that Google was responsible for the defamatory meaning of words that appeared in its autocomplete function when the name of the Italian claimant, Carlo Piana, was searched for.
It is less clear that the same conclusion would be reached for the likes of Carlo Piana if his matter were pursued in the UK. In Metropolitan International Schools Ltd (t/a SkillsTrain) v Designtechnica Corp (t/a Digital Trends) [2009] EWHC 1765, Mr Justice Eady stated that, when a user carries out a search via Google, there is no human input from Google and, absent such input, Google could not be characterised as a publisher at common law.
Mr Justice Eady held that:
[Google] has not authorised or caused the snippet to appear on the user’s screen in any meaningful sense. It has merely, by the provision of its search service, played the role of a facilitator.
Defamation Act 2013
Autocomplete, which Mr Piana took issue with, is a somewhat different creature from Google’s standard search results function and web page, which were at issue in Metropolitan International Schools. Nonetheless, a claim such as Mr Piana’s (or Mr Hood’s) could still run into difficulties in the UK given Mr Justice Eady’s decision, and considering also the definitions given in section 1 of the UK Defamation Act 1996 (which the Defamation Act 2013 adopts).
- An author may be liable for a defamatory act, where “author” means the originator of a statement. However, the definition of author does not include a person who did not intend their statement to be published at all. Those who searched for Bettina Wulff or Carlo Piana using Google are unlikely to have intended for their search terms to be republished.
- An editor may be liable, where “editor” means a person having editorial or equivalent responsibility for the content of the statement or the decision to publish it. However, when it comes to Google’s autocomplete (as well as ChatGPT’s responses), strictly speaking no editorial decision is made by any person to publish the content concerned. Rather, the “decision”, if it can be called that, is made by machine.
- A publisher may be liable, where “publisher” means a commercial publisher whose business is issuing material to the public, or a section of the public, where they issue material containing a defamatory statement in the course of that business. However, following Bunt v Tilley and others [2006] EWHC 407, intermediaries who merely facilitate the publication of matter created by others are not publishers for the purposes of common law.
The Defamation Act 2013 has more closely aligned liability with editorial control, and it is on this issue that UK proceedings concerning defamatory statements in computer-generated output would need to focus in order to succeed.
Reflecting on the technologies referred to above, however, there are stark differences between Google’s autocomplete and search functions and ChatGPT’s Generative Pre-trained Transformer architecture. These could strengthen Mr Hood’s hand if his claim could be pursued in the courts of England and Wales. (See “Defamation in the cyber age. Where should your digital reputation be tried?” for information on jurisdictional issues that a claimant may face when alleging defamation over a statement published in digital form.)
In simple terms, autocomplete essentially gathers up the search terms and phrases previously entered by Google users. The most frequently searched terms are presented to a user when their input is a near-match for those previously searched for. ChatGPT, however, is a language generation model based on deep learning techniques.
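To make that lookup-style behaviour concrete, the toy sketch below (illustrative only; the query log and ranking are invented and bear no relation to Google’s actual implementation) shows suggestions being assembled purely from what other users have already typed:

```python
from collections import Counter

# Hypothetical log of past search queries -- invented for illustration only.
PAST_QUERIES = [
    "bettina wulff biography",
    "bettina wulff book",
    "bettina wulff biography",
    "carlo piana lawyer",
]

def autocomplete(prefix, query_log, limit=3):
    """Suggest the most frequent past queries starting with the prefix.

    Nothing new is composed: the suggestions simply replay what other
    users have previously entered.
    """
    counts = Counter(q for q in query_log if q.startswith(prefix.lower()))
    return [query for query, _ in counts.most_common(limit)]

print(autocomplete("bettina", PAST_QUERIES))
# ['bettina wulff biography', 'bettina wulff book']
```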
ChatGPT’s language model has been trained on a large body of data, such as books, articles and web pages, just as Google’s autocomplete might be said to be trained on the past search queries of its users. Unlike autocomplete, however, ChatGPT uses these input data to generate the human-like text it gives in response to a particular prompt or conversation, rather than simply repeating back something previously said by others.
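By way of contrast with the lookup sketch above, the following toy example mimics how a generative model composes a sentence word by word from learned probabilities. The probability table is invented and stands in for the deep neural network a real system would use; the point is that the output is assembled afresh rather than looked up:

```python
# Invented next-word probabilities standing in for a trained model's output.
# A real system would compute these with a deep neural network over a huge vocabulary.
NEXT_WORD = {
    "mr":       {"hood": 0.9, "smith": 0.1},
    "hood":     {"reported": 0.6, "was": 0.4},
    "reported": {"the": 0.8, "a": 0.2},
    "the":      {"bribery": 0.7, "bank": 0.3},
}

def generate(start, max_words=5):
    """Pick the most probable next word at each step.

    The sentence is composed one word at a time and need not appear
    verbatim anywhere in the training data.
    """
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORD:
        candidates = NEXT_WORD[words[-1]]
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("mr"))  # "mr hood reported the bribery" -- newly composed, not retrieved
```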
ChatGPT is thereby closer to an author than a mere publisher, although exceptions to this premise can readily be found. For example, ask the chatbot “What is the first paragraph of Macbeth?” and ChatGPT assumes the role of (mere) publisher, facilitating the publication of matter created by William Shakespeare. However, ask ChatGPT “When should we meet again and will it be in thunder, lightning or in rain?” and its role appears to shift. ChatGPT then formulates a unique response to the question, based on its understanding and computation of the relevant variables (such as the weather), potentially making it the author of the output concerned.
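For readers who wish to see that shift in role for themselves, a minimal sketch using the openai Python client (version 1 or later) is set out below. The model name is an assumption chosen for illustration, and the responses returned will of course vary from run to run:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

PROMPTS = [
    # Largely reproduces existing text: closer to the role of a (mere) publisher.
    "What is the first paragraph of Macbeth?",
    # Calls for a newly composed answer: closer to the role of an author.
    "When should we meet again and will it be in thunder, lightning or in rain?",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 40)
```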
Through these two simple examples the changing nature of ChatGPT’s role is fairly easy to spot. Yet in situations such as Mr Hood’s, the black-box nature of deep learning may make it much more difficult to determine whether the system is simply republishing material created by others or penning its own answers to questions about his role in the bribery scandal. This is in part why the expertise of lawyers well-versed in technology is needed when bringing or defending such claims.
Control is key
While defamatory statements made through computer systems are subject to the same legal standards and consequences as those made through traditional means of communication, the difficulty of establishing authorship of those statements can make liability harder to pin down. When it comes to generative AI, therefore, arguments of control are likely to be crucial. This is apparent from the comments of Lord Justice Richards in Tamiz v Google Inc [2013] EWCA Civ 68, which concerned a third-party blog hosted by Google.
In Tamiz, Lord Justice Richards approved the lower court’s view that, by “effective control” in section 1(3)(e) of the Defamation Act 1996, the draftsman likely had in mind effective day-to-day control rather than the possibility of intervention in reliance on a contractual term about the permitted content of a web page. (In Tamiz, the Court of Appeal ruled that a website operator may be regarded as a “publisher” after being put on notice of a complaint.)
Where the operators of a deep-learning generative AI system have day-to-day control over development or the inputs used to train the system, they may arguably be said to have effective control over the system’s outputs and therefore liability for any resultant defamatory statements made. However, the actual degree of control they have over the responses given is likely to be hotly debated as matters such as Mr Hood’s are litigated. Following Tamiz, the speed of their response to any problems raised will also be a factor relevant to their success or failure.
Conclusion
Lawyers acting for parties concerned by online or other defamatory statements made by computer must interrogate the technology and consider the evidence in the round.
As a part of this process, they should also question whether defamation or libel is the best cause of action for a claimant to pursue and what preliminary steps ought to be taken in any proceedings (such as information orders), especially if authorship is unclear or publication is not widespread. The latter issue can arise because a generative AI may give different responses when asked the same question. (This variability is part of what makes generative AI systems appear human-like.) Negligence, harassment, misrepresentation, discrimination and breach of privacy could all provide fertile ground for a claim, depending on the facts.
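That variability stems from the way generative models sample their output: at a non-zero “temperature”, the same prompt can yield differently worded answers on each occasion. The toy sketch below (with invented word scores standing in for a real model’s output to one fixed question) illustrates the mechanism:

```python
import math
import random

# Invented scores for candidate words, standing in for a real model's output
# to a single fixed question -- illustrative only.
LOGITS = {"thunder": 2.0, "lightning": 1.5, "rain": 1.0}

def sample_word(logits, temperature=1.0):
    """Softmax sampling: a higher temperature spreads probability more evenly,
    so repeated runs of the identical prompt give more varied answers."""
    scaled = {word: math.exp(score / temperature) for word, score in logits.items()}
    total = sum(scaled.values())
    words, weights = zip(*((w, v / total) for w, v in scaled.items()))
    return random.choices(words, weights=weights)[0]

# The identical "question" asked five times may be answered differently each time.
print([sample_word(LOGITS, temperature=1.0) for _ in range(5)])
```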
What is increasingly becoming clear, however, is that artificial intelligence is challenging conventional legal tests and principles. Regulators therefore have an important role to play in resolving the issues that are emerging, and in shaping arguments as to where liability for communications generated by AI should lie.