Misleading machines: The legal perils of AI hallucinations
As artificial intelligence becomes increasingly embedded in our daily lives, the legal concerns surrounding AI “hallucinations” are gaining significant attention.
AI hallucinations refer to situations where an AI system produces outputs that are incorrect, misleading, or entirely fabricated, while presenting them as reliable. These errors can have profound consequences when they occur in high-stakes environments such as healthcare, legal services and financial systems. Even in consumer contexts, the courts are already having to grapple with the problems they can create.
Earlier this year, in the decision of Moffatt v Air Canada, 2024 BCCRT 149, the British Columbia Civil Resolution Tribunal found Air Canada liable for its chatbot’s “misleading words”, after the chatbot misrepresented to a customer how he could obtain the airline’s bereavement fares. Having followed the chatbot’s erroneous advice, the customer lost the benefit of a CAD $650 discount to which he was entitled. The Tribunal’s award of damages for the system’s error provides early insight into how courts may hold companies accountable when their AI systems provide inaccurate information.
What causes AI hallucinations?
AI hallucinations occur due to the inherent limitations of how AI models, and in particular large language models (LLMs), generate information. These models are trained on vast amounts of data and make predictions based on patterns they have learned, without true understanding or factual verification. When faced with incomplete, ambiguous or unclear inputs, they can generate responses that look or sound coherent but are factually incorrect or entirely fabricated. The issue is particularly common in generative models, where the system’s goal is to predict the next word or item in a sequence based on learned probabilities, rather than to present a result from a verified knowledge base.
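To make this concrete, the toy sketch below (a deliberately simplified illustration, not a description of any real model; the word lists and probabilities are invented) shows next-word prediction driven purely by learned probabilities: the output can read fluently while nothing checks whether it is true.

```python
import random

# Toy illustration only: "learned" next-word probabilities for two contexts.
next_word_probs = {
    ("the", "court"): {"ruled": 0.6, "awarded": 0.3, "dismissed": 0.1},
    ("court", "ruled"): {"that": 0.8, "against": 0.2},
}

def predict_next(context):
    """Sample the next word from the learned distribution for this context."""
    candidates = next_word_probs.get(context, {"<unknown>": 1.0})
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

# The continuation is chosen purely from probabilities; nothing verifies the
# resulting sentence against a knowledge base, which is why fluent but false
# output (a "hallucination") is possible.
print(predict_next(("the", "court")))
```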
The legal risks of AI hallucinations
AI hallucinations raise important questions concerning liability, regulation and compliance, and can present significant legal risks. Where errors in AI-generated outputs cause harm or financial loss, an action in negligence may be one avenue of redress. Contractual liability could also arise and provide a party with a remedy, particularly where that party has procured an AI solution or service from a third-party vendor.
In the UK, under the Consumer Rights Act 2015, there is an implied term that services must be provided with reasonable care and skill. If an AI hallucination were to lead to a failure in the provision of services to this standard, a business could be held in breach of these implied terms.
In the EU, the EU AI Act, which aims to regulate the development, deployment and use of AI systems across the Union, plays a pivotal role in addressing the risks associated with AI hallucinations. Under the Act, AI systems deemed high-risk will be subject to strict regulatory requirements, including obligations for transparency, human oversight, and accuracy. A failure to comply with the Act’s provisions can attract substantial fines: up to €35 million or 7% of a company’s global annual turnover (whichever is higher) for the most serious infringements, and up to €15 million or 3% for breaches of the obligations that apply to high-risk AI systems.
What can be done?
Robust testing and validation are an obvious first step, but businesses may need to go further and consider whether there are topics their AI systems should avoid altogether, because the risk of hallucinations or other errors is unacceptably high. If so, content filters, ethical guardrails and other policy blocks may need to be implemented.
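As a rough illustration of such a policy block, the sketch below (the topic list, keywords and function names are all hypothetical) refuses queries on designated high-risk topics rather than risking a hallucinated answer. Real deployments typically combine trained classifiers, keyword rules and human review rather than keyword matching alone.

```python
# Minimal sketch of a topic-level guardrail (hypothetical topics and keywords).
BLOCKED_TOPICS = {
    "medical dosage": ["dose", "dosage", "mg per day"],
    "legal advice": ["is it legal", "can i sue", "am i liable"],
}

REFUSAL = ("I'm not able to help with that topic. "
           "Please consult a qualified professional.")

def apply_guardrail(user_query: str) -> str | None:
    """Return a refusal if the query touches a blocked topic, otherwise None."""
    query = user_query.lower()
    for topic, keywords in BLOCKED_TOPICS.items():
        if any(keyword in query for keyword in keywords):
            return REFUSAL
    return None  # safe to pass the query on to the model

print(apply_guardrail("What dosage of ibuprofen should I take per day?"))
```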
Businesses should also ensure transparency with users by clearly communicating the limitations of AI-generated information and encouraging them to verify critical data.
Interpretable AI tools and frameworks may also help companies and users of AI systems better understand how a system arrives at a given decision or output, making it easier for issues to be spotted (and potentially rectified).
Effective human oversight will also be critical for ensuring that certain AI systems do not undermine fundamental rights or cause harm, not least for businesses operating in the EU, where this is a requirement of the EU AI Act. In practice, this means designing AI systems so that human operators can monitor their outputs and intervene when necessary.
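One simple pattern for building in that oversight, sketched below with hypothetical names and an assumed confidence score supplied by the system, is to hold low-confidence outputs back for human review rather than releasing them automatically.

```python
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    text: str
    confidence: float  # assumed to come from the model or a separate verifier

def release_or_escalate(answer: ModelAnswer, threshold: float = 0.9) -> str:
    """Release the answer only if confidence clears the threshold; otherwise escalate."""
    if answer.confidence >= threshold:
        return answer.text
    # A human operator reviews the draft and decides whether to send,
    # correct or withhold it.
    return "Escalated: queued for human review before release."

print(release_or_escalate(ModelAnswer("(hypothetical draft answer)", confidence=0.42)))
```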
Conclusion
As AI technologies become more deeply embedded in critical sectors, the legal implications of AI hallucinations will grow. The EU, in particular, is implementing a robust legal framework to regulate AI, and businesses operating in the EU must ensure compliance with its standards. Although the UK has not committed to replicating the EU AI Act, the Act nonetheless applies to UK (and other) AI businesses that operate in the EU, much as companies outside the EU must comply with the GDPR if they process the personal data of individuals in the EU.
By implementing strong risk management strategies, including robust testing, transparency and other forms of oversight, businesses can mitigate the legal risks associated with AI hallucinations and avoid potentially costly litigation.
In the second video in my new 6-part series, “Artificial Intelligence: Navigating the Legal Frontier”, I look at how some basic AI systems work, consider some common problems with them (including hallucinations), and discuss what businesses need to think about to avoid the pitfalls. Join me as we dive into these questions and more in this video series.