Definitions matter
Definitions provide the foundation for how we regulate, enable, enforce against, understand and use things throughout our daily lives. But how do you define something as abstract as AI?
Those familiar with the UK’s Dangerous Dogs Act 1991 may know it has faced criticism over the years owing to its breed-specific focus. Challenges have arisen when trying to classify canines under it, as initial efforts to ban XL Bully dogs proved, and the approach taken by the Act is also said to have inadvertently contributed to a misconception that only certain breeds pose risks. Regardless of your views, the Act exemplifies why definitions are about more than just splitting hairs.
Just as the Dangerous Dogs Act shows how definitions shape outcomes, attempts to regulate artificial intelligence are beginning to reveal how definitions could impact innovation and accountability. Around the world, countries are grappling with the very real questions of if and how to define and regulate AI—questions that seemingly grow more urgent as AI technologies advance.
Arguably leading the pack, the EU’s Artificial Intelligence Act (Regulation 2024/1689) came into force on 1 August 2024. It aims to ensure the safe and ethical development and deployment of artificial intelligence across the EU and uses a risk-based approach to classify AI systems to protect fundamental rights and public safety. Under Article 3 of the EU Regulation, an AI system is defined as:
a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The Act’s definition thereby captures a broad range of AI technologies, from simple automated systems to more advanced adaptive AI systems that continuously learn from their inputs.
In contrast, the United States lacks a unified AI regulatory framework like the EU’s. The limited descriptions of artificial intelligence that appear in its few AI policies and directives tend to centre on specific AI applications rather than attempting an overarching umbrella definition.
The US National AI Initiative Act of 2020, for example, which supports AI research and development across government agencies but does not impose specific regulatory requirements on AI applications, defines AI as:
A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to (i) Perceive real and virtual environments; (ii) Abstract such perceptions into models through analysis in an automated manner; and (iii) Use model inference to formulate options for information or action.
Looking closer to home, the United Kingdom is taking a sector-specific, principles-based approach to AI, as the previous government outlined in its 2023 White Paper, ‘A pro-innovation approach to AI regulation.’ As in the US, there is no unified regulatory framework for AI in the UK at present, nor is there one binding definition of artificial intelligence. Rather, UK regulators are adopting tailored definitions suited to their particular sectors. However, this has led to considerable variability. The National Security and Investment Act 2021, for example, defines artificial intelligence in relatively granular terms as:
technology enabling the programming or training of a device or software to—(i) perceive environments through the use of data; (ii) interpret data using automated processing designed to approximate cognitive abilities; and (iii) make recommendations, predictions or decisions; with a view to achieving a specific objective.
By comparison, 2023 guidance released by the UK Information Commissioner’s Office (ICO), which aims to give practical advice to businesses on their use of AI, describes artificial intelligence somewhat more informally as:
an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking.
The variability and frequent lack of specificity apparent among many of the definitions emerging around the world raise significant questions. Looking first at the ICO’s approach, what if an algorithm-based technology solved simple tasks rather than complex ones, yet nonetheless had significant detrimental effects on individuals of the sort the ICO ought to regulate? For that matter, what if the technology carried out a function that did not previously require human thinking, perhaps because the technology and its application are entirely novel?
Rewinding slightly to the EU AI Act, its more formal definition is also not without problems. For instance, the Act states that AI systems “may” exhibit adaptiveness and “can” influence environments, leaving plenty of room for interpretation.
One of the few firm requirements of the EU’s definition is that an AI system be designed to operate with “varying levels of autonomy”, but the phrase “varying levels” leaves the door wide open to argument. Does it amount to “autonomy” if an algorithm-heavy system only partially assists in a process, or if the same machine-based system can operate only on the instruction of a human operator? Will it still fall to be regulated under the EU AI Act (and if not, why not, if it presents the same harm)?
Clear definitions set the boundaries for what’s allowed, who is accountable, and how we navigate risks in both traditional and emerging fields. As AI technologies develop at an unprecedented pace, refining these definitions becomes more critical—and more challenging.
If anything at all is clear from the various definitions of artificial intelligence already in use around the globe, it is that the courts will have their work cut out clarifying whichever definition applies—and they will need the assistance of lawyers who can effectively explain how a given system works, whether its operations amount to artificial intelligence, and why that matters.
In this first video of my new six-part series, “Artificial Intelligence: Navigating the Legal Frontier”, I discuss how AI is defined in a legal context and how we might distinguish it from other systems. Join me as we dive into these questions and more in the episodes to come.