Guarding against AI bias

Legal concerns about potential bias in artificial intelligence systems are growing in significance as AI becomes more deeply integrated into business operations and our daily lives.

Bias is a tendency that can lead to unfair or skewed outcomes, favouring some groups while disadvantaging others. When it comes to artificial intelligence, these biases—be they subtle or stark—have serious implications in the legal landscape. As businesses increasingly adopt AI, they must understand and address bias risks to ensure regulatory compliance and avoid costly litigation.

Types of bias in AI: More than just data

Bias in AI can arise from a variety of sources, affecting every stage of a system’s lifecycle, from data selection and development through to deployment. A few examples businesses must be alert to include:

  1. Input Data Bias: As the adage goes, “garbage in, garbage out.” When biased historical data is used to train an AI system, the system can replicate and reinforce those biases, creating a cycle of discrimination. If left unchecked, input data bias can significantly disadvantage protected groups.
  2. Amplification Bias: AI systems can amplify existing biases, making them more pronounced over time. For example, if a recruitment system observes that employers hire a particular demographic at a higher rate from its recommendations, it may favour that demographic even more heavily when recommending future candidates, entrenching discriminatory hiring practices. A simple simulation of this feedback loop appears after this list.
  3. Proxy Bias: When AI models use indirect variables that correlate with sensitive attributes, the risk of discrimination looms large. For example, postcodes might inadvertently serve as a proxy for race or socioeconomic status in a lending model, producing discriminatory outcomes even where race is never directly included as a variable when assessing applicants for credit. The second sketch below demonstrates this effect.
  4. Counterfactual Bias: This arises where a model’s output would change if a sensitive attribute alone were altered, even though that attribute is irrelevant to the decision at hand. It can originate within an AI algorithm itself or be introduced through a user’s biased instructions or interactions with the model. The third sketch below shows a simple test for it.
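
To see how an amplification feedback loop compounds, here is a minimal, hypothetical simulation in Python. The starting share and update rule are invented purely for illustration: the point is that a system retraining on outcomes it has itself influenced turns a small initial skew into a large one.

```python
# A minimal, hypothetical simulation of a recommendation feedback loop.
# All figures are invented: the point is the dynamic, not the numbers.
share_a = 0.55  # the system starts by slightly favouring group A

for round_no in range(1, 6):
    # Employers hire mostly from whoever they are shown, so the "hiring
    # data" the system later observes mirrors its own recommendation mix...
    observed_hires_a = share_a
    # ...and naive retraining nudges future recommendations further toward
    # the group that appears, from that data, to be hired more often.
    share_a += 0.5 * (observed_hires_a - 0.5)
    print(f"Round {round_no}: {share_a:.0%} of recommendations favour group A")
```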
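
Proxy bias is just as easy to demonstrate. The sketch below uses invented, synthetic data in which postcode area “B” correlates with membership of a protected group, so a rule keyed on postcode alone splits the groups unevenly even though the protected attribute is never supplied to the model.

```python
# Synthetic data only: areas, rates and group sizes are all hypothetical.
import random

random.seed(42)

applicants = []
for _ in range(1_000):
    protected = random.random() < 0.5
    # Membership of the protected group makes postcode area "B" far likelier.
    postcode = "B" if random.random() < (0.8 if protected else 0.2) else "A"
    applicants.append((postcode, protected))

# A lending rule keyed on postcode alone would treat these groups differently.
for area in ("A", "B"):
    group = [a for a in applicants if a[0] == area]
    share = sum(1 for _, p in group if p) / len(group)
    print(f"Postcode area {area}: {share:.0%} of applicants are in the protected group")
```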
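
Finally, a simple counterfactual test flips a single sensitive attribute while holding everything else equal; if the output moves, the model depends on that attribute. The scoring model below is a deliberately flawed, hypothetical stand-in used only to show how the test works.

```python
# Hypothetical model with a deliberate, unlawful dependence on "group".
def hypothetical_model(applicant):
    score = applicant["income"] / 10_000
    if applicant["group"] == "B":  # the flaw the test should expose
        score -= 1.0
    return score

applicant = {"income": 45_000, "group": "A"}
counterfactual = {**applicant, "group": "B"}  # flip only the sensitive attribute

a = hypothetical_model(applicant)
b = hypothetical_model(counterfactual)
print(f"Score as group A: {a:.1f}; score as group B: {b:.1f}")
if a != b:
    print("Only the sensitive attribute changed, yet the output moved: biased.")
```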

…and that’s just the start!

Legal risks

Both UK and EU law provide frameworks for addressing bias in AI systems.

In the UK, the Equality Act 2010, for example, protects against discrimination based on protected characteristics such as race, sex, age and disability. If an AI system generates biased outputs that result in unlawful discriminatory treatment, the business deploying the system may face claims under the Equality Act.

The General Data Protection Regulation (GDPR) also plays a significant role, particularly through Article 22, which grants individuals the right not to be subject to decisions based solely on automated processing where those decisions produce legal or similarly significant effects. The GDPR’s fairness principle adds another layer, requiring that data processing, including by AI, be conducted in a way that respects fairness and avoids prejudice.

For providers of AI systems or services into or within the European Union, the EU AI Act further heightens scrutiny, imposing strict compliance requirements on certain AI systems, particularly in matters of fairness and transparency. When fully in force, it will oblige providers of certain systems to take steps to prevent biased outcomes, and the penalties for non-compliance are severe.

Strategies to manage AI bias

To manage the risks of bias in AI systems, businesses need to consider several key practices, including:

  1. Contractual Clauses for Procurement: Warranties in procurement contracts can help protect businesses where, for example, the data sets used to develop a system lack the appropriate statistical properties.
  2. Staff Training: Comprehensive training for staff in the responsible use of AI systems is essential to minimise the introduction and perpetuation of bias.
  3. Diverse Training Data: Diversity in training data can significantly reduce the risk of bias, providing a fairer foundation for AI models. Careful selection of data can prevent historical biases from creeping into AI outputs.
  4. Bias Audits and Fairness Assessments: Regular bias audits help companies detect bias early, ensuring that AI outputs align with anti-discrimination laws and internal policies. A simple audit check is sketched after this list.
  5. Human Oversight and Intervention: Human intervention remains a critical safeguard in AI. Businesses should ensure that human reviews are integrated into AI workflows to catch potential biases before they reach end-users.
  6. Explainability Tools: These tools allow companies to see inside the “black box” of AI decisions. By understanding how models weigh various factors, businesses can detect and control for proxy bias and other unintended discriminatory effects. A permutation-importance sketch follows the audit example below.
  7. Documentation for Compliance: Businesses should document their efforts to mitigate bias, including records of fairness tests, data quality checks and conformity assessments. Such documentation can be invaluable if a company’s practices come under legal scrutiny.
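
To give a flavour of what a bias audit can involve, the sketch below computes group selection rates and their ratio from a hypothetical decision log. The 0.8 threshold is the US “four-fifths” rule of thumb, used here only as an illustrative benchmark rather than a UK or EU legal standard, and the group labels are assumed to have been collected lawfully.

```python
# A minimal disparate-impact check over a hypothetical decision log.
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    selected, totals = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit log of an AI screening tool's outcomes.
log = ([("group_x", True)] * 40 + [("group_x", False)] * 60
       + [("group_y", True)] * 25 + [("group_y", False)] * 75)

ratio, rates = disparate_impact_ratio(log)
print(f"Selection rates: {rates}")
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f} is below 0.8: investigate.")
```

Run regularly, a check like this turns bias detection from a one-off project into routine monitoring, which is also far easier to evidence if a company’s practices are later scrutinised.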
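
And as one example of an explainability technique, the sketch below implements permutation importance: shuffle one feature at a time and measure how far the model’s accuracy falls. A feature whose shuffling hurts accuracy carries real weight in the decision, which is exactly where proxy effects hide. The scoring model and data are hypothetical stand-ins for the opaque system under audit.

```python
# Permutation importance over a hypothetical, opaque scoring model.
import random

random.seed(0)

def model(features):
    # Stand-in for an opaque, third-party scoring model.
    return features["income"] > 30_000 and features["postcode_risk"] < 0.5

# Hypothetical records, labelled by the model itself so that baseline
# accuracy is 1.0 and any drop is attributable purely to the shuffle.
records = []
for _ in range(500):
    feats = {"income": random.uniform(10_000, 60_000),
             "postcode_risk": random.random(),
             "shoe_size": random.uniform(4, 12)}  # deliberately irrelevant
    records.append((feats, model(feats)))

def accuracy(recs):
    return sum(model(f) == label for f, label in recs) / len(recs)

for feature in ("income", "postcode_risk", "shoe_size"):
    shuffled = [f[feature] for f, _ in records]
    random.shuffle(shuffled)
    permuted = [({**f, feature: value}, label)
                for (f, label), value in zip(records, shuffled)]
    drop = accuracy(records) - accuracy(permuted)
    print(f"{feature}: accuracy drop when shuffled = {drop:.2f}")
```

A large drop for postcode_risk next to a negligible drop for an irrelevant feature shows the auditor where the model’s weight really sits, without needing access to its internals.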

While not exhaustive, these steps can help businesses create a robust framework for identifying and mitigating the risks of AI bias.

Conclusion

Bias in AI systems is a serious legal and operational risk. To safeguard against the risks of discrimination claims and regulatory penalties, businesses must be proactive, transparent and rigorous in their approach to bias detection and mitigation.

By adopting the right tools, auditing practices and compliance frameworks, companies can not only limit their exposure to liability but also foster trust in their AI systems.


In the third video in my 6-part series, “Artificial Intelligence: Navigating the Legal Frontier”, I look at how bias can arise in AI systems and discuss what businesses need to think about to avoid the pitfalls. Join me as we dive into these questions and more.

Paul Schwartfeger on 6 November 2024