Automating bias?

In R (Bridges) v The Chief Constable of South Wales Police [2020] EWCA Civ 1058, [2020] All ER (D) 26, the Court of Appeal upheld three of five grounds of appeal against the South Wales Police (SWP) force’s use of automated facial recognition (AFR) technology.

Among its conclusions, the court found that the discretion left to the force in its use of AFR was too broad to meet the standard required by Article 8(2) of the European Convention on Human Rights (ECHR), although it also found that the force’s use of AFR was proportionate in the circumstances. However, the court’s wider comments in its judgment serve as a reminder of the potential for unlawful discrimination to be perpetrated by technology.

South Wales Police has conducted trials of mobile AFR technology since 2017 to locate wanted offenders and suspects on several local watchlists at public events. SWP’s AFR technology can scan up to 50 faces per second via a live camera feed and compare them against a watchlist of up to 2,000 stored photographs of faces.

If a captured image matches a watchlist profile, the system alerts its operator, so that a decision can be taken on whether to make an intervention. Images that do not match anyone on the watchlist are automatically deleted.
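The judgment does not disclose how SWP’s software works internally, but the workflow described above can be illustrated in outline: compare each detected face against the watchlist, alert an operator when the similarity is high enough, and discard everything else. The following Python sketch is purely illustrative; the face embeddings, the cosine-similarity measure and the matching threshold are assumptions made for the example, not details of the actual system.

```python
from dataclasses import dataclass

import numpy as np

MATCH_THRESHOLD = 0.8  # hypothetical similarity threshold, chosen for illustration only


@dataclass
class WatchlistEntry:
    person_id: str
    embedding: np.ndarray  # pre-computed face embedding for the stored photograph


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def process_detected_face(face_embedding: np.ndarray,
                          watchlist: list[WatchlistEntry]) -> str | None:
    """Compare one detected face against the watchlist.

    Returns the matched person_id so that an operator can be alerted and
    decide whether to intervene, or None, in which case the captured image
    is discarded.
    """
    best_id, best_score = None, 0.0
    for entry in watchlist:
        score = cosine_similarity(face_embedding, entry.embedding)
        if score > best_score:
            best_id, best_score = entry.person_id, score

    if best_score >= MATCH_THRESHOLD:
        return best_id   # alert the operator; a human decides on any intervention
    return None          # no match: the captured image is deleted automatically
```

The point the court emphasises is that a match only triggers an alert: the decision on whether to intervene remains with a human operator.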

Images of the Appellant, Mr Bridges, were potentially captured by SWP’s AFR technology in December 2017 and March 2018, when he was in the vicinity of the equipment. While Mr Bridges was not included on the watchlist on either of these occasions, he challenged the lawfulness of the force’s use of the technology on the basis that it was not compatible with the right to respect for private life under ECHR Article 8, data protection legislation, and the Public Sector Equality Duty under section 149 of the Equality Act 2010.

The Court of Appeal rejected Mr Bridges’ submissions that the force’s use of the technology was not proportionate under ECHR Article 8(2) and that the Divisional Court was wrong not to reach a conclusion on whether SWP had an appropriate policy document in place for sensitive processing in accordance with section 42 of the Data Protection Act 2018. However, the court upheld Mr Bridges’ complaints that:

  • SWP’s interference with his ECHR Article 8(1) rights was not “in accordance with the law” for the purposes of ECHR Article 8(2), as there was no clear guidance as to where the technology could be used and who could be included on an AFR watchlist;
  • the data protection impact assessment that was required under section 64 of the Data Protection Act 2018 was deficient, as it had been written on the basis that ECHR Article 8 was not infringed; and
  • the force had not complied with its Public Sector Equality Duty under section 149 of the Equality Act 2010, as it had not taken reasonable steps to determine whether the software had a racial and/or sex bias.

The court observed that there was no clear evidence that SWP’s AFR software was in fact biased on grounds of race and/or sex. However, by recognising the Respondent’s failure to comply with its Public Sector Equality Duty and the risks attached thereto, the court has brought into focus the need for organisations to consider the potential for AFR (and other) technologies that process personal data to be biased and/or unlawfully discriminatory.

There are many reasons why unlawful discrimination may occur when such solutions are deployed. In facial recognition systems, bias can arise when unrepresentative data sets are used to “train” the software to recognise faces. If certain ethnicities or sexes are under-represented in the training data, for example, the software can exhibit significantly higher error rates for those groups.

In a police setting similar to that created by SWP during its trials, false positives caused by such errors could feasibly result in members of certain groups being significantly more likely to be stopped or subjected to other forms of intervention than others. It is this risk that the Public Sector Equality Duty referred to in Bridges is intended to address.
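Testing for this kind of disparity need not be complicated. As a purely illustrative Python sketch, assuming a labelled evaluation set with hypothetical group labels, field names and an assumed acceptable disparity ratio (nothing of the sort was in evidence in Bridges), false match rates can be computed per group and compared:

```python
from collections import defaultdict


def false_match_rates(records: list[dict]) -> dict[str, float]:
    """Compute the false-positive (false match) rate per demographic group.

    Each record is assumed to look like:
        {"group": "group_a", "predicted_match": True, "actual_match": False}
    where the group labels and field names are hypothetical.
    """
    false_positives = defaultdict(int)
    negatives = defaultdict(int)
    for r in records:
        if not r["actual_match"]:          # person was not on the watchlist
            negatives[r["group"]] += 1
            if r["predicted_match"]:       # but the system flagged them anyway
                false_positives[r["group"]] += 1
    return {g: false_positives[g] / n for g, n in negatives.items() if n}


def flag_disparity(rates: dict[str, float], max_ratio: float = 1.25) -> bool:
    """Flag if the highest group false match rate exceeds the lowest by more
    than an (assumed) acceptable ratio -- a prompt for further investigation,
    not a legal conclusion."""
    if not rates:
        return False
    lowest, highest = min(rates.values()), max(rates.values())
    if lowest == 0:
        return highest > 0
    return highest / lowest > max_ratio
```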

As the name suggests, the Public Sector Equality Duty only applies to public sector bodies in the UK. In short, the duty obliges public authorities to have due regard to the need to eliminate unlawful discrimination and to advance equality of opportunity. However, while only public authorities are under this duty, discrimination on grounds of race or sex can be unlawful whether perpetrated by public or private organisations.

Under section 19 of the Equality Act 2010, for example, indirect discrimination occurs when a person applies an apparently neutral provision, criterion or practice to people generally, but that provision, criterion or practice puts persons who share a protected characteristic at a particular disadvantage when compared with persons who do not share it. Accordingly, if an AFR solution took live feed photographs of the public at large, but persons from minority ethnic groups were more likely than others to be misidentified and then stopped by the police, then indirect discrimination on grounds of race would occur. Inherent bias in the system may be the cause of the disparity, but it would be no defence: unless the practice could be justified as a proportionate means of achieving a legitimate aim, the indirect discrimination would be unlawful.

It is not only AFR technologies that may suffer from bias. Artificial intelligence (AI) now sits behind many business operations and decisions. While such automation may take individual decisions out of human hands, complaints of race- and sex-based bias and discrimination driven by the technology itself have since emerged.

In New York State, regulators last year announced an investigation into the algorithm used to determine credit limits for users of Apple’s credit card, following complaints of sex bias after a purported pattern emerged of women being offered lower credit limits than men. Apple may have removed human decision-makers from credit scoring, but automating those decisions does not mean that unlawful discrimination cannot arise.

In response to the risks it perceives are posed by facial recognition technology, IBM recently announced that it will no longer offer general purpose facial recognition software. As its CEO Arvind Krishna remarked in a letter to the US Congress in June, “Vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

The Court of Appeal in Bridges makes much the same point in its judgment, recognising that software can sometimes have an inbuilt bias and that this needs to be tested for. The court’s observations should therefore serve as a warning to vendors of the risks of unlawful discrimination and bias that technology can cause.

First published by New Law Journal, 18 August 2020.

Paul Schwartfeger on 23 August 2020