British police officers are among those concerned that the use of artificial intelligence in fighting crime is raising the risk of profiling bias, according to a report commissioned by government officials.

The paper warned that algorithms might judge people from disadvantaged backgrounds as “a greater risk” because they were more likely to have contact with public services, generating more data that could in turn be used to train the AI.
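To make that feedback loop concrete, here is a minimal Python sketch. It is purely illustrative: the group labels, offence rate and contact rates are invented assumptions, not figures from the report or any real system. Two groups offend at the same underlying rate, but one has more recorded contact with public services, so a naive model that scores risk from recorded history alone rates it as riskier.

```python
# Hypothetical sketch of a data feedback loop, not any force's real system.
# Groups A and B offend at the SAME underlying rate, but group B's events
# are recorded far more often, so a history-based "risk score" inflates.
import random

random.seed(0)

TRUE_OFFENCE_RATE = 0.05                 # identical for both groups (assumed)
CONTACT_RATE = {"A": 0.2, "B": 0.6}      # group B is observed more often (assumed)

def naive_risk_score(group, population=10_000):
    """Fraction of the group with a *recorded* event, used as a 'risk score'."""
    recorded = sum(
        1
        for _ in range(population)
        if random.random() < TRUE_OFFENCE_RATE      # the event happens...
        and random.random() < CONTACT_RATE[group]   # ...and it gets recorded
    )
    return recorded / population

for group in ("A", "B"):
    print(f"group {group}: naive risk score = {naive_risk_score(group):.3f}")
```

Running this prints a score roughly three times higher for group B, despite identical underlying behaviour, which is the pattern the report's authors warn about.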

“Police officers themselves are concerned about the lack of safeguards and oversight regarding the use of algorithms in fighting crime,” researchers from the defence think-tank the Royal United Services Institute said. 

The report acknowledged that emerging technology including facial recognition had “many potential benefits”. But it warned that assessment of long-term risks was “often missing”. 

The lack of regulation governing law enforcement’s use of algorithms has come under growing scrutiny around the world, but privacy watchdogs and legal experts have struggled to limit the technology’s widespread adoption, even though its effectiveness and accuracy have been questioned.

One police officer called the use of AI a “patchwork quilt, uncoordinated and delivered to different standards in different settings and for different outcomes”. The officer was referring to new technology such as facial recognition and predictive policing, which uses past crime data to assess whether someone is likely to commit a future offence or to identify hotspots in which to deploy officers.
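The hotspot variety of predictive policing can be sketched in the same hypothetical spirit. In the toy simulation below, all names and numbers are invented: patrols are sent to the grid cells with the most recorded crime, and because crime only enters the database where officers are present to record it, early random noise hardens into permanent “hotspots” even though every cell has the same underlying crime rate.

```python
# Hypothetical hotspot-policing sketch: a toy grid, not a real deployment.
from collections import Counter
import random

random.seed(1)

CELLS = [f"cell_{i}" for i in range(9)]   # a toy 3x3 city grid (assumed)
TRUE_RATE = 0.3                           # identical crime rate in every cell

# Noisy historical data: some cells start with a few more recorded crimes.
recorded = Counter({c: random.randint(0, 3) for c in CELLS})

def patrol_plan(history, k=3):
    """Send officers to the k cells with the most recorded crime."""
    return {cell for cell, _ in history.most_common(k)}

for week in range(20):
    patrolled = patrol_plan(recorded)
    for cell in CELLS:
        # A crime happens with the same probability in every cell...
        if random.random() < TRUE_RATE:
            # ...but it is only recorded if officers are there to see it.
            if cell in patrolled:
                recorded[cell] += 1

print(recorded.most_common())  # the initially lucky cells now dominate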

The report, which included views from legal experts, police forces and civil society organisations, also highlighted the risk of police officers becoming over-reliant on AI. It is part of an ongoing review into algorithmic bias in policing by the government’s Centre for Data Ethics and Innovation, which was founded last year to help prepare the UK for the proliferating use of artificial intelligence.

A government spokesperson said the CDEI was expected to “analyse the findings as it develops its report into the broader issue of algorithmic bias”.

Hannah Couchman, of the civil rights group Liberty, said the use of data and algorithms in policing had “enormous potential” to infringe on human rights. “The creation of national standards will not be enough, and we should ban the use of predictive policing programmes,” she said.

This month the UK’s High Court ruled in favour of South Wales Police’s use of facial recognition, after a member of the public argued that scanning people’s faces as they moved around in public violated their privacy. The case was the world’s first legal challenge to police use of facial recognition, according to the court.

The ruling came weeks after the UK’s data protection regulator launched an investigation into the use of facial recognition technology in the King’s Cross area of London, arguing it risked not complying with data protection law.

Simon Kempton, technology lead at the Police Federation of England and Wales, which represents police constables, said: “It is only right that police should be using any technology available to them to protect the public at a time when officer numbers have been decimated.”

Kent Police, the first UK force to trial predictive policing, cancelled the project last year after five years, saying it had been difficult to prove that the technology had helped reduce crime.

Additional reporting by Bethan Staton
