#privacy: Report warns that AI policing tools may “amplify” prejudices

Evidence suggests that the absence of consistent guidelines for the use of automation and algorithms may lead to discrimination in police work.

The Royal United Services Institute (RUSI) published a report, commissioned by the Centre for Data Ethics and Innovation (CDEI), based on interviews with 50 experts, including senior police officers in England and Wales.

The report found that the use of AI policing tools could introduce or reinforce bias. Algorithms trained on prior police data “may replicate (and in some cases amplify) the existing biases inherent in the dataset”, such as under- or over-policing of certain communities.

One police officer interviewed commented: “Young black men are more likely to be stopped and searched than young white men, and that’s purely down to human bias. That human bias is then introduced into the datasets, and bias is then generated in the outcomes of the application of those datasets.”

The report explained that a biased sample can be amplified by algorithmic prediction through a feedback loop: rather than predicting future crime, the system ends up predicting future policing.

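To make that mechanism concrete, the following is a minimal illustrative simulation. It is not taken from the report, and the model is a deliberately simplified assumption: two areas have the same underlying crime rate, but one starts with more patrols, so more incidents are recorded there, and a predictor that allocates patrols in proportion to recorded incidents keeps reinforcing the imbalance.

```python
import random

# Illustrative toy model (an assumption, not from the report): two areas with
# the SAME underlying crime rate, but area_b starts with more patrols.
TRUE_CRIME_RATE = 0.1                      # chance an incident occurs per patrol visit
patrols = {"area_a": 10, "area_b": 20}     # historic imbalance in patrol allocation
recorded = {"area_a": 0, "area_b": 0}      # incidents recorded so far

random.seed(0)

for step in range(20):
    # Incidents are only *recorded* where officers are present to observe them.
    for area, n_patrols in patrols.items():
        recorded[area] += sum(random.random() < TRUE_CRIME_RATE for _ in range(n_patrols))

    # "Predictive" allocation: next round of patrols is proportional to recorded incidents.
    total = sum(recorded.values()) or 1
    patrols = {area: max(1, round(30 * count / total)) for area, count in recorded.items()}

print("Recorded incidents:", recorded)
print("Final patrol allocation:", patrols)
# Both areas have identical true crime rates, yet area_b accumulates far more
# recorded incidents and patrols: the model predicts policing, not crime.
```
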
Another officer mentioned that “we pile loads of resources into a certain area and it becomes a self-fulfilling prophecy, purely because there’s more policing going into that area, not necessarily because of discrimination on the part of officers.”

The report also explained that individuals from disadvantaged socio-demographic backgrounds are more likely to come into frequent contact with public services, which means the police hold more data about them. As a result, they may be calculated as posing a greater risk.

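As a hypothetical illustration of that point (again, not drawn from the report), a naive risk score that simply counts a person’s recorded contacts will rate someone higher purely because more of their life is visible in police-accessible data:

```python
# Hypothetical illustration (not from the report): a naive risk score that
# simply counts recorded contacts with public services.
person_a_records = ["housing_support", "benefits_claim",
                    "social_care_referral", "noise_complaint"]
person_b_records = ["noise_complaint"]     # same incident, far less recorded history

def naive_risk_score(records):
    """Score grows with the volume of recorded contacts, not with behaviour."""
    return len(records)

print(naive_risk_score(person_a_records))  # 4 -> flagged as higher risk
print(naive_risk_score(person_b_records))  # 1 -> flagged as lower risk
```
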
The report also warned that algorithmic fairness is not just about data. Achieving fairness requires careful consideration of the wider “operational, organisational and legal context”, as well as the overall decision-making process.

“Officers often disagree with the algorithm,” one officer said. “I’d expect and welcome that challenge. The point where you don’t get that challenge, that’s when people are putting that professional judgement aside.”

Other officers in the study suggested that some colleagues may be too ready to disregard an algorithm’s recommendations, raising the question of whether “professional judgement might just be another word for bias”.

However, Alexander Babuta, a research fellow at RUSI, said this issue could be addressed. He explained that police forces are now exploring the potential of “these new types of data analytics for actually eliminating bias in their own data sets”, although clearer processes need to be implemented “to ensure that those safeguards are applied consistently”.

Andy Davies, consultant for the Police and Intelligence Services at SAS UK and Ireland, added: “[AI] should not be used as a standalone solution. Rather, this new technology should complement the work of the emergency services by providing them with the necessary insights that they need to make informed, and potentially life-saving decisions.”

Assistant Chief Constable Jonathan Drake said:

“For many years police forces have looked to be innovative in their use of technology to protect the public and prevent harm and we continue to explore new approaches to achieve these aims.

“But our values mean we police by consent, so anytime we use new technology we consult with interested parties to ensure any new tactics are fair, ethical and producing the best results for the public.”

