U.K.: Studies Find Too Many Facial Recognition Flaws

Thirteen U.K. police forces scan millions of faces. The system gets it right about half the time in the field. And when it gets it wrong, you have to prove your own innocence.

Li Nguyen

Thirteen U.K. police forces use live facial recognition. The tech is expanding across retail. Watchdogs say oversight is lagging dangerously behind. Innocent people are bearing the cost.


The Guardian has run a series of investigations into the U.K.’s use of facial recognition technology — one of the most comprehensive examinations of the subject in British media. The findings are troubling in three distinct areas: 1) oversight of police facial recognition is lagging far behind the pace of deployment; 2) the technology used in U.K. law enforcement carries documented bias and accuracy problems; and 3) ordinary people — shoppers, commuters, members of the public — are being wrongly identified, publicly humiliated, and then forced to prove their own innocence before they can clear their names.

Taken together, the three investigations describe a country that has built significant facial recognition infrastructure — across police forces and retail chains — without building the legal framework, independent oversight, or accountability mechanisms to govern it. The Home Office ran a public consultation on a new legal framework earlier in the year. New laws, if any, are expected to take approximately two years. In the meantime, the cameras are already scanning.

What’s Happening & Why It Matters

How Live Facial Recognition Works — and Who Uses It

The U.K. currently uses three categories of facial recognition technology in law enforcement. Retrospective Facial Recognition (RFR) compares images from CCTV footage, mobile phones, or social media against custody photos held on the Police National Database after an incident has occurred. Police conduct more than 25,000 RFR searches per month across England and Wales. Live Facial Recognition (LFR) uses cameras in public spaces to scan faces in real time. The system compares each face against a watchlist of people wanted by police or courts. If the system detects a potential match, nearby officers receive an alert. If no match is found, the image is immediately deleted. Operator-Initiated Facial Recognition (OIFR) is a near-real-time variant currently used by South Wales Police and Gwent Police — and being trialled by the Metropolitan Police.
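To make the matching step concrete, here is a minimal sketch of an LFR-style watchlist comparison, assuming faces are reduced to embedding vectors and compared by cosine similarity. This is an illustration only — not Corsight’s or any force’s actual pipeline — and every name and threshold value in it is a hypothetical placeholder.

```python
# Minimal sketch of an LFR-style watchlist comparison -- illustrative only,
# not Corsight Apollo 4 or any police force's actual pipeline. The embedding
# model, threshold value, and all names here are hypothetical assumptions.
import numpy as np

THRESHOLD = 0.55  # illustrative operational confidence threshold


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_against_watchlist(face: np.ndarray,
                            watchlist: dict[str, np.ndarray]) -> str | None:
    """Return the watchlist ID to alert officers about, or None.

    On None, the embedding simply goes out of scope and is never stored,
    mirroring the stated policy of immediate deletion for non-matches.
    """
    best_id, best_score = None, -1.0
    for person_id, reference in watchlist.items():
        score = cosine_similarity(face, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= THRESHOLD else None


# Toy usage: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
watchlist = {"W-001": rng.normal(size=512), "W-002": rng.normal(size=512)}
print(match_against_watchlist(rng.normal(size=512), watchlist))  # likely None
```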

As of March 2026, 13 of the 43 police forces in England and Wales use LFR. A national rollout is planned. The Home Office announced in January 2026 that it would expand the number of facial recognition vans from 10 to 50 and make them available to all forces across England and Wales. On 21 April 2026, the U.K. High Court dismissed a legal challenge to the Metropolitan Police’s use of LFR — ruling the technology lawful and compatible with human rights. The Equality and Human Rights Commission had intervened in the case, backing the claimants. The court disagreed. Big Brother Watch — the civil liberties organisation that brought the challenge alongside youth worker Shaun Thompson, who was wrongly identified by LFR in 2024 — is appealing.

The Accuracy Problem: What Lab Tests Don’t Capture

The police use Corsight Apollo 4 — a facial recognition system supplied by Israeli firm Corsight AI. The National Physical Laboratory (NPL) tested the system in controlled lab conditions in March 2026. Lab results showed a true positive rate of 89% — meaning the system correctly identified 89 out of every 100 people on a watchlist. At the operational threshold used by police, the false positive rate in lab conditions was 0.017% — approximately 1 in every 5,700 faces scanned. Those figures sound reassuring. Street conditions, however, are not labs.
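Scale is why even a small error rate matters: a per-face false positive rate compounds into absolute numbers of wrong alerts once crowds are scanned. A back-of-envelope calculation, using crowd sizes that are illustrative assumptions rather than reported figures:

```python
# Back-of-envelope arithmetic on the NPL lab figures quoted above. The crowd
# sizes below are illustrative assumptions, not figures from the reporting.
LAB_FPR = 0.00017  # 0.017% false positive rate at the operational threshold

for faces_scanned in (10_000, 100_000, 1_000_000):
    expected_false_alerts = faces_scanned * LAB_FPR
    print(f"{faces_scanned:>9,} faces scanned -> "
          f"~{expected_false_alerts:,.0f} expected false alerts")
# Output: ~2, ~17, and ~170 wrong alerts respectively -- and that is under
# lab conditions, before the field degradation documented below.
```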

Cambridge University separately evaluated how the system performed in actual Essex Police operations. The findings were starkly different. At the operational threshold of 55 — the confidence setting Essex Police used — the true match rate was just 50.7%, meaning the system correctly identified a person on the watchlist roughly half the time under real conditions. The Cambridge evaluation also found demographic disparities: the system identified men more reliably than women, and Black participants more reliably than participants from other ethnic groups. Essex Police suspended its LFR programme in March 2026 after civil rights groups discovered the force had been using confidence thresholds calibrated for a different algorithm than the one actually deployed — meaning every operational assessment of the system’s performance had been conducted on the wrong basis. The Information Commissioner’s Office (ICO) and Cambridge University are now auditing the system.
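Why does borrowing a threshold from a different algorithm invalidate every assessment? Because a confidence score is only meaningful relative to the score distribution of the algorithm it was calibrated on. The toy distributions below are invented to illustrate the point; they are not the real systems’ scores.

```python
# Illustration of the threshold-calibration problem -- the two score
# distributions below are invented for this sketch, not the real algorithms'.
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 55  # the operational setting cited in the Cambridge evaluation

# Algorithm A: genuine matches score around 70 on its own scale.
true_match_scores_a = rng.normal(loc=70, scale=8, size=100_000)
# Algorithm B: same semantics, but genuine matches score around 55.
true_match_scores_b = rng.normal(loc=55, scale=8, size=100_000)

print(f"A: {np.mean(true_match_scores_a >= THRESHOLD):.1%} of true matches pass")
print(f"B: {np.mean(true_match_scores_b >= THRESHOLD):.1%} of true matches pass")
# ~97% vs ~50%: the same numeric threshold describes two very different
# operating points, so performance figures computed against the wrong
# algorithm's scale say nothing about the system actually deployed.
```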

The Wrongful Arrest That Started in January 2026

The gap between lab accuracy and real-world performance has direct human consequences. In January 2026, an Asian man in England was wrongfully arrested after Retrospective Facial Recognition technology incorrectly identified him as a suspect. His case is not isolated. Tests on a commonly used RFR algorithm conducted in 2025 found that Black women were the subject of the highest percentage of false positive identifications — reaching 9.9% at a 0.8 confidence threshold. That figure means that for every 100 Black women whose faces the system processes, nearly 10 are incorrectly flagged as potential suspects.

Amnesty International U.K. has stated clearly: “Introducing biased technology into contexts where racial discrimination already occurs will only exacerbate the problem.” The Biometrics and Surveillance Camera Commissioner has repeatedly raised concerns. Police forces have continued to expand deployments regardless. Big Brother Watch director Silkie Carlo described the situation with precision after the High Court ruling. “There has never been a more important time to stand up for the public’s rights against dystopian surveillance tech that turns us into walking ID cards and treats us like a nation of suspects.”

Retail Facial Recognition: Guilty Until Proven Innocent

The second arm of the facial recognition problem in the U.K. is not in police hands — it is in retail. Facewatch operates facial recognition systems in Sainsbury’s, B&M, Budgens, Costcutter, SPAR, Sports Direct, and other U.K. retailers. The system maintains watchlists of people previously associated with theft or antisocial behaviour in those stores. When a camera detects a face that matches someone on the watchlist, it alerts store staff.
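For illustration, the sketch below models the kind of record such a retail system plausibly keeps and the alert it raises. This is an assumed schema, not Facewatch’s actual data model; note that the `reason` field is an unadjudicated allegation, not a finding of guilt.

```python
# Hypothetical sketch of a retail watchlist record and match alert -- an
# assumed schema for illustration, not Facewatch's actual data model.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class WatchlistEntry:
    entry_id: str
    face_template: list[float]   # biometric template derived from CCTV
    reason: str                  # e.g. "suspected theft" -- unadjudicated
    reporting_store: str
    added_at: datetime


@dataclass
class MatchAlert:
    entry: WatchlistEntry
    store: str
    confidence: float            # a similarity score, not a certainty
    raised_at: datetime


def should_notify_staff(alert: MatchAlert, threshold: float = 0.6) -> bool:
    """Staff see the alert; the person matched is shown no evidence."""
    return alert.confidence >= threshold
```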

In February 2026, Warren Rajah — a 42-year-old tech worker — visited a Sainsbury’s store at Elephant and Castle in south London. Staff confronted him, told him he had been flagged by the facial recognition system, and escorted him from the premises. Rajah had never committed any offence. When he contacted Facewatch, the company told him he was not on its database. Sainsbury’s subsequently apologised and offered him a £75 voucher. Facewatch described the incident as “human error in-store, where a member of staff approached the wrong customer.” Rajah’s response was more pointed. “Am I supposed to walk around fearful that I might be misidentified as a criminal?” he asked. “Imagine how mentally debilitating this could be to someone vulnerable, after that kind of public humiliation.”

The Proof-of-Innocence Problem

Rajah’s experience illustrates a systemic problem that extends far beyond one supermarket. When a person is wrongly identified by a facial recognition system — whether in a police context or a retail one — the burden of proving their innocence falls entirely on them. Rajah had to provide his passport and a headshot to Facewatch before the company would confirm he was not on its database. He was required to submit biometric data to prove he should not have been subjected to biometric surveillance in the first place.

Jasleen Chaggar, Legal and Policy Officer at Big Brother Watch, described the pattern she observes regularly. “The idea that we are all just one facial recognition mistake away from being falsely accused of a crime or ejected from a store without any explanation is deeply chilling. To add insult to injury, innocent people seeking a remedy must jump through hoops and hand over even more personal data to discover what they’re accused of. In the vast majority of cases, they are offered little more than an apology when companies are finally forced to admit the tech got it wrong.” Silkie Carlo added: “Members of the public are being put on secret watchlists, without their knowledge and without being shown any evidence, and then electronically [denylisted] from their high streets.”

The Oversight Gap: Rules That Don’t Keep Up

The investigation found that watchdogs responsible for overseeing facial recognition are consistently outpaced by the speed of deployment. The ICO has issued guidance. The Biometrics Commissioner has raised concerns. Parliament’s own Public Administration and Constitutional Affairs Committee has called for reform. Yet the technology continues to expand faster than any of the bodies can meaningfully constrain it.

The Home Office consultation that closed in February 2026 proposed a new single regulatory body to oversee biometrics. The consultation explored whether the framework should cover voice recognition, iris scanning, gait analysis, and emotion detection — a scope that extends well beyond live facial recognition alone. New laws governing the technology are expected to take approximately two years to pass. In the interim, the Met Police is expanding its LFR deployments to 10 per week. The Home Office is moving toward 50 facial recognition vans across England and Wales. And retail chains continue to add Facewatch cameras without any dedicated statutory framework governing their use. In the rest of Europe, live facial recognition in retail settings is effectively banned for private companies. The U.K. is a complete outlier.

TF Summary: What’s Next

Big Brother Watch is appealing the High Court’s April ruling upholding Metropolitan Police LFR use. That appeal will be the next major legal test for the technology in the U.K. The Equality and Human Rights Commission intervened in the original case and backed the claimants. Its position in the appeal proceedings will be closely watched. A civil lawsuit brought by a teenage girl who was wrongly identified by Facewatch technology at Home Bargains — searched, removed from the store, and told she was barred from multiple shops — is also progressing through the courts.

MY FORECAST: The government’s facial recognition framework — if passed — will not arrive for at least two years. Every month of delay adds another set of LFR deployments, another set of retail facial recognition installations, and another set of innocent people processed through a system that has documented accuracy problems, documented demographic disparities, and no adequate mechanism for those wrongly identified to seek remedy. The technology is running ahead of the law. The people bearing the cost of that gap are the ones least equipped to challenge it.


