UK’s Met Police’s facial recognition technology isn’t, 98% of the time

The surveillance camera commissioner has said he is concerned about the quantity of false positives (Getty)

‘Intrinsically Orwellian’ systems must be scrapped, campaigners say as biometrics commissioner brands them ‘not yet fit for use’

14 May 2018 | Jon Sharman | Independent

Facial recognition software used by the UK’s biggest police force has returned false positives in more than 98 per cent of alerts generated, The Independent can reveal, with the country’s biometrics regulator calling it “not yet fit for use”.

The Metropolitan Police’s system has produced 104 alerts of which only two were later confirmed to be positive matches, a freedom of information request showed. In its response the force said it did not consider the inaccurate matches “false positives” because alerts were checked a second time after they occurred.

Facial recognition technology scans people in a video feed and compares their images to pictures stored in a reference library or watch list. It has been used at large events such as the Notting Hill Carnival and a Six Nations rugby match.
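For readers unfamiliar with how such a system raises an alert, the sketch below is a minimal illustration of generic watch-list matching, not the Met's or South Wales Police's actual pipeline. It assumes face images have already been converted into numeric embeddings; the function names and the 0.6 threshold are assumptions for illustration only.

```python
# A minimal sketch of watch-list matching, NOT the Met's actual system.
# Assumes faces have already been turned into numeric embedding vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Return the best watch-list match above the threshold, else None.

    Set the threshold too low and the system alerts on lookalikes
    (false positives); set it too high and it misses genuine matches.
    """
    best_name, best_score = None, threshold
    for name, ref_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, ref_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_name is not None else None
```

The trade-off in that threshold is what the figures in this article describe: tuning the system to catch more genuine matches also means flagging more innocent passers-by.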

The system used by another force, South Wales Police, has returned more than 2,400 false positives in 15 deployments since June 2017. The vast majority of those came during that month’s Uefa Champions League final in Cardiff, and overall only 234 alerts – fewer than 10 per cent – were correct matches.
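For clarity, here is how those percentages fall out of the raw counts reported above. This is a rough check only, since the South Wales figure is given as "more than 2,400" false positives.

```python
# Back-of-the-envelope check of the rates quoted above.
met_alerts, met_true = 104, 2
print(f"Met false alerts: {(met_alerts - met_true) / met_alerts:.1%}")  # ~98.1%

# South Wales Police: "more than 2,400" false positives, 234 correct matches.
sw_false, sw_true = 2400, 234
print(f"South Wales correct matches: {sw_true / (sw_false + sw_true):.1%}")  # ~8.9%, under 10%
```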

Both forces are trialling the software.

The UK’s biometrics commissioner, Professor Paul Wiles, told The Independent that legislation to govern the technology was “urgently needed”.

He said: “I have told both police forces that I consider such trials are only acceptable to fill gaps in knowledge and if the results of the trials are published and externally peer-reviewed. We ought to wait for the final report, but I am not surprised to hear that accuracy rates so far have been low as clearly the technology is not yet fit for use.

“In terms of governance, technical development and deployment is running ahead of legislation and these new biometrics urgently need a legislative framework, as already exists for DNA and fingerprints.

“The Home Office has promised to publish a biometric strategy in June and I trust that this will propose a legislative framework. It is important in terms of public trust that the public are clear when their biometrics might be taken and what they might be used for, and that parliament has decided those rules.”

But a Home Office spokesman admitted this week that the department could not say when the long-delayed biometrics strategy would be published.

Campaigners said the “intrinsically Orwellian” facial recognition software should be scrapped, while a senior academic warned that governments faced “grave challenges” in preventing potential abuse of the technology.

Silkie Carlo, director of the Big Brother Watch pressure group, which is to launch a campaign on the issue on Tuesday, said: “Police must immediately stop using real-time facial recognition if they are to stop misidentifying thousands of innocent citizens as criminals.

“It is an intrinsically Orwellian police tool that has resulted in ordinary people being stopped and asked for their ID to prove their innocence.

“It is alarming and utterly reckless that police are using a technology that is almost entirely inaccurate, that they have no legal power for, and that poses a major risk to basic democratic freedoms. It must be dropped.”

Tao Zhang, a senior lecturer at Nottingham Trent University, told The Independent that a lack of open debate about facial recognition technology “could clearly be exploited by an authoritarian state for the purpose of political control, as the case of China illustrates”.

While checks and balances existed in democracies like Britain, she added, “with such a rapidly developing technology, there is danger that public policy may not keep pace”.

Dr Zhang added: “From medical research and healthcare to crime control and many other fields, facial recognition potentially has huge benefits, but it also imposes grave challenges for the government to prevent commercial and political exploitation of it for illegal acts.”

The Met told The Independent no end date for its experiment had been set. The force said it had made no arrests through the system, and deleted images involved in false positive matches within 30 days of the error. Images that do not generate alerts are “immediately” deleted, a spokesman said.

At last year’s Notting Hill Carnival, however, one person was reportedly detained erroneously following the use of facial recognition. The Met insisted the person was not technically arrested, but was instead released when officers realised they had already been dealt with for a public order offence, Sky News reported at the time.

 

Original Link | Metropolitan Police’s facial recognition technology 98% inaccurate, figures show
