At the end of each summer for the last 14 years, the small Welsh town of Porthcawl has been invaded. Every year its population of 16,000 is swamped by up to 35,000 Elvis fans. Many of those attending the annual festival look the same: they slick back their hair, throw on oversized sunglasses and don white flares.
At 2017's Elvis festival, impersonators were faced with something different. Police were trialling automated facial recognition technology to track down criminals. Cameras scanning the crowds spotted 17 faces believed to match those stored in databases. Ten were correct, and seven people were wrongly identified.
South Wales Police has been testing an automated facial recognition system since June 2017 and has used it in the real world at more than ten events. At the majority of these deployments, the system has made more incorrect matches than correct identifications of potential suspects or offenders.
The automated facial recognition system has been used at sporting events and concerts, and during coordinated police crackdowns on certain types of crime. Figures from South Wales Police, released following a Freedom of Information request, show the number of times its system made correct and incorrect matches. The police force has now also published the data on its website.
During the UEFA Champions League Final week in Wales last June, when the facial recognition cameras were used for the first time, there were 2,470 alerts of possible matches from the automated system. Of these, 2,297 turned out to be false positives and only 173 were correct, meaning around 93 per cent of matches were wrong. A spokesperson for the force blamed the low quality of images in its database and the fact that it was the first time the system had been used.
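To make the arithmetic behind that figure explicit, here is a minimal calculation using only the numbers above (Python is used purely for illustration):

```python
# False positive rate = false alerts / total alerts, using the figures
# released for the Champions League Final deployment.
total_alerts = 2470
false_alerts = 2297
true_alerts = total_alerts - false_alerts  # 173 correct identifications

fp_rate = false_alerts / total_alerts
print(f"Correct matches: {true_alerts}")      # 173
print(f"False positive rate: {fp_rate:.1%}")  # ~93.0%
```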
But experts have warned that the systems used by South Wales Police and other forces have little regulatory oversight and lack transparency about how they work. There are also questions over their accuracy. Police, on the other hand, say the correct matches have led to arrests and that the system is improving.
"These figures show that not only is real-time facial recognition a threat to civil liberties, it is a dangerously inaccurate policing tool," says Silke Carlo the director of rights group Big Brother Watch. The group is planning on launching a campaign against facial recognition tech in parliament later this month. "South Wales’ statistics show that the tech misidentifies innocent members of the public at a terrifying rate, whilst there are only a handful of occasions where it has supported a genuine policing purpose," Carlo adds.
At the Anthony Joshua versus Kubrat Pulev boxing match in October 2017 – the same month police improved the algorithm in the system they were using – there were five true positives compared to 46 false alerts, a false positive rate of around 90 per cent. At the Wales versus Australia rugby international, six true positives were outweighed by 42 false alerts.
During a royal visit from Prince Harry and Meghan Markle to Cardiff in January 2018, the automated facial recognition system used by South Wales Police made no matches at all, either correct or false. And on November 3, 2017, when police recovered the body of a man who had jumped into the River Taff and died, automated facial recognition was used to help discover his identity.
Testing facial recognition
Automated systems that scan people's faces in public and try to make matches are at an early stage in the UK. In China, systems are more advanced: during one stunt, a BBC News reporter was located within just seven minutes. Chinese systems have also been used to spot a man at a concert who was later arrested.
The benefits of facial recognition for policing are evident. The Welsh police force says it intends its system to help with the detection and prevention of crime. "For police it can help facilitate the identification process and it can reduce it to minutes and seconds," says Alexeis Garcia-Perez, a researcher on cyber security management at Coventry University. "They can identify someone in a short amount of time and in doing that they can minimise false arrests and other issues that the public will not see in a very positive way."
South Wales Police, in its privacy assessment of the technology, says it is a "significant advantage" that no "co-operation" is required from a person. Its system consists of two CCTV cameras connected to a laptop or server. The CCTV feed is recorded and faces are pulled from the footage, then compared automatically against a watch list. This list, held as a database, can contain thousands of facial images.
"Watchlists and the associated metadata are manually added to the system and will be reviewed regularly to ensure accuracy and currency and will be deleted at the conclusion of the respective deployment," South Wales Police says in its privacy assessment.
In the future, the police force says, it may be possible to integrate the facial recognition technology with databases from other sources. It says the Police National Database (which holds more than 19 million images), the Automatic Number Plate Recognition database, and passport or driving licence databases could be added to its system.
"I think the false positive rates are disappointingly realistic," says Martin Evison a forensic science professor who has researched police recognition techniques at Northumbria University. "If you get a false positive match, you automatically make a suspect of somebody that is perfectly innocent." He says facial recognition systems being used in the real-world often can't perform at the same levels as those in more controlled conditions. "The existing algorithms and systems that have been used are still limited," adds Garcia-Perez.
Evison explains that systems used by police need to be carefully configured for accuracy, so that they don't produce more false positives than necessary. "If you want to catch a suspect, you have to set the threshold sufficiently low so you won't miss them and make a false exclusion," he says. "If you do set the threshold low then you inevitably get a higher number of false matches."
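A toy example makes that trade-off concrete. The similarity scores below are invented for illustration and are not drawn from police data:

```python
# Lowering the threshold misses fewer suspects but raises more false alerts.
genuine_scores  = [0.82, 0.74, 0.55, 0.91]   # true suspect sightings
impostor_scores = [0.48, 0.61, 0.35, 0.52]   # innocent passers-by

for threshold in (0.7, 0.5):
    missed = sum(s < threshold for s in genuine_scores)          # false exclusions
    false_alerts = sum(s >= threshold for s in impostor_scores)  # false positives
    print(f"threshold={threshold}: missed suspects={missed}, false alerts={false_alerts}")
```

At a threshold of 0.7 one genuine suspect slips through with no false alerts; dropping it to 0.5 catches every suspect but flags two innocent people, which is exactly the tension Evison describes.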
South Wales Police says the government's Home Office has not set any target false positive rates, but the system does allow the "operator to vary the match threshold against a watch list as a whole".
The force says that each time the system issues an alert, a police officer reviews whether it is a match and, if it appears to be, can send an "intervention team". These teams are only sent when it is believed the match is correct.
"When the intervention team is dispatched this involves an officer having an interaction with the potentially matched individual," a spokesperson for the police force says. "Officers can quickly establish if the person has been correctly or incorrectly matched by traditional policing methods i.e. normally a dialogue between the officer/s and the individual". Those incorrectly identified are directed to an explanation that's posted online.
Multiple arrests have been made as a result of South Wales' use of the tech. In January, it was reported that 50 charges and 12 arrests had been made as a result of its use. In February, one police officer using the system tweeted: "It’s another UK Policing first! Positively identified from night club CCTV a month ago using the technology and located in Cardiff during today’s deployment."
But the use of the system raises privacy considerations. "It is accepted that civil rights [sic] may start to voice concerns over the invasion of privacy by this technology," the Welsh police force's privacy assessment of its own system says.
Legal basis
South Wales Police isn't the only UK constabulary that has been testing facial recognition systems in public places. In 2015, police in Leicestershire tested facial recognition technology at the Download music festival, which was attended by 90,000 people. The system did not catch any criminals at the event.
Elsewhere, London's Metropolitan Police has tested its technology at both the Notting Hill Carnival and a remembrance service held at the Cenotaph in November 2017.
At the time of the Met's most recent test, civil rights group Liberty argued there was "no legal basis" for the use of the real-time system. Fellow rights group Big Brother Watch also points out that parliament has never scrutinised the technology's use in public places. An erroneous arrest was also reported to have been made as a result of the technology.
The automated facial recognition systems being used at public events are separate from police uses of face-matching technology on other images, such as social media photos or pre-recorded CCTV footage. In 2012, the High Court ruled it was unlawful for millions of photos of innocent people to be kept on police databases. At present, these can only be removed if a person makes a complaint to police.
The UK's regulators for biometric data and surveillance cameras have both called for stronger rules around the use of automated facial recognition technology. In his 2017 annual report, Tony Porter, the Surveillance Camera Commissioner, said automatic facial recognition is "fascinating" but has "intrusive capabilities" and can have "crass" applications.
"The public will be more amenable to surveillance when there is justification, legitimacy and proportionality to its intent," Porter said. "Currently there are gaps and overlaps in regulatory oversight."
Evison says there should be more consideration around when facial recognition systems are used. "I'd feel much more comfortable if the police were concentrating on investigating particular types of crime," he says. "In the context of football matches it might be violence, in the context of rock concerts it might be cellphone snatching by organised crime gangs."