Facial recognition tech has been banned in U.S. cities like San Francisco and, most recently, Boston, on top of a concerted campaign to outlaw it on university campuses. But it seems the scrutiny can’t come fast enough.
The case began when police started investigating a shoplifting incident at a local store in October 2018. CCTV footage recorded a dark-skinned, heavyset man, dressed in black and wearing a St. Louis Cardinals cap, stealing several luxury watches.
A still image from the video was uploaded to Michigan’s facial recognition database. DataWorks’ software identified Robert Williams, who is black, as a possible suspect, along with several other individuals. The results were then shown to a loss-prevention contractor at the store where the theft took place, who singled out Williams as the lead suspect. It wasn’t clear whether she had actually witnessed the shoplifting or had merely reviewed the footage after the fact.
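To see why a software "hit" is only a candidate and not an identification, consider a rough sketch of how such searches generally work (this is an illustration with made-up data, not DataWorks’ actual system): a probe image is converted into a numeric embedding and compared against a gallery, and every face scoring above a tuned threshold comes back as a "possible match."

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical gallery of 128-dim face embeddings. In a real system these
# come from a neural network; here they are random stand-ins.
random.seed(0)
gallery = {name: [random.gauss(0, 1) for _ in range(128)]
           for name in ["person_a", "person_b", "person_c"]}

# A probe: a noisy view of person_b (e.g., a blurry CCTV still).
probe = [v + random.gauss(0, 0.5) for v in gallery["person_b"]]

THRESHOLD = 0.6  # tuned operating point; scores above it are only *candidates*
candidates = sorted(((name, cosine(probe, emb)) for name, emb in gallery.items()),
                    key=lambda t: -t[1])
possible_matches = [name for name, score in candidates if score >= THRESHOLD]
```

The key point is that the system returns a ranked list of candidates above a similarity threshold; turning a candidate into a suspect is supposed to require independent human investigation.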
Standard operating procedure requires that Michigan police conduct a thorough follow-up investigation to corroborate a software identification, such as obtaining eyewitness reports or biometric evidence, before issuing an arrest order. They did none of that, instead handcuffing Williams on his front lawn and taking him to a nearby detention center for further questioning.
Despite multiple attempts, investigators were unable to positively identify Williams as the culprit; still images from the CCTV footage placed next to Williams’s driver’s license photo showed a clear mismatch. Yet he was held for 30 hours, including a night in jail, before being released on bail.
The American Civil Liberties Union of Michigan took an active interest in the case and represented Williams in a hearing that took place two weeks after the initial arrest.
The charges have since been dropped, with the Wayne County prosecutor’s office apologizing for the arrest.
Clear bias in facial recognition technology
Large tech firms like Amazon, IBM, and Microsoft have recently announced that they will halt or pause supplying facial recognition tech to law enforcement and associated agencies. The decisions come after numerous studies showing clear biases and errors in the underlying algorithms.
Rekognition, Amazon’s facial recognition service, once identified popular U.S. television host Oprah Winfrey as male. It also erroneously matched 28 members of Congress against a mugshot database.
A study by the National Institute of Standards and Technology (NIST) revealed that the faces of minorities, such as African-American and Asian individuals, were falsely matched up to 100 times more often than Caucasian faces.
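The disparity the study describes is a ratio of false match rates between demographic groups. A minimal sketch of that calculation, using invented counts rather than NIST’s actual data:

```python
# Illustrative only: these counts are invented, not NIST's measurements.
# A "false match" occurs when the system matches a probe image to the
# wrong person's identity.
trials = {
    "group_x": {"false_matches": 100, "comparisons": 100_000},
    "group_y": {"false_matches": 1,   "comparisons": 100_000},
}

# False match rate (FMR) per group: false matches per comparison.
fmr = {g: d["false_matches"] / d["comparisons"] for g, d in trials.items()}

# The disparity is the ratio between the groups' rates.
disparity = fmr["group_x"] / fmr["group_y"]
print(f"FMR ratio: {disparity:.0f}x")
```

At the same operating threshold, identical systems can thus produce wildly different error rates for different groups, which is what makes a single systemwide accuracy figure misleading.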
Despite the disconcerting social implications of this bias, some countries are plowing ahead with the tech. Unless these errors are addressed, the long-term ramifications of widely adopting facial recognition technology could be extremely troubling.