By Brett Wilkins

The American Civil Liberties Union (ACLU) has filed a complaint on behalf of a Michigan man who was wrongfully arrested and detained by police earlier this year after an incorrect facial recognition match.

Detroit police were investigating the theft of five watches, with an estimated value of $3,800, from a Shinola retail store using face-matching technology from Rank One Computing. The digital image analysis software incorrectly matched the driver’s license photo of 42-year-old Robert Julian-Borchak Williams of Farmington Hills, located about 25 miles (40 km) northwest of Detroit, with a grainy security camera image of an alleged shoplifter. Both the suspect and Williams are Black.

NPR reports that police arrived at Williams’ home in January and arrested him on his front lawn in front of his wife, Melissa Williams, and their two daughters, ages 2 and 5, who cried as police took their father away. When his wife asked where he was being taken, an officer reportedly told her to “Google it.”

“[Officers] never even asked him any questions before arresting him,” ACLU of Michigan attorney Phil Mayor said of Williams. “They never asked him if he had an alibi.”

During his interrogation, Williams was shown three photos — two from the store surveillance camera and one of his driver’s license. “When I look at the picture of the guy, I just see a big Black guy. I don’t see a resemblance. I don’t think he looks like me at all,” Williams told NPR. “I picked it up and held it to my face and told him, ‘I hope you don’t think all Black people look alike.’”

Williams says officers released him on bail after 30 hours, acknowledging that “the computer” must have been wrong. He still must attend a court hearing.

“I never thought I’d have to explain to my daughters why daddy got arrested,” Williams told the Washington Post. “How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?”

“Seeing their dad get arrested, that was their first interaction with the police,” added Melissa Williams. “So it’s definitely going to shape how they perceive law enforcement.”

The ACLU complaint requests that Detroit police stop using facial recognition technology, “as the facts of Mr. Williams’ case prove both that the technology is flawed and that investigators are not competent in making use of such technology.”

Reuters reports that Rank One Computing dismissed concerns about misidentification as “misconceptions,” citing US government research claiming the high accuracy of leading facial recognition technology.

However, Ethics In Tech has identified numerous cases of AI and facial recognition errors. Last month, Microsoft announced that it had fired dozens of journalists in order to replace them with AI software. A week later, the new automated system confused two mixed-race members of the British pop group Little Mix, illustrating a story about one singer with a photo of the other. Microsoft claimed the mix-up was the result not of algorithmic bias but of a malfunctioning experimental feature in the automated system.

Last month, employees of Walmart, the world’s largest retailer, contacted journalists to express their concerns about anti-shoplifting AI software made by the Irish firm Everseen, which some workers had dubbed “Neverseen” because of its frequent mistakes. In addition to failing to flag stolen items, the software allegedly misinterprets innocent behavior as potential shoplifting.

Earlier this month, both Amazon and Microsoft said that they would stop selling facial recognition technology to police, citing concerns about human rights and racial bias. The announcements came amid worldwide protests over police and white supremacist killings of unarmed Black people including George Floyd and Breonna Taylor.

Government and law enforcement use of facial recognition has drawn opposition from human rights activists and digital privacy advocates because of its potential for abuse and its alarming rate of misidentification of people of color, especially women of color. Critics say these errors all but guarantee that vulnerable and over-policed groups will be prosecuted for crimes they did not commit.

A landmark 2018 study by Black researchers Joy Buolamwini and Timnit Gebru, later expanded with Deb Raji, found that some facial analysis algorithms misclassified darker-skinned women as much as 35 percent of the time while making virtually no errors on lighter-skinned men. Last year, a federal study by the National Institute of Standards and Technology concluded that facial recognition algorithms performed best when identifying middle-aged white men, and more poorly when identifying people of color, women, children or elderly people.

Given the systemic and institutional racism long endured by Black, Latinx and Native American people in the United States, and given that a single wrongful arrest can destroy lives, or worse, civil and digital rights advocates caution against the rush by law enforcement agencies to adopt facial recognition technology.

“To avoid repeating the mistakes of our past, we must read our history and heed its warnings,” writes ACLU of Massachusetts Technology for Liberty Director Kade Crockford. “If government agencies like police departments and the FBI are authorized to deploy invasive face surveillance technologies against our communities, these technologies will unquestionably be used to target Black and Brown people merely for existing.”

“That’s why racial justice organizations… are calling for a ban on the government’s use of this dystopian technology,” Crockford added.

There are currently no transparent rules regulating the use and sharing of facial recognition data. Congress, however, has been working on bipartisan legislation to regulate facial recognition use by government, law enforcement and the private sector. The House Oversight and Reform Committee has held several hearings on the issue over the past year.

In the meantime, state and local governments have stepped up to address the issue. Last October, California became the third state, after Oregon and New Hampshire, to ban facial recognition software in police body cameras. This followed moves by cities including San Francisco and Oakland to prohibit the use of the technology by law enforcement and other agencies.

“The movement to ban face recognition is gaining momentum,” writes Matthew Guariglia of the Electronic Frontier Foundation (EFF), a San Francisco-based digital rights group. “The historic demonstrations of the past two weeks show that the public will not sit idly by while tech companies enable and profit off of a system of surveillance and policing that hurts so many.”
