In a letter to Congress, IBM said it is shutting down its facial recognition business and will no longer provide the controversial technology to police departments.
Arvind Krishna, IBM's CEO, said the technology, which can enable racial profiling and mass surveillance, could be used by police to violate "basic human rights and freedoms," which wouldn't align with the company's values.
"IBM no longer offers general purpose IBM facial recognition or analysis software," Krishna said in the letter. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency."
Addressing racial inequity
In the wake of the killing of George Floyd by police in Minneapolis, Krishna said IBM believes “now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
The letter included Krishna's suggestions for how law enforcement could responsibly use technology such as artificial intelligence. He recommended that such systems be thoroughly tested to ensure they are free of bias, and said police misconduct should be met with tougher penalties.
"Congress should bring more police misconduct cases under federal court purview and should make modifications to the qualified immunity doctrine that prevents individuals from seeking damages when police violate their constitutional rights," Krishna said. "Congress should also establish a federal registry of police misconduct and adopt measures to encourage or compel states and localities to review and update use-of-force policies."
Controversial technology
Beyond its potential to violate human rights, facial recognition technology has been criticized as less accurate at identifying people of color. Krishna cited the risk of discriminatory results as one of IBM's key reasons for abandoning "general purpose" facial recognition software.
"Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe," he wrote to Congressional Democrats. "But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularity when used in law enforcement, and that such bias testing is audited and reported."