The world’s biggest online retailer is partnering with law enforcement agencies across the country to sell them facial recognition software. It’s not exactly a secret that Amazon has been peddling its Rekognition software to police departments and others with an interest in identifying strangers on the street, but little is known about how the service works.
In recent months, the Congressional Black Caucus and 70 civil rights groups have asked Amazon to be more transparent about how it handles its potentially invasive technology.
Those concerns prompted the American Civil Liberties Union (ACLU) to put Rekognition to the test, the group reported on Thursday. For just $12.33, the ACLU says, it was able to scan the faces of members of Congress against a database of 25,000 mugshots. The results were neither encouraging nor accurate.
According to the ACLU, the software falsely matched 28 members of Congress to people in the mugshot photos. The false matches included men and women, young and old alike, though they fell disproportionately on people of color: six members of the Congressional Black Caucus were among those misidentified, the ACLU said.
“An identification — whether accurate or not — could cost people their freedom or even their lives,” the group writes. “People of color are already disproportionately harmed by police practices, and it’s easy to see how Rekognition could exacerbate that.”
Some police agencies are already using Rekognition.
Racial blind spots
When the Congressional Black Caucus wrote to Amazon’s Jeff Bezos last month, the group described artificial intelligence as something with “boundless economic potential,” but the caucus simply asked Amazon to “engage with us in a substantive dialogue” about its concerns.
In contrast to the claims of disgruntled workers like James Damore, research has shown that Silicon Valley’s workforce is overwhelmingly white. Most recently, the non-profit newsroom Reveal published an analysis indicating that “ten large technology companies in Silicon Valley did not employ a single black woman in 2016,” while three firms “had no black employees at all.”
It’s not the first time that Silicon Valley’s racial blind spots have been made painfully obvious by artificial intelligence failures. In 2015, a software developer revealed that the image recognition feature in Google Photos classified his black friends as “gorillas.”
Three years later, Google had supposedly “fixed” the problem — by simply blocking the “gorilla” label altogether.