When a recent paper suggested that self-driving cars are more likely to hit pedestrians with darker skin, it was met with some skepticism. Critics quickly waved away the concerns by noting that the researchers didn’t have access to any individual company’s self-driving models or datasets. But that’s because such industry data is not offered to the public, leaving researchers with few ways to independently vet the technology already being unleashed on public roads.
“In an ideal world, academics would be testing the actual models and training sets used by autonomous car manufacturers,” Kate Crawford, an AI researcher not involved with the paper, said in a post defending the work. “But given those are never made available (a problem in itself), papers like these offer strong insights into very real risks.”
The risk described by the Georgia Institute of Technology researchers in their report, published in late February, was a five percentage point difference in the software’s ability to detect whether a pedestrian was in a machine’s path, depending on whether the pedestrian had darker or lighter skin. The authors said they studied popular object-detection software programs for their report, reasoning that the same software may also be used by self-driving car companies.
“The main takeaway from our work is that vision systems that share common structures to the ones we tested should be looked at more closely,” one of the co-authors told Vox.
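The study’s core comparison is straightforward to express in code. The sketch below is illustrative only, not the researchers’ actual code: it assumes each pedestrian image has been hand-labeled with a Fitzpatrick skin-type rating (the paper grouped types 1-3 as lighter-skinned and 4-6 as darker-skinned) and a flag for whether a detector found that pedestrian, then measures the gap in detection rates between the two groups.

```python
# A minimal sketch of the kind of comparison the paper describes -- not
# the researchers' actual code. Each pedestrian instance carries a
# hand-annotated Fitzpatrick skin-type label (1-6) and whether the
# object detector found it.

from dataclasses import dataclass

@dataclass
class PedestrianInstance:
    fitzpatrick_type: int  # 1 (lightest) through 6 (darkest)
    detected: bool         # did the object detector find this pedestrian?

def detection_rate(instances):
    """Fraction of pedestrians the detector successfully found."""
    if not instances:
        return float("nan")
    return sum(p.detected for p in instances) / len(instances)

def skin_tone_gap(instances):
    """Detection-rate gap between lighter- and darker-skinned groups."""
    lighter = [p for p in instances if p.fitzpatrick_type <= 3]
    darker = [p for p in instances if p.fitzpatrick_type >= 4]
    return detection_rate(lighter) - detection_rate(darker)

# Toy data: a gap of 0.05 mirrors the five percentage point disparity
# reported in the paper.
sample = [PedestrianInstance(2, True)] * 95 + [PedestrianInstance(2, False)] * 5 \
       + [PedestrianInstance(5, True)] * 90 + [PedestrianInstance(5, False)] * 10
print(f"gap: {skin_tone_gap(sample):.2%}")  # gap: 5.00%
```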
Trouble detecting pedestrians
Such warnings didn’t appear to dampen industry hype over self-driving cars. On April 22, a day that Tesla named “Autonomy Investor Day,” CEO Elon Musk said that a million robo-taxis with full self-driving capabilities would be ready for consumers sometime next year.
General Motors, in the midst of cutting 14,000 jobs to refocus on autonomous technology, recently said that its $2.1 billion in profits from the first quarter of 2019 would go toward producing a fleet of autonomous vehicles to be used for ridesharing, also ready sometime next year. And in late March, state lawmakers in Utah unanimously voted to open their roads to self-driving car testing with a measure that legally designates computer software as a driver.
Whether the cars are ready to deploy without an actual human inside, however, remains an open question under debate between the industry and its critics.
Phil Koopman is a robotics engineer at Carnegie Mellon University and a safety tester for companies experimenting with self-driving technology. Though he can’t disclose his individual clients, he is transparent about the open safety questions that he says the industry has yet to publicly answer.
"We've seen some of these systems have trouble identifying construction workers,” he tells ConsumerAffairs, describing anecdotal experience with the technology. “We speculate it's because they're wearing yellow and green coats that no one else wears.”
"It also has trouble with women wearing short skirts with bare legs,” he says. “We’ve actually found instances where it totally does not see them."
Independent researchers have previously said that detecting motorcyclists may also be a problem, and industry engineers have admitted that detecting bicyclists is yet another blind spot. (Representatives from Waymo and Ford have previously proposed equipping cyclists with sensors as one possible solution, something that cyclists say unfairly shifts the burden to vulnerable road users.)
When he spoke to ConsumerAffairs, Koopman had not reviewed the paper comparing how object-detection software responds to different skin tones. But generally, he noted that technology is only as biased as the humans who create it. If AI is not “shown” enough people who look a certain way, then the machine won’t learn to see them as people.
"It isn't really that they have dark skin, it’s that they have skin color that's different than most of the dataset,” he says.
Like much else with autonomous vehicles, information about whether the industry is using enough people of color in its datasets isn’t publicly available. But research into other emerging technologies in Silicon Valley isn’t encouraging on this front. In January, a report from the Massachusetts Institute of Technology (MIT) said that Amazon’s facial recognition software, Rekognition, appeared to struggle with identifying women and black people.
“Consequently, the potential for weaponization and abuse of facial analysis technologies cannot be ignored,” the MIT researchers warned.
As for self-driving cars, Koopman says the industry faces two major questions right now.
“When are they going to deploy? When are they going to be acceptably safe?” he says. “I would strongly prefer that before they deploy. But right now, with our current regulatory system, they get to deploy when they think they are safe, which is not the same thing as knowing that they are safe.”
The A.V. Start Act stalls
The auto and tech companies that make up the self-driving industry have assured consumers that their technology will save lives, repeatedly pointing to the thousands of car crashes that humans cause every year as justification. Federal guidelines for self-driving cars are mostly voluntary, leaving regulation largely to states like Arizona, where lawmakers have promised to “pave the way for new technology” and refrain from doing anything that would “put the brakes on innovation.”
The A.V. Start Act, a bill that would allow for expanded self-driving testing and deployment at the federal level, has stalled amid resistance from national organizations representing cyclists, police officers, nurses, wheelchair-users, and brain injury victims, to name a few of the groups who say that the technology needs to undergo further testing on closed courses.
“Even if you never get inside a driverless car, everyone will share the roads with them,” notes Ralph Hotchkiss, the founder of a company that produces wheelchairs.
But the industry and some lawmakers insist that the technology is ready for public roadways. When Utah’s lawmakers voted to bring self-driving testing to the state in March, state officials characterized the technology as a public good.
“We believe that connectivity and autonomy will save lives,” Blaine Leonard, an engineer with the state’s department of transportation, recently told Government Technology magazine. “We believe in the long term, this will be safer, so we want to encourage it and we want to promote it.”
Safety advocates remain skeptical of that narrative, pointing out that the industry has been open about the enormous amounts of money it expects autonomous technology to generate.
“Autonomous driving technology will enable a new ‘Passenger Economy’ worth US $7 trillion,” Intel has optimistically predicted, “more than the projected 2017 GDPs of Japan and Brazil combined.”