A security researcher has discovered that the laser-based sensors self-driving cars use to detect and avoid obstacles in their paths can easily be fooled: a line-of-sight attacker armed with little more than a laser pointer can trick those sensors into detecting, and avoiding, obstacles that don't actually exist.

Self-driving or driverless cars are widely predicted to be the next big innovation in automotive technology — indeed, it's possible that today's infants will come of age in a world where “driving your own car” is as obsolete as horse-and-buggy combos are now.

Google has already developed and tested a semi-driverless car (which still requires a licensed and alert human driver as a failsafe in case anything goes wrong). Various car manufacturers including Lexus, Mercedes and Audi are developing self-driving prototypes of their own. But, of course, driverless cars with wireless computer controls are as vulnerable to hacking as any other Internet-connected device – and have a few other vulnerabilities as well.

Lidar systems

Driverless cars use laser ranging systems, known as “lidar” (a riff off of “radar”), to detect obstacles and navigate around them. Radar, which was originally a semi-acronym for RAdio Detection And Ranging, “sees” things by sending out radio waves, then measuring which of those waves bounce back off of various objects and how long they take to return. Lidar does the same thing with laser pulses, which are narrower and far more precise than the radio waves used in radar.
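To put that ranging principle in concrete terms, here is a minimal illustrative sketch in Python; the numbers are made up and are not taken from any real lidar unit. A pulse's round-trip time translates directly into a distance.

    # Illustrative time-of-flight ranging, not code from any actual lidar firmware
    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def echo_delay_to_distance(round_trip_seconds):
        """The pulse travels out to the object and back, so distance is half the round trip."""
        return SPEED_OF_LIGHT * round_trip_seconds / 2

    # Example: an echo arriving 200 nanoseconds after the pulse was fired
    print(echo_delay_to_distance(200e-9))  # roughly 30 meters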

Jonathan Petit, a scientist at the software-security company Security Innovation, told IEEE Spectrum that he was able to fool the lidar systems of self-driving cars with a device he made out of only $60 worth of off-the-shelf technology.

“I can take echoes of a fake car and put them at any location I want. And I can do the same with a pedestrian or a wall.” Petit made his device using a low-powered laser and a pulse generator, although he said “you don’t need the pulse generator when you do the attack. You can easily do it with a Raspberry Pi or an Arduino. It’s really off the shelf.”

Once he had made this device, Petit could use it to create the illusion of a car, wall or pedestrian, from the lidar's perspective, while standing anywhere from 20 to 350 meters (roughly 65 to 1,150 feet) away from the sensor. Perhaps even more disturbingly, Petit could carry out these attacks on a lidar-equipped car without the car's passengers being aware of it.
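Petit has not published his tool, but the principle behind this kind of spoofing is simple to sketch: to make a lidar “see” an object at a chosen distance, an attacker only has to send back a light pulse with the matching round-trip delay. The Python function below is a hypothetical illustration of that arithmetic, not Petit's actual code.

    # Hypothetical illustration of echo spoofing; names and values are invented
    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def spoof_delay_for_distance(fake_distance_meters):
        """Delay after the victim lidar fires its pulse that makes a returned
        pulse look like an echo from an object at fake_distance_meters."""
        return 2 * fake_distance_meters / SPEED_OF_LIGHT

    # A phantom "wall" 40 meters ahead means firing back about 267 nanoseconds after the pulse
    print(spoof_delay_for_distance(40))  # roughly 2.7e-7 seconds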

The good news is that, according to Petit, there is a way for car or lidar manufacturers to solve this problem. “A strong system that does misbehavior detection could cross-check with other data and filter out those that aren’t plausible,” he said. “But I don’t think carmakers have done it yet. This might be a good wake-up call for them.”
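Petit didn't describe such a misbehavior-detection system in detail, but the basic idea can be sketched: before treating a lidar return as a real obstacle, check that it persists across several scans and that an independent sensor, such as radar, roughly agrees. Everything below, including the sensor interfaces and thresholds, is a hypothetical illustration.

    # Hypothetical plausibility check; sensor interfaces and thresholds are invented
    def obstacle_is_plausible(lidar_ranges_m, radar_ranges_m,
                              min_consecutive_scans=3, max_disagreement_m=2.0):
        """lidar_ranges_m: recent lidar range readings (meters) for one bearing.
        radar_ranges_m: radar range readings (meters) for the same bearing."""
        # A real obstacle should show up in several consecutive scans, not a single pulse
        if len(lidar_ranges_m) < min_consecutive_scans:
            return False
        # An independent sensor should roughly agree on the range
        if not radar_ranges_m:
            return False
        return abs(lidar_ranges_m[-1] - radar_ranges_m[-1]) <= max_disagreement_m

    # Example: three consistent lidar returns near 25 m, corroborated by radar at 24.5 m
    print(obstacle_is_plausible([25.2, 25.0, 24.9], [24.6, 24.5]))  # True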

Petit plans to formally present his findings at the Black Hat Europe security conference this November.
