[Photo: A 1902 Century steam car (Source: Wikipedia)]

It's sometimes fun to sit here now, at the Pinnacle of Technological History To Date, and read old-timers' predictions about their future (our past) and see how utterly wrong most of them were.

For example: when railroads were first invented — more specifically, when inventors first realized “Hey, these newfangled 'engine' things might not just power labor-saving machines; it's possible that, for the first time in history, people could travel on land faster than even the fastest horse can run!” — there were fears that the human body simply couldn't withstand prolonged exposure to crazy-fast speeds like 15 miles per hour, and that at truly ridiculous speeds, like 30 or 40 mph, you'd either suffocate because it's impossible to breathe with air being sucked out of your carriage that quickly, or be unable to see anything because the human optical system probably can't process images of such unnaturally fast motion.

At the other extreme, there were predictions that proved far too wildly optimistic — the year 2000 came and went with no moon bases, flying cars, robot servants or 15-hour standard workweeks.

Heh heh heh. Silly old-timers with their silly old predictions, right? Except if you sit here now, at the Pinnacle of Technological History To Date, and make your own predictions about the future, there's a very good chance that one day your predictions will prove just as silly.

Driverless car

Consider the eventual promise of a true driverless car: get in your vehicle, give it your destination, then go to sleep if you wish, confident you'll be at your destination when you wake up. Too optimistic, or on its way? Google has already developed and tested a semi-driverless car: you can't actually sleep while you're in it, because the car requires an attentive human failsafe in case anything goes wrong.

But this is the earliest and most primitive form of the technology, so perhaps the suggestion, “A truly driverless robot car will come, even within modern people's lifetimes,” won't be listed among the silly-old-timer predictions of the future.


Indeed, most of the “driverless cars are/will be much better than cars with drivers” arguments sound perfectly logical, even airtight, from a year-2014 perspective: it's true that of all the car accidents involving human drivers to date, the vast majority were caused by either human error or human frailty — sleepy drivers, distracted drivers, drunk or drugged drivers, careless drivers and just plain bad drivers.

Then, too, there are the accidents caused by generally good and responsible drivers whose reflexes, response times and senses are merely human rather than superhuman; could a well-designed and well-programmed robot car avoid the sort of accidents certain to happen with frail, faulty humans?

Life savers

Here are two articles from the past week exploring the near-future possibilities — and possible ramifications — of automated driving systems. The first is a May 7 article from IEEE Spectrum's blog, asking “How many lives will robocar technologies save?”

[Three-word summary: possibly a lot.]

This isn't about actual driverless or robot-driven cars so much as robot-assisted driving technology: warning systems far better and more precise than human senses. In a project sponsored by Toyota, engineers at Virginia Tech ran a long, complicated and detailed study which led to thoroughly unsurprising results: yes, cars equipped with lane-change-warning devices would probably reduce the number of accidents caused by accidental or careless lane changes, and cars equipped with collision-warning systems are less likely to get into collisions. Why bother studying such obvious things?

But Spectrum addresses that:

You may well ask why anyone bothers to model the advantages of these warning systems when it stands to reason that they must save lives. Problem is, what stands to reason hasn’t always turned out to be true.

The early antilock braking systems (ABS) seemed so obviously good that the public flocked to buy them as optional features, and insurers offered discounts on policies for cars equipped with them.  But when the accident reports rolled in, insurers found that ABS had made no visible improvement. It seems that drivers, emboldened by their super-automatic brakes, had driven a little more aggressively than before.

So proximity sensors and collision-avoidance systems and other warning devices will almost certainly be standard features on cars of the near future, whether driven by humans or not.

[Photo: A Google self-driving car (Source: Google)]

Tough decisions

Meanwhile, Wired took its driving-technology predictions even further, in this piece by Patrick Lin addressing some of the philosophical and ethical questions which programmers of true robot-driven cars would have to consider: “The robot car of tomorrow may just be programmed to hit you.”

Here's a problem: even assuming perfected driverless-car technology, and further assuming a time when all cars are driverless (and thus human error and human weakness no longer cause car accidents), 100% perfect accident avoidance still won't be possible:

In future autonomous cars, crash-avoidance features alone won’t be enough. Sometimes an accident will be unavoidable as a matter of physics, for myriad reasons – such as insufficient time to press the brakes, technology errors, misaligned sensors, bad weather, and just pure bad luck. Therefore, robot cars will also need to have crash-optimization strategies.

What does that mean? Here's an example: suppose a crash is unavoidable, there are two cars ahead, and your robo-car must hit one of them. One is a heavy, sturdily made SUV with an excellent safety rating; the other is a flimsy little mini-car. Which should your robo-car hit?

Well, the robot programming would want to minimize the chance of causing death or serious injury, which means hitting the big safe car is the better choice, as the flimsy car's driver is far more likely to be hurt. Except that means robot cars would effectively be programmed to single out cars with high safety ratings.
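
To see how innocently that logic arises, here's a minimal sketch in Python of a naive “minimize expected harm” rule. The function name and all the injury figures below are invented purely for illustration; nothing here describes any real vehicle system.

    # Hypothetical sketch of a naive "crash-optimization" rule.
    # All names and numbers are invented for illustration only.

    def expected_harm(injury_probability, severity):
        """Expected harm = chance occupants are hurt x how badly."""
        return injury_probability * severity

    # Two unavoidable collision targets (made-up figures): occupants
    # of a high-safety-rated SUV are less likely to be badly hurt
    # than occupants of a flimsy mini-car.
    targets = {
        "sturdy SUV": expected_harm(injury_probability=0.2, severity=0.5),
        "flimsy mini-car": expected_harm(injury_probability=0.8, severity=0.9),
    }

    # The naive rule: steer toward whichever target minimizes expected harm.
    print(min(targets, key=targets.get))  # -> sturdy SUV

The arithmetic is innocent enough, but the output is exactly the problem: the minimum-harm choice reliably lands on the better-protected vehicle.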

As Lin pointed out:

Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require the deliberate and systematic discrimination of, say, large vehicles to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges.

Helmeted or not?

And other challenges are even worse. Lin posited another hypothetical: suppose a crash with a motorcyclist is unavoidable, and the car must choose between hitting a rider who wears a helmet and one who does not. A rider without a helmet is far more likely to die in an accident, and reducing the chance of death or serious injury is surely a laudable goal, yet there are obvious ethical problems with programming cars to target helmet-wearing motorcyclists:

...we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person is much less responsible for not wearing a helmet, which is illegal in most U.S. states.

Not only does this discrimination seem unethical, but it could also be bad policy. That crash-optimization design may encourage some motorcyclists to not wear helmets, in order to not stand out as favored targets of autonomous cars, especially if those cars become more prevalent on the road.
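
The helmet dilemma, for what it's worth, is the same arithmetic with different inputs. A variation on the earlier sketch (again, purely invented numbers):

    # Same naive minimum-expected-harm rule, helmet edition.
    # Figures are invented for illustration only.
    riders = {
        "helmeted rider": 0.3 * 0.6,    # less likely to be badly hurt
        "unhelmeted rider": 0.9 * 0.9,  # far more likely to die
    }
    print(min(riders, key=riders.get))  # -> helmeted rider

Minimizing expected harm once again points the car at the person who took the safety precaution.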

Will driverless cars really take off in your lifetime, or at least your kid's? Will solutions be found for the ethical problems they pose? Wait long enough, and the question's bound to answer itself.

