What kind of driverless cars do people want?

…the surveys also revealed a lack of enthusiasm for buying or using a driverless car programmed to avoid pedestrians at the expense of its own passengers. One question asked respondents to rate the morality of an autonomous vehicle programmed to crash and kill its own passenger to save 10 pedestrians; the rating dropped by a third when respondents considered the possibility of riding in such a car.

Similarly, people were strongly opposed to the idea of the government regulating driverless cars to ensure they would be programmed with utilitarian principles. In the survey, respondents said they were only one-third as likely to purchase a vehicle regulated this way, as opposed to an unregulated vehicle, which could presumably be programmed in any fashion.

That is from an MIT press release, here is the background:

The paper, “The social dilemma of autonomous vehicles,” is being published today in the journal Science. The authors are Jean-Francois Bonnefon of the Toulouse School of Economics; Azim Shariff, an assistant professor of psychology at the University of Oregon; and Iyad Rahwan, the AT&T Career Development Professor and an associate professor of media arts and sciences at the MIT Media Lab.

The abstract notes that if drivers are required to purchase “utilitarian-programmed” vehicles, they may be less willing to buy at all, thus postponing the adoption of what is likely to be a much safer technology.

For the pointer I thank Charles Klingman.


A better question is would you feel safer in a Google car or a typical taxi today.

I think people grasp the biggest difference. The question about pedestrians is hair-splitting.

I agree the Trolley type problem always comes up, but I think it's pretty irrelevant in the end. Just implement the damn thing.


It's all fun and games, until you are sentenced to life in prison for pushing a fat man off a bridge.

or are sentenced for not pushing the fat man off the bridge.

Or are sentenced for either pushing or not pushing a fat man off the bridge.

Although let's face it, we're more likely to be the allegorical fat man than the pusher.

I find it best to simply avoid all railroad switches and bridges. However, now I'm wondering whether this decision itself is morally significant?

What if you are the fat man who fights to not be thrown off the bridge and succeeds, or simply starts running after seeing the runaway train and the way his railway boss is looking at him?

There are lots of better questions than "would you buy a car 'programmed to crash and kill' you?"

But the social scientists want to get in on the hype train (trolley).

They could do better. "Would you buy a car that would protect pedestrians, even if it meant higher risk for passengers?"

"Programmed to crash and kill" is idiotic. The car is not programmed to kill you.

At least until someone hacks it, anyway.

"At least until someone hacks it, anyway."

Any car that can have its driving algorithms remotely modified is a flawed design. And yes, a 'hacker' could technically hack it if they had physical access to the car. But of course they could also just puncture your brake lines.

JWatts has it right. This sort of hacking is a Hollywood fiction.

There is an entire industry, called application security, that the driverless car makers can use to help reduce the risks. The manufacturers already engage with that industry on other fronts.

Hacking is a serious issue, but it can be addressed with technical and legal remedies if the principals want to address it.

Well ok, in related news an AI programmed to kill you:


AI on the battlefield is scarier than AI on the highway.

If the car makers wanted, they could allow remote control of the cars. Remote controlled cars have been around for a while.

Not if you are assessing how self-interested versus group-oriented the person is. I wonder how people's answer to this type of question correlates with their position on Brexit or voting Trump.

The thing that bothers me as an engineer is that we are nowhere near a "car" or "AI" that can take an image scan or short-range radar image and translate it into a conceptual certainty that someone must die. I'm not even sure that human drivers ever have that moment. If there is a kid in the road, you go off, and try like hell to high-center on the guard rail before you go off the 1,000-foot cliff or whatever.

A car would be programmed to look for an opening, always an opening, that reduces total casualties. Presented like that, it's a different question. The car is no longer an antagonistic agent, "killing you" to save another.
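That reframing can be sketched as a search over candidate maneuvers: prefer any feasible "opening", and only then fall back to the least-bad option. This is a hypothetical toy, not any vendor's actual planner; the maneuver names and numbers are invented for illustration.

```python
# Toy sketch of "look for an opening": filter to feasible maneuvers first,
# then pick the one with the lowest expected casualties. All values invented.

def pick_maneuver(maneuvers):
    """Each maneuver is (name, expected_casualties, feasible)."""
    feasible = [m for m in maneuvers if m[2]]
    # Prefer any feasible opening; fall back to the least-bad option otherwise.
    candidates = feasible if feasible else maneuvers
    return min(candidates, key=lambda m: m[1])[0]

options = [
    ("brake_straight", 0.4, True),
    ("swerve_left",    0.0, True),   # an opening: no expected casualties
    ("swerve_right",   1.2, False),  # blocked
]
print(pick_maneuver(options))  # swerve_left
```

Framed this way, the car is never "choosing who dies"; it is ranking escape routes, with the trolley case only appearing when every route is bad.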

And if the AI ever were developed to the point of choosing who will die 0.3 seconds from now, it would also be able to use its predictive powers a half second earlier to avoid killing anyone.

What will happen in reality is that the car will realize it's going 60 mph and has to choose among hitting a building, a full-grown tree, or a person. And it will hit the person.

And realize that there should never be pedestrians where a car is going 60mph. It would involve someone else doing something very stupid, like jumping out of a car on a freeway.

So, you would program the car to hit the policeman giving a ticket to a reckless driver actually driving his car?

Maybe I'm too Kirk, still rejecting the Kobayashi Maru scenario, but IMO that's the right way to live.

The priority list should be as follows:
* The life of the members of the autonomous vehicle firmware programming team, in order of (hierarchical) seniority.
* Other staff of the company making the firmware, and their competitors (under reciprocity) by seniority along with their families and pets.
* The owner of the autonomous vehicle.
* People specified by the owner of the vehicle, in order. This often will include the occupants, but perhaps not if the vehicle was used to kidnap people.
* High ranking government officials, of the owner's political party.
* Everyone else, based on their annual income.

It may not be able to infer all that without good real-time facial recognition, so it should just kill the pedestrians for the time being. It's not like the AV staff are going to walk anywhere ever again.

I would rather ride in a taxi than a Google car. Google cars are unproven, they are padding the safety stats in a misleading way, and they don't go where I want to go. But it is a trick question: I would never voluntarily ride in any vehicle not under my ownership and control.

Agreed. In fact, self-driving cars should be banned on public streets outright.

They are part of a campaign to disempower humans from controlling their own lives, step by step.

Empower humans by banning them from making a choice you don't like...nice!

It won't be their choice for long. Self-driving cars are being introduced in order to become mandatory eventually.

Believing otherwise is incredibly naive.

Maybe, but that is speculation. For now, the only person advocating a coercive policy is you.

Dan, without foresight, you might just as well be sheep. The best way to establish an unshakeable totalitarian rule is through the salami tactic, i.e. small steps toward a much more authoritarian world.

We see this in surveillance and now it creeps into the physical space. "Autonomous" cars means loss of the autonomy of the traveller. The first step is to normalize it voluntarily, but the end goal is always to destroy the alternative.

I think the correct term is "sheeple".

Only mandatory on public roads. Do what you like on private property.


I disagree with you about banning driverless cars, but my goodness the level of libertarian obtuseness in this comment thread is off the charts.

They have already predicted and planned it: http://www.e-reading.club/chapter.php/82060/20/Isaac_Asimovs_Worlds_of_Science_Fiction._Book_9__Robots.html
Make no mistake, it shall come to pass.

I will wait until the George Costanza model is available.

The interior is draped in velvet.

A) To think that regulators will understand anything in a system as complicated as a driverless car is laughable.

B) Once driverless cars come out, the surplus will be so obvious that they will sweep the market; only the fringe will continue to 'self-drive'.

C) Even though regulators are know-nothings, do not underestimate their ability to destroy a market through their quest for power and kickbacks.

"Once driverless cars come out the surplus will be so obvious that they will sweep the market, only the fringe will continue to ‘self drive’"

You are insane

What is insane about this?

@carlospln would appear to be a dim-witted bot.

"What is insane about this?"

I think any idea outside of carlospln's world-view bubble is automatically viewed as a form of insanity.

It's just like GM foods. The risks are so small and the gains so enormous that countries everywhere accept them without issue. Oh wait....

Where the gains are enormous, countries generally do accept them. Where the gains are less enormous (rich countries) is where they face the most pushback.

"Where the gains are less enormous (rich countries) is where they face the most pushback."

I think you're forgetting the history of GMO in poor African countries. Zambia even refused GMO during a famine.

In a farming nation, buying grain is beneficial because you can both grind it to eat and plant it to grow lots more grain.

But the terms for GMO grain are that, as a farmer, you cannot replant it.

Only people who see capital as something only to be burned can believe that never planting grain makes sense.

@Mulp: Not being able to save seed is not inherent in GMO crops. Terms like that have been around since long before GMO corn or whatever existed. So to associate it with GMOs is either ignorant or deceptive.

Do you know why farmers agreed to those terms, though? For two reasons. First, because those original hybrid seeds that started that, they had a much higher yield than non-hybrids so that alone made them very attractive. Second, even if you did try to save the seeds, they would not breed true and the second generation plants would have a much lower yield, and then lower again in the third generation, and so on. Not very attractive for a farmer trying to maximize their yield, so they'd be unlikely to save the seeds anyways, even if that term in the contract never existed.

That's probably one of the most common pieces of misinformation about GMO crops out there, along with the whole terminator seeds thing.

I think both men and women should have equal rights before the law. Sorry, but that's just how I feel...

Dear "Maybe it's me BOT",
Can we do without the Driverless comments?

I don't think I would trust my loved ones' lives or mine to a machine's judgment. Sorry, but that’s just how I feel…

Do you understand how modern cars work? Or how elevator control systems work? Or commercial jets' avionics? Or ...

You are already trusting your life to a machine's judgment all the time. Every second you're driving a car at 70 MPH on the highway, and the car refrains from deploying the driver airbag or slamming on the right front wheel's brake, you survive based on the machine behaving correctly.

"I don’t think I would trust my loved ones’ lives or mine to a machine’s judgment. "

You mean like an elevator or a roller coaster? Because if you ride on those you're already trusting your safety to a machine's design and judgment.

The more man gives up his faculties and reasoning in favor of machines, the more of his soul is lost. Sorry, but that’s just how I feel…

Ummm, I'll gladly give up all of my soul that's invested in driving the car, washing dishes and doing laundry.

I would rather die than give up my humanity. Sorry, but that’s just how I feel…

It seems like if you really meant that, you'd adopt an Amish lifestyle. If you haven't adopted a much less machine-assisted lifestyle, maybe you don't really hate machines as much as you tell yourself you do?

It seems to me that part of using my human reasoning processes well is recognizing when a machine can do a job that I can't do, or can do a job more safely than I can do it. Part of what we are, as humans, is tool creators, engineers, designers. (I'm also a techie, and I've had a hand in developing some moderately widely used technology, so perhaps the whole enterprise of making things to serve humans is something I'm invested in more than most people.)

Every night while I sleep, a not-too-bright set of machines guards my family. If a fire starts, one of these machines will notice the smoke[1] and sound a loud alarm all through my house. These alarms probably save at least several hundred lives per year in the US. I do not perceive my soul to be imperiled in the least by trusting these machines to do this, any more than I do trusting in the bureaucratic mechanisms of building and electrical codes and testing labs that make it a lot less likely that my house will burn down than was true of houses in the past.

Another machine uses its judgment to keep my house reasonably comfortable while trying not to waste energy or money. I'm much happier to offload this task to a machine, rather than trying to (say) turn on and off radiators in every room to get the desired temperature.

Some judgment tasks *can't* be done by humans. My car does a lot of adjustments of engine parameters in realtime, to get a good mix of performance and efficiency. (Earlier cars did this too, but they used analog control systems rather than a computer.) Even if I could employ a full-time human to watch all the dials and operate all the levers, he couldn't manage the speed of those adjustments. If I wreck the car, a machine detects the impact and triggers airbags and seatbelt pre-tensioners with a reaction time no human could match.

When I try to start the car, the computer inside the car runs a cryptographic protocol with my key, and decides whether or not I'm authorized to start up the car. This can fail in annoying ways, but it also makes my car a whole lot harder to steal.
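That handshake is, in rough outline, a challenge-response protocol. Real immobilizers use dedicated transponder ciphers rather than what follows; this HMAC-based toy, with an invented key and invented function names, just conveys the shape of the idea.

```python
# Toy challenge-response sketch of a car immobilizer handshake.
# Real systems use dedicated transponder ciphers; HMAC stands in here.
import hmac, hashlib, os

SHARED_KEY = b"key-shared-by-car-and-fob"  # invented for this demo

def fob_response(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """The key fob answers a fresh challenge with a keyed MAC."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def car_authorizes(challenge: bytes, response: bytes,
                   key: bytes = SHARED_KEY) -> bool:
    """The car recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)  # fresh nonce per start attempt, so replays fail
print(car_authorizes(challenge, fob_response(challenge)))            # True
print(car_authorizes(challenge, fob_response(challenge, b"wrong")))  # False
```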

If I get sick and need a CT or MRI scan, machines take a jumble of information and turn it into a three-dimensional model of what's going on inside my body, which is given to a surgeon to decide what needs to be done next. I don't think the surgeon sees this as a loss of his autonomy or soul, just a better way to make sure he doesn't needlessly take out a healthy appendix or miss a cancer or something.

None of these look to me like a fundamental loss of autonomy. Tools extend our reach and abilities, and let us do things we couldn't have done without them. That's as true of a computer eventually driving my car as it is of a computer keeping my house comfortable or monitoring my vital signs while I'm in the hospital.

[1] Also sometimes water vapor from a shower, and on a couple notably annoying occasions, some ants.

This is all nonsense. Car accidents rarely happen because of a moral dilemma between hitting pedestrians and running into a wall. They happen because of the incompetence of the 200 lbs of meat behind the steering wheel. If you eliminate drunk, distracted, and aggressive drivers, you've cut automobile fatalities by a massive amount. And even when these moral dilemmas might occur, they occur because the human driver was too aggressive or not watching what he was doing.

Spare me. This kind of dilemma will rarely come up. It's frankly scare-mongering.


It is simply people who would have no hope being part of the creation of such a complex piece of software attempting to gain some control over it.

Yes, but how do you convince the public of that if the media keep running these kinds of stories about this kind of research that keeps being done?

Convenience will win out, as long as the technology isn't banned before it gets started.

And insurance costs. Assuming the machines are much less accident prone, then the insurance costs will plummet. So, insurance companies will start offering much cheaper rates to customers who own autonomous vehicles.
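The actuarial logic here is simple expected-value arithmetic; a toy sketch with made-up crash rates, claim sizes, and loading factor:

```python
# Toy premium calculation: a fair premium tracks expected claim cost, so if
# autonomy cuts crash frequency, premiums fall roughly in proportion.
# All numbers below are illustrative assumptions.

def fair_premium(crash_prob_per_year: float, avg_claim: float,
                 loading: float = 1.25) -> float:
    """Expected annual claim cost, grossed up by an expense/profit loading."""
    return crash_prob_per_year * avg_claim * loading

human = fair_premium(0.04, 20_000)   # assumed 4% annual crash chance
robot = fair_premium(0.01, 20_000)   # machines assumed 4x safer
print(human, robot)
```

Under these assumptions, the autonomous-vehicle premium is a quarter of the human one, which is the mechanism the comment points to.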

Well, we could charge them with WrongSpeak and put them in prison.

No, there's no need to restrict speech. However, it is important to counter that type of rhetoric with as many basic facts as possible.

To extend the point a bit further: if the autonomous vehicle all but eliminates driver error, who is most likely at fault when a pedestrian is suddenly at risk of being struck and killed? It only seems fair that the pedestrian be the one to bear the harm.

+1 Just imagine how people would basically just throw themselves into the streets anywhere anytime if they knew all the cars were programmed to avoid them. It would be a game to some sociopaths to see how many cars they could wreck/damage/inconvenience.

This will happen in the short term. Then people will get serious about enforcing jaywalking laws, because the cars have video evidence.

A classic sci-fi story from decades ago had exactly the situation you describe: everyone jaywalks everywhere because they know the cars will stop for them. Even an issue of Amazing Spider-Man from 1987 had the villain's self-driving van crash because a bag lady pushed her cart into the middle of the street and the van swerved to avoid her.

Guess we'll just have to insert a chip into the pedestrians to ensure they don't jaywalk.

Or just have the driver whose car slammed on the brakes in an emergency stop call the police and give them the video evidence from the car camera. Since you can buy car cameras that mount on the mirror for $200, I suspect they will become standard on autonomous cars.

There is no great stagnation. Self-walking shoes!

Right. Cars should be programmed to not hit pedestrians.

Car accidents are sometimes also avoidable by taking actions that are contrary to traffic laws. I think anyone who drives a fair amount has had to do that. Will self-driving cars do that? Or will traffic laws be hard-coded in them? If the latter, avoidable wrecks will happen.

As if it would be possible to program these choices reliably without compromising the machine.

Robotic cars will be primarily programmed to avoid accidents and avoid accident-prone situations. That's the positive. The big risk in robotic vehicles - until they become the default - will be being hit by someone else.

Life is not a trolley problem, with tracks and track switches, and clear choices.

The best action in case of a possible accident is still to remove energy from the system (= brake really hard) and stay on track, if simple evasion (avoiding any contact) is not possible.
The utilitarian approach needs full information, which the car does not have.
It might drive off the cliff to avoid the elderly couple on the road (provided that there is only one passenger...), but it cannot know whether there is a crowded beach at the bottom of the cliff.
Sophisticated "trolley car algorithms" just add a lot of complexity without being of much practical use - and complexity produces failure in this kind of environment.
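The "remove energy" point can be made quantitative with the standard stopping-distance formula v²/(2μg). A quick sketch, with an assumed dry-road friction coefficient of 0.7:

```python
# Back-of-envelope stopping distance under full braking: v^2 / (2 * mu * g).
# mu = 0.7 is an assumed dry-asphalt friction coefficient; wet roads are worse.

def stopping_distance_m(speed_kmh: float, mu: float = 0.7,
                        g: float = 9.81) -> float:
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * v / (2 * mu * g)

print(round(stopping_distance_m(60), 1))  # roughly 20 m at 60 km/h
```

Note the quadratic dependence on speed: doubling speed quadruples the stopping distance, which is why shedding energy early dominates clever last-moment steering.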

If the one passenger is 19 and the elderly couple have a combined life expectancy of 23 years, then the car going over the cliff would have erred.

These calculations are going to get complicated: https://en.wikipedia.org/wiki/Quality-adjusted_life_year
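Making the comment's arithmetic explicit (the 60-year figure for the 19-year-old is an assumed remaining life expectancy; a full QALY calculation would also weight each year by health quality):

```python
# Compare expected remaining life-years lost under each choice.
# Numbers are illustrative; QALY weighting by health quality is omitted.

def life_years_lost(people):
    """Sum remaining life expectancy over the people sacrificed."""
    return sum(remaining for _, remaining in people)

over_cliff  = life_years_lost([("passenger, age 19", 60)])  # assumed ~60 years left
stay_course = life_years_lost([("elderly couple", 23)])     # 23 combined, as stated

print("going over the cliff errs" if over_cliff > stay_course else "staying errs")
```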

Yes, but would they vote for the regulation, or are they irrational? (Well, both maybe.)

Human beings don't use morality or utilitarian principles to avoid car accidents. When stimuli are interpreted as indicating danger they rely upon one or more ingrained behavior patterns, typically involving braking and/or swerving. In other words, when danger is recognized they rely on "muscle memory". The deliberative part of the brain isn't involved, although it is quite happy to make up explanations for why the visual and motor centers acted as they did after the fact.

Learner drivers who haven't developed what is referred to as "muscle memory" may try to think things through. This makes them extremely dangerous drivers as it slows their response time.

A driverless car will, and I presume does, use heuristics to avoid accidents, just like a human does, although hopefully it will be a lot better at it.

A driverless car that can make high level utilitarian decisions without error and still avoid accidents as well as a heuristic that says, "hit as few objects that appear to be human as possible," is a car that might be able to eliminate most driving by working out just where it is that people are likely to be happiest and leaving them there.

Unlike humans, computer drivers would have time to "think things through" in real time due to their processing speed. If a driverless car identified two potential obstacles, it would be able to assign probabilities to various outcomes and compute what object it is most important to avoid. It would also be able to weigh potential harm to a pedestrian against potential harm to riders.

I think the focus on this dilemma is rather silly, since it is a contrived case that is very unlikely to occur. But if such a situation did occur, the car could indeed process the information and make a high-level decision. It would be based on probabilities rather than "without error", though.
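That probability-weighted decision can be sketched in a few lines. The action names, probabilities, and harm scores below are invented for illustration, not drawn from any real planner:

```python
# Minimize expected harm over actions, where each action has a distribution
# over outcomes. All probabilities and harm scores are made-up values.

def expected_harm(outcomes):
    """outcomes: list of (probability, harm_score) pairs."""
    return sum(p * h for p, h in outcomes)

actions = {
    "brake_hard": [(0.7, 0.0), (0.3, 2.0)],  # likely stops; small crash risk
    "swerve":     [(0.5, 0.0), (0.5, 5.0)],  # coin-flip on hitting an obstacle
}
best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best)  # brake_hard
```

The point of the sketch is the one made above: the output is a least-bad action under uncertainty, not an error-free verdict about who lives.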

'Unlike humans, computer drivers would have time to “think things through” in real time due to their processing speed.'

It isn't a question of speed, it is a question of integration and robustness - especially when one realizes that a self-driving car will essentially require the sort of maintenance required for an airliner to ensure that robustness. Well, some airliners, at least. As per Wikipedia on Air France Flight 447: 'At 02:06 UTC, the pilot warned the cabin crew that they were about to enter an area of turbulence. Probably two to three minutes after this the airplane encountered icing conditions (the cockpit voice recorder recorded what sounded like hail or graupel on the outside of the airplane, and the engine anti-ice system came on) and ice crystals started to accumulate in the pitot tubes. The pilots turned the aircraft slightly to the left and decreased its speed from Mach 0.82 to Mach 0.8 (the recommended "turbulence penetration speed").

At 02:10:05 UTC the autopilot disengaged and the airplane transitioned from normal law to alternate law 2. The engines' auto-thrust systems disengaged three seconds later. Without the auto-pilot, the aircraft started to roll to the right due to turbulence, and the pilot reacted by deflecting his side-stick to the left. One consequence of the change to alternate law was an increase in the aircraft's sensitivity to roll, and the pilot's input over-corrected for the initial upset.' https://en.wikipedia.org/wiki/Air_France_Flight_447

What is applicable to self-driving cars is not the failure of a single sensor leading to a chain of events that essentially involved a flight crew literally dropping their airplane into the ocean, but the reality that the problem with the pitot tubes had been known about, and only partially remedied, for an extended period. The airline industry is extremely highly regulated, and extremely sensitive to the problem of airplanes not functioning correctly while carrying passengers.

Now imagine 100,000 privately owned vehicles, 10,000 of them suffering from an unanticipated form of sensor degradation causing the system to perform incorrectly, leading to a string of failures (fatalities, destroyed vehicles, destroyed property - however defined). Now try to imagine a level of effort to determine and remove the problem comparable to what one sees with airliner crashes, including mandatory training and maintenance procedures for operators to avoid this problem in the future.

Robustness is the real problem when dealing with computer software/hardware, not processing speed. Robustness is a broad concept. For example, would it be more or less robust for the people within an autonomous vehicle to have the possibility to stop (or, in a more complex variant, to override) the system or not? This has nothing to do with processing speed, but as seen in this other Airbus accident, it is not exactly a theoretical question - 'To ensure that the thrust-reverse system and the spoilers are only activated in a landing situation, the software has to be sure the airplane is on the ground even if the systems are selected mid-air. The spoilers are only activated if at least one of the following two conditions is true:

* there must be weight of at least 6.3 tons on each main landing gear strut
* the wheels of the plane must be turning faster than 72 knots (133 km/h).

The thrust reversers are only activated if the first condition is true. There is no way for the pilots to override the software decision and activate either system manually.

In the case of the Warsaw accident neither of the first two conditions was fulfilled, so the most effective braking system was not activated. Point one was not fulfilled because the plane landed inclined (to counteract the anticipated crosswind). Thus the pressure of 12 tons on both landing gears combined required to trigger the sensor was not reached. Point two was not fulfilled either due to a hydroplaning effect on the wet runway.

Only when the left landing gear touched the runway did the automatic aircraft systems allow the ground spoilers and engine thrust reversers to operate. Due to the braking distances in the heavy rain the aircraft could not stop before the end of the runway. The computer did not actually know the aircraft had landed until it was already 125 meters beyond the halfway point of runway 11.

As a result of the accident, Airbus Industrie changed the required compression value from 6.3 tons to just 2 tons per main landing gear.' https://en.wikipedia.org/wiki/Lufthansa_Flight_2904

The pilots made a number of mistakes in this incident, it is true - but the system designed to prevent operator error meant actual braking was prevented by the computer, even as the flight crew tried to slow the aircraft. Computers are only as good as the fallible humans that program them, and computer controlled systems are only as good as the people that maintain them. However, one should note that essentially all industrial equipment, computer controlled or not, does have an emergency cut-off switch, because in the real world, things happen that are simply unanticipated. Or even more accurately, fully anticipated - earthquakes, fires, operator error being just 3 examples.
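The interlock quoted above is simple enough to state directly. Here is a sketch of that logic in Python, using the figures from the Wikipedia account; the function and parameter names are my own, not Airbus's:

```python
# Sketch of the A320 ground-braking interlock described above: spoilers need
# either condition, thrust reversers need the weight condition.
# Thresholds (6.3 tons per strut, 72 knots) are from the quoted account.

def braking_systems(left_strut_tons: float, right_strut_tons: float,
                    wheel_speed_knots: float, strut_threshold: float = 6.3):
    weight_ok = (left_strut_tons >= strut_threshold
                 and right_strut_tons >= strut_threshold)
    wheels_ok = wheel_speed_knots > 72
    return {"spoilers": weight_ok or wheels_ok, "reversers": weight_ok}

# Warsaw-like scenario: one gear touching, wheels aquaplaning below 72 knots.
print(braking_systems(left_strut_tons=8.0, right_strut_tons=0.5,
                      wheel_speed_knots=40))
# neither system activates, matching the accident narrative
```

The code is trivially "correct" against its specification, which is exactly the commenter's point: the failure was in the specification's fit to the real world, not in the execution.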

'It would be based on probabilities rather than “without error”, though.'

Note the Lufthansa incident - those deaths had nothing to do with probabilities. The system simply worked as designed, as any programmer with experience in commercial software would say. The blame lies elsewhere in that perspective, as the program did exactly what it was supposed to do. The problem being that the programming was utterly inadequate to handle the real-world situation of an airliner landing in stormy conditions - but it flawlessly worked as designed.

"Computers are only as good as the fallible humans that program them,"

Yes. Mistakes happen, and mistakes in self-driving cars will undoubtedly happen, in some cases leading to crashes. However, the question is not "can self-driving cars be perfect?" but "can self-driving cars be as good as or better than human drivers?" This is not a particularly difficult standard to meet.

There are existing methodologies for developing highly reliable software for critical systems. We already rely on software every day. This is not a new challenge for self-driving cars. Airlines are largely flown by software and have an outstanding safety record. Airline crashes due to software failure are exceedingly rare. In both examples that you cite, human error was a significant contributor (as you note).

In your second example in particular, I don't think "the programming was utterly inadequate to handle a real world situation..." is at all justified. Incorrect information on wind speed was given, the runway was wet, and the pilots didn't get all wheels on the ground until they were more than halfway down the runway. Activating braking systems before the plane is actually on the ground would be disastrous, which is why the software contains safeguards to prevent this. Airbus did tweak the system after this to give the pilots slightly more margin for error, but I don't think this proves the original software was flawed. It was certainly adequate to allow successful landing in those conditions, and only an accumulation of multiple mistakes prevented it.

"and computer controlled systems are only as good as the people that maintain them."

I'm not sure what major issues you see here? Computer control is already used to control the engine and maintain vehicle speed, and the sensors for this are proven and highly reliable. Steer-by-wire is proven as a concept, though not used in production cars yet. In addition to this, a self-driving car needs cameras and GPS, which are proven technologies that don't require regular maintenance.

Cars already self-diagnose when problems occur, and this is, again, proven technology that can be applied to self-driving cars.

Of course some accidents will occur due to lack of maintenance, but there is no reason to believe that this will be anything but an extremely rare occurrence. It will not offset the safety benefit incurred by eliminating human drivers.

'“can self-driving cars be as good as or better than human drivers?” This is not a particularly difficult standard to meet.'

Well, except for not actually meeting that standard as of now, and for the foreseeable future. That point about sensors is critical - unless one accepts the idea of an autonomous vehicle not functioning when the tolerance level is exceeded - note that fly-by-wire systems still assume a pilot is available to handle situations which fall outside of the envelope the systems are designed to handle.

To give a couple of not-so-far-fetched examples - how well would an autonomous vehicle function when fleeing a wildfire with extremely thick smoke, or how would it react when a flash flood occurs? To give another example of robustness - will the passengers be allowed to open the doors when the vehicle considers itself still in motion, such as the GPS registering motion as the vehicle is carried away in a flood? And do you think that any company is going to program such scenarios voluntarily when not required to do so? Particularly in light of how difficult such testing would be, unless it is merely modelled - and let us be honest, modelling suffers from exactly the same constraints, leading again to "works as designed."

'Computer control is already used to control the engine and maintain vehicle speed, and the sensors for this are proven and highly reliable.'

Sure, until something like this happens - http://blog.caranddriver.com/massive-takata-airbag-recall-everything-you-need-to-know-including-full-list-of-affected-vehicles/ Again, the question is how robust the entire system is, not just its individual components. Imagine an autonomous vehicle where a component manufacturer provides a flawed product - this is generally handled in the airline industry through extremely strict part-tracking rules, and in more serious cases by forbidding aircraft to fly until the flawed component is replaced. In the example above, Takata isn't precisely sure which vehicles have which airbags. The airbag does go off reliably, as designed, when the flawlessly working control system tells it to. This is not about the computer; it is about the entire system and its integration. And the problems described here are straightforward physical ones - if you have any experience in commercial software, you know that two software suppliers will endlessly blame each other for any problem that arises when the two packages are used together and something goes wrong.

'a self-driving car needs cameras and GPS, which are proven technologies that don’t require regular maintenance'

Lenses get scratched, dirtied, cracked, covered in mud or snow - the camera may work as designed from a certain perspective, but GIGO still applies. And this is not exactly a solved problem - what happens to an autonomous vehicle when it encounters such rarities as frost, snow, mud, tar, leaves, etc.?

'Cars already self-diagnose when problems occur, and this is, again, proven technology that can be applied to self-driving cars.'

Self-diagnosis only works within an established framework. In the case of my car, about a year ago, the engine warning signal - the one essentially saying do not drive this vehicle at all, major engine damage will occur - went off. How was this solved? The dealership reset the diagnostic system back to null after not being able to find anything actually wrong with the car (as the past year has demonstrated, by the way). Such transient events are utterly normal in any number of systems - but how would that be handled by an autonomous vehicle? By trusting the diagnosis, or by ignoring it in the middle of crowded highway traffic? And further, that diagnosis of 'no problem' cost several hundred euros in the end - admittedly a bit more than strictly necessary, as I insisted on several of the likely less reliable sensor systems being replaced in the hope of reducing such transients. Generally, sensors degrade over time, if only due to corrosion of contacts and wiring.
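One common way engineered systems cope with possibly-transient fault readings is to act only when the fault persists across several consecutive polls, rather than trusting a single reading. A minimal sketch in Python - the three-poll threshold and all names here are illustrative assumptions, not anything any carmaker has published:

```python
from collections import deque

class FaultDebouncer:
    """Escalate a fault only if it persists across recent sensor polls."""

    def __init__(self, persist_count=3):
        # Keep only the last `persist_count` readings.
        self.readings = deque(maxlen=persist_count)

    def update(self, fault_present):
        """Record one poll; return True only when the fault has been
        present for every one of the recent polls."""
        self.readings.append(fault_present)
        return (len(self.readings) == self.readings.maxlen
                and all(self.readings))

d = FaultDebouncer()
print(d.update(True))   # False: only one reading so far
print(d.update(True))   # False
print(d.update(True))   # True: fault persisted, escalate (e.g. pull over)
print(d.update(False))  # False: reading cleared, likely a transient
```

Of course, this only pushes the question back a level - how long to wait, and what to do in the meantime in crowded highway traffic, are exactly the hard parts.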

'but there is no reason to believe that this will be anything but an extremely rare occurrence'

Until it isn't, as the airbag problem shows.

'It will not offset the safety benefit incurred by eliminating human drivers.'

Maybe - fly-by-wire, apart from when it occasionally fails, does seem to be generally better at flying an aircraft in routine circumstances. That assumes all maintenance directives are followed and the flight crew is well trained to handle the situations that are not routine, or that fall outside the programmed envelope (the young Air France co-pilots literally dropped an airplane into the ocean, in part because apparently they hadn't been trained on stall/angle-of-attack recovery).

Which is the point - in many ways it would seem easier to create autonomous aircraft, yet even after more than two decades of building fly-by-wire systems, we really aren't all that close.

It is easy to imagine things - actually doing them is something else. And the world around us is extremely unforgiving of even the smallest flaw in a system.

Oops - that part about the software suppliers was supposed to end along the lines of: both suppliers will fight endlessly, as both are exceedingly unlikely to voluntarily locate and fix the problem.

"Well, except for not actually meeting that standard as of now, and for the foreseeable future."

Here is some discussion: http://phys.org/news/2016-03-autonomous-cars-safe.html

Google's commissioned study found they already have a lower accident rate than human drivers. Of course, one should consider the source. More evidence is needed, including independent evaluation. But I see no reason for your confident assertion that this standard is not being met now, and will not be met "for the foreseeable future". All of the evidence I see points the opposite way (even if one is skeptical of Google's current finding).

Overall, you have some anecdotal examples of failures and hypothetical scenarios. But 35,000 people die each year in auto accidents in the U.S., largely due to human error. Autonomous vehicles have the potential to greatly reduce that number. The risks you mention are real but are dwarfed in comparison (the Takata airbag flaw, for example, despite being the largest recall in history by far, caused 10 deaths in the U.S.).

'But I see no reason for your confident assertion that this standard is not being met now'

Any autonomous vehicles actually on the road right now?

And the amusing thing is that I have sat in a Mercedes doing 120 kph on the B462 without the driver (a Mercedes manager) touching any control for at least 10 minutes. Mercedes is quite clever - the sensors are quite good at reading German (and one would assume other national) road signs and traffic, while remaining completely possible for the driver to overrule. The radar system not only measures the road itself, keeping the vehicle in its proper lane; the speed and distance of preceding traffic also determine the car's actions (much the way my 1985 MSF class taught that car drivers can be used as somewhat reliable warning signals of the physical condition of the road), causing it to adapt to the behavior of the cars in front in any number of situations - from suddenly blinding sunlight to road damage to suddenly blinding snowfall to a car braking as wild pigs cross the road. The close-range sensors simply react the way a good driver would. These ethical debates seem to be completely lacking in Germany when it comes to designing vehicles. Instead, the German perspective seems to be: if something is in front of the car, brake as much as possible to reduce speed, and swerve if the other sensors indicate there is space to swerve. To the extent that some problems have no solutions, the car attempts to reduce the damage as much as possible, primarily by reducing the energy of impact (that such systems also better shield the car's occupants - belt-tightening systems, for example - goes without saying).

Again, this is all based on factors generally distinct from mapping databases (that Mercedes used GPS, but not for driving) and from processing speed (somebody with computer experience from the mid-80s would have been impressed; somebody from 2003 a lot less so), and it builds on Mercedes' years of experience building and integrating such concepts and systems.

And it is Germany which has the only highway stretch where autonomous vehicles are tested - http://www.zdnet.com/article/germany-to-digitise-autobahn-for-self-driving-car-tests/ (depending on what 'tested' means of course - I truly don't know if the A9 has any speed limit free stretches like those on the A5).

Google has some fascinating ideas, and lots of resources - but Mercedes makes its money from building vehicles, both commercial and private, not selling ads.

What is coming out of the U.S. in terms of real-world autonomous driving is apparently not at the practical level already available (like that crossover), much less at the level likely to be offered for sale in the near future, at least in Germany. (The Mercedes I sat in was last year's model - it was recently swapped for a newer station wagon, in part because demand for the previous car was so high that Mercedes sold it before the scheduled replacement was due. Such are the vagaries of company-provided cars.)

Engineers look for solutions to problems, and rarely follow grand visions of those who are not engineers.

For example - 'DÜSSELDORF, Germany — Daimler Trucks demonstrated what it called the next milestone in autonomous driving by running a three-truck platoon on a public stretch of the autobahn.

Daimler used the March 21-22 event here, attended by more than 300 journalists and guests from 36 countries, to expand upon its vision of how connectivity will continue to revolutionize all aspects of the supply chain, while drawing global attention to the European Truck Platooning Challenge, featuring six of the continent’s truck manufacturers.

“Connected trucks will have a huge impact. They will transform transportation completely,” said Wolfgang Bernhard, head of Daimler’s global truck and bus division. “And I promise, when we look back in 10 years we’ll recognize this was the turning point.”


The trucks use Highway Pilot Connect, an advanced version of the Highway Pilot technology that was featured in the Future Truck concept vehicle in Germany in 2014 and the Freightliner Inspiration truck at the Hoover Dam last May.

During the demonstration, when the lead vehicle was ready to create the platoon, it asked the second vehicle via dedicated short-range communication to join, producing yellow blinking lights that became visible to all motorists. At the touch of a button, the second truck was linked and the process repeated from the second to the third vehicle. Daimler said as many as 10 trucks could be in a platoon, and each has access to a video link of what the lead truck sees.

Once linked, the drivers removed their hands from the steering wheels and feet from the pedals. After a few moments, as they approached a highway junction, the vehicles automatically adjusted to 50 meters apart, before returning to 15 meters once the conditions were suitable.

Likewise, when a car that was part of the demonstration made its way between the back two trucks, the third one slowed to create additional room, though the first two in the platoon remained at 15 meters.

That experience was duplicated during test drives for journalists along the same stretch of road the next day.

Upon entering the autobahn, there was a steady stream of traffic, and almost all other drivers unlikely aware of what was taking place. On one occasion, the lead driver began the platooning process but briefly called it off because conditions were not ideal.

Even when not actively driving, the truckers remained focused on surroundings. When the lead vehicle changed lanes, the others were notified with a beep and moved over manually. During that time, Highway Pilot Connect remained engaged and once the maneuver was completed the vehicle continued in autonomous mode.' http://www.ttnews.com/articles/basetemplate.aspx?storyid=41377&page=2

(Freightliner is owned by Daimler, in case that was not clear.)
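The gap behavior described in that demonstration - 15 m following distance normally, 50 m near junctions, and the affected truck dropping back when a car cuts in - can be sketched as a toy rule. The two distances come from the article; the function, its inputs, and the cut-in margin are illustrative assumptions on my part:

```python
# Target following distance for one truck in a platoon, per the
# behavior described in the Daimler demonstration. The 15 m and 50 m
# figures are from the article; everything else is an assumption.
NORMAL_GAP = 15.0    # metres, normal platooning distance
JUNCTION_GAP = 50.0  # metres, near highway junctions

def target_gap(near_junction, car_cut_in, cut_in_margin=10.0):
    """Return the desired gap (metres) to the vehicle ahead."""
    if near_junction:
        return JUNCTION_GAP
    if car_cut_in:
        # Leave extra room for the intruding car, as the third
        # truck did in the demonstration.
        return NORMAL_GAP + cut_in_margin
    return NORMAL_GAP

print(target_gap(False, False))  # 15.0
print(target_gap(True, False))   # 50.0
print(target_gap(False, True))   # 25.0
```

The real system is of course a continuous controller fed by radar and vehicle-to-vehicle links, not a lookup like this - the sketch only shows the state logic the article describes.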

Big data works by collecting real data from real situations, though that does involve actually spending the money to collect and correlate empirical data. However, after a few years, Daimler is likely to be able to optimize its collected data (for the older people, the more accurate term is 'experience') to reach the next step of mastering what is actually a fairly difficult problem: engineering dynamic systems to be both safe and efficient.

And in case one wonders how that works: it is how Mercedes approached Formula One racing in the past - for example, by building motors, running them under the duplicated driving conditions of an individual track, then tearing them down, over and over again. Mercedes was not trusting its computer models; it was testing them, over and over, before considering them valid. Which only seems efficient from the perspective of actually winning races, as opposed to saving money by using cutting-edge modelling software instead of 'wasting' resources building dozens of motors in a racing season, most of which will never appear on the track.

p_a is right, if verbose. There's a lot of hand-waving away how insanely difficult implementing these systems will be. Google has a vested interest in feeding you propaganda, and guys like our hosts here buy into it hook line & sinker.

@Urso, does anyone really dispute that Google cars have driven a couple million miles with few accidents? A lot of info has been made public about this.

Maybe the technology is a year or two further out than Google claims, but I don't think it is plausible that Google is saying the technology works, but it really doesn't.

thanks to prior_test2 for the interesting info. I think self driving cars will probably happen but there is no need to be a Pollyanna about it.

Dan, I will certainly agree that the technology works under certain conditions.

"...GPS, which are proven technologies that don’t require regular maintenance."

You obviously don't know much about GPS.

GPS spoofing is a significant concern.

But much more important is interference. Many conservatives dismissed the evidence of interference and demanded that LightSquared get FCC approval, on the theory that government technocrats are all power-grabbing idiots - and besides, if any GPS equipment stopped working, it was the user's fault, not that of the LightSquared signals broadcast inside the guard band provided for GPS receivers.

A simple case that could come up and might want some specific programming is what to do when the car is moving fast, a pedestrian runs in front of the car, and the only way to avoid him is to go into the ditch, certainly damaging the car and likely injuring the driver and passengers. This will come up, and it makes sense to think about it, even though it's not a common situation.

With the reaction times of autonomous cars that type of scenario will be very rare. It's common now, because humans routinely speed through neighborhoods (30 in a designated 20 is pretty common). Furthermore, they are distracted and not watching the road at all.

Humans are bad drivers. The rule of thumb is that humans need 1.5 seconds to start braking, if they are paying attention; 0.7 seconds is considered the fastest a normal driver can react, and 0.5 seconds is the lower bound for race car drivers.


In autonomous vehicle terms, a computer should be able to react in less than 100 ms - that's 0.1 seconds. A computer will have engaged the brakes 1,400 ms before an average human starts braking.

To put that in more easily understood terms: a computer will go from 20 mph to a full stop in that amount of time.
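That arithmetic can be checked in a few lines. The reaction times come from the comment above; the roughly 0.8 g braking deceleration is my own assumption for dry pavement:

```python
# Back-of-envelope check of the reaction-time argument above.
# Assumption: constant braking deceleration of about 0.8 g (dry pavement).
MPH_TO_MS = 0.44704      # mph -> metres per second
DECEL = 0.8 * 9.81       # assumed deceleration, m/s^2

def reaction_distance(speed_mph, reaction_s):
    """Distance covered before braking even begins."""
    return speed_mph * MPH_TO_MS * reaction_s

def stop_time(speed_mph):
    """Time to brake from speed_mph to a full stop at DECEL."""
    return speed_mph * MPH_TO_MS / DECEL

human_delay, computer_delay = 1.5, 0.1
head_start = human_delay - computer_delay           # 1.4 s advantage

print(round(reaction_distance(20, head_start), 1))  # ~12.5 m travelled blind
print(round(stop_time(20), 2))                      # ~1.14 s to stop from 20 mph
```

Since ~1.14 s is less than the 1.4 s head start, the claim holds under this assumed deceleration: the computer-driven car is stationary from 20 mph before the average human has even touched the pedal.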

I don't think you can assume that better technology will always avoid the need to figure out how to handle such an issue. Sometimes the speed along the road is 45 MPH and a child runs out from behind a car with no warning. Or a guy on a motorcycle crosses into your lane from the other side on a two-lane highway. Or the sensor package misses something (which can definitely happen with current sensors) until it's too late to respond. Or....

Eventually, the car will be a much better driver than a human can be, but there will still sometimes be impossible-to-avoid accidents, and in those cases, it makes sense to work out how to minimize the damage. It doesn't seem crazy to me to think about whether minimizing the damage includes only the people in your car, or also includes people outside your car.

You could plausibly conclude any of:

a. The people in my car are my only concern.

b. I'm willing to consider Pareto-improvements (leaving the people in my car strictly no worse off, perhaps accepting more damage to the vehicle) to help outsiders.

c. I'm willing to accept some added risk or damage to the people in my car, in order to massively improve the chances of survival of outsiders. (Like ditching a car at 35 MPH in order to avoid running over a child who has just run into the road.)

d. I treat the people inside my car and outside as equally important and valuable--I will (to the extent I have the power) raise the probability of killing my passenger from 90% to 100% in order to decrease the probability of killing some pedestrian from 100% to 89%.

Best comment of the thread.

I doubt anyone will think about it that deeply.

And it might become mandatory. I expect they will be taxed in some way to make up for the shortfall in traffic fines. Then the government will have an incentive to force you to use them.

But will the government then levy taxes high enough to maintain the roads, or will the policy be "we don't need to fill potholes more than one month per year because the computers will drive safely around the bad ones"?

Since driving is the most interesting and exciting aspect of the generally boring and meaningless lives of most moderns there will be great reluctance to the acceptance of driverless vehicles by private parties. At the same time, trucking companies and bus lines will want to put them to use as quickly as possible.

I suspect we will end up with a dual system, the utilitarian system and the autonomous system. That way travelers can choose between a safer but somewhat less convenient system and a convenient but much less safe system. Of course, the former is called "transit" and the latter is called "cars". What Americans like to believe is that they can have it all: great taste, less filling; Big Macs, flat stomachs; world class schools, low taxes. I suppose there's something endearing about a culture that won't accept reality; endearing perhaps but also irrational.

Will mature driver-less car technology be less safe than public transit? This is unclear. Is public transit the utilitarian solution (maximizing benefit) in the American context? Even less clear.

Also, American education spending is high: http://nces.ed.gov/programs/coe/indicator_cmd.asp

As for your assertion that Americans don't think Big Macs make them fat: citation needed.

"Transit", or in the real world, mass transit, is not really a functional equivalent to "cars". Shared utilitarian dining halls are not a functional equivalent to private home kitchens for the same reasons.

While I have no interest in a driverless car today, I am getting old enough to be able to foresee a possible future interest, should eyesight or other infirmity make self driving problematic. But even in that case, I'd much prefer to own rather than share. The automotive equivalent of hot bunking has only cheapness to recommend it.

"This vehicle speeds up to kill FASCISTS."

Please, that is an owner option - "This vehicle speeds up to kill MUSLIMS/JEWS/BICYCLE RIDERS."

Shouldn't that bumper sticker be in German?

no need for driving schools

If driverless cars cannot be programmed any better than to avert collisions with pedestrians at crosswalks or cyclists pedaling on roadsides, then the only equitable solution is to load driverless vehicles with just enough explosives to detonate upon impact to kill all of a car's occupants along with the pedestrians or cyclists they collide with, owing to the poor navigational and propulsion systems they're equipped with.

Or, just let Google and other driverless car developers begin investing now in liability insurance, which they may well need anyway once they begin to unleash their ingenious products.

The focus on this issue is strange to me. How are drivers currently making this kind of decision? I would say it's essentially random, with people just acting on instinct in the split second available. Really, any thought-through decision-making algorithm would be better.

The benefit of driverless car technology isn't so much the possibility of driverless cars replacing cars driven by people but the disruption in transportation patterns so we look at them with fresh eyes and maybe adopt more efficient patterns. For example, Uber has had a profound effect on transportation patterns and how we view transportation. http://www.slate.com/articles/technology/future_tense/2016/06/the_autonomous_vehicle_revolution_will_be_underwhelming.html

Watching traffic from above a city impresses me just how unorganized it is, cars traveling in every direction like a herd of cats. Coordination it is not. Ants seem to be able to coordinate their travel much more efficiently. Of course, most people equate travel with freedom, freedom to choose where they live and where they want to go, although freedom comes at a high cost. I suppose it's an improvement that "only" 32,675 people died in auto accidents in the US in 2014 (down from the peak of 54,589 in 1972).

If driverless Uber cars replaced all cars, would our streets and highways be more or less congested, would transportation be more or less efficient, would there be more or less air pollution? Today, the average car is driven only 12,000 miles per year, while the average (full-time) Uber car is driven over 60,000 miles per year. See the cited article. We'd need fewer cars but they'd have to be replaced much more often. Would that be an improvement? I suppose it depends on whether we would spend more or less time in cars. Patterns of the use of Uber suggest that we'd take more trips and spend more time in cars. Again, see the cited article. Would that be an improvement?

Ants are extremely inefficient. Thousands of them in a line just to transport a small amount of food into their nest? I'm not impressed by that at all!

Free will is far too inefficient - looking forward to my highly efficient future as a simple worker ant, tightly controlled by my technocratic betters at Google and the Mercatus Center!

Re: Watching traffic from above a city impresses me just how unorganized it is, cars traveling in every direction like a herd of cats.

Huh? Cars, unlike cats, do not travel "in every direction." Unless the driver is completely incompetent, they are constrained to travel along designated traffic ways.

In other words, human beings value choice. Breaking news from 1215 AD.

I'm all for driverless cars, but I think the people who favor them the most are the people who live in little commuter cities that have layouts that are conducive to driverless cars. They'd be an obvious boon to a place like Ottawa or Boston. They'd be an obvious bane to places like Vancouver or Salt Lake City. Context matters.

Just remember:

Whatever rule you choose for cars,

The Truck Industry will say it should also apply to Trucks,

Think As IF....

The solution is simple!

1. Once we are all chipped, simply have each person's chip keep the current total of their Social Worth (SW).
2. The autonomous vehicle, being able to read the chips of everyone within the danger radius, takes the action that will minimize the loss of SW.

Simple! :-)
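For what it's worth, the satirical scheme fits in a few lines - every name and SW number here is invented for illustration:

```python
# Tongue-in-cheek sketch of the "minimize Social Worth lost" rule above.
def best_action(actions):
    """actions maps each candidate maneuver to the list of SW values
    it would destroy; return the maneuver losing the least total SW."""
    return min(actions, key=lambda a: sum(actions[a]))

outcomes = {
    "swerve_left": [40, 55],       # SW of those harmed by swerving left
    "swerve_right": [120],
    "brake_straight": [10, 5, 5],
}
print(best_action(outcomes))  # brake_straight (only 20 SW lost)
```

Which, of course, only sharpens the joke: the hard part is not the one-line minimization but deciding who assigns the numbers.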

Oh no, my SW just dropped below zero! I'd better take cover!

Better start working on your Klout score

People won't be able to "choose".

1. The car manufacturers will create software and slowly incorporate into the car controls. Creeping automation.
2. Car insurance companies will charge more (much more) for idiots who desire to "be in control".

By 2060, it will be illegal for humans to drive in the USA.

"By 2060, it will be illegal for humans to drive in the USA."

We already have a model for how this is going to play out. Humans transitioned from horses to cars.

You can still ride horses down many types of roads, even though they are an inconvenience and a potential hazard to cars. I imagine it will be illegal for humans to drive on the same types of roads on which it's currently illegal to ride horses.

Human-driven cars will probably operate in the same kind of niche in the future as horses do. No one rides a horse for travel purposes anymore. You load the horse into a trailer, take it to an appropriate spot, and then ride it around for your enjoyment. Older cars (without autonomous capabilities) will be loaded onto a trailer, taken to an appropriate spot, and then driven around for your enjoyment.

How many people on this blog are terribly upset that they can't drive their Model T (maximum speed 45 mph) down the interstate? Is that a real issue today?

Amish people and their buggies are allowed on just about every type of road excepting only limited access expressways.

Sure, that was my point. And is that a significant problem? Has society made the use of horses on roads illegal? In general, the answer is no.

Human-driven vehicles will fall under some of the same limitations, but there's little reason to believe they will be completely banned - any more than horses and carts have been.

But in an unregulated market, surely there will be those who want and will pay a premium for very aggressive vehicles? Just as surely as there's a market for very aggressive dogs?


I am happy to see we are moving past debating whether or not to push me in front of a trolley.

No offense but you're not to be trusted on questions of roadkill

As other people mentioned, no human has time in real driving conditions to make moral evaluations before a crash. Pilots are trained to react and trust reflexes instead of thinking.

Anyway, the article works on this: "3 traffic situations involving imminent unavoidable harm. The car must decide between (A) killing several pedestrians or one passerby, (B) killing one pedestrian or its own passenger, and (C) killing several pedestrians or its own passenger."

The software driving the car should keep the car's passenger safe while minimizing the legal exposure of the car's owner and manufacturer. Then I'd ask my lawyer friend... I think the most probable answers are: (A) kill several pedestrians, (B) kill one pedestrian, (C) kill several pedestrians.

Since the pedestrian(s) are not in a crosswalk and the car is not speeding (the software follows the rules), the most probable explanation is that the pedestrians are being negligent. If the car is speeding, it is because of software failure, government failure to maintain the road, etc. So: hit the pedestrian(s) and blame the carmaker or the government.

The issue here is why the researchers asked anonymous people online instead of lawyers. Lawyers already solve these conflicts of liability.

Will driverless cars be able to discriminate based on social media analysis?

To answer the question: none. I want a simple vehicle that will go where I tell it to. I don't trust that some guy in Mountain View or Cupertino can program a vehicle to deal with what I commonly deal with: slippery conditions with snow, sudden rain storms, dirt or gravel roads, going up or down steep roads in either. I can adequately drive in those conditions with no issue.

A story: a few years ago a friend bought a small pickup, new. He drove out of his driveway and down the hill to the highway. It was slick with snow, so he put it in low gear to control his speed (brakes are not particularly helpful in that situation). The smart vehicle knew better, increased the throttle, and nearly drove him into the ditch. He continued down the hill, turned left instead of right, drove to the dealership, and got his money back.

V1.0 kills you. V1.01 fixes the problem. Do none of these people actually use computers?

Does this mean a terrorist can put you in danger,

Killing yourself,

By acting as a pedestrian.

What if the pedestrian were suicidal or drunk?

Still worth killing the driver?

Manufacturers are interested in selling and insurers in covering cars which will never ever ever provide opposing counsel with an event log saying "At time X, deliberately steered into plaintiff Y". The American legal system doesn't do trolley problems and it doesn't reward pushing fat men off bridges.

An autonomous car would only swerve if its sensors show a clear path; otherwise it will reduce speed as much as possible and attempt a controlled crash.

Swerving is risky. Braking hard is much more straightforward.
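That brake-first priority can be sketched as a toy decision rule - the function, its inputs, and the action names are illustrative assumptions, not any vendor's actual logic:

```python
# Sketch of a brake-first avoidance policy: swerve only when a clear
# path is reported, otherwise shed speed and accept a controlled crash.
def avoidance_action(obstacle_ahead, left_clear, right_clear):
    """Pick a simple avoidance maneuver, preferring braking over swerving."""
    if not obstacle_ahead:
        return "continue"
    if left_clear:
        return "brake_and_swerve_left"
    if right_clear:
        return "brake_and_swerve_right"
    # No clear path: controlled crash, reducing impact energy as much
    # as possible.
    return "full_brake"

print(avoidance_action(True, False, False))  # full_brake
print(avoidance_action(True, True, False))   # brake_and_swerve_left
print(avoidance_action(False, False, False)) # continue
```

Note how little room a rule like this leaves for trolley-style ethics: the branches are about sensor-reported clearance, not about who is standing where.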

Uber or some other transportation company will own the cars, so what the passengers think about the morality of the software won't be important.

I'm with you; that's what I thought. If people don't own the car that could potentially kill them, they won't complain.

How come everyone assumes this sort of vehicle is destined for private ownership? What I would imagine is that big corporations or governments have their hands on these. I feel like the whole point of taking humans out of the driver's seat is to let vehicles communicate with each other. Imagine all the vehicles communicating with one another: there would be no need for traffic lights or any sort of traffic signs. Not only would no car accidents occur, the traffic would be so much better - it could effectively minimize traffic jams.

My personal qualm when it comes to self-driving cars is: what happens when you tell your car to take you somewhere and it says "No"?

"Car, take me to the gun range."
"I'm sorry, that's against corporate policy"

I bet that very soon after widespread adoption, DHS and local PDs will demand the right to declare areas off-limits for security purposes. Aggressive, progressive Justice Department and/or state AGs will sue manufacturers for colossal sums because "failure to stop people from engaging in criminal activity is tantamount to being an accomplice," and then settle for a corporate policy to monitor "suspicious behavior" and restrict and report it to the government. As soon as that precedent is set and the capability has been programmed into the cars, there will be all kinds of pressure from activist groups to put disfavored activities on some sort of blacklist, and to agitate until companies comply. As soon as progressive types take over DOT, there will be a car version of Operation Choke Point, where companies are basically told: if you don't restrict (entirely legal) activities, you'll be hounded by compliance threats until regulators either find something that sticks or rack up your legal bills until you give up.

I hate to be the game-theoretician in the bunch, but it seems like they haven't computed their ethical calculus beyond the first order. Because if lots of people were driving an "autonomous vehicle programmed to crash and kill its own passenger to save 10 pedestrians", the first thing I'm going to do is get 9 friends and jump out in front of cars so we can laugh while it plows itself into a tree to avoid us.

Economists are fond of the mantra "incentives matter" - its applicability here is quite obvious.

Once there is a super-preponderance of driverless cars, there won't BE any philosophical dilemmas to solve. All vehicles will be controlled in such a way that there can't be any accidents. Human choices, inabilities, selfishness, and flouting of rules and safety are what make cars both dangerous and inefficient.

This is not to say that these negative contributors to accidents don't have their positives. They are part of freedom and independence. If one of my daughters needs to go to the hospital, I'll break every traffic law necessary to get her there quickly.

Deadweight losses will disappear, such as excessive following distance, stop lights and signs, rubbernecking, sunshine and congestion delays, navigation errors, etc.

I'm not sure why people think that once government has the means to control every driver that it won't. We will all be taking public transit.

Comments for this post are closed