The Google-Trolley Problem

by Alex Tabarrok on June 15, 2012 at 6:31 am in Law, Philosophy

As you probably recall, the trolley problem concerns a moral dilemma. You observe an out-of-control trolley hurtling towards five people who will surely die if hit by the trolley. You can throw a switch and divert the trolley down a side track saving the five but with certainty killing an innocent bystander. There is no opportunity to warn or otherwise avoid the disaster. Do you throw the switch?

A second version is where you stand on a bridge with a fat man. The only way to stop the trolley from killing the five is to push the fat man in front of it. Do you do so? Some people say no to both, and many say yes to switching but no to pushing, citing the distinction between errors of omission and commission. You can read about the moral psychology here.

I want to ask a different question. Suppose that you are a programmer at Google and you are tasked with writing code for the Google-trolley. What code do you write? Should the trolley divert itself to the side track? Should the trolley run itself into a fat man to save five? If the Google-trolley does run itself into the fat man to save five should Sergey Brin be charged? Do your intuitions about the trolley problem change when we switch from the near view to the far (programming) view?

I think these questions are very important: the trolley problem is a thought experiment, but the Google-trolley problem is a decision that now must be made.
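
To make the "far view" concrete, here is a minimal sketch of the kind of explicit choice a programmer would have to encode. The function, its parameters, and the rule it applies are invented for illustration; nothing here reflects how Google's actual code is written.

```python
# Hypothetical sketch: the explicit decision a trolley/vehicle programmer would face.
# All names and values are invented for illustration.

def choose_track(straight_casualties, side_casualties, allow_active_diversion=True):
    """Return 'straight' or 'divert' for an unavoidable collision."""
    if not allow_active_diversion:
        # Deontological-style rule: never actively redirect harm onto a bystander.
        return "straight"
    # Utilitarian-style rule: minimize expected deaths.
    return "divert" if side_casualties < straight_casualties else "straight"

# The thought experiment, restated as a test case the programmer must decide how to pass:
print(choose_track(straight_casualties=5, side_casualties=1))    # 'divert'
print(choose_track(5, 1, allow_active_diversion=False))          # 'straight'
```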

joshua June 15, 2012 at 7:23 am

Google will just program the trolley to stop, with a fail safe chosen by a random number. Liability retroactively falls to the creator of the random number generator algorithm.

dan1111 June 15, 2012 at 5:12 pm

Actually, I think they will solve it with an “I’m Feeling Lucky” button.

kungfuhobbit June 28, 2012 at 2:54 pm

dan1111, lol :D!

Yancey Ward June 15, 2012 at 7:38 am

Is this going to be the first real-world application of Asimov’s three laws?

i found an econ degree on the ground June 15, 2012 at 8:07 am

Self-destruct mechanism?

posthuman June 16, 2012 at 4:35 am

Very funny nick. Well played.

Andrew' June 15, 2012 at 8:08 am

“There is no opportunity to warn or otherwise avoid the disaster.”

Do I have time to call my lawyer?

Brian Donohue June 15, 2012 at 5:22 pm

heh

Todd June 15, 2012 at 8:14 am

Corporations are persons now, so Google will only be liable if they: A.) are drunk, B.) kill a little white girl, and/or C.) fail to choke legislatures with campaign contributions.

Plus, no one cares about fat people anymore. At least that is what a recent Google search seemed to demonstrate.

Miley Cyrax June 15, 2012 at 9:38 am

Or D), a young black male.

If I bought a driverless car, its developer would look like Google.

Anonymous June 15, 2012 at 10:14 am

Since concern about the death of young black males seems to be largely contingent on the race of the person deemed the proximate cause of their death, will Al Sharpton et al demand Google hire more blacks to do their programming? Would this make further deaths more or less likely?

The Other Jim June 15, 2012 at 11:40 am

That was a World Class Tag-Team Troll Slap. Well done, people.

Nyongesa June 18, 2012 at 1:12 am

On what planet besides privileged-elite-fantasyville?

Marc Roston June 15, 2012 at 8:15 am

Question 1: Is the code public?

Question 2: Are markets efficient?

If the answer to both questions is yes, then there will be neither fat people on the bridge, nor innocent bystanders nearby!!

Rahul June 15, 2012 at 8:25 am

I wonder how long we have to wait until a google-car trojan spawns.

rkw June 15, 2012 at 8:17 am

How does existing code handle these dilemmas? Avionics and finance come to mind as likely places where these kinds of decisions have already been written into programs.

Rahul June 15, 2012 at 8:22 am

Avionics has found an easy solution: when in doubt, shut off and hand the system over to a human.

Corey June 15, 2012 at 2:24 pm

As someone who has worked on TCAS (traffic collision avoidance systems) for aviation applications, I can say that this situation could never occur. When faced with an impending collision, the autopilot is programmed to automatically turn in the direction which most expeditiously avoids the collision (and if the other airplane also has TCAS, there are rules in place to assure that both planes don’t turn in the same direction). However, if that direction also would create a loss of separation, the system is programmed to evaluate other alternatives. The programmers never considered what to do if every single option (all three degrees of freedom) would create collisions, because nobody would put themselves in a situation where airplanes were packed that densely except for stunt formation flyers, who don’t use TCAS. I wonder what their rules are, though.
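
A loose sketch of the prioritized "pick the escape maneuver that avoids conflict, otherwise evaluate alternatives" flow Corey describes. This is not real TCAS logic, and every name below is invented for illustration.

```python
# Sketch of the prioritized avoidance flow described above; NOT actual TCAS code.

def pick_escape_maneuver(candidate_maneuvers, predicts_conflict):
    """candidate_maneuvers: ordered best-first.
    predicts_conflict(m): True if maneuver m would still cause a loss of separation."""
    for maneuver in candidate_maneuvers:
        if not predicts_conflict(maneuver):
            return maneuver
    # The case the programmers "never considered": every option conflicts.
    return None  # behavior left undefined in this hypothetical system

maneuvers = ["climb", "descend", "turn_left", "turn_right"]
print(pick_escape_maneuver(maneuvers, lambda m: m in {"climb", "turn_left"}))  # 'descend'
```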

miko June 15, 2012 at 8:18 am

What is the general consensus on the trolley problem among “experts” of the field? (philosophers and economists are most qualified i think…)

Matt June 15, 2012 at 2:23 pm

Economists: strictly speaking, the answer is indeterminate: there is no Pareto efficient outcome, taking all 6 people into consideration. If you use Kaldor-Hicks efficiency: push the guy in front of the trolley.
Philosophers: do only consequences matter (consequentialists)? If yes, then push the guy. Does the manner by which actions lead to consequences matter (deontologists)? If yes, maybe don’t push the guy. Which is the superior framework? No way to know, just follow your gut.

Dan in Euroland June 15, 2012 at 3:53 pm

Doesn’t strike me that Kaldor Hicks works because there is the implicit assumption in KH that you can compensate the losers. Dead people don’t get much compensation.

Fromcalgary June 18, 2012 at 2:10 pm

Dead people do get compensated. When a person is killed by a drunk driver, the family sues the insurer of the drunk driver and gets compensated. The system can be set up so that if the one person is killed to save the five, his family can then sue Google (the company) for compensation. I imagine this is probably what will happen, even with its self-driving cars.

liberalarts June 15, 2012 at 8:19 am

Based on the drawing in this post, I gotta think that some of the liability should fall on the person who tied up the 6 people and put them on the tracks.

Yancey Ward June 15, 2012 at 9:16 am

Kochs.

anon June 15, 2012 at 10:01 am

I blame George W. Bush

David N June 15, 2012 at 8:27 am

@liberalarts It was Bing.

the commentariette June 15, 2012 at 8:27 am

The google-trolley problem is not a sensible analog of the trolley problem.

The trolley problem is an interesting thought experiment because the alternatives and outcomes are defined with perfect knowledge and in moral terms: kill one to save many.

A google-trolley problem isn’t defined in those terms: a google trolley will solve problems of masses and forces and friction and obstacles to minimize the probability of impact or the force of impact, etc. It doesn’t try to (or need to) solve the moral/philosophical thought experiment; it solves the physics.

tristanvdb June 15, 2012 at 8:40 pm

The google-trolley problem does exist! When the trolley encounters a finite number of scenarios, each leading to a “catastrophic” outcome, how should each outcome be evaluated: is killing one five times better than killing five (5 * kill(1) == kill(5))?
This kind of problem has probably been analysed in the field of planning (as part of AI). I am thinking of factory and large dock modeling…

Thrasymachus June 15, 2012 at 8:32 am

‘Expert opinion’ here closely matches lay opinion, where people generally are in favour of flipping the switch, but against pushing the fat guy (although if you present both you get an effect where people try to be consistent and say the same for both).

(Aside: I’ve seen drawings like this of identical style in Unger’s ‘Living High and Letting Die’. Is there like a job for someone to full-time scribble out cartoons of these thought experiments?)

Ed June 15, 2012 at 9:05 am

It’s not that difficult a problem. If you are in charge of switching for a trolley, you pull a switch to divert the trolley from running into five innocent people. You don’t push people onto the tracks in the face of an oncoming trolley.

The fact that once the switch is pulled, the trolley will run into yet another person is an accident. But the five people on the one track and the one on the other are equally innocent. At this point utilitarian accounting comes into play (five innocent people are more than one innocent person). You are not trying to kill either group; you are diverting the trolley from where it would do the most damage.

If you push a person onto the tracks in front of the trolley, you have just committed a murder. That the murder has some beneficial effects is an accident. Probably all murders have some beneficial effects if you look hard enough.

And I don’t get the google thing at all. So you are programming a trolley to go out of control and kill people, but not that many?

James b. June 15, 2012 at 10:30 am

If you push a person onto the tracks in front of the trolley, you have just committed a murder. That the murder has some beneficial effects is an accident. Probably all murders have some beneficial effects if you look hard enough.

You don’t think some overzealous DA out to make a name for himself is going to charge you with murder if you throw the switch?

Steven Kopits June 15, 2012 at 3:02 pm

I’m with Ed here.

This is really the Hiroshima problem, isn’t it? These are the decisions military commanders make. If you must choose between one dying and several dying, the rational choice is to choose the one, on a ceteris paribus basis.

msgkings June 15, 2012 at 3:35 pm

I thought Spock said it best in Star Trek II

And +1 as well to Ed, taking something that seems complex and making it pretty simple

msgkings June 15, 2012 at 3:37 pm

@ Ed

The Google reference is re driverless cars, how will they be programmed to handle ‘choices’ where either ‘choice’ results in human death?

But it’s a silly ‘problem’, no real world choice would look like that to a driverless car. If 5 people are in the road, the car will stop. It’s not a trolley on a track that can’t stop.

Major June 15, 2012 at 6:16 pm

But it’s a silly ‘problem’, no real world choice would look like that to a driverless car. If 5 people are in the road, the car will stop.

The five people just stepped off the sidewalk into the path of the driverless car, which is going at 50 mph. It doesn’t have time to stop. Either it swerves to avoid them, or it hits them. If it swerves, it will unavoidably hit another person. What should the driving computer do?

msgkings June 15, 2012 at 6:28 pm

Maybe ‘unavoidably hit another person’, maybe not. Where is a car driving 50 mph? Not in a city with close pedestrians, only on a highway. If 5 idiots run onto the highway, I guess the car will have to swerve like a human-operated one would. And it might swerve into another car but not another pedestrian.

In a place where pedestrians are close to the road, the car will surely be going the speed limit of say 25 mph and be much more able to stop. If someone wants to throw themselves in front of a Google car, they might succeed in getting hit. But there won’t be other ‘innocents’ at risk. The ‘trolley problem’ is just way too theoretical.

Major June 15, 2012 at 6:58 pm

Maybe ‘unavoidably hit another person’, maybe not.

In the scenario I’m talking about, hitting another person is unavoidable if the car swerves to avoid the people in its path. The car is driving at 50 mph on a public highway and the other person is on the adjacent sidewalk a few feet ahead of it. Either the car continues on the highway, in which case it unavoidably hits the 5 pedestrians who just stepped into its path, or it swerves onto the sidewalk, in which case it unavoidably hits the pedestrian who is walking there. What should it do?

In a place where pedestrians are close to the road, the car will surely be going the speed limit of say 25 mph

Huh? 25 mph speed limits are typical for residential streets, but arterial roads (with adjacent sidewalks) routinely have much higher speed limits. The one near my house has a speed limit of 45 mph. But the precise speed doesn’t matter. Even 25 mph is fast enough for a collision to be unavoidable if the distance between the car and the pedestrians is small enough.

Dick King June 16, 2012 at 2:55 am

Indeed the reference is to driverless cars, but not to a silly artificial problem when driving a single car [or at least I don't think it is]. The decision that Google and the rest of the nation really faces is whether to field driverless cars when they are accurate and safe enough to substantially reduce the total number of accidents, or only when they are accurate and safe enough to never kill anyone.

If a certain number of cars driven by a certain number of human drivers over a certain distance would yield five traffic fatalities, and Google believes that the same amount of usage will yield but one, whether to release the system is the google trolley decision.

-dk

Major June 15, 2012 at 6:12 pm

The fact that once the switch is pulled, the trolley will run into yet another person is an accident.

No, it’s not an accident. It’s a foreseeable effect of pulling the switch. Just as the death of the fat man is a foreseeable effect of pushing him off the bridge. So if the latter action is wrong, why isn’t the former action also wrong?

D June 17, 2012 at 3:24 am

Because the fat man is just minding his own business, but the other guy is hanging around on the trolley tracks like an idiot.

Major June 18, 2012 at 1:00 pm

No, he’s not hanging around on the tracks. He’s been tied to the tracks.

GIVCO June 15, 2012 at 2:46 pm

I believe the real Thrasymachus would’ve said that it depends on the status of “you” in the problem.

John Mansfield June 15, 2012 at 8:36 am

What sort of trolley can be stopped from hitting five people by hitting a large sixth person first? The actor thinks he knows what is going on and how to reduce harm, but maybe he doesn’t, and acting on his ignorance will increase harm. How does he know the five people lying on the track won’t stand up and leave before the trolley arrives? Where does all the certainty combined with impotence come from in these problems? Some people like contriving fantasies where they can be excused for killing.

msgkings June 15, 2012 at 3:38 pm

Exactly! Talk about 3AM dorm room navel gazing…

Andrew' June 15, 2012 at 6:28 pm

Engineer here. Philosophers, call me. We can improve this.

Ricardo June 16, 2012 at 2:54 am

After 9/11, the question was raised as to whether the military could legally shoot down a civilian aircraft full of innocent people to prevent more people dying on the ground. Rare, yes. But dorm room navel gazing? Not if you are Commander in Chief.

Dave June 15, 2012 at 8:50 am

Nowadays I think Google would just make their own trolley and push that to the front of the track above all the other more relevant/useful trolleys ;-)

Jim June 15, 2012 at 8:59 am

Having foreknowledge of this class of problem in such a way that you could write code to automate the decision process implies a level of foreknowledge that should see you building better brakes & fail-safe stopping mechanisms rather than determining the number of people to kill. Foreknowledge & planning allow for problem solving rather than harm mitigation.

Gunnar Tveiten June 15, 2012 at 9:00 am

The example is extremely contrived. You claim it must now be solved, because we’re getting self-driving cars, but that doesn’t make it any more pressing to solve it. The chance that a self-driving car will find itself in such a dilemma is no higher than the chance that a human driver will find himself in such a dilemma. Both chances are so low that it’s entirely down in the noise and the answer has essentially zero effect on traffic-safety.

Situations where it’s really certain that you genuinely have the choice between killing one and killing 5 are exceedingly rare outside of constructed thought-experiments. Thus we have no pressing need for having all members of our society agree on the answer. Furthermore you have to *recognize* that you’re in such a situation at all early enough to make a choice for your answer to matter.

A self-driving car will be programmed to avoid hitting anyone if at all possible, and to minimize the impact (for example by braking as much as possible) if impact is unavoidable. This will probably result in it heading for the option where it comes *closest* to avoiding impact altogether. My guess would be that a self-driving car would have a tendency to hit the group of 5 at 30mph, rather than the single person at 50mph, simply because that is the course where it avoids impact for as long as at all possible.

A more interesting question is whether the Google cars are able to separate animate objects from scenery at all, and whether they make different decisions when unoccupied. If you cannot avoid crashing, it makes sense to opt for crashing into a parked car or a tree rather than a person, and if the car is unoccupied it’d make sense for it to opt for hitting a parked car at 40mph rather than hitting a pedestrian at 20mph. I doubt they make this kind of trade-off though.
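
A sketch of the physics-only heuristic Gunnar expects (come closest to avoiding impact, otherwise minimize impact speed), with invented numbers. It shows how the outcome he guesses at — hit the five at 30 mph rather than the one at 50 mph — can fall out of scoring that never counts heads.

```python
# Sketch of physics-only scoring: prefer paths that avoid impact, then the lowest
# impact speed. All numbers invented for illustration; no head count appears anywhere.

def score(path):
    # Lower is better: avoidable paths beat any unavoidable one; ties broken by speed.
    return (0 if path["impact_speed_mph"] == 0 else 1, path["impact_speed_mph"])

paths = [
    {"name": "toward group of five", "impact_speed_mph": 30},
    {"name": "toward single person", "impact_speed_mph": 50},
]
print(min(paths, key=score)["name"])  # 'toward group of five'
```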

jdm June 15, 2012 at 9:20 am

Gunnar, you beat me to it. While providing something to do after breakfast for the philosophically inclined, this particular thought experiment has little bearing on real situations, not only because it is rare that one is confronted with such a choice, but also because one’s ability to predict outcomes in these cases is extremely limited.

The bystander on the bridge, who is not obligated to do anything, can’t possibly know that pushing the fat man over will save anyone. That would involve a host of calculations and judgements that are impossible for anyone to perform in real time. An honest bridge onlooker would admit to himself that pushing the fat man over the bridge would likely do no more than increase the number of dead to six, while simultaneously guaranteeing that he will be arrested for murder.

It would seem that a recognition of our inability to correctly predict outcomes with any degree of certainty should lead us to follow a precautionary principle similar to the one doctors are supposed to subscribe to.

In cases where one must take some action (i.e. when one is driving the car or programming the robot), I believe most people and programmers would swerve to hit one to try to avoid hitting five; this does not seem like a philosophical conundrum.

Tom West June 15, 2012 at 12:55 pm

Gunnar, the trolley problem *is* exactly what you’re left with at the point that Google cars can distinguish people. At some point, a programmer *will* be making the decision whether to (for example) swerve off the road to hit someone walking on the sidewalk vs. hitting children sprinting across the road, and unlike a human being who can claim they simply “reacted”, the code will be there to prove that the action was premeditated. (Or conversely, the choice was made *not* to avoid hitting several people when a less tragic option was available.)

The whole point is that eventually a real-life person will be faced with this decision. Suddenly the trolley problem, contrived as it is, will be very, very real. It may never actually happen, but the code (and the choice) will have to be there. (And when you have self-driving cars that in fact can accurately measure whether the option of injuring a few over many exists, I would not be surprised that the option occurs in reality far more often than we’re aware of.)

A fascinating original post.

Yancey Ward June 15, 2012 at 2:13 pm

Tom, nicely explained.

jdm June 15, 2012 at 5:18 pm

Isn’t there a distinction between the 2nd part of the trolley car problem and the situation faced by a driver, whether a human being in the flesh or a robot programmed by a person? A driver must act, regardless of how imperfect his, her or its information. I would conjecture that most if not all drivers would swerve to hit 1 rather than 5 if that is the only choice. The trolley bystander, however, need not act. He or she or it should be extremely reluctant to take an unforced action that is likely to kill another bystander, who would not otherwise be killed, since in real life it is very hard to be sure how things will play out. The google robot just needs to minimize expected deaths, since that is how real drivers would presumably behave.

uffy June 15, 2012 at 8:27 pm

Exactly. The commenters claiming that this thought experiment is somehow unrealistic for driverless car programmers to have to address are not thinking things through.

Cliff Wells June 16, 2012 at 9:43 pm

I doubt the moral problem exists as relates to autonomous vehicles. The creation of the moral dilemma assumes a series of programming decisions that simply are not made. I doubt there is a flow of if-then-else logic to handle any situation, as there are a nearly infinite number of variations to cover. Neither is it a chess-solver that looks ahead at multiple scenarios (there simply isn’t time – especially when the more scenarios you consider, the less time you have to execute any of them). Instead, simple avoidance based on relative weights is the likely algorithm (also weighted by road detection). You may find that while the car may hit a mailbox to avoid a dog, it might sometimes hit a dog to avoid hitting a dozen mailboxes.

This scenario likely couldn’t even be properly tested by programmers, since slight variations in the massive number of variables to be considered would result in an exponential explosion of results, so there’s little moral input any programmer could put into it (aside from weighting human objects very, very high relative to other objects).
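A toy version of the "relative weights" approach Cliff describes, with human objects weighted very, very high relative to everything else. The weight table and helper are invented for illustration; the only "moral input" lives in the table.

```python
# Toy weight-based avoidance: each candidate trajectory accumulates the weights of the
# objects it would strike. Weights are invented for illustration.
OBJECT_WEIGHTS = {"human": 1_000_000, "dog": 5_000, "mailbox": 10, "off_road_penalty": 50}

def trajectory_cost(objects_struck, leaves_road=False):
    cost = sum(OBJECT_WEIGHTS[o] for o in objects_struck)
    return cost + (OBJECT_WEIGHTS["off_road_penalty"] if leaves_road else 0)

# Hit a mailbox to avoid a dog...
print(trajectory_cost(["mailbox"]) < trajectory_cost(["dog"]))        # True
# ...but hit a dog to avoid a dozen mailboxes? Only if the weights say so.
print(trajectory_cost(["dog"]) < trajectory_cost(["mailbox"] * 12))   # False with these weights
```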

Also, I assume an autonomous vehicle would obey posted speed limits. If you have ever driven the speed limit (not 5 or 10mph over), you quickly discover most accidents are easily avoided by simple braking (assuming you are paying attention to the road, a task difficult for humans, but easy for computers), so this type of emergency driving would be highly unlikely.

Steven Kopits June 15, 2012 at 3:47 pm

Ah, this is a self-driving trolley. I thought it was a mere homicidal programmer.

For self-driving technology, the order of choices will be

1) protect the vehicle’s occupants from serious injury or death
2) protect others from serious injury or death
3) protect vehicle occupants from minor injury

I think the market will require that the primary obligation of the self-driving system is to protect its owner.
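
One way to read Kopits’s ordering is lexicographically: candidate actions are compared first on occupant harm, then on harm to others, then on minor occupant injury. A sketch with invented fields and numbers:

```python
# Sketch of a lexicographic ordering of the priorities above (all fields invented).
def priority_key(outcome):
    # Python compares tuples element by element, so earlier entries dominate later ones.
    return (
        outcome["occupant_serious"],   # 1) occupant serious injury/death
        outcome["others_serious"],     # 2) serious injury/death to others
        outcome["occupant_minor"],     # 3) minor occupant injury
    )

options = [
    {"name": "brake straight", "occupant_serious": 0, "others_serious": 2, "occupant_minor": 1},
    {"name": "swerve",         "occupant_serious": 1, "others_serious": 0, "occupant_minor": 0},
]
print(min(options, key=priority_key)["name"])  # 'brake straight': occupants come first
```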

dan1111 June 15, 2012 at 5:31 pm

@gunnar, I agree that this is unlikely to be an issue in a fast-moving, uncertain situation like a self-driving car (or an impending trolley collision). However, real scenarios like this do happen. In WWII, the British used disinformation to cause the Germans to aim V-2 rockets away from heavily-populated London–and toward less-populated areas. It’s hard to come up with a more exact analogue to the trolley problem than that.

Dan Weber June 15, 2012 at 7:07 pm

A more interesting question is whether the Google cars are able to separate animate objects from scenery at all

Google cars pre-map the environment. If something new shows up it’s more likely to be a person.

They can do a lot of pre-computation to figure out what to hit. Bushes before mailboxes before trees before people.
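
A sketch of the inference Dan describes: objects already in the pre-mapped scene keep their surveyed class, and anything new is treated cautiously as a possible person. The map representation and threshold are invented.

```python
# Sketch: pre-mapped objects keep their surveyed class; new detections are treated as
# possibly-a-person. The representation and size threshold are invented for illustration.
PRIOR_MAP = {(10, 4): "mailbox", (12, 0): "bush", (15, 6): "tree"}

def classify(position, detected_size_m):
    if position in PRIOR_MAP:
        return PRIOR_MAP[position]                      # static, surveyed object
    return "possible_person" if detected_size_m > 0.3 else "debris"  # new object: be cautious

print(classify((10, 4), 0.5))   # 'mailbox'
print(classify((11, 2), 0.5))   # 'possible_person'
```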

Incidentally, one thing really hard for current cars to discern is a person in a skirt. That will take some work.

Bo Xilai June 15, 2012 at 9:03 am

Let’s say it’s a multi-millionaire manager at Yahoo facing a dilemma. He must decide whether to hand over the IP address and emails of a pro-democracy dissident to the Chinese authorities, resulting in the torture and possible death of a decent human being, or else risk marginally reducing the share price of a ludicrously rich company. Pretty simple choice. Pecunia vincit omnia when you are a psychopath.

zbicyclist June 15, 2012 at 9:55 am

Yes! Let’s deal with real problems not some situation that’s completely contrived.

I fear the answer is this: Is it easy for me to get another job at a competitor? If it’s a good job market, I stand on principle. If I have to take a multi-million dollar salary hit …

RAD June 15, 2012 at 9:22 am

Alex, what I find most interesting is that you associated the Google Driverless Car with the Trolley Problem. The software powering the Google Driverless Car faces the same scenarios human drivers face daily. If your association is correct, you should be able to restate all the different versions of the Trolley Problem in terms of real-world driving scenarios. That in itself would be an extremely useful thing compared to the artificial and unrealistic Trolley Problem scenario(s). My intuition tells me that it is not possible to restate the Trolley Problem in terms of use cases that apply to human drivers and/or the Google Driverless Car.

Andrew' June 15, 2012 at 10:49 am

I think that’s the point. Some programmer has to make the decision sitting at a keyboard.

For example, do you swerve to avoid a pedestrian, thus breaking a traffic rule by entering the adjacent lane or do you just keep driving?

Finch June 15, 2012 at 9:23 am

> Do your intuitions about the trolley problem change when we switch from the near view to the far (programming) view?

I thought this bit was odd. Isn’t programming almost the definition of near mode? Am I misunderstanding something?

Having to program it makes it no longer some far-off abstract problem, but rather a matter of details and precise solutions.

Dent June 15, 2012 at 9:24 am

Obviously, all Google trolleys must be built with an on-board utilitarian ethics computer like this one:

http://www.smbc-comics.com/index.php?db=comics&id=2569#comic

It can then decide not only how many fat people to hit for the greater good, but also where you should want to go to maximize social utility.

Dangerman June 15, 2012 at 11:19 am

That was an amusing illustration of the classic “utility monster.”

An equally implausible thought experiment…

Nick June 15, 2012 at 9:35 am

The first generation of google cars will not have to “solve” this problem in any meaningful sense. There will be collision avoidance algorithms and probably some rudimentary “ditching” logic (“I don’t have enough traction to stop so I have to hit either a ball – “low consequence collision” – or the boy chasing after it – “very high consequence collision” – , so I steer towards the ball”). The trolley problem reaction is just the epiphenomenon of this ditching mechanism – it will see many very high priority collisions versus a single high priority collision, and the actual behavior will depend as much on tiny differences, such as calculations of probabilities of success due to slight differences in friction on path 1 vs path 2, as it does on the amount of damage to be done. Either way, no morals done by the car, and no “moral override” morals done by the programmers (beyond the basic classification of collisions).

However, there will be some humorous epiphenomena, such as when a car hits a parked police car to avoid hitting a moving truck, but it’s best not to read too much into it.
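
A sketch of the "ditching" logic Nick outlines: unavoidable collisions get rough consequence classes, and the final pick can hinge on small differences in how likely each escape path is to work. The classes, probabilities, and names are invented for illustration.

```python
# Sketch of "ditching" logic: classify collisions by consequence, then pick the path with
# the lowest expected severity. Everything here is invented for illustration.
SEVERITY = {"low": 1, "high": 100, "very_high": 10_000}   # e.g. ball / parked car / person

def expected_severity(path):
    # p_collision: estimated chance the maneuver fails to avoid the obstacle,
    # driven by friction, geometry, etc. -- the "tiny differences" mentioned above.
    return SEVERITY[path["consequence"]] * path["p_collision"]

paths = [
    {"name": "steer at ball",         "consequence": "low",       "p_collision": 0.9},
    {"name": "steer at boy",          "consequence": "very_high", "p_collision": 0.2},
    {"name": "brake toward 5 adults", "consequence": "very_high", "p_collision": 0.15},
]
print(min(paths, key=expected_severity)["name"])  # 'steer at ball'
```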

John Schilling June 15, 2012 at 11:06 am

It is highly unlikely that a first-generation self-driving car will be able to tell the difference between a ball and a boy. Or, more generally, evaluate the moral consequences of a collision in any relevant way. The reality, however boring it might be to armchair philosophers, is that self-driving cars will act to postpone any collision as long as possible and/or minimize the velocity of the eventual collision. Who or what lies at the end of that we-can’t-avoid-colliding-any-longer path, will simply not factor into the moronic slab of silicon’s decision-making process.

Nick June 15, 2012 at 12:43 pm

I agree that the philosophizing is over the top, but the first generation self-driving car that becomes (or is intended to become) a significant amount of traffic will have to have some rudimentary object classification. It can’t slam on the brakes whenever a bird flies into its path, but it can’t run over potholes with abandon. So whatever cost function is used to actually select actions will almost certainly take that into account.

Tom West June 15, 2012 at 3:18 pm

Not the first generation. But the third or fourth? Sure – the trolley problem will become very real.

Swerve off the road to hit a pedestrian on the sidewalk, or hit the group of four kids that ran out onto the road? With knowledge of exactly what the car can do given perfect knowledge of weight, dynamics, etc., the trade-off won’t have the gut-instinct that a human would be making. Instead, someone somewhere will have to make that decision with the cold-hard fact that they’re making a choice.

And my guess is they’ll be making it within 20 years, if not 10.

John Schilling June 15, 2012 at 4:28 pm

Alternately, they can not make the decision because they do not believe that the situation will ever come up outside of armchair thought experiments, or because they believe that the application of a generic try-not-to-collide-with-anything-at-all-ever algorithm to such a situation will produce a result that is close enough to optimal that any further effort is unwarranted. The combination of the two is almost certainly close to true. In practice, simply lumping dogs, children, balls, fire hydrants, and the like into the general category of “bigger than a breadbox – do not collide with these objects ever” will almost certainly deliver adequate real-world performance.

It is very definitely NOT the case that programmers of autopilots, AIs, or expert systems in the real world “have to” make explicit choices for every scenario their system could conceivably face in the future. They can, and do, and will continue to, leave the edge cases to the default behavior of the system.

Tom West June 16, 2012 at 9:07 am

Well, as someone who had a friend narrowly miss being hit by a car while he was walking on the shoulder of a country road, because a deer popped out in front of the car, I would hope Google will use all the data they have. In my friend’s case, the driver never saw him and simply instinctively swerved onto the shoulder to avoid the deer.

Likewise, I truly hope that the Google car will smash into a parked car rather than hit a child running across the street.

Both situations (the second especially) are bound to occur in real life. It’s just that human reaction time/instinct prevents us from deliberately having to make those choices. Not so Google.

And, quite frankly, blind-folding oneself so that you *cannot* intervene in the trolley’s flight is an interesting choice in and of itself.

Dan Weber June 15, 2012 at 7:17 pm

It’s very likely the self-driving car will never get into that situation. In a residential neighborhood with sidewalks, the car will notice the kids on the sidewalk and not be going fast enough that it can’t stop quickly.

Remember, the (correctly functioning, as per our hypothetical) car doesn’t get distracted, have blind spots, or forget about objects that seem to disappear behind barriers.

You can have instances where people are where people should never be, like this: http://www.youtube.com/watch?v=d0BbknAgab0 The self-driving car will probably have already used sensors to realize that the car two in front of it has come to a complete stop and already be considering alternatives.

Bill June 15, 2012 at 9:43 am

The answer is simple:

Google would conduct an auction.

If the fat man were Donald Trump and the 5 persons were poor,

The Hair would win, and

The Five would be squished.

Rich June 15, 2012 at 9:45 am

Easy answer. Since their revenue is primarily ad based, Google will just determine who will die based on maximizing advertising revenue.

Peter June 15, 2012 at 10:27 am

Exactly. They can avoid the Android user by swerving to hit the guy with the iPhone.

Isaac June 15, 2012 at 9:48 am

Well, maybe we can make the driver play this before starting the car: http://www.pippinbarr.com/games/trolleyproblem/TrolleyProblem.html

AVX June 15, 2012 at 10:02 am

Driverless cars would be programmed to avoid this scenario as much as possible. In the trolley case, that would be by making sure that the distance from an obstacle is large enough to enable a car/trolley to stop. For cases that are extremely remote in nature, it might not make sense to program them. Programming for each and every scenario can make the program complicated to the point of becoming unreliable.
Another way to think about a more realistic scenario would be when a car finds itself heading towards a crowd. Would the driverless cars have an optimal avoidance maneuver programmed in? Another scenario would be when a driverless car is coming around a bend and finds ten cars stopped in one lane and one car in another lane with no way of avoiding an impact… would it switch lanes?

Joshua June 15, 2012 at 1:18 pm

You couched it in realistic terms, but I think this is exactly the answer! If anyone but the victims or their mustachioed captor is to blame, it’s the designer of the track and the fellow who insisted that freight had better get where it’s going, a few stray limbs be damned. Where it comes to machines, ethics is all about designing your system with the right risks in mind — and that applies to traditional railroads as much as to driverless cars. And since that freight includes food that keeps people alive, and medicine, and organs for transplant, and who knows what else, the occasional kid whose shoe gets stuck playing chicken — loses.

Master of None June 15, 2012 at 10:18 am

Which choice would minimize legal liability and/or insurance premiums?

I would be curious to know what legal liability / insurance experts think about this.

Andrew' June 15, 2012 at 10:50 am

“Which choice would minimize legal liability ”

Prediction: Whichever one they didn’t make.

db June 15, 2012 at 10:22 am

Those saying this is merely theoretical and unlikely to actually happen are wrong. Google must determine the car’s behavior when the car is unavoidably about to plow through a non-evenly distributed crowd of pedestrians, where no matter what the car does it will hit at least 1 person.
The question is: should the car turn itself to hit the fewest number of people, or should a default rule say that it can never turn into a pedestrian, even if that means plowing straight ahead into a much larger group of people?

Finch June 15, 2012 at 10:34 am

You’re assuming that the Google car is programmed with a bunch of rules to cover many unlikely situations. This is almost certainly not the case. You make systems like this reliable by having small numbers of robust rules, even if they have potentially not-perfect behavior in pathological cases. Lots of complex rules would be likely to interact and cause more problems than they’re worth.

As others have noted, the Google car is likely to react to an emergency by braking to a stop and turning off. Even if an expert driver might save lives by stomping on the gas or doing a handbrake turn in some very unlikely scenarios.

Andrew' June 15, 2012 at 11:48 am

Or swerving.

That’s a choice, right?

I predict bunny rabbits’ and squirrels’ lives are about to get a lot worse (and shorter).

Finch June 15, 2012 at 12:38 pm

Sure, maybe. It’ll be simple defaults that are good in almost any situation. Though I expect not swerving for bunnies will save a non-trivial number of human lives.

Major June 15, 2012 at 6:38 pm

A driverless car should presumably have a rule that it should swerve to avoid hitting people (if it can’t stop in time). It should presumably also have a rule that it should not drive into people. If swerving to avoid hitting some people would cause the car to drive into other people, the two rules would be in conflict. The question is how the car should be programmed to resolve that conflict.

Dan Weber June 15, 2012 at 7:26 pm

Really, the only way that the car won’t be able to stop in time is if the person magically appears* or the brakes fail. Otherwise the car will slow down enough ahead of time to have a safety margin.

(I say “magically appears” to cover things that are amazing exceptions but could still actually occur: a guy jumping out of a truck, falling off a bridge, and so on.)

Major June 15, 2012 at 7:44 pm

Really, the only way that the car won’t be able to stop in time is if the person magically appears* or the brakes fail.

The person doesn’t “magically appear.” He just steps off the sidewalk into the path of the car. Seriously, you’ve never heard of pedestrians being accidentally killed or injured crossing the street when they were struck by a car they didn’t realize was there? We can assume that the stopping distance for driverless cars will be less than that for human-driven cars, because computers have much faster reaction times, but the laws of physics simply do not allow instantaneous stops. If a pedestrian steps 20 feet in front of the path of a car traveling at 50 mph, there is simply no way the car could avoid hitting the pedestrian unless it swerves.

Dan Weber June 15, 2012 at 8:00 pm

The car will see the person on the sidewalk long before they suddenly walk into the street. And anywhere there is a “sidewalk” the car will not be going 50 mph.

People walking along the side of the freeway will be a problem. However, the problem will be that the computer cars will all drive cautiously. They’ll probably each call 911 to report meatbags on the road.

Major June 15, 2012 at 8:17 pm

The car will see the person on the sidewalk long before they suddenly walk into the street. And anywhere there is a “sidewalk” the car will not be going 50 mph.

Arterial roads, with adjacent sidewalks, routinely have speed limits of 40 mph or more. Since we allow this for human-driven cars, it seems highly implausible that we will demand a lower speed limit for driverless cars, given that driverless cars will have much faster reaction times. But a collision would be unavoidable without swerving even at lower speeds if the distance between the car and the pedestrian is small enough. The minimum possible stopping distance (assuming a reaction time of 0) on dry pavement for a car travelling at 30 mph is about 45 feet. On wet pavement, it’s 90 feet. It is obviously possible for a pedestrian to step less than 45 feet in front of a car traveling at 30 mph. I don’t know why you think such an event is even implausible, let alone “magical.”
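
For what it’s worth, Major’s figures are consistent with the standard friction-limited stopping-distance formula d = v²/(2μg). A quick check, assuming typical friction coefficients of roughly 0.7 (dry) and 0.35 (wet) and zero reaction time:

```python
# Quick check of the stopping-distance figures, using d = v^2 / (2 * mu * g) with
# assumed friction coefficients (mu ~0.7 dry, ~0.35 wet) and zero reaction time.
G = 32.2  # ft/s^2

def stopping_distance_ft(speed_mph, mu):
    v = speed_mph * 5280 / 3600          # mph -> ft/s
    return v ** 2 / (2 * mu * G)

print(round(stopping_distance_ft(30, 0.7)))   # ~43 ft  ("about 45 feet", dry)
print(round(stopping_distance_ft(30, 0.35)))  # ~86 ft  ("90 feet", wet)
print(round(stopping_distance_ft(50, 0.7)))   # ~119 ft (why 20 ft of warning at 50 mph is hopeless)
```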

Dan Weber June 15, 2012 at 8:49 pm

It is indeed magical for a person to appear such that the car didn’t know they were there before. The car will be aware of the pedestrian on the sidewalk the whole time.

The car doesn’t care about the speed limit. If it’s stuck in the far right lane next to pedestrians, it can (and probably will) slow down. And if the speed limit is 40MPH, surely the right-most lane is going less, right?

dan1111 June 15, 2012 at 9:26 pm

@Dan, programming the car to drive in such a way that it could stop to avoid all possible collisions would make it unusable. Maybe some extremely patient passengers could put up with the car going 10mph next to any pedestrian, but what about oncoming cars? What about traffic in adjacent lanes? What about cars in side streets that could pull out suddenly? The car would be paralyzed trying to avoid all of the potential obstacles. Real-world driving depends on the good behavior of thousands of other actors. If the Google car is going to do real-world driving, it will have to get into situations in which it can’t stop in time if something unexpected happens.

Major June 15, 2012 at 9:30 pm

I see no basis for your claim that the car “probably will” slow down if there are pedestrians close by. Again, we allow humans to routinely drive at 40 mph or more when there are pedestrians just a few feet away on an adjacent sidewalk. It seems extremely unlikely that we will require that driverless cars travel much more slowly under the same conditions. And again, a collision is inevitable even at much slower speeds if a pedestrian steps close enough in front of the vehicle, unless the vehicle swerves. Tens of thousands of pedestrians are killed and injured every year because they step into the path of motor vehicles that cannot stop in time to avoid hitting them. Driverless cars may reduce the rate of these collisions by reducing stopping distances, but they will not be able to eliminate them. If the only way to avoid the collision is to swerve, the vehicle may then be faced with the “Google-trolley problem.”

CPV June 15, 2012 at 10:32 am

The reason that libertarianism has never risen above pizza conversation in the dorm room is precisely demonstrated by this posting.

JS June 15, 2012 at 10:35 am

Are we actually talking here about using drones in war? Allowing us to decide who lives and dies remotely, without having to get our hands (or consciences) dirty?

Finch June 15, 2012 at 11:11 am

To be fair, it’s only very slightly more remotely than what was going on before…

RSaunders June 15, 2012 at 10:38 am

Are fat guys the lowest we can go on the moral value scale? Why does it change from “innocent bystander” to “fat guy”? “People wouldn’t push an innocent bystander in front of a moving train, even if he were fat!”

TallDave June 15, 2012 at 11:14 am

The guy is fat only because the dilemma requires that only a fat guy will stop the trolley. Otherwise the obvious morally superior solution is to jump in front of the trolley yourself.

Tracy W June 15, 2012 at 11:34 am

Although if you have the time to calculate that the fat man will stop the trolley but you won’t, either you’re an awful lot faster at calculating these physics problems than I am, or you’ve got time to call the emergency services and let them handle it.

JasonL June 15, 2012 at 10:51 am

Where this seems goofy to me is in the implementation. People don’t have this kind of pre programmed thought process in the moment – they react. There is very little intentionality at heart rates that elevated and in situations that unfamiliar. We construct all sorts of stories about accountability and moral implications after the fact, but the truth is the situation is rare enough that all of those stories are flawed and certainly none are generalizable. If more people are harmed the total cost is higher, but it’s unclear to me we need to punitively address the choice of the driver for acting in the moment at all.

That’s just to say that if you program the car one way vs another, it seems you could be held specifically accountable for a design that maximized casualties across a broad range of accident scenarios, or for an algorithm that was so negligently constructed it failed to address very common, easy harm-mitigation scenarios, but the idea that the program would have to be accountable for failing to correctly handle each possible scenario is goofy.

BradinDC June 15, 2012 at 11:01 am

Wait, Google is planning on having some of its cars push fat people in front of other cars? That does sound bad.

Andrew' June 15, 2012 at 11:50 am

The bottom line is obvious. If you are fat, never accept a hitch from a Google car.

Robert June 15, 2012 at 11:05 am

There is a simple economic solution to this. Use face recognition to identify the potential victims. Get their tax returns and spare the people who have paid the most in Federal Income Tax and/or make the most money (are the most productive).

Yancey Ward June 15, 2012 at 2:23 pm

The Death Panel Car.

Andrew' June 15, 2012 at 2:47 pm

Or the market solution, vending machines on the over-passes with trap doors.

Macarena June 15, 2012 at 11:09 am

>If your association is correct, you should be able to restate all the different version of the Trolley Problem in terms of real-world driving scenarios.

Not quite so simple as the trolley problem, but – you are programming a car. When travelling down a 2 lane mountain road, your car rounds a bend with the mountain to the left and a cliff to the right, and it encounters a small bus in your lane, passing a motorcycle. Should the car be programmed to:

a) Remain in your lane while slamming your brakes, most probably killing your passenger and likely the people in the bus (who will be redirected down the cliff to the right by the collision). The motorcycle proceeds unharmed. (This is the likely outcome with a human driver – a human wouldn’t have the reaction speed to avoid a collision.)
b) Swerve slightly inward, saving your passenger by clipping the bus with your right fender rather than crashing head-on. This still likely will send the passengers of the bus over the cliff to their doom, but the motorcycle may be able to avoid the collision.
c) Swerve more inward, hitting the motorcycle, likely saving both you and the passengers in the bus, but killing the (completely innocent) motorcycle rider.
d) Swerve outward over the cliff, killing your passenger but avoiding the collision with either oncoming vehicle.
e) Swerve enough inward to take out the motorcycle, but not enough to avoid the bus. This either kills everyone, leaving no witnesses, or saves only your passenger. Doing this minimizes Google’s litigation risk, since no one living will realize what happened, other than your passenger. (It is also a plausible, albeit completely accidental, outcome with a human driver.)

I consider your passenger to be the fat man, the bus passengers to be the five on the track, and the motorcycle rider to be the one on the side track, though this could be argued. Remember, you are the car’s programmer/builder (i.e. Google’s ‘agent’), not its passenger. A purely economic agent acting in his principal’s interest might pick e), as it fits best with the programmer’s principal’s interest in avoiding a massive lawsuit. (In cases a and b the bus passengers’ families sue Google. In cases a and d the car’s passenger’s family sues. In case c the motorcycle rider’s family sues. In case e, no one knows enough about what happened to sue.) In this scenario, the bus driver is the proximate cause – he tied his passengers to the track and loosed the trolley, but he has shallow pockets. Much more lucrative to sue Google, who after all had the ability to save the life of any particular individual in the scenario with any choice, and Google doesn’t have the ‘didn’t have time to think’ excuse a human driver would have in court. Solution e) would not fit with Google’s mission statement, though. (Don’t be evil.)

A complication is that you aren’t programming specifically for this situation, but let’s say testing indicates that these are outcomes from five potential Al Gore rhythms for the vehicle’s code, which are otherwise comparable in outcomes.
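
One way to make the dilemma in Macarena’s scenario concrete: the same five options rank very differently depending on whether the objective the programmer encodes is expected deaths or expected litigation exposure. All casualty and lawsuit figures below are invented for illustration.

```python
# The five options above, scored two ways. All figures are invented; the point is only
# that the ranking depends on which objective gets encoded.
options = {
    "a_brake_in_lane":  {"deaths": 6, "expected_suits": 2},  # passenger + bus occupants
    "b_clip_bus":       {"deaths": 5, "expected_suits": 1},
    "c_hit_motorcycle": {"deaths": 1, "expected_suits": 1},
    "d_over_cliff":     {"deaths": 1, "expected_suits": 1},
    "e_no_witnesses":   {"deaths": 7, "expected_suits": 0},
}

print(min(options, key=lambda k: options[k]["deaths"]))          # 'c_hit_motorcycle' (ties with d)
print(min(options, key=lambda k: options[k]["expected_suits"]))  # 'e_no_witnesses'
```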

Gordon Mohr June 15, 2012 at 2:51 pm

Excellent scenario exposition.

One mitigating factor against (e), in the real world that also includes liability lawsuits as a consideration, is that the presence of a software agent also highly implies the existence of either (1) some sort of ‘black box’ recording of the whole accident’s inputs and decisions; and/or (2) simulation systems for sending the agent through just these sorts of scenarios. So what happens (or could happen) on a remote mountain road under Google’s chosen algorithmic optimizations would be hard to keep a corporate secret.

dan1111 June 15, 2012 at 9:38 pm

Ahh, but unlike in the trolley problem, the car also has to anticipate what the other drivers will do. Driving courses teach that if you meet an oncoming car in your lane, you shouldn’t swerve left to try to pass on the opposite side of it, because the oncoming driver will most likely instinctively react by returning to his own lane. The best course for Google is probably to slam on the brakes and stay to the right. The motorcyclist may be sacrificed to the bus driver’s instincts, but the bus driver will almost certainly not let the head-on collision occur if there is time for him to swerve.

More importantly, the fact that there are other actors affecting the outcome probably takes this out of the realm of moral dilemmas.

TallDave June 15, 2012 at 11:18 am

The correct answer to both should be yes, but it’s not really something that would be programmed, because decisions like this are almost never simple binaries — especially not in cars, which have far greater degrees of freedom than trolleys.

Jonathan June 15, 2012 at 11:31 am

Philosophically, I don’t see how the two problems differ. Given enough AI to recognize the trolley problem when the Google car sees it, you are simply asking whether to bake in the utilitarian or anti-utilitarian response — however consistently or inconsistently you have answered the questions. In both the thought experiment and the programming problem, you are answering a question about the foreseeable consequences of future action, so don’t the answers have to be the same? Why would you program a computer to do something different than you would do yourself (with all the caveats of the first clause of the second sentence above)?

A Berman June 15, 2012 at 11:44 am

Since personal connections are a legitimate part of morality, I always save the person I’m closest to, emotionally first, then physically if no emotional connection exists.

Since the Google Trolley is being programmed by someone who doesn’t know the people involved, the programmer should try to maximize the expected number of lives saved.

Finch June 15, 2012 at 12:40 pm

I would want the car to do its best to save my life. I would pay more for that feature.

A Berman June 15, 2012 at 4:23 pm

True that. The moral choice isn’t the one that necessarily will sell. Which I suppose makes it not the moral choice.

AVX June 15, 2012 at 4:34 pm

.. which might be the thing that the car programmers would have to do. Would you buy a car which touts itself as being able to kill the fewest people while potentially putting you at risk? What if the situation is to avoid killing two people vs killing the driver by swerving into the divider?

8 June 15, 2012 at 11:46 am

It’s a question for the lawyers. Where there’s an issue, the law will decide how to program it. Then there will be blog posts on econ and law blogs about the law costing efficiency due to liability. I can also imagine a scenario where the software makes a very, very bad choice, one that no human would make, and it will be the one event used to define the industry, or even ban it. Or, full employment: every critical piece of software will need at least one human present at all times…….until they write the software to replace that human.

I think a more practical question is what will happen if there are mass deaths caused by hackers.

JRPtwo June 15, 2012 at 11:48 am

In theory, you should push the fat man and program the same. In reality, you should not.

Jason Watkins June 15, 2012 at 11:49 am

The Trolley Dilemma is a very contrived situation. Car AI will never be written to reason at such a level, nor does it need to.

If you think about it, the Trolley Dilemma is quite complex: it requires predicting two hypothetical futures, predicting a likely loss of life in each future, and then backtracking within a graph of causal inferences to attempt to choose whichever outcome the programmer decides is moral. In a car you’ll have an unbounded number of futures. Exploring them this deeply, and with accurate understanding of causal dependencies and likely reactions of other cars, is just too speculative to be useful.

Real AI cars will never be written this way. Instead they’ll have a variety of danger estimators. If some combined scoring of these passes a threshold, the car will initiate a fail safe behavior of making a rapid controlled stop.

I don’t see much point in debating a philosophical view here, because it’s an unrealistic scenario. When the problem doesn’t match reality, there is no real answer. The count of angels on a pinhead is meaningless to discuss.

Control theory for dangerous devices (rockets, planes) is a well explored topic. I think the engineers have a handle on the reality here. I don’t see much where economists or philosophers can contribute to that via a thought experiment that will never actually happen.
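
A sketch of the architecture Jason describes — several danger estimators combined into a score, with a threshold that triggers a rapid controlled stop. The estimator names, weights, and threshold value are all invented for illustration.

```python
# Sketch of threshold-triggered fail-safe behavior. All names, weights, and the
# threshold value are invented for illustration.
ESTIMATOR_WEIGHTS = {"time_to_collision": 0.5, "sensor_disagreement": 0.3, "traction_loss": 0.2}
DANGER_THRESHOLD = 0.5

def combined_danger(scores):
    # scores: dict of estimator name -> value in [0, 1]
    return sum(ESTIMATOR_WEIGHTS[name] * scores.get(name, 0.0) for name in ESTIMATOR_WEIGHTS)

def control_action(scores):
    return "rapid_controlled_stop" if combined_danger(scores) > DANGER_THRESHOLD else "continue"

print(control_action({"time_to_collision": 0.9, "sensor_disagreement": 0.4}))  # 'rapid_controlled_stop'
```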

John Batey June 15, 2012 at 4:20 pm

“If some combined scoring of these passes a threshold, the car will initiate a fail safe behavior of making a rapid controlled stop.”
While normally true, it fails in some cases where there is no ‘fail-safe’. For example, it’s driving along and a car rolls out in front of you, too close to your car to safely stop. If possible, the car would swerve and avoid. If the other lane has a small undefined ‘something’ (a pothole, dog, kid?) in the swerve path, should it swerve?

As an engineer that works on safety systems, I can tell you that scenarios exist where there are no ‘safe’ options… but which still have ‘better’ options. We expect things like the trolley problem to happen anytime there’s some human interaction, because humans will periodically ‘brain fart’ and do things unexpected and dangerous.

Nick June 15, 2012 at 1:15 pm

The programmers don’t make these decisions, they are based on training data. After enough lawsuits have accumulated, the AI will learn to make the choice that minimizes legal repercussions.

Max June 15, 2012 at 1:35 pm

Does anyone avoid throwing the switch but decides to toss the fatty?

Steve June 15, 2012 at 1:47 pm

Is Google allowed to hire Denzel Washington?

Swedo June 15, 2012 at 2:28 pm

The only “problem” here is something akin to political correctness. We all know what must be done, and when the trolley starts rolling we would do it.

What we don’t want to do is acknowledge what we would do, because that makes us look bad.

Max June 15, 2012 at 5:42 pm

I don’t think this is correct.

Yancey Ward June 15, 2012 at 2:28 pm

Is Brad DeLong on the bridge?

Bill June 15, 2012 at 2:32 pm

What if the computer wrongly perceived through its separate perception system that 5 people were to die unless the person on the bridge were pushed?

Not only must the decision rule be right, but also the perception system.

If you write a rule ASSUMING perfect information, what happens if there is imperfect information?

Leo June 15, 2012 at 2:47 pm

it is not about numbers. who do you care for – that one person you might personally know and care for or those 5 evil strangers? or the other way around, those 5 family members or that one criminal? it is about the value you hold for both choices.

all things being equal (except probably the numbers), there is no correct answer choosing between two evils. but, a better decision would be to choose the lesser evil AND never stop to look for a better option. in other words, don’t just stand there, DO SOMETHING.

given the situation, i would throw the switch and divert the track AND try to do something else to prevent disaster. for version 2, i wouldn’t push the fat guy. it’s no guarantee it would stop the trolley, i might end up with 6 dead, 1 by murder.

but since it is just a scenario, until i give an answer, nobody dies. i’m still thinking.

Mayor Bloomberg June 15, 2012 at 2:48 pm

Throw the fat guy over. Even if no one’s lying on the tracks. Fat guys are public health criminals.

Andrew' June 15, 2012 at 6:31 pm

Is fat guy an organ donor?

If “YES”, throw him off the bridge.

If “NO”, then throw him off the bridge.

Dan Weber June 15, 2012 at 7:33 pm

You probably get the organs after the trolley runs him over.

Dan Weber June 15, 2012 at 7:34 pm

crap. You probably can’t get the organs.

Gordon Mohr June 15, 2012 at 3:22 pm

Google is data-driven and trusts large datasets over any other reasoning. Google would constantly A/B test different strategies in the field, varying car decisionmaking slightly by time, road, region until statistically significant patterns arise. At that point, they’d adjust to maximize some desired output.

At that level of analysis, some of the objections that the scenario is contrived fall away. The car doesn’t need certainties from super-sensors and super-modeling to decide between stark outcomes. A data scientist thousands of miles away knows that if tuning variable X is 2, cars cause one mix of passenger/nearby-motorist/bystander death and dismemberment, if X is 3 a slightly different mix, and if X is 4 yet another mix. In aggregate, totally impersonal. (The data might even tease out effects on mortality rates through mechanisms that don’t even involve reported accidents.)

As the comments about ad revenue, android-vs-iphone users, and liability suits suggest, Google’s staff might not be strictly maximizing survivor-count or years-of-productive-life-remaining.

But perhaps the right kind of adjusted liability system could help. Maybe with autonomous cars there’s not even a need to investigate individual cases. There’s just a large bill on an official schedule for each tangentially associated death or injury, with the understanding that Google’s algorithms will mix that into their (mostly monetary) optimization calculations, and the constraint that the end result be far fewer deaths than when humans ruled the roads.

So the Google-Trolley problem becomes: what bill should be imposed in each situation? Unless the billing entity can make fine distinctions between ‘switch-like’, ‘push-like’, and ‘passivity-like’ situations, it would probably be sending ‘a life is a life’ input into the Google system, pushing Google-Trolleys in the purely utilitarian direction.
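One sketch of how such a billing schedule might be folded into the optimization, with every penalty, outcome count, and profit figure made up for illustration:

```python
# An official "bill" per death and per injury, folded into a mostly-monetary
# objective, then optimized over the tuning variable X. All figures invented.

PENALTY_PER_DEATH = 10_000_000
PENALTY_PER_INJURY = 500_000

outcomes = {  # tuning value X -> hypothetical measured annual results
    2: {"deaths": 40, "injuries": 900,  "operating_profit": 120_000_000},
    3: {"deaths": 35, "injuries": 1100, "operating_profit": 125_000_000},
    4: {"deaths": 33, "injuries": 1400, "operating_profit": 130_000_000},
}

def net_value(o):
    """Profit minus the official bills for deaths and injuries."""
    return (o["operating_profit"]
            - o["deaths"] * PENALTY_PER_DEATH
            - o["injuries"] * PENALTY_PER_INJURY)

best_x = max(outcomes, key=lambda x: net_value(outcomes[x]))
print(best_x)  # -> 2 with these figures; raise PENALTY_PER_DEATH to 30,000,000 and it flips to 3
```

Which X gets chosen depends entirely on the bill schedule, which is exactly where the trolley problem reappears.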

msgkings June 15, 2012 at 3:52 pm

This comment is well thought out, but I still fail to see just how a true ‘trolley problem’ can arise in the real world.

If there are people in the road, the cars will stop, not swerve into other people. Worst case, the car gets rear-ended by another car following too closely, and traffic law already holds the driver behind at fault for that collision.

The trolley problem ‘works’ because a trolley presumably can’t just stop; it has to be stopped (by the falling fat guy, though I’m not sure how that works in reality) or rerouted by switching. A Google car is just a car. It sees a person, it stops.

Gordon Mohr June 15, 2012 at 6:44 pm

See Macarena’s ‘mountain road’ scenario above: the human bus driver is doing something rare and illegal, forcing a stark, mortality-fraught decision on the law-abiding and perhaps even (electromechanically) quick-witted car-driving protagonist.

Or picture all sorts of other accidents: a pedestrian tripping and falling into the roadway; a brake failure on an adjoining street; a loose beam escaping from a truck and endangering the Google-Car passenger (does the car swerve onto the sidewalk or into oncoming traffic?). It’s not as simple as car-can-stop-where-heavy-trolley-on-rails-can’t.

But even without imagining an exact scenario matching Trolley-Problem-like mortality payoffs, my larger point is that *summed over all the data*, Google’s optimization data scientists will face the exact same question. Every tuning of the algorithm will, summed over all historically-measured situations, have slightly different expected mortality effects on different relevant classes of potential victims. (Example classes being: Google customer, pedestrian bystanders, jaywalkers, other law-abiding drivers, other law-breaking drivers, cabinet officials suffering seizures, etc. ad infinitum.)

This dilemma may not even be new to Google. Even now, they may have enough data to be able to tell when certain changes to their PageRank algorithm, via their effect on what searchers see, have statistically significant effects on household-poisonings, do-it-yourselfer-carpentry-accidents, suicides, kitchen grease fires, soon-enough-diagnosis-of-fatal-if-untreated conditions, etc. *Google’s algorithms are already trading off different kinds of mortality among different classes of people.* The Google-Trolley problem is just another day-at-the-office, then, optimizing some different equations.
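As a toy illustration of that class-by-class tradeoff (every mortality figure and weight below is invented):

```python
# Each tuning of the driving algorithm implies a different expected-mortality
# mix across victim classes; which tuning looks "best" depends on how the
# classes are weighted. All numbers are hypothetical.

mortality_by_tuning = {  # expected deaths per year, by class
    "cautious":  {"passengers": 12, "bystanders": 4, "jaywalkers": 10},
    "assertive": {"passengers": 5,  "bystanders": 8, "jaywalkers": 15},
}

def weighted_deaths(mix, weights):
    return sum(weights[cls] * n for cls, n in mix.items())

equal_weights  = {"passengers": 1, "bystanders": 1, "jaywalkers": 1}
customer_first = {"passengers": 3, "bystanders": 1, "jaywalkers": 1}

for label, w in (("equal", equal_weights), ("customer-first", customer_first)):
    best = min(mortality_by_tuning,
               key=lambda t: weighted_deaths(mortality_by_tuning[t], w))
    print(label, "->", best)  # equal -> cautious, customer-first -> assertive
```

No single crash decision ever mentions the weights, yet the weights determine who dies in aggregate.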

Glenn Mercer June 15, 2012 at 4:13 pm

Whether the problem is contrived or not is a good question, but it doesn’t bother me. How the ethics work out is a good question, but I am not smart enough to figure that out. What I DO find to be very real about this is the legal questions “autonomous” cars raise. Assume the Google car sees a person in the road and stops to avoid hitting and killing that person. (Assume the person sprinting into the road does it so quickly that we don’t have the escape hatch of the AI turning the car back to human control.) The car is struck by the car behind the G-car. In that collision, the small child in the front seat of the trailing car is killed. This is not much of a contrived situation.

Now, to be very morbid: you are the parent of the deceased child. Why would you NOT sue Google? I am not saying you should, or you could, or whether you would be “right” or “wrong,” but you know this will happen… and so isn’t it a valid and useful exercise to try to work out the legalities of this in advance, at least as best we can?

John Schilling June 15, 2012 at 6:09 pm

You would not sue Google because you would lose. In a rear-end collision, the operator of the car in back is almost by definition at fault, for following too closely. Here, at least, both the legalities and the operational realities have already been thought out in advance, and adding auto-driving to the mix changes nothing.

“Never follow another vehicle so closely that you cannot stop safely if that other vehicle applies full braking with zero warning” is a simple rule that provides near-optimal outcomes in virtually all real cases (a back-of-the-envelope version is sketched at the end of this comment). And, yes, it is still implemented by platoon auto-driving schemes that involve following distances too close for human response: the safe following distance is quantitatively reduced, but the concept is still implemented. Why would anyone be daft enough to replace that with a set of rules that place small children in the trailing car at risk and put the lead car in the untenable situation of being required to maintain a certain minimum speed for some distance no matter what appears in the road ahead?

Aside from providing material for armchair philosophers, that is. It is becoming increasingly clear that if you want to generally minimize the number of dead bodies, you really need to have your transportation infrastructure designed by engineers, not philosophers. Our systems pretty much never require e.g. sacrificial fat men to stop runaway trolleys.
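A back-of-the-envelope version of that following-distance rule, with deceleration and reaction-time figures as rough assumptions rather than engineering data:

```python
# Gap needed so the trailing car can still stop if the lead car applies full
# braking with zero warning. Assumes both cars start at the same speed.

def braking_distance(speed_mps, decel_mps2):
    return speed_mps ** 2 / (2 * decel_mps2)

def min_safe_gap(speed_mps, reaction_s, own_decel, lead_decel):
    """Reaction-time distance plus any shortfall between our braking
    distance and the lead car's."""
    shortfall = (braking_distance(speed_mps, own_decel)
                 - braking_distance(speed_mps, lead_decel))
    return speed_mps * reaction_s + max(shortfall, 0.0)

# Human driver (~1.5 s reaction) vs. platooned autonomous car (~0.1 s),
# identical brakes, at 30 m/s (roughly 108 km/h):
print(round(min_safe_gap(30.0, 1.5, 7.0, 7.0), 1))  # ~45.0 m
print(round(min_safe_gap(30.0, 0.1, 7.0, 7.0), 1))  # ~3.0 m
```

Same rule, much smaller number: the safe following distance shrinks with faster reactions, but the concept survives.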

Dan Weber June 15, 2012 at 7:44 pm

By the time autonomous cars become good enough for us to worry about weird edge cases like this, they will be so prevalent that the trailing car will be autonomous too. The computer controlling the trailing car will know that the first car has slammed on the brakes before the occupants of the first car even realize what’s going on.

(And the kid in the front seat is a major no-no, to say nothing of John Schilling’s point about a rear-ending car being nearly guaranteed to be at fault.)

Glenn Mercer June 16, 2012 at 10:23 am

Good points all, I can only agree. Though I don’t think being rear-ended by a car following too closely, when I slam on the brakes to avoid someone in the road, is an “edge” case. (And given Americans travel 3 trillion miles per year, edge cases will always emerge, and will get news coverage! Who would have thought Toyota would have to spend $5 billion, and the government would even call in NASA, to investigate and correct how floor mats bunch up under gas pedals!)

But never mind: I still think that as we move to autonomous cars (and don’t get me wrong, I support this movement), we need to include the legal dimension as well as the philosophical, engineering, and other such dimensions. And we cannot assume that the legal dimension will be rational, or fair, or right… but the lawsuits will have very real costs.

Thus, for example, we have known how to do electric braking (electric actuation of calipers, rather than hydraulic) for many years, but it has hardly penetrated, even though it is a superior technology (it is kind of odd to think we are all driving around with plumbing systems in our cars…). Why? (And I know this from working with a brake company.) Legal risk: no OEM or brake company wants to be in the witness box saying “Yes, the old system worked just fine, but we thought this one had advantages, and we are very sorry your family is dead, it was not our fault.” Lawsuits will always try to find the deepest pockets and attack them, right or wrong, and if we don’t have a basic legal framework in place for autonomous cars, I fear that the legal assaults will hold the technology up for a decade.

I know Ford a few years back was simultaneously being sued for installing airbags (deadly!) and for not installing them (deadly!). I worry that some innovator will blithely sail off onto some highway, assert that logic and physics are on his side, something odd will happen, somebody will die, and then the legal firestorm that erupts will act as a “Hindenburg moment” that sets the whole thing back a decade.

I am not arguing the merits of autonomous cars, which are very high; I am arguing against the mindset that implies the trial lawyers can be overlooked because they do not have the facts on their side. Have the engineers design the system, yes, but “lawyer up” at the same time and get the relevant statutes in place, or we will be in deep trouble. Just browse the Association of Trial Lawyers automotive-section webpage to get an idea: http://www.justice.org/cps/rde/xchg/justice/hs.xsl/1139.htm
