The Google-Trolley Problem

As you probably recall, the trolley problem concerns a moral dilemma. You observe an out-of-control trolley hurtling towards five people who will surely die if hit by the trolley. You can throw a switch and divert the trolley down a side track, saving the five but with certainty killing an innocent bystander. There is no opportunity to warn anyone or otherwise avoid the disaster. Do you throw the switch?

A second version is where you stand on a bridge with a fat man. The only way to stop the trolley from killing five is to push the fat man in front of the trolley. Do you do so? Some people say no to both, and many say yes to switching but no to pushing, citing the distinction between errors of omission and commission. You can read about the moral psychology here.

I want to ask a different question. Suppose that you are a programmer at Google and you are tasked with writing code for the Google-trolley. What code do you write? Should the trolley divert itself to the side track? Should the trolley run itself into a fat man to save five? If the Google-trolley does run itself into the fat man to save five should Sergey Brin be charged? Do your intuitions about the trolley problem change when we switch from the near view to the far (programming) view?

I think these questions are very important: Notice that the trolley problem is a thought experiment but the Google-trolley problem is a decision that now must be made.


Google will just program the trolley to stop, with a fail safe chosen by a random number. Liability retroactively falls to the creator of the random number generator algorithm.

Actually, I think they will solve it with an "I'm Feeling Lucky" button.

dan1111, lol :D!

Is this going to be the first real-world application of Asimov's three laws?

Self-destruct mechanism?

Very funny nick. Well played.

"There is no opportunity to warn or otherwise avoid the disaster."

Do I have time to call my lawyer?

Corporations are persons now, so Google will only be liable if they: A.) are drunk, B.) kill a little white girl, and/or C.) fail to choke legislatures with campaign contributions.

Plus, no one cares about fat people anymore. At least that is what a recent Google search seemed to demonstrate.

Or D), a young black male.

If I bought a driverless car, its developer would look like Google.

Since concern about the death of young black males seems to be largely contingent on the race of the person deemed the proximate cause of their death, will Al Sharpton et al demand Google hire more blacks to do their programming? Would this make further deaths more or less likely?

That was a World Class Tag-Team Troll Slap. Well done, people.

On what planet besides privileged-elite-fantasyville?

Question 1: Is the code public?

Question 2: Are markets efficient?

If the answer to both questions is yes, then there will be neither fat people on the bridge, nor innocent bystanders nearby!!

I wonder how long we have to wait until a google-car trojan spawns.

How does existing code handle these dilemmas? Avionics and finance come to mind as likely places where these kinds of decisions have already been written into programs.

Avionics has found an easy solution: when in doubt shut off and hand over system to human.

As someone who has worked on TCAS (traffic collision avoidance systems) for aviation applications, I can say that this situation could never occur. When faced with an impending collision, the autopilot is programmed to automatically turn in the direction which most expeditiously avoids the collision (and if the other airplane also has TCAS, there are rules in place to assure that both planes don't turn in the same direction). However, if that direction would also create a loss of separation, the system is programmed to evaluate other alternatives. The programmers never considered what to do if every single option (all three degrees of freedom) would create collisions, because nobody would put themselves in a situation where airplanes were packed that densely except for stunt formation flyers, who don't use TCAS. I wonder what their rules are, though.

What is the general consensus on the trolley problem among "experts" of the field? (Philosophers and economists are most qualified, I think...)

Economists: strictly speaking, the answer is indeterminate: there is no Pareto efficient outcome, taking all 6 people into consideration. If you use Kaldor-Hicks efficiency: push the guy into the trolley.
Philosophers: do only consequences matter (consequentialists)? If yes, then push the guy. Does the manner by which actions lead to consequences matter (deontologists)? If yes, maybe don't push the guy. Which is the superior framework? No way to know, just follow your gut.

Doesn't strike me that Kaldor Hicks works because there is the implicit assumption in KH that you can compensate the losers. Dead people don't get much compensation.

Dead people do get compensated. When a person is killed by a drunk driver, the family sues the insurer of the drunk driver and gets compensated. The system can be set up so that the family of the one person killed to save the five can then sue Google (the company) for compensation. I imagine this is probably what will happen, even with its self-driving cars.

Based on the drawing in this post, I gotta think that some of the liability should fall on the person who tied up the 6 people and put them on the tracks.

I blame George W. Bush

@liberalarts It was Bing.

The google-trolley problem is not a sensible analog of the trolley problem.

The trolley problem is an interesting thought experiment because the alternatives and outcomes are defined with perfect knowledge and in moral terms: kill one to save many.

A google-trolley problem isn't defined in those terms: a google trolley will solve problems of masses and forces and friction and obstacles to minimize the probability of impact or the force of impact, etc. It doesn't try to (or need to) solve the moral/philosophical thought experiment; it solves the physics.

The google-trolley problem does exist! When the trolley encounters a finite number of scenarios, each leading to a "catastrophic" outcome, how does each outcome need to be evaluated? Is killing one five times better than killing five (5 * kill(1) == kill(5))?
This kind of problem has probably been analysed in the field of planning (as part of AI). I am thinking of factory and large-dock modeling...
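The evaluation question above boils down to the shape of the cost function. A minimal sketch (the cost functions and exponent are hypothetical, purely for illustration): under a linear cost, killing one five times is exactly as bad as killing five at once; under a convex cost, it is not.

```python
def linear_cost(deaths):
    # linear: 5 * linear_cost(1) == linear_cost(5)
    return deaths

def convex_cost(deaths, k=1.5):
    # convex (k > 1): one big catastrophe is worse than five small ones,
    # since convex_cost(5) > 5 * convex_cost(1)
    return deaths ** k
```

Whether a planner treats the two as equivalent is exactly the choice of k, which is a moral input, not a physical one.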

'Expert opinion' here closely matches lay opinion, where people generally are in favour of flipping the switch, but against pushing the fat guy (although if you present both you get an effect where people try to be consistent and say the same for both).

(Aside: I've seen drawings like this of identical style in Unger's 'Living High and Letting Die'. Is there like a job for someone to full-time scribble out cartoons of these thought experiments?)

It's not that difficult a problem. If you are in charge of switching for a trolley, you pull a switch to divert the trolley from running into five innocent people. You don't push people onto the tracks in the face of an oncoming trolley.

The fact that once the switch is pulled, the trolley will run into yet another person is an accident. But the five people on the one track and the one on the other are equally innocent. At this point utilitarian accounting comes into play (five innocent people are more than one innocent person). You are not trying to kill either group; you are diverting the trolley from where it would do the most damage.

If you push a person onto the tracks in front of the trolley, you have just committed a murder. That the murder has some beneficial effects is an accident. Probably all murders have some beneficial effects if you look hard enough.

And I don't get the google thing at all. So you are programming a trolley to go out of control and kill people, but not that many?

If you push a person onto the tracks in front of the trolley, you have just committed a murder. That the murder has some beneficial effects is an accident. Probably all murders have some beneficial effects if you look hard enough.

You don't think some overzealous DA out to make a name for himself isn't going to charge you with murder if you throw the switch?

I'm with Ed here.

This is really the Hiroshima problem, isn't it? These are the decisions military commanders make. If you must choose between one dying and several dying, the rational choice is to choose the one, on a ceteris paribus basis.

I thought Spock said it best in Star Trek II

And +1 as well to Ed, taking something that seems complex and making it pretty simple

@ Ed

The Google reference is re driverless cars, how will they be programmed to handle 'choices' where either 'choice' results in human death?

But it's a silly 'problem', no real world choice would look like that to a driverless car. If 5 people are in the road, the car will stop. It's not a trolley on a track that can't stop.

But it’s a silly ‘problem’, no real world choice would look like that to a driverless car. If 5 people are in the road, the car will stop.

The five people just stepped off the sidewalk into the path of the driverless car, which is going at 50 mph. It doesn't have time to stop. Either it swerves to avoid them, or it hits them. If it swerves, it will unavoidably hit another person. What should the driving computer do?

Maybe 'unavoidably hit another person', maybe not. Where is a car driving 50 mph? Not in a city with close pedestrians, only on a highway. If 5 idiots run onto the highway, I guess the car will have to swerve like a human-operated one would. And it might swerve into another car, but not another pedestrian.

In a place where pedestrians are close to the road, the car will surely be going the speed limit of say 25 mph and be much more able to stop. If someone wants to throw themselves in front of a Google car, they might succeed in getting hit. But there won't be other 'innocents' at risk. The 'trolley problem' is just way too theoretical.

Maybe ‘unavoidably hit another person’, maybe not.

In the scenario I'm talking about, hitting another person is unavoidable if the car swerves to avoid the people in its path. The car is driving at 50 mph on a public highway and the other person is on the adjacent sidewalk a few feet ahead of it. Either the car continues on the highway, in which case it unavoidably hits the 5 pedestrians who just stepped into its path, or it swerves onto the sidewalk, in which case it unavoidably hits the pedestrian who is walking there. What should it do?

In a place where pedestrians are close to the road, the car will surely be going the speed limit of say 25 mph

Huh? 25 mph speed limits are typical for residential streets, but arterial roads (with adjacent sidewalks) routinely have much higher speed limits. The one near my house has a speed limit of 45 mph. But the precise speed doesn't matter. Even 25 mph is fast enough for a collision to be unavoidable if the distance between the car and the pedestrians is small enough.

Indeed the reference is to driverless cars, but not to a silly artificial problem when driving a single car [or at least I don't think it is]. The decision that Google and the rest of the nation really faces is whether to field driverless cars when they are accurate and safe enough to substantially reduce the total number of accidents, or only when they are accurate and safe enough to never kill anyone.

If a certain number of cars driven by a certain number of human drivers over a certain distance would yield five traffic fatalities, and Google believes that the same amount of usage will yield but one, whether to release the system is the google trolley decision.


The fact that once the switch is pulled, the trolley will run into yet another person is an accident.

No, it's not an accident. It's a foreseeable effect of pulling the switch. Just as the death of the fat man is a foreseeable effect of pushing him off the bridge. So if the latter action is wrong, why isn't the former action also wrong?

Because the fat man is just minding his own business, but the other guy is hanging around on the trolley tracks like an idiot.

No, he's not hanging around on the tracks. He's been tied to the tracks.

I believe the real Thrasymachus would've said that it depends on the status of "you" in the problem.

What sort of trolley can be stopped from hitting five people by hitting a large sixth person first? The actor thinks he knows what is going on and how to reduce harm, but maybe he doesn't, and acting on his ignorance will increase harm. How does he know the five people lying on the track won't stand up and leave before the trolley arrives? Where does all the certainty combined with impotence come from in these problems? Some people like contriving fantasies where they can be excused for killing.

Exactly! Talk about 3AM dorm room navel gazing...

Engineer here. Philosophers, call me. We can improve this.

After 9/11, the question was raised as to whether the military could legally shoot down a civilian aircraft full of innocent people to prevent more people dying on the ground. Rare, yes. But dorm room navel gazing? Not if you are Commander in Chief.

Nowadays I think Google would just make their own trolley and push that to the front of the track above all the other more relevant/useful trolleys ;-)

Given foreknowledge of this class of problem in such a way that you could write code to automate the decision process also implies a level of foreknowledge that should see you building better brakes & failsafe stopping mechanisms rather than determining the number of people to kill. Foreknowledge & planning allow for problem solving rather than harm mitigation.

The example is extremely contrived. You claim it must now be solved, because we're getting self-driving cars, but that doesn't make it any more pressing to solve it. The chance that a self-driving car will find itself in such a dilemma is no higher than the chance that a human driver will find himself in such a dilemma. Both chances are so low that it's entirely down in the noise and the answer has essentially zero effect on traffic-safety.

Situations where it's really certain that you genuinely have the choice between killing one and killing 5 are exceedingly rare outside of constructed thought-experiments. Thus we have no pressing need for having all members of our society agree on the answer. Furthermore you have to *recognize* that you're in such a situation at all early enough to make a choice for your answer to matter.

A self-driving car will be programmed to avoid hitting anyone if at all possible, and to minimize the impact (for example by braking as much as possible) if impact is unavoidable. This will probably result in it heading for the option where it comes *closest* to avoiding impact altogether. My guess would be that a self-driving car would have a tendency to hit the group of 5 at 30 mph, rather than the single person at 50 mph, simply because that is the course where it avoids impact for as long as at all possible.

A more interesting question is whether the Google cars are able to separate animate objects from scenery at all, and whether they make different decisions when unoccupied. If you cannot avoid crashing, it makes sense to opt for crashing into a parked car or a tree rather than a person, and if the car is unoccupied it'd make sense for it to opt for hitting a parked car at 40 mph rather than hitting a pedestrian at 20 mph. I doubt they make this kind of trade-off though.
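The guess above, that the car simply heads for whichever course postpones impact longest, can be sketched in a few lines. The maneuver names, times, and speeds here are invented for illustration; nothing about actual Google software is implied.

```python
def pick_maneuver(options):
    # each option: (name, seconds_until_impact, impact_speed_mph)
    # prefer the longest time-to-impact; break ties on lower impact speed
    return max(options, key=lambda o: (o[1], -o[2]))

options = [
    ("brake straight, hit group of 5", 2.1, 30),
    ("swerve, hit single pedestrian", 1.4, 50),
]
chosen = pick_maneuver(options)
# chosen[0] == "brake straight, hit group of 5"
```

Note the selection never counts heads: the group of five "wins" only because braking buys the most time, exactly the epiphenomenon described above.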

Gunnar, you beat me to it. While providing something to do after breakfast for the philosophically inclined, this particular thought experiment has little bearing on real situations, not only because it is rare that one is confronted with such a choice, but also because one's ability to predict outcomes in these cases is extremely limited.

The bystander on the bridge, who is not obligated to do anything, can't possibly know that pushing the fat man over will save anyone. That would involve a host of calculations and judgements that are impossible for anyone to perform in real time. An honest bridge onlooker would admit to himself that pushing the fat man over the bridge would likely do no more than increase the number of dead to six, while simultaneously guaranteeing that he will be arrested for murder.

It would seem that a recognition of our inability to correctly predict outcomes with any degree of certainty should lead us to follow a precautionary principle similar to the one doctors are supposed to subscribe to.

In cases where one must take some action (i.e. when one is driving the car or programming the robot), I believe most people and programmers would swerve to hit one to try to avoid hitting five; this does not seem like a philosophical dilemma.

Gunnar, the trolley problem *is* exactly what you're left with at the point that Google cars can distinguish people. At some point, a programmer *will* be making the decision whether to (for example) swerve off the road to hit someone walking on the sidewalk vs. hitting children sprinting across the road, and unlike a human being who can claim they simply "reacted", the code will be there to prove that the action was premeditated. (Or conversely, that the choice was made *not* to avoid hitting several people when a less tragic option was available.)

The whole point is that eventually a real-life person will be faced with this decision. Suddenly the trolley problem, contrived as it is, will be very, very real. It may never actually happen, but the code (and the choice) will have to be there. (And when you have self-driving cars that in fact can accurately measure whether the option of injuring a few over many exists, I would not be surprised that the option occurs in reality far more often than we're aware of.)

A fascinating original post.

Tom, nicely explained.

Isn't there a distinction between the 2nd part of the trolley car problem and the situation faced by a driver, whether a human being in the flesh or a robot programmed by a person? A driver must act, regardless of how imperfect his, her or its information. I would conjecture that most if not all drivers would swerve to hit 1 rather than 5 if that is the only choice. The trolley bystander, however, need not act.
He or she or it should be extremely reluctant to take an unforced action that is likely to kill another bystander, who would not otherwise be killed, since in real life it is very hard to be sure how things will play out. The google robot just needs to minimize expected deaths, since that is how real drivers would presumably behave.

Exactly. The commenters claiming that this thought experiment is somehow unrealistic for driverless car programmers to have to address are not thinking things through.

I doubt the moral problem exists as relates to autonomous vehicles. The creation of the moral dilemma assumes a series of programming decisions that simply are not made. I doubt there is a flow of if-then-else logic to handle any situation, as there are a nearly infinite number of variations to cover. Neither is it a chess-solver that looks ahead at multiple scenarios (there simply isn't time - especially when the more scenarios you consider, the less time you have to execute any of them). Instead, simple avoidance based on relative weights is the likely algorithm (also weighted by road detection). You may find that while the car may hit a mailbox to avoid a dog, it might sometimes hit a dog to avoid hitting a dozen mailboxes.

This scenario likely couldn't even be properly tested by programmers, since slight variations in the massive number of variables to be considered would result in an exponential explosion of results, so there's little moral input any programmer could put into it (aside from weighting human objects very, very high relative to other objects).

Also, I assume an autonomous vehicle would obey posted speed limits. If you have ever driven the speed limit (not 5 or 10mph over), you quickly discover most accidents are easily avoided by simple braking (assuming you are paying attention to the road, a task difficult for humans, but easy for computers), so this type of emergency driving would be highly unlikely.

Ah, this is a self-driving trolley. I thought it was a mere homicidal programmer.

For self-driving technology, the order of choices will be

1) protect the vehicles occupants from serious injury or death
2) protect others from serious injuries or death
3) protect vehicle occupants from minor injury

I think the market will require that the primary obligation of the self-driving system is to protect its owner.
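The three-tier ordering above amounts to a lexicographic comparison: outcomes are ranked on criterion 1 first, and lower criteria only break ties. A minimal sketch, with the outcome encoding invented for illustration:

```python
def choose(outcomes):
    # each outcome: (occupant_serious, other_serious, occupant_minor),
    # counts of injuries. Python compares tuples lexicographically, so
    # protecting occupants from serious harm dominates everything else.
    return min(outcomes)

# one bystander seriously hurt vs. one occupant seriously hurt:
choose([(0, 1, 0), (1, 0, 0)])  # -> (0, 1, 0): spare the occupants first
```

Whether the market really forces occupant-first ordering is the open question; the sketch only shows how cheaply such a priority can be encoded once someone decides it.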

@gunnar, I agree that this is unlikely to be an issue in a fast-moving, uncertain situation like a self-driving car (or an impending trolley collision). However, real scenarios like this do happen. In WWII, the British used disinformation to cause the Germans to aim V-2 rockets away from heavily-populated London--and toward less-populated areas. It's hard to come up with a more exact analogue to the trolley problem than that.

A more interesting question is whether the Google cars are able to separate animate objects from scenery at all

Google cars pre-map the environment. If something new shows up it's more likely to be a person.

They can do a lot of pre-computation to figure out what to hit. Bushes before mailboxes before trees before people.

Incidentally, one thing really hard for current cars to discern is a person in a skirt. That will take some work.

Let's say it's a multi-millionaire manager at Yahoo facing a dilemma. He must decide whether to hand over the IP address and emails of a pro-democracy dissident to the Chinese authorities, resulting in the torture and possible death of a decent human being, or else risk marginally reducing the share price of a ludicrously rich company. Pretty simple choice. Pecunia vincit omnia when you are a psychopath.

Yes! Let's deal with real problems not some situation that's completely contrived.

I fear the answer is this: Is it easy for me to get another job at a competitor? If it's a good job market, I stand on principle. If I have to take a multi-million dollar salary hit ...

Alex, what I find most interesting is that you associated the Google Driverless Car with the Trolley Problem. The software powering the Google Driverless Car faces the same scenarios human drivers face daily. If your association is correct, you should be able to restate all the different versions of the Trolley Problem in terms of real-world driving scenarios. That in itself would be an extremely useful thing compared to the artificial and unrealistic Trolley Problem scenario(s). My intuition tells me that it is not possible to restate the Trolley Problem in terms of use cases that apply to human drivers and/or the Google Driverless Car.

I think that's the point. Some programmer has to make the decision sitting at a keyboard.

For example, do you swerve to avoid a pedestrian, thus breaking a traffic rule by entering the adjacent lane or do you just keep driving?

> Do your intuitions about the trolley problem change when we switch from the near view to the far (programming) view?

I thought this bit was odd. Isn't programming almost the definition of near mode? Am I misunderstanding something?

Having to program it makes it no longer some far-off abstract problem, but rather a matter of details and precise solutions.

Obviously, all Google trolleys must be built with an on-board utilitarian ethics computer like this one:

It can then decide not only how many fat people to hit for the greater good, but also where you should want to go to maximize social utility.

That was an amusing illustration of the classic "utility monster."

An equally implausible thought experiment...

The first generation of google cars will not have to "solve" this problem in any meaningful sense. There will be collision avoidance algorithms and probably some rudimentary "ditching" logic ("I don't have enough traction to stop so I have to hit either a ball - "low consequence collision" - or the boy chasing after it - "very high consequence collision" - , so I steer towards the ball"). The trolley problem reaction is just the epiphenomenon of this ditching mechanism - it will see many very high priority collisions versus a single high priority collision, and the actual behavior will depend as much on tiny differences, such as calculations of probabilities of success due to slight differences in friction on path 1 vs path 2, as it does on the amount of damage to be done. Either way, no morals done by the car, and no "moral override" morals done by the programmers (beyond the basic classification of collisions).

However, there will be some humorous epiphenomenon, such as when a car hits a parked police car to avoid hitting a moving truck, but it's best not to read too much into it.

It is highly unlikely that a first-generation self-driving car will be able to tell the difference between a ball and a boy. Or, more generally, evaluate the moral consequences of a collision in any relevant way. The reality, however boring it might be to armchair philosophers, is that self-driving cars will act to postpone any collision as long as possible and/or minimize the velocity of the eventual collision. Who or what lies at the end of that we-can't-avoid-colliding-any-longer path, will simply not factor into the moronic slab of silicon's decision-making process.

I agree that the philosophizing is over the top, but the first generation self-driving car that becomes (or is intended to become) a significant amount of traffic will have to have some rudimentary object classification. It can't slam on the brakes whenever a bird flies into its path, but it can't run over potholes with abandon. So whatever cost function is used to actually select actions will almost certainly take that into account.

Not the first generation. But the third or fourth? Sure - the trolley problem will become very real.

Swerve off the road to hit a pedestrian on the sidewalk, or hit the group of four kids that ran out onto the road? With knowledge of exactly what the car can do given perfect knowledge of weight, dynamics, etc., the trade-off won't have the gut-instinct that a human would be making. Instead, someone somewhere will have to make that decision with the cold-hard fact that they're making a choice.

And my guess is they'll be making it within 20 years, if not 10.

Alternately, they can not make the decision because they do not believe that the situation will ever come up outside of armchair thought experiments, or because they believe that the application of a generic try-not-to-collide-with-anything-at-all-ever algorithm to such a situation will produce a result that is close enough to optimal that any further effort is unwarranted. The combination of the two is almost certainly close to true. In practice, simply lumping dogs, children, balls, fire hydrants, and the like into the general category of "bigger than a breadbox - do not collide with these objects ever" will almost certainly deliver adequate real-world performance.

It is very definitely NOT the case that programmers of autopilots, AIs, or expert systems in the real world "have to" make explicit choices for every scenario their system could conceivably face in the future. They can, and do, and will continue to, leave the edge cases to the default behavior of the system.
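The "lump everything bigger than a breadbox into one class" default described above can be sketched directly. The threshold and category names here are invented; the point is that no per-object moral reasoning happens at all.

```python
BREADBOX_VOLUME = 0.03  # m^3, a rough stand-in threshold

def classify(obstacle_volume):
    # no dogs-vs-children-vs-hydrants distinction: one generic rule
    if obstacle_volume > BREADBOX_VOLUME:
        return "never_collide"
    return "ignorable"

classify(0.5)    # a child, a dog, a fire hydrant -> "never_collide"
classify(0.001)  # a leaf, a paper bag            -> "ignorable"
```

Every edge case the philosophers worry about falls into the same bucket, which is precisely why the default behavior, not an explicit choice, decides them.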

Well, as someone who had a friend narrowly escape being hit by a car while walking on the shoulder of a country road, because a deer popped out in front of the car, I would hope Google will use all the data they have. In my friend's case, the driver never saw him and simply instinctively swerved onto the shoulder to avoid the deer.

Likewise, I truly hope that the Google car will smash into a parked car rather than hit a child running across the street.

Both situations (the second especially) are bound to occur in real life. It's just that human reaction time/instinct prevents us from deliberately having to make those choices. Not so Google.

And, quite frankly, blind-folding oneself so that you *cannot* intervene in the trolley's flight is an interesting choice in and of itself.

It's very likely the self-driving car will never get into that situation. In a residential neighborhood with sidewalks, the car will notice the kids on the sidewalk and not be going fast enough that it can't stop quickly.

Remember, the (correctly functioning, as per our hypothetical) car doesn't get distracted, have blind spots, or forget about objects that seem to disappear behind barriers.

You can have instances where people are where people should never be, like this: The self-driving car will probably have already used sensors to realize that the car two in front of it has come to a complete stop and already be considering alternatives.

The answer is simple:

Google would conduct an auction.

If the fat man were Donald Trump and the 5 persons were poor,

The Hair would win, and

The Five would be squished.

Easy answer. Since their revenue is primarily ad based, Google will just determine who will die based on maximizing advertising revenue.

Exactly. They can avoid the Android user by swerving to hit the guy with the iPhone.

Well, maybe we can make the driver play this before starting the car:

Driverless cars would be programmed to avoid this scenario as much as possible. In the trolley case, that would mean making sure that the distance from an obstacle is large enough to enable a car/trolley to stop. For cases that are extremely remote in nature, it might not make sense to program them. Programming for each and every scenario can make the program complicated to the point of becoming unreliable.
Another way to think about a more realistic scenario would be when a car finds itself heading towards a crowd. Would the driverless cars have an optimal avoidance programmed in? Another scenario would be when a driverless car is coming around a bend and finds ten cars stopped in one lane and one car in another lane with no way of avoiding an impact... would it switch lanes?

You couched it in realistic terms, but I think this is exactly the answer! If anyone but the victims or their mustachioed captor is to blame, it's the designer of the track and the fellow who insisted that freight had better get where it's going, a few stray limbs be damned. Where it comes to machines, ethics is all about designing your system with the right risks in mind -- and that applies to traditional railroads as much as to driverless cars. And since that freight includes food that keeps people alive, and medicine, and organs for transplant, and who knows what else, the occasional kid whose shoe gets stuck playing chicken -- loses.

Which choice would minimize legal liability and/or insurance premiums?

I would be curious to know what legal liability / insurance experts think about this.

"Which choice would minimize legal liability "

Prediction: Whichever one they didn't make.

Those saying this is merely theoretical and unlikely to actually happen are wrong. Google must determine the car's behavior when the car is unavoidably about to plow through an unevenly distributed crowd of pedestrians, where no matter what the car does it will hit at least 1 person.
The question is: should the car turn itself to hit the fewest number of people, or should the default rule be that it can never turn into a pedestrian, even if that means plowing straight ahead into a much larger group of people?
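The two candidate rules above can be put side by side on an invented scenario (four people straight ahead, one if the car turns), to show that they really do disagree:

```python
def minimize_hits(straight, turned):
    # utilitarian rule: hit the fewest people
    return "turn" if turned < straight else "straight"

def never_turn_into_pedestrian(straight, turned):
    # deontological default: the car may brake, but never steers into a person
    return "straight" if turned > 0 else "turn"

minimize_hits(4, 1)               # -> "turn": hit one instead of four
never_turn_into_pedestrian(4, 1)  # -> "straight": plow straight ahead
```

Someone has to pick one of these two functions, which is the whole point of the post.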

You're assuming that the Google car is programmed with a bunch of rules to cover many unlikely situations. This is almost certainly not the case. You make systems like this reliable by having small numbers of robust rules, even if they have potentially not-perfect behavior in pathological cases. Lots of complex rules would be likely to interact and cause more problems than they're worth.

As others have noted, the Google car is likely to react to an emergency by braking to a stop and turning off. Even if an expert driver might save lives by stomping on the gas or doing a handbrake turn in some very unlikely scenarios.

Or swerving.

That's a choice, right?

I predict bunny rabbits and squirrels lives are about to get a lot worse (and shorter).

Sure, maybe. It'll be simple defaults that are good in almost any situation. Though I expect not swerving for bunnies will save a non-trivial number of human lives.

A driverless car should presumably have a rule that it should swerve to avoid hitting people (if it can't stop in time). It should presumably also have a rule that it should not drive into people. If swerving to avoid hitting some people would cause the car to drive into other people, the two rules would be in conflict. The question is how the car should be programmed to resolve that conflict.

Really, the only way that the car won't be able to stop in time is if the person magically appears* or the brakes fail. Otherwise the car will slow down enough ahead of time to have a safety margin.

(I say "magically appears" to cover things that are amazing exceptions but could still actually occur: a guy jumping out of a truck, falling off a bridge, and so on.)

Really, the only way that the car won’t be able to stop in time is if the person magically appears* or the brakes fail.

The person doesn't "magically appear." He just steps off the sidewalk into the path of the car. Seriously, you've never heard of pedestrians being accidentally killed or injured crossing the street when they were struck by a car they didn't realize was there? We can assume that the stopping distance for driverless cars will be less than that for human-driven cars, because computers have much faster reaction times, but the laws of physics simply do not allow instantaneous stops. If a pedestrian steps 20 feet in front of the path of a car traveling at 50 mph, there is simply no way the car could avoid hitting the pedestrian unless it swerves.

The car will see the person on the sidewalk long before they suddenly walk into the street. And anywhere there is a "sidewalk" the car will not be going 50 mph.

People walking along the side of the freeway will be a problem. However, the problem will be that the computer cars will all drive cautiously. They'll probably each call 911 to report meatbags on the road.

The car will see the person on the sidewalk long before they suddenly walk into the street. And anywhere there is a “sidewalk” the car will not be going 50 mph.

Arterial roads, with adjacent sidewalks, routinely have speed limits of 40 mph or more. Since we allow this for human-driven cars, it seems highly implausible that we will demand a lower speed limit for driverless cars, given that driverless cars will have much faster reaction times. But a collision would be unavoidable without swerving even at lower speeds if the distance between the car and the pedestrian is small enough. The minimum possible stopping distance (assuming a reaction time of 0) on dry pavement for a car travelling at 30 mph is about 45 feet. On wet pavement, it's 90 feet. It is obviously possible for a pedestrian to step less than 45 feet in front of a car traveling at 30 mph. I don't know why you think such an event is even implausible, let alone "magical."
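Those stopping-distance figures follow from the standard friction model, d = v²/(2μg). A quick sanity check (the friction coefficients below are typical textbook values, not measured ones):

```python
def stopping_distance_ft(speed_mph, mu, g=32.2):
    """Minimum braking distance in feet, assuming zero reaction time:
    d = v^2 / (2 * mu * g), with speed converted from mph to ft/s."""
    v = speed_mph * 5280 / 3600  # mph -> ft/s
    return v ** 2 / (2 * mu * g)

print(round(stopping_distance_ft(30, mu=0.7)))   # dry pavement: ~43 ft
print(round(stopping_distance_ft(30, mu=0.35)))  # wet pavement: ~86 ft
print(round(stopping_distance_ft(50, mu=0.7)))   # 50 mph, dry: ~119 ft
```

This matches the ~45 ft and ~90 ft figures quoted above to within the precision of the assumed friction coefficients, and it is a hard floor: no reaction time, however fast, can beat it.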

It is indeed magical for a person to appear such that the car didn't know they were there before. The car will be aware of the pedestrian on the sidewalk the whole time.

The car doesn't care about the speed limit. If it's stuck in the far right lane next to pedestrians, it can (and probably will) slow down. And if the speed limit is 40 mph, surely the right-most lane is going less, right?

@Dan, programming the car to drive in such a way that it could stop to avoid all possible collisions would make it unusable. Maybe some extremely patient passengers could put up with the car going 10 mph next to any pedestrian, but what about oncoming cars? What about traffic in adjacent lanes? What about cars in side streets that could pull out suddenly? The car would be paralyzed trying to avoid all of the potential obstacles. Real-world driving depends on the good behavior of thousands of other actors. If the Google car is going to do real-world driving, it will have to get into situations in which it can't stop in time if something unexpected happens.

I see no basis for your claim that the car "probably will" slow down if there are pedestrians close by. Again, we allow humans to routinely drive at 40 mph or more when there are pedestrians just a few feet away on an adjacent sidewalk. It seems extremely unlikely that we will require that driverless cars travel much more slowly under the same conditions. And again, a collision is inevitable even at much slower speeds if a pedestrian steps close enough in front of the vehicle, unless the vehicle swerves. Tens of thousands of pedestrians are killed and injured every year because they step into the path of motor vehicles that cannot stop in time to avoid hitting them. Driverless cars may reduce the rate of these collisions by reducing stopping distances, but they will not be able to eliminate them. If the only way to avoid the collision is to swerve, the vehicle may then be faced with the "Google-trolley problem."

The reason that libertarianism has never risen above pizza conversation in the dorm room is precisely demonstrated by this posting.

Are we actually talking here about using drones in war? Allowing us to decide who lives and dies remotely, without having to get our hands (or consciences) dirty?

To be fair, it's only very slightly more remotely than what was going on before...

Are fat guys the lowest we can go on the moral value scale? Why does it change from "innocent bystander" to "fat guy"? "People wouldn't push an innocent bystander in front of a moving train, even if he were fat!"

The guy is fat only because the dilemma requires that only a fat guy will stop the trolley. Otherwise the obvious morally superior solution is to jump in front of the trolley yourself.

Although if you have the time to calculate that the fat man will stop the trolley but you won't, either you're an awful lot faster at calculating these physics problems than I am, or you've got time to call the emergency services and let them handle it.

Where this seems goofy to me is in the implementation. People don't have this kind of preprogrammed thought process in the moment - they react. There is very little intentionality at heart rates that elevated and in situations that unfamiliar. We construct all sorts of stories about accountability and moral implications after the fact, but the truth is the situation is rare enough that all of those stories are flawed and certainly none are generalizable. If more people are harmed the total cost is higher, but it's unclear to me we need to punitively address the choice of the driver for acting in the moment at all.

That's just to say that if you program the car one way vs. another, it seems you could be held specifically accountable for a design that maximized casualties across a broad range of accident scenarios, or for an algorithm that was so negligently constructed it failed to address very common, easy harm-mitigation scenarios. But the idea that the programmer would have to be accountable for failing to correctly handle each possible scenario is goofy.

Wait, Google is planning on having some of its cars push fat people in front of other cars? That does sound bad.

The bottom line is obvious. If you are fat, never accept a hitch from a Google car.

There is a simple economic solution to this. You use face recognition to identify the potential victims, get their tax returns, and spare the people who have paid the most in federal income tax and/or make the most money (are the most productive).

The Death Panel Car.

Or the market solution: vending machines on the overpasses, with trap doors.

>If your association is correct, you should be able to restate all the different version of the Trolley Problem in terms of real-world driving scenarios.

Not quite so simple as the trolley problem, but - you are programming a car. When travelling down a 2 lane mountain road, your car rounds a bend with the mountain to the left and a cliff to the right, and it encounters a small bus in your lane, passing a motorcycle. Should the car be programmed to:

a) Remain in your lane while slamming on your brakes, most probably killing your passenger and likely the people in the bus (who will be redirected down the cliff to the right by the collision). The motorcycle proceeds unharmed. (This is the likely outcome with a human driver - a human wouldn't have the reaction speed to avoid a collision.)
b) Swerve slightly inward, saving your passenger by clipping the bus with your right fender rather than crashing head-on. This still likely will send the passengers of the bus over the cliff to their doom, but the motorcycle may be able to avoid the collision.
c) Swerve more inward, hitting the motorcycle, likely saving both you and the passengers in the bus, but killing the (completely innocent) motorcycle rider.
d) Swerve outward over the cliff, killing your passenger but avoiding the collision with either oncoming vehicle.
e) Swerve enough inward to take out the motorcycle, but not enough to avoid the bus. This either kills everyone, leaving no witnesses, or saves only your passenger. Doing this minimizes Google's litigation risk, since no one living will realize what happened, other than your passenger. (It is also a plausible, albeit completely accidental, outcome with a human driver.)

I consider your passenger to be the fat man, the bus passengers to be the five on the track, and the motorcycle rider to be the one on the side track, though this could be argued. Remember, you are the car's programmer/builder (i.e., Google's 'agent'), not its passenger. A purely economic agent acting in his principal's interest might pick e), as it fits best with the programmer's principal's interest in avoiding a massive lawsuit. (In cases a and b the bus passengers' families sue Google. In cases a and d the car's passenger's family sues. In case c the motorcycle rider's family sues. In case e, no one knows enough about what happened to sue.) In this scenario, the bus driver is the proximate cause - he tied his passengers to the track and loosed the trolley - but he has shallow pockets. Much more lucrative to sue Google, who after all had the ability to save the life of any particular individual in the scenario with any choice, and Google doesn't have the 'didn't have time to think' excuse a human driver would have in court. Solution e) would not fit with Google's mission statement, though. (Don't be evil.)

A complication is that you aren't programming specifically for this situation, but let's say testing indicates that these are outcomes from five potential Al Gore rhythms for the vehicle's code, which are otherwise comparable in outcomes.
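The dark logic of option (e) - minimize expected liability rather than casualties - can be caricatured in a few lines. The maneuver-to-plaintiff mappings and the cost figure below are invented purely for illustration:

```python
# Caricature of a liability-minimizing choice among maneuvers a-e.
# Plaintiff lists and cost per suit are invented, not real estimates.
SUIT_COST = 10_000_000  # assumed cost per surviving plaintiff group

maneuvers = {
    "a": ["bus_families", "passenger_family"],
    "b": ["bus_families"],
    "c": ["motorcyclist_family"],
    "d": ["passenger_family"],
    "e": [],  # no witnesses -> no plaintiffs, per the comment's dark logic
}

def expected_liability(plaintiffs):
    """Expected lawsuit exposure: one suit per surviving plaintiff group."""
    return len(plaintiffs) * SUIT_COST

best = min(maneuvers, key=lambda m: expected_liability(maneuvers[m]))
print(best)  # "e" -- which is precisely the objection to pure liability-minimization
```

The point of the caricature: an objective function that omits the victims entirely can be perfectly "rational" and still pick the monstrous option.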

Excellent scenario exposition.

One mitigating factor against (e), in the real world that also includes liability lawsuits as a consideration, is that the presence of a software agent also highly implies the existence of either (1) some sort of 'black box' recording of the whole accident's inputs and decisions; and/or (2) simulation systems for sending the agent through just these sorts of scenarios. So what happens (or could happen) on a remote mountain road under Google's chosen algorithmic optimizations would be hard to keep a corporate secret.

Ahh, but unlike in the trolley problem, the car also has to anticipate what the other drivers will do. Driving courses teach that if you meet an oncoming car in your lane, you shouldn't swerve left to try to pass on the opposite side of it, because the oncoming driver will most likely instinctively react by returning to his own lane. The best course for Google is probably to slam on the brakes and stay to the right. The motorcyclist may be sacrificed to the bus driver's instincts, but the bus driver will almost certainly not let the head-on collision occur if there is time for him to swerve.

More importantly, the fact that there are other actors affecting the outcome probably takes this out of the realm of moral dilemmas.

The correct answer to both should be yes, but it's not really something that would be programmed, because decisions like this are almost never simple binaries -- especially not in cars, which have far greater degrees of freedom than trolleys.

Philosophically, I don't see how the two problems differ. Given enough AI to recognize the trolley problem when the Google car sees it, you are simply asking whether to bake in the utilitarian or anti-utilitarian response -- however consistently or inconsistently you have answered the questions. In both the thought experiment and the programming problem, you are answering a question about the foreseeable consequences of future action, so don't the answers have to be the same? Why would you program a computer to do something different than you would do yourself (with all the caveats of the first clause of the second sentence above)?

Since personal connections are a legitimate part of morality, I always save the person I'm closest to, emotionally first, then physically if no emotional connection exists.

Since the Google Trolley is being programmed by someone who doesn't know the people involved, the programmer should try to maximize the expected number of lives saved.

I would want the car to do its best to save my life. I would pay more for that feature.

True that. The moral choice isn't the one that necessarily will sell. Which I suppose makes it not the moral choice.

...which might be the thing that the car programmers would have to do. Would you buy a car that touts itself as killing the fewest people while potentially putting you at risk? What if the situation is avoiding killing two people vs. killing the driver by swerving into the divider?

It's a question for the lawyers. Where there's an issue, the law will decide how to program it. Then there will be blog posts on econ and law blogs about the law costing efficiency due to liability. I can also imagine a scenario where the software makes a very, very bad choice, one that no human would make, and it becomes the one event used to define the industry, or even ban it. Or, full employment: every piece of critical software will need at least one human present at all times... until they write the software to replace that human.

I think a more practical question is what will happen if there are mass deaths caused by hackers.

In theory, you should push the fat man and program the same. In reality, you should not.

The Trolley Dilemma is a very contrived situation. Car AI will never be written to reason at such a level, nor does it need to.

If you think about it, the Trolley Dilemma is quite complex: It requires predicting two hypothetical futures, predicting a likely loss of life in each future, and then backtracking within a graph of causal inferences to attempt to choose whichever outcome the programmer decides is moral. In a car you'll have an unbounded number of futures. Exploring them this deeply, and with accurate understanding of causal dependencies and likely reactions of other cars, is just too speculative to be useful.

Real AI cars will never be written this way. Instead they'll have a variety of danger estimators. If some combined scoring of these passes a threshold, the car will initiate a fail safe behavior of making a rapid controlled stop.
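The "danger estimators plus fail-safe" architecture described here can be sketched as follows. The estimator names, weights, and threshold are all invented for illustration; a real system's estimators and scoring would look nothing this simple.

```python
# Sketch of the "combined danger score -> fail-safe stop" idea.
# Estimator names, weights, and threshold are illustrative only.
DANGER_WEIGHTS = {
    "obstacle_proximity": 0.5,
    "sensor_disagreement": 0.3,
    "traction_loss": 0.2,
}
STOP_THRESHOLD = 0.6

def combined_danger(scores):
    """Weighted sum of independent danger estimators, each in [0, 1]."""
    return sum(DANGER_WEIGHTS[name] * scores[name] for name in DANGER_WEIGHTS)

def control_action(scores):
    """No clever maneuvering: initiate a rapid controlled stop whenever
    the combined score crosses the threshold; otherwise carry on."""
    if combined_danger(scores) >= STOP_THRESHOLD:
        return "rapid_controlled_stop"
    return "continue"

print(control_action({"obstacle_proximity": 0.9,
                      "sensor_disagreement": 0.5,
                      "traction_loss": 0.1}))  # rapid_controlled_stop
```

Note there is no trolley-style branching here at all: the only "moral" content is the choice of weights and threshold, made offline by engineers, not at runtime by the car.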

I don't see much point in debating a philosophical view here, because it's an unrealistic scenario. When the problem doesn't match reality, there is no real answer. The count of angels on a pinhead is meaningless to discuss.

Control theory for dangerous devices (rockets, planes) is a well explored topic. I think the engineers have a handle on the reality here. I don't see much where economists or philosophers can contribute to that via a thought experiment that will never actually happen.

"If some combined scoring of these passes a threshold, the car will initiate a fail safe behavior of making a rapid controlled stop."
While normally true, this fails in some cases where there is no 'fail-safe'. For example, the car is driving along and another car rolls out in front of it, too close to safely stop. If possible, the car would swerve and avoid. If the other lane has a small unidentified 'something' (a pothole, dog, kid?) in the swerve path, should it swerve?

As an engineer that works on safety systems, I can tell you that scenarios exist where there are no 'safe' options... but which still have 'better' options. We expect things like the trolley problem to happen anytime there's some human interaction, because humans will periodically 'brain fart' and do things unexpected and dangerous.

The programmers don't make these decisions, they are based on training data. After enough lawsuits have accumulated, the AI will learn to make the choice that minimizes legal repercussions.

Does anyone avoid throwing the switch but decide to toss the fatty?

Is Google allowed to hire Denzel Washington?

The only "problem" here is something akin to political correctness. We all know what must be done, and when the trolley starts rolling we would do it.

What we don't want to do is acknowledge what we would do, because that makes us look bad.

I don't think this is correct.

Is Brad DeLong on the bridge?

What if the computer wrongly perceived through its separate perception system that 5 people were to die unless the person on the bridge were pushed?

Not only must the decision rule be right, but also the perception system.

If you write a rule ASSUMING perfect information, what happens when there is imperfect information?

It is not about numbers. Who do you care for: that one person you might personally know and care for, or those five evil strangers? Or the other way around: those five family members, or that one criminal? It is about the value you hold for each choice.

All things being equal (except perhaps the numbers), there is no correct answer when choosing between two evils. But a better decision would be to choose the lesser evil AND never stop looking for a better option. In other words, don't just stand there, DO SOMETHING.

Given the situation, I would throw the switch to divert the trolley AND try to do something else to prevent disaster. For version 2, I wouldn't push the fat guy. There's no guarantee it would stop the trolley; I might end up with 6 dead, 1 by murder.

But since it is just a scenario, until I give an answer, nobody dies. I'm still thinking.

Throw the fat guy over. Even if no one's lying on the tracks. Fat guys are public health criminals.

Is fat guy an organ donor?

If "YES", throw him off the bridge.

If "NO", then throw him off the bridge.

You probably get the organs after the trolley runs him over.

Crap. You probably can't get the organs.

Google is data-driven and trusts large datasets over any other reasoning. Google would constantly A/B test different strategies in the field, varying car decisionmaking slightly by time, road, region until statistically significant patterns arise. At that point, they'd adjust to maximize some desired output.

At that level of analysis, some of the objections that the scenario is contrived fall away. The car doesn't need certainties from super-sensors and super-modeling to decide between stark outcomes. A data scientist thousands of miles away knows that if tuning variable X is 2, cars cause one mix of passenger/nearby-motorist/bystander death and dismemberment; if X is 3, a slightly different mix; and if X is 4, yet another mix. In aggregate, totally impersonal. (The data might even tease out effects on mortality rates through mechanisms that don't even involve reported accidents.)

As the comments about ad revenue, android-vs-iphone users, and liability suits suggest, Google's staff might not be strictly maximizing survivor-count or years-of-productive-life-remaining.

But perhaps the right kind of adjusted liability system could help. Maybe with autonomous cars, there's not even a need to investigate individual cases. There's just a large bill on an official schedule for each tangentially-associated death/injury, with the understanding that Google's algorithms will mix that into their (mostly monetary) optimization calculations, and the constraint that the end-result be far fewer deaths than when humans ruled the roads.

So the Google-Trolley problem becomes: what bill should be imposed in each situation. Unless the billing-entity can make fine distinctions about 'switch-like' or 'push-like' or 'passivity-like' situations, it would probably be sending 'a life is a life' input into the Google system, pushing Google-Trolleys in the purely utilitarian direction.
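The aggregate optimization described here is easy to sketch. Suppose (with entirely invented numbers) that field data maps each setting of the tuning variable X to a death rate per victim class; the "billing schedule" then just becomes a weight vector over those classes:

```python
# Invented aggregate data: deaths per billion miles for each setting
# of the tuning variable X, broken down by victim class.
outcomes = {
    2: {"passenger": 1.2, "motorist": 2.0, "bystander": 0.8},
    3: {"passenger": 1.5, "motorist": 1.4, "bystander": 0.9},
    4: {"passenger": 1.9, "motorist": 1.1, "bystander": 1.2},
}

def best_setting(outcomes, weights=None):
    """Pick the X minimizing the weighted total death rate.
    'A life is a life' corresponds to equal (default) weights."""
    weights = weights or {}
    def total(mix):
        return sum(weights.get(cls, 1.0) * rate for cls, rate in mix.items())
    return min(outcomes, key=lambda x: total(outcomes[x]))

print(best_setting(outcomes))                      # equal weights -> X = 3
print(best_setting(outcomes, {"passenger": 3.0}))  # bill passenger deaths 3x -> X = 2
```

With equal weights the purely utilitarian setting wins; triple the "bill" on passenger deaths (say, because customers pay for their own safety) and the optimum shifts. That weight vector is the whole Google-Trolley problem in miniature.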

This comment is well thought out, but I still fail to see just how a true 'trolley problem' can arise in the real world.

If there are people in the road, the cars will stop, not swerve into other people. Worst case, the car gets rear-ended by another car behind driving too close, and traffic laws already assert the driver behind is at fault for that collision.

The trolley problem 'works' because a trolley presumably can't just stop, it has to be stopped (by the falling fat guy, not sure how that works in reality) or rerouted by switching. A Google car is just a car. It sees a person, it stops.

See the above 'mountain road' scenario from Macarena: the human bus driver is doing something rare and illegal, forcing a stark decision fraught with mortality on the law-abiding and perhaps even-(electromechanically-)quick-witted car-driving protagonist.

Or picture all sorts of other accidents: a pedestrian tripping and falling into the roadway. Brake failure in cars entering from adjoining streets. A loose beam escaping from a truck and endangering the Google-car's passenger -- does the car swerve onto the sidewalk or into oncoming traffic? It's not as simple as car-can-stop-where-heavy-trolley-on-rails-can't.

But even without imagining an exact scenario matching Trolley-Problem-like mortality payoffs, my larger point is that *summed over all the data*, Google's optimization data scientists will face the exact same question. Every tuning of the algorithm will, summed over all historically-measured situations, have slightly different expected mortality effects on different relevant classes of potential victims. (Example classes being: Google customer, pedestrian bystanders, jaywalkers, other law-abiding drivers, other law-breaking drivers, cabinet officials suffering seizures, etc. ad infinitum.)

This dilemma may not even be new to Google. Even now, they may have enough data to be able to tell when certain changes to their PageRank algorithm, via their effect on what searchers see, have statistically significant effects on household-poisonings, do-it-yourselfer-carpentry-accidents, suicides, kitchen grease fires, soon-enough-diagnosis-of-fatal-if-untreated conditions, etc. *Google's algorithms are already trading off different kinds of mortality among different classes of people.* The Google-Trolley problem is just another day-at-the-office, then, optimizing some different equations.

Whether the problem is contrived or not is a good question, but it doesn't bother me. How the ethics work out is a good question, but I am not smart enough to figure that out. What I DO find to be very real about this is the legal questions "autonomous" cars raise. Assume the Google car sees a person in the road, and stops to avoid hitting and killing that person. (Assume the person sprinting into the road does it so quickly that we don't have the escape hatch of the AI turning the car back to human control.) The car is struck by the car behind the G-car. In that collision, the small child in the front seat of the trailing car is killed. This is not much of a contrived situation. Now, to be very morbid: you are the parent of the deceased child. Why would you NOT sue Google? I am not saying you should, or you could, or whether you would be "right" or "wrong," but you know this will happen... and so isn't it a valid and useful exercise to try to work out the legalities of this in advance, at least as best we can?

You would not sue Google because you would lose. In a rear-end collision, the operator of the car in back is almost always deemed at fault, for following too closely. Here, at least, both the legalities and the operational realities have already been thought out in advance, and adding auto-driving to the mix changes nothing.

"Never follow another vehicle so closely that you cannot stop safely if that other vehicle applies full braking with zero warning", is a simple rule that provides near-optimal outcomes in virtually all real cases. And, yes, is still implemented by platoon auto-driving schemes that involve following distances too close for human response - the safe stopping distance is quantitatively reduced, but the concept is still implemented. Why would anyone be daft enough to replace that with a set of rules that place small children in the trailing car at risk and put the lead car in the untenable situation of being required to maintain a certain minimum speed for some distance no matter what appears in the road ahead?
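The rule quoted above can be written down directly. Under the simplifying assumption that both vehicles brake equally hard, the braking distances cancel and the required gap reduces to the distance covered during the follower's reaction time:

```python
def safe_following_gap_ft(speed_mph, reaction_s):
    """Minimum gap (feet) so the follower can stop even if the leader
    applies full braking with zero warning. Assuming both vehicles
    decelerate at the same rate, their braking distances cancel and
    only the reaction-time travel matters: gap = v * t_react."""
    v = speed_mph * 5280 / 3600  # mph -> ft/s
    return v * reaction_s

print(round(safe_following_gap_ft(60, reaction_s=1.5)))  # human driver: ~132 ft
print(round(safe_following_gap_ft(60, reaction_s=0.1)))  # computer: ~9 ft
```

Cutting the reaction time by an order of magnitude shrinks the safe gap accordingly: the same concept, quantitatively reduced, which is exactly how platooning schemes stay within the rule.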

Aside from providing material for armchair philosophers, that is. It is becoming increasingly clear that if you want to generally minimize the number of dead bodies, you really need to have your transportation infrastructure designed by engineers, not philosophers. Our systems pretty much never require e.g. sacrificial fat men to stop runaway trolleys.

By the time autonomous cars become good enough for us to worry about weird edge cases like this, they will be so prominent that the trailing car will also be autonomous. The computer controlling the trailing car will be aware that the first car has slammed on the brakes before the occupants of the first car realize what's going on.

(And the kid in the front seat is a major no-no, to say nothing of John Schilling's point about a rear-ending car being nearly guaranteed to be at-fault.)

Good points all, I can only agree. Though I don't think being rear-ended by a car following too closely, when I slam on the brakes to avoid someone in the road, is an "edge" case. (And given Americans travel 3 trillion miles per year, edge cases will always emerge, and will get news coverage! Who would have thought Toyota would have to spend $5 billion -- and the government would even call in NASA -- to investigate and correct how floor mats bunch up under gas pedals!)

But never mind: I still think that as we move to autonomous cars (and don't get me wrong, I support this movement), we need to include the legal dimension as well as the philosophical, engineering, and other such dimensions. And we cannot assume that the legal dimension will be rational, or fair, or right... but the lawsuits will have very real costs.

Thus, for example, we have known how to do electric braking (electric actuation of calipers, rather than hydraulic) for many years, but it has hardly penetrated, even though it is a superior technology (it is kind of odd to think we are all driving around with plumbing systems in our cars...). Why? (And I know this from working with a brake company.) Legal risk: no OEM or brake company wants to be in the witness box saying "Yes, the old system worked just fine, but we thought this one had advantages, and we are very sorry your family is dead, it was not our fault." Lawsuits will always try to find the deepest pockets and attack them, right or wrong, and if we don't have a basic legal framework in place for autonomous cars, I fear that the legal assaults will hold the technology up for a decade. I know Ford a few years back was simultaneously being sued for installing airbags (deadly!) and for not installing them (deadly!).

I worry that some innovator will blithely sail off onto some highway, assert that logic and physics are on his side, something odd will happen, somebody will die, and then the legal firestorm that erupts will act as a "Hindenburg moment" that sets the whole thing back a decade. I am not arguing the merits of autonomous cars, which are very high; I am arguing against the mindset that implies the trial lawyers can be overlooked because they do not have the facts on their side. Have the engineers design the system, yes, but "lawyer up" at the same time and get the relevant statutes in place, or we will be in deep trouble. Just browse the Association of Trial Lawyers automotive-section webpage to get an idea:

Churchill was right. Play God and let the bombs fall on Coventry.

Push the fat guy too. Not because he's fat though. Sorry fat guy.

You don't ask philosophers or economists these types of questions; you ask lawyers. It is totally a legal question, and lawyers are the only ones who can give a definitive answer. This is the beauty of the English common law system - there will already be a bunch of precedents (for instance, from human drivers). If there are not, they will be quickly created to allow exploitation of new innovations like this.

This thread illustrates the difficulty of arriving at a rationally derived solution to an emotional problem.

I simply don't see this driving as analogous to the trolley car problem and I don't think society does either.

The key distinction is whether the agent is obligated to act or not. The driver or programmer - they are equivalent - must act. The bystander on the bridge need not.

The driver who swerves to kill one on the sidewalk to avoid five on the street is not blamed, but praised (assuming he, she or it is not otherwise at fault). And if they do not swerve, they are not blamed either. It is understood that the driver must make a decision quickly with very imperfect knowledge.

This is very different from the man on the bridge. The man on the bridge, unlike the driver, is not forced to make a choice about whom, possibly, to kill. He is not in the driver's seat. Because he is not forced, he should, in the real world, be extremely reluctant to take any action that might involve killing someone to save someone else. He can't possibly know whether his action would in fact have the effect he desires. (The driver doesn't know either, but must do something, so acting on imperfect information is OK.) If he does push someone over, he would be considered a monster and charged with murder.

Does anyone doubt that the robot driver should be programmed to minimize the expected number of casualties, just as I think in practice most human drivers would do? That there is no moral conundrum here? Likewise, does anyone doubt that society would never allow a programmed robocop to push someone over the bridge to save five others?

Fascinating conversation; this will probably be the most commented-on post of 2012.
I think Google (and AI in general) is very far away from handling the trolley problem.
I took Sebastian Thrun's AI and Udacity class. One thing is pretty clear from his lectures, office hours, et al.: if and when the trolley problem occurs, the car hands control over to the human driver.
(They are still struggling to drive the driverless car in snow, in low visibility, and anywhere on the East Coast in the winter.)

This problem is about thinking (thought processes), the constituents of our behavior and decision making, culture, and our perception of morality.
Individual human behavior and organizational behavior (Google) are very different. Organizations and organizational thinking tend to be rational (except when it comes to money, especially other people's money: Wall Street, greed). So if you survey 100 people and 100 organizations you will see very different results. I can assure you the majority of individuals will say they would not push the fat man to save 5 people, but an overwhelming majority of organizations would have no second thoughts in the same situation (rational and logical behavior).

The question asks about a programmer at Google. I am assuming that, just like in most organizations, the programmer will most probably follow the corporate directive and will not act on his own (he will ask his boss, and most probably the decision will be a collective decision by the governance body, the board of directors).

The key thing is that individual behavior and action and organizational behavior and action will be different. (Organizations have no heart.)

Actually, this is *not* how philosophers understand "The Trolley Problem". For details (and more problems) see:

Downing a plane to avoid hitting the WTC?
The German constitutional court voided a law that allowed hijacked planes to be shot down to save people on the ground.
Italy had, or has, a law forbidding the payment of ransom, sacrificing a known life for an unknown kidnap victim.

I agree with people that say this situation is contrived.

Here are some issues to consider:

How long do we have to make this decision?
What are the consequences of this decision?

Note that both of the issues I am raising here are trick questions.

"In real life" you would have but seconds to make a decision about the moving trolley, possibly you will have only a fraction of a second. If you had a longer time to make the decision that time could have been used to bring the trolley to a stop without killing anyone.

This is a crucial issue -- it means that "real life" decisions of this sort are going to have to be made without time for reflection, or proper analysis.

But what about the consequences, how can that be a trick question? People are dying here...

Well, not really -- in a forum where we have time to consider this kind of question, no one is dying. In a "real life" version of this problem, people will most assuredly die. Except, perhaps, if the "real life" version had some other alternative available (that's a problem with hypothetical problems -- they are necessarily much simpler than real life). So let's take a step back and ask "how did this problem occur in the first place"?

In this supposed "moral problem" we are not allowed to ask how this problem arose in the first place. And yet, people are going to be killed here. And yet, if we were really going to solve this kind of problem in real life we would be doing everything we could to *prevent* this problem from occurring in the first place. That's why, for example, we have laws that would let us prosecute someone tying people to trolley tracks (though, granted, any such law that is worthwhile will be quite a bit more general than that).

So... one way of addressing this kind of "hypothetical moral problem" is to take it at face value and accept the hopelessness of the situation. Here, counting the number of people is what you do, because when the problem is taken at face value you are allowed no other useful information. Another way of addressing this kind of "hypothetical moral problem" is to recognize how poorly it reflects real-life decision making -- the "problem" itself is the problem. Here, the numbers are meaningless, because they are not counting anything meaningful, and the best course of action is to reject the problem itself.

The trolley problems are MORAL questions for people, not for systems that are programmed.

The actual math behind this type of "no-win" scenario doesn't look the same to a computer, for a variety of reasons. It won't calculate "no-win." The computer will calculate the "best odds" of avoiding ANY impact, or of REDUCING the impact. Since the trolley doesn't injure its driver (unlike in a vehicle crash), the scenario is not only inapplicable, it isn't even the right question to be posing.

A better question is whether the car will drive itself into a tree (fixed object) to avoid hitting another car or a pedestrian.

We can pose any number of moral dilemmas for the computer system, but this system will be designed to avoid impacts. That is its goal. Plug it into your scenario with the two "tracks" and the answer is:

the longer distance will give it the greatest odds of coming to a complete stop. The number of people doesn't matter. It just calculates odds of impact. The option with the best odds of avoiding impact will be the choice. If impact is made, it will be minimized to the extent possible.
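The decision rule this comment describes, avoid impact first, minimize severity only as a tie-breaker, is a lexicographic preference, which is different from the expected-casualty calculus discussed earlier. A minimal sketch, with all probabilities and severity scores invented for illustration:

```python
# Sketch of the comment's rule: prefer the option with the best odds of
# avoiding any impact; break ties by lower expected impact severity.
# All numbers are made up for illustration.

def pick_track(options):
    """options: dict name -> (p_avoid_impact, expected_severity_if_hit)."""
    # Tuple ordering: higher avoidance probability wins;
    # lower severity (negated) breaks ties.
    return max(options, key=lambda o: (options[o][0], -options[o][1]))

options = {
    "short_track": (0.10, 3.0),  # little stopping distance
    "long_track":  (0.60, 1.5),  # more room to brake
}
print(pick_track(options))  # long_track
```

Notice that this rule never weighs one life against another: the count of people on each track does not appear anywhere in the objective, which is precisely the commenter's point.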

The autonomous car can't push a fat man into the oncoming trolley. The only thing it could do, hypothetically, is to push itself into the trolley's way. But that would not minimize impact.

Let's stop asking our cars to be moral on top of acting consistently with their instructions. Garbage in, garbage out.
