How do people actually behave when faced with trolley problems?

The poor mice!

Scholars have been using hypothetical dilemmas to investigate moral decision making for decades. However, whether people’s responses to these dilemmas truly reflect the decisions they would make in real life is unclear. In the current study, participants had to make the real-life decision to administer an electroshock (that they did not know was bogus) to a single mouse or allow five other mice to receive the shock. Our results indicate that responses to hypothetical dilemmas are not predictive of real-life dilemma behavior, but they are predictive of affective and cognitive aspects of the real-life decision. Furthermore, participants were twice as likely to refrain from shocking the single mouse when confronted with a hypothetical versus the real version of the dilemma. We argue that hypothetical-dilemma research, while valuable for understanding moral cognition, has little predictive value for actual behavior and that future studies should investigate actual moral behavior along with the hypothetical scenarios dominating the field.

Here is the paper by Dries H. Bostyn, Sybren Sevenhant, and Arne Roets.

It seems to me that Kant lived a life in accord with his actual doctrines, as did Socrates. But most philosophers? Most economists, for that matter? It would be interesting if there were an app that recorded your life and then wrote up the corresponding moral doctrine in book form. Or, in the case of the economists, it could write out your utility function and your adherence to the principle of maximizing expected utility. Or not.

Hat tip goes to Dina Pomeranz.

Comments

So what is their conclusion in plain language? People are cowards who know what the right thing to do is but are incapable of following through? That an action that causes pain is somehow qualitatively different from a theoretical choice involving pain?

But that people feel guilty about their decisions afterwards?

This is an interesting study and no obvious problem with the methodology occurs to me - except that students probably do know the modern psychology department is incapable of hurting a mouse, whatever they say later on. Moreover, it is likely to be true because it aligns pretty well with what I think my grandmother would tell me. Grandmothers being a lot more sensible and reliable than academics.

A clever study.

When confronted with the reality of the dilemma, people opt out of being the decision maker.

Which is a decision to condemn the large group currently under threat.

Not saving someone is not the same as killing someone. The trolley problem is a false choice. It proves too much. First, this post (and everything but the altruism-maximizing activity) must itself be murder, for my choice to spend this time on entertainment rather than on service to a cause, or on my career to enable maximally effective altruism later. Second, as Ayn Rand pointed out in her criticism of altruism, altruism requires sacrifice, and sacrifice is defined as giving up value; since value is subjective, group altruism lowers group utility in a sort of anti-market where everyone simultaneously reduces themselves to build up others by engaging in welfare-reducing anti-trade.

So, to state my conclusion plainly: a society fully living by the trolley problem, which is to say living by utilitarianism, ends up an evil society, fully contradicting the intention of the exercise.

That's a pretty cogent summary of the shortcomings of the trolley problem. Gonna cut and paste this so I don't lose it.

You aren't morally culpable (or at least people don't feel morally culpable) for events that happen as a result of inaction rather than action. In context, some series of events for which you are not responsible put the people on the track. Unless it's your job to operate the switch (i.e., you have voluntarily taken on some responsibility for deciding when to switch the tracks), if you do nothing, you remain not responsible for the situation that put them on the track. Likewise with the mice: people can just say, "I didn't put those mice in the box, the experimenter did, so I'm not the one shocking them, the experimenter is. I have nothing to do with this situation."
I think you can make a valid argument that they should feel responsible, but I think that's the psychology of it, and they're not entirely wrong - someone else set up that whole situation so that the mice would be under this threat of being shocked.

It would be interesting if there were an app that recorded your life and then wrote up the corresponding moral doctrine in book form.

What a bizarre way of looking at things!

Instead of considering how well we live up to objective standards of behavior, Tyler regards each person as a Nietzschean superman determining his own morality and performs a statistical regression to extract some kind of "doctrine". Must be really weird to live your life and look at things that way.

Wouldn't it be simply pretty much the same as keeping a record of all your sins and then reflecting on your failures? Perhaps something a religious reactionary might be expected to support?

All this app would tell us is how far we fall short of our ideals - which is not surprising for any worthwhile set of ideals. I think that might be healthy.

Wouldn't it be simply pretty much the same as keeping a record of all your sins and then reflecting on your failures?

Not at all.... Tyler is talking about each person and "his doctrines".

If a religious person responds to a trolley problem (or better: a more realistic, less contrived test of behaviour) by acting or failing to act in accordance with religious laws and teachings, Tyler's hypothetical person acts either according to "his doctrines" (whatever they may be) or according to spur-of-the-moment intuitions (the latter being regarded as some kind of failure, even though the person was probably behaving according to well-understood, normal psychological principles).

That doesn't surprise me at all, and I'm actually a bit happy about it.

I always disliked the trolley problem, because in my opinion it asks the wrong question. If you absolutely know everything, then yes, hurting one person instead of several clearly is the better option.

However, in most real-life situations, the question isn't "what to do if this is a trolley situation?" but "is this a trolley situation?"

Doing horrible human experiments for some later great medicine isn't bad because every life is precious. It's bad because you don't know whether it will actually pan out, and you have the option of just doing non-horrible experiments instead, which is at worst usually just slightly slower.

Harvesting the organs of one person to save several is bad because the rejection rate is still significant, and if we'd use more of the organs of dead people, we wouldn't have that problem. And so on.

Morality in practice seems to me more about likelihoods and possibilities than about absolute truths.

There comes a moment when morality has to face the fact that the cat is either alive or dead. If you harvest the organs of a random person you *may* save the lives of some people, that is true. But you *certainly* kill the donor.

However, I agree that with the trolley problem the issue is usually, in the real world, fuzzy. Sometimes both sides are fuzzy: should I donate all my money to Doctors Without Borders or to the search for a cure for malaria? Sometimes one side is: should I spend all my money on myself, or should I give it all to someone with cancer who needs treatment not available in his country? And sometimes it is statistical and yet still fuzzy: should I give all my money to the Sisters of Charity so they can run hospitals in Africa, where a specific dollar may not save a specific person but on average the money as a whole will be used to save some people?

I would think, contrary to Peter Singer, the fuzziness is enough for me to keep my money at home and buy a new TV. As I suspect he does.

In some universes I throw the switch, in other universe I don't. No sweat.

"Harvesting the organs of one person to save several is bad because the rejection rate is still significant, and if we'd use more of the organs of dead people, we wouldn't have that problem. And so on."

I disagree. Harvesting the organs of one healthy person would be a great evil even if the rejection rate was zero and it was impossible to use organs of the dead or dying as an alternative. The reason is that it fundamentally degrades the value of human life to use a person as either a collection of spare parts or an emergency trolley braking system. If a human life is worth so little as to be usable in these 'mechanical' ways, then what, really, is the source of the moral imperative to help the others in the first place? If a human can be used as 'mere means', as a collection of spare parts or a wheel chock, why go to the bother of saving any of them at all? In that case, just turn around, walk away, and let the trolley go wherever it is headed.

That said, I think this mouse experiment doesn't really get at that 'mere means' question. The mouse experiment is equivalent to the version of the trolley problem where one can throw a switch to direct the trolley down a siding where there is only one person standing on the tracks, rather than let it proceed down the main line where there is a crowd. That is morally a very different action from throwing a fat person onto the tracks to stop the trolley (even if the body counts are the same).

"If a human can be used as 'mere means', as a collection of spare parts or a wheel chock, why go to the bother of saving any of them at all? In that case, just turn around, walk away, and let the trolley go wherever it is headed."

What's the logic here? That if you can do something bad to a person that makes them worthless? I don't get it. You can't take away one beautiful precious life to save five beautiful and precious lives? I get your ultimate conclusion but not your reasoning here.

The reasoning is that if a precious human life can be converted to just a meat bag of spare parts when convenient, then a life must not be particularly precious (they're certainly not rare!). So in that case why bother yourself about trolley problems, since it's only just more meat bags of spare parts in a world filled with billions of them?

I would still rather have 5 bags of meat than 1 bag of meat.

I disagree; if you are not purely utilitarian, there is a good argument that it is wrong to sacrifice the one to save the five. I would say someone is not morally responsible for the five who were placed in danger through some other force, but is morally responsible for placing an innocent person into danger. Many philosophies, people, and legal systems draw a distinction between action and omission such that killing one person is morally worse than failing to save five people.

Although this belief does not have to be utilitarian, one utilitarian reason for that belief might be that people in the real world are so flawed and biased that they shouldn’t be trusted with the ability to weigh which lives are more worthy.

Also because who is more worthy is unknowable, since it may depend on the future.

Hence the reliance on estimating the value of a life pretty much solely on average future earning power.

When hearing the trolley problem I'm wondering why so many people keep walking about in front of a trolley without paying attention.

The answer I usually give is that in real life, trolley problems occur in a context of other events which created the problem. For instance, the circumstances which led those people to be on the track, or which led the individual to happen to be near the switch which could save them. Maybe they disobeyed signs to stay off the track, maybe it's not your job to operate the switch, maybe you don't know what other people might be on the other track. There's rarely a perfect situation in real life that matches the trolley problem; there's a whole context of past events and present circumstances surrounding it, which changes the moral calculus of the choice.

While in reality the context leading up to the situation is likely going to influence the decision, as a matter of cognitive bias, economic modeling would likely assume (or prefer) that the solution be independent of irrelevant facts such as sunk costs.

There's a lot of other things which could be going on besides sunk costs. Like not having full knowledge of who is on what tracks, or the full consequences of flipping the switch. Especially in real life, would you run up to a switch at the subway stop and just flip it because you *thought* someone was on the track? When you have no idea what else might be going on? You could kill hundreds of people on another train! Or the five people on the track might be terrorists who were planning to blow it up. You never have full knowledge of the consequences of your actions.

In real life, trolley problems rarely ever occur. Given that -- alongside the hundreds of millions of bodies piled up during genocides in the past century -- even utilitarians ought to favor deontological ethics for utilitarian reasons (the problems arising from breaking moral codes have been orders of magnitude greater than problems arising from blindly following them).

I'd like to see the hundred-dollar-bill-lying-on-the-sidewalk and the assume-we-have-a-can-opener experiments done with economics grad students.

If there were a can-opener on the sidewalk, someone would already have assumed it.

But would you run over 5 people to get the hundred dollar bill?

Wouldn't legal services cost much more than that?!

"among philosophers expressing the view that the overall quality of ethicists’ moral behaviour varies according to their broad normative commitments, nearly all said that Kantians behave on average less well than the others."

"A number of people stole candy without completing a questionnaire or took more than their share without permission. One eminent Kantian ethicist grabbed a single Ghirardelli square [chocolate] in passing and announced, ‘I’m being evil!’ Unfortunately, we were unable to study this behaviour systematically."

From The Moral Behaviour of Ethicists: Peer Opinion (2009): https://academic.oup.com/mind/article/118/472/1043/1052434

h/t Siebe Rozendal

"among philosophers expressing the view that the overall quality of ethicists’ moral behaviour varies according to their broad normative commitments, nearly all said that Kantians behave on average less well than the others."

So the Singerians, Benthamites etc. smeared the Kantians ... surprise surprise.

Rozendal himself seems to be involved in the "effective altruism community", which is concerned with "normative animal welfare". So how much of these evaluations by "philosophers" (actually just philosophy scholars) depended on differing definitions of behaving well? Maybe they said that because the Kantians were eating meat.

"Kant is the most evil man in mankind’s history." -Ayn Rand, The Objectivist, Sept. 1971, 4"

Possible methodology issues:
1. The participants are sophisticated enough to know that the shock is fake.
2. It is not comparable to the hypothetical trolley problem, because moral intuitions frequently treat homicide as categorically different from other harms. (For example, a typical U.S. criminal code allows a necessity defense to most crimes, but for homicide, necessity is restricted to self-defense. Those who would choose to push in the trolley problem would be criminally culpable. But when the stakes are reframed to fall short of life and death, suddenly the greater good becomes a defense.)

Yeah, #1 was likely a problem with the original Milgram experiments according to recent reviews of them. And now there is the additional problem that the Milgram experiments are widely known.

#2 is a good point as well, and in addition, for most people there is a divide between harm to humans and animals.

What about the elusive "rational voter"? Here is Thomas Edsall's contribution to that topic: https://www.nytimes.com/2018/05/10/opinion/democrats-partisanship-identity-politics.html What is "rational" behavior? Economists inform us that it is self-interest (i.e., making choices based on what maximizes one's self-interest). According to that standard, Donald Trump is the most rational person I know.

Surely we all know people who talk mean but are actually teddy bears, or talk tough but are actually non-confrontational. We can study it all we want, but some people stop their car at the scene of accidents and others don't. Some people find a wallet and go to great lengths to find its owner, and others do not. Some people return the money when they get too much change, and others do not. Some people jump in a freezing lake to save a cat, and others do not. And those who do it one time might not do it another.

Does anyone want to argue that the people who come up with these kinds of experiments lack basic morality? Can such experiments, with their manipulation and dishonesty and sadism, meet a Kantian or Benthamite moral code?

There is the famous, though perhaps apocryphal, account of Sir Winston Churchill sacrificing the city of Coventry to protect the secret that the Allies had broken the German Enigma code.

Regardless of whether that particular story is true or not, we are most likely to see actual Trolley Problems in wartime decisions.

If the movie Darkest Hour is historically accurate, Churchill also sacrificed a brigade in order to save the 300,000 troops stranded at Dunkirk.

Indeed. And those are merely grand anecdotes of decisions made constantly in wartime.

Be it who walks point, who serves as a skirmisher, or how to deploy destroyers.

Here I was hoping for mice tied to a miniature track, in fear of being run over by an adorably lethal mouse-scaled trolley.

The utilitarians over at Slate Star Codex are probably working on a prototype right now.

Seeing this and remembering similar threads here, I'm surprised Tyler hasn't done a review of "Avengers 4", which is an entire movie about the trolley problem.

Entertainment is filled with implicit trolley dramas -- will Spock risk the ship and entire crew to rescue Kirk from Tholian space, or from the meteor-plagued planet with all those Native Americans running around? Will Kirk risk the ship and entire crew to find Spock's shuttle craft amid the giant space protozoa ...

Minor correction to the above: it's Avengers 3, not Avengers 4. It's hard to keep track.

A Candid Camera approach to the trolley problem. We could link it to social media accounts. Live updates. Ugh. Narcissism run amok.

We are back to signaling. Is this how we should view others, as disingenuous? For example, let us consider the excellent Megan McArdle's latest column. Should I cynically dismiss this as signaling to #MeToo? Or can I choose to be generous and admit that her candor was refreshing?
Please don't get me wrong, I am not naive. I know there are malicious interlopers. I grew up in a church full of very bad men who claimed the divinity of God for themselves. My point is that we need more generosity and altruism to go along with our skepticism because otherwise we are just talking past each other.

Is the question of whether to torture a detainee who resists providing life-saving information a kind of trolley problem? Discuss.

There's another way to experiment with trolley problems ... Conduct an auction!
More here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2492557
