What good is training in moral philosophy?

Eric Schwitzgebel and Fiery Cushman have an interesting paper (pdf, published Cognition version here) which raises that question rather directly:

We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating in the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.

I wonder to what extent economists do better at treating sunk costs as sunk?  The pointer is from Michelle Dawson.

By the way, ethicists are not more ethical.  Just in case you were wondering.  Are economists more economical?


My father was very good at treating sunk costs as sunk. But then he had a business to run. Maybe the term "economist" should be reserved for people running businesses and "fantasist" for people teaching Economics.

So can we finally admit the universities are a bad joke and turn them into stables?

How else can I signal my willingness to piss away years of my life for no gain (see: Corporate America's preferred employees) ?

Biologists are no more biological...

This does bring to mind the old business school saw of 'if you are so smart, why aren't you rich?'

Personally, I blame incomplete markets: http://tuvalu.santafe.edu/~leb/paper-22.pdf

Which in turn brings to mind Aristotle's story about Thales and the wine presses.

"Most of the ancient philosophers and the great moral visionaries of the religious wisdom traditions, East and West... assumed that the main purpose of studying ethics was self-improvement. Most of them also accepted that philosophers were to be judged by their actions as much as by their words. A great philosopher was, or should be, a role model: a breathing example of a life well-lived."

The curious thing is that we expect football players, politicians and common people to be a breathing example of a life well-lived. Laypeople should behave like ancient philosophers with romanticized biographies.

Also, there's an interesting nudge to religion. The basic idea is to acknowledge that I'm not perfect and I'm not better than others... thus I should be patient and kind with others. But it got supplanted by an "I'm better than you" discourse. What's the point of becoming ethically superior if you become an unbearable annoyance or a risk to other humans?

In the end I agree with the author: philosophy is not a walled and well-signaled highway to superior ethical behavior. It's just a discovery tool. Articulating this idea, even if it's not original, is something that needs to be repeated, frequently.

Then perhaps Augustine was right in his claim that education cannot make someone better ethically.

"The basic is to acknowledge I'm not perfect, I'm not better than others... thus I should be patient and kind with others."

That is at least one version of what Christianity teaches, but I hardly think it characterizes religious ethics generally.

Perhaps we could re-frame the results in the Schwitzgebel and Cushman paper by saying that most moral philosophers are not really good Bayesians. In particular, we could assert that moral philosophers (and economists, for that matter) are more like lawyers--that is, they are more like "advocates" for a given point of view than they are searchers for the truth ... If so, to what extent is this true of most people generally?

That brings to mind the quote attributed to Galbraith:

"Faced with the choice between changing one's mind and proving that there is no need to do so, almost everyone gets busy on the proof."

Apparently it also applies to economists and ethicists no less than to anyone else?

Even the people who study cognitive biases suffer from cognitive biases.

The biggest danger from looking at them is deciding that 1) because you know about them you won't suffer from them, 2) but now you have a way to disassemble all your opponents' arguments.

Each time I think I've conquered them, if I look at myself from outside I realize I haven't.

Not necessarily. You can train yourself out of at least some cognitive biases. I know- I've done it. For example, framing an incentive as a penalty versus a bonus affects people's decision-making, but it's pretty easy for me to flip frames here. In fact, I find 'economic thinking' useful for this particular bias.

If I have a beef with Kahneman, it's this: he largely applauds people overcoming these biases when they can - it's the rational thing to do, after all - but later in "Thinking" he sort of equivocates and suggests there are times these biases should be indulged.

The trouble is that in the big picture I don't even know if the biases really are bad. Take the loss-vs.-bonus framing: the two are equivalent only if you look at the amount of money in your wallet.

If you have suffered something that is framed as a loss, that means you have taken a kind of kicking from someone, and both of you know it. This sets a precedent, or at least reveals information about relative power. If someone has felt the need to come and offer you a bonus, it's the same deal in reverse. There is *every* reason for a rational animal to prefer the latter.

@Adrian, I think I disagree. Obviously, a loss is worse than a gain. The question from prospect theory is: does a loss of $x hurt more than a gain of $x helps?

If you have a net worth of $100,000, I can't imagine a rational utility curve would assign a significantly greater delta to a $100 loss than a $100 gain.

In general, if the gain or loss in question is small relative to net worth, loss aversion bias promotes irrationality.

Of course, if the gain or loss in question is $50,000, now I understand that significantly different utility deltas might be rational.
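The small-stakes point can be made concrete with a quick numeric sketch. The log-utility assumption and the parameter values below are my additions for illustration, not anything from the thread; the prospect-theory numbers use the standard Tversky-Kahneman (1992) estimates.

```python
import math

# 1) With a concave log-utility curve over total wealth, a $100 loss and a
#    $100 gain against $100,000 of net worth are almost perfectly symmetric.
wealth = 100_000
gain_delta = math.log(wealth + 100) - math.log(wealth)
loss_delta = math.log(wealth) - math.log(wealth - 100)
ratio = loss_delta / gain_delta  # ~1.001: essentially no asymmetry

# 2) The prospect-theory value function, by contrast, is defined over changes
#    rather than wealth levels, with losses weighted more heavily.
def pt_value(x, alpha=0.88, lam=2.25):
    """Value of a gain/loss x, using the 1992 Tversky-Kahneman estimates."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

pt_ratio = -pt_value(-100) / pt_value(100)  # = lam = 2.25
```

The contrast is the whole point of the exchange above: a curvature-of-utility story cannot generate a 2-to-1 asymmetry at stakes that are tiny relative to net worth, whereas a reference-dependent value function produces it at any stake size.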

Good for you. I've completely failed to do the same thing. My best results are against the sunk costs fallacy, but even then I'm way too prone to reverse it into an "almost there" fallacy.

My experience is that knowing those biases helps in understanding other people, and that's very valuable. But I see no way of escaping them, even if I turn everything into numbers and put it on paper.

@Adrian Ratnapala
Probably all of the biases exist either to save cognitive energy (or time) or because they provided very good fitness for a human living in a natural environment in a group with simple politics. That does not mean they are good for our environment.

Ethics belongs to the theology department anyway, these clowns are not philosophers.

Economists being economical:

Well, as far as the location of the annual meeting goes, economists do get all economical: http://www.wsj.com/articles/SB126238854939012923

Fred Kahn used to (jokingly, I hasten to add) say, "But those are MY sunk costs!" when sunk cost arguments ran against his position.

human, all too human

I am not surprised about the loss aversion results.

In fact, if you think about it, the only reason you are surprised about the loss aversion results is that you presume a certain shape of the utility curve... one that you were taught in college. But if you find that the utility curve is shaped differently, then maybe, just maybe, it really is shaped differently, and not just the way you imagined in college.

Funny thing about empiricism versus theory.

Maybe economists are the theologians and philosophers.

Allan Bloom supposedly once said he had never witnessed a discussion of the ethics of abortion by an academic philosopher that was not meant to justify the practice, which should tell you that moral philosophers are akin to lawyers: people with forensic skills.

G. E. M. Anscombe was one of Britain's more original post-war academic philosophers, and she supported a ban on abortion. In fact:

She scandalised liberal colleagues with articles defending the Roman Catholic Church's opposition to contraception in the 1960s and early 1970s. Later in life, she was arrested twice while protesting outside an abortion clinic in Britain, after abortion had been legalised (albeit with restrictions).

It is odd but I quite admire her for her opposition to giving Truman an honorary degree. Just two academics protested the award. Normally I have nothing but contempt for people who do that - mainly because they usually also defend Stalin's famines. But in this one instance I think it was an actual principled moral stand. Maybe even the right one. She was, if nothing else, very consistent.

Of course she could never get a job at Oxford these days and she seems to be in the process of being air brushed out of history. Not the right sort of academic philosopher.

I had an econ prof who complained that all other econ profs were very pro-choice yet were strongly opposed to gender-selection abortion. At first I was a bit appalled by his comment, but the more he talked about it the more I saw his point.
His point was that, as said above, economists should not be advocates but searchers for the truth, and if you are going to agree with one thing you have to agree with the other. If the argument for abortion is to give the woman a choice, isn't the logical conclusion that she should be able to choose everything? If we believe Levitt that women who had abortions did not raise criminal children, then if the desire of the woman is to have a boy, or a girl, would that not mean that she would put more effort and resources into raising the child she wants to raise? In other words, where does the choice part end?
To this day this argument makes me uncomfortable; I fear that may be because there is some truth in it.

As an ex-academic philosopher, what always struck me was how confident philosophers were in the correctness of their moral and political beliefs. One of the huge takeaways from my training in philosophy was that we know a lot less than we think we know (that is, after all, the original lesson from Socrates). Not many philosophers seem to take that lesson to heart, though.

This thing goes way beyond philosophers.

Thank God you're back, Fred.

No doubt, but if anything should make one more skeptical about one's own beliefs, it's philosophy.

My favorite bathroom graffiti from college was a drawing of a guy with a big nose and a quote balloon saying "I'm skeptical." All over campus, in men's rooms anyway.

These signs are all over the place in New Orleans: http://blog.nola.com/entertainment_impact_arts/2009/08/medium_7.THINK%20THAT%20YOU%20MIGHT%20BE%20WRONG.jpg

Riposte: http://blog.nola.com/entertainment_impact_arts/2009/08/medium_7.I%20THOUGHT%20ABOUT%20IT.jpg

@Urso, if I'm not mistaken the first quote comes from Cromwell.

What about the "Question Authority" bumper stickers? Liberals used to love them.

They seem to think the absence of a p-value is a lot more significant than it is

How did they identify philosophers? Self-reporting?

Mostly, IIRC, they attended philosophy conferences and asked people with conference badges if they'd be interested in being subjects.

This seems extremely problematic. What if these people are not actually philosophers and don't actually have training in morality? What if they are just posing?

Well... that would be a problem. But I expect that the probability of subjects lying about their academic specialties is *pretty* low.

They can rest easy knowing there's no such thing as objective morality and that moral philosophers, much like theologians, are spending thousands of years arguing over something that doesn't exist for all practical purposes, and for which there is no evidence that could possibly resolve the debate anyway.

I want your kidney. This could be easy or this could be hard.

right. if there is no objective morality, why would anyone complain about philosophers?

Why get trained in moral philosophy when you can just ladle the weak sauce of social "science", the absurd practice of applying the scientific method to human behavior, all over it?

Actually, why even bother with this step? Let's just skip to the brain scans for the "answers" to all of these unanswerable questions about why people do what they do.

This type of study can show that the behavior of professionals is _not_ changed by their training, but it could not have conclusively shown that their behavior _is_ changed by their training. When behavior is different, you can't rule out self-selection bias.

Another problem is what some behavioral economists call off-line vs. on-line thinking. This type of research studies on-line thinking, which is fast and based on informal guidelines. The researchers in this study did have some participants respond after a delay, but median response time for the delay group was about a minute for one scenario and a minute and a half for the other. That may not be enough to trigger a switch from on-line to off-line thinking. Off-line thinking is slow. Is a minute and a half long enough to consider the study's scenarios in off-line mode? What triggers off-line thinking?

Philosophers' careers are based on their off-line thinking, and their training is intended to affect their off-line thinking. Unlike the ancient world, which valued philosophers for their daily behavior (on-line thinking), we value philosophers for their published works (off-line thinking). Nobody asks whether Peter Singer kicks his dog. Are there differences between non-philosophers and philosophers in off-line mode? And again, there is the problem of self-selection.

Of course this all applies to economists also.

It's highly useful - so long as you're under the age of ten:

Are philosophers more philosophical?

How much of the history of moral philosophy do you have to read before it becomes funny that contemporary moral philosophers and the social scientists who study them think they know what moral behavior is?

Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.

Who says that's a bad thing? Maybe the right way to do moral philosophy is to conform to human tendencies to be swayed by framing and order effects... Who determined that moral philosophy has to be consistent?

Back when the Center for Applied Rationality was still starting up and nobody knew how to teach anything yet, I spent a couple of months trying to craft an hour's worth of exercises for a unit on Abandoning Sunk Costs. The session included exercises like having the participants pay for cookies, then revealing that the cookies were stale and needed to be thrown away, or reframing sunk costs as purchased options across several examples (from "I spent 3 years on this PhD program!" to "I have an option to complete a PhD program by investing two more years").

I'm not sure it taught anyone else to abandon sunk costs, but it sure taught me how to abandon sunk costs.

But no, generally speaking learning about a bias doesn't make people's brains better until you figure out some sort of practice you can do for two weeks. E.g., my wife Brienne, to train herself to become more curious, first trained herself to tap on her thigh every time she noticed herself being curious - you can't associate an action to a trigger until you learn the trigger. Before that, she trained herself to notice every time she passed a house with a red roof, because she wanted to train her skill at noticing things on a more solid target before she tried introspective targets.

You've got no right to expect self-improvement without some form of self-training. Otherwise the skill you're training is just 'talking about cognitive biases'.

I don't think "Are economists more economical?" is the appropriate question. It should be, Are economists more rational (in their economic decisions, such as buying and selling)?

Of course not. The main skill of an ethicist/philosopher is that they're good at talking about ethics, which by extension means they're good at describing behavior as ethical or non-ethical. Why would one develop such a skill if not to justify their own unethical behavior?

I wonder how much of Tyler Cowen's ethnic eating is an economist being economical. Does the food taste better because it was cheaper?

This isn't a very useful indictment at all.

1) It would be stunning if philosophers weren't to some degree influenced by the same effects as other people. Since there is no countervailing effect, for these effects not to show up at a statistical level would require ALL philosophers (given a large enough sample) to be immune.

2) Philosophers are no more trained to be immune from order/presentation/framing effects than economists are trained to be immune from advertisements. No one would attack the value of economic training because economists still preferred the beer shown with pretty girls and exciting music, so why would this reflect badly on philosophical training?

Both philosophical and economic training teach one to evaluate (and make) arguments about a certain area of study and (hopefully) get closer to the truth about that area. In both areas experts will often be on the fence about certain issues and of course which side of that fence they come down on at a given time will be just as influenced by human factors.

3) Philosophers aren't, contrary to the suggestion in the paper, particularly educated about issues like framing, presentation order, etc. These are issues more commonly discussed in economics, as it deals with choice and preferences. While this is sometimes brought up in philosophical contexts, the usual focus is on what the correct answer is, not on what biases affect people's judgment. Indeed, the paper itself shows that ONLY 13% of philosophers even claimed expertise in these matters (saying they are familiar just means they heard of them somewhere, which makes it surprising that only 62% even claim this).

4) While all philosophers working in ethics will be familiar with the trolley problem, and a large fraction will be expert IN THE SENSE THAT THEY HAVE A MASTERY OF THE MAJOR ARGUMENTS, very few will have had occasion to specifically address it as part of their work or to come to a definite theoretical position on it.

If you replaced the "do you have a stable view about this question" with "have you published work on this question" or something similar I expect this effect would vanish.

5) The culture in philosophy views a question like "Is it wrong to push a fat man in front of the trolley?" not as asking for an expert opinion informed by theoretical considerations but as asking for an intuitive reaction (at least absent such theoretical considerations). After all, philosophers are in the habit of arguing about the correct answer using facts about intuitive responses as evidence.


Look, I'm the first to admit that much of philosophy is useless or even counterproductive. While I think there is a useful subject that could exist, I think most of modern philosophical practice needs to be thrown out, as it creates backwards incentives. However, this study isn't one of the reasons to be critical.
