*Rethinking the Good*, by Larry Temkin

The subtitle is Moral Ideals and the Nature of Practical Reasoning.  Without hesitation I paid full price for this book, in this case $74, though the price has since fallen.  While not an easy read, it is the most important work in choice theory and social choice in some time.

Why does the transitivity debate matter?  If you believe in transitivity, you will see lots of piecemeal improvements as adding up to something desirable.  If you do not believe in transitivity, useful normative inquiries have to be much more global.  You will put less weight on the Pareto principle, and less weight on partial equilibrium cost-benefit analyses.  You will focus on what constitutes a good society, and you will trust some very gross macro comparisons (“America is better than Albania”) more than micro comparisons (“the best system of taxation is X”).

If you are skeptical of transitivity as a postulate, you probably are less inclined to see individuals with intransitive preferences as irrational.  This may affect your views on paternalism and time inconsistency.

Many economists of course view the rejection of transitivity as simply unthinkable.  Perhaps without transitivity we cannot even speak of the notion of a coherent preference.  Temkin is skeptical of transitivity.  There are a few versions of the anti-transitivity argument:

1. A version of Arrow’s theorem will apply to plausible versions of pluralistic moral reasoning, just as it applies to decathlon scoring.  Temkin does not pursue this route, though he notes in the introduction it may be possible.  It is the route I would have preferred, as it makes the investigation more economical.

2. The Sorites paradox, or how many stones make a pile.  Temkin insists his argument does not boil down to the Sorites paradox, though one may add this to the pile of arguments against transitivity.

3. The “roughness” relation: maybe Mozart is roughly as good as Beethoven, but this need not be transitive.  Possibly Haydn is roughly as good as Mozart, but it does not follow that Haydn is necessarily “roughly as good” as Beethoven.  Often judgments about the good are rough by their nature.

4. Various pairwise comparisons lead the transitivity advocate to unacceptable conclusions.  For instance you can start with the view that adding a pain to the world is a bad, and (with intermediate steps), end up having to believe that adding a very very slight pain for a trillion lives is worse than brutally torturing ten people.  Or perhaps you are familiar with Derek Parfit’s Mere Addition Paradox.  If you endorse some pairwise comparisons which increase both utility and equality, you can again be led to apparently unpalatable conclusions by some multi-stage comparisons (pdf, it would take a long time to explain here).  Temkin stresses this kind of argument and works through the possible responses in great detail.

The main contribution of this book is to show you that the transitivity postulate is far less intuitively appealing than it seems at first.  Twenty-two years ago I disagreed with Temkin but now I accept much of his critique.  Here is one very good Temkin piece from JSTOR.

These days, I see the good as more holistic than additive-aggregative.  This defuses Temkin’s arguments, though at a high cost.  (You will find Temkin’s criticisms of holism and related ideas at around p.355, though I find them unusually lacking in force.  One of his worries boils down to how a multiplicative view will handle negative numbers, but I see the scale as sufficiently arbitrary that they need not pop up to begin with.)  We can make some gross comparisons of better and worse at the macro level, with partial rankings at best, but for many individualized normative comparisons there simply isn’t a right answer.  I view “ranking” as a luxury, occasionally available, rather than an axiomatic postulate which can be used to generate normative comparisons, and thus normative paradoxes, at will.  I see that response as different from allowing or embracing intransitivity across multiple alternatives, and in that regard my final position differs from Temkin’s.  Furthermore, in a holistic approach, the “pure micro welfare numbers” used to generate the paradoxical comparisons aren’t necessarily there in the first place; rather, they have to be derived from our intuitions about the whole.

These thoughts provide one reason — though by no means the only reason — why I think so many policy comparisons are not very clear cut, not even in principle, not even if we had better empirics.

My main objection to this book is how it was written.  It is too long and too branching, much like Parfit’s recent volumes.  Temkin notes that Shelley Kagan, a very smart guy, gave him 117 pages of single-spaced comments on a prior manuscript draft.  Temkin took that as an invitation to lengthen the presentation rather than shorten it.

Addendum: If you are interested in these issues, you also should read Leo Katz’s new and fascinating book, more applied than Temkin’s, also rejecting transitivity as a universal principle of reasoning but focused on explaining the content of the law and its apparent paradoxes.


"These days, I see the good as more holistic than additive-aggregative."

Doesn't that run counter to your blog title?

"If you are skeptical of transitivity as a postulate, you probably are less inclined to see individuals with intransitive preferences as irrational."

Can you give an example of an intransitive preference?  Because we have a budget line, it seems to me that preferences are necessarily transitive; otherwise decisions would be impossible.
We would get stuck in a ...A>B>C>A>B>C>A... loop.

I've never ever seen an intransitive preference.

I go to the supermarket, they have A and B, I choose A. Next day they have B and C, I choose B. Next day they have A and C, I choose C. So, am I "irrational"?

Suppose that on the fourth day they have A, B and C. What would you predict I'll choose?

It seems obvious to me that, based on my past choices, you would have to predict that p(A)=p(B)=p(C)=1/3. There's really no problem whatsoever for a Bayesian to deal with intransitive choices. There's no need to postulate that choices have a predetermined structure, all you need is to apply Bayes' formula over and over. Whatever structure there is will be reflected in the resulting posterior probability distribution.
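The counting argument above can be sketched as a tiny Bayesian model. The Laplace-smoothed rule (a uniform Dirichlet prior over the three options) and the function name are my illustrative choices, not anything the comment specifies:

```python
from collections import Counter
from fractions import Fraction

def predictive(choices, options):
    """Posterior predictive over the next choice, given only the sequence
    of past choices, with Laplace smoothing (uniform Dirichlet prior)."""
    counts = Counter(choices)
    n, k = len(choices), len(options)
    return {o: Fraction(counts[o] + 1, n + k) for o in options}

# The supermarket story: one observed pick each of A, B, and C.
probs = predictive(["A", "B", "C"], ["A", "B", "C"])
# Each option gets (1 + 1) / (3 + 3) = 1/3, as the comment predicts.
```

A richer model would also condition on which items were on the shelf each day; this deliberately minimal version just counts.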


The perfect example. Because it's what I'm doing right now...

+1, That was very informative, thanks for sharing the knowledge.

I mean individual preferences....
I have seen voting and such produce intransitive preferences.

I'm not an expert in economics, but I am a mathematician, and I can tell you it's entirely reasonable to me to suggest that some things simply aren't comparable. Not every set has an ordering, that is, the sentence "A > B" doesn't always even make sense. Or one could have A > B and B > C but A and C are not comparable.

But I suppose the paradox A > B > C > A could occur when certain outcomes are tied together. For instance, let's say you really like the Steelers but you also happen to really like a player who plays for the Bengals, and you absolutely hate the Ravens. Let's say the only three possibilities are (A) the Ravens win the division, the Steelers get wild card, the Bengals don't go to the playoffs, (B) the Ravens win the division, the Bengals get wild card, the Steelers don't go to the playoffs, (C) the Bengals win the division, the Ravens get wild card, the Steelers don't go to the playoffs. (I'm not saying I know how these could happen or that they could happen.) You prefer A to B because you'd rather the Steelers go to the playoffs than the Bengals. You prefer B to C because you prefer the Bengals to the Ravens. But let's say that also you actually prefer C to A because you just hate the Ravens so much that you can't bear for them to win the division if it's possible to prevent it, plus you like the Bengals enough to cheer for them in the playoffs. So A > B > C > A.

Irrational? Maybe. But if the possibilities are only given two at a time, then a person may very well have such intransitive preferences. If you ask them to put all three in order at once, I suppose you couldn't get a chain like that. I imagine an expert would have a better example.

The Mere Addition principle is interesting. Probably I'm overly pragmatic, but the statement "If the two populations were known to each other, and were aware of the inequality, this would arguably constitute social injustice" seems silly to me (though it's an amusing possible explanation of why advanced alien races never acknowledge their existence to us). I've never liked the notion that other people's greater happiness/wealth is intrinsically some sort of injustice if I'm poorer/less happy.

However, as they do not, Parfit says this represents a Mere Addition, and it seems implausible to him that it would be worse for the extra people to exist.

I think that's an interesting problem that highlights the transitivity issue well, but as a programmer I'm tempted to define a function that balances a level of happiness against the number who have it, such that pain/happiness level X for Y number of people is calculated to be equal to pain/happiness level Z for W number of people, from which any comparison of populations could yield a definitive conclusion as to which situation was "better" (assuming such things could be measured and known). As with the Sorites paradox, it seems a problem of insufficiently clear definition: one could simply define "heap" as being, say, a dozen or more grains.
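A deliberately toy version of the function that comment imagines might look like this. The name, the concave exponent gamma, and all the numbers are arbitrary illustrative choices, not anything from Parfit or Temkin:

```python
def population_value(level, count, gamma=0.5):
    """Hypothetical aggregation rule: total welfare, but with diminishing
    returns to sheer population size (gamma < 1 is an arbitrary choice)."""
    if level <= 0 or count <= 0:
        raise ValueError("toy model handles positive levels and counts only")
    return level * count ** gamma

# A small, very happy population vs. a vast one with lives barely worth living:
small = population_value(100, 10**6)    # 100 * 1000 = 100000.0
vast = population_value(0.01, 10**12)   # 0.01 * 10**6 = 10000.0
# With this gamma the small happy population ranks higher, dodging the
# Repugnant Conclusion -- though only by fiat of the chosen exponent.
```

Of course, any such function just relocates the disagreement into the choice of gamma, which is arguably Temkin's point.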

I agree that it seems silly, but it still seems to be the case that for many people it isn't, because in addition to caring about one's own level of happiness people also care about the differences in levels (another way of putting it would be that my utility function U contains a term U' for someone else's, and that U increases as the difference between U and U' decreases).

It's a pretty weird way of looking at the world, but if people have real preferences that can only be satisfied by harming other people (such as racists, or folks in favor of redistribution), and the satisfaction of these preferences might even outweigh the harms, isn't it a little too simple to just call those preferences unenlightened and ignore them?

"isn’t it a little too simple to just call those preferences unenlightened and ignore them?"

But of course, the question is, obviously, "what if that is my preference?"

If my preference is to act irrationally, and I therefore act as irrationally as possible in order to maximize my utility, am I actually acting rationally? If so, will my utility go up or down? Does this suggest that one whose preference is to act irrationally should actually act rationally (which will be irrational from his perspective) in order to maximize utility? If so, this explains a lot.

If I'm understanding the point being discussed:

If A has X he is happy,
When A knows B has X+1 he is less happy
perhaps, when A knows B has X+10,000, he is even less happy

This seems highly susceptible to an evolutionary-psychology "just so" story. Hunter-gatherers spent many thousands of years in situations where small differences between individuals may have made big differences in likelihood of reproduction. Detecting and responding to differences may have been an advantage.

Rational or not, humans may be primed to be very concerned about relative differences. (Consider the many religious/moral prohibitions about envy).

That's the general point, yes. But the subtler thing is how to weigh the preferences of people who receive benefit from the harm of others. I agree that even horrible people shouldn't suffer needlessly, unless punishment has some consequentialist benefit such as deterrence, but what about when everyone is calling for the murderer's head? Is vengeance a preference just like any other, independent of any consequentialist benefit of punishment?

Robin Hanson's minimal morality (http://www.overcomingbias.com/2009/05/minimal-morals.html) might say yes, but then it seems to me that you'd then have to bite the bullet and conclude that some lynchings might have been beneficial, if enough people enjoyed watching them.

Couldn't it be that absolute happiness has no meaning and it is really only a relative measure? What's wrong with that assumption?

It wouldn't negate the ability to perform the calculation, it would only alter the parameters of how it was calculated.

In thinking about Mere Addition some more, there are some disturbing implications. Suppose it becomes technically possible to bioengineer people who are happy no matter how poor or oppressed they are. The implication is that we would have a moral duty to undertake such engineering in order to create the greatest amount of happiness among the largest number of people -- and perhaps even to do everything we can to make it technically possible!

Consider North Korea -- many North Koreans are apparently happy because they are told they have the best country in the world based on various falsehoods. Is it immoral to inform them of the truth if it makes them less happy?

Which all leads me back to the libertarian principle that probably we should be doing a bare minimum of coercive social engineering.

Interesting comment. But I'm afraid we're headed that way. It won't be bioengineering, it will be virtual reality so 'real' that 'happiness' will be available to all by plugging in.

And moral imperative will have nothing to do with it. Whoever creates that system will become one of (some of) the richest people who ever lived.

'The Matrix' won't be so sci-fi someday.

Shrug, you can have that with heroin today.

As Neo says in the Matrix, the problem is choice. Natural humans want various things and for evolutionary reasons are generally unhappy when they don't get them. An engineered human could be developed that was happy regardless. We could then create billions of very happy people who live at subsistence levels and, by the implicit logic of Mere Addition we'd have performed a great moral good.

Then we could move on to creating animals that want to be eaten...

For instance you can start with the view that adding a pain to the world is a bad, and (with intermediate steps), end up having to believe that adding a very very slight pain for a trillion lives is worse than brutally torturing ten people.

But, of course, depending on the formulation, pretty much everybody believes something like that. Would you invest your pharma research dollars in a product that relieved mild pain for everybody? Or the same dollars in a drug that treated horrific, otherwise unmanageable pain caused by a disease that affects only 10 people? Would you spend your scarce resources to provide effective AIDS or malaria treatments to millions of poor people at a few dollars each? Or spend the same on 'regime change' in a country whose dictator routinely tortures and kills political opponents? It's the 'trolley problem' all over. It has become taboo for most of us to contemplate intentionally killing or torturing even one specific person for the greater good, but when that aspect of the situation is removed, transitivity still works pretty well. And even in the case of the taboo, it's not clear we have a transitivity violation anyway -- since it has become empirically obvious (the world has run the experiments!) that a willingness to treat people as 'mere means' has led to slaughter on a previously unimaginable scale. So one could say that the apparent transitivity violation is simply a (pragmatically appropriate) unwillingness to address the thought experiment in isolation.

But if you've now concluded that 'the good' cannot really be effectively addressed empirically, how does that square with the MR endorsement of GiveWell? Or do you and Alex disagree on GiveWell's approach?

Transitivity obviously doesn't work on absolutes or infinities, and as you note there are sound reasons to generally treat "I shall now brutally torture ten people" as having a value of negative infinity. But that's an approximation; if we absolutely have to, we can back out the hugely but finitely negative value and put that into the ethical calculus. Thing is, in the real world that almost never comes up; it is primarily a creature of thought experiments designed to break other people's theories. And most of us are more concerned with having an ethical system that works in the real world, vs. one robust to abstract theoretical attack.

And if it's not obvious to anyone, the reason we treat "I shall now brutally torture ten people" as a negative-infinity event is that there are a nigh-infinite number of opportunities in which we could brutally torture people and (purely coincidentally, of course) profit thereby. Even if we were all supremely ethical beings who would never undertake an action unless utilitarian calculus showed it to be a net social good, utilitarian calculus is an imperfect art and "nigh-infinite number of occasions to convince ourselves it's OK to torture people" times "sometimes we screw up" equals a whole lot of people wrongly and brutally tortured. By comparison, opportunities to merely allow harm (e.g. by curing the common cold instead of Ebola) are relatively rare - and in any event cannot be avoided by a "First do no harm" imperative.

So, the actual negative value of an act of torture is the harm to the actual victims, plus a prorated fraction of the harm to all future victims who would not have been tortured save for a visible and socially accepted precedent that, hey, it's OK to torture people if you can make the utilitarian calculus work out on a case-by-case basis. That's a huge negative value, and in most real situations the math is simpler and the outcome identical if you just use negative infinity. Adding a very slight pain (say, one-billionth of a torture event) to a trillion people is worse than just brutally torturing ten people in isolation - but if you really think you are facing a choice between the two, you're either an imaginary participant in a sadistic thought experiment or you've really botched your ethical calculus.

If someone is trying to break your ethical system by invoking such infinities, you can if necessary go back and do the (presumably hypothetical) math to bring everything into the finite, comparable, and probably transitive range. Or just tell them to go away and not bother you until they have done so :-)

"So, the actual negative value of an act of torture is the harm to the actual victims, plus a prorated fraction of the harm to all future victims who would not have been tortured save for a visible and socially accepted precedent."


Well done, +10

How is this an example of negative infinity value? The negative value of 10 tortures seems to me to be smaller than that for 100 tortures; make the ratio bigger if you don't find that convincing. Also, when you talk about the negative side effects of the torture precedent, you appear to be conceding that there can be a value of infinity + 1 (or else the precedent is irrelevant). The concept of infinity doesn't allow for comparisons like these.

Perhaps I should have been more clear. Brutally torturing people, 1, 10, or 100, is never in fact a negative-infinity value. It is a value that is in fact very hard to quantify, because the second-order effects (tortures enabled by your having weakened the social firewall against any torture) are huge. Most people, concerned more with real-world ethics than clever thought experiments, implement this immeasurably large negative value as a de facto negative infinity: "Your proposal requires we torture some number of people? That is ABSOLUTELY UNACCEPTABLE, no matter the actual number or the offsetting benefit!"

This is in fact logically equivalent to assigning a value of negative-infinity to any voluntary act of torture[*]. It has the advantage of leading to the correct ethical outcome in almost all real-world situations for almost all actually-held ethical systems. It has the disadvantage of producing apparently-silly results like "10 tortures is no worse than 100 tortures", which is of approximately no relevance in the real world because there is approximately never a case in which your choices are limited to torturing ten people or torturing a hundred - you can always choose to torture nobody.

[*] "voluntary" in the sense that the torture will not occur unless the people conducting the analysis/debate choose to cause it to happen. Acts of torture performed by other people who aren't asking for our permission are not generally ascribed the negative infinity we ascribe to the acts of torture we might perform or commission ourselves. This may seem irrational, but it is the way people actually work, and it empirically leads to pretty good results.

I thought that Arrow's theorem had put an obvious end to the idea that transitivity is somehow self evident, and to the idea that such a thing as global good or aggregate utility could even be defined, etc etc? Apparently not?

It hadn't, because people need some way to describe states of the world they prefer, and to try to convince others that those states are actually preferable on some non-arbitrary measure. It's inherently nonsense, but because most political argumentation requires this it never goes away. Everyone needs to show that their arbitrary measure is somehow inherently better than someone else's arbitrary measure.

Seems there is some truth to that, though I think you ignore the important distinction between the arbitrary and the subjective. Defending arbitrary preferences is probably inherently nonsensical, but preferences are hardly ever arbitrary. Subjective preferences, on the other hand, can and surely should be advanced and defended, at least in my view.

Where you stand depends on where you sit.

Daniel Davies gives the following example of an intransitive preference, which has always entertained me (I'd link to it if he hadn't closed his blog):

I prefer F-1 to NASCAR because I like to see the fastest cars possible, as a testament to human achievement or whatever. In the absence of the chance to see the fastest possible car, I like close races, so NASCAR is my #2 choice.

If there were F-1a cars that went faster than F-1 due to a rule difference, I would prefer F-1a to F-1 because those would then be the fastest cars.

I would also then prefer NASCAR to F-1 because I like the close races of NASCAR and F-1 no longer offers the "fastest cars" benefit that previously put it ahead.

My preferences are intransitive - adding C changed my ordering of A and B.

I'm sure there's something wrong with this formally but it's a fun example.

If you had a once-in-a-lifetime chance to be allotted for a ticket to only one of the three events, wouldn't you still rank your ticket-preference as { F-1a, F-1, Nascar }? Or not?

That is very good. But I think the hitch you sense lies in the fact that the introduction of C disqualifies A.

A isn't really A anymore. The preference for F-1 was based on a quality that F-1a later acquired (the quality of offering the fastest cars).

As I see it, if C can replace A, the preferences are not truly intransitive. If C were truly independent, where neither A nor B were redefined, then it would be spot on.

That's why Tyler's example above works better for me, although it does speak to something a bit different. The introduction of Haydn doesn't affect the nature of Beethoven. Beethoven possesses the same qualities regardless of whether or not Haydn exists.

I personally find the existence of Efron's dice a more than sufficiently strong argument against transitivity. It is not at all difficult to imagine possible personal or public decisions whose outcomes and the probabilities thereof end up working the same way that they do. (Or, to make a better case, a modified set with equal averages)
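For readers who haven't met them, the cycle in Efron's dice is easy to verify by brute enumeration. A quick sketch (the four dice below are the standard set, not anything from Temkin's book):

```python
from fractions import Fraction
from itertools import product

# Face values of the standard Efron dice.
A = [4, 4, 4, 4, 0, 0]
B = [3, 3, 3, 3, 3, 3]
C = [6, 6, 2, 2, 2, 2]
D = [5, 5, 5, 1, 1, 1]

def p_beats(x, y):
    """Exact probability that one roll of die x exceeds one roll of die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

# A beats B, B beats C, C beats D, and D beats A -- each with probability 2/3,
# so "tends to roll higher than" is intransitive.
cycle = [p_beats(A, B), p_beats(B, C), p_beats(C, D), p_beats(D, A)]
```

Every step in the cycle is a fair, well-defined gamble, which is what makes the dice such a clean counterexample to the intuition that "better than" must chain.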

Wonder what Krugman thinks?

Whatever contradicts what he thought in the past.

That Leftist intransitive preference is a sign of their good morality, but that Rightist intransitive preference is an indication of their inherent evil.

There is actually pretty strong empirical support for the existence of intransitive preferences under laboratory conditions, dating back at least to the early 70s.

One famous example is offering a choice between two lotteries A and B, where A has a high probability of a low payout and B has a low probability of a high payout. For suitably chosen values a large number of subjects will, when given the choice, prefer to participate in lottery A, while at the same time, when subjects are given the opportunity to bid money for the right to participate in either lottery, they will have a higher bidding price for lottery B. (see e.g. here: http://www.jstor.org/stable/10.2307/1808708 [Grether and Plott, 1979])
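To make the setup concrete, here is a hypothetical P-bet/$-bet pair with equal expected values (my illustrative numbers, not the actual stakes from the paper). The documented reversal is that many subjects choose the first lottery yet bid more for the second:

```python
from fractions import Fraction

def expected_value(p_win, payout):
    """Expected monetary value of a simple win-or-nothing lottery."""
    return Fraction(p_win) * payout

# Lottery A, the "P-bet": high probability of a low payout.
p_bet = expected_value(Fraction(9, 10), 10)
# Lottery B, the "$-bet": low probability of a high payout.
dollar_bet = expected_value(Fraction(1, 10), 90)
# Both expected values are $9, so expected value alone cannot
# explain why choices and bidding prices come apart.
```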

If it is hard (or non-intuitive) to figure out the expected value people might just be guessing. Will especially happen if the expected values are very close.

The point is the "guess" is biased.

This is a link to Temkin talking.


Re: 4) Part of the problem with the Repugnant Conclusion is the language with which it is stated. When you think of a life "barely worth living" all that springs to mind is pure suffering. But the premise of the argument is that these many lives are on net valuable and satisfying. Parfit also constantly shifts between relying on abstract principles to make counterintuitive claims and then making arguments based on intuition--but my reading of Reasons and Persons is that he is not adopting the intuitionist "reflective equilibrium" approach of Rawls. If something strikes someone as disgusting or repugnant, why is that definitive? Sweatshops seem repugnant at first glance, but there are strong moral arguments why the free trade that makes them possible is justified and increases human welfare.

Re: 3) it's not clear how "roughly" is being used here. it seems like argument by needless definitional ambiguity. Any definition I thought of resolved itself into an issue that does not implicate transitivity, but I could be missing something.

Re: 2) The Sorites paradox seems like a generalization of Zeno's paradox regarding the impossibility of movement. But this analytical move can be applied to anything to create puzzling results, and it doesn't seem overly profound. After all, you can walk 1 mile even though that technically requires traversing infinitely many sub-units of distance.

Re: 1) I don't know what Tyler's claim means exactly, but many comments have implied that Arrow's theorem somehow refutes transitivity. My understanding is that it is a problem of aggregation, not preference formation. It states that when aggregating individual transitive preferences you CAN have a transitive "collective preference," but only if you are willing to leave other obviously important criteria (such as non-dictatorship) unsatisfied.
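The aggregation point can be made concrete with the classic Condorcet cycle: three voters, each with a perfectly transitive ranking, whose majority-rule "collective preference" cycles. (A toy illustration of the aggregation problem, not Arrow's theorem itself.)

```python
# Three voters, each with a fully transitive ranking (best to worst).
ballots = [["A", "B", "C"],
           ["B", "C", "A"],
           ["C", "A", "B"]]

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return votes > len(ballots) / 2

# Every individual is transitive, yet the majority relation cycles:
# A beats B, B beats C, and C beats A, each by 2 votes to 1.
cycle = [majority_prefers("A", "B"),
         majority_prefers("B", "C"),
         majority_prefers("C", "A")]
```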

It would be nice if Tyler, or anyone else, could shed light on what "the holistic good" is. It's the first time I've heard the term, and I'm intrigued. It apparently has something to do with multiplication (utility?), but a Google search turns up holistic medicine, which isn't helpful.

"defuses Temkin’s arguments,"

Tyler, you don't defuse arguments, you defuse bombs.

Arguments are rebutted or refuted.

But defeasible arguments are (archaically, at least) defeased.

Cute malapropism, nonetheless.

I remember seeing these axioms in my first micro course and thinking that people are actually inconsistent all the time and often have no concrete idea about their preferences, even for things they regularly consume, and became acutely aware of instances where these axioms were broken for some period of time afterwards. It even became part of my disparagement of my field for a while .... assumptions which are sometimes true used to build assumptions that are sometimes true combined with data that we all know is only sometimes close to satisfying ideal statistical properties ... all of which we ignore and reach conclusions with several decimal places of precision and worry about such things as strict (or not) inequalities ...

None of this bothers me so much now, since I have no better alternatives to propose. Acknowledging weak points is, in my mind, a good thing though. That's because it makes our understanding of final conclusions generally richer, even if less certain.

I might even like to read the book suggested at the end of the post, as a revisit to one of the initial things that I found to be quite curious about the field.

Most of the argument of the book falls apart if one rejects commensurability instead of rejecting transitivity.

People, you are wasting your time. What we think of as morality is just genetic programming to allow humans to cooperate in small hunter-gatherer-type bands. Given that, would morality necessarily be transitive? It's just a bunch of kludge coding that was sufficient to make the simple interactions we had a few thousand years ago work. Just like the sex drive never anticipated pornography or birth control, we have a moral system that just doesn't work for today and is full of inconsistencies. The way to deal with this is not to try to force-fit it into a logic system; that's like the Victorian attempt to make the Bible scientific. Just go with the flow and do what feels right. It's what everyone (philosophers included) does anyway.

By the way, it's going to get even harder to maintain the fiction that morality has logic as we develop artificial intelligence and gain the ability to modify our own emotional responses.
