Tyler asks, following philosopher Alastair Norcross, whether it could ever satisfy a cost-benefit test for one person to die a terrible and tortured death in order to alleviate the headaches of billions of others by one second. Tyler begs off with "a mushy mish-mash of philosophic pluralism, quasi-lexical values" and moral conceit. I will have none of this. The answer is yes.

The clearest reason to think that we should trade a terrible and tortured death of one in order to alleviate the headaches of billions is that we do this every day. Coal miners, for example, risk their lives to heat our homes and to generate the electricity that drives this blog. We know that some of them will die horrible deaths but few of us think that we are morally required to give up electricity.


If somebody volunteers to die a horrible death to help alleviate a million headaches, I'd thank him kindly. I don't think that really answers the question, though.

Oh dear. In one case the people in question have rights and in the other they don't (or he doesn't), and you can't even see the difference from a moral perspective? Go to the bottom of the class.

If all electricity was good for was reducing headache symptoms by one second, I'd gladly give it up.

And other commenters have pointed out that people choosing voluntarily to be paid to take a degree of risk is morally different than imposing torture on the unwilling.

But you are as caught up in your ideology as the Taliban are in theirs.

So you will never see a counter-argument in your life that undermines your cherished dogma.

There's also too little information, and the definitions are too vague.

Working with the headache example, does it make a difference if the death is accidental/incidental or deliberate? If it is a one-off, or a continuing process? If there were viable alternatives? If it was out of our control anyway?

Additionally, who defines what is a benefit? Currently, I benefit in thousands of minor ways from the pain and tragedy of others. That's simply a fact. Is it morally justified? That really depends on where your basis of morality comes from. One death = 1 less second of headaches for billions. What happens when the equation changes to 100 deaths = 1 less second for thousands? 1,000 deaths = 1 less second for hundreds? 1 million deaths for a good parking spot on Tuesdays?

Does the number in any way change the moral equation? Does the type of benefit change the equation?

Some things to think about at least. I used rather absurd examples, in an attempt to illustrate the boundaries of this thinking. I would love to hear what you all think.


There is no uncertainty that some coal miners will die producing coal. The fact that we can't identify which ones is morally irrelevant. The uncertainty gives us "moral wiggle room" to feel OK about ourselves, but there really is no difference.

Most people suggest that rights or voluntarism makes the difference but that's simply a rejection of cost/benefit analysis. I'm fine with that but note that it takes all the air out of the example since from the rights perspective the size of the benefits and costs is irrelevant.

I'm always surprised to see economists -- who otherwise believe that inter-personal comparison of utility is a no-no -- make cost/benefit arguments where the costs and benefits fall on different sets of people. Am I missing something?

First of all, 50 years of being awake 16 hours/day is about a billion seconds, so the tradeoff seems roughly fair (at least numerically).
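That back-of-the-envelope figure checks out; a quick sanity check in Python, using the comment's own assumptions (50 years, 365 days a year, 16 waking hours a day):

```python
# Waking seconds in 50 years at 16 hours/day (figures from the comment above)
waking_seconds = 50 * 365 * 16 * 60 * 60
print(f"{waking_seconds:,}")  # 1,051,200,000 -- about a billion, as claimed
```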

Philosophically, I think an even better example is driving. Would we trade a trip that is 1 second shorter for a 10^{-9} chance of dying? Since your probability of dying on the road is proportional to the fourth power of velocity, choosing to speed will shorten your trip but increase your risks of death (and speeding tickets, which I'll ignore). I did the calculation on my blog and it turns out that 65 mph is not so far from an optimal speed. So I would say that everyone who goes 5 mph over the speed limit would accept this trade-off personally.
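The commenter's actual calculation is on his blog; here is a minimal sketch of the same trade-off under assumed parameter values. The $7M value of a statistical life, $25/hour value of travel time, and baseline fatality risk of 1.5e-8 per mile at 65 mph are all my assumptions for illustration, not the commenter's numbers; only the fourth-power scaling of risk with speed comes from the comment.

```python
VSL = 7e6            # assumed value of a statistical life, dollars
WAGE = 25.0          # assumed value of travel time, dollars/hour
BASE_RISK = 1.5e-8   # assumed fatality probability per mile at 65 mph

def cost_per_mile(v):
    """Expected dollar cost of driving one mile at v mph: time plus
    fatality risk, with risk scaled as the fourth power of speed."""
    time_cost = WAGE / v
    risk_cost = VSL * BASE_RISK * (v / 65.0) ** 4
    return time_cost + risk_cost

speeds = [s / 10 for s in range(300, 1000)]   # grid search, 30.0 to 99.9 mph
best = min(speeds, key=cost_per_mile)
print(f"cost-minimizing speed = {best:.0f} mph")  # lands in the low 60s
```

Under these assumed inputs the optimum comes out close to 65 mph, consistent with the commenter's conclusion; different values of time or life shift the answer.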

The problem with the coal miner example is that equating lives with money is difficult, and introduces more equity issues.

The reason people balk at this tradeoff is that no one would rationally bargain his life away, since he would be unable to enjoy his compensation. For an alternative approach, see David Friedman's paper, What is "fair compensation" for death or injury?

I'm not usually so blunt, but I have to say that's a pretty dumb scenario. In my mind there's no cost-benefit analysis which concludes that billions of people have any moral right to have their headaches alleviated by anyone else's sacrifice, including a hard day at work, being dunked in ice water for five minutes, having to watch an awful movie, etc., etc. That simply makes no sense to me. And the question posed is not equivalent to coal-mining and electricity.

The easiest way to avoid the confusion of "voluntary" risks, such as when driving, is to note that there are also involuntary risks that we create for others by our driving. And all sorts of other behaviors.

And now that we've determined what we are, we're merely arguing about price.

Only the market can give a proper answer to this question. Any attempt to centrally answer any such question of cost-benefit is a denial of the power and moral validity of the market process.

Prof. AT,

The coal miners don't provide a mere second but a steady stream of electricity. Furthermore, I don't think that mining is necessarily the equivalent of a "terrible and tortured death."

The least persuasive part is the one-second factor. Torture and kill a tyrant for the liberation of millions, perhaps. But won't we have a bigger headache knowing that we are so callous about human life and suffering for one second of comfort? Consider the original headache the cost of maintaining human dignity.

I agree with Alex that the uncertainty issue is a red herring. Take the original example (one torturous death for billions of alleviated headaches) and add some uncertainty about who will be the victim. Maybe his identity will be determined by lot. Does that really change the nature of the question? Any point the original poser of the question wished to make could, I think, still be made even with the added uncertainty.

But Alex's coal miner story is still question-begging. When Alex says, "The clearest reason to think that we should trade a terrible and tortured death of one in order to alleviate the headaches of billions is that we do this every day," that's really not true. What we do every day is let people voluntarily risk their lives in exchange for compensation. So this example provides no evidence that our "true" values countenance doing the same thing involuntarily and without compensation.

Now, maybe we *should* embrace a more utilitarian ethic, which seems to be Alex's point. But the coal miner example does not demonstrate that we have already implicitly done so.

"The clearest reason to think that we should trade a terrible and tortured death of one in order to alleviate the headaches of billions is that we do this every day. Coal miners, for example, risk their lives to heat our homes and to generate the electricity that drives this blog. We know that some of them will die horrible deaths but few of us think that we are morally required to give up electricity."

WE should trade? Who's "we"?

Tabarrok assumes that coal miners will be worse off for mining which is exactly the opposite of the conclusion those miners come to in a free market: they judge they will be better off.

And they are not trading their labor and health to make others better off, they do it for compensation which makes them better off themselves.

The problem is in Tabarrok's "we". There is no question here which we are entitled, or even equipped, to answer collectively. We're only entitled and equipped to choose for our individual selves.

So you are saying that we are entitled to trade Alex Tabarrok's life for whatever we'd collectively prefer instead?

"Tabarrok assumes that coal miners will be worse off for mining which is exactly the opposite of the conclusion those miners come to in a free market: they judge they will be better off."

Again, the issue of willingness is a separate one, I believe. Note that in the description you give here, the free market offers a miner compensation sufficient for his risks only BECAUSE the rest of us value the temporary relief his work provides more than the life of those who will die; that's why we (indirectly) pay him.

If you take moral exception to Alex's endorsement of the utility calculus, then even if you consider individual rights inviolable in some sense, you should still advocate a voluntary, massive scaling back of the mining industry.


It is tough to argue with you when you keep switching your examples. First you talk about the deaths of coal miners who have a choice in assuming their risks. Then you say that talking about choice is a red herring because people who die from the pollution that comes from the use of coal have no choice in the matter.

If I were to argue in the same manner as you do, Alex, I would argue that people in a free society do have a choice about whether to accept the risk of coal pollution. Most people who live in areas where there is high coal-caused pollution do so because those areas also happen to be places where there is economic opportunity. Thus, people live in areas afflicted with coal pollution for the same reasons that coal miners are willing to risk their lives in the first place.

I could argue that those who do not have the freedom to move away from coal-caused pollution are victims of a non-free society, not those who choose to buy coal.

Such an argument is quite a stretch. But it is no more of a stretch than your argument that buying coal is equivalent to making one person die so that billions of people can have less pain in their lives.

The morality of choices depends on the information that people possess. If a doctor willfully treated people today the way that doctors treated people in the Civil War era, we would consider that doctor immoral. Yet we do not call the doctors of the Civil War era immoral. Because doctors of today have more information, we hold them to a different standard.

The basic problem I have with your argument is that you equate a moral decision about clear cut costs and benefits (pain free time for lots of people vs. one painful death) with real world examples where the information is not clear cut. I don't think such comparisons are valid.

Since we all die in the end, everything we do can be said to be a cause of death. Buying coal might cause people to die of black lung at the age of 30. But would those same people have died of malnourishment at the age of 20 without the job provided by coal? Can you really say that coal caused them to die earlier than they otherwise would have?

I believe that those who have the risk should also have the reward and that those choices should be voluntary. Thus, in your example where we know that it will take one person's death to bring pain free time for other people, I say that the cost should be assumed voluntarily. If no one will do so, then they should all suffer.

In so far as I have good information, I try to operate under the same principles in the real world. But I don't always have good information.

I know that if I buy coal I will be causing someone's death. I also know that if I buy coal I will be causing someone to live. I don't know if the people that I am causing to die are dying sooner than they otherwise would have (malnutrition might have killed them in the same time frame). I don't know if the people that I am causing to live are living longer than they otherwise would have (they might have found a different way of feeding their families and taking care of their medical costs).

Since I don't have perfect information, I may very well be causing some people to suffer involuntarily for other people's benefit. But I don't consider that to be the moral equivalent of choosing death for one person in exchange for pain free time for all because I don't have perfect information.


"...you should still advocate a voluntary, massive scaling back of the mining industry."


Last time I checked, electricity saves lives. It runs those fancy medical contraptions, keeps our food in supermarkets from spoiling (preventing starvation), allows us to grow crops, etc., etc.

So, I don't think that Alex's example really is helpful to him.

Except, it does bring up an interesting point. If we can produce electricity in a way that saves lives but costs more money, should we do it? For example, if it were feasible to use solar or nuclear power instead of coal, but (hypothetically) these other means cost more money, would we have an ethical obligation to switch?

I say yes. Obviously. Money != human life.

Of course, we have people like Alex running Ford with those exploding gas tanks in the Pinto, who figured the cost of lawsuits was less than the cost of fixing the gas tanks.

Guess what. Those Alexes who ran Ford were wrong and faced criminal charges. I guess that Alex's philosophy, if implemented in a commercial context, is not only morally wrong, but potentially criminal.

John T. Kennedy asks:
"Given a private road, what are the involuntary risks created by driving?"

You don't really need to posit counterfactuals if you really crave the fig leaf of "voluntary". You can simply posit that everybody using the public road is taking voluntary risks of being killed by others. But of course, there's no end to that sort of absurdity: you can say people are taking voluntary risks breathing polluted air, living in countries ruled by murderous dictators, etc.

By painting everything as voluntary, you absolve all fault from anybody, including the most heinous murderer: the victim voluntarily undertook the risk. We don't even have to care about compensation (unless it was promised): if the risk was taken, we have revealed preference.

With this libertarian absurdist idea of voluntarism, there are two major exceptions: the first is innocence. If you are innocent of the choices, then you cannot be said to have made a voluntary choice.

The second exception is to consider whether somebody has the legitimate power to add a risk to the social environment, thus changing the "voluntary" choices. Perhaps making them essentially unavoidable (such as breathing polluted air.) Thus we come full circle back to the initial question: can we trade off deaths for convenience?

I'd think about it in terms of diminishing marginal utility.

If one tried to have a near-zero risk of dying on a given day (never went outside because of skin cancer, never drove because of accidents, etc.), then life becomes hardly worth living.

That's part of the reason why some people do dangerous jobs such as coal mining. For them, the opportunity cost of less compensation from a safer job is greater than the risk of injury and death in the coal mine.

However, the same person who signed up for a mining job with a .1 percent chance of death won't take a job that pays 990 times more with a 99 percent chance of death. Explained in terms of microeconomics, the marginal utility of each additional dollar of compensation asymptotically approaches zero while the cost of death remains the same. Thus, to maintain the same ratio of utility per chance of death, an employer would have to offer more money to raise the chance of death from 98 to 98.1 percent than from .1 to .2 percent.

And for someone to take a job with a 100 percent chance of death, they would have to be offered infinite compensation. It's the economic equivalent of dividing by zero.

Therefore, even if billions of people all chipped in, say, five bucks for somebody to die a tortured death in order to reduce the billions' headaches by one second, it would still not be enough for the person to voluntarily enter a contract to die.

Alex, Le Guin's story "The Ones Who Walk Away from Omelas" is for you. Here's Wikipedia on this.

"...whether it could ever satisfy a cost-benefit test for one person to die a terrible and tortured death in order to alleviate the headaches of billions of others.."

Millions of Catholics believe so.

Seems like a better example than the coal miner one would be that of corporate insurance benefits offered to employees. Instead of each individual bearing all the cost and risk of that small chance of being in an accident, they spread the risk out across many employees and therefore pay a lower price: many small, split-second headaches versus one person's tortured demise. Physics demonstrates this well also. Say you have to lift a huge weight from point A to point B, and the weight is so heavy that you can't possibly do it alone or in one motion. Using a ramp or a system of pulleys gets the same amount of work done (accomplishes the lifting), but with much less effort by any one part of the entire system (that one person who has to suffer and die to move our weight around). Also, once the second-long headache is over, it's over. A sunk cost with little bearing on the present. Those billions can go on to be productive citizens, including the "would have been tortured" person. If you side with the torture, that's one less person to add to the mix.

The "volunteer" part is essential in the initial example because it implies the coal miner is better off by taking the job, making the transaction a Pareto improvement (ignoring externalities such as pollution, which do complicate the analysis but were not the point in the initial example). And Pareto improvements are such an obvious moral choice that even positive economics, which tries to avoid all value judgments, strives for Pareto optimality. Now, if Prof. Tabarrok has any insight on how a Pareto improvement might be equivalent to torturing somebody to death, I beg him to elaborate on his thoughts, for the effects on positive economics and everywhere else could be profound.

Alex, I think you're completely wrongheaded about this question, and I have two important things to think about that I haven't seen said straight in the comments yet.

I don't think you can discount the divide between a willing sacrifice (whether for compensation or merely out of altruism) and an unwilling one. If the sacrifice is unwilling, that unwillingness itself represents a cost that must be included in any analysis. If people believe that their lives are potentially forfeit in any kind of common way to this sort of analysis, that's going to cast a pall over everyone in the society that probably costs more than a second of headache pain.

And there's a purely mathematical issue of why it is incorrect to value very small risks in the same way you would value large ones. I copy here bits that I submitted today on a thread about this elsewhere.

People will accept small risks to their life or liberty/livelihood for a payment, but rarely will they accept near certain death for any cost. It's instructive to consider why in mathematical and economic terms, rather than in instinctual terms.

One way to discount a benefit, if I'm being completely self-interested, is to consider that the money I receive is worthless to me in the event that I die.

So let's say I put an economic value on my life of X. This means that in the event that I could enjoy the payment 100% of the time, I would accept X*p as a premium payment in order to accept death with probability p.

But I can't enjoy that payment 100% of the time. If I accept the exchange, then p of the time the payment will be worthless to me. So the payment I actually require to make this devil's bargain is X*p/(1-p). Note that this curve is hyperbolic: as 1-p gets close to zero, it increases without bound. Any very high number for p makes a huge difference in the expected compensation. If my X is 10 million dollars, I'll accept something like a coal mining job with a 30 in 100,000 chance of dying in a year for a $3,001 premium over a similar job (and the discounting factor adds only $1 to my desired payment). But if someone asks me to take a job with a 90% risk of death, I'm going to require a $90 million premium, not a $9 million one.
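The hyperbolic premium is easy to tabulate; this is a direct transcription of the X*p/(1-p) formula above, using the commenter's own $10M life value:

```python
def required_premium(X, p):
    """Premium needed to accept death-probability p when your life is worth
    X dollars and the payment is worthless in the p of cases you die:
    (1 - p) * premium = p * X  =>  premium = X * p / (1 - p)."""
    return X * p / (1 - p)

X = 10_000_000                                   # $10M life value, as above
print(round(required_premium(X, 30 / 100_000)))  # 3001: the coal-mining premium
print(round(required_premium(X, 0.9)))           # 90000000: a $90M premium
```

Note how mild the discounting is at small p (the premium exceeds the naive X*p by only a dollar) and how violently it diverges as p approaches 1.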

But even this analysis doesn't get close to the true cost required because very few people have a linear utility of money curve. If we use logarithmic utility of money, then it becomes increasingly hard to pay off that discount as p gets higher, and it's practically impossible to compensate people for accepting very large risks of death unless the value they place on their life is especially low. For an example if my u(x) = ln(x):

Then a 10% p of death with X=10million would require paying me 4.6 million, rather than the 1.11 million that a linear analysis would show.

At very low risks of death, the log analysis adds little (I need 3007 to take a coal mining job rather than 3001). But as risks go up, the costs required to compensate me shoot up *dramatically*. At a 20% risk of death, I need 75million to compensate me, despite my putative life value being only 10 million. To get me to accept a 50% chance of death would require 25 trillion dollars, or twice the 2005 US GDP. I can't seem to find a figure for total world wealth, but I'd bet it's roughly the same order of magnitude as the $88 quadrillion my model suggests is required to pay for a 60% chance of death. Even if I set the value for my life absurdly low, high risks still result in extremely high payments. If X=$100 but p = 80%, then the fair compensation is over 3 billion dollars.

And of course, certain death would require infinite compensation under this model for anyone with a positive X.

This doesn't even consider that most people's utility is flatter than pure log, especially at the high end. I see effectively no difference between getting paid 100 billion and 25 trillion personally, and very little between 50 million and 100 billion, so the risk of death I'd be willing to accept personally for any amount of money is going to max out in the 20% range or lower.

That looks about right, doesn't it? I don't know many happy middle-class people who would play Russian roulette (a pure 16% risk-of-death gamble) for a mere $1,500,000, but every time we drive a car or partake in extreme sports we demonstrate the willingness to accept minuscule risks that trade life at similar linear rates.

When you talk about actually killing somebody, or putting them under a kind of indefinite severe torture that would make their life arguably not worth living, this model suggests that you cannot possibly pay them enough to take the deal willingly if they value their current life and have a normal-ish utility of money function.

This also suggests that concentrated misery/death has a much greater negative effect on overall well-being than spread out pain or risk.

This implies to me that unless we can distribute the suffering again, there's no way to make some balance of payments that would make all the participants in your hypothetical willing. If the overall total suffering were greatly decreased, then in principle, as long as the distribution of the resultant suffering was not *too* concentrated, there would be some balance of payments that would make everyone at least whole and some happier.

But one certain death/torture is always too concentrated and requires an infinite benefit to balance.

hey? whatever maximizes utility. that and that alone should be our measure. good day.


... and this is why utilitarianism is bunk.
