The difficulty of cross-theoretical aggregation

File this one under “unglamorous yet underrated philosophical paragraphs”:

There are really two problems that fall under the label of ‘the problem of intertheoretic choice-worthiness comparisons’.  The first problem is: “When, if ever, are intertheoretic choice-worthiness comparisons possible, and in virtue of what are intertheoretic comparisons true?”…The second problem is: “Given that choice-worthiness sometimes is incomparable across first-order normative theories, what is it appropriate to do in conditions of normative uncertainty?”

That is from the doctoral dissertation of William MacAskill, who is also a driving force behind the Effective Altruism movement.

Here is an oversimplified way of putting his point.  Let’s say you think utilitarianism is true with some probability, and Kantian deontology is also true with some probability.  Can you aggregate the recommendations of these two theories “across the probabilities”?  Not easily.  The Kantian theory offers an absolute recommendation, but should that carry the day if deontology is true with only 7% probability?  More generally, even less absolute theories do not offer comparable frameworks for cross-theoretical aggregation.  How do 6% truth for maximin, 13% truth for prioritarianism, and 27% truth for cosmopolitan utilitarianism all add up?  It’s not like calculating true shooting percentage in the NBA, because there is no common and commensurable understanding of “points” across the different frameworks.  This aggregation problem is actually tougher than Arrow’s, at least once we recognize there is justifiable uncertainty about the true moral theory.
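To see the problem concretely, here is a stripped-down sketch (the credences, action names, and choice-worthiness numbers are invented for illustration, not drawn from MacAskill).  A naive credence-weighted average looks well defined, but because each theory's scale of "points" is arbitrary, rescaling one theory's numbers, which changes nothing about what that theory recommends, can flip the verdict.

```python
# Made-up illustration: naive credence-weighted aggregation of
# "choice-worthiness" breaks down because each theory's scale is arbitrary.

credences = {"maximin": 0.06, "prioritarianism": 0.13, "cosmopolitan_util": 0.27}

scores = {  # hypothetical choice-worthiness of two actions under each theory
    "maximin":           {"act_A": 1.0, "act_B": 0.0},
    "prioritarianism":   {"act_A": 0.2, "act_B": 0.8},
    "cosmopolitan_util": {"act_A": 0.3, "act_B": 0.6},
}

def expected_cw(scores):
    """Credence-weighted choice-worthiness of each action."""
    return {a: sum(credences[t] * scores[t][a] for t in scores)
            for a in ("act_A", "act_B")}

print(expected_cw(scores))   # act_B comes out ahead (0.266 vs 0.167)

# But nothing fixes maximin's unit of "points".  Multiply its scores by 10
# (an equally legitimate representation of the same ordering) and act_A wins.
scores["maximin"] = {a: 10 * v for a, v in scores["maximin"].items()}
print(expected_cw(scores))   # now act_A comes out ahead (0.707 vs 0.266)
```

The rescaled numbers represent the very same theory, which is why there is no common currency to aggregate.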

There is actually some related blog commentary on this issue.  Overall MacAskill is on to one of the most important developments in consequentialist ethics over the last few decades.

Comments

The statement "utilitarianism is true" or "utilitarianism is not true" is more or less a textbook example of the kind of misuse of language Wittgenstein took issue with.
The presumption that this sort of vague nonsense can be analyzed logically is a big intellectual mistake.

“Utilitarianism is true AND utilitarianism is not true” is closer to what this post is trying to talk about.

lol. At least he follows his own advice about not logically analyzing these statements, or at least not doing so correctly. ;)

I meant that is what Tyler is trying to do. The root of this problem is that we have ONLY a non-rational basis for our decisions about the physical world. We can only hope to be justifiably wrong or right. There is a nonzero chance we are brains in vats. Your moral theory needs to account for that.

Your complaints about vagueness and nonsensicality are misdirected if you think they're aimed at MacAskill. Passing silently over questions such as 'is utilitarianism true' in no way undermines MacAskill's project, which addresses questions such as how to act in a situation in which you feel the sway of multiple normative theories. In order for MacAskill to be nonsensical, you have to consider normative questions nonsensical generally. But if this were your objection you should have just said 'I don't believe in morality'. And regarding vagueness, you seem once again to be responding to a line from Cowen's admittedly oversimplified gloss rather than to MacAskill himself. MacAskill's questions are about making decisions in particular situations.

Given that he argues expected utility theory with a prior over different moral philosophies is a possible solution, I'm pretty sure MacAskill is plenty guilty of mixing and matching different language games in an incoherent way. Wittgenstein more or less opened and closed this book in my opinion with the ethics=aesthetics one liner. Morality is real (in the same way that beauty is real- some things do or do not seem moral), but moral reasoning is not.

When Will MacAskill starts giving advice on careers, that is not philosophical nonsense, but he also has no expertise in it. If you want advice on how to end poverty (for example), you do much better to talk to a development economist than a moral philosopher. On a related note, philosophy suffers from a huge selection bias with regards to understanding Wittgenstein. Anybody who understands his books correctly knows better than to waste their time getting a PhD in philosophy.

Which of course means that no philosopher should get to label themselves an effective altruist, since it is an utter waste of time.

MacAskill does talk about maximizing expected 'choice-worthiness' (not utility) over possible theories (although only in scenarios meeting a set of strict conditions, which he argues do not generally apply). It seems to me that you are claiming that this talk is incoherent. I disagree. This talk can be comprehended in terms of the real-world situations that MacAskill is addressing, namely situations in which we have conflicting moral intuitions, and in terms of the things that we decide to do in those situations. In particular, MacAskill proposes a way to decide what to do in such situations: consider the suggestions of all the various moral theories which describe our various moral intuitions, and weight these suggestions according to how much sway those moral intuitions hold over us. Of course, this is the trivial case and not generally possible. But I just don't see how this advice could seem incoherent to you just because you think morality=aesthetics (a position I find plausible, actually, which is why I'm bothering).

Of course, you can always say "I don't like to think too much about what is right and wrong. In cases where my intuitions come into conflict I'm not interested in trying to decide in a principled way. I just shrug my shoulders and act arbitrarily". But that is a different criticism than claims of incoherence. Similarly, just because morality=aesthetics doesn't mean that we can't understand what it means to perform moral reasoning. More precise and expressive language can help us gain insight into the landscape of moral situations, and this understanding can change the decisions that we would make. This process is often called moral reasoning.

Logical issues aside, asserting that all human life is of equal value and animal life is valuable too sounds like a good starting point for behaving ethically.
Of course, viewed from a logical perspective, my previous sentence should be viewed more as part of my definition of ethical rather than as an argument that is or is not true.

Yes, obviously. MacAskill would agree with you.

Haven't read the paper, but from the quote I take the point to be slightly different. Not that different theories have certain probabilities of being true, but that moral problems are best described with several incommensurate theories at the same time, i.e., moral problems have many dimensions that can't be reduced to one value. If so, how to weight the various incommensurate values?

As you have put it, there is a single value to which moral problems reduce; we just cannot yet see which one. In that world the practical problem of deciding under conditions of uncertainty is an interesting problem, but not a fundamental one. Of course that is not to knock it if it is true.

It's less problematic if you think of "consequentialism" / "deontology" / "virtue ethics" etc. not as standalone justificatory normative theories, but rather as well-refined expressive vocabularies used to convey moral intuitions which are antecedent, and to a large degree translatable into each ostensibly opposing theory. This particular class of "moral uncertainty" can be dismissed as caught up in what Brandom calls the formalist fallacy.

This quote explains what I mean:
http://abstractminutiae.com/image/113097315335

Your first sentence is very well put, and I fully agree. However I do not see how this resolves the issues that MacAskill is concerned with. There are situations in which we have conflicting moral intuitions (those expressed by rival theories). What do we do then? This is a question MacAskill seeks to address, and this question does not depend on the formalist fallacy, as far as I can tell.

To the extent that he emphasizes the "cross-theoretic" dimension, he is committing the fallacy. If we think of morality as, in the first instance, an implicit property of what we feel entitled to *do*, and the theoretic as a way of "making it explicit", then the conflict of having two moral intuitions is not an epistemic problem, but a problem of conflicting deontic constraints. To put that into decision theory, imagine two pruning procedures applied to a decision tree. The first prunes based on what is possible, probable and necessary (via our doxastic commitments) and the second prunes based on what is prohibited, permissible and obligated (via our deontic commitments). Only after these two procedures are complete are we left with a "rational" choice set. Given the inferential nature of these commitments, when we experience moral dissonance, it isn't an issue of an isolated normative or epistemic "coefficient" on a decision tree's branch. Rather, the main issue to resolve is how coherent the commitment is within our holistic web of commitments. We tend to make the choice that requires revising the fewest follow-on inferences. Does that make some sense? I don't think that solves the problem per se, but it does put MacAskill's question in a very different light.
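If it helps, here is a rough sketch in code of the two pruning passes I have in mind (the options and the commitments are invented for illustration): first prune the branches our beliefs about the world rule out, then prune the branches our deontic commitments prohibit, and only what survives both passes counts as the rational choice set.

```python
# Rough sketch of the two pruning procedures (options and commitments made up).

options = ["lie_to_the_donor", "tell_the_donor_the_truth", "conjure_money_from_nothing"]

def doxastically_possible(option):
    # First pass: prune branches our beliefs about the world rule out.
    return option != "conjure_money_from_nothing"

def deontically_permissible(option):
    # Second pass: prune branches our deontic commitments prohibit.
    return option != "lie_to_the_donor"

rational_choice_set = [o for o in options
                       if doxastically_possible(o) and deontically_permissible(o)]
print(rational_choice_set)   # ['tell_the_donor_the_truth']
```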

Thanks for this thoughtful comment. However, I disagree with this statement: "To the extent that he emphasizes the cross-theoretic dimension, he is committing the formalist fallacy". The formalist fallacy in this domain would be to treat ethical theories as justificatory rather than descriptive, right? MacAskill does not make this mistake. Simply talking about theories or using them as tools does not amount to treating them as justificatory. Specifically, MacAskill is addressing a real problem, namely how to decide what to do in cases where we have conflicting moral intuitions (those expressed by rival theories). One approach to dealing with such situations is to try to reconcile the recommendations of the theories that express our conflicting intuitions. But this is not so easily done, and MacAskill explores some interesting ways to approach this problem. Indeed, your own suggestion about 'holistic commitments' seems to be following this same general approach to the problem (i.e. a decision criterion which takes the recommendations of both sets of intuitions into account in some way). Are you also committing the formalist fallacy in comparing the recommendations of your two different pruning procedures, and counting how many downstream commitments each of them forces us to violate? Of course not.

As for using coefficients to weight the recommendations of various intuitions, this does not have to be understood epistemically. Rather, coefficients seem like a natural way to capture the fact that in a particular situation, different moral intuitions will hold more or less sway over us.

Neo-Hegelian obscurantism.

Aside from the gibberish, the fallacy is that moral problems (indeed, all life's problems including those in the realm of economics) can be reduced to a mathematical certainty. While Plato was a patron of mathematics and insisted that mathematics (mostly geometry) was an essential part of a philosophical education (because math demands clearly stated assumptions and logical deductive proof), he didn't take it to the absurd lengths it is taken today. Indeed, the overuse of mathematics today (a fetish) is but a smoke screen for what is in reality intellectual fraud.

MacAskill does not assume that "moral problems can be reduced to mathematical certainty". Indeed, it is a common fallacy to think that we can only use numbers when we are certain. In fact almost the opposite is true - numbers are often the most valuable and important under conditions of uncertainty.

Fundamentally confusing. "Normative" and "theory" are two terms that don't go together. What does it mean for utilitarianism to be "true"? Theories are subject to testing and evidence. What kinds of evidence or testing does one bring to utilitarianism or any other kind of normative proposition?

You should read MacAskill then, not Cowen's crudely stated summary. It doesn't have to mean anything for utilitarianism to be "true" in order for us to be faced with questions about how to act in situations where various moral intuitions are in conflict. Perhaps you are a moral relativist, but that is a separate issue.

"Overall MacAskill is on to one of the most important developments in consequentialist ethics over the last few decades."

My God

Judea Pearl will tell you to express it in the causal calculus formalism, everything else follows. :)

Isn't this what Pascal's Wager is for? "Can't really prove the basis of things, but I've got a reasonable enough answer to go with"?

Even if "utilitarianism is true", the actual recommendations can be completely different. Even within the EA movement, for example...Singer's people say that the marginal human decreases total utility. Every time an effective altruist saves an African child with a malaria net, they are DECREASING utility! Good luck aggregating that.

Huh? If they were incompatible metrics you'd go with maximum likelihood! This really is basic probability stuff. Just think of actions as categorical, with each theory assigning a value of 1 to a single action and 0 to all others. As is so often the case with contemporary philosophy, this seems horribly out of date. They seem 50 years behind mathematicians and scientists.
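Concretely, a minimal sketch of what this proposal seems to amount to (the theory names, credences, and recommended actions are all hypothetical): each theory casts a one-hot "vote" for its single recommended action, and the credence-weighted winner is chosen. A reply below points out why most theories don't actually deliver their verdicts in this one-hot form.

```python
# Sketch of the one-hot "vote" proposal (theories, credences, actions made up).
from collections import defaultdict

credences = {"maximin": 0.06, "prioritarianism": 0.13, "cosmopolitan_util": 0.27}
recommended = {              # each theory's single recommended action
    "maximin": "act_A",
    "prioritarianism": "act_B",
    "cosmopolitan_util": "act_B",
}

votes = defaultdict(float)
for theory, action in recommended.items():
    votes[action] += credences[theory]    # one-hot vote weighted by credence

print(max(votes, key=votes.get))   # 'act_B' on these numbers (0.40 vs 0.06)
```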

One obvious application of this technique is to abortion. There is widespread moral uncertainty about the issue, but the high cost of getting it wrong suggests the value in trying to work out the best action under moral uncertainty.

http://lesswrong.com/lw/lhp/compartmentalizing_effective_altruism_and_abortion/

Sure, it is a very hard problem, but again, the aggregation-of-information part is math out of the 1950s. The hard part is getting signal rather than noise. Regardless of what philosophical stance you take (soul, consciousness, potential, etc.), the hard part is in measuring, not aggregating. And the aggregation math doesn't change based upon how poor your measurement is.

Specifically, regarding the link: compartmentalizing is actually a feature, not a bug, in high-noise, high-dimensional environments exactly like the situations discussed. It is less prone to over-fitting. This is a more recent finding from math, but we are still talking 15 years old (see ensemble learning). I wouldn't consider any modern discussion of compartmentalization worthwhile if it didn't place it in that context.

The thing is, most theories aren't "1 for a single action and 0 for all others" -- utilitarianism, for example, has very explicit ordinal (and indeed scalar) preferences. Many other theories have a "prohibited / required / allowed / supererogatory" categorization, which can't be reduced to (1,0,0,0...) data.

" How does 6% truth for maximin, 13% truth for prioritarianism, and 27% truth for cosmopolitan utilitarianism all add up? "

Looks like it could be modeled in a Klingian three-axis model and then run through a modified fractal.

The idea of assigning percentage truths to different sorts of philosophical argumentation runs counter to the value of studying philosophy. You study, and thus encounter, a body of work. This body of work enables you to assess the various strengths and weaknesses of that body of work and of specific ideas within it, and gives the student of philosophy something to ground themselves in when elucidating their own ideas.

In my opinion, the value of Kant is not whether or not he was right, but rather that he was able to put much of Christian ethics (not the kind espoused by the American right wing, but more so of the "love your neighbour" kind) into a secular framework which can be debated for its own merits or flaws without having to debate the existence of God or whether scripture was laid down word for word by some all-knowing omnipotent being as opposed to fallible man.

Utilitarianism is a useful construct and should be understood by all philosophers and economists, but practically speaking people use it to justify everything from pure communism to unfettered free markets.

Was Marx right or wrong?

Who cares? Studying him will open your mind (so long as your mind is a little open in the first place and you're not reading him for the simple purpose of trying to find out how/when he was wrong).

Same goes for these theories. Assigning percentage probabilities of correctness is a silly venture, a waste of time and effort in my opinion.

This is exactly why Amartya Sen's approach in The Idea of Justice is so useful. Practically, we don't need to decide what world we want to create ultimately, or even what is best to do right now; we only need to agree collectively on what would constitute meaningful improvement.
