Effective Altruism: where charity and rationality meet

That is the title of my current column at The Upshot.  I very much enjoyed my read of William MacAskill’s Doing Good Better: How Effective Altruism Can Help You Make a Difference.  The point of course is to apply science, reason, and data analysis to our philanthropic giving.

I am more positive than negative on this movement and also the book, as you can see from the column.  Still, I think my more skeptical remarks are the most interesting part to excerpt:

Neither Professor MacAskill nor the effective-altruism movement has answered all the tough questions. Often the biggest gains come from innovation, yet how can a donor spur such advances? If you had a pile of money and the intent to make the world a better place in 1990, could you have usefully expected or encouraged the spread of cellphones to Africa? Probably not, yet this technology has improved the lives of many millions, and at a profit, so for the most part its introduction didn’t draw money from charities. Economists know frustratingly little about the drivers of innovation.

And as Prof. Angus Deaton of Princeton University has pointed out, many of the problems of poverty boil down to bad politics, and we don’t know how to use philanthropy to fix that. If corruption drains away donated funds, for example, charity could even be counterproductive by propping up bad governments.

Sometimes we simply can’t know in advance how important a donation will turn out to be. For example, the financier John A. Paulson’s recently announced $400 million gift to Harvard may be questioned on the grounds that Harvard already has more money than any university in the world, and surely is not in dire need of more. But do we really know that providing extra support for engineering and applied sciences at Harvard — the purpose of the donation — will not turn into globally worthwhile projects? Innovations from Harvard may end up helping developing economies substantially. And even if most of Mr. Paulson’s donation isn’t spent soon, the money is being invested in ways that could create jobs and bolster productivity.

In addition, donor motivation may place limits on the applicability of the effective-altruism precepts. Given that a lot of donors are driven by emotion, pushing them to be more reasonable might backfire. Excessively cerebral donors might respond with so much self-restraint that they end up giving less to charity. If they are no longer driven by emotion, they may earn and save less in the first place.

On Paulson, here is Ashok Rao’s recent post on compounding returns.
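The compounding argument is easy to make concrete. Here is a back-of-the-envelope sketch (the 5% annual return and 30-year horizon are illustrative assumptions on my part, not figures from Rao's post):

```python
def future_value(principal, annual_return, years):
    """Value of a gift left in an endowment to compound."""
    return principal * (1 + annual_return) ** years

gift = 400_000_000                 # Paulson's gift to Harvard
fv = future_value(gift, 0.05, 30)  # assumed 5% annual return over 30 years
print(f"${fv / 1e9:.2f} billion")  # roughly $1.73 billion
```

Whether compounding in an endowment beats spending the money now depends on how that return compares with the rate at which the philanthropic opportunities themselves compound, which is the crux of the debate Rao's post addresses.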


Well, as long as they don't become obsessed with the seemingly small chance of Unfriendly AI . . .

But, but what about Roko's basilisk? He can attack anytime.

That doesn't seem likely to me; it's simply that they acknowledge it's one of many problems with the world: http://slatestarcodex.com/2015/08/13/figureground-illusions/

Except it's not one of the problems in the world, and even if it were writing Harry Potter fanfic wouldn't help.

I'm surprised when people say AI safety is not a problem. Would you also say synthetic biology safety is not a problem?

It's obviously worth taking seriously.

As for the Harry Potter fanfic, do you think people have a right to leisure and a hobby? If not, what are you doing in MR's comment section?

Recent executive director of MIRI (née SI) and current GiveWell employee Luke Muehlhauser has claimed that said fanfic is a major accomplishment of the organization. Why would he say that if it is someone's hobby? And if that's what they are spending money on, how is that altruism, much less effective altruism?


(Ctrl-F for HPMOR, it appears several times)

Also, your argument by vigorous assertion that we should be worried about mean computers is not convincing.

"we should be worried about mean computers"

Your idiotic strawman is also not convincing.

I don't speak for MIRI, and MIRI is not the only organization concerned with AI safety research. Even if MIRI were dedicated 100% to Harry Potter fanfic, the correct response would be to stop supporting MIRI, not to decry all AI safety research organizations.

That article was awful and unrepresentative of the actual movement.

"And even if most of Mr. Paulson’s donation isn’t spent soon, the money is being invested in ways that could create jobs and bolster productivity."

This logic leads nowhere. The same reasoning dictates that $400 million might be best spent by just giving it to a hedge fund to manage because, "Who's to say?" Maybe that manager will find the best investments around and create more human good than all the mosquito nets in the world.

The most charitable explanation for that aside is that Tyler thinks that if someone is dumb enough to give $400 million to Harvard, Tyler doesn't want to dissuade them from at least giving it to Harvard rather than letting it sit in a bank account.

It's his alma mater.

Of course the donation is a good idea.

> "Who's to Say?"

As you point out, that is specious reasoning. If the point of EA is to produce measurably better results, then the question comes down to which metrics you choose.

And that, I think, is the biggest pitfall of the entire movement - choosing the 'correct' metrics. Because it is deceptively easy -- seductive even, in a conflict-of-interest way -- for people to choose poor metrics that will make their pet projects look like successes even if the practical results are failures.

And that takes you back to the original problem - the application of good judgment, wisdom even, in deciding how to apply charity. It is basically the same problem whether it is EA or traditional charity. There is no substitute for good judgment.

A few weeks ago there was a good link about how metrics can be overdone and misleading, with people performing to meet the metrics rather than the less quantifiable goals they stand in for.

Yeah, I think the argument boils down to: it is unreasonable to question any donation because "you never know".

Give a warlord enough money and he just might decide to invest in an Apple Watch coding program for starving children in Africa. It's better to give than not.

So Harvard is equivalent to an African warlord?

The logic extends there, and beyond.

Also, I just think it's not accurate to say that EAs aren't thinking about research. For example, if you're trying to work on the somewhat mysterious magic of innovation, I think there's a lot to be said about:


And it's not like 80,000 Hours doesn't include various forms of innovation and entrepreneurship as valuable goods for the world. I do think the flak Paulson got for his last donation was really unfair (it's not like he's, ahem, building a new student center) and there's a lot of potential for that donation to do good. But I also think that John Arnold is going to get a lot more innovation for his money than Paulson is, because he's thinking much more deeply (and cares) about where the most good can be done through charity.

"If you had a pile of money and the intent to make the world a better place in 1990, could you have usefully expected or encouraged the spread of cellphones to Africa?"

If you just gifted that pile of money to Africans as cash gifts, wouldn't that have indeed encouraged the spread of cellphones, or whatever other goods turned out to maximize Africans' utility, to Africa?

"Neither Professor MacAskill nor the effective-altruism movement has answered all the tough questions."

Has any person or movement ever answered all the tough questions?

I am assuming nowhere in the book does this author say that the Effective Altruism movement has answered all the tough questions and knows with great certainty how money can best be used to reduce suffering most effectively.

My interpretation of the Effective Altruism movement is that their goal isn't to provide confident answers at this time to questions that we can't possibly know yet. The short term goal is to get people to stop donating money to things we know are not productive to reducing poverty.

It's like nutrition. We have no way of knowing what the 'ideal' diet is for living a long and healthy life - anyone who claims to have the answers is lying. But we do know some things you should not do if you want to live a long time, and we are working on learning more about what you should try to eat.

The long term goals are to start gathering more evidence so we can methodically evaluate the risks and benefits of allocating money in different ways.

The short term goal is to get people to stop donating money to things we know are not productive to reducing poverty.

God, I hope not.

I always saw the point of EA as "for your given Good Thing, do it in the best way possible." So if you want to "end starvation in Africa" maybe take the time to check if, say, blindly dumping food on the continent that is just intercepted by warlords is actually meeting your goal, instead of actively fighting against it.

There are loads and loads of bad charities out there. Those that have negative returns to their stated goals. Filtering those out of the world is a great move. If people would be Effective in those goals it would help the world tremendously.

"...many {most} of the problems of poverty boil down to bad politics {government}..."

I did not have a pile of money and the year was 1993 but I was the FCC Chairman then and I was part of a very successful global plan to use cell phones to lift hundreds of millions out of poverty. At least in my experience innovation is not as mysterious as you think.

"Economists know frustratingly little about the drivers of innovation"
Bullshit! All you need to know is government is the driver of innovation. Anyone who thinks otherwise is a global warming denier!

I am from Brazil and, for some reason I don't know precisely, I follow your blog on a daily basis. I am always learning something new. Thanks a lot!

Sometimes, I think we'd be better off if we all focused on bettering our own lives in a way that was respectful and mutually beneficial to others, and taught others to do the same.
Am I really going to do more good in the world by giving money away to strangers or by improving the lives of myself and the people closest to me, whom I know the best, and hence know best what they need?
I think it's important to emphasize that you have to be ethical and honest. Help yourself, but don't do it at other people's expense by fraudulent means - way too many think like that, and that's not what I'm advocating.

How convenient.

Improving one's own life can be greatly beneficial. And it doesn't have to be selfish. If the med student decides to study this weekend instead of going out drinking with friends, he may (notice "may") be making the world a better place, by making himself a marginally better doctor.

Once he's established as a doctor and his income is more than covering his nut, though, it might be time to find other people who will self-improve if given some assistance and assist them.

It's probably easier to identify other people who will self-improve if you already know them via family or community connections.
Like, for example, your own children. Unless your kids are already accepted at Harvard, it's hard not to think that a marginal dollar of income is better spent on your own child's health and education than on someone else's. This is probably true for all but the wealthiest 1%.
Do you want to focus on keeping alive a few children at subsistence level, or on producing an extra doctor?

It's easy to see why people would *want* to spend money on their own kids, but it has nothing at all to do with altruism. On the contrary.

Have you ever been given gifts from relatives or friends? Even close ones who know you well? They are often very far from what you want/need/could benefit from.

I don't think you need to know a lot about someone to know that more often than not, the thing they want that would help them the most is cash. And that's something most American friends, family, and governments often ignore - despite being in the same community.

You can't compare gifts from friends and family with charity. Charity is about utility, gifts are about pleasure.

Not that I am agreeing with the OP, just saying it's incongruous to dispute the OP's position in that way.

That would result in wealthy and better off communities consolidating those resources and opportunities.

Maybe, but are you more concerned with net utility, or equality?
If I can help my three hypothetical children far more than I can help three strangers' children, inequality may be increased, but so will net utility.


Here's a suggestion for following Cowen's advice (which I interpret as to "focus" charitable giving) while maximizing the tax benefits of giving: the supporting organization. Most private charitable organizations are "private foundations". That's not a good thing, as private foundations (as opposed to "public charities") qualify for smaller tax benefits. Public charities, as their name implies, have broad public support, while private foundations don't. The supporting organization is a hybrid: it qualifies for public charity tax benefits even though contributions come from a narrow class of donors (or a single donor). How does a charitable organization qualify for supporting organization status? By focusing on a narrow charitable purpose of a public charity. For example, a large public charity might undertake a narrowly defined (by the supporting organization) charitable purpose that is funded primarily by the supporting organization. There are prescribed (in Treasury regulations) rules for a "supporting organization", which may not appeal to every charitable donor, but the point I'm making (focused giving) is essentially the same as the point Cowen makes. Sometimes good economic advice overlaps with good tax advice.

>we simply can’t know in advance how important a donation will turn out to be.

That is to say we have wide error bounds on our estimates of impact. This does not excuse us from trying hard to narrow those bounds by seeking more information. How would you behave if you HAD to come up with a best guess? You might do reference class forecasting, or consult with superforecasters, or do something else. I'm not sure what the best heuristics are here, but it is safe to say we can always come up with something better than throwing our hands up and declaring defeat before we have sat down and thought hard about the problem for 5 minutes.

Why am I being so harsh on this point? Because it is a seductive line of reasoning for something that people don't really want to have to think about in the first place. Donating to applause light charities is a big enough ask for most people. Asking them to now worry about which applause lights to throw money at is just too much. They did their part damn it, can't we just leave them alone? Yes! That's part of the point! You should not be expected to become an expert in charity analysis. We have capitalism and the division of labor and comparative advantage. GiveWell can do that bit for us. But that does mean we at least have the responsibility to let everyone know to donate to GiveWell and not Cute Puppies who need Expensive Surgeries.

Some of your points, like, "charity could even be counterproductive by propping up bad governments," assume that effective altruists will donate to actual charities rather than diverting all their funds to weird AI cults.

No one should fund weird AI cults. But no one should fund bed nets or African orphans either. And Harvard? That's just ridiculous. They are hardly running out of money.

I think the most altruistic thing is to just consume extra energy. Just keep your fridge open, the hot water tap running, and so on. This creates an incentive to invent cheaper forms of abundant energy.

"If corruption drains away donated funds, for example, charity could even be counterproductive by propping up bad governments." How about charity specifically designed to smack-down bad (and unelected) governments? Maybe give randomized trials of philanthropically-funded mercenaries & drones a shot? (Literally)

"Economists know frustratingly little about the drivers of innovation." What about a cross between Y Combinator and what Upstart tried to be at the beginning? Philanthropists make bets on promising individuals with life missions they want to support, then pay salary and benefits, provide mentoring, and share network. The "philanthropreneur" reports to the funders quarterly or semi-annually.

Would that work?

People who think we can go to Mars anytime soon are people who really really don't understand the microbiome.

(The microbiome in particular but the many unknowns in our food web in general. It is actually the biology that is hard and not the space flight engineering.)

Today's news about the microbiome and infant health is a timely case in point. There is no way we know how to provide second-generation women on Mars with a healthy microbiome.