Nick Beckstead’s conversation with Tyler Cowen

Nick is a philosopher at Oxford who has worked with Larry Temkin and Nick Bostrom.  He typed up his version of our conversation (pdf); it starts with this:

Purpose of the conversation: I contacted Tyler to learn about his perspectives on existential risk and other long-run issues for humanity, the long-run consequences of economic growth, and the effective altruism movement.

Here are a few excerpts:

Tyler is optimistic about growth in the coming decades, but he doesn’t think we’ll become uploads or survive for a million years. Some considerations in favor of his views were:

1. The Fermi paradox is some evidence that humans will not colonize the stars.
2. Almost all species go extinct.
3. Natural disasters—even a supervolcano—could destroy humanity.
4. Normally, it’s easier to destroy than to build. And, in the future, it will probably become increasingly possible for smaller groups to cause severe global damage (along the lines suggested by Martin Rees).

The most optimistic view that Tyler would entertain—though he doubts it—is that humans would survive at subsistence level for a very long time; that’s what we’ve had for most of human history.

And:

People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn't think it's a serious effort, though it may be good publicity for something that will pay off later. A serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons. There was also a serious effort by the people who set up hotlines between leaders to be used to quickly communicate about nuclear attacks (e.g., to help quickly convince a leader in country A that a fishy object on their radar isn't an incoming nuclear attack). This has been addressed in some cases (e.g., US and China), but it hasn't been addressed in others (e.g., Israel and Iran). There is more that we could do in this area. In contrast, the philosophical side of this seems like ineffective posturing.

Tyler wouldn't necessarily recommend that these people switch to other areas of focus because people's motivation and personal interests are major constraints on getting anywhere. For Tyler, his own interest in these issues is a form of consumption, though one he values highly.

And:

Tyler thinks about the future and philosophical issues from a historicist perspective. When considering the future of humanity, this makes him focus on war, conquest, plagues, and the environment, rather than future technology.

He acquired this perspective by reading a lot of history and spending a lot of time around people in poor countries, including in rural areas. Spending time with people in poor countries shaped Tyler’s views a lot. It made him see rational choice ethics as more contingent. People in rural areas care most about things like fights with local villages over watermelon patches. And that’s how we are, but we’re living in a fog about it.

And:

The truths of literature and what you might call “the Straussian truths of the great books”—what you get from Homer or Plato—are at least as important as rational choice ethics. But the people who do rational choice ethics don’t think that. If the two perspectives aren’t integrated, it leads to absurdities—problems like fanaticism, the Repugnant Conclusion, and so on. Right now though, rational choice ethics is the best we have—the problems of, e.g., Kantian ethics seem much, much worse.

If rational choice ethics were integrated with the “Straussian truths of the great books,” would it lead to different decisions? Maybe not—maybe it would lead to the same decisions with a different attitude. We might come to see rational choice ethics as an imperfect construct, a flawed bubble of meaning that we created for ourselves, and one we shouldn’t expect to keep working in unusual circumstances.

I’m on a plane for much of today, so you are getting Nick’s version of me, for a while at least.  You will find Nick’s other conversations here.

Comments

"Tyler is optimistic about growth in the coming decades, but he doesn’t think we’ll become uploads or survive for a million years."

"When considering the future of humanity, this makes him focus on war, conquest, plagues, and the environment, rather than future technology"

The problem with these two statements is that economic growth over the next 50 years or so is inextricably linked to future technology. It is not consistent to believe that the economy will keep growing for 50-100 years and that we will live in 2100 unaffected by radical future technology such as nano and bio tech. If you want to believe that radical technology will not be a huge factor in the future of humanity, you have to also believe that economic growth will flatline by 2100 or so. Where will sustained exponential growth of the global economy come from if not from these technologies? Better organization of existing human and material resources (holding tech fixed) will only take us so far.

You are seeing a conflict between two positions that Tyler didn't take. He doesn't state that there will be no technological progress; nor does he state that he expects growth to continue constantly for 100 years or more.

The statement "When considering the future of humanity, this makes him focus on war, conquest, plagues, and the environment, rather than future technology" is a little vague, so I interpreted it as meaning that "radical advanced technology will not have much of an effect on the future of humanity or on our lives in 100 or so years' time".

You could argue for a weaker interpretation like

"Tyler (personally) focuses on war, conquest, plagues, and the environment purely because that is his interest, and has no view either way on whether radical advanced technology will have much of an effect on the future of humanity"

but it is implicit from the context that he meant more than that, i.e. that he is trying to say that advanced technology isn't important.

Anyway, I am sure Tyler has a sensible position that he will elucidate when he has time. I'm just interested in which bullet(s), if any, he will bite, or whether he favors a compromise between orthodox and non-orthodox positions along the lines of

"There will be some growth due to technological advance, but it really won't be that radical; it'll be small potatoes compared to what Beckstead/Bostrom think about. Then, once we have exhausted the growth available from small-potatoes tech advances, technology will stagnate along with growth, or we will have a war/plague/etc. that prevents anything particularly interesting from happening, and a fairly stagnant state of humanity will persist until we go extinct at roughly the expected species lifetime for a vertebrate species (Wikipedia gives ~1 million years for this at http://en.wikipedia.org/wiki/Background_extinction_rate)"

Rationalist - You're perceptive for catching this contradiction.

Over the last several decades, even taking into account the tech boom, average growth was lower and our lifestyles changed less rapidly than in previous decades. If Tyler foresees a similarly slow pace of change for the rest of the century, then he's implying a pessimistic outlook for economic growth.

The explanation for this contradiction is that Tyler's projections fall into Peter Thiel's category of indeterminate optimism. Tyler's unwilling to make firm predictions about technological change but wants to be able to say that the markets will continue to grow in the meantime.

"and our lifestyles changed less rapidly than in previous decades."
Really? One example: time wasted posting on blogs.

Thanks - really it is the central tension at the heart of all criticism of the "Singularity" cloud of ideas - you have to believe one of (or some probabilistic combination of) three ideas which all sound a bit "wacky":

1. The economy will soon permanently stop growing
2. Humanity will become as powerful as gods
3. The End Is Nigh

Does it even make sense to think about anything that may happen more than 100 years from now? I feel that a perpetual 50-year horizon is as useful as a longer one; anything beyond 50 years is useless to consider. Say that you predict something of magnitude will happen in 65 years' time and you start preparing for it. I will only start preparing for that in 15 years (assuming your prediction holds to the same degree by then), and I don't think your head start matters that much. With my horizon, at least I don't waste time making unlikely predictions and thinking about acting based on them.

Humans are biologically incapable of planning more than a few years out with any degree of detail. We're not that smart. In all cases, when policy makers talk about forecasts beyond this year they are not just lying, they are trying to fool you. It would be nice if we could look out ten years and anticipate what's coming and then plan accordingly, but we can't so there's no point in trying.

This is exactly why restrictive, central-planning type policies such as closed borders are just silly. We need to create a dynamic society by offering almost unlimited numbers of H1B visas to those willing to come and work here.

If the Earth were going to be hit by a civ-destroying meteor 65 years from now, I could see there being a big benefit in trying to colonize Mars now versus waiting 15 years to get started.

Once the decision is made, it's going to take 5 years or so to get the first generation of the hardware that is needed (Elon Musk has been working on big pieces of it, but many pieces still remain). Then you have initial flights with the first explorers over several years. Maybe 10 years after that you can bring in the less hardy but essential folks to build more infrastructure, and if you are aggressive, 5 years after that kids start being born and you can open the doors to almost any young, healthy person to immigrate.

So, that's a hand-wavey 25 years down, 25 to go. You only have about a dozen launch windows to send people. (Each window is a month or two long.) Tack another 15 years on at the end and you will have another 7 or so launches, and the capacity of both ends of the pipe will be greater for those later launches than for the early ones.
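As a rough sanity check on the launch-window arithmetic above, here is a minimal sketch in Python. The only assumption it adds is that Earth-Mars transfer windows open roughly every 26 months (the approximate synodic period); the 25- and 40-year horizons come from the comment itself.

```python
# Rough check of the launch-window counts above.
# Assumption: an Earth-Mars transfer window opens roughly every 26 months
# (the approximate Earth-Mars synodic period).
SYNODIC_PERIOD_MONTHS = 26

def launch_windows(years: float) -> int:
    """Approximate number of Earth-Mars launch windows in a span of `years`."""
    return int(years * 12 / SYNODIC_PERIOD_MONTHS)

print(launch_windows(25))                        # ~11 -- "about a dozen" windows in 25 years
print(launch_windows(40) - launch_windows(25))   # ~7 more windows in the extra 15 years
```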

I am assuming that any statement about the world in 50, 65, or 100 years is probabilistic. We are talking about someone saying that the Earth is going to be hit by an asteroid with p=0.01 between 2080 and 2120. I would find it OK to wait some more years to see whether it is necessary to start acting.

But you do raise an important point. Imagine someone shows that the Earth will be hit by an asteroid sometime in 2100. This asteroid will wipe out humanity with p=0.99. What should we do? Should we start investing in asteroid-deflecting technology now, at the expense of, say, 5% of GDP? Or should we keep investing in medium-term technology (e.g., nanotech, robotics, genetics) and start worrying about the asteroid in 2050? It could be that in 2050, asteroid deflection is trivial with our robots and our nanotech. Issues to ponder. ;)
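One way to make that trade-off concrete is a crude expected-cost comparison. The sketch below is purely illustrative: the growth rate, the program end date, and the 50% chance that deflection becomes trivially cheap by 2050 are all invented assumptions; only the 5%-of-GDP cost share comes from the comment above.

```python
# Crude expected-cost comparison of "act now" vs. "wait until 2050" on asteroid deflection.
# Assumptions (all illustrative): GDP grows 2%/yr, the program costs 5% of GDP per year
# while it runs (through 2099), and there is a 50% chance that by 2050 robotics/nanotech
# make deflection essentially free.
GROWTH = 0.02
COST_SHARE = 0.05
P_TRIVIAL_BY_2050 = 0.5

def gdp(year: int, base_year: int = 2025, base_gdp: float = 1.0) -> float:
    """World GDP in units of today's GDP, under constant exponential growth."""
    return base_gdp * (1 + GROWTH) ** (year - base_year)

def program_cost(start: int, end: int = 2100) -> float:
    """Total (undiscounted) cost of spending COST_SHARE of GDP every year from start to end-1."""
    return sum(COST_SHARE * gdp(y) for y in range(start, end))

act_now = program_cost(2025)
wait = (1 - P_TRIVIAL_BY_2050) * program_cost(2050)   # zero cost if deflection becomes trivial
print(f"act now: {act_now:.1f}, wait: {wait:.1f}  (in units of 2025 GDP)")
```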

Isn't Nick Bostrom the guy who argues that the world we live in might be just a simulation? Always found that stuff wacky as hell.

"When considering the future of humanity, this makes him focus on war, conquest, plagues, and the environment, rather than future technology"

Tyler seems quite unconcerned about climate change and he certainly doesn't talk about it as the source of profound changes to the very-near future of humanity.

He does not talk much about alien invasion either.

"The environment"?

I prefer to call it the Innovation Ecosystem

Oops - sorry, didn't mean to be replying to Rahul. This was meant to be a general comment on the post.

What's the worst-case projection in the IPCC report? I don't think it's "end of all life on Earth."

I also found that stuff about the earth being round and people in Australia standing upside down wacky as hell. I mean obviously the Australians would just fall off the earth.

I know it's taboo, but many problems could be solved by allowing much more immigration so that smart, talented people from all over the world - India, China, Pakistan, etc. - can cluster together to come up with the solutions that current Americans don't want to be bothered with.

This guy is a parody, right?

He spams the comments for a while, trolling for attention. Most ignore it now. He'll go away shortly.

But you know what WON'T go away? America's huge deficit of skilled workers which is only getting bigger as long as limitations on H1B visas persist.

You had me at "But you know what WON’T go away? America’s huge deficit"

On more than one occasion, thoughtful commentators have suggested a serious compromise: if it's really about promoting innovation and not just undermining the collective rights of American citizenship, then pro-immigration advocates should accept a deal where US rules are switched to heavily constrain low-wage, low-education workers from entering (for example, by restricting the chain immigration that comes from giving out green cards to adult parents, adult siblings, and various and sundry random relations of the first-mover low-skilled immigrants) while opening up many more places to H1B holders. This policy of favoring the educated, the demonstrably productive, and those most likely to produce well-integrated, above-average offspring would be an easier sell to the American people. But time and again, advocates have said, No Way: we want MORE H1B visas but still want the same number of low-education, low-skill workers with dysfunctional families to get (de facto) priority. Hence US policy actively discriminates **against** the talented. Just ask anyone who has compared US and Canadian policy.

So Mr. Cowen thinks we are headed towards a new Dark Age by the end of the century; worst case, we are heading towards extinction.

I guess the solution is more immigration! Let us enjoy the cheap chalupas while they last.

Tyler isn't being honest with himself. Of course he believes, deep down somewhere, that the human race can survive indefinitely; otherwise why care about economics, about policy, about philosophy, about getting things right? There's no real reason to improve the world, aside from personal and fleeting satisfaction, if the end is the same no matter what, especially if you don't believe in an afterlife of some kind (perhaps he does? I don't know). I'm not a pessimistic person, but there is simply no reason any of us should be arguing about anything if humankind will be extinct in less than a million years (which is basically next week on the cosmic calendar). What's more, even if we manage to hang on, it will be at miserable subsistence levels — which, again, raises the question: if that is our destiny, who gives a shit whether r exceeds g? Why write about it, why argue about it? In a world destined for extinction — or, at best, some kind of indefinite animal-like existence where we are constantly fighting over food and water — why should any of us care about anything beyond our own immediate pleasure?

The reason, of course, is that he doesn't really believe it. He must believe, like most of us probably do, that if we can get things right, or at least right enough, that we can stay ahead of extinction and keep open the possibility that our species survives indefinitely. If not on this earth, then another one, and another one after that. In the absence of a god, there is simply no purpose to life other than survival. And there is no purpose to survival unless it lasts forever.

Your own death is final, you know.

File under: More Straussian Truths of the Great Books.

Some people are motivated by having an effect (hopefully benevolent) on future generations.

For these sorts of people, the knowledge that there will only be two or three more generations following them after they die would be depressing and demotivating. If you don't care about future generations, not so much, and probably the prospect of the near-term end of humanity doesn't make much of a difference on your overall outlook.

"if that is our destiny, who gives a shit whether r exceeds g?"

#WinsTheThread

Who's up for some chess?

"being good at politics is negatively correlated with certain type of philosophical thoughtfulness. This is less true outside of the US, e.g. in Canada."

Institutional history and selection effects explain why so many top Canadian politicians are philosopher/intellectual types, but the correlation is still negative once they're on the job. In fact, Michael Ignatieff and Stéphane Dion are probably two of the best examples in recent history of the inverse relationship between philosophical and political aptitudes. Stephen Harper was an intellectual as well, but he had a higher political aptitude because he is an economist who's read game theory and public choice, and he had prior experience building a party.

This short audio interview with Ignatieff discussing the divide between political philosophy and actual politics is incredibly deep, given that he learned "politics isn't about policy" the hard way: http://philosophybites.com/2014/04/michael-ignatieff-on-political-theory-and-political-practice.html

In addition, Tyler gives the chance of whole brain ems at 1 in 1000 but has in the past given the existence of god 1 in 20? To paraphrase Bryan Caplan: WTF??

Does the existence of God imply the existence of whole brain ems in some way?

I found Tyler to hold quite a peculiar combination of views here.

The Fermi Paradox informs his view about the future, but he doesn't see philosophy as useful (anthropics is at this point still classified as philosophy).

He thinks we will likely go extinct, but doesn't see it as a high priority to work on preventing this.

History has been hugely and decisively influenced by technological revolutions and inventions such as nuclear weapons, but he prefers to "focus on war, conquest, plagues, and the environment, rather than future technology."

Tyler thinks we may go extinct due to the increasing harm that can be done by small groups of people, presumably as a result of technological change. But he is in favour of rapid technological progress and considers innovators to be heroes, rather than preferring a cautious approach to manage such risks. And he doesn't want to focus his attention on "future technology".

Tyler thinks AI will put a lot of people out of work, but doesn't think machines becoming smarter than humans will fundamentally alter civilization - either towards extinction or towards redundancy.

I am not sure what 'rational choice ethics' is, and Google didn't help. I am also unsure how anything in Homer or Plato could help us resolve the Repugnant Conclusion.

It is hard to hold consistent views about the future, because many of the ongoing processes which we observe around us today lead to very strange places. Still, good comment.

The Fermi paradox is becoming stronger and stronger as the number of detected extra-solar planets rapidly increases. If planets are common, then it beggars belief that life has not already arisen many, many times in other places in our galaxy, given the sheer number of stars and the billions of years that the galaxy has been in its current condition. This to me is science, not philosophy: the detection of planets is what is doing the work in this model. It was entirely possible that our solar system was pretty unique, but that is not what we see. The only filter left is how often self-replicating molecules arise, but even if you set that implausibly low, you still end up with the same answer: much life must be out there. So I am with Tyler; the Fermi paradox must mean that somehow life does not do what we expect and spread beyond its own planet.
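To put rough numbers on the "sheer number of stars" intuition, here is a minimal sketch; every input (the star count, the fraction of stars with habitable planets, the deliberately low chance that life arises) is an illustrative assumption, not a figure from the comment above.

```python
# Illustrative expected count of life-bearing planets in the Milky Way.
# All inputs are assumptions chosen for the sake of the argument, not measurements.
stars_in_galaxy = 2e11             # order-of-magnitude star count for the Milky Way
frac_with_habitable_planet = 0.1   # assumed fraction of stars hosting a habitable planet
p_life_arises = 1e-6               # an "implausibly low" chance that life gets started

expected_life_bearing = stars_in_galaxy * frac_with_habitable_planet * p_life_arises
print(f"expected life-bearing planets: {expected_life_bearing:,.0f}")
# Even with a one-in-a-million chance of abiogenesis, the expectation is ~20,000 planets,
# which is the "much life must be out there" point being made above.
```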

My model is simple: you cannot travel the stars without technology. And any animal that can develop technology almost always develops computers. And computers usually lead to AI. (Note that our current civilization is much closer to developing AI than star ships.) And AIs, especially uploads of existing intelligent entities, are by definition dangerous. So any technically capable species either becomes very, very paranoid about preventing technology spread and remaining low profile, or it gets destroyed by an AI, either of its own generation or that of a nearby civilization, which, by definition, is more paranoid and low profile.

So in space, evolution is continuing: the universe is filled with dark, ultra-intelligent, ultra-paranoid AIs locked in a death struggle, with a few very low-profile, very paranoid non-AI survivors. So no wonder we can't detect them with our puny instruments. We are too recently developed to get the attention of the AIs, but they will be coming for us, unless we can get to the AI level and into the dark mode before they get here. But the problem is then the danger from ourselves... it's bleak either way.

I don't see the Fermi paradox as a huge problem. A star with a Dyson sphere around it could support a huge amount of AI diversity, to the point where the civilization didn't feel any urge to get bigger. And if there proves to be no energy-efficient way to travel between the stars, the cost-benefit ratio of interstellar conquest looks pretty unfavourable.

