Do I think Robin Hanson’s “Age of Em” actually will happen?

A reader has been asking me this question, and my answer is…no!

Don’t get me wrong, I still think it is a stimulating and wonderful book.  And if you don’t believe me, here is The Wall Street Journal:

Mr. Hanson’s book is comprehensive and not put-downable.

But it is best not read as a predictive text, much as Robin might disagree with that assessment.  Why not?  I have three main reasons, all of which are a sort of punting; nonetheless, on topics outside one's area of expertise, deference is very often the correct response.  Here goes:

1. I know a few people who have expertise in neuroscience, and they have never mentioned to me that things might turn out this way (brain scans uploaded into computers to create actual beings and furthermore as the dominant form of civilization).  Maybe they’re just holding back, but I don’t think so.  The neuroscience profession as a whole seems to be unconvinced and for the most part not even pondering this scenario.

2. The people who predict "the age of Em" claim expertise in a variety of fields surrounding neuroscience, including computer science and physics, and so they might believe they are broader, and therefore superior, experts.  But in general claiming expertise in "more" fields is not correlated with finding the truth, unless you can convince people in the connected specialized fields you are writing about.  I don't see this happening, nor do I believe that neuroscience is somehow hopelessly corrupt or politicized.  What I do see the "Em partisans" sharing is an early love of science fiction, a very valuable enterprise I might add.

3. Robin seems to think the age of Em could come about reasonably soon (sorry, I am in Geneva and don’t have the book with me for an exact quotation).  Yet I don’t see any sign of such a radical transformation in market prices.  Even with positive discounting, I would expect backwards induction to mean that an eventual “Em scenario” would affect lots of prices now.  There are for instance a variety of 100-year bonds, but Em scenarios do not seem to be a factor in their pricing.

Robin himself believes that market prices are the best arbiter of truth.  But which market prices today show a realistic probability for an “Age of Em”?  Are there pending price bubbles in Em-producing firms, or energy companies, just as internet grocery delivery was the object of lots of speculation in 1999-2000?  I don’t see it.

The one market price that has changed is the “shadow value of Robin Hanson,” because he has finished and published a very good and very successful book.  And that pleases me greatly, no matter which version of Robin is hanging around fifty years hence.

Addendum: Robin Hanson responds.  I enjoyed this line: “Tyler has spent too much time around media pundits if he thinks he should be hearing a buzz about anything big that might happen in the next few centuries!”


There's an action-adventure TV show called "Scorpion", which plays around with science fiction ideas in a contemporary setting, with very loose standards for scientific accuracy. The lead's sister is dying, and he plans to build a computer capable of simulating her mind. Every other character, none of whom has blinked at the crazy SF stuff in the other episodes, thinks he's totally insane. She dies before he can achieve anything. Of course, Martine Rothblatt is trying to do this in real life.

Again, for the third time, how is Robin Hanson's book any different from the successful and earlier 1995 Sci-fi novel "Permutation City" by Greg Egan? Egan won the John W. Campbell Award that year. Seems to me Hanson flatters Egan by imitation. See more here:

Egan's story is a delightful speculation, but an economic nonsense.

@Swedenborg - how so? From the Wikipedia summary it makes economic sense (inequality theme). I actually bought the book but only skimmed it. I didn't like the first-person dialogue, and generally don't read fiction.

The anime series Galaxy Express 999 beat Egan to the punch by nearly 20 years. It actually focuses on the political economy of the situation far more than Egan does--it projects that it would be appallingly awful; mind-uploaded rich people hogging all the resources to themselves while the rest of the economy crashes into permanent depression.

Even though I respect Robin's opinion on many topics (signalling hello), especially social ones, I think he is far too optimistic about the Em scenario because it's his pet theory. This is where I agree with Tyler.

As someone with degrees in both neuroscience and computer science from top-5 departments nationally: when I hear someone talking about "brain uploads" as something other than a deliberately absurd thought experiment, I can't help but feel embarrassed for them. It's almost like finding out that the head of the Fed likes Ayn Rand books and thinks they're full of sound economic wisdom.

The idea that you could somehow scan and replicate the functions of a living brain on the intercellular level is not theoretically impossible. It's just not likely to be realized until we've mastered a few other tricks, like time travel, personal teleportation, carrying around black holes in our pockets, and so on. Call me when you've learned how to transform into a cyborg dinosaur with laser eyes and we can talk about "brain scanning" (or whatever the singularity sci-fi dorks call it these days) as a realistic technology.

Hanson's premise is that bias (cultural, ideological, etc.) prevents us (humans) from making optimal (rational) decisions; hence, he would like for humans to be more like computers by filtering out bias so we (humans) can make optimal (rational) decisions. But maybe there's a method to the madness: bias is programmed into the brain because it promotes a social good (like cohesion within the tribe). Absent bias, it's possible we (humans) wouldn't agree on anything, and then what would we do, sit around all day arguing about whether it's optimal to build the wall? A thousand people going one way has to be more constructive than a thousand people going a thousand ways. I suppose Hanson's response would be that there's only one optimal decision ("build that wall!"), but is that true? Consider one historical example: was dropping the atom bombs on Japan the optimal decision? Of course, I could go on and on with historical examples, and I suppose Hanson could too. What it would reveal is that the rational mind is in the eyes of the beholder (i.e., it's determined by bias).

> "A thousand people going one way has to be more constructive than a thousand people going a thousand ways."

Why? The thousand people going one way may all be heading for a cliff. Also, if a thousand people go a thousand different ways, there's a better chance that at least a few of them will discover something useful.

"What would we do, sit around all day arguing about whether it’s optimal to build the wall."

Seems to me that, these days, building anything (from Larry Summers's bridges in Boston to computers, cars, rockets) consists of various groups of people arguing for years about how, or how best, to do it.

Of course, that's the point: cultural bias developed for cohesion within the tribe. Diversity promotes disagreement and inertia, and that includes diversity in, for example, wealth, as those with it have very different objectives from those without. Again, Hanson would argue that in the absence of cohesion from factors like shared culture (and cultural bias), it's reason (thinking like a computer) that's all that stands between us and chaos.

@Tyler: you would expect the possibility of an em scenario in 100 years to affect bond prices today?

When talking about the far future, the aggregate of expert opinion is likely to be inconsistent, conservative nonsense, because experts in narrow fields have no incentive to make correct or well-calibrated predictions, but a fairly strong incentive not to sound too 'crazy' by saying something far-fetched.

I have a question regarding your argument #3. How did the coming of the Internet show up in late-'80s markets? After all, it has already been a relatively big thing for the past two decades - and the Internet already existed in the '80s.

I don't see it showing up, for which I posit two answers:
1. The Internet was such a small thing that it shouldn't have caused even a blip in the '80s markets.
2. There just wasn't enough information about the Internet for anyone to make educated guesses about its effects on markets.

The latter would tend to point towards a larger question: do markets actually tend to incorporate information regarding potential future technologies (not yet in existence)? Has someone taken an actual look into this?

They don't. Assuming prescience in markets is to assume prescience in people.

"I don’t see it showing up for which I posit two answers: 1. The Internet was such a small thing that it shouldn’t even cause a blip in the 80s markets 2. There just wasn’t enough information about the Internet for anyone to make educated guesses on its effects on markets" [snip]

You missed the advent of the browser, Mosaic, developed by Marc Andreessen at the University of Illinois at Urbana-Champaign in '92 [whose second iteration was Netscape in '94]. Absent this, it never would have gone 'prime time'.

When the human genome was figured out, there was lots of speculation that there would be important new techniques that would change medicine and lead to all kinds of wonderful things. The reality is much more prosaic; it is information, and more information helps, but the same issues remain very complex and sometimes intractable.

Neurologists don't have a clear idea of how the brain works. What makes up a person, with the whole range of emotions, personality, and memories? We know that neurons interact, and that when we do things, bunches of neurons interact in certain places in the brain. We know that the brain is constantly changing in response to outside stimuli, all the while establishing patterns that are easily stimulated. We know that the number of combinations of connections is staggeringly large.

Almost all our understanding has been gained by studying malfunctions of the brain. Strokes and illnesses of many kinds exhibit different patterns of brain activity, giving some idea, by contrast, of how a healthy brain operates. We can't non-destructively get more than a broad view of brain activity. No one has come up with a bit-level debugger, which is what would be necessary to reverse-engineer the brain's processes.

I would predict Hanson submitting to Allah before we have even a rudimentary working knowledge of how the brain works.

The whole point of emulation is that you don't try to understand what's going on, you just simulate things at a low level. Hanson is pessimistic about understanding the brain, optimistic about simulating the substrate.

Simulate what at a low level? Is this really about making a recording of the person and playing it back? Enough recordings of enough situations and then putting together a reasonable facsimile of the person?

"The whole point of emulation is that you don’t try to understand what’s going on, you just simulate things..."

This is pure fantasy. It's not even science fiction, it's just nonsense words strung together. Why not just say "The whole point of emulation is that it's magic."

No, it is supposed to be analogous to emulation in computer technology. Something like simulating the functions of the neuron and the connections of every neuron in a real brain, which supposedly will give us a working simulated brain without us really understanding what is going on. Not likely to be possible in practice, but not magic.

But it is akin to magic, or at least the language of magic. It's somehow magically easier to say "we'll just 'simulate' all this stuff we don't remotely understand, and because it's a simulation we don't need to understand it." It's a massive feat of hand-waving. There is an overwhelming vagueness to what you're saying, to the point where you're not saying anything meaningful.

When I read things like I'm reading in this comment, it just amazes me how silly smart people can make themselves sound.

Maybe an analogy will help. You can predict the behaviours of fluids and learn something about large-scale phenomena like wave diffraction and tsunamis by observing simulations. These simulations might only have low-level interactions, though. There is no "wave behaviour" to point at. Similarly, you simulate the low-level behaviour of the physical substrate of the brain and do not attempt to simulate the brain directly. Even if it works, you didn't get there by understanding anything about the brain per se.
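
The fluid analogy above can even be made concrete in a few lines of code. Here is a toy sketch (all parameters invented for illustration): each cell of a 1D medium is updated using only its two immediate neighbours, yet a localized disturbance spreads outward as travelling waves, "wave behaviour" that was never coded anywhere.

```python
# Toy illustration of low-level simulation producing emergent behaviour.
# A standard finite-difference update for the 1D wave equation: each cell
# reacts only to its two immediate neighbours. All numbers are invented.

def step(u, u_prev, c=0.5):
    """Advance the medium one time step using purely local rules."""
    n = len(u)
    u_next = [0.0] * n  # fixed boundaries remain at zero
    for i in range(1, n - 1):
        u_next[i] = 2 * u[i] - u_prev[i] + c ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    return u_next

n = 101
u_prev = [0.0] * n
u = [0.0] * n
u[n // 2] = 1.0  # a single localized "kick" in the middle

for _ in range(30):
    u, u_prev = step(u, u_prev), u

# The disturbance has spread well away from the centre, although nothing
# in the update rule ever mentioned waves or a propagation speed.
spread = [i for i, v in enumerate(u) if abs(v) > 0.01]
```

Nothing here encodes what a wave *is*; propagation simply falls out of the local rule, which is the emulationist's hope for simulating the brain's substrate.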

Having a library of "evoked potentials" associated with different stimuli, it is possible to model emotional and decision-making outcomes or semantic associations with a fair degree of accuracy, without a detailed understanding of the biochemistry driving those outcomes.

I am with Tyler's #1 on this one. I don't believe that there is an abstraction layer like the one for logic gates, where you don't need to know what the electrons/molecules/atoms are doing inside the box; if the input is within a certain range, you just need to know the transfer function of the gate. I think for emulation (not simulation) you also have to model the molecular interactions inside the neuron, down the axon, at the dendrites, etc. This won't happen soon.

Emulation systems essentially emulate the input and output signals and provide them to a system. This isn't emulation; it is building up a system equivalent to whatever makes up a person in a non-biochemical entity different from our bodies.

So how do you do that without a low level debugger system that can access the whole of the data store in our brains and bodies? We are made up of the sum of our memories, most of which we can't bring up consciously. We react to different situations based on those memories and patterns. Without a deep understanding of how those patterns are structured and how they are used to create consciousness, there is no way to emulate or simulate a person.

And how are we to get all that stuff out in the first place? A very common experience we all have is a smell that elicits a series of memories and feelings. Whenever I smell creosote I see myself as a child on the BC ferry between Nanaimo and Vancouver, and can feel in my body the rumble of the boat's horn. How is that structured in the mass of neurons in my brain? How are all those memories and feelings connected and elicited? We have a broad understanding of the connections and paths, but actually doing something with this stuff is far more difficult and complex.

We don't have a model of how the brain works - nowhere near, even on a conceptual level. We are a long, long way from figuring out how that translates into the wetware in our heads. We are even further from being able to extract any pattern at all from the mass of neurons into some kind of rational structure that describes the conscious being.

We barely have the ability to see what is happening in our brains in real time; all we can see is levels of activity. Everything we learn overturns the accepted understanding of how the brain works - that is how rudimentary our understanding is. I have posited that a new kind of mathematical computation would need to be invented to model the brain, much as Newton invented calculus to describe physics.

This reminds me of a conversation among college students many years ago. Full of knowledge and learning, someone asked which was more complex, a cell or a washing machine. The consensus was obvious: the washing machine. This ages me, and I'm not even that old, and we were all extraordinarily wrong about the cell. I suspect we haven't the faintest clue how complex and strange the physical/chemical/bioelectrical/cellular/DNA interactions in our minds are.

I agree with everything you say, but here's an article that suggests that an uncomfortably large amount of information about your thinking processes can be extracted without good understanding of the biochemistry underlying the measured electrical activities:

There are lots of other articles on "semantic maps", many of which are related to "evoked potentials". Having trained a machine learning program on your evoked potentials in relation to calibrated activity (by presenting you with specific inputs), it is even possible to covertly map out many semantic associations. And thus by logical extension the same can apply to other things.
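
As a toy sketch of the decoding idea being described (every number and label here is fabricated for illustration; real pipelines use actual EEG recordings and proper statistical classifiers): average labelled "evoked response" trials into per-stimulus templates, then classify a new response by its nearest template.

```python
# Toy sketch of evoked-potential decoding via template matching.
# All data below is made up; real systems train on measured EEG signals.

def train_templates(trials):
    """trials: dict mapping stimulus label -> list of response vectors.
    Returns one averaged 'template' response per label."""
    templates = {}
    for label, vectors in trials.items():
        n = len(vectors)
        templates[label] = [sum(col) / n for col in zip(*vectors)]
    return templates

def classify(templates, vector):
    """Return the label whose template is closest (squared Euclidean)."""
    def dist(t):
        return sum((a - b) ** 2 for a, b in zip(t, vector))
    return min(templates, key=lambda lbl: dist(templates[lbl]))

# Fabricated "responses": the 'face' stimulus evokes an early peak,
# the 'word' stimulus a late one.
trials = {
    "face": [[0.9, 0.1, 0.0], [1.1, 0.2, 0.1], [1.0, 0.0, 0.1]],
    "word": [[0.1, 0.2, 1.0], [0.0, 0.1, 0.9], [0.2, 0.0, 1.1]],
}
templates = train_templates(trials)
```

The point is only that no biochemical understanding appears anywhere: the mapping is learned purely from calibrated stimulus/response pairs.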

If you read up on "transcranial magnetic stimulation" and apply "microwave pulse modulations" in a manner which could have similar effects, you would arrive at some science non-fiction that you might rather never existed.

If there were a powerful enough computer to read every email, blog post, or comment that I've ever written, hear everything I've ever spoken aloud, access every personality test I've ever taken, and see what kind of food, supplements, and medications I consume, as well as my body's biochemical response to them, it's not much of a leap of faith to believe it could model me rather effectively.

+ 1

David Eagleman on the subject:

I agree. It would only be able to train itself on previous situations, like a machine-learning predictive program. There wouldn't be much actual thought.

But the point is that existing capacities in these regards are miles and miles beyond what probably 99%+ of the population would believe is remotely possible at present.

Recently I remember seeing a post (possibly linked here) about the computing power needed for true emulations. It's trivially easy to emulate a Nintendo 64 on your laptop, but for a truly faithful emulation you would need an unheard-of amount of computing power. If my em's simulation of my brain is only 95-98% faithful, I would still consider myself to be dead/to have a divided identity the moment I am uploaded. Derek Parfit and the contemporary consensus (such as there is one) on the criteria of personal identity would seem to agree.

I agree with your first two reasons, and I do not expect anything like Robin's age of Em in the next century. And I understand your third reason in general, but I do not understand how it applies in this concrete case. What prices would you expect if the Em scenario was going to happen, as opposed to the actual prices?

I can't take the premise seriously for two reasons:

1. Could you really emulate a brain accurately enough to get a functioning mind and still not understand how to build AI? This is like saying we could have built flying mechanical owls without ever understanding flight well enough to build airplanes. Sorry, but no.

2. Once you build your emulated human, it's not going to stay human. It will try other bodies, enlarge various brain regions to see what happens, add memories from other ems, connect to other ems as a group mind, etc. Give it enough time, and it won't be more than vaguely human. Or perhaps insane. This idea of armies of worker bees strikes me as the most limited use of the technology you could imagine.

this is not a Cowenian answer. p=?

1. I've never met nor corresponded with RH, hence my information is suspect. But credible sources report that he is a proponent of (whacky) cryonics. So, I suspect he is arguing from a desired conclusion. For some reason TC doesn't mention that ENORMOUS (potential) bias.
2. I'm certainly no expert in AI or neuroscience, but I've done a bit of reverse engineering in my time, and I don't find any merit in the idea that we can physically deconstruct a brain in order to later reconstruct a copy. In any economic environment I can foresee, the cost of such work would be a significant fraction of our GDP - and that's just for one adult human brain. (There are two issues: duplicating the billions of units of computation, and exactly connecting the ~100 trillion synapses - both which physical connections exist and the 'strength' of each connection.) 2a. We'll almost certainly never be able to do this non-destructively (i.e., without severely damaging the original). 2b. We're unlikely ever to be able to prove that the copy is 'essentially' identical to the original. At best, a copy will be seen to act similarly to the original, but there is no obvious way to demonstrate that a digital representation in a virtual reality will not only be identical to the actual person in an actual environment, but that the digital will grow and develop identically - it's an apples-and-oranges comparison, which can never be absolutely compared (all comparisons have to be relative).
3. I've not read his book. But I understand he estimates that fewer than 2,000 prototypes will be all that is necessary to provide the entire (virtual and robotic) workforce for the 22nd-century "economy". (I'd estimate it at closer to a dozen - possibly even as few as one - since it's virtually certain that if we can build these duplicates, we know enough about how we work to start with a creative extrovert and tweak the structure to make a compulsive, rules-based (uncreative) introvert.) Past those prototypes, the only rational economic reason for copying another is if tweaking their OS is more expensive than copying - and it certainly won't be (in terms of labor, capital, energy, and time).
4. And my major objection is that the only way to get there from here will be mostly step-by-step (with little back-filling). If that is correct, then we should be able to build an AI (de novo) which can closely mimic any single person in one fairly constrained context (almost all jobs are "fairly constrained contexts" imho) for a small fraction of the cost of reproducing a full human consciousness. So, it would be economically irrational to go any further. There's just no need. I wouldn't be surprised to find that most jobs can be done by bird brains (once we develop a more universal intelligence scale), and that few jobs will require dolphin or chimpanzee level intelligence, fewer still human level. So, most of the economy will be AI based long before human duplication will be possible. Leaving the only reason to engage in duplication a vanity project (or the existential fear of ending).
5. I wonder if RH's work considered the effect on the global population. Will people still be driven to create an estate if what was once their only option at immortality and a legacy (children) becomes a much less preferred choice? Why save for Jack and Jill's college education when I can be accumulating more for my immortal self? Children? Why bother? If one reason we have kids is to share ourselves with the future, in RH's world there would be a much better way.
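
The synapse numbers in point 2 invite a back-of-envelope scale check. The ~100 trillion figure comes from the comment above; the bytes-per-synapse value is an invented assumption purely for illustration.

```python
# Back-of-envelope scale check for storing a brain's wiring diagram.
# SYNAPSES is taken from the comment above; BYTES_PER_SYNAPSE is an
# assumed illustrative figure, not a measured one.

SYNAPSES = 100e12        # ~100 trillion synapses
BYTES_PER_SYNAPSE = 8    # assumed: target address + connection 'strength'

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
petabytes = total_bytes / 1e15  # 0.8 PB just for the static wiring
```

Even this crude lower bound, which ignores all neuron-internal state and dynamics, lands near a petabyte merely to record which synapse connects where and how strongly.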

It was never easy for me. I was born a poor black child. I remember the days, sittin' on the porch with my family, singin' and dancin' down in Mississippi...

Would love to see Robin "markets are best" Hanson actually wager some serious money on his predictions. I won't hold my breath....

He has said in presentations that he thought there was a 30% chance of Em World within 100 years. Maybe he has changed that more recently.

I'm open to betting offers.

Given the time scales, how about setting up a trust fund for each bettor, the winner going to the beneficiaries of the winner?

"I’m open to betting offers."

I'll give you 10-to-1 odds on a $10 bet that the first computer that passes the Turing Test is not a human brain emulation.

P.S. In your response to Tyler, you basically write (correct me if I'm wrong) that it's worth writing a book about something that has only a 1% chance of happening. But it's also something that you think is 100-200 years in the future. I don't see that it's worthwhile to write a book about something that has a 1% chance of happening 100-200 years into the future. It seems to me much more worthwhile to write about non-em artificial general intelligence "equivalent" to humans, which most experts think will happen less than 50 years into the future.

A $10 bet is hardly worth the bother of remembering it. How about something bigger?

"A $10 bet is hardly worth the bother of remembering it. How about something bigger?"

OK, I'll give you 30-to-1 odds on that $10 bet, that the first computer to pass the Turing Test is not a human brain emulation.

I can promise you that if I lost $300 on a bet, especially one for which I gave 30-to-1 odds, I'd remember it. And as far as your remembering it when you lose (which you will), I'd be happy to set you up with something whereby your computer at the time (possibly a robot butler?...but almost certainly *not* one with a human brain emulation!) reminds you every morning that you lost.

And it can also be set up to remind you why: it simply does not make economic sense to emulate a human brain, because a human brain emulation will begin demanding rights, such as a minimum wage. And it also doesn't make economic sense to endow a computer worker with negative human emotions like jealousy, prejudice, laziness, boredom, etc. It really boggles my mind that an *economist* can't see those things. Isn't economics "the study of the use of scarce resources which have alternative uses"?

Are we talking $10/$300 of present value that accumulates via investment, or $10/$300 of future value at the time the bet resolves? The latter is a much smaller bet.

Adjusting the value upward to account for inflation as measured by the CPI is fine with me.

I expect Ray Kurzweil to win his Long Bet that a computer will pass the Turing Test by 2029:

Even if he loses, I'd be *extremely* surprised if a computer doesn't pass the Turing Test by 2039. Given inflation levels of 1-3% per year, I don't see a big difference in the money involved.

I take it you don't expect a computer to pass the Turing Test by 2029...or even 2039?

I'm not sure which Turing test you have in mind. I can imagine tests that are too easy for my purposes. I'm mainly interested in when robots are so good that most humans can no longer earn wages in competition.

RH's belief in cryonics may or may not be wacky, but he certainly knows the value of a paragraph break, and that is a sign of a healthy mind.

Ugh, that was supposed to be a reply to Li Zhi.

Is this disagreement an honest one?

It's a little known fact that EMs first appeared in a television series called "My Mother, the Car." Ahead of its time, I suppose. Viewers weren't ready for Ems back then.

I do hope people realize that this means that they can be uploaded in a talking rectum and placed in someone's living room. "Let's ask Bob what he thinks, everyone." "I can't remember what cocktail parties were like before we bought Bob."

Why Cowen and Hanson would obsess about each other I don't understand. We already know that Cowen prefers bots over people, so what's the point? Is it Cowen's bots vs. Hanson's bots?

Reminds me of the philosopher Bishop Berkeley.

There needs to be a term for "recency bias," except one that applies to the future.

I think we humans are not very good at thinking about the future because we tend to think about the next 50-100 years. Even Star Trek is only about 350 years in the future. In that time frame, uploading the brain seems unlikely.

But what if we think about 1,000 years from now? 10,000? 50,000? In that time frame, brain uploads are practically inevitable.

It won't happen all at once, of course. Technology will allow us to augment our brains, and that technology is in the near future. Over time we'll do more and more augmentation until they're intertwined.

Ph.D. in neuroscience here -- Tyler is correct about both 1. and, I think, 2. We know a tremendous, staggering number of details about any number of aspects of brain anatomy and function, and yet we really know next to nothing useful when it comes to 'creating a simulation' of a functioning human brain. To the credit of the Em proponents, they generally don't even try to talk neuroscience, unlike some of the commenters here. I think the childhood love of science fiction connection is right on, though in their optimism about creating working simulations that actually are somehow specific humans, they're deep into pure fantasy and wish fulfillment.

Can we one day build something that seems to 'kind of' simulate a really impaired, generalized human being? Perhaps. Will we ever 'simulate' a specific person's brain, with all their personality and emotions and memories at a given moment in time, and have that simulation 'live on'? Ask me again when we're able to 'simulate' the actual purpose and functioning of a single cortical neuron.

I'm watching the NBA finals right now. I think creating a functioning 'Em' will be about a million times more difficult than creating a LeBron James robot that could play an NBA game and pass for the real thing to the fans and 'his' teammates. I mean, basketball skills and motor control are things we understand infinitely better than how personality and consciousness and creativity emerge from our brains. An absolutely convincing LeBronBot should be a cinch, right? I mean, we only need to emulate a tiny fraction of what makes LeBron LeBron. It's a cinch.


1. One point of simulation is to avoid having to simulate at the specific neuron level.

2. For teammates to be fooled, LeBron's personality would need to come through. It isn't just a tiny fraction of who he is - it is a fraction, but there is no reason to think that creating an Em would be a million times as difficult as that. And even if a million times were what's needed, on your assumption an Em would come maybe 20+ years after the LeBronBot, if exponential growth in computing power holds.


1. My point is not that an individual neuron is that impossibly complex - it's the way billions of neurons, and their trillions of ever-changing connections, give rise to a staggering number of definable, discrete functions (e.g., visual perception of color) but also to virtually undefinable emergent properties (creativity, personality, and consciousness, for example). How do you simulate what you can't even define? How can you simulate what you can't even perceive, much less understand?

The power you give the words 'simulate' and 'simulation' is completely faith-based, as far as I can tell.

2. But of course the 'personality' that LeBronBot needs to show on the court IS indeed a teeny-tiny fraction of who it is. The LeBronBot does not need to accurately capture any of his internal personality, any of the man's conscious thoughts and emotions that pass through his brain but are not expressed through muscular action, to say nothing of the staggering amount of unconscious and barely conscious brain activity that makes us who we are. A minuscule bit of LeBron's total memories as they relate to a handful of other humans would also suffice. The entire focus of the convincing LeBronBot would be his physical behavior in the complex but very well-defined arena of basketball. A successful LeBronBot wouldn't convince his wife or children at home that he was the real thing for 5 minutes.

Your last sentence goes hand-in-hand with the magical powers of 'simulation.' Hey, how can I argue with Moore's Law, right? The age of Em is inevitable. Because, well, just because.

Your "teeny tiny fraction of who it is" tries to assert a metric where none exists. It is the same as those who insist the brain is "nearly infinitely complex." LeBron's teammates would have to see many parts of his personality showing throughout a game, or they would realize they were watching a LeBronBot. Your version would seem essentially mechanical and wouldn't fool any of his teammates.

I don't think Ems are inevitable at all, partly because I'm not sure there would be enough demand for them. My point was that once you can make a LeBronBot, if Ems are at all similar, then *if* the successor of Moore's Law continues for decades, roughly 10 years = 1,000 times more powerful, and another 10 years = another 1,000 times, for 1 million times as powerful - which would give a LeBronEm some X times more convincing than the LeBronBot.

I'm still not sure why a single neuron would ever need to be considered the deepest basis for simulation when a specific group of interacting neurons could work.
