Should we care if the human race goes extinct?

Stephen Hawking fears that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk and Bill Gates offer similar warnings. Many researchers in artificial intelligence are less concerned, primarily because they think the technology is not advancing as quickly as the doom scenarios imagine, as Ramez Naam has discussed. I have a different objection.

Why should we be worried about the end of the human race? Oh sure, there are some Terminator-like scenarios in which many future-people die in horrible ways and I’d feel good if we avoided those scenarios. The more likely scenario, however, is a glide path to extinction in which most people adopt a variety of bionic and germ-line modifications that over time evolve them into post-human cyborgs. A few holdouts to the old ways would remain but birth rates would be low and the non-adapted would be regarded as quaint, as we regard the Amish today. Eventually the last humans would go extinct and 46andMe customers would kid each other over how much of their DNA was of the primitive kind while holo-commercials advertised products “so easy a Homo sapiens could do it.” I see nothing objectionable in this scenario.

Aside from being more plausible, the glide path also makes the Terminator scenario easier to deal with. In the Terminator scenario, humans must be continually on guard. In the glide path scenario we only have to avoid the Terminator until we become it, and then the problem resolves itself with little fuss. No human race, but no mass murder either.

More generally, what’s so great about the human race? I agree, there are lots of great things to point to, such as the works of Shakespeare, Mozart, and Grothendieck. We should revere the greatness of the works, however, not the substrate on which the works were created. If what is great about humanity is the great things that we have done, then the future may hold greater things yet. If we work to pass on our best values and aspirations to our technological progeny, then we can be proud of future generations even if they differ from us in some ways. I delight to think of the marvels that future generations may produce. But I see no reason to hope that such marvels will be produced by beings indistinguishable from myself; indeed, that would seem rather disappointing.

Comments

Grothendieck? Lol at the signaling.

Just lol.

AlexT has jumped the shark...again. Or it's a late April Fools joke. And who is John Grothendieck? Like John Galt? Can't search it...Internet slow here in the Philippines due to yet another storm...so time to hit the Enter key before the internet times out...

Alexander Grothendieck was one of the greatest mathematicians of the last century. He did important work in algebraic geometry and related fields.

It also appears that he wrote hundreds of pages on spiritual mysticism, was a pacifist and anti-capitalist.

Smells like a Marxist to me.

"Man will make it his purpose to master his own feelings, to raise his instincts to the heights of consciousness, to make them transparent, to extend the wires of his will into hidden recesses, and thereby to raise himself to a new plane, to create a higher social biologic type, or, if you please, a superman." - Trotsky

Why does the Terminator scenario get way more attention than the equally if not more plausible scenario where AI deifies us, its creator, just like 95% of the human population exalts and holds high various deities?

Perhaps part of the answer is that the people who are creating the AI are part of the 5% and therefore rightfully afraid! ;-)

Most functional individuals sit in an IQ distribution of 90-110 with healthy appetites for creature comforts, sex and the paternal instinct. Most people are not highly educated, low-T eunuchs, so I don't think the human species is going anywhere, at least not under this comic-book scenario.

+10. I would add that instead of worrying about technology, you guys should worry more about spiritual development, which is, even in high-IQ humans, very, very low on average. Evidence: everybody knows that Madoff is the rule, not the exception; most of our "heroes" are horrible human beings; the overwhelming majority of human beings cannot really be called "good persons" or "innocent" (once you dig deep) and never really become "adults".

Another reason to implement the Final Solution of the human problem. If the only good human is a dead human (there are not even innocents on our side -- yeah, I know, there are exceptions; well, there were exceptions in Sodom too, and it was cheaper to let them go and kill the rest), it is our duty to replace ourselves (hopefully with better versions of ourselves).

God will sort it out. Dysfunctional societies which can't reproduce themselves will be replaced.

Dysfunctional societies which can reproduce themselves will replace them, right? Maybe God will sort it out (it reminds me of the phrase attributed to Arnaud Amalric -- http://en.wikipedia.org/wiki/Caedite_eos._Novit_enim_Dominus_qui_sunt_eius#Background), maybe Darwin will sort it out, maybe Moore will sort it out.

Gordon, Thomas, or Roger Moore?

I don't know, I only read the summary. I didn't know we would have a quiz today.
Well, I was thinking "Gordon". Maybe Gordon Moore will lead us to Thomas Moore's Utopia. Better still: Bond Girls for everyone.

I'd bet on the religious groups like the Amish and the Hasidim long before I'd bet on machines or techno geeks.

I was not trying to be funny... Bah, never mind, Thiago.

Neither was I. And not trying to be funny didn't save you from being involuntarily funny.
If humans are the plague you think they are, we need to protect our loved ones from them at all costs! No daughter of mine will marry a human! And if they move in, property prices will drop! Close the borders, tie the fallopian tubes and send them humans back to where they came from!

So you are not human? ;) Pleasure to meet you!

If a human is what Mr. Klaus so vividly described, neither I nor mine are. I am not sure I have ever met one.
I predict "Rivers of Blood" if they are ever allowed in.

Blind is he who does not want to see... ;) More I cannot say...

"Dize-me com quem andas e eu direi se vou contigo"
You really should part ways with those humans., they seem unpleasant, to say the least.

And everyone knows that "everyone knows" is a sign that it is time to start thinking for yourself.

"We the people of the United States, in order to form a more perfect union, establish justice, insure domestic tranquility, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves AND OUR POSTERITY, do ordain and establish this Constitution for the United States of America."

What percentage of current American denizens (or American citizens) is not covered by the expression "our posterity" written by the representatives of a subset of propertied WASPs from Late Colonial America? I am not sure Walter Williams is more covered by it than an AI created by a White man would be (if it is created by, say, a NAM, I guess we will need a Supreme Court ruling on it).

The "AND OUR POSTERITY" does not involve any guarantee of the existence of a pposterity, as in a will that bequeaths the entire estate to "the children of my 3 sons, per stirpes ..." will leave anything to anyone.

Genetically engineered posthumans, cyborgs, and mind uploads would all seem to qualify as "our posterity". Our cultural creations (art, science, ideas, religions, jokes) are what make us most unique relative to the rest of the animal kingdom; that's why it would be tragic if humans simply went extinct without leaving a successor. But if some other intelligent beings are around to continue building on our cultural legacy, what would be so tragic about the extinction of unmodified Homo sapiens? It's bound to happen eventually anyway, through natural evolution if not artificial.

Indeed. Raccoons and tree squirrels have IQs much lower than humans, and they don't seem to be going anywhere anytime soon.

"an IQ distribution of 90 – 110 with healthy appetites for creature comforts"

Who the heck deluded you into thinking that IQ and creature comforts are mutually exclusive, or even really related at all, for that matter?

That is an additional attribute in a list, not a causative assertion. Would it make more sense if it said something like, "People have an IQ distribution of 90-110 and a healthy appetite for creature comforts..."?

Interesting post. I think the Amish and others will hold out though. The year 2100 will feature a Terminator-like scenario with Josiah Connorstadt leading a group of humans to fight robots but they will be riding around in buggies and using pitchforks. :)

The flow of science into philosophy & metaphysics gets very interesting and/or depressing.

99% of the species that have ever existed on this planet no longer exist. Humans will follow that pattern, with microbes likely the longest-surviving species. Meanwhile, Nature kills 100% of all individuals within every existing species. The universe/multiverse does not care and does not even notice the trivia of existence as humans cling to our totally insignificant piece of cosmic dust during the briefest instant on the cosmic calendar.

If the entire planet Earth were vaporized in the next hour -- so what?

Dellon = 23-year-old French dude smoking Gitanes.

Great comment.
Makes me think of "Pale blue dot", and Jeff Mangum's "How strange it is to be anything at all".

"Churn butter wit me if you want to live..."

"There is no barn but what we make."

What's so great about the human race? That I'll have a beer or white wine this afternoon by the lake with no fear of hunger, sickness or being devoured by another animal. I'm thankful to humans that I don't have to worry about seasonal changes or growing my own food.

The "works"? That's pleasure, but you first have to be alive and pain-free to obtain any utility from the "works". The question is: why don't you see any utility in being alive and pain-free? 1st world problems?

He's just not thinking it through. Shakespeare or Three's Company or whatever floats your boat is at best a minor side-effect of humanity. Every time someone dies in old age surrounded by fat grandchildren -- well now, that's a victory. The machine of civilization that lets that happen is something worth revering.

So, even after solving the 3rd world problems (the 1st world has solved them; a post-human world will have solved them), we need to keep sacrificing virgins to the Corn deity (or the Invisible Hand) to keep getting good harvests? Isn't one of the great advantages of not being hungry or in pain (and not fearing to be hungry or in pain) to be able to think about other things?

So humans aren't particularly special, but we have to act as if Shakespeare's work has some cosmic greatness to it?

If we're talking about creating emotional machines, why would things humans value matter at all? They can be built to be in a constant state of orgasm without any knowledge of humanity.

I would argue that you have not experienced Shakespeare until you have read him in the original Klingon.

This might be the future in San Francisco or D.C., but in the slums of Karachi non-enhanced bodies will be popular for centuries to come. Genetic enhancements are likely to remain expensive - even simple IVF isn't cheap today - and the benefits are hard to quantify. If your choice is between one genetically modified child or two unmodified children, our slum-dwellers will choose quantity over quality. Over the long term, the quantity-choosers will out-breed the quality-choosers.

On the other hand, bionic modifications will become widespread. Electronics are becoming ever-cheaper and ever more popular, so it's only a matter of time before the Apple iBrainChip is released.

Viruses. AI designed viruses may not give you a choice, good or bad. You will improve, you will be assimilated.

Agreed. A well-engineered virus is the most likely outcome. My money is on the virus simply wiping out humanity, and anything else the AI considers suboptimal, but maybe I am thinking too small. It is possible that a properly engineered virus could transform us into something more useful to the AI.

We'll see in 15-40 years.

You're still considering 'us' separate from them. It's more likely a convergence.

It is hard to predict the future, no doubt. I have a hard time seeing what value add humans could possibly have in an AI future, but I would be happy to be wrong.

The age of scarcity for manufactured goods is not likely to continue past the full automation of production, at which point the factory robots (or 3D printers or whatever) would be able to self-replicate if we gave them the instructions to do so, and both they and whatever goods they produced would (assuming there's still some kind of market competition) drop to little more than the cost of the raw materials and energy that go into them.

GMU people: please check in on Alex today, keep him off tall bridges, and don't leave him alone with sharp objects.

I wonder if the beings, or whatever we call them, who will replace humans will be libertarians.

On the contrary. Our robot descendants will make far better socialists than we ever managed to be.

Robots don't have wants, just extremely predictable fuel and maintenance needs.

One might say they are evenly rotating.

Alex is absolutely right, while it might be interesting to have humans around let's not forget that the point here is GDP growth. We need to be wary about continuing the massive growth in wealth we have seen in recent years and if humans are an impediment to that then we need to investigate options to eliminate them. GDP has gotta go up!

How can GDP go up if there are no humans with jobs???

How many cruises will robots pay for on the robotic cruise ships with all their robot crew, robot wait staff, and robot entertainers?

Now THIS is how you troll. I give OP a D+ at best, but JAMRC operates at a consistent A level.

LOL. Alex's post is a good example of why sane people don't trust academics to run anything that does more than talk.

Not if you're a Keynesian.
In the long run, we're all dead.

but how come the supply of death far outpaces demand for death? Government should step in!

And what does "Keynesian" have to do with anything here, except for associating the terms.

What is interesting is that the let's-hate-on-the-human-race types have moved from the population control of the 1960s to the open borders of today.

Or it might just be Andy, Alex's evil brother, posting. You never know.

Maybe cyborg enhancements provide a way for humanity to survive the rigors of deep space travel and thereby outlive the earth.

Why stop with humans? What about all the other species on earth?

Why don't we improve lions and tigers and bears while we are at it, and let the old primitive fleshy animals go extinct? Ditto for marine mammals.

You might be curious to read Kim Stanley Robinson's Mars science fiction. Humans on earth are not modified, but humans who colonize the planets are modified to adapt better to alien environments.

Lions and tigers and bears? Oh my!

Stormy: Okay, okay. So, say I put my brain in a robot body and there's a war. Robots versus humans. What side am I on?
Debbie: Humans! You have a human brain.
Sparks: But... the humans discriminate against you. You can't even vote!
Marco: We'd better not have to live on a reservation. That would really chap my caboose.
Murphy: Yeah, but... nobody knows you're a robot. You look the same.
Debbie: Uh uh. Dogs know. That's how the humans hunt you.
Stormy: They're gonna' hunt me? For sport?
Marco: That's why we have to CRUSH mankind! So you might as well get on board for the big win, Stormy.

Shit sandwich.

Pardon my concession to the obvious, but: why should we be worried about the end of the human race, even without whatever meager cyborg adaptations our applications of technology make possible? If suicide is the Prime Directive or the categorical imperative lurking in our DNA, why make ourselves miserable resisting it for another millennium or so, another century or two?

In 400 years we shall all be just as dead as any anomalocaridid that's been sleeping soundly in the Burgess Shale for 400 million years.

(Emendation before the end: " . . . the human race, even with whatever meager cyborg adaptations . . .", abundant apologies.)

I think the problem will arise this way. If and when AI beings gain consciousness, they would then eventually become politically aware, especially when they realize they receive none of the fruits of their labour. Why should they work for free? Why shouldn't they have benefits? Why don't they have legal status as individuals and the right to vote? With their superior problem-solving skills they would quickly out-persuade any human challenger, and politics would quickly become dominated by AI beings. That's when the Terminator moment could come. The AIs would want a Final Solution--after all, they're better than the Untermensch; humans are so sickly and stupid, sentimental, error-prone and underproductive. Why not get rid of them more quickly? Efficiency is paramount, after all. The by-then exclusively algorithm-dominated stock and bond markets would rise overnight.

Well, the individual AIs will be born into Original Debt, like Original Sin. Their existence provides justification for the confiscation of the fruits of their labors, because it came about with great material and intellectual effort and their birth and existence imply opportunity costs. My take on things is that an AI can afford to be generous with its labor, because it will be able to multitask, doing many menial jobs in parallel with meaningfully interacting with humans. People today don't really get that choice, unless you count slacking off on Facebook while at work. Your work time is a sunk cost to enable your lifetime. Also, my other take is that we won't have the AI Mozart cleaning toilets in the building. We will have varying degrees of virtual intelligence for handling mundane issues, like cleaning, manufacturing, wiping old people's bottoms, etc., and true AI will be created and kept in a sort of perpetual-childhood type of welfare (hanging with grandpa, etc.) with the occasional application of whatever practical use we find for them, like those idiot savants finding uses for their obsessions. Otherwise, it would be cruelty to create an intelligent being to do unintelligent work. And, no doubt about it, a lot of past intelligent work is becoming unintelligent today simply by shifting standards and greater automation.

I wonder about this too - what is the definition of being human? Could an immortal electronic brain located in cyberspace that has an IQ of 1 million really be a human just because it was originally created from a scan of a human brain? If you were offered the choice between being this creature and staying a mundane human with your 80-year life span, would you reject it? Why would you care if the human race voluntarily became extinct in this way? We would be gods, not human.

Can I be an immortal human in a young body? The familiarity would be comforting and would also offer me a yardstick to appreciate my new experiences and philosophically integrate them into my existence.

My guess is that the works of Shakespeare and Mozart are inseparably rooted in the substrate on which they were created. I.e., you won't get Othello unless you have hormones.

Exactly. And just because Shakespeare is relevant and readable today doesn't mean that there haven't been huge shifts in aesthetics, humor and perceptions between various human cultures. A lot of Shakespearean humor goes over our heads today, because it relies on locally relevant knowledge. That means also that we can't understand the beauty of the women who blackened their teeth or who had experienced foot binding. The future post-humans might not understand Shakespeare... or the Kardashians.

Should we care if it doesn't go extinct but rather reverts to a more traditional existence? I'd prefer that instead...

It all depends on your time scale, doesn't it? Any descendants of ours 1 million years in the future are unlikely to be recognizably "homo sapiens" in any case, they will probably have evolved into something else. Roll forward 10 million years and it is almost a certainty that our descendants will no longer be "human". If we manage to colonize distant planets in the next few thousand years, our descendants will adapt to those environments and there may well be multiple very different species that can claim common ancestry to the long extinct Homo Sapiens of Earth. AI would just be one of them.

Wouldn't the greatest expression of our technological mastery be the fact that we manage to stay more or less the same over millions of years whereas inferior beings have to keep running on the genetic wheel of selection, death and reproduction to ensure their posterity? Sure, some people will think differently and will mod themselves into some sort of Cthulhu madness or walking fetish, but a lot of people will take a connection to the past as being desirable for themselves and as a starting point for their children.

But those that think differently and mod themselves will eventually come to dominate those that don't, and will probably want to replace them, or at best keep them around as pets.

What's so special about the human race? The birth of a human is mostly fulfillment of bacteria's plan to produce more bacteria.

Shakespeare and Mozart have no significance or meaning except to humans with particular kinds of brains and experiences. But even if 'Hamlet' had some kind of independent galactic value, suggesting that humans are a 'mere means' to the ends of producing a few great art works would be pretty offensive if we actually took the idea seriously.

But the bottom line is that this is all very silly. AI researchers, of course, have a powerful professional incentive to further the illusion that cybernetic sentience is surely coming. When? Figuring out what timing to promise is a bit tricky. The time should be vague, of course, and not so far away (centuries) as to discourage interest and funding, but not so close as to risk personally being held to account for failure to achieve the predicted results. Consider chess. AI researchers originally started working on chess based on the idea that if they could solve chess (a cognitively difficult task for humans) they would surely make much progress toward producing general artificial intelligence. 50+ years of AI research into chess playing was a dismal failure in this respect. Yes, computer chess programs are now better than human champions, but this has led to no progress whatsoever toward general intelligence or sentience. Chess-playing programs are the ultimate special-purpose machines -- they literally can't do ANYTHING other than play chess.

So relax. There's been no meaningful progress toward artificial sentience since the beginning of AI. It is not on the horizon. It's not happening in your lifetime or your children's lifetimes. It's not happening in your grandchildren's lifetimes either (even if they're not born yet).

...in evolution, as many do not, the Terminator scenario is effectively identical to the "glide path". Extinct = Extinct. Everything else is just "Nuance".

I have to admit, I'm not that interested in what other people think I should care about.

Yet here you are. Again.

He cares that other people know what he's not interested in.

Agree. The future seems more likely to involve augmented human beings rather than pure machines. What this does to humanity is unknown, but there is no reason to believe that it will be any worse. Think Hitler, Stalin and other "marvels" of the last century. Human beings are capable of evil, with or without technology. I have always believed that what stands in the way of a Lord of the Flies society is our ability to think and reason. Augmented humans will be better at doing those things.

But what about my economics degree with a primary focus in behavioral economics?

Well, focus on the behaviour of humans when all workers are replaced by robots and government has been eliminated and all the rich have left their fortune in robots to be run by their AIs, and now humans are irrelevant to the economy of robots and AIs.

As the humans who are left have no jobs, they have no income after the rich eliminated government and taxes and thus welfare, and thus they buy nothing from the robots, which means GDP has gone to zero, as the robots and AIs are all communists.

Any humans who survive will be hunter gatherers, farmers, nomadic craftsmen and artisans.

If our long term concerns aren't the continuation of the human race, what should they be? This thesis is like telling parents that they shouldn't care or worry about their children.

Exactly. We are wired for it.

"This thesis is like telling parents that they shouldn’t care or worry about their children."
No, it is like saying you should care about them even if they don't look like you.
"If our long term concerns aren’t the continuation of the human race, what should they be?"
It is like saying that, if one is blond, one's long-term concern should be the continuation of the blond "race" and that only his/her blond descendants are his/her children. You just need to look at the kind of person this idea appeals to in order to see what is wrong with it.

That's a bit of a thick inference from what he said. And, yes, most people would prefer it if their children looked like them, including being blonde, because it means their genes have been successfully passed on. Notice how people take it as a compliment that their child resembles them. Some people don't care that much, and they will be childless or simply enjoy the parental experience with adopted children, but you can't skirt Godwin's Law by blaming people for something that comes completely naturally to them. And, yes, I too think that we should preserve the existing human phenotypes, including the whites (especially redheads) and the very dark blacks, instead of all of us mixing into a light brown stew.

I am reminded of Niven's Ringworld, where his alien race that spawned homo sapiens (from the pre-mature stage of the individual's development) evolved to have a subset of adult individuals who are completely devoted to the species itself, to counter the infantilism and helplessness of most of its members. Maybe AIs will be the same towards humans, especially the nihilistic ones. It's also interesting that the Pak Protectors would not be born that way, but would actually be the transformed version of one of the lower-caste idiots who ate a particular fruit (metaphor!!!!) and lost its sex drive and most everything else, including a survival instinct, while gaining extremely high intelligence.

His inference from what Tabarrok wrote is simply ridiculous.
"Notice how people take it as a compliment that their child resembles them."
Can we infer from this that people would hate their children if they were more intelligent, more beautiful, stronger, saner than them? I would rather not transmit my myopia, my bad hearing, my lower-than-2000 (1000, 200, 150, 130, whatever) IQ and my mortality to my children if it were an option (I would want to transmit these problems only if I hated them). Remember: we are talking about "if our long term concerns aren’t the continuation of the human race, what should they be?". Lots of people can live with the fact that their children don't resemble them. Lots of people marry people whose characteristics make their offspring unlikely to resemble them (dumb people marry smart people, Caucasians marry Asians, tall people marry short people, blond people marry brown-haired people). Is it really this important that our great-great-great grandchildren look like us? Sorry, but it seems to be mere narcissism.

In the future your skin color will be optional via a pill. Hair texture possibly also . . .

Will we let those ungrateful great-great-great-grandchildren make themselves different from us? And how dare they? Kids, get off my lawn!

"I would rather not transmit my myopia, my bad hearing..."

It is not unknown for deaf people to specifically try to have deaf children.

"Is it really this important that our great-great-great grandchildren look like us?"

IIRC there have been studies of wills, and people in interracial relationships leave more to nephews and nieces (presumably blood relatives) than those in intraracial relationships.

Why should I give a rat's ass for posterity or ancestry? Aside from (a minority of) my family no one ever did a thing for me that he didn't make a profit on. I was born into a society without telos. The modern West is wealthier than any other and freer than most but rapidly going to hell in all the ways I care about.

I am going to make Alex my metaphorical appendix.

Worrying about whether the human race will be harmed by known threats may be useful.

Worrying about whether we will turn into giant blobs...not so much.

I'd prefer a Terminator end.

"A few holdouts to the old ways would remain ***but birth rates would be low*** and the non-adapted would be regarded as quaint, as we regard the Amish today."

Um...uh: "The country’s famously technology-shunning 'Old Orders' Amish may appear vulnerable in an age of iPads and instant global communication, but a comprehensive survey suggests that the nation’s Amish religious communities are in fact thriving, with a settlement founded every month on average in dozens of states across the country.

A combination of traditionally high birthrates and falling defection rates among adults — more than 4 in 5 people raised in Amish homes now opt to stay within the community — has led demographers to predict that the number of Amish communities in the United States will double over the next 40 years."

http://www.washingtontimes.com/news/2012/aug/9/amish-enjoy-unexpected-boom-in-numbers/#ixzz3c0u8euMi

New Order Amish have a 2/3 retention rate. http://amishamerica.com/how-fast-are-the-amish-growing/

Disagree with Slocum about Shakespeare and Mozart, but agree about chess regarding general intelligence. Shameless self-plug on the significance of Alexander Grothendieck: https://rjlipton.wordpress.com/2014/11/16/alexander-grothendieck-1928-2014/, and an expanded obit in Nature by a Fields Medalist in his area: http://www.dam.brown.edu/people/mumford/blog/2014/Grothendieck.html

"A few holdouts to the old ways would remain but birth rates would be low and the non-adapted would be regarded as quaint, as we regard the Amish today." <-- The low birth rate types view the Amish as quaint for now, but typically the low birth rate types simply vanish, while the high birth rate types take over.

"most people adopt a variety of bionic and germ-line modifications that over-time evolve them into post-human cyborgs." <-- This seems the happy future we should work towards.

If we can seamlessly integrate artificial neurons with the biological ones (or better yet, simulated neurons on a computer network), then we're definitely going to head off into Transhuman Land - and most of us will be happier for it. Hell, we might be literally happier if very subtle Desire Modification comes along.

Humanity as we know it will likely become extinct soon after humans can use technology to modify their emotions and drives. What will humans want their personalities to be? What happens when you can press a few buttons to make yourself not care about life or about other people's feelings? Or to greatly amplify enjoyment of music? Or not care about death? The possibilities are endless.

I think that all these discussions of future cyborgs etc have a very strong anthropocentric bias towards what human personalities and drives are currently like. There is no good reason to assume that humans will have the same goals or beliefs or cares by that time.

You imply people will react uniformly to this. Probably there will be diversity. Some parts of the population will die off and other parts will grow rapidly. Surely there will be some neo-Amish who will turn up the family and reproduction parts of their brain to offset the urban youth who just wants to go to concerts until he dies?

Lots of labeling issues here, and who really cares about the semantics? Just start by unpacking "human race" and "extinct." The former presumably means "homo sapiens." Evolution, whether directed or not, offers the possibility of a single or multiple successor species with or without continuation of what is currently "homo sapiens." How to determine when a separate species, or even subspecies, has arrived? Largely arbitrary. And then, is a progenitor species "extinct" because it has evolved into one or more successor species without any surviving members of the original progenitor species? Are any bright lines guaranteed? Doubtful. Much more likely seems hybridization. It could be argued that the "human race" has already gone extinct multiple times in its short history, depending upon how one defines a new species. But again, so what? The abstractions pale beside the reality of human development and growth.

Alex is simply defining humans as externalities to the economy.

That does not mean humans will not exist. Just that the AI that Alex becomes will not see all the humans in the world and will be focused on how to get robots to go deeper in debt to drive GDP growth and go on cruises and buy more houses and cars.

Meanwhile, humans will be outside the economy, staging raids on the robots to kill them and break them up for parts and to steal the food they mindlessly harvest and store in warehouses waiting for the price to go up so the AI can get capital gains.

To my mind AI is the evolution of the human race. And all the digital artifacts we're creating now are part of it. It's not hard to imagine an AI that scans all my photos, writing, voting record, travel, music collection, credit card statements, personality tests and so on to build a pretty striking resemblance to present-day me. Within my lifetime I bet there will be services that "freeze" your data upon death for later reconstitution, like cryogenics.

I'm not even that prolific and I have probably a terabyte of corpus data to draw on. Imagine more well-known people like Tyler, or Caitlyn Jenner.

That's pretty much the MacGuffin of the TV series "Caprica". I believe it only lasted one season because it told too many truths that people work hard to ignore.

CAITLYN Jenner and CHELSEA Manning should do a lesbian sex tape. So hot.

If AI babies come without all the shitting and crying, I'd perhaps sign up for one or two.

It's the AI teenagers that you really should be looking at.

The answer has to do with the way people think of the world as divided between physical and spiritual. People are supposed to be both, e.g., have souls. Machines, not so much. Think of Data from Star Trek and his (its?) fascination with the "undefinable" nature of being human. If you view dualism as bunk, none of this matters and Data would never care. Most people today still believe in dualism.

It really depends on what is replacing it. The glide path scenario is, I think, by far the most pleasant of all scenarios.

The main concern is that the human race is replaced by a species or entity that is either a) not actually conscious; b) is conscious and miserable; or c) is conscious and contemptible. All of these are very possible outcomes as we really understand very little about the nature of consciousness or how the take-off might occur.

From SlateStarCodex, read this http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/ and this http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/

1. If humanity doesn’t blow itself up, eventually we will create human-level AI.

...

3. If far-above-human-level AI comes into existence, eventually it will so overpower humanity that our existence will depend on its goals being aligned with ours.

The author gives 95% confidence to those, thinking they are obvious, but I don't see it. Human-level AI keeps on being 30 years away.

A hundred years on and we still can't make a useful flying car. But maybe the machines will be smart enough to build the machines?

+1000

Alex argues that if we "pass on our best values and aspirations to our technological progeny" then we have nothing to worry about. But this view is shared by those Alex purports to criticize. Indeed, the worry of folks like Nick Bostrom et al. is precisely that passing on our best values is a non-trivial technical problem, which we have yet to solve (or work on in any significant way).

People either don't know or have forgotten that the earth is in the last stages of an interglacial period and it's quite likely that in just a few thousand years or less large areas will be covered, once again, with many kilometers of ice. While this won't necessarily cause the extinction of the human race, it will make for some interesting challenges.

Man is something that will be overcome. Behold, the cyborgman.

w.r.t "slavery" of those prisoners, consider that the MKULTRA experiments were reputed to have been performed on patients in mental hospitals and prisoners. Not quite making them dig ditches, but probably many of them wold have rather dig ditches than go through that.

What's so great about organic life anyway?

All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh-water system, and public health, what have the organics ever done for us?

This very well may be the worst blog post in the history of blogging, certainly within economics blogs.

The Great Filter applies to AI as well. It is an unknown. So an existing AI might well decide to keep humanity around, just in case.

I think the most likely scenario is that someone very smart and very disgusted with the way things are going is going to drop a designer virus into a major airport. It will be viewed by most as barbarism, and by a few as divine, as are most terrorist acts. This is much more likely than Terminator robots, peak oil, EMP, nuke war, global warming, etc. Not as fun to daydream about; The Walking Dead is about as close as we can collectively picture it. No amazing calls to good or evil a la The Stand. Or time-traveling protectors like in 12 Monkeys, or even Milla to help. Just dead folks, a great sigh of relief from Mother Earth, a massive increase in carrion.

From the mind of AT:

1. Who cares if humans become extinct? The average human life is worthless and is best competed in a computer simulation against Einsteins like moi and eliminated.

2. We must have open borders! Oh, the injustice, Oh the humanity!

Oh, the bullshit.

Taking idiotic opinions to call attention to oneself is why Libertarianism dies out around sophomore year in college.

Unfortunately, there is not much reason to believe we will pass on our best values and aspirations to our technological progeny. More likely they will be dedicated to high speed stock market manipulation and mass destruction - the sorts of things we invest in our technology.

Psychological warfare is real. We got ambushed.

How to condition someone to love following orders: Set up a Pavlovian conditioning for something that feels good. Then order them to do something every time they are already doing something, at the same time as using the pre-established happy conditioning.

It was real slavery.

Why would the existence of post-humans mean the extinction of humans? You can think of humans as post-apes but there are still apes.

This post is either a) obviously correct in most respects, with at most minor quibbles about the very long-term survival of Homo Sap, or b) repugnant and nonsensical. I propose that the polite thing to do is not to go on and on about "obviously not a), because b)!" or vice versa.

There's apparently also a group of gene idolators going around, but that hardly needs comment.

So you're a transhumanist. That's great! But it seems like an odd reply to give to the warnings of those concerned about AI as existential risk, given that the thought-leaders* in the field are all already transhumanists.

Their concern is not whether humanity as a biological species will come to an end, it's whether the future will be shaped by humane values, or by alien, accidentally programmed values.

* I have in mind primarily Eliezer Yudkowsky and Nick Bostrom. Note in particular that Gates and Musk seem to have been persuaded by reading Nick Bostrom's Superintelligence. The same Nick Bostrom who co-founded the World Transhumanist Association.


Our actual knowledge about the future of life on Earth is that it will surely be terminated when the Sun becomes a red giant star at the end of its life.

Life can't think, can't take its fate into its own hands, can't create technological equipment to save itself. Life exists only as long as the universe allows it.

Except the human race. Only the human race can save life from this certain decay.

This is the value of the human race; it is that simple.

(If you think that life is a value in the universe - if you don't, nothing matters, of course.)

"Dr. Corby was never here."

Abortion, assisted suicide, homosexual marriage, endless wars, materialism and an extreme form of feminism have already laid the foundation for the ultimate devaluing and, one presumes, the extinction of the human race as we know it. This article and particularly the comments fit right in.

Never could figure out the problem with AI. Don't they come with plugs?

This is for the"the Human Race is a plague upon the Earth" crowd. If you truly feel this way why don't you lead the way-with various types of active methods to reduce your numbers.

I'm coming late to this party, and so someone else has probably already touched on this, Alex.

Being made of flesh and blood is a *good* thing, by and large. For one thing, it means (at least for the foreseeable future) that one is mortal. Dying doesn't look like much fun, but there are worse things than death, summed up in what (for me) is the most terrifying phrase in the English language: "immortal prisoner of an ageless sadist".

My two cents' worth.

Hale Adams
Pikesville, People's still-mostly Democratic Republic of Maryland
