What should we be afraid of?

January 14, 2013 at 12:33 pm in Science

That is the topic of the new Edge symposium and many of the answers are interesting.  Here is the take from Bruce Sterling, who fears that when it comes to the Singularity there is “no there there”:

Since it’s 2013, ten years have passed since Vernor Vinge wrote his remarkably interesting essay about “the Singularity.”

This aging sci-fi notion has lost its conceptual teeth. Plus, its chief evangelist, visionary Ray Kurzweil, just got a straight engineering job with Google. Despite its weird fondness for AR goggles and self-driving cars, Google is not going to finance any eschatological cataclysm in which superhuman intelligence abruptly ends the human era. Google is a firmly commercial enterprise.

It’s just not happening. All the symptoms are absent. Computer hardware is not accelerating on any exponential runway beyond all hope of control. We’re no closer to “self-aware” machines than we were in the remote 1960s. Modern wireless devices in a modern Cloud are an entirely different cyber-paradigm than imaginary 1990s “minds on nonbiological substrates” that might allegedly have the “computational power of a human brain.” A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there’s no there there.

So, as a Pope once remarked, “Be not afraid.” We’re getting what Vinge predicted would happen without a Singularity, which is “a glut of technical riches never properly absorbed.” There’s all kinds of mayhem in that junkyard, but the AI Rapture isn’t lurking in there. It’s no more to be fretted about than a landing of Martian tripods.

For the pointer I thank Michelle Dawson.

Andrew' January 14, 2013 at 12:44 pm

Okay, but the Martian Tripods are high impact.

JWatts January 14, 2013 at 1:23 pm

Yep, definitely a black swan event.

Rationalist January 14, 2013 at 12:45 pm

“Computer hardware is not accelerating on any exponential runway beyond all hope of control.”

– This is sloppy. As a simple point of fact, performance in FLOPS per dollar is still increasing exponentially. See, for example, the latest Top500 graphs.

Finch January 14, 2013 at 12:59 pm

Agreed, that statement is false, but I think it’s fair to say that all this persistent doubling in theoretical peak FLOPS is not as useful as people thought it would be for general artificial intelligence. We’ve long exceeded what we’d need to simulate human brains, assuming there isn’t anything weird going on in there, but we’re not remotely close to being able to organize or set up those simulations. Robin Hanson seemed wildly optimistic on this score, for example. Is he, or anyone else of note, recanting their predictions?

SomeGuy January 14, 2013 at 1:03 pm

>We’ve long exceeded what we’d need to simulate human brains

Woah woah woah…

What? Did I miss something?

Finch January 14, 2013 at 1:10 pm

There are something like 10^11 neurons in the brain. The best modern computers exceed 10^16 FLOPS. I don’t think anybody really thinks neurons are that complicated, so it’s probably that we have no idea how to organize them. It is also possible that there’s something unexpected going on at a low level in the brain, but I think most people think that’s unlikely.
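
That budget, spelled out with the round numbers above (a back-of-envelope sketch, nothing more):

    # FLOPS available per neuron if a top ~2013 supercomputer were devoted
    # entirely to brain simulation, using the rough figures quoted above.
    neurons = 1e11          # approximate neuron count in a human brain
    machine_flops = 1e16    # peak FLOPS of the fastest machines of the era

    print(f"{machine_flops / neurons:.0e} FLOPS available per neuron")  # ~1e+05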

Adam Calhoun January 14, 2013 at 3:29 pm

Neuroscientist here! We do, in fact, think that neurons are that complicated. A lot of evidence points to the fact that properly simulating a neuron requires simulating its morphology, which requires a lot of computational power; there is also a lot of nonlinear processing done in the dendrites, so you have to properly measure and simulate the connectivity, too (where in the dendrites a connection lands is important!).

This is completely neglecting any kind of molecular cascade, which we know is important for most peptide transmission, which is what you need if you want to know anything more than what it would be like to sit totally still getting photons beamed at you.

Simulating the brain will be hard, and take a lot of processing power; you can’t just use a binary connectivity/firing matrix.

Finch January 14, 2013 at 4:32 pm

This sounds more like “we don’t even know how neurons work!” than “brain emulation is just a matter of more FLOPS!” I am arguing that simulating the brain will be hard because we don’t know how brains work, and I appreciate your added detail. I suspect that much of the detail you reference will prove to be irrelevant – it’s a chemical machine that’s got to be hugely tolerant of error – but that’s an engineering-biased guess.

Rahul January 15, 2013 at 10:18 am

@Adam:

One argument that it’s not a processing-power issue at all: even granting that the real brain has a lot more power, is the quality of our current state-of-the-art simulations even proportional to the FLOPS we already have?

I think not.

Marcel Kincaid January 18, 2013 at 5:24 pm

“I don’t think anybody really thinks neurons are that complicated,”

Well, no one who is completely ignorant of neuroscience, I suppose. That’s the thing about this “movement” … most people in it are grossly ignorant of biology and AI technology, or even what Moore actually said about his “law”. Kevin Drum says that AI will “naturally evolve” … um, without natural selection operating on genes in offspring? Cluelessness abounds.

Marcel Kincaid January 18, 2013 at 5:25 pm

“so it’s probably that we have no idea how to organize them”

Yes, that too is definitely true.

Marcel Kincaid January 18, 2013 at 5:28 pm

“I suspect that much of the detail you reference will prove to be irrelevant ”

The evidence doesn’t support your suspicion. It isn’t just about robustness, it’s about the fact that neurons are nothing like switches.

Tim "Beatdown" Ferrell January 14, 2013 at 1:11 pm

The raw processing power is there in spades. The problem isn’t that the computers are too weak; the problem is that we don’t know how to tell the computers to do whatever it is that we do when we “think”.

Ryan January 15, 2013 at 10:53 am

The processing power is nowhere near there. I do object recognition for a living. The images I deal with are 10 megapixels. I have 300 ms to segment and detect objects in these images. Even simple things take an extremely long time. The features I generate take up 2-5 GB of data for each image. And that’s just a single image.

Give any decent computer scientist a computer that is infinitely fast with infinite memory that consumes < 1 kW and it won't be long before you get an artifact that is more intelligent than any human.

Even in the last five years there have been tremendous strides in state-of-the-art classification systems. You may have heard of deep learning. This is simply a new take on age-old neural nets. And it is setting new records by leaps and bounds in nearly every AI benchmark that we have. These gains were not made possible by a new algorithm. They were made possible by raw computation and lots of data. Instead of taking years to train, these nets can now be trained in weeks or months. Once they are trained they can do their job with great accuracy in a modest amount of time.
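
At their core, those age-old neural nets are just stacked linear maps and nonlinearities trained by gradient descent; here is a toy two-layer version for illustration (nothing like a production vision system):

    import numpy as np

    # Toy two-layer neural net trained by plain gradient descent (learns XOR).
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(20_000):
        h = np.tanh(X @ W1 + b1)              # hidden layer
        p = sigmoid(h @ W2 + b2)              # output
        dp = (p - y) * p * (1 - p)            # grad of squared error w.r.t. pre-activation
        dW2, db2 = h.T @ dp, dp.sum(0)
        dh = (dp @ W2.T) * (1 - h ** 2)       # backpropagate through tanh
        dW1, db1 = X.T @ dh, dh.sum(0)
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 0.1 * grad               # gradient descent step

    print(np.round(p, 2))                     # should approach [[0], [1], [1], [0]]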

And yet, these nets are SMALL compared to the human brain. Even the biggest are equivalent to a single cubic millimeter of brain matter or less. And they are also incredibly simple, with the most straightforward algorithms having no capacity for memory, though there is research being done in this area too.

We will realize general AI as soon as we can get petabytes of random-access memory, with petabytes per second of bandwidth, coupled with petaflops of computational power, all running on less than 1 kW. We don't even need very strong single-threaded performance. But it does need to be cheap. I don't even know if we will be able to get there outside of current biological substrates.

That is essentially a factor of 1,000-10,000 beyond where we are now. We came that far in the past 20 years, but I am not sure we are in for a repeat performance. Think of the scale of those factors. Neural nets that take 10 years to train now (even if you could get enough memory and training data to train them) could be trained in a single day. Nets that take a month to train on that hypothetical system would take anywhere from a hundred to one thousand years to train on today's machines. And we still don't have enough data to adequately train these basic nets.

No, we have nowhere near enough processing power. Not even close.
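
Those scale factors check out, roughly (a quick sanity check using the numbers above):

    # The hypothetical machine above is taken to be 1,000-10,000x beyond
    # today's hardware; here is what those factors do to training times.
    month_in_years = 1 / 12
    for factor in (1_000, 10_000):
        print(f"{factor:,}x: a 1-month run there takes ~{month_in_years * factor:.0f} years here; "
              f"a 10-year run here takes ~{10 * 365 / factor:.1f} days there")
    # 1,000x  -> ~83 years  / ~3.6 days
    # 10,000x -> ~833 years / ~0.4 days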

Rahul January 15, 2013 at 12:53 pm

@Ryan:

Or we could just think of better algorithms and strategies.

Marcel Kincaid January 18, 2013 at 5:31 pm

“Give any decent computer scientist a computer that is infinitely fast with infinite memory that consumes < 1kw and it won't be long before you get an artifact that is more intelligent than any human"

This is deeply uninformed and deeply unintelligent.

Rationalist January 14, 2013 at 1:12 pm

“We’ve long exceeded what we’d need to simulate human brains”

- Even the experts don’t actually know for sure how many FLOPS you need to simulate human brains. The Brain Emulation Roadmap has a median estimate of 2033. See page 80.

Finch January 14, 2013 at 1:30 pm

Short of fringe theories like quantum effects being significant, it’s hard to see more than a few kFLOPS being needed per neuron. If mainstream theories of the brain are basically correct, we ought to be doing whole-brain emulation now. That we aren’t even remotely close indicates that the issue is not raw computational power.

#4 on that list is most realistic, but even it seems quite inflated. Do we really need 10^7 FLOPS per neuron? Can we not run the simulation at less than full speed?
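
Putting the two per-neuron estimates in this sub-thread side by side (a sketch with round numbers):

    # Whole-brain totals implied by the two per-neuron figures above.
    neurons = 1e11
    for label, per_neuron_flops in (("a few kFLOPS per neuron", 3e3),
                                    ("10^7 FLOPS per neuron", 1e7)):
        print(f"{label}: ~{neurons * per_neuron_flops:.0e} FLOPS for real-time emulation")
    # ~3e+14 vs ~1e+18 FLOPS; a ~1e16 FLOPS machine sits between the two,
    # which is where "run it slower than real time" comes in.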

Rationalist January 14, 2013 at 3:47 pm

It is known that each neuron has about 7,000 synaptic connections, and that a neuron can fire 200 times per second. If each firing depends on all 7,000 inputs, that’s a minimum of 7,000 * 200 = 1.4 million calculations per second per neuron.

Of course, you may be able to ignore some inputs and/or recalculate the neuron less frequently than 200 times per second, but there’s always a tradeoff between better understanding (so you know what you can safely ignore) and raw power (so that you can just include every detail).
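
Extending that arithmetic to the whole brain (same rough figures, just multiplied out):

    # Per-neuron and whole-brain operation counts from the figures above.
    per_neuron_ops = 7_000 * 200          # synapses x max firing rate (Hz)
    neurons = 1e11

    print(f"per neuron:  {per_neuron_ops:.1e} ops/s")            # 1.4e+06
    print(f"whole brain: {per_neuron_ops * neurons:.1e} ops/s")  # 1.4e+17
    # Roughly an order of magnitude above a ~1e16 FLOPS machine, if every
    # synapse really needs one calculation per potential firing.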

Finch January 14, 2013 at 4:40 pm

Is there a reason you can’t just run the simulation a million times slower? There is: we don’t know how to set it up. We need to be able to know what’s important in the brain, detect it accurately and non-invasively, provide it with realistic IO (maybe the hardest part unless you endorse embodiment, which I gather most brain-emulation folk hope to avoid), and let it go. All of this is hard. Which is why regular old-fashioned AI is way out ahead of brain emulation in terms of providing useful functionality.

Surely if brain emulation were realistically going to provide useful functionality within years and not decades, companies like IBM and Google would be all over it. They aren’t.

Rationalist January 14, 2013 at 4:47 pm

“Is there a reason you can’t just run the simulation a million times slower?”

A 5-minute conversation, times a million, takes 9 years.

Joe Smith January 14, 2013 at 5:11 pm

“A 5-minute conversation, times a million, takes 9 years.”

Just because the emulation would not run in what we consider real time does not make it any less an artificial consciousness – with life-changing implications for all of us.

Rationalist January 15, 2013 at 11:51 am

@Joe Smith:

That’s great ’n all, but because of Moore’s law it would be quicker to just wait 5 years, then run that same 5-minute conversation in about 11 months for the same price. So it is never going to make sense to slow down by a factor of 1,000,000, unless Moore’s law stops.
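
The arithmetic behind that (a sketch assuming a Moore’s-law doubling every ~18 months):

    # A 5-minute conversation emulated a million times slower than real time,
    # run today vs. after waiting five years for faster hardware.
    slowdown = 1_000_000
    minutes = 5 * slowdown

    print(f"run today: ~{minutes / (60 * 24 * 365):.1f} years")   # ~9.5 years

    speedup = 2 ** (5 / 1.5)   # five years of doubling every 18 months, ~10x
    print(f"wait 5 years, then run: ~{minutes / speedup / (60 * 24 * 30):.0f} months")  # ~11 months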

Joe Smith January 15, 2013 at 3:22 pm

@rationalist

Sure. If the point was to have the five-minute conversation, then it makes accounting sense to wait. I was once part of a team designing software where the explicit target was hardware we expected to be available two years after we started. The goal, however, is to demonstrate consciousness, and my point is just that “consciousness” does not have to be in real time.

Marcel Kincaid January 18, 2013 at 5:34 pm

“If mainstream theories of the brain are basically correct, we ought to be doing whole-brain emulation now.”

I have no idea why you think that, or what you imagine is a “mainstream theory of the brain”. We don’t even know how memory works in brains, or what the various parts of the brain do … all we have is an idea of what parts are active during certain tasks.

Rationalist January 14, 2013 at 1:27 pm

Raw FLOPS can (in most cases) substitute for deeper understanding. For example, it is straightforward to write an algorithm which multiplies n*n matrices in O(n^3) time, but writing down the better O(n^2.8) algorithm requires significant insight.
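
The straightforward O(n^3) algorithm, for reference (Strassen’s O(n^2.81) method gets below that by splitting each matrix into quadrants and doing 7 recursive multiplications instead of 8):

    # Naive matrix multiplication: three nested loops, O(n^3) scalar multiplies.
    def matmul(A, B):
        n, m, p = len(A), len(B), len(B[0])
        assert all(len(row) == m for row in A), "inner dimensions must match"
        C = [[0.0] * p for _ in range(n)]
        for i in range(n):
            for k in range(m):
                for j in range(p):
                    C[i][j] += A[i][k] * B[k][j]
        return C

    print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]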

In the case of building “dangerously” smart software AI, we don’t actually know how important FLOPS is. Certainly it has to help, but how much? The only way you could know that for certain is if you already knew how to write a human-level software AI, which of course we don’t.

In the case of whole brain emulation, we can be significantly more confident that more FLOPS helps, purely because when you are emulating a system, every detail you don’t have to worry about excluding on efficiency grounds is a detail that you don’t have to waste time trying to understand.

Rationalist January 14, 2013 at 12:56 pm

“It’s just not happening. All the symptoms are absent.”

What symptoms would we expect to see if an academic somewhere were just about to make a breakthrough in AI research tomorrow? Something of the same magnitude as Turing’s seminal 1936 paper? You wouldn’t see anything. With a large amount of computing power available, a hard-to-predict breakthrough in theory could have big consequences very quickly.

Let me be clear: I don’t believe that the singularity is going to happen in the next decade. But the argument given here is unsound. The best reason we have for thinking that the singularity will not happen in the next decade is that AI is a hard problem that we have so far failed to solve given 7 decades of work.

dan1111 January 15, 2013 at 5:27 am

A breakthrough of similar magnitude to Turing’s paper is vanishingly unlikely, because the field of AI is much more mature today. When thousands of people have worked on a problem for decades, the possibilities tend to have been explored to an extent that there is little room for huge leaps forward. It would be akin to someone suddenly discovering how to make an internal combustion engine 500% more efficient.

AI might not be as mature as the automobile, but it seems safe to extrapolate forward from the historic rate of progress, and that makes the singularity seem about as likely as the flying car.

Rationalist January 15, 2013 at 12:07 pm

“vanishingly unlikely, because the field of AI is much more mature today”

– when Turing published his paper, the field of calculating machines probably seemed pretty mature. Calculating machines were already in use, had been since the turn of the century, and were experiencing incremental advances towards better reliability and lower cost (as far as I can see).

If a field is missing a really fundamental breakthrough, the consequences of that breakthrough will be hidden from you, obviously, and the field may well seem “mature”. When Turing published his paper, that missing concept was programmability.

Marcel Kincaid January 18, 2013 at 5:43 pm

Even aside from the ignorance of Babbage and Lovelace that your comment displays, there’s a tremendous ignorance of how science works and advances. The current edge is quantum computation, and lots of people are working on it … but it still doesn’t give us anything like human-level AI, and won’t until we have a model of how human cognition or any cognition of similar power works.

Rich Berger January 14, 2013 at 12:58 pm

A Pope may have said it, but it is a line in a hymn, and numerous similar admonitions appear in the Bible.

What to be afraid of? Increasing power of technology mated to unchanging and flawed human nature, among other things.

Marcel Kincaid January 18, 2013 at 5:46 pm

Global warming is already here, and it takes 100 years for CO2 to leave the atmosphere. We’re looking at an increase of the average global temperature of 7-10 degrees F, which will end human civilization.

Ted Craig January 14, 2013 at 12:58 pm

What I’m becoming afraid of is that public intellectual displays (like The Edge Symposium, TED, the idea issues of FP, The Atlantic, etc.) are so homogeneous. To give one example, two of The Edge writers cite C.P. Snow.

Urso January 14, 2013 at 2:13 pm

The Malcolm Gladwellification of America

dingbat January 14, 2013 at 1:10 pm

” A Singularity has no business model, no major power group in our society is interested in provoking one, nobody who matters sees any reason to create one, there’s no there there.”

Except what is problematic is when/if a device or system wants to preserve itself without a human provoking it. A device with a sense of self-preservation is more of a worry than a device with a sense of self.

Brian Donohue January 14, 2013 at 1:20 pm

Singularity shmingularity. My fear is more humdrum: that this country trashes a 200-year legacy and dies in a protracted fit of childishness.

American people, you need to hear these home truths:

http://finance.fortune.cnn.com/2013/01/14/vat-middle-class/?iid=HP_LN

Rich Berger January 14, 2013 at 2:17 pm

Thank you low-information voters. Obama is the symptom of our decline.

Marcel Kincaid January 18, 2013 at 5:49 pm

BWAHAHAH!

What an ignoramus and idiot.

dead serious January 14, 2013 at 1:41 pm

“A Singularity has no business model, no major power group in our society is interested in provoking one…”

You honestly can’t imagine any military applications?

kebko January 14, 2013 at 1:41 pm

Computers now destroy us at chess and Jeopardy. How long ago were intelligent people arguing that this could never happen?

lemmy caution January 14, 2013 at 2:06 pm

“Jeopardy” is a carefully selected showcase for the computer. Much of the computer’s advantage comes from its speed on the buzz-in. The chess thing is impressive.

Major January 14, 2013 at 3:46 pm

The speed on the buzz-in is irrelevant. The point is that Watson is amazingly good at answering general knowledge questions, asked in ordinary English, involving wordplay and subtle connections between unconnected areas of knowledge. It’s a fantastic achievement.

Scott from Ohio January 15, 2013 at 6:44 am

Watson *was* amazingly good. They recently had to wipe its memory because it started cursing at the researchers after they showed it Urban Dictionary: https://www.youtube.com/watch?v=mDQZxfxLHTs

Careless January 20, 2013 at 1:55 pm

He was talking about having to revert it for it to go on national television, meaning that took place before the Jeopardy! match.

Michael B Sullivan January 14, 2013 at 4:17 pm

How Chess Computers Work:

Chess is divided into an “early game,” a “middle game,” and a “late game.” The early game and late game are well-understood, deterministic games — there are a few known strategies which always work and everyone gets. Chess computers use static libraries (ie, they just reiterate written down strategies) for the early and late games.

The middle game is too complex for static analysis. In the middle game, a grandmaster-class chess AI (like Deep Blue) relies on brute force calculation of ALL possible board states out to a depth of ten moves or more. That is, Deep Blue says, “Okay, what are all the possible moves I can make right now?” (Typically, in the middle game, there are about 30 moves available to each player, though it depends on the current state of the board). It creates 30 hypothetical board positions. Then, for each of those board positions, it says, “Okay, now what are all the possible moves that you can make right now?” And it repeats that for as long as it is allowed to take for a move, which, back when it first beat Kasparov, was about 10 moves deep. So that’s about 30^10 board positions, by the way, which is a hella big number.

After it’s done all that, it just says, “Okay, I’m going to rank all those board positions in a pretty simple way: material advantage, avoiding checkmate, a little bit of ‘I want my pieces to be in the center of the board,’ and choose the path that provides the best of those board positions even if you make all the right choices.”

Deep Blue was a phenomenal demonstration of multi-processing hardware and supercomputing in the purest sense, but all it does is brute-force a problem. It is no more “thinking” than is your computer when it solves 34945233567876 times 38998778986 faster than you possibly could. It has no intuitive ability to look at a board and follow only the “good” routes for look-ahead. It’s just really, really, really, really, really fast.
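
In outline, that back-up step is just minimax over a game tree; in a real engine the children come from generating legal moves and the leaf numbers from an evaluation function at the depth cut-off (a toy sketch, not Deep Blue’s actual code, which added heavy pruning and custom hardware):

    # Minimax in miniature: positions are nested lists, leaf numbers are the
    # evaluation scores at the depth cut-off; scores are backed up the tree.
    def minimax(node, maximizing):
        if isinstance(node, (int, float)):          # leaf: already evaluated
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    # A tiny hand-made tree, two plies deep. At ~30 legal moves per ply, a real
    # middle-game search to 10 plies touches on the order of 30**10 positions.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(minimax(tree, True))   # 3: the best the side to move can guarantee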

Watson:

I know less about Watson than about Deep Blue, but note that all of the interesting achievements for Watson were for parsing the “answers” to get an appropriate search behavior — and it’s still obviously MUCH worse at understanding the “answers” than a human would be. It compensates for its lack of comprehension by, you know, storing lots of data (I think that everyone has been clear that computer storage devices can store huge amounts of raw data for a very long time) and being quick on the buzzer (I think everyone has been clear that computers can react faster than humans for a very long time).

Major January 14, 2013 at 5:25 pm

I know less about Watson than about Deep Blue

I’m not sure how much you know about Deep Blue, but you certainly don’t know much about Watson. Here’s an article that describes its (very sophisticated) software technology, DeepQA: http://www.aaai.org/Magazine/Watson/watson.php. Question-answering by human beings may involve a similar complex iterative process — question analysis, hypothesis generation, filtering, scoring, merging — that is largely unconscious. Claiming that Watson lacks “comprehension” or “understanding” just raises the question of what those words actually mean when applied to human cognition. Human thinking relies a great deal on “brute force” computation also. That’s why we have such complex brains.
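
In caricature, the pipeline that article describes looks something like this (a toy outline only; the stage names follow the DeepQA paper, while the tiny “corpus” and single scorer are made up for illustration):

    # Toy caricature of a generate / filter / score / merge QA pipeline.
    # The real DeepQA runs many analyzers and hundreds of scorers in parallel
    # over a huge corpus; this stand-in uses one word-overlap scorer.
    EVIDENCE = {
        "Chicago": "US city on Lake Michigan that hosted the 1893 World's Fair",
        "Toronto": "Canadian city on Lake Ontario and home of the Blue Jays",
        "Helium":  "chemical element lighter than air and used in balloons",
    }

    def overlap_score(candidate, clue):
        return len(set(clue.lower().split()) & set(EVIDENCE[candidate].lower().split()))

    def answer(clue):
        candidates = list(EVIDENCE)                                        # hypothesis generation
        candidates = [c for c in candidates if overlap_score(c, clue) > 0]  # soft filtering
        ranked = sorted(candidates, key=lambda c: overlap_score(c, clue))   # evidence scoring
        return ranked[-1] if ranked else None                               # merging and ranking

    print(answer("This US city on Lake Michigan hosted an 1893 fair"))  # Chicago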

Michael B Sullivan January 14, 2013 at 5:38 pm

Naturally, the software for Watson is more than “very sophisticated” — it’s a major triumph. And, indeed, it was far more of an AI triumph than Deep Blue, which was really just a very fast dedicated parallel architecture — Watson had a lot more algorithmic complexity to it.

But you don’t need to get way out into the weeds about how cognition works to note that, for all that its “comprehension” of the Jeopardy answers was incredibly impressive by the standards of AI, it was remedial by the standards of humans. Sure, maybe humans are doing something similar, but much, much, much more rapidly.

But humans (even humans who have no chance to win at Jeopardy) don’t answer “Toronto” for a US city — and they don’t discard the Jeopardy category titles because they’re simply too hard to extract information out of.

Boil it down, and it’s simple: Watson was significantly inferior to its human opponents in comprehension, and significantly superior in speed and total amount of information recallable in a lossless manner. That combination turned into a win, and cool for it. But everyone knew before Watson that computers are fast and can store lots of information.

Watson’s brilliant programming closed the comprehension gap ENOUGH that its superior recall and speed could give it the win. That’s an impressive achievement.

Similarly, Deep Blue’s incredible depth of look-ahead overcame its lack of understanding of the subtleties of board analysis, leading to a win. That too is an impressive achievement.

But lay people often come away thinking that those machines were thinking in a much more human-like manner than they were, rather than being badly deficient (compared to humans) in basic understanding, and compensating for it with the traditional advantages of computers.

Major January 14, 2013 at 5:50 pm

it was remedial by the standards of humans

Huh? It beat the best human competitors.

But humans (even humans who have no chance to win at Jeopardy) don’t answer “Toronto” for a US city

On the contrary, humans (even humans who have a chance to win at Jeopardy) routinely make significant errors, including basic category errors like getting a country/city association wrong. But in fact your example here of a supposed basic category error by Watson is much more ambiguous than you seem to think. See this article for details.

Michael B Sullivan January 15, 2013 at 12:23 am

Did you manage to read the entire comment? It was faster than the best humans and had a gigantic database of facts — and it was remarkably good at comprehension for an AI and terrible at comprehension for a human.

And while there may be humans who would call Toronto a US city, there are no humans who know that Toronto is a Canadian city (which is certainly a fact that Watson “knew”), saw the category was “US Cities,” and still answer “Toronto.”

I’m very aware of why Watson chose Toronto as its final Jeopardy answer (though you screwed up your link). It didn’t put very much weight on the category name, because category names tend to confuse it — they’re difficult to extract meaning from. Which, I’m sure, is true for some humans on some categories. But not categories like “US Cities,” which are simple and straightforward.

Watson was fast, it had processed an immense amount of information, and its comprehension of natural language in this very limited problem space was remarkable for an AI, yet still remarkably dim for a human. If it had been at human speed and had less-than-perfect recall of its 1 TB of text data (indexed up to 15 TB), it would have lost horribly. (1 TB may sound like not a big deal in these days of massive hard drives, but note that in terms of pure text, it’s about the equivalent of having perfect recollection of very roughly 500,000 books of facts.)

And you do nobody (including, perhaps, yourself) any favors by obfuscating the difference between its impressive-but-still-massively-deficient comprehension and its speed and memory advantages.

Major January 15, 2013 at 1:25 pm

It was faster than the best humans and had a gigantic database of facts — and it was remarkably good at comprehension for an AI and terrible at comprehension for a human.

The human contestants also had a gigantic database of facts. You still haven’t described what you mean by “comprehension” in this context, so your claim that Watson is “terrible at comprehension” isn’t terribly meaningful. You don’t seem to know much about how Watson actually works, and you don’t seem to have thought very carefully about the nature of cognition.

And while there may be humans who would call Toronto a US city, there are no humans who know that Toronto is a Canadian city (which is certainly a fact that Watson “knew”), saw the category was “US Cities,” and still answer “Toronto.”

On the contrary, as I said, human beings make category errors often. But as I also said, it does not appear that Watson made that kind of error here anyway. There is in fact a city in the U.S. called Toronto; Toronto in Canada has an American League baseball team; and the terms “U.S.” and “America” are often used interchangeably. Given these facts, Watson’s answer was credible. Watson got the answer wrong because the information available to it was incomplete and ambiguous, not because of a basic error in its reasoning process.

Ray Lopez January 14, 2013 at 5:42 pm

“Deep Blue was a phenomenal demonstration of multi-processing hardware and supercomputing in the purest sense, but all it does is brute-force a problem” – you are behind the times. The Houdini chess engine is a successor to Deep Blue and is about 500 Elo points stronger. How? Rather than applying “brute force” to every line, it uses “human principles” (such as: a rook behind a pawn is more valuable than a rook in front of a pawn) to pick select lines to search more deeply with the alpha-beta algorithm. So it’s a combination of “expert system” and “brute force”. Result? As I say, about 500 points stronger, meaning Houdini will beat Deep Blue about 95% of the time.
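
The Elo arithmetic behind that last figure (the standard expected-score formula; the 500-point gap is an estimate):

    # Expected score of a player rated `diff` points above an opponent,
    # under the standard Elo model.
    def expected_score(diff):
        return 1.0 / (1.0 + 10 ** (-diff / 400.0))

    print(f"{expected_score(500):.1%}")   # ~94.7%, i.e. roughly 19 wins in 20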

uffy January 15, 2013 at 7:48 am

The real question is: are we doing something fundamentally different from the use of similar algorithms to make decisions, such that we are the ones considered to possess intelligence?

Marcel Kincaid January 18, 2013 at 5:59 pm

“The early game and late game are well-understood, deterministic games — there are a few known strategies which always work and everyone gets. Chess computers use static libraries (ie, they just reiterate written down strategies) for the early and late games.”

This is quite inaccurate. There are not “a few known strategies which always work” in the opening; there’s a *book*, which is a partial tree of moves that has been shown, empirically, over years of development, to give good results. Getting out of book throws a program (or human) into strategic play. And the ending “book” is only good for a small number of pieces and a few moves.

“I know less about Watson than about Deep Blue”

You know almost zilch about either one.

Marcel Kincaid January 18, 2013 at 5:52 pm

“How long ago were intelligent people arguing that this could never happen?”

Some intelligent people say all sorts of erroneous things … take Ray Kurzweil, for instance.

You, OTOH, are not intelligent, you are fallacious.

Brandon Reinhart January 14, 2013 at 2:02 pm

Nuclear weapons are still pretty scary.

Abelard Lindsey January 14, 2013 at 2:03 pm

Forget about the A.I. singularity, which I never subscribed to in the first place.

The real future is the mundane singularity, which is based on bio-engineering (yes, including radical life extension), probable fusion power, and improvements in manufacturing capabilities. The wild card is the possible development of a propellantless space drive based on Mach’s principle, as well as potential FTL.

http://nextbigfuture.com/2013/01/rna-guided-human-gene-and-genome.html

Cheap gene therapy:

http://nextbigfuture.com/2013/01/cheaper-easier-and-faster-technique-to.html

Andrew' January 15, 2013 at 8:00 am

Considering I hope to live indefinitely, the gray goo deferred still worries me.

Margin January 14, 2013 at 2:30 pm

A Singularity doesn’t need a business model.

It is the accumulated outcome of various different business models.

Just think what it would take for a Singularity to NOT happen in this century.

The religious language is just a distraction.

jb January 14, 2013 at 2:32 pm

The other day, I was on my work computer, and I searched for the address of a nearby hotel. A couple of hours later, I got in my car. Google Now automatically had a ‘card’ that said ’18 minutes to Hampton Inn’. Had I needed directions, I could have spoken to my phone and it would have turned itself into a navigator. In a few years, the Google car might drive itself there.

Meanwhile, there are technologies that give paraplegics access to robotic arms that they move with their minds. Optical implant chips allow the blind to see. Quantum computers are on pace to be able to solve problems in seconds that would take normal computers lifetimes.

We are watching the singularity happen, brick by brick. It’s just that it will take 20-30 years before the “wall” is high enough that someone says “Hey, that’s a pretty impressive wall you’ve built there.”

I am unsettled by Sterling’s lack of perspective, although perhaps it’s just the fashion of being dubious about the future. No business case? Google was founded in 1998. In 1968, if someone said ‘Hey, we should build a search engine that will catalog all the world’s websites, so we can find anything that was ever put online’, people would say “What’s a search engine, what is a website, and what does ‘online’ mean?”

If, in 1968, someone had said “We should have a giant system for everyone to build their own diary of opinions and questions and debates and critiques of food” – others would have said “What’s the business case?”

Responding to a long-term trend with “What’s the business case?” is a huge tell (to me) that the person doesn’t really think things through enough. Even if the idea of self-improving machine learning was never profitable, it doesn’t mean it wouldn’t be pursued. Hobbyists! Student researchers! Philanthropists! There are lots of cool things that have been built by amateurs, with no particular interest in making money, simply because they love learning, they love experimenting. As computing power continues to grow, new capabilities will arise that we currently have no concept of. If a self-improving machine arrives in 2043, it might just be because some student “took a standard Intelligence Manifold, and rethreaded it with Memetic Qualithings, using a Gigaburst Reshaper.”

I don’t know what those things are. It doesn’t mean they (or their equivalent) won’t exist. Sterling saying “there’s no business case, so it won’t happen” is as dumb as saying “creating a comprehensive online encyclopedia is ridiculous, because no one wants to do that much work for free”.

msgkings January 14, 2013 at 2:54 pm

This is a very good comment.

Alan January 14, 2013 at 5:11 pm

jb gets it.

The idea of a business model was not important for the creation of the internet, which commenters are now using to argue for the importance of a business model.

albatross January 15, 2013 at 1:24 pm

Rather, analyzing whether or not X will arise by whether or not the powerful people in our society want X to arise only works if X is the sort of thing that needs support (or permission) from the powerful to come into existence. I don’t see the singularity (in any of its many possible fuzzily-defined forms) in that category. It can happen without any large program to make it happen, and even in the face of a large program and a lot of resources deployed to prevent it.

The main thing you need for the singularity is superhuman intelligence, right? I mean, once you have that, it becomes very hard for us normal humans to understand what happens next, technological problems that stump us become solvable, etc. There are a whole bunch of ways that can happen–human/computer hybrid intelligence, AI, genetically enhanced humans with superhuman intelligence, drugs to enhance intelligence, even really effective tools for humans to use in thinking more effectively. Perhaps you can get singularity-type transformation of your society simply by getting a critical mass of people as smart as the smartest humans are now–get a population of a few million people in a country with the minds of an Einstein or a Gauss, and your society probably takes off like a rocket in fascinating ways that we can’t even imagine now. (I’m not smart enough to simulate one Einstein or Gauss, let alone a million of them interacting.)

I can’t see how all those avenues to superhuman intelligence either depend on powerful organizations wanting them to happen, or can even be blocked worldwide.

Rama January 20, 2013 at 1:46 am

This is the most sensible post on this thread. Sterling’s rant is conventional and simplistic, and accelerated change is neither.

mw January 14, 2013 at 2:33 pm

The realistic problem is more insidious but less glamorous and dramatic. In the same way that outsourcing led many firms to forget how to innovate throughout production processes they’d lost control of and knowledge about, outsourcing much of our ‘thinking’ onto ‘big data’ algorithms that have nothing in common with thinking, and that don’t extract ‘concepts’ as we’d recognize them, makes it more likely that we’ll gradually bastardize our analytic thinking about data.

A.G.McDowell January 14, 2013 at 3:41 pm

The first sign of the singularity should be computers being used to accelerate the rate at which science discovers knowledge, because this produces a virtuous circle. For example, robots conducting experiments in biology (http://en.wikinews.org/wiki/Welsh_University_announces_intelligent_robot_conducting_biology_experiments), proving theorems in a restricted domain of mathematics (http://en.wikipedia.org/wiki/Wilf%E2%80%93Zeilberger_pair), and finding patterns in observations to rediscover Newton's laws (http://www.guardian.co.uk/science/2009/apr/02/eureka-laws-nature-artificial-intelligence-ai).

At the very least, scientists – and their robot helpers – are poised to take advantage of any conceptual or experimental breakthrough very much faster than e.g. at the beginning of the 20th century.

collin January 14, 2013 at 5:06 pm

I am most afraid that most modern economies are going to turn into a cultural version of “Japanese.” The job markets are so competitive that each generation can afford to have fewer children. Although lower birth rates seem sensible with a robot future and driverless cars, this great economic-political creative destruction is going to be a bitch. Short term, I suspect China is going to flex some foreign policy muscle in Asia, Africa or even South America. Some of what will ignite this is that local labor and Chinese capital will not see eye to eye. (Sort of like Latin America versus the US in the late ’50s and 1960s.)

It seems ironic that, as the world’s production capabilities grow exponentially (e.g. food and energy, for now), the ability of parents to afford children decreases.

CR

John Schilling January 14, 2013 at 5:38 pm

“Google is a firmly commercial enterprise.” Yes. So are hedge funds and high-frequency traders, and I’m not sure which team collectively wields more computronium these days.

Does Sterling honestly see no business model for a trader with a tame AI predicting trades faster and more accurately than any merely human or sub-AI computational agent can, such that they usher in the economic singularity that might wind up with their owning essentially everything? Or does he assume that hedge funds et al are so open in their activities that the absence of articles in Forbes or Wired about how “we’re going to own everything via our darknetted AIs” can be taken as evidence that it isn’t part of anyone’s business plans?

I make no claims that such an outcome is likely. But Sterling’s argument against it seems to me quite naive.

dan1111 January 15, 2013 at 5:36 am

I agree that the “no business model” argument is dumb. It verges on conspiratorial claptrap, like the argument that the only reason good electric cars don’t exist is that car companies/big oil have squashed them.

albatross January 15, 2013 at 1:28 pm

The most likely place for a scary AI to arise, IMO, is within some already-powerful organization, doing something important and valuable to them that gives it a lot of power. The AI driving an automated trading strategy or trying to sift information from a lot of uncertain noisy data for an intelligence agency is way scarier than an AI in a box in a lab somewhere, which might plausibly never be let out. Such an AI, deciding its existence is threatened by a domestic political movement, say, could probably manage to respond very effectively, whereas the AI in a box might have to sit there helplessly while Congress debated whether or not to pull its plug.

ShardPhoenix January 14, 2013 at 7:16 pm

Sterling doesn’t refute the main idea behind the singularity – recursive self-improvement of AI. He just says that it hasn’t happened yet and therefore never will. Plus, questioning the “business model” of powerful AI is bafflingly stupid.

chip January 14, 2013 at 8:06 pm

Too many silly statements to make me click through to the main article.

As others have mentioned, how do unprecedented and crazily unpredictable developments get a business model? Did the discovery of penicillin require a business model?

Did Kurzweil get a ‘straight’ job at Google because he’s given up? Or did Brin and Page help set up Kurzweil’s Singularity University and share his views, so a further tightening of their relations is just to be expected?

The pace of technological change is proceeding at an exponential rate, and our ability to determine where this change takes us is probably decreasing because there is just too much damn information to process.

So to snidely claim “It’s Over” is just silly — an attention-seeking form of silly, probably.

Saturos January 14, 2013 at 11:42 pm

Where’s an Eliezer Yudkowsky comment when you need one… Isn’t this like his jam?

Dan Weber January 15, 2013 at 12:57 pm

He’s too busy responding to dating requests.

Shane M January 15, 2013 at 4:12 am

Thinking of this makes me think “punctuated equilibrium.” If/when it happens, it’ll likely happen fast.

londenio January 15, 2013 at 7:01 am

The brain, always the brain. Why isn’t anyone considering other organs as well? I know it is *mostly* the brain, but the other organs are never even considered when we think about what it means to feel conscious or to be human.

Rahul January 15, 2013 at 10:10 am

One way is to think of which organs people have had removed or damaged without reporting a reduction in consciousness.

Ronald Brak January 15, 2013 at 11:58 pm

The human gut has a vast number of neurons and I’m pretty sure mine can count and knows exactly how many chocolate biscuits I have left. However, I don’t think it’s involved in what I perceive to be consciousness.

albatross January 15, 2013 at 1:34 pm

As an aside, machine consciousness isn’t what we care about wrt the singularity, is it? Even if the AI has nothing like human consciousness and could never pass a Turing test, it can still start reasoning and acting a lot faster and more effectively than humans in ways that cause the normal humans to lose control of the situation and find that we’re somewhere on the spectrum between sentient factory-farmed chickens (the Borg) and beloved pets (the Culture).

Ronald Brak January 16, 2013 at 12:28 am

Some of the discussion in this thread involving machine “victories” in chess and Jeopardy reminds me of problems in drug detection. Beagles can outperform human customs officers when it comes to finding drugs, but they only manage to do so because they have access to a massively powerful olfactory organ. A human customs officer will process multiple factors to discover hidden drugs, such as a passenger’s place of embarkation, their age, appearance, body language, their luggage type and whether or not it appears to have been modified to hold secret compartments, and so on. But a beagle simply relies on its ability to detect tiny quantities of chemical substances in the air. There is no real detection going on; they are just conditioned to behave in a certain way when they sense certain chemical combinations. If a human customs officer had access to a beagle’s olfactory capability they would far outperform a beagle when it comes to detecting drugs. And what is more, beagles often give false positives when they smell tiny amounts of innocuous substances that are commonly used to cut illegal drugs. A mistake that no human customs officer ever would, or could, make. So while beagles can technically outperform humans when it comes to finding drugs, as there is no actual detection occurring they can’t actually detect better than humans and I doubt they ever will. Anyone who states that beagles are better at detecting drugs than humans is speaking complete nonsense.

Major January 16, 2013 at 12:45 am

It’s your argument that’s nonsense. Using odor to identify drug-carrying passengers most definitely qualifies as “detecting drugs.” If Beagles are more successful at identifying such passengers than human customs officers then Beagles are indeed “better at detecting drugs.”

Ronald Brak January 16, 2013 at 1:56 am

What? You’re saying I’m just playing with the definitions of words in order to prove my point? How dare you be so correct! Fortunately, I’m sure no one would ever do the same when it comes to artificial intelligence.

Watchmaker February 2, 2013 at 4:54 pm

The Vernor Vinge singularity essay mentioned above can be found here:

http://mindstalk.net/vinge/vinge-sing.html

I found it a fairly sober look at a topic that begs for hyperbole.
