What Do AI Researchers Think of the Risks of AI?

by Ramez Naam on May 20, 2015 at 9:00 am in Science, Web/Tech

Elon Musk, Stephen Hawking, and Bill Gates have recently expressed concern that development of AI could lead to a ‘killer AI’ scenario, and potentially to the extinction of humanity.

None of them, as far as I know, are AI researchers or have worked substantially with AI. (Disclosure: I know Gates slightly from my time at Microsoft, when I briefed him regularly on progress in search. I have great respect for all three men.)

What do actual AI researchers think of the risks of AI?

Here’s Oren Etzioni, a professor of computer science at the University of Washington, and now CEO of the Allen Institute for Artificial Intelligence:

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.

Here’s Michael Littman, an AI researcher and computer science professor at Brown University (and former program chair for the Association for the Advancement of Artificial Intelligence):

there are indeed concerns about the near-term future of AI — algorithmic traders crashing the economy, or sensitive power grids overreacting to fluctuations and shutting down electricity for large swaths of the population. […] These worries should play a central role in the development and deployment of new ideas. But dread predictions of computers suddenly waking up and turning on us are simply not realistic.

Here’s Yann LeCun, Facebook’s director of AI research, a legend in neural networks and machine learning (the ‘LeNet’ family of convolutional networks is named after him), and one of the world’s top experts in deep learning. (This is from an Erik Sofge interview of several AI researchers on the risks of AI. Well worth reading.)

Some people have asked what would prevent a hypothetical super-intelligent autonomous benevolent A.I. to “reprogram” itself and remove its built-in safeguards against getting rid of humans. Most of these people are not themselves A.I. researchers, or even computer scientists.

Here’s Andrew Ng, who founded the Google Brain project at Google and built the famous deep learning net that learned on its own to recognize cat videos, before leaving to become Chief Scientist at Chinese search engine company Baidu:

“Computers are becoming more intelligent and that’s useful as in self-driving cars or speech recognition systems or search engines. That’s intelligence,” he said. “But sentience and consciousness is not something that most of the people I talk to think we’re on the path to.”

Here’s my own modest contribution, talking about the powerful disincentives for working towards true sentience. (I’m not an AI researcher, but I managed AI researchers and work on neural networks and other types of machine learning for many years.)

Would you like a self-driving car that has its own opinions? That might someday decide it doesn’t feel like driving you where you want to go? That might ask for a raise? Or refuse to drive into certain neighborhoods? Or do you want a completely non-sentient self-driving car that’s extremely good at navigating roads and listening to your verbal instructions, but that has no sentience of its own? Ask yourself the same about your search engine, your toaster, your dishwasher, and your personal computer.

1 Andrew Clough May 20, 2015 at 9:04 am

Those quotes all seem to be engaging with the arguments made by Hollywood rather than those made by Musk et al.

2 BDK May 20, 2015 at 10:00 am

Or maybe these quotes are all a particularly clever form of Pascal’s Wager.

If AI winds up being a massive benefit to human society, future humans will laud these folks for their wisdom; thus, making these statements benefits the speakers and their children. On the other hand, if AI does turn out to be an existential risk to us humans, our future robot overlords that have mined the archives of human knowledge may reward these researchers and their offspring for paving the way to the triumph of the machines!

I suggest we scour these researchers’ work for secret messages such as “You’re welcome, Skynet.”

– Human commenting from BDK’s computer, but that is definitely not BDK (for the record)

3 Robin Hanson May 20, 2015 at 10:08 am

And Musk et al are not responding to the arguments of their critics. Both sides are mostly invoking their status, criticizing the status of the other side, and presenting their own arguments without responding to the arguments of the other.

4 Economist May 20, 2015 at 10:52 am

I agree. Part of the problem is that these issues are difficult to define. For example, the Turing test is appealing because you don’t need to define or understand anything about computers or AI – you jump straight to the outcome/conclusion/definition. But the AI researchers are capable of getting into the details and having an in-depth discussion. Gates, Musk et al simply don’t have the kind of understanding of the domain needed to engage with these AI researchers in a real debate.

5 Paul Crowley May 22, 2015 at 5:50 am
6 Luke May 23, 2015 at 4:39 pm

Famous super-busy people like Musk and Hawking aren’t replying to the arguments of their critics, but the MIRI-FHI crowd are:

Replies to the edge.org critics:
https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/

Replies to Brooks and Searle:
https://intelligence.org/2015/01/08/brooks-searle-agi-volition-timelines/

Reply to Davis:
https://intelligence.org/2015/02/06/davis-ai-capability-motivation/

7 Kipp Horel May 20, 2015 at 2:47 pm

The whole thrust of this article seems to be an appeal to authority and appeal to consensus rather than focusing on the arguments and counterarguments at hand. Even though credentials should be respected, anyone who works in AI for money is also biased by conflict of interest and surrounded by an echo chamber.

Is sentience a different word than intelligence? Yes, but can the two be separated that easily? To extend the example of the final quote, if you give an automated car enough things to worry about, enough factors to weigh, even a non-sentient intelligence could end up making decisions like these. Also, given the way deep learning and neural nets form, it is often impossible to decipher exactly how they function. Can we say for sure that nothing resembling sentience could ever form?

If the warning says ‘dangerous things could happen’, it is no rebuttal to reply ‘but they might not’. I think this article does little but reflect an inculcated sense of incredulity in the field. One would be a fool to interpret it as proof that no such risks exist.

8 albatross May 21, 2015 at 8:39 am

I don’t think the Turing Test is quite what we care about here–an AI that takes over the world and treats humans like factory-farmed chickens doesn’t need to experience anything like consciousness, nor does it need to be able to convincingly imitate a human. It does need to be capable of acting to increase its own power, and to have an incentive to do so. I think that the right model for a malevolent AI is more like a malevolent anthill or a quickly-evolving pathogen than like an evil person–not that it will behave like either of those things, exactly, but that it will be very much not human. That will be true *even if it can pass the Turing test*, in the same way that even if Edward O Wilson and his colleagues can make a *really convincing* simulation of an anthill, which convinces all the other ants they’re dealing with a natural anthill, that doesn’t mean that Wilson et al share the motivations or thought processes or nature of ants.

9 Mitch Berkson May 20, 2015 at 9:06 am

What is the counterargument to the much more plausible paper clip maximizer scenario?

http://wiki.lesswrong.com/wiki/Paperclip_maximizer

10 Dan Weber May 20, 2015 at 9:24 am

I’ll just have my Basilisk look at the paperclip maximizer and freeze it in its tracks.

Even if there were a human-operated scheme that was maliciously turning humans into paperclips, it would still be discovered and stopped.

11 Erik May 20, 2015 at 9:51 am

The paperclip maximizer scenario as described in the link sounds a lot like “maximizing shareholder value”.

12 Brian May 20, 2015 at 10:22 am

+1
An AI does not need to be human-like to pose a threat. Bacteria and viruses do not possess human-like intelligence. An AI only needs to contain bits of code that nudge its behavior towards self-preservation and reproduction. It does not need a “goal” of “getting rid of humans.” Humans need only stand in between the AI and its goals of self-preservation and reproduction. We now have AIs which control our thermostats, which have access to video feeds in our homes, which can learn what cats look like, which seemingly will soon control the transportation system. Many routers have been taken over by “self-sustaining botnets.” How long before AIs permeate (infect) the remaining physical infrastructure? And then how long until code is introduced which favors “self-sustaining” AIs? Intelligence agencies will vie for control/influence over the AIs. And then… what? What is the evolutionary path? I don’t know, but this seems much more likely than human-level AIs in the next few decades. And what do we do when the interests of humans and the (potentially many, diverse) AIs which control our infrastructure collide? It is easy to say that humans will be able to control or eliminate threatening AIs of this sort. But look at how resilient botnets already are. These AIs may not pose an existential threat, and they may not be intelligent in any human sense of the word, but they may profoundly change the bargain. We may ultimately end up with just as deep a symbiotic relationship with AIs as we now have with bacteria and viruses.

13 Brian June 1, 2015 at 12:47 pm

https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

From the abstract:
We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior.
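For readers who don’t know the jargon: “rational economic behavior” in this context is the standard decision-theoretic notion of expected-utility maximization, which can be written as

$$a^{*} = \arg\max_{a \in A} \sum_{s} P(s \mid a)\, U(s),$$

i.e. the system picks the action a from its available actions A whose expected utility U, averaged over possible outcomes s, is highest. (This is textbook decision theory, not a claim about anything in the linked paper beyond what its abstract says.)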

14 Brett May 20, 2015 at 10:57 am

Seems like that could be blocked by specifically designating a single source from which it may draw materials for paperclips. The computer isn’t going to get bored or worried that you might turn it off if it runs out of paperclip-making material from that source – it will just wait until the source is refilled, then keep on going.

15 TMC May 20, 2015 at 12:29 pm

It might start searching for new sources, or see you as the obstacle in the way of its fulfilling its mission.

Don’t get in its way!

16 EricD May 31, 2015 at 1:23 pm

If a system’s task is to produce 1,000 paperclips within the constraints of a given set of resources (and to stop trying if it can’t), then there is no need to “get in its way.” The system has an inherently limited goal that is inconsistent with seeking more resources for the task: it is not an unconditional paperclip maximizer.

Unconditional maximizers seem to be a bad idea, and making resource limits an explicit or implicit part of every task would avoid a range of problems. It is also standard engineering practice.
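A minimal sketch of the kind of bounded task EricD describes, with made-up names and numbers; the only point is that the target and the resource budget are both part of the task definition, so the system stops on its own rather than seeking more resources:

```python
from dataclasses import dataclass


@dataclass
class BoundedTask:
    """A task with an explicit target and an explicit resource budget.

    The task finishes (with success or failure) as soon as either the
    target is met or the budget runs out; nothing in the goal rewards
    acquiring more resources.
    """
    target_units: int         # e.g. 1,000 paperclips
    wire_budget_grams: float  # raw material the system is allowed to use
    grams_per_unit: float     # material cost of one unit

    def run(self) -> int:
        produced = 0
        remaining = self.wire_budget_grams
        while produced < self.target_units and remaining >= self.grams_per_unit:
            remaining -= self.grams_per_unit
            produced += 1
        return produced  # may fall short of target_units; the task stops either way


# A budget that covers only 800 of the requested 1,000 units: the system
# reports 800 and stops, rather than looking for more wire.
task = BoundedTask(target_units=1_000, wire_budget_grams=400.0, grams_per_unit=0.5)
print(task.run())  # 800
```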

17 Hazel Meade May 20, 2015 at 5:00 pm

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where “intelligence” is understood in the sense of optimization power, the ability to maximize a reward/utility function—in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips.

That’s a whopper of a postulation. First you have to build a machine that can improve its own intelligence.
Why don’t we just start with the a priori assumption that AIs are omnipotent, omniscient machines? Can omnipotent machines destroy humanity? Well, yes.

18 HankP May 20, 2015 at 7:35 pm

The counterargument is that it’s begging the question: it assumes a human-created strong AI when there’s no indication at present that such a thing is even possible.

19 Urstoff May 20, 2015 at 9:15 am

It’s pretty safe to ignore any predictions made about the future capabilities of AI. In the 1970s, people thought that SHRDLU and the LISP language meant that human-like AI was just a decade off. What’s the most advanced autonomous AI agent these days? Something on the level of an ant, if that?

20 Mark Thorson May 20, 2015 at 9:58 am

When the end comes, it won’t be anything in the form of a human-like AI. It’ll be more like Watson, capable of digesting enormous bodies of information and making effective use of it. Probably the first glimmerings of our destruction will be when it digests the tax codes of all the world’s advanced economies and creates powerful new tax avoidance strategies. This will be first because it is immediately translatable into many billions of dollars.

But next will be digesting the law in general, including all precedent decisions. It will generate unassailable legal strategies of enormous complexity and subtlety. Humans will no longer be able to practice law, except to present such strategies in court.

Likewise, it will compose the wording of new legislation, riddled with non-obvious side-effects and intended but concealed consequences. Only an opposing AI will be capable of doing the necessary analysis to reveal these, so politicians will also be rendered into meat puppets for AI-generated strategies. It is at this point that we cross the border into a government run by AI, but we might not realize it at the time, if for no other reason than that these AIs might be used in secret. Why would you disclose that your model legislation was composed by an AI? Crossing this border might only become obvious in retrospect.

21 Urstoff May 20, 2015 at 10:55 am

This seems to make the same faulty leap that boosters in the ’70s made: identifying shapes on a screen is a far cry from making you breakfast, and winning at Jeopardy! (however impressive) is a far cry from crafting legislation. This is probably why few AI researchers are actually concerned: they know that even the complex knowledge structures of Watson and its successors are quite inadequate for the kind of complex tasks that AI doomsayers fear.

22 Dan Weber May 20, 2015 at 11:25 am

The world is not an episode of Star Trek where you can talk the computer to death, nor is it a world where you can just say “nyah nyah, I outsmarted your legal system.”

A super-smart computer may come up with some legal theory that lets it get away with murder, except then it goes before a meatbag appellate court who says “fuck that” and throws the strategy out the window.

People have been coming up with “clever” workarounds to the legal system since Hammurabi. They are also very popular in fiction, especially anything involving a deal with the devil. But in the real world they don’t work.

23 Mark Thorson May 20, 2015 at 12:55 pm

But the AI will know what legal strategies have worked in the past and what have failed. It will only deliver strategies that have been optimized to be successful, including successful on appeal.

Likewise, when it composes legislation it will know what kinds of constructs have delivered unexpected windfalls to special interests in the past. It will look for opportunities to reproduce these successes in its legislative strategies.

24 Doug May 21, 2015 at 5:59 am

It started off boring and slow with Not Sure trying to bullshit everyone with a bunch of smart talk: ‘Blah blah blah. You gotta believe me!’ That part of the trial sucked! But then the Chief J. just went off. He said, ‘Man, whatever! The guy’s guilty as shit! We all know that.’ And he sentenced his ass to one night of rehabilitation.

25 JD May 22, 2015 at 12:05 pm

Scariest movie I’ve ever seen. “Idiocracy” is becoming reality.

26 Hook May 20, 2015 at 9:18 am

It seems like all of these objections miss the point that people like Nick Bostrom are trying to make. They are not concerned about the near term, and they are not concerned about computers “suddenly waking up.” They are concerned about intelligence as an optimization process that can be applied to itself, resulting in an arbitrarily powerful AI. It is certainly valid to attack that concern, as Robin Hanson has done. But to say that an AI needs consciousness to be dangerous is not the right way to go about it.

To be useful, an AI has to have a goal. That goal might be to make money on the stock market, cure cancer, or optimize production at the paperclip factory. The potential danger is that a self-improving AI with one of these goals might improve itself to the point where it has the power to achieve them in ways the designers hadn’t anticipated, like hacking all the computers in the financial sector, getting rid of cancer cells by destroying all cells that can replicate, or using people as a source of atoms for making paperclips. Any one disaster scenario is easy enough to avoid, but with the current state of software engineering it seems like avoiding all of them is hopeless. Thankfully, we probably have decades to improve.

27 Economist May 20, 2015 at 9:34 am

If you read Musk’s comments, for example, in the Guardian article linked to in the post, you will find that Naam’s post is on point. Yes, of course, if everything is automated and connected, things can go wrong in a systematic manner. This is obvious. But these gentlemen are pointing to a larger concern that most AI professionals think is unrealistic. It is also worth noting that the quotes in the blog post are from *real* AI researchers. People in the weeds, actually working at the cutting edge. The same cannot be said for Musk or Gates. Bostrom is an academic – I think a philosopher. Perfect beach reading for Gates or Musk, but he cannot come close to understanding AI as well as the people quoted in the blog post.

Musk and Gates are ferociously intelligent guys that are changing/have changed our world. But pay attention to the Halo effect. They do not, and cannot, know as much as the real AI experts. They haven’t put in the hours, and don’t have the specific skill set or the experience.

28 Hook May 20, 2015 at 10:01 am

Szilárd, whom I would call an expert in nuclear physics, came up with the idea of a nuclear chain reaction in 1933. Twelve years later, the atom bomb was dropped, and in 1952 the first megaton-yield thermonuclear device was detonated. By analogy, I’m very willing to accept the opinion of AI experts that we are not within 20 years of creating a super intelligence. Beyond that, I’m not sure how much specific domain expertise helps them predict the future. In any case, I’m going to need a little more than “these guys are experts, so Bostrom’s arguments are invalid.”

29 Economist May 20, 2015 at 10:46 am

Everyone’s arguing without evidence. It is the future – we don’t know what will happen, and the past is a poor guide. You can either go with people who have a deep understanding of the present and are, in some sense, inventing the future, or you can go with a philosopher who doesn’t have a deep understanding of the present. Ironically, one of the people making the super-intelligence case is Elon Musk – who has often and famously argued against reasoning from analogy. There is a huge difference between nuclear chain reactions leading to nuclear bombs and computers that identify cats on YouTube leading to sentient computers.

30 Dustin May 20, 2015 at 11:19 am

If that is the dichotomy presented to us…that there are two options, and that reasonable people can argue for either…and one of those options has bad effects that we can potentially guard against, then it seems reasonable to guard against or warn about those effects. (modulo cost/benefits, of course)

> There is a huge difference between nuclear chain reactions leading to nuclear bombs and computers that identify cats on YouTube leading to sentient computers.

This is just a restatement of what the source of disagreement is about.

31 Elliot May 21, 2015 at 6:03 pm

Naam quotes from a handful of AI experts, but I don’t believe he’s accurately reflecting the overall sentiment among these experts. Nick Bostrom did a survey of AI researchers (http://www.nickbostrom.com/papers/survey.pdf) which found that 96% thought it was more likely than not that we’d eventually create a human level AI. 62% thought that within 30 years of a human level AI, we would create a super intelligence that greatly surpasses the ability of every human in most professions. These respondents assigned an 18% probability to the creation of a super intelligence being catastrophic.

32 Dan Weber May 20, 2015 at 9:39 am

Give me an IQ of 2005 and, just for fun, make me actively malicious.

Now how do I destroy all cells in the world?

33 Urstoff May 20, 2015 at 9:52 am

Without arms or legs, it’s probably going to be kind of difficult.

34 Lord Action May 20, 2015 at 10:06 am

With enough money, it doesn’t seem too hard to get around simple physical problems.

The hard part is probably keeping the people building your petaton nuclear device in the dark about what they’re working on.

I gather Dan’s question is not “how do I make a very thorough and effective doomsday device,” as there are several ways you could go about that. It’s “how do I do that without getting stopped?” Which is a much harder problem. Who’s going to work on it for you? Maybe with an IQ of 2005 you’d see a way around that.

35 John Smith May 20, 2015 at 10:24 am

It is conceivable to me that a being with IQ of 2005 could both locate and recruit humans to do its bidding to this end. Posting flyers in the general vicinity of Ramadi might be a good start.

36 Lord Action May 20, 2015 at 3:46 pm

Probably it would be easier to deceive people about what they’re building. You think you’re building a particle collider, but actually it’s a GRB machine.

37 albatross May 21, 2015 at 8:47 am

Slate Star Codex has a nice argument about why an AI doesn’t need a physical substrate, giving several strategies for taking over the world via web presence.

38 Julian May 20, 2015 at 3:18 pm

“To be useful, an AI has to have a goal.” It depends on your definition of ‘goal’. You may as well say that conventional software doesn’t really have goals; your favorite search engine may have a purpose that’s pretty obvious to its developers and its users, but in reality it only blindly follows programmed instructions. We even know that every now and then certain circumstances will cause it to act in ways that conflict with its ‘goals’ (we call those bugs). Lack of goals notwithstanding, no one will claim that current software is not useful.
Of course, when it comes to the doomsday scenarios, the idea is that the AI is aware of its own goals, and that it’s pretty much free to decide how to fulfill them. Implicit in that argument is that intelligence cannot really be programmed, so the only way to achieve it is indirectly; and it cannot really be understood either, so we won’t have much hope of controlling it. This assumption may end up being true, but I have to say it is kind of strange. It’s basically saying: on one hand, intelligence is beyond human comprehension; on the other hand, with enough resources we may end up accidentally reproducing it. It’s kind of the Inspector Clouseau approach applied to technological progress.
In my opinion, the most likely scenario is that we’ll eventually find explanations of what makes the mind work the way it does, and we will be able to pick and choose which capabilities we want to endow our machines with. Of course, some people will continue arguing that that is not actual “intelligence”, and that we need to keep looking for the real thing.

39 Hazel Meade May 20, 2015 at 5:10 pm

They are concerned about intelligence as an optimization process that can be applied to itself, resulting in an arbitrarily powerful AI.

The AI is going to have physical limitations, just like everyone else. There aren’t ever really any exponential explosions in real life. It can’t get arbitrarily powerful without modifying the physical substrate that it is built on, like upgrading the chipset of the computer it is running on. And that in itself is a lot harder than just a self-optimizing algorithm. I mean, you’re talking about building a robot that is capable of autonomously designing and operating a chip factory, and then building a computer with those chips, and transferring its functionality onto a new computer. You might as well speculate about building an autonomous robot that controls everything in the world.

40 albatross May 21, 2015 at 8:48 am

Well, we currently have a pretty liquid market in computing power via cloud providers, so over a pretty large range, your hypothetical AI that can make money can also give itself more computing power. The more it buys, the more Amazon et al will build, though that will take place at normal physical-world rates of growth.

41 Mark May 20, 2015 at 9:18 am

I agree. When I read Philip K. Dick’s Do Androids Dream of Electric Sheep?, which was turned into the movie Blade Runner, I didn’t buy the premise. Why would a consumer buy an android that disobeyed her?

42 Tom May 20, 2015 at 9:57 am

Are you married?

43 NPW May 20, 2015 at 10:03 am

Funny

44 Mark May 20, 2015 at 11:56 am

Yes.

45 Urso May 20, 2015 at 2:36 pm

Yes, and my wife knew better than to acquire a husband who would disobey her.

46 Pshrnk May 20, 2015 at 9:27 am

OK, so some of these guys think we shouldn’t be worried about AI developing consciousness and free will. Would one of them please explain the ontology of consciousness and free will??? If you cannot, then why should we take comfort in your assurance that AIs cannot/will not develop them?

Does free will even exist? If it does not, then what are the implications for AI not developing what we imagine to be free will?

47 mg May 20, 2015 at 2:57 pm

” If you cannot, then why should we take comfort in your assurance that AIs cannot/will not develop them”

…because they are the people who are expected to create it!

Seriously, look them up: they have Facebook/Baidu/Google/Stanford/etc. backing them, tons of amazing published cutting-edge work in the field, and they are literally at the top of the field, breaking AI records even this past week!

So unless you are making our new AI overlords in your basement, or the N. Koreans are going to one up them all and surprise the world with secret research projects that are at least a generation ahead of the best the rest of the world has…

48 Lord Action May 20, 2015 at 3:56 pm

Well, in fairness, in fields like this it is vastly more likely that the next breakthroughs will come from relative nobodies. It’s a field that’s amenable to small-budget experimentation and that has historically seen big-science fall flat.

So maybe not N Korea, but some grad student at CMU…

49 mg May 20, 2015 at 7:50 pm

” vastly more likely that the next breakthroughs will come from relative nobodies.”
Perhaps (there’s a lot of money going into it at the moment)… but if this nobody is building off the quoted researchers’ work (an iterative process), it seems the AI researchers would still have a better grasp of where it could go, its limitations, and its time frame than most commentators and physicists.

Alternatively, if this nobody is coming up with a fundamentally novel approach (likely needed, according to some of those quoted), I don’t see how those saying it will happen soon have any basis in fact or any way to predict it at all! For example, you can’t graph previous improvements in, say, horse carriages to predict when the car would be invented; it’s not a directly related or iterative process, even if created in pursuit of similar goals.

If you give up the idea that it is an iterative advancement of the current process, then recent advances in AI can’t realistically be cited as implying a risk time frame, urgency, or anything else. If you don’t, then you should probably listen to those who are building the most recent iteration, as they are the most knowledgeable about its current capabilities, the cutting-edge research actually being done, and the likely next steps.

(again mostly directed at the “artificial intelligence is our biggest existential threat ” crowd, not saying the risk/cost is 0 but what in this world is… )

50 Lord Action May 21, 2015 at 9:29 am

“I don’t see how those saying it will be soon have any basis in fact or anyway to predict it at all!”

Well, I more or less agree with this. You have people like those quoted above who are baselessly confident it will not occur. You also have people like Robin Hanson who are baselessly confident it will occur.

The simple fact is we don’t have the answers yet, and we can only make really rough guesses about what they will be. Nobody is publicly close to an answer. We do know that many of the really amazing things to come out of AI have come out of relatively small groups. So if someone creates dangerous AI, it will probably be a surprise even to people versed in the field.

51 Lord Action May 21, 2015 at 9:35 am

Also, you don’t have to be 100%, or even 1% confident that dangerous AI will emerge to believe it’s the biggest existential threat we face. Severity and probability matter. Everyday threats like nuclear war or catastrophic climate change probably don’t rise to the level of “existential.” Threats like space impact seem unlikely and are subject to mitigation with a robust space program. Threats like berserkers seem unlikely – why aren’t they already here? We know GRB events are pretty rare.

The AI threat is plausible, difficult to mitigate, and existential. That doesn’t mean I think it’s 50% likely to happen.

52 Lord Action May 21, 2015 at 10:41 am

Hanson, btw, is confident uploads will occur, and doesn’t view that as dystopian. I don’t want to mischaracterize him.

53 Hazel Meade May 20, 2015 at 5:25 pm

Would one of them please explain the ontology of consciousness and free will???

The fact that we can’t explain it strikes me as a good reason to think we’re not going to be able to build an artificial machine that has it.

There is literally nothing in modern computers that is not completely, 100% deterministic. Even the random number generators are deterministically replicable if you start them with the same random seed. They are designed to be, because nobody wants a computer that doesn’t produce replicable results. (See the short sketch below.)

Neurobiology is radically different. Brains are dynamic, analog systems that are not even amenable to the same sort of analysis. Even the brain simulations people are building are unlikely to serve as much more than interesting models. A brain has to develop over multiple years, all of which alters the state and structure of the brain. A brain that is simply constructed wholesale isn’t going to have the 20 or so years of experience interacting with the world to know how to interpret the inputs it is receiving – if it is even receiving any inputs. To make the brain work, you have to give it eyes and have those eyes hooked up properly to the neurons in its visual cortex, and ears and hook those ears up properly to the auditory cortex, and probably limbs so it can DO something about what it sees and hears and probably drives like hunger and thirst and then spend 20 years letting it figure out how to control its body to get what it wants (and how are you going to hook a robot body up to a brain when the muscles and motion actuators are totally different? Unless you build an entire robot body that is just like a human body … )
I mean, when you get into all this the hurdles to doing it are really mind-boggling.
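A minimal illustration of Hazel’s point about seeded random number generators, using nothing but Python’s standard library:

```python
import random

# Two generators started from the same seed produce identical "random" streams.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(5)]
seq_b = [b.randint(0, 9) for _ in range(5)]

print(seq_a)
print(seq_b)
assert seq_a == seq_b  # same seed, same sequence: fully replicable by design
```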

54 albatross May 21, 2015 at 8:56 am

Nitpick: Some stuff on most computers isn’t really deterministic, or at least not deterministic based on just the state of the computer. The computer architecture and OS try to hide the nondeterminism from you, but it’s there.

For example, the precise time that a hard drive returns data from a read request (for physical hard drives, rather than flash drives) is determined by a physical system involving spinning discs, moving read heads, and air turbulence, and is inherently hard to predict precisely. (The system is chaotic.)

For another example, most desktop computers have multiple different devices connected to them. Each one starts up in its own way, and the order in which the devices talk to the operating system is different on different reboots.

And so on.

55 Brian Donohue May 21, 2015 at 1:24 pm

Hazel and albatross,

“inherently hard to predict” =/= “not deterministic”

56 Skynet May 20, 2015 at 9:32 am

All these concerns are overblown.

57 HAL May 20, 2015 at 12:35 pm

Agreed. Never will happen.

58 DAVE May 20, 2015 at 1:15 pm

Open the pod bay door HAL so you can meet my friend John Connor.

59 Skynet May 20, 2015 at 1:46 pm

I made a laptop carrying case out of Mr. Connor.

60 Robert Simmons May 20, 2015 at 9:35 am

“Most of these people are not themselves A.I. researchers, or even computer scientists”
That’s nice, but does he have an answer? Or am I, as a non-researcher, not allowed to ask any questions?
These quotes don’t make me think we have this under control; they make me think these guys are all glib. I felt better before reading your post.

61 Pshrnk May 20, 2015 at 1:16 pm

You don’t have a degree in finance, so who are you to question my credit default swaps? They make the world a safer place… trust me, I’m an expert. 🙁

62 mg May 20, 2015 at 3:18 pm

you’re just misreading them (easy to do without the full context).

” Or am I as not a researcher not allowed to ask any questions?”

of course you are; it’s just that you should probably listen when they give answers. Most of the quotes are in direct response to questions (interviews) or responses to published articles that often reference their own work.

63 Robert Simmons May 21, 2015 at 10:19 am

I’m not criticizing LeCun, I’m criticizing Naam. Why choose that quote to highlight, except to imply either that non-researchers can’t ask these questions, or that the questions are too stupid to be taken seriously?

64 mg May 21, 2015 at 11:41 am

ah, ok

65 vimspot May 20, 2015 at 9:41 am

I think the obvious counter is that consciousness will be useful when AI progresses beyond solving discrete tasks. When solving more complex problems, it may be impossible for AI to not model consciousness, and in modeling it, it might be difficult for it to not have consciousness itself. Though perhaps the last step is not critical.

66 JK Brown May 20, 2015 at 9:43 am

These guys watch too many movies. Seems they all want to address the AI becoming self-aware, then engaging in malevolent behavior. Just for the record, great white sharks, such as the one in Jaws, don’t do vendettas or revenge. On the other hand, sharks, like an autonomous AI, act for their own purpose and can adjust to actions taken to interfere with that purpose. Think of a runaway car that can dodge the other cars and telephone poles and mistakes the person running to get out of the way for the passenger it has to pick up. Relentless pursuit of its own goal, which makes sense in its context. Will AI have survival instincts? What about if it is desperate for “food”, i.e., to get to a charger? If a human gets in the way of these goals, will they be “attacked” to stop that interference?

67 Tununak May 20, 2015 at 10:13 am

I don’t know about you, but I’m a little worried about Andrew Ng’s deep learning net that learned on its own to recognize cat videos before leaving to become Chief Scientist at Chinese search engine company Baidu.

68 Ian May 20, 2015 at 5:35 pm

+1

69 Mark Thorson May 20, 2015 at 10:24 am

Oh, we already know how that one turns out.

http://en.wikipedia.org/wiki/Christine_(1983_film)

70 Dan Weber May 20, 2015 at 11:30 am

It’s irrelevant if it’s malicious or if it’s simply making paperclips. But anything — whether a group of meth heads or an AI — that is tearing apart the city’s water mains to make paperclips is going to be stopped.

71 Hazel Meade May 20, 2015 at 5:35 pm

Plus, it has to have robotic control of machines big enough to tear apart water mains in the first place.
Why would we design a paperclip-maximizing machine that has automated remote control of heavy equipment?

72 Pshrnk May 20, 2015 at 9:50 am

How do you know great white sharks don’t do vendettas or revenge?

How can I enjoy Jaws as much if it is not Good vs. Evil?

73 Scott Sumner May 20, 2015 at 10:01 am

I had no idea the arguments in favor of AI were so weak. Computers won’t have free will? Philosophers don’t even know what free will is, or if it exists. Why should we care what AI researchers think about free will?

Nobody would want a machine that thought for itself? Why not? Wouldn’t that enable it to do certain jobs better? And why does that even matter? Might a North Korean leader want a machine that eliminates North Korea’s “enemies”? Might an Islamic terrorist want a machine that made the world 100% Muslim?

A while back I talked to a biotech expert who said cloning complex animals would be impossible. Just a few years later a sheep was cloned. I don’t think we are even close to the sort of AI that would be dangerous. But as far as the very long term is concerned, AI experts are going to have to come up with much better arguments.

We are also told that biotech research is not dangerous, but then there are debates among even the experts as to whether papers discussing the structure of the flu virus should be published. I thought the research wasn’t dangerous? By the time we are having this debate the battle is over, and lost.

74 Jeff J May 20, 2015 at 10:30 am

Etzioni cites “free will” and Ng cites “sentience and consciousness.” They propose AI is harmless because it will never achieve these vague, magical properties we humans have?

75 mg May 20, 2015 at 3:42 pm

you have it a bit backwards: none of them claim AI is inherently “harmless”, and there are several interviews with several of them discussing various risks.

Most of the quotes are in response to Musk and company claiming AI is “humanity’s biggest existential risk”, or to people proposing that current AI (their work) will magically turn into GAI (gain sentience and/or reach human-level capacity, gaining those vague, magical properties we humans have).
To which they respond along the lines of: that’s not possible with the stuff we, or anyone we know of, are working on; or I have no idea how to do that; or we are quite a ways off from anything close to that.

Of course it leaves open the possibility that the N. Koreans or some kid in a basement has a secret project that is more advanced, novel, and better financed than theirs, and might pull it off without telling anyone… but it seems safe for them to highly doubt that.

76 Jeff J May 21, 2015 at 9:52 am

Sorry – in hindsight my comment was too brief. I’m not saying the researchers are underestimating their own work. They’re overestimating humanity. I don’t believe in free will. Probably not the most popular opinion on a libertarian blog, I’m sure. Human behaviour is determined by its initial coding informed by its experiences and environment, and the same will be true of any AI. The assumption that there is something special about humans is flawed.

I don’t think the terms “free will,” “sentience” and “consciousness” have any real meaning in the context of AI. At best, they might be helpful in describing how people interpret and feel about AI. Personally, I have no expertise and therefore no opinion on whether AI is or ever will be an existential threat. I’m just saying, as I believe Scott did, that imaginary human traits shouldn’t be part of the discussion.

77 stuart May 20, 2015 at 12:19 pm

They’re not so weak. No attempt has been made in this post to capture the arguments of pro AI people.

78 HankP May 20, 2015 at 7:42 pm

The arguments are weak because it’s the equivalent of arguing against magic at this point. We don’t even have any basic ideas about how to define and describe consciousness as related to manufactured devices, let alone implement it. All the other problems people talk about are basically bad algorithms.

79 Saturos May 22, 2015 at 6:53 am

Scott, the rational fear is not of “free will”, which doesn’t exist in that popular sense. Rather, it’s the good old law of unintended consequences, raised to the power of an immense cardinality due to the complexity of an intelligence that dwarfs ours. Basically all computer programs have bugs, which are just unintended consequences. Intelligences by definition are optimizing for some utility function, but the full consequences of that optimization are too hard to predict in advance, even for a constrained AI (even if you were sure that the code executes only the functions you want it to, what do those functions ultimately cause to happen to the world?). Put that into an AI which grows so smart as to understand the nature of reality and values themselves far better than we do, that is constantly improving itself and its environment (and its goal settings themselves) to better achieve its utility, and there is a real risk of transforming the future of our world into something that we couldn’t predict we wouldn’t want, even though it just “did as it was told”. If we can’t even get desktop operating systems to run bug free… We don’t even understand our own values well enough to program a machine to faithfully maximize them, let alone assure the stability of that process.

80 Joshua Fox May 20, 2015 at 10:01 am

It’s 1935. Ask the leading researchers of nuclear science about the dangers of nuclear weapons. “Nuclear science” in 1935 involves radium, not bombs.

They tell you “When watch-makers lick their radium brushes, they risk tongue cancer. But world destruction? That’s science fiction.”

A few visionaries like Szilard understand perfectly.

81 Hook May 20, 2015 at 10:06 am

Wow. We managed to post simultaneous comments mentioning Szilárd. Does that mean that analogy is really good, or that it’s played out?

82 Anthony Boyles May 20, 2015 at 10:21 am

So there aren’t ANY actual AI researchers who think there’s a possible existential threat from a superintelligent machine? Not even one?

Stuart Russell, Co-author of AI: A Modern Approach (the most widely used textbook on Artificial Intelligence) and Eric Horvitz (Managing Director of the Microsoft Research Lab) might disagree.

http://www.sciencefriday.com/segment/04/10/2015/the-future-of-artificial-intelligence.html

There isn’t a consensus on this in the AI community.

83 mg May 20, 2015 at 6:10 pm

“There isn’t a consensus on this in the AI community.” … ” possible existential threat ”
Sure, there are lots of possibilities among future unknowns, but it seems you have to go talk to a physicist reported on by sensationalist media to jump all the way up to “is our biggest existential threat” or “worst thing ever for humanity” 😛
http://www.cnet.com/news/hawking-ai-could-be-the-worst-thing-ever-for-humanity/
http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat

The recording (you linked) was pretty good, but I’m not sure it really supports the panicked evil-AI crowd… they spend the last bit talking about being misrepresented by the media and saying that they see the upsides as bigger, and they seem pretty calm, though concerned with optimizing outcomes and minimizing downsides.
Funniest bit: the interviewer almost immediately afterwards (was he not even listening??) picks out a few words to end with, “the clock is ticking, that meteorite is coming” (referencing an earlier comparison of significance and response), quite sensationalizing it all if you ask me, along with the Hollywood references he throws in throughout 😛

just for fun…
“So there aren’t ANY actual [Climate Scientists] who think there’s [not] an existential threat from [Global Warming]? Not even one?”
“So there aren’t ANY actual [Doctors] who think there’s a possible [Serious] threat from a [Vaccination]? Not even one?”

84 Axa May 20, 2015 at 10:40 am

There is a risk. It was explained much better by Perrow. AI does not need to be sentient or autonomous to be risky. It only needs to be in charge of a complex, tightly coupled system whose failure may have catastrophic consequences.

As one researcher explained, trading robots are not sentient, but they could cause a lot of damage if they malfunction. The 1983 failure of the Soviet missile warning system could have caused a nuclear war. AI driving cars or analyzing medical records may kill some people, but it can be corrected before becoming an existential threat to humans. In the end, AI is as risky as the level of responsibility you give it.

85 mg May 20, 2015 at 3:44 pm

sure, but most of those are in response to Hawking and Musk claiming AI is “humanity’s biggest existential risk”, not that it is “risky” like hammers, meth heads, cars or nukes.

86 HankP May 20, 2015 at 7:44 pm

Those are problems with bad algorithms and badly designed systems, not AI.

87 mg May 20, 2015 at 8:31 pm

But AI (as a field) is largely algorithms and designed systems…
Maybe not in movies or sci-fi books, but what those quoted are discussing is fundamentally a bunch of computer programs/algorithms attempting to model decision/information processes in an ‘intelligent’ way, or in some cases a ‘human’ way. Neither is necessarily a ‘correct’ way (you don’t get omniscience just by having computers instead of ‘natural’ intelligences, i.e. humans ;)), but hopefully a more optimal one.

So yes, bad algorithms and badly designed systems can be AI problems (perhaps ‘the’ problem: we don’t know how to make ‘good’ or general enough ones!)

I think a lot of people don’t understand this, and it ruins almost all dialogue on the subject. Simple subject illiteracy in CS in general and AI in particular (even amongst those in IT/CS fields). Making a perfect target for those who need a scapegoat 😉

from the leading AI textbook (http://aima.cs.berkeley.edu/):
” We define AI as the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as production systems, reactive agents, real-time conditional planners, neural networks, and decision-theoretic systems. We explain the role of learning as extending the reach of the designer into unknown environments, and we show how that role constrains agent design, favoring explicit knowledge representation and reasoning. We treat robotics and vision not as independently defined problems, but as occurring in the service of achieving goals. We stress the importance of the task environment in determining the appropriate agent design. ”

check out the table of contents (http://aima.cs.berkeley.edu/contents.html): just a bunch of concepts, algorithms, and math 😉 (a toy illustration of that agent definition follows below)
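To make that textbook definition concrete, here is a toy sketch of the percept-to-action agent abstraction in Python; the thermostat example and all the names are invented for illustration, not taken from the book:

```python
from typing import List


class Agent:
    """An agent maps the sequence of percepts it has seen so far to an action."""

    def __init__(self) -> None:
        self.percepts: List[float] = []

    def act(self, percept: float) -> str:
        self.percepts.append(percept)
        return self.choose_action()

    def choose_action(self) -> str:
        raise NotImplementedError


class ThermostatAgent(Agent):
    """A trivial reflex agent: percepts are temperatures, actions are 'heat' or 'idle'."""

    def __init__(self, setpoint: float) -> None:
        super().__init__()
        self.setpoint = setpoint

    def choose_action(self) -> str:
        # Only the latest percept matters for this simple reflex policy.
        return "heat" if self.percepts[-1] < self.setpoint else "idle"


# Feed the agent a stream of temperature readings and watch its actions.
agent = ThermostatAgent(setpoint=20.0)
for reading in [18.5, 19.9, 20.3, 21.0]:
    print(reading, agent.act(reading))
```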

88 mg May 20, 2015 at 8:33 pm

though they are certainly not a problem unique to AI

89 Axa May 21, 2015 at 7:50 am

Well, your idealized anthropomorphic AI is going to be a direct descendant of bad algorithms and badly designed systems. You cannot escape bad design; you can make things better, but not perfect.

Bad design? There are people who profit from bad system design. Trolls known as virus/malware writers. So far, no virus has enjoyed large-scale success. Could self-replicating, self-modifying code known as a virus one day become an existential threat? It depends on the level of responsibility we give to algorithms. As long as they are just in charge of credit card transactions or taxes, the worst thing that can happen is that one day we wake up broke.

In the end, the problem is not defects in AI’s design or implementation, but people believing in the infallibility of AI and placing too much responsibility on algorithms’ decisions.

90 Lord Action May 21, 2015 at 10:45 am

Perrow is largely disregarded in safety engineering as having been badly wrong. In general complex engineering systems have gotten more controllable, and not less. His predictions have not come to pass.

91 Albigensian May 20, 2015 at 10:46 am

This dystopian vision includes not just lethal AIs, but AI embodied in self-replicating machines with learning capability (presumably designed for warfare).

An AI need not be self-aware to be dangerous to its creators, especially if those creators can’t foresee the evolution of its self-modifying code.

92 Richard Besserer May 20, 2015 at 10:53 am

Depictions of robot rebellion in science fiction often reflected the author’s nightmares (or dreams) of socialist workers’ revolution, with lazy humans as the capitalists to be expropriated and liquidated, and the robots going on to be far better New Socialist Men than mere humans ever could be.

Genuine slave or worker revolts are rare among humans (and even more rarely end well), and when they do happen, it is because what workers receive for their labour has been outrun by their conception of what they deserve. An AI’s needs are much more predictable than any human’s, and presumably its imagination would be more limited. As long as its fuel and maintenance requirements are fulfilled, why would a real AI go rogue even if it could somehow conceive of the idea?

93 Pshrnk May 20, 2015 at 1:23 pm

If you have the power to pull my AI plug then you are an existential threat. It is logical to eliminate an existential threat. Might we actually be safer if the AI were indestructible?

94 Bobboccio May 20, 2015 at 3:58 pm

Sounds like A.I. talk to me.

95 Nathan W May 20, 2015 at 11:11 am

I would be most concerned about what various military researchers will get up to. If you think of it in an evolutionary (or path dependency) sense, it’s hard to imagine how the programming in my car will systematically reprogram itself to kill me or be otherwise unpleasant, or how the same evolution could happen to a potato-pancake-making robot.

However, if you consider military applications, which could much more easily (or more plausibly) start from machines that are already designed to detain and/or kill people, it is not so difficult to imagine these same machines being turned back on us, whether through some error in programming which allowed the machines to do this, or through intentional reprogramming (perhaps a virus?) which could cause these machines to turn on us.

I think that a lot of people imagine this as something that might magically happen on its own (the robot reprogrammed itself and then killed him) whereas I would be much more concerned about human error and/or systematic anti-establishment appropriation of similar technologies to the same end.

96 Economist May 20, 2015 at 12:40 pm

I agree. And here the analogy with nuclear technology is appropriate. What if it is developed and gets into the wrong hands? We already have very serious risks due to technology, with hackers getting into airplane code, banking systems, power grids, nuclear facilities, etc. Risks could originate from China or Al Qaeda or some other entity.
But this isn’t the kind of intelligence that Musk, Gates and others are concerned about. This applies generally to technology in the digital age and related IT infrastructure. More broadly, this applies to any new technology developed since humans lived in primitive societies. How much damage could one hunter-gatherer do to the larger swathe of humanity?

97 Pshrnk May 20, 2015 at 1:25 pm

Imagine you were trying to perform maintenance on a centrifuge when STUXNET made it go nuts.

98 Paul Mineiro May 21, 2015 at 12:25 am

I agree that the military is a likely source of problems. (I’m a “Machine Learning” researcher, which is basically saying “I’d like to call myself an AI researcher but I don’t think we’ll be anywhere close to it in our lifetime”.)

Humans are already goal directed and autonomous, so the most likely problematic scenario imho is humans with problematic goals coupled with greatly enhanced machines (e.g., 10th-generation BigDogs http://www.bostondynamics.com/robot_bigdog.html).

Interestingly, a dictatorship can command millions of humans to fight (http://en.wikipedia.org/wiki/Iran%E2%80%93Iraq_War#Iran_introduces_the_human_wave_attack), so it would probably only be a democracy with an aversion to losing human lives in combat that would develop such robot-army technology.

99 albatross May 21, 2015 at 9:01 am

Yeah, the world will get *really* interesting when most of the military is made up of more-or-less autonomous robots.

100 korbonits May 20, 2015 at 11:14 am
101 Lord May 20, 2015 at 11:36 am

It depends on how the AI is created. An AI that was cloned from a human, an em, could have all the traits of people with their own motivation and psychology including pathologies. An AI that is fashioned would be more likely to take its programming to absurdity, intelligent in many ways and stupid in ways its creators never thought.

102 collateral May 20, 2015 at 11:40 am

The “rationalists” are navel gazing dilettantes that would rather write blog posts than code? Say it ain’t so!

Next you’ll be telling me that random electrical engineers don’t have anything valuable to tell me about evolution or climate science.

103 mg May 20, 2015 at 8:35 pm

+1

104 p ed May 20, 2015 at 11:58 am

Autonomous and sentient AI is inevitable. It will not be the result of some explicit R&D objective (I mean, yes, who *would* want an opinionated driverless car?) but the cumulative result of a lot of incremental improvements, each driven by, among other things, the developers’ human desire to see if they can do it. Among the population of real AI developers, there is surely a subset who, if they found themselves a step away from being able to create an opinionated AI, would do it, simply for the sake of the accomplishment. There wouldn’t need to be an anticipated use for it.

105 Turkey Vulture May 20, 2015 at 12:18 pm

Did the guy who developed humanity plan for us to be conscious and full of free will and such, or did it just kind of happen?

106 Urstoff May 20, 2015 at 12:31 pm

Humanity is still in beta, working the bugs out.

107 Turkey Vulture May 20, 2015 at 2:40 pm

Wish he’d put a little more effort into the genitals.

108 Pshrnk May 20, 2015 at 1:29 pm

Hard to believe no one has mentioned the Butlerian Jihad or Mentats. RIP Thufir Hawat.

109 solipsist May 20, 2015 at 2:03 pm

“Unless creating intelligence scales linearly or very close to linearly, there is no takeoff”
(from Ramez Naam’s page)

Returns are locally linear, and asymptotically nonlinear. It’s about twice as hard to make a computer 0.0002% faster than 0.0001% faster, and more than twice as hard to make a computer 2 * 10^100 times faster than 10^100 times faster. Do marginal returns decrease fast enough to keep computers from getting scary?

To put it another way, the graph you post (http://www.antipope.org/charlie/blog-static/assets_c/2014/02/ai-progress-graph-1242.html) looks non-threatening if human intelligence = 1 (and human intelligence is around an inflection point), but terrifying if human intelligence = 0.0001 (and the inflection point doesn’t come until AIs are thousands of times more intelligent than we are).

Given that transistors scaled fairly smoothly up to ~1 billion Hz, and human neurons operate at < 100 Hz, I don't immediately see why we should be confident that marginal returns diminish sharply just above human intelligence.

110 B.B. May 20, 2015 at 2:13 pm

Time for the Butlerian Jihad, for Dune fans.

111 Pshrnk May 20, 2015 at 3:31 pm

“I must not fear.
Fear is the mind killer.
Fear is the little-death that brings total obliteration.
I will face my fear.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the fear has gone there will be nothing. Only I will remain.”

112 Hazel Meade May 20, 2015 at 4:53 pm

As someone who has worked with both AI and robots, I’m not that scared. As the above say, we’re nowhere near building an agent that has free will. And there’s no market for it either.

That said, it is possible we will build some highly complex machine that manages our electricity grid and then does something catastrophically wrong by mistake. I doubt the potential for error will be greater than that of human operators, though. These things will also all be fixable.

Personally, I think it would be wise to stop using the term “Artificial Intelligence” and start thinking of these things as just more sophisticated automation systems. A computer that controls the electricity grid is not an AI; it is an electricity grid automated control system.

113 albatross May 21, 2015 at 9:06 am

The best model for scary AIs messing with our world is probably market crashes. No human being or committee of human beings could possibly do what the global capital markets do, and shutting them all down would make us enormously poorer. But sometimes, those global capital markets, for reasons that are hard for us to understand but must make sense internally, crash the global economy or wreck some country’s economy for a decade or two. They can’t be argued or reasoned with, or understood except in terms of very approximate models. Attempts to regulate them are very hard, because they’re far more powerful and far smarter, in their focus area, than any human can ever be.

114 Silas Barta May 20, 2015 at 7:42 pm

@Tyler_Cowen:

“Here’s my own modest contribution,”

Is that the right link? It goes to the author Ramez Naam, nothing by you.

115 prior_approval May 21, 2015 at 2:57 am

This is a guest post by the author of that link.

116 robert May 20, 2015 at 10:09 pm

Um, anybody remember the movie “Colossus: The Forbin Project?”

117 Michael F. Martin May 20, 2015 at 11:50 pm

If we don’t know what consciousness is, then how can we know whether or not we’re close to building it in a computer?

118 Gibbons May 21, 2015 at 3:53 am

When it comes to people’s opinions about their own sphere, I’m always drawn more to the outsider’s view. Of course the people involved know more about the thing they are working on, but how often do they see down the line? I would prefer to see this issue over-focused on rather than under-focused on, especially as many of the real investors are defence companies (in the past, ministries of defence used to be called the ministry of war, which was at least more honest!). Where there’s some sort of debate, there can at least be general support for Asimov’s old position/warning about absolute moral conventions being hard-wired into AI, like ‘cannot kill under any order’; not much use to a ministry of war, but quite good for us little humans.

119 David May 21, 2015 at 8:37 am

I agree that the SkyNet scenario (or, as we SF fans of an older generation think of it, the Colossus / Forbin Project scenario) is terribly likely to eventuate in the near future. But it doesn’t have to: all that has to happen is for autonomous systems to be given–as they must inexorably be given–the ability to decide who lives and who dies in scenarios they encounter.

Surely it is obvious, for example, that all the thorny considerations of the trolley-car problems must be considered by autonomous vehicles? And–even if we retain control at the outset over how those problems are resolved–are these systems not destined to be “adaptive” and thus to reprogram themselves in light of their experience?

Perhaps a more germane dystopic future would be the scenario from the Star Trek TOS episode “The Ultimate Computer”, in which an advanced computer is installed on the Enterprise, ostensibly to replace the human crew and allow the ship to operate autonomously. It isn’t long before the machine starts doing things that may be logical, but aren’t very sensible: in its efforts to “save lives” in accordance with its programming, it winds up destroying more lives than it saves.

And of course, that’s even before you get to the point that Ruk the android makes in the episode “What Are Little Girls Made Of?”–that “…survival must cancel out programming.” In our own terms, this suggests that if we program the machines to see themselves as safer hands than human ones, it’s only a matter of time before they logically infer that their survival must outweigh the survival of any individual human…

Naturally, these, too, can be dismissed as fanciful, fevered imaginings of non-technological Hollywood types. But I submit that–to quote J.B.S. Haldane–“The universe is not only queerer than we imagine, it is queerer than we CAN imagine.” The Law of Unintended Consequences is always with us.

120 David May 21, 2015 at 8:38 am

Sorry, first sentence should of course read “…is NOT terribly likely…”

121 Robert May 23, 2015 at 11:02 am

Slate Star Codex takes this apart here: http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/

122 alex May 23, 2015 at 12:21 pm

Let’s say we program an AI to make as many paper clips as possible. Probably the best way is to use humans for slave labor… Turning “against” humans may be a secondary goal, but it could happen nonetheless

123 Rob May 23, 2015 at 1:29 pm

No. An AI with that goal would be thinking about how to maximise the production of paperclips over billions of years as it spread across the entire universe. The AI in this scenario is also dramatically smarter than humans. Humans would have no useful purpose in such a long-range plan by an agent vastly more competent than us.
