AI, Consciousness and Robot Outsourcing

One of my "absurd views" is that the first computer to become conscious was Deep Blue playing against Garry Kasparov in 1997.  It happened only for a moment, but in one spectacular move Deep Blue performed like no computer ever had before.  After the game, Kasparov said he felt a presence behind the machine.  He looked frightened.

Ken Rogoff, a top-flight economist and chess prodigy, wonders whether we don’t all have a little something to fear.

But the level that computers have reached already is scary enough.

What’s next? I certainly don’t feel safe as an economics professor!  I have no doubt that sometime later this century, one will be able to buy pocket professors – perhaps with holographic images – as easily as one can buy a pocket Kasparov chess computer today.

Rogoff thinks that the upheavals caused by cheap AI will be far more important than those caused by low-wage labor from India and China.

…will artificial intelligence replace the mantra of outsourcing and manufacturing migration? Chess players already know the answer.

Comments

"...will artificial intelligence replace the mantra of outsourcing and manufacturing migration? Chess players already know the answer."

Nah. Chess is a very limited domain and chess computers have been custom-tailored to that limited domain. Very little that has been invented in the course of creating grand-master level chess computers is of use for general purpose intelligence.

The original idea behind chess-playing AI was that if researchers could get machines to master such an intellectually challenging task then surely, along the way, they'd necessarily solve the problems of creating intelligent machines in general.

But it didn't work out that way at all. Chess is intellectually demanding because it is a task that human brains are not naturally well suited for. It turned out that the *really* hard problems human brains solve are the ones every normal 5-year-old handles effortlessly--recognizing people's faces, for example, or understanding language. It took years, even decades, of failed attempts to solve these problems (or even make much progress on them) to reveal just how hard they are--an odd sort of contribution for AI to make, but a valuable one.

So don't sweat the 'AI professor'. In fact, here's an absurd prediction for you. Computer chess will follow the pattern of manned missions to the moon. The creation of chess playing computers that can beat the best human players will eventually kill the whole high-level research program. After that, there'll be no point, people will lose interest, funding will dry up, etc. Chess software will still be around for training and amusing human players, of course, but "Deep Blue" will end up a museum piece.

Computer artificial intelligence cannot handle cognitive dissonance -- a much higher level of analog data processing that humans rely upon quite heavily for survival.

Alex,

You don't understand the colossal failure that is strong A.I. Go do some reading about the history of artificial intelligence, the bold predictions from the 1950s, 60s and 70s, and the relentless rollback of those visions in the 80s and 90s. Go read about Cyc and then look at where it is now. We are all still on square one. In the past half-century, computer scientists have made no demonstrable progress toward understanding the nature of what we call "intelligence."

Slocum summarizes the work in A.I. quite well. Basically the researchers' contribution to date has been to demonstrate, via repeated failure, that the problem of building an intelligent machine is so hard that it's beyond our current understanding.

There have been many interesting theoretical ideas about intelligence tossed around. And essentially everyone agrees that an intelligent computer is theoretically possible. It's just that, in 2006, we have no idea whatsoever about how to begin designing one.

That's not to say it will never happen, or that predictions about "this century" are necessarily optimistic. Almost anything could happen by 2099, if the right technological breakthroughs come about. But if you think world-class chess moves have anything to do with the development of an artificial college professor, you simply do not understand the field.

Deep Blue, huh. Read about Go and computers somewhere, that's a better measure of the current state of AI.

Even if this was a problem, and it's not *currently*, we had better make peace with the Cylons before they make peace with US. I don't really like their style.

"...expert systems are already good enough to handle 80-90% of what any human does in a wide variety of fields"

I think this applies only to fields that require large bodies of reference data, like medicine or geology. It seems unlikely that weak AI would have much effect on fields that are driven by creativity or innovation - which would include all areas of human endeavor to some degree. This would be a good thing too; it would free up more human intelligence and leave the drudgery to computers.

One more thing. We can look back 30 years, to 1976, to see where the state of the art in computers was then, and by extension where it might be 30 years from now: as my PC is to banks of spinning magnetic tape, so X will be to my PC.

Do you have an iPod? I wonder whether a 40 GB iPod that you hold in your hand has more memory than existed in the entire world in 1976.

Is solving a game of complete information all that spectacular? It's a function of processor speed, isn't it? It's when AI can solve games of incomplete information with a stochastic process that we need to be afraid. The team at Alberta is busy at this, too, though. But so far, I think their bots are great ring game players at limit poker against average and slightly above average players (maybe), but can't compete yet at the highest levels of skill. At least, that's what I've heard.

http://www.cs.ualberta.ca/~games/poker/

> I am in complete agreement with you. Computers will not need
> to be general AI, they will be expert systems programs,
> something that masters an individual area of knowledge.

'Expert systems' to replace experts have been 'just around the corner' for 20-25 years. Their impact has been limited. That's not to say that in particular domains expert systems might not work reasonably well. But consider family physicians--there are various alternatives that could do what American general practitioners do at much lower cost (nurse practitioners, foreign doctors working remotely, etc.). The hurdles there are legal, not technical.

> It's going to start happening very, very soon. Within 7 years,
> many white collar jobs are going to be heavily impacted by the
> effects not only of cheap Indian and Chinese professionals,
> but of expert systems replicating their knowledge.

Where's the betting market where I can bet against that happening (white collar jobs being heavily impacted by expert systems)?

> Within 3 years, keyboards are going to start being phased out
> as good voice recognition software takes over, so average
> input will go from 40 words a minute to 120-150.

Again, where do I bet against that happening? I work on a computer most of the day -- no matter how good voice recognition gets (and, BTW, it's not going to get that good any time soon), I don't *want* to talk to my computer all day. Can you *imagine* how annoying it would be to sit in an office full of cubicles or a coffee shop and listen to people talking to their computers? Morons on their cell phones would be nothing in comparison. And most of what I do on the computer does not involve generating streams of text as fast as possible. What I get paid to do is think and then generate a sometimes very small amount of text -- but it's the right text in the right place. And how are you going to edit with voice input?

> Within 5 years, many programmers are going to realize that
> their IDEs are writing more of the code than they are.

That's not a new phenomenon -- development tools generating more and more of the routine 'boilerplate code' has been going on for close to 20 years. That's one of the main ways that the productivity of programmers increases.

Ditto the naysayers on voice recognition. A whole host of problems there, noise pollution of the work environment one of the biggest. Far easier to use a keyboard in most cases where a keyboard is used today.

Voice communication works well between humans because of shared contexts and understandings and body language, etc. Consider the relative non-productivity of conference calls, for example. If we had to speak in full sentence form all the time speech would be far less productive.

> Its a matter what we are going to do with all
> these now useless, idle people, in the year 2030.
> My son will be 29 years old.

retirement at birth?

http://www.marginalrevolution.com/marginalrevolution/2005/12/must_i_retire_n.html

my current thinking is that eventually, humans will only be able to market their creativity. we need better idea / prediction markets to make sure you can derive a lifetime of income from your one great idea. maybe patent law needs to be reformed along those lines?

the other, a bit tedious scenario from 'the end of work' by jeremy rifkin is that we will all be working for NGO, which are somehow protected from the market forces. i'm not holding my breath on that one.

As a chess addict and a not very good player, I am very familiar with chess programs and computers. They can be quite frightening. In games under 5 minutes, they can beat grandmasters of any level with almost no effort. Currently on internet chess servers, some computers can play 40 moves in under 5 seconds toward the end of the game.
All that said, it took 60 years for that to happen, and when it did, computer scientists found they couldn't use what they had learned on other problems. For an in-depth discussion of this (2.5 hours), check out video.google.com and search for CHESS. There you will find a conference by the Museum of Computers on the History of Chess Computing.
Who knows? Maybe the Singularity is coming. I really do hope so... enough of competition. I want to be an artist!

The experience from computer chess is that in some problem areas, there's a minimum amount of computer power that's necessary to solve the problem. Early chess program researchers thought that smart programs could avoid doing so much work, but those smart programs were eventually overwhelmed by systems that did massive search (with some algorithmic cleverness to improve the efficiency with which the space was searched, but without excessive pruning).
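The "massive search with some algorithmic cleverness" the comment describes is, at its core, minimax search with alpha-beta pruning: explore everything, but skip branches that provably cannot change the answer. A minimal sketch over a toy game tree (leaf scores in nested lists, not an actual chess engine):

```python
# Minimax with alpha-beta pruning over a toy game tree.
# A "node" is either a numeric leaf score or a list of child nodes.
# Alpha-beta returns exactly the value plain minimax would, but
# cuts off subtrees that cannot affect the final result.

def alphabeta(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):      # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:           # remaining siblings can't matter
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Root maximizes, next level minimizes, leaves are scores; value is 5.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree))  # 5
```

The pruning never changes the value returned, only how much of the tree gets visited -- which is why "no excessive pruning" hardware-driven search eventually beat the "smart" selective programs.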

What does this imply for AI? Perhaps the general AI problem is similar, in which case lack of progress to date merely means we don't have computers anywhere close to fast enough to let us make progress. In this case, lack of success so far would not mean there will be no success in the future.

ObSF: _Accelerando_, by Charles Stross.

"Computers will think when cars fly."
Excellent! Here is the company making flying cars:
http://news.com.com/Flying+car+ready+for+takeoff/2100-11389_3-6040007.html

Almost there...

Bernard,

I do too - I use Excel all the time, and I've written scripts and VB code that have saved one full person from being hired in our department. I don't think I am very exceptional; it's happening all the time. However, I do think that soon more of the soft skills are going to start being replaced, like extensive securities analysis and law-based skills. The quants are going to rule the earth, as they write more and more code that can 'interpret' words.

Robin Hanson has a few papers on all this, I think arguing that yes human productivity should rise, up to a point. But not when full machine intelligence hits the market; then wages per mind fall fall fall, though total production rises, and human income might rise if they tax/charge rent on/enslave the machines.

Computers will continue to be complementary to (at least some) human talents, for at least some period of time (20 years?). Different people have different talents and will be obsoleted at different times. As long as you're one of those whose talents are NOT obsoleted, your standards of living will go way up. For the others, I dunno. Hopefully they saved enough capital to carry them through. At least the resource cost to maintain what is now considered a comfortable life should be very low by then.

-Kevin

I think general A.I. will come sooner than most people expect. The reason I believe this is because I don't think we're quite as smart as we think we are. However, I admit I could be biased on account of my own personal cognitive ability.

Ray makes this hilarious and scary prediction that a $1000 computer will be as complex as ALL (!) human brains, sometime in the 2050s!! But if Moore's law holds, that's where computers will be.

You do know that Moore's "law" is merely an industry goal. There's no substance to it other than "we'd like to double the transistor count per given area every 18 months". Literally, that's what it means. And we already know that it has an upper bound within the next 20 years, based on the "laws" of physics. That's the sort of thing ignored by people preaching the Singularity -- there are all sorts of things we know already to complement that which we don't know yet.
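Projections like Kurzweil's rest on nothing more than compounding. A back-of-envelope sketch, assuming an uninterrupted doubling every 18 months (the very assumption the comment above disputes):

```python
# Naive Moore's-law extrapolation: illustrative arithmetic only.
# Assumes a clean doubling every 18 months with no physical limits.

def doublings(start_year, end_year, period_years=1.5):
    """Number of doubling periods between two years."""
    return (end_year - start_year) / period_years

def scale_factor(start_year, end_year, period_years=1.5):
    """Projected growth multiple over the interval."""
    return 2 ** doublings(start_year, end_year, period_years)

# From 2006 to 2056: ~33 doublings, roughly a ten-billion-fold increase.
print(f"{doublings(2006, 2056):.1f} doublings -> {scale_factor(2006, 2056):.2e}x")
```

Whether transistor counts actually compound that long is exactly what the physical-limits objection is about; the arithmetic itself is trivial.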

And in either case, we still must know what to teach AIs and what to learn from them.

First, if you program in anything other than assembly language, your IDE already writes almost all of your code. Of course, the wire wrap guys would say that since I have my opcode list preselected, that what I do barely qualifies as programming in the first place. The fact that you can even program in an IDE _AT ALL_ means that your productivity--and therefore your employer's ability to pay you--has been wildly advanced. But that does not mean that the IDE qualifies as having AI in any way that the term has ever been generally accepted.

The idea that Deep Blue might have achieved consciousness is inane. Conscious of what? Certainly not itself. If you have children, you know how regularly they can make a statement that appears to be deeply (and disturbingly) insightful. But these first-blush deep statements are consistently very pedestrian, obtaining depth only when placed in the context of an adult understanding of the world. Ascribing deep wisdom to a child is similarly foolish, only much less so.

As for simulating the human brain, that is another Hanson goofiness. Simulate at what level? You can try to simulate neurons as on-off switches, but you will find that the interconnect problem is far, FAR more complex than that. Where I work, we are looking at tracing current flow through a microprocessor. The simulation runs fewer than 10 cycles a second. Oh. Did I mention that we have a physical description of the processor so detailed that we can do the simulation?

As for Moore's law, don't forget that it survived the transition from bipolar transistors to FETs. We will hit the physical limit of CMOS in a decade or so, but there is no law of physics that says computers have to be built from MOSFETs.

If, and I do mean IF we ever get to the point that (general Turing) AI becomes possible, it will almost certainly be preceded by a period in which direct and near-direct computational enhancements amplify our mental capacity dramatically. We're far more likely to join them to us than to be replaced by them.

Comparing outsourcing and AI is the most absurd thing one can ever do. It is like comparing a needle and hydrogen. Does that make sense?

No? Then how can you make sense of comparing AI and outsourcing? Freaky!

Alex, how did you manage to grow up without reading science fiction? Clearly your youth was wasted.

About 10 years ago science fiction ran into the "brick wall" of the post-AI future, which is somewhat related to the "post-singularity" future. The best thinkers in the genre all agree that abiologic thinking machines are inevitable (of course we make millions of biological thinking machines every year -- they're called babies). Problem is, anything after that is rather hard to imagine.

Yes, I've read the vast literature arguing that we can't make a thinking abiologic entity. Chinese rooms, etc. The anti-AI group are basically all arguing that we have souls, or something like a soul, and things without souls can't think. We'll see.

Once we make something as "smart" as a rat, we're pretty much toast. Unless there's some physical limit to cognitive capacity we don't know about, our only hope will be that our creations have something analogous to sentimental compassion for their ancestral pets.

I rather doubt we can avoid this. The economic value of a sentient machine is enormous. It's hard to imagine we won't take this road unless al Qaeda sends us back to the 14th century (the bright side to their agenda, one might say).

Belatedly ... computation may lead to the same result as intelligence, and may lead to results inaccessible to unaided intelligence:
http://www.xs4all.nl/~timkr/chess2/diary_15.htm
Scroll to #282, 294, 298
Is this then intelligence? One can tweak parameters based on the record of a given grandmaster's games to emulate style. This is not thinking like a grandmaster, but emulating within the computational model's matrix. (Can you parse the Turing Test? Click on my handle below.)
