Donald N. Michael on our future cybernation

June 8, 2014 at 6:32 am in Books, History, Science

He wrote:

[When] computers acquire the necessary capabilities…speeded-up data processing and interpretation will be necessary if professional services are to be rendered with any adequacy.  Once the computers are in operation, the need for additional professional people may be only moderate…

There will be a small, almost separate, society of people in rapport with the advanced computers.  These cyberneticians will have established a relationship with their machines that cannot be shared with the average man any more than the average man today can understand the problems of molecular biology, nuclear physics, or neuropsychiatry.  Indeed, many scholars will not have the capacity to share their knowledge or feeling about this new man-machine relationship.  Those with the talent for the work probably will have to develop it from childhood and will be trained as intensively as the classical ballerina.

Michael then discusses what will happen to those people who cannot work productively with the machines.  Some will still work in person-to-person interactions, but the others will end up in government-designed public tasks and work short hours and subsist on the public dole.  He also considers the possibility of sending some of these individuals to poorer countries where automation is not so far advanced.

Michael wrote all of that and more in his book Cybernation: The Silent Conquest in…1962.

rluser June 8, 2014 at 7:23 am

What does Danny Dunn have to say?

Gabriel Puliatti June 8, 2014 at 7:28 am

I would disagree that the “society in rapport with the advanced computers” will only be a small group of philosopher-kings.

The beauty of computers is that they require quite a bit of knowledge that can't be built in a priori, but rather has to be learned together with human hands. It's hard to build a shoe-making robot from scratch if you don't know how shoes are made. You still need a lot of local knowledge to make computers good at something, at least at first, and then you will still need human knowledge to maintain quality (even as an outside “module” within a fully robotic world working for humans).

Which is great, because the “consultant” role is really the job everyone aims for by the time they’re tired and growing old… and that’s what a lot of societies look like right now.

Gabriel Puliatti June 8, 2014 at 7:38 am

To clarify, compare the Michael-era view of computers with today’s.

In the past, computers were supposed to find the meaning of life– Asimov’s Multivac in “The Last Question” (1956) is asked “Can the workings of the second law of thermodynamics (‘the increase of the entropy of the universe’) be reversed?”. The computer thinks and thinks for generations until it gets the answer and becomes God and recreates the Universe.

But compare that to now… IBM’s Watson is awesome, but most of the human problems computers have solved have been mundane ones (email, fb, all the way to better supply chains, but very low-brow) rather than absolute ones.

Kevin C. June 8, 2014 at 7:34 am

I’m reminded of Eileen Gunn’s “Computer Friendly” here.

Adrian Ratnapala June 8, 2014 at 7:42 am

Let this be my one and only Marxist comment: The material structure of the economy can define history. I can imagine a civilisation of aliens, or of robots. They would have no humans, but would have matter in purposeful motion, and probably a GDP.

I can also imagine a robot civilisation with some human brains in it, doing certain kinds of computations that silicon computers can’t. The path to that might perhaps look like an ultra-immersive programming environment or sysadmin user interface.

William Woody June 8, 2014 at 7:58 am

“I would disagree that the “society in rapport with the advanced computers” will only be a small group of philosopher-kings.”

Part of that phrasing came from an era where computers were mainframes, and the only way you could develop code was to submit your stack of punch cards to a systems operator and wait for them to deliver to you a printout from your program, sometimes a day or two later.

But even today, in an era of personal computers, there are definite differences in the productivity of those who work with computers, much greater than the productivity differences in nearly any other industry. The book “The Mythical Man Month” puts the productivity gap between an excellent software developer and an average one at somewhere around 20x; compare that to (say) the construction industry, where an experienced framer may be able to go perhaps two or three times faster than a relatively new framer.

I would suggest that the reason these experienced developers have not seen salary gains corresponding to their productivity gains (with the exception of startups that go IPO, where the rewards often have far more to do with the social need for a new category of product than with the underlying quality of the product they’ve assembled) is a managerial culture that seeks to prevent these experts from rising or from replacing a management structure that is often rather ignorant of the technology it manages. (How else do you explain a 2x or 3x spread in salary in an industry with a 20x or 25x spread in productivity?)

That explains in part why software development is a “young man’s” game (it isn’t, really, but at some point disillusionment tends to set in as you get older), in part why the failure rate of large software projects is in the 80%-90% range (because we purge our most experienced people and act as if a 20-year-old out of college is technologically better than a 45-year-old with 25 years of experience), and in part why there are so many part-time freelancers in an industry with an acute shortage of qualified people (because once your productivity significantly outstrips your salary, you start looking for alternate forms of compensation).

We don’t have those so-called ‘philosopher-kings’ (or, to use a more modern term, computer rock-stars) because we have a culture that likes the status quo and has managed to create massive inefficiencies to preserve it.

(Like allowing multi-million dollar projects to fail because middle-management at a large company doesn’t want to give a several-thousand dollar raise to a key man on the project–rationalizing it as (a) well, he wasn’t that key, (b) in the long term it’s better to see our developers as factory-line workers rather than as specialized artisans, and (c) it’s not middle management’s money–and these things fail all the time, don’t they?)

The Mythical Man Month actually posits that if you have a team of 30 people: five highly experienced developers each managing 5 inexperienced developers, and you want to accelerate development–you are better off firing the 25 inexperienced people and putting your team of 5 experienced developers back to work. In today’s managerial structure at most large software firms (from Silicon Valley to government contracting shops), they’d ease the five experienced guys out the door.

Magnus June 8, 2014 at 10:09 am

People are always surprised to learn how small the development teams at Apple are. Many of their iPhone apps are built by one person. A players like working with A players and all that.

Iamthep June 8, 2014 at 11:19 am

I highly doubt that experience is even loosely correlated with productivity in software. Why do you think all of the startups in Silicon Valley are run by young guys? Some people are simply more productive. It takes a special mindset to really code 8+ hours per day without breaks. Many programmers are actually only productive 2-4 hours a day. Some really bad programmers probably have negative productivity.

ChrisA June 9, 2014 at 1:52 am

@William Woody
William, no offence, but if you are so smart and all these managers in established software firms are so dumb, why don’t you start your own software company? You should be able to run rings around those other companies, right?

Look, the dumb-manager meme has a lot of currency among a certain set of lower-level programmers. It is sort of like the Dilbert view of managers. Maybe this was actually true in the old days, when programming was generally done inside very large organisations that had effective monopolies (like the Baby Bells and IBM). IIRC Scott Adams actually did work in a Baby Bell before he became a full-time cartoonist, which is probably where he got most of his material. In that era, playing politics rather than doing a good job may have been a good strategy for a middle-level manager, since by default there was little downside for the manager and some upside.

But it’s hard to believe that with all the competition out there now, not just in the US but worldwide, that would still be a good approach. Think about all the different firms out there, all with different approaches: are they all going to converge on the same dumb strategy? My guess is that many times, when lower-level people perceive decisions as dumb, they actually lack the context of the decision maker. What looks dumb to one person can make sense if you look at the bigger picture.
Of course managers are occasionally dumb and make dumb decisions, but this cannot be true of the class as a whole.

mpowell June 9, 2014 at 1:33 pm

I’d agree with this. Productivity differences aren’t actually that great (2-3x sure, not 20x), and there is ultimately a lot of organizational work required on big projects. This is a great productivity leveler. A substantial portion of your output is not brilliant insight, just boring old regular work, which most of the people in your workplace can do almost as well as the best performer. There are still plenty of dumb companies out there, but there are also plenty of better ones.

Floccina June 9, 2014 at 4:24 pm

From what I have seen, the big difference between programmers is not brilliant insight but making fewer bugs.

Bill June 8, 2014 at 8:26 am

Oh, but you missed the most interesting predictions regarding computers.

That IBM would dominate the computer industry.

That to be competitive in business you would need to buy a mainframe computer, and this increase in cost, and the economies of scale in computing, would make it very difficult for small businesses to survive and therefore only the large corporations, like ITT, would survive.

That someday when there were 16 bit chips, and large companies would be able to store up to 50 gigabits of data, we would not have to work anymore and would be replaced by a machine.

So Much for Subtlety June 8, 2014 at 8:33 am

There will be a small, almost separate, society of people in rapport with the advanced computers.

Whereas it is next to impossible to get anyone under 30 to put away their smart phone for ten minutes at a time.

So he was comprehensively wrong. This is not a surprise. Most people are. What did he get right?

Timothy June 8, 2014 at 9:47 am

They don’t usually have the “rapport” discussed though.

They can use the apps to do what the app designers provide for them.

So Much for Subtlety June 8, 2014 at 8:01 pm

Well they don’t have the type of rapport he meant, but rapport they have. That is the point, as someone else said, he imagined these vast computers served by High Priests. And it has not worked that way. We put more computing power than NASA used to send a man to the Moon in the hands of a 13 year old girl who uses it to chat to friends and send bikini pictures to her boyfriend.

Remember though, sometimes those users use their apps in ways that the designers did not intend. Texting is a great example. The designers did not intend it for public use. They meant it for machine-to-machine messages, and then the marketers figured out they could use it to send phone credit. The users decided they liked 160 characters of inane babble more than actually ringing someone.

Chris S June 8, 2014 at 8:06 pm

I am not under 30 but I object to the seemingly widespread belief that those engaged in their cellphones are in some other world with no connection to this one and no intrinsic value.

Smartphones and other ubiquitous computing add another layer to the same reality, like the difference between walking through a foreign city, where you know no one, can barely read the street signs, and understand none of the surrounding conversation, and a city close to home, where you have connections, understand every word floating around you, and have access to the layers of history and nuance intrinsic to the place.

ibaien June 8, 2014 at 9:46 am

average has been over for a while, huh?

Timothy June 8, 2014 at 10:10 am

Not trying to brag too much, but I might be starting to be one of these guys. I realized a while back that when I was a kid I was better at computers than most people, but at some point, as I grew up and learned, I realized I was starting to interact with computers in a different way, basically centering on the fact that I can actually program them if I put in varying amounts of effort. That can be just 20 seconds or a minute of thinking and typing a brief script that we generally wouldn’t even call a “program,” but which is definitely a minimal example of one…
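A minimal sketch of the kind of 20-second throwaway script Timothy means, the sort of thing typed at an interactive prompt rather than “written as a program” (the figures here are made up for illustration):

```python
# Total a column of figures pasted from a report: nobody would
# call this a "program," but it is a minimal example of one.
figures = """
1,200
860
2,430
"""
total = sum(int(line.replace(",", "")) for line in figures.split() if line)
print(total)  # 4490
```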

Chris S June 8, 2014 at 8:16 pm

I started coding at eight and feel the same way. I actually think that human language (English, e.g.) and computer language are that close as concepts.

In the last decade or so, as computing has become mainstream, I started to realize what a burden this places on the non-fluent. For instance, my mother-in-law, an intelligent and articulate person, got a new camera, which required downloading new software. The software installed in a completely “obvious” place (c:\windows\programs\blahbalbdjjf\whatever) that she had no clue existed or could or should exist. She almost took the camera back before I came and found it for her (took me ten seconds). The kewl kids think of her as a n00b, but really it is our fault, not hers.

Engineer June 8, 2014 at 1:36 pm

Michael then discusses what will happen to those people who cannot work productively with the machines. Some will still work in person-to-person interactions, but the others will end up in government-designed public tasks and work short hours and subsist on the public dole.

Same idea as Vonnegut’s Player Piano (1952).

Steve Sailer June 8, 2014 at 5:21 pm

Didn’t C.M. Kornbluth say this in 1951 in “The Marching Morons?”

Sam June 8, 2014 at 5:30 pm

I’d bet against an outcome like this. Coding is becoming less challenging by the day, as the languages become more powerful, and more becomes automated within the language itself. The future will look more like Wolfram’s new functional programming language. Machines will have access to a massive library of multifunctional algorithms and coding will be done in natural language with tons of automatic suggestions, tips and corrections. The biggest divide will be one of style and ‘authenticity’, with a population of purists and advanced technicians who like to code in more raw or specialized languages.
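A rough illustration of the shift Sam predicts, using Python’s standard library as a stand-in for a Wolfram-style library of built-in algorithms. The “raw” version spells out the algorithm; the high-level version just names the intent:

```python
import statistics

# "Raw" style: a hand-rolled median, the kind of code purists
# and advanced technicians still write from scratch.
def median_by_hand(xs):
    ys = sorted(xs)
    n = len(ys)
    mid = n // 2
    if n % 2:
        return ys[mid]
    return (ys[mid - 1] + ys[mid]) / 2

# High-level style: the same algorithm is a named library
# primitive, one step closer to "say what you mean" coding.
data = [3, 1, 4, 1, 5, 9, 2, 6]
assert median_by_hand(data) == statistics.median(data) == 3.5
```

The point is not that the library call is clever, but that the programmer no longer needs to know the algorithm to use it.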

Steve Sailer June 8, 2014 at 6:25 pm

Why then is coding becoming more of a male dominated field?

Chris S June 8, 2014 at 7:59 pm

Becoming? Or is and has been? Cite your trendline please.

To Sam’s point, what we used to call “coding” looks less and less like that every day. It’s no longer trying to think of tight, efficient algorithms that don’t overflow their memory buffer. It is more about learning the business process and putting it into pre-built, not terribly efficient workflows.

Excel macros, drag-and-drop automation, ticketing systems, etc.
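A sketch of that style of “coding”: no clever algorithm, just a business rule wired into a pre-built ticketing workflow. All the names and rules here are hypothetical:

```python
# Hypothetical "business process" code: routing a support ticket
# to a queue based on simple rules, not algorithmic insight.
def route_ticket(ticket):
    """Pick a support queue from plain business rules."""
    if ticket.get("priority") == "high":
        return "escalations"
    if "billing" in ticket.get("subject", "").lower():
        return "billing-team"
    return "general"

assert route_ticket({"priority": "high", "subject": "outage"}) == "escalations"
assert route_ticket({"subject": "Billing question"}) == "billing-team"
```

Most of the work in writing this is discovering the rules, not expressing them.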

Which is also why the 20x coder doesn’t capture all their surplus. Most organizations don’t need them and therefore won’t pay them. Only the very few cutting edge companies or startups can put that effort to use. And those coders need a large infrastructure behind them in terms of an idea pipeline, legal, sales, etc.

Why doesn’t everyone have a race car in their garage? Could they not get to their destinations faster? Theoretically… but cutting their commutes in half would only save 30 minutes, it wouldn’t be legal or practical, and it would be impossible to drive a race car on what passes for a high-speed interstate in this country.
