Will the proliferation of affordable AI decimate the middle class?

Here is how I think about these issues. The Artificial in AI can sometimes mislead so let’s start by getting rid of the A and asking instead whether more NI, Natural Intelligence, will decimate the middle class. For example, will increasing education in China decimate the American middle class? I don’t think so.

As I said in my TED talk, the brainpower of China and India in the 20th century was essentially “offline”. Instead of contributing to the world technological frontier the people of China and India were just barely feeding themselves. China and India are now coming online and I see the increase in natural intelligence as one of the most hopeful facts for the future. It’s been estimated that a reduction in cancer mortality of just 10 percent would be worth $5 trillion to U.S. citizens (and even more taking into account the rest of the world). A reduction in cancer mortality is more likely to happen with a well-educated China than with a poorly educated China. So we have a huge amount to gain by greater NI.

In the case of low-skill labor the rise of China has hurt some US low-skill workers (although US workers as a whole are almost certainly better off due to lower prices). The US has historically had an abundance of highly skilled labor and with greater education around the world we have less of a competitive advantage. In the case of high-skill labor, however, I think the opportunities for gains are much greater than with competition for low-skill labor. Ideas are what drive growth, and ideas are non-rivalrous; they quickly spread around the world. The more idea creators the better for everyone. At the world level, for example, the standard of living and the growth rate of world GDP have both risen as population has increased.

Greater foreign intelligence and wealth could be a threat if intelligence turns from production to destruction (this is also a potential problem with AI). We probably can’t keep China poor, even if we tried, and any attempt to try to do so would likely backfire in the worst possible way. Thus, if we want to keep high-skill Chinese workers working on medical rather than military breakthroughs, we must preserve a peaceful world of trade. Indeed, peace and trade become ever more important the richer the world gets.

Now let’s turn from NI to AI. For the foreseeable future I see AI as being very similar to additional NI. Smart people in China aren’t perfect substitutes for smart people in the United States and there are also plenty of opportunities for complementarity. Similarly AI is not a perfect substitute for NI and there are plenty of opportunities for complementarity. An AI that drives your car, for example, complements your NI because it leaves more time for more productive tasks.

(What happens when AI does become a perfect substitute for NI? We could easily be 100 years or more from that scenario, but my foresighted colleague, Robin Hanson, has a new book, The Age of Em, that discusses the implications of uploads: human intelligence copied into software. Hanson's book is the most complete and serious scenario analysis of the implications of a new technology ever written. Most of us won't live long enough to know whether he is right, although Robin might.)

Thus, the analysis of AI and NI is similar except for one important fact. As Chinese workers become better educated a significant share of the gains will go to Chinese workers (although by no means all). AI, however, is produced by capital. But in our world capital isn't scarce. The world is awash in capital and computing power is getting ever cheaper. AI isn't like an oil field owned by a handful of people. AI will be cheap and ownership will be widespread. Just look at your cellphone: it's faster and more powerful than a multi-million-dollar Cray-2 supercomputer of 1990. Moreover, in 1990 there were only a handful of Cray-2s, and today there are billions of cellphone supercomputers, including hundreds of millions, and soon billions, in poor countries. The gains from AI, therefore, will flow not to capital but to consumers. So if anything the gains from more AI are even larger than the gains from more NI.

From my answer on Quora.


A problem I see is that Chinese and Indian workers don't actually have access to American jobs, for the most part, due to immigration restrictions and other barriers. The AI analogy would require unlimited skilled foreign workers to be available on the American job market. The effect of this on the middle class would be debatable, to say the least.

Yes, I had the same thought.

You have just accepted a framing where making an iPhone etc. for the American market is not naturally an American job. In the 1970s we considered Sonys imports. Now what is an import?

Foreign workers do compete with U.S. workers in lower-end jobs such as manufacturing. However, most skilled jobs cannot be easily located overseas. There has been some of this (e.g. software outsourcing) but it hasn't been very successful.

You know about the x-rays reviewed overnight in India for American HMOs?

My point really was that you've moved your baseline though, and no longer count any lost job as American. A natural erosion, but one with policy impact.

No, the main problem with the post is that competition from AI is nothing like the competition from China, because there is no trade with AI. In theory, the rise of China need not cause difficulties for the US labor force (I don't want to argue here what it actually did) because we can specialize and trade: sure, the iPhone would be manufactured in China, but the design and marketing would be in the US, plus all the secretarial and custodial jobs for those office buildings; then, we would trade with China according to the theory of comparative advantage.

There is nothing comparable for competition with AI that would allow Americans to have a reasonable mix of jobs that keeps everyone employed. Is AI going to take a vacation in Florida? Does it need new tires, which we can produce in Akron? Let's look at it from the perspective of intelligence, which is what the 'I' in AI stands for. Instead of assuming AI will go practically instantly from the IQ of a dragonfly to superhuman, let's say that at some point it reaches an effective IQ of 80. What are the US workers with IQ below 80 going to do at that point? The flippant answer, sometimes suggested by Alex and others, is that they will retrain, to become dentists, caddies and what not. Can a person with an IQ of 80 be an effective dentist? If one could, that job would have been taken over by AI, wouldn't it?

Much is sometimes made of isolated cases like the chess competition where the strongest 'player' is a pair of human + machine. Fair enough. But if chess were an industry rather than a pastime, and the costs mattered, would we really keep the human part of the combination, the one that incurs 95% of the costs?

As for the claim that it 'could easily take 100 years' for computers to equal human in IQ... my capacity for sarcasm fails me...

A statement like "It’s been estimated that a reduction in cancer mortality of just 10 percent would be worth $5 trillion to U.S. citizens" carries multiple independent serious problems, starting with the fact that it is not well-defined. Does this mean that US citizens would collectively trade $5T in goods and services at current prices for such a boon? Over what period, and discounted how? But it probably refers to someone's assessment of the effect of the boon on GDP, which really doesn't improve matters: those of you reading this comment have mostly already decided how to view that little acronym, so suffice it to say that I don't think it can really help answer the question posed in the post title.

I don't even really disagree with the conclusions here: more Chinese scientists at work curing cancer is a good thing. But there is a lot more to life than cancer and cellphones. Aside from whether NI or AI will make us "better off," one could ask whether it will make us better. That requires judgment, not fanciful number-crunching.

"one could ask whether it will make us better. That requires judgment, not fanciful number-crunching" Not sure if you're saying it's going to be a normative rather than empirical question or something else.

It depends on what you mean by 'empirical.' Has television made life better? Has it made people better? If all you have to go on is, "Did people buy them?", then the question is at once easy and uninteresting. In reality it may depend on the person: it may have greatly enriched some people's lives, it may have destroyed others. As a naïve undergrad I would have referred smugly back to the "well, people bought them" fact. The effect of e.g. television on someone has a large empirical component to it, involving actually looking at the person, unless you define 'empirical' as "showing up in GDP" or by some tautology involving utility-maximization and the mere fact of the (voluntary) exchange.

For sure. We tend to think there are pathological choices, where people buy too much drugs or alcohol, but we tend to think everything else is good.

A "shopoholic" is pathological, but for everyone else shopping is normal.

In reality there must be many shades of grey.

I'm not sure of the specific origin of the claim, but usually these sorts of figures are based on the "statistical value of a life". I'm not convinced that it applies here, since most cancer patients are beyond their productive involvement in the economy.
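For a rough sense of where a figure like $5 trillion could come from under the VSL approach, here is a back-of-the-envelope sketch. All the inputs (roughly 600,000 US cancer deaths per year, a VSL near $9 million, a ten-year horizon with no discounting) are my own illustrative assumptions, not figures from the original estimate:

```python
# Back-of-the-envelope: what a 10% cut in cancer mortality might be "worth"
# under the value-of-a-statistical-life (VSL) approach. All inputs are
# illustrative assumptions, not figures from the original claim.

annual_cancer_deaths = 600_000   # approximate US cancer deaths per year
mortality_reduction = 0.10       # the hypothesized 10% reduction
vsl = 9_000_000                  # a commonly cited VSL, in dollars

deaths_averted_per_year = annual_cancer_deaths * mortality_reduction
annual_value = deaths_averted_per_year * vsl  # dollars per year

# Capitalized over a decade (ignoring discounting for simplicity),
# the total lands in the $5T ballpark.
ten_year_value = annual_value * 10

print(f"Deaths averted/yr: {deaths_averted_per_year:,.0f}")   # 60,000
print(f"Annual value: ${annual_value / 1e9:,.0f}B")           # $540B
print(f"Ten-year value: ${ten_year_value / 1e12:,.1f}T")      # $5.4T
```

Whether a number built this way is "well-defined," as a later comment asks, is a separate question; the sketch only shows the mechanics that make the order of magnitude unsurprising.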

"It’s been estimated that a reduction in cancer mortality of just 10 percent would be worth $5 trillion to U.S. citizens"

The US spends about $150 billion per year on cancer treatment. So, $5 trillion is implausibly high.

That depends on what we're actually buying with our $150-large.

Is there any realistic scenario where we'd pay $5 super large, to cut the $150 large by 10%?

My understanding is that a large fraction of the $150B is basically ineffective. So we're paying the money and getting very little mortality improvement for it. A 10 percent reduction in mortality sounds like a large leap forward.

I agree that $5T is an awfully large amount of money. That's "colonize the solar system money", not "extend grandma's suffering a little bit" money.

"My understanding is that a large fraction of the $150B is basically ineffective."

Even if that's the case, all else being equal, you'd only save 10% of this or $15B a year. If you invested $5T in projects with even a net 4% ROI, then that's a $200B a year increase. I'd say a 10% reduction in cancer mortality (and expenditures) is worth $150-300 Billion. $5T is an order of magnitude high.
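The opportunity-cost arithmetic in that comment can be laid out explicitly, using the comment's own (rough) figures:

```python
# Opportunity-cost check on the $5T figure, using the commenter's numbers.

cancer_spend_per_year = 150e9  # rough current US annual cancer-treatment spend
savings_per_year = 0.10 * cancer_spend_per_year  # $15B/yr if spending fell 10%

lump_sum = 5e12   # the hypothetical $5T outlay
roi = 0.04        # a modest 4% annual return if invested elsewhere
alt_return_per_year = lump_sum * roi  # $200B/yr from investing the $5T instead

print(f"Annual savings from a 10% reduction: ${savings_per_year / 1e9:.0f}B")
print(f"Annual return on $5T at 4%:          ${alt_return_per_year / 1e9:.0f}B")
```

On this narrow "treatment savings" framing the $5T price tag looks wildly out of line, which is the commenter's point; the replies argue the relevant value is lives and life-years, not treatment spending.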

Maybe I misunderstand, but I think the trade being proposed is that we spend $5T, and we move the cancer mortality rate from 1% to 0.9%. Today, we spend $150B to move the cancer mortality rate from 1.001% to 1.000%. I am making up numbers here, but you get my point? In my contrived example we already spend $150B for one thousandth of one percent. So $5T for one tenth of one percent seems kind of reasonable.

The argument is not that we could save a little on the cancer treatment we're already getting, it's that we could buy something hitherto unbuyable.

"we could buy something hitherto unbuyable."

Imagine, for instance, if $5T removed the top three cancer killers of children.

"Today, we spend $150B to move the cancer mortality rate from 1.001% to 1.000%."

I'm pretty sure we do far better than that already.

"In the United States, the overall cancer death rate has declined since the early 1990s. The most recent Annual Report to the Nation on the Status of Cancer, published in March 2016, shows that from 2003 to 2012, cancer death rates decreased by:

1.8 percent per year among men

1.4 percent per year among women

2.0 percent per year among children ages 0-19"


I will pay unlimited amounts of other people's money to keep myself alive.

"Cancer death rate" is a very tricky number. If I develop a more sensitive test that finds earlier cancers, and no other change is made in treatment, the death rates will fall simply because I've put more people in the denominator.

I know I put it in bold but I need to repeat it: even with no improvement in public health, this change makes cancer survival numbers look better. No one's health is better, but the numbers improve. Yay.
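A toy calculation makes the denominator effect concrete (all numbers invented purely for illustration):

```python
# Toy illustration of how a more sensitive test can lower the measured
# cancer death rate with zero change in actual health outcomes.
# All numbers are invented for illustration.

deaths_per_year = 1_000  # unchanged in both scenarios

# Before: only symptomatic cancers are diagnosed.
diagnosed_before = 10_000
rate_before = deaths_per_year / diagnosed_before  # 10.0% case fatality

# After: a more sensitive test also finds indolent cancers that would
# never have killed anyone. Treatment and deaths are unchanged.
extra_diagnoses = 5_000
diagnosed_after = diagnosed_before + extra_diagnoses
rate_after = deaths_per_year / diagnosed_after    # ~6.7% case fatality

print(f"Death rate before: {rate_before:.1%}")  # 10.0%
print(f"Death rate after:  {rate_after:.1%}")   # 6.7%
# The rate "improved" by a third, yet exactly the same people died.
```

This is the same family of artifacts as lead-time and overdiagnosis bias, which is why population mortality rates (deaths per capita) are usually a sounder measure than case-fatality or survival rates.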

It's not related to the savings in pharma products, it's related to the additional years of life.

I think you're doing some reasoning where the value of cancer treatments to the economy is equal to their market price. For an example of how this doesn't work, consider a rather more ridiculous example (which I think serves to make the point) that we would die without sodium, and hence 100% of all future production is contingent on access to sodium, but we spend somewhere in the range of a dollar per year on it.

Perhaps thinking about zero sum games (more specifically their opposite) or complementary goods (as applied to health) might also help to see that the value can be incredibly much higher than the market price.

I think it's implausibly high, but for very different reasons.

The main obvious difference between AI and NI is obliquely what you said - AI is capital. That is, an increase in productivity from NI might be expected to come with increasing consumption (at least, to a point).

Then there is that issue of productive/destructive. AI is likely to be only accidentally destructive; but NI is prone to fret over relative rates of production and consumption - favoring unbounded consumption, at least in the short term.

Someone will own that AI, and they will either consume or invest what that AI produces.

Also NI needs food and shelter and other expensive things that put a floor on the competitive advantage of lower wages abroad. At least at subsistence, and in practice, for the time being, a lot higher than that. On the other hand, whenever the software gets figured out for some previously NI-only application, the cost of doing it with AI is likely to fall much, much lower than what it takes to keep NI alive.

AI - Absence of Intelligence, a far greater threat than Artificial Intelligence. The risk in China is a Great Leap Backwards, the result of political unrest and economic instability and a new era of oppression, all triggered by a corrupt economic system that confers the benefits of the Great Leap Forward on a very few and the burdens on many. The question is whether China will turn inward (oppression) or outward (military adventurism). Unlike similarly situated countries in the West (e.g., Germany), China has historically turned inward. But unlike in the past, much of China's wealth, and China's economic elite, have moved, not only to Singapore but places like Vancouver. http://www.nytimes.com/2016/04/13/world/americas/canada-vancouver-chinese-immigrant-wealth.html?hp&action=click&pgtype=Homepage&clickSource=story-heading&module=photo-spot-region&region=top-news&WT.nav=top-news. Will the dispersion of China's wealth and elites increase or reduce the likelihood of military adventurism? People with NI would like to know; those suffering AI don't care.

Seems like the big issue remains that of alienable capital that is then owned by a few, resulting in increasing concentrations of wealth, a reduction in people's ability to participate in the production process (and in the need to engage others in it), and the impact that will necessarily have on social institutions and views about what a society and economy is.

Not sure I see that the analogy between the Cray and the modern smartphone is that meaningful, though I certainly concede that the increase in the raw speed numbers is real. As implemented, the Cray could not do anything that our smartphones do for us, and the phone cannot do what the Cray did. Which may actually get us right back to the underlying question: are AI and NI really in competition, or are they so different from one another that considering them substitutes is only possible at a very superficial level?

You could write an app to run 90s Cray software (very small problems by today's standards) on a smartphone. No one is interested. Much more powerful software runs on run-of-the-mill engineering workstations.

No you can't.

Amdahl's laws, or rules, of high performance computing have not been negated by 30 to 40 years of semiconductor development.

Economists seem to live in some fantasy world where the laws of nature do not apply.

After all, AI won't eliminate the lead pipes in all the water systems and stop leaks, eliminate potholes and rebuild bridges, ....

Actual benchmarks show modern cheap computers beating venerable Crays.


On the competition between AI and NI, I am with the #2MA folk who say impact is broadly seen, even with these low levels of AI.

Really it is like the "is free trade good?" question. Articles like the above say "good" but aren't very up front about the duration or pains of adjustment.

Not related to the main argument but I want to provide an update of the comparison to the 1990 Cray-2 computer.
Top of the line smartphones (iPhone 6, Galaxy 5, etc) are more powerful than Deep Blue (the supercomputer that beat Kasparov in 1997).
We have billions of Deep Blues in our pockets. In 5 or 10 years we may have billions of Watsons in our pockets.

Not sure. AI is software (intellectual property) while smartphones are hardware.

So, AI might be available to the masses either through: a) "free services" like Google, Facebook, etc., b) open-source AI, or c) software piracy. But why would people share intellectual property for free? It happens, but it's not common.

"Ideas are what drives growth and ideas are non-rivalrous"
That is not the case at all. Me having an idea does not prevent you from having one, but if they are the same idea and I implement it first, then they are clearly rivalrous. Obviously having more people having more ideas is a good thing, but I don't think it is because ideas are non-rivalrous; rather, more people address the "low-hanging fruit" problem: more people having more ideas leads to more fruit, and even if a lot of it is repetition, some of it will be novel.

Good to know that distributional issues literally don't exist, as always, and that because global capacity to make stuff and the ability of people who own and make everything to make and own more things is just going to expand, that the trend of gradual immiseration of the first-world public - the one driving Trump - is just going to go away as you accelerate the very forces causing it. Because reasons!

We'll all have iPhones! Of course, we'll be homeless. But iPhones!

You've heard it a million times before, but you're not listening. You were chosen because of your disinterest in listening. That's what makes you useful.

Either Alex is familiar with Ray Kurzweil, or he isn't. If he isn't, then he should be. And if he is, then he does his argument an injustice. We could easily be less than 20 years from AI becoming a perfect substitute for NI. That's the natural trajectory of technological progression.

Kurzweil's book about this was published in 1999. It hasn't panned out: http://www.forbes.com/sites/alexknapp/2012/03/20/ray-kurzweils-predictions-for-2009-were-mostly-inaccurate/#74256413441c


It's time to admit the singularity is not going to happen and that we aren't experiencing progress at an ever-increasing rate. If anything it seems to be slowing.

I should caveat that AlphaGo made me just a little nervous.

Just ask it if it is time to plant tomatoes. Then you'll feel better.*

A current tech AI can be trained for a task, like a piano-playing chicken, but it can't go out on the web to self-train for a question.

* - I asked Android this and it knew to point me at the Farmers Almanac, but then I was on my own. Technically it has enough info to answer (my location and the date) but is not really a general intelligence.

I'm more or less in the "AI is harder than people think" camp, but AlphaGo was a major, major advance, and we had essentially no warning, even among people who follow the relevant literature and players.

So it moved my priors a bit. Obviously it's not solving the general problem.

Technological progress is slowing? On planet earth?

Yeah, doesn't that seem obvious to you? Arguments otherwise generally say "well, the progress is now harder to measure," and I sort of agree, but it's hard to argue things are rip-roaring ahead.

I think progress is huge, but obviously it can't keep up with every optimist's dream. Kurzweil is an optimist even among optimists.

It has been slowing for several decades now, especially in the key area of energy. Read Tainter.

Isn't Tainter the pessimist's pessimist? I know he (and the Archdriud) were go-to guys for the Peak Oilers.

Tainter is a realist, which is even more depressing.

Um, his book about the singularity was published in 2006. And his predictions have been mostly accurate. Your link was not a serious analysis.

His 1999 book The Age of Spiritual Machines predicted human-level AI was a couple of decades away. His own detailed analysis of his predictions is more generous than that article. He can point to a lot of specific predictions about technology and devices that were relatively accurate. And other predictions he spins as correct where I'm not so convinced.

However, it looks to me like he seriously missed the mark on the overall progress of AI. From 1999-2017 we simply did not make dramatic progress toward human-level AI. Nor does his newer prediction of the singularity in 2045 look reasonable to me. However, time will tell.

15 years ago, there was optimism in many quarters that AI-like "learning" would result in nearly perfect instantaneous translation of just about everything by ... around now.

But serious translators only use "machine translations" to speed things up a bit for perfect repeats, and that, only within the very same document, and at that, you need to personally double check each and every instance to make sure it makes sense, because sometimes the precise same words mean different things in different places.

As a translator, I used to think that my translation work would be threatened within the next 5-10 years. Instead, messing around with low grade machine translation leads to many organizations a) wanting translation services and then b) realizing that they need a real professional to get it done. The market for serious professionals has expanded as a result.

Perhaps similar such situations will apply to quite a lot of the other areas presumed to be threatened by machine learning.

Ray Kurzweil - "... And his predictions have been mostly accurate. ..."


"Ray Kurzweil's Predictions For 2009 Were Mostly Inaccurate"


I don't understand how Alex has a job. There are a million libertarians on the internet whose education consists solely of Henry Hazlitt that could write an identical article.

You could make the exact same argument about the industrial revolution, but at this point the scholarly debate is empirical and has sharpened to "did it take 60 or 100 years before wages surpassed the point where they were in 1760?"

But hey, if you want an armchair theorist to spout an econ 101 platitude, then you could get Alex or any number of college students that just read Economics in One Lesson.

I don't imagine his tenure was established on the basis of his blog musings.

You don't even touch on what most people think is the most disruptive fact - if AI can do most of the jobs, what do people do? Where do they earn the money to reap the benefits of a world awash in AI?

Why do people need to earn money if AI is producing everything for next to nothing?

Our society is built around the idea that a good life involves productive work. If we transition to a society in which only a few people can do productive work, and the rest are as outclassed in the workplace as a guy with an IQ of 70 would be at a software company, that's going to drive some huge and painful social changes. Now, maybe as AI or super smart humans or whatever take over the high end of the economy, that will create more jobs that people do better than machines. That's the way we'd normally expect the labor market to work--we stop needing so many buggy whips and horse trainers, but we need more car mechanics and gas station attendants. But it's not obvious to me that this must work out in this case.

One place to look at this is the really dysfunctional ghettoes in US cities. Those are places where there is a *lot* of available unemployed labor, probably available pretty cheap. Yet few employers want to put a factory in those places--the advantage of lower-wage employees is probably outweighed by the social problems those employees bring to work, the high crime in their neighborhoods requiring expensive security on the factory, the dysfunction of the city governments that can lead to the factory not having clean water or properly-maintained roads, etc. Could that same pattern extend to make most people equally unappealing for employers?

It's hard to imagine what people will do to have a meaningful life by the time that all the things which currently require human labour are done by machines. I imagine future people will find things to do, but just very hard to imagine what it might be.

I'm inclined to think it might involve a lot of forays into philosophy, art, study and debate about history and politics, various forms of spiritual inquiry, maybe just team sports and other forms of competitive social activity ... perhaps very few people will be much involved in what we consider technological progress. But then, perhaps along the way, someone will stumble on a very simple idea that makes everyone fantastically happy for the rest of history. Say ... mind tricks for achieving universal love, but perhaps something very different from that. But then, how would we satisfy our interest in blood and guts? Video games? Blowing up uninhabitable planets in other solar systems?

Is there a functioning subculture of people who never need to work, but manage to be pretty content and functional? I imagine there's a culture of heirs to great fortunes that can live a pretty good life without ever needing to work, but that world is so far from my own, I have no confidence that I understand anything about it.

Ski bums. Surfers.

"Is there a functioning subculture of people who never need to work, but manage to be pretty content and functional? "


The idiot box.

The products that AI produces will cost money because the people that made the AI want to be paid. This is why people will need to earn money.

Has the level of critical thinking around here really sunk this low?

If someone doesn't have a job, cheap goods aren't much good. Can't purchase anything with an income of $0, no matter how cheap it is.

It is not a foregone conclusion that new jobs will replace the old. When buggy whips went out of style... what happened to the horses? The answer is that since we couldn't find a new use for them they were sent to the glue factory. Fewer horses today than in the past. When capital no longer needs a middle class... why keep them around?

The middle class are better with organized use of pitchforks than horses. Also, our natural ability to have sympathy for other humans, which is far greater than our natural sympathies for other species.

Yes, we can try to take wealth. Maybe we can beat the drone armies sent against us.

Point is, in a world where your labor isn't valuable anymore there is no productive way to consume, you have to take. Talk of sympathy aside, most people don't like being taken from, and will resist. Not much free market in all that.

When you get right down to it, I support whatever gets me what I need to live a halfway decent life. I'll work for it if I can, but if I can't work I'll take. I'll do a mix of both if that's what it takes. I'll form alliances with whoever I have to in order to achieve the necessary objective. I figure everyone else is the same.

asdf - worst case scenario, we can just go back to Little House on the Prairie sort of living. Such forms of resistance are not unheard of. A loose analogy can be found in Nigeria under the British, where many refused to play ball with the powers that were.

If things got bad enough, I imagine it wouldn't be hard to organize a sort of proletarian sanctions against the elite class, something along the lines of "buy small industry only and screw the corporate/elite powers that are screwing us" sort of things, which would leave them relatively poor and weaken their ability to stave off foreign threats due to an all-round weaker economy. Foreseeing such possibilities, I don't imagine it would get that bad.

I believe we are staring at the end of capitalism, if not in my lifetime, surely in my children's. There is nothing "natural" or "God-given" about capitalism; it is a system that prospered because it provided something for everyone. If automation replaces labor, and IT and AI replace middle management and clerks, we will have a huge mass of people with no means to earn a living. Will they just decide that they are evolutionary losers and meekly die off?

Marx's views about the collapse of capitalism weren't all based on the bad working conditions of labourers in his day and their discontent with the situation. His predictions were not entirely unsound, in the sense of capitalism producing ever more until there was surplus beyond belief. I haven't read the stuff for many years so my interpretation is bound to be at least a bit off the mark, but the idea of the internal contradictions of seeking profit and surplus, seeking new markets until every market was saturated ... well, eventually something might give.

I don't think the end game would necessarily be communism. But I don't see particularly strong reasons to see why it might not be altogether unlike it, not in the sense of a command economy, but in the sense of an economy which all people receive according to their need, and all people contributing what they have to offer.

"I believe we are staring to the end of capitalism, If not in my lifetime, surely in my children’s."

Robots and AI won't replace capitalism. Sure, most of the goods you might want will be as cheap as water at a public fountain. So cheap that it doesn't make sense to charge for it directly. But scarcity will still exist. People will strive to earn currency so that they can have an up front seat at the hottest live concert. People will pay to own a rare painting their friends will admire.

How will people earn the money? In a country such as the US, there will probably be a direct living subsidy that you receive from the government. Probably for doing something that another person finds marginally useful (working). We might end up looking like the Jetsons. We might continue to work 40 hours per week at jobs that are less repetitive and more interesting.

Regardless, capitalism will still be around.

Yes, even when we can basically all have every material thing we can basically dream up, people will still be driven to create even more newer and better things, so they can buy positional goods. Then again, desiring status can be a cause of much dissatisfaction because only a very few people can win ... perhaps a flowering of a million subcultures could help to deal with that - everyone could have status of some sort in some subculture, and these subcultures would strive to position themselves as superior to others (not altogether unlike the present day, in a sense, but perhaps taken much further).

It's not at all clear to me that it would lead to more happiness, but there would still be things driving many people towards innovation.

And I will position my subculture to be superior to your subculture by nuking yours.

I assume that we'll be neighbours and live social lives which are not very contingent on who lives next door.

If you look around the country, there is still a lot of work to be done. I'd strongly support an EITC to help those who are under the societal threshold to get and stay attached to the working economy.

I agree. It would require raising taxes, but an EITC is far better than raising minimum wage.

Yes, we need MORE support for jobs, any jobs -- help in day care centers, help in elderly care centers, help in recycling centers, help in moving stuff, help in coaching, tutoring, "teaching"/ supervising group learning & group testing.

Instead of mandating an increase in the min. wage, there should be SS tax credits for lower-wage folk, so the business pays their wages but does not pay SS taxes to the gov't for them; and the gov't pays their SS for them (so they see the full SS amount in wages).

"Middle Class" will be more of a lifestyle choice -- couples who marry and have kids and are responsible about going to work when they find work, and similarly going to re-training courses when their work changes, will be able to live the American Dream, double-income with kids, hassles and all. Those who choose other lifestyles will much more often be lower-class poor, or in many cases upper-class big successes.

What would really be better is more AI to rapidly replace gov't bureaucrats -- so info collection and processing "for the gov't" is done more by gov't bots.
Including teachers & professors! MR University with TA-bots and examination bots / review Q&A -- no need to pay for any more econ professors...

Why is this post tagged in "Travel"? What does any of this have to do with travel?

More time for travel?

Alex just elides the problem of how to maintain a peaceful world of trade, as if policies that promote trade automatically preclude warfare. What if the Chinese take over the South China Sea and impose costs on travel and trade, essentially holding tanker trade hostage to their foreign policy? What if they use their international clout to bully countries into supporting their policies, as they do with Taiwan? Couldn't the threat of conflict be used to support undesirable policies? Or to negotiate asymmetric trade deals? We already know that China is willing to lower their own income to promote politically favorable deals. We know that they were willing to risk huge losses in trade income to crush the demos at Tiananmen. Does Alex think that a policy of pure pacifism won't lead to losses for the West? Does he imagine that any serious attempt to curb excesses will not risk outright war? There is no easy middle ground that is stable, especially if Japan comes to see the US as unreliable and starts to ramp up militarily, as they are already doing. Libertarians ignore the ways in which the Pax Americana came at great cost and assume that maintaining a stable, fair peace is easy.

Why do the Chinese want the South China Sea? If it is just for local oil drilling that has zero impact on the US.

I doubt it has much direct impact on the US, but it certainly impacts Malaysia, Vietnam and the Philippines.

A four way split would probably not satisfy all of them, so here we are.

Some guy drew a line on a map of what they wanted and they haven't changed their minds since. I think they just think it's theirs, mostly, no matter that the historical evidence is pretty weak. Then again, the historical evidence is similarly weak on the other side too.

As one of the most ancient and long-lasting civilizations in history, one which has in almost all of history been a pre-eminent military and economic power (a short blip from about 1850-2000 excepted), it shouldn't be too hard to understand that they feel this way about it.

The oil and fishing is just an added bonus.

"one which has in almost all of history been a pre-eminent military and economic power "

So, might makes right, eh?

Hmmm, the US has a historical claim to all that Canadian land north of the 49th parallel.

I didn't say I like it. I'm explaining my understanding of the perspective. And anyways, no one else has a stronger claim on it. If it's not Chinese, then whose is it?

If the US were a 5000 year old civilization that had been traipsing around Canada for the last 2000 years, the invasion would have happened already. As it is, it won't happen because it would put a lie to every last bit of what values America claims to stand for (and to a degree genuinely holds).

Like, what if China decided to paint one of the largest bull's eyes that has ever been painted in history?

Why would they shoot themselves in the foot and bring on risk of catastrophic endings? I don't trust the Chinese in a lot of ways, but I don't think they're stupid, or particularly inclined towards using their military for the purpose of extortion.

"It’s been estimated that a reduction in cancer mortality of just 10 percent would be worth $5 trillion to U.S. citizens (and even more taking into account the rest of the world). A reduction in cancer mortality is more likely to happen with a well-educated China than with a poorly educated China. So we have a huge amount to gain by greater NI."

What is the basis for this statement? Education, like immigration and now cancer research, apparently has no diminishing marginal returns. This is like the immigration argument: a million immigrants are good so a billion would be great, and really, if we would just put the entire rest of the world in the Continental US, we'd have Heaven on Earth.

Economists seem to lack any qualitative sense. We can put 10 million Chinese scientists to work full-time on the Phlogistonic Principle. No idea why Silicon Valley is designing social media and consumer apps instead of putting their giant brains to work on cancer research in pursuit of Alex's $5 trillion ROI.

There are a lot of very smart people grinding away at cancer research, so even if you are unusually smart, you are unlikely to be the guy who finds the breakthrough. (Also, you need about ten years of specialized education to get to the point where you can make much of a contribution.)

Anyway, don't sell the internet guys short. Facebook and Twitter seem kind-of trivial, but both have had an enormous impact on politics and society. Wikipedia and Khan Academy didn't make anyone rich, but I expect that their impact on the world will be enormous and enormously positive. (Why *shouldn't* a bright 12 year old be able to learn calculus?)

We could make a lot of progress on cancer research if every person submitted their DNA and submitted to 100% observation of all their habits.

But considering the diversity and horrificness of the ends to which such information might be put, I'm pretty content to wait a few hundred years to figure it all out, even though that implies a much lower probability that I will survive cancer if it ever strikes me.

Different analogy, but what if millions of well-trained Chinese and Indian taxi drivers and truckers emigrated to the US, never slept, and worked for nothing? Would that be a boon for the typical non-taxi-driving American? Certainly, but it would cause a huge dislocation among existing transportation workers.

This future of the self-driving car, an example of an entire industry being replaced by technology/competition (as also happened on a slower/smaller/more local scale with US steel), will benefit most of us, but those displaced will be significantly harmed in the short- and likely long-term.

Workers and their careers aren't as dynamic as we'd like them to be. Hopefully there will be new roles to supplant the past, but globalization (and by extension, AI growth) has surely had an impact on labor wages as overall labor has become more plentiful.

The future is clearly pointing to Guaranteed Minimum Income, conditioned on two or one-and-done children. Or maybe none, depending on your social and biological profile.

I think the US should keep and expand the EITC. The critical component being the work requirement. A person with a regular job is more stable than one without. Society benefits from the stability. Also, that person automatically contributes to the tax rolls, inherently subsidizing their own subsidy. Furthermore, they lay the building blocks for future success that will contribute to society and to future tax payments. It's a virtuous cycle.

That is my argument for protecting people's employment prospects by borders and tariffs. You can pay welfare at the cash register, or you can pay it via government transfer payments. The former strikes me as far less dysfunctional.

While I think in the near term keeping and expanding the EITC makes a tremendous amount of sense, that's conditional on people's labor being worth something more than zero, right?

What if AI and robots work pretty well and it's just not worth paying some people _anything_ for their labor? The minimum wage is sort of hiding that part of the picture nowadays - those people are just unemployed - so we don't really know where the minimum is.

"...that’s conditional on people’s labor being worth something more than zero, right? ..."

That's always the case. A lot of the "jobs" might pay very little and demand very little.

"Ok, I need someone to organize the local community bowling nights, four nights a week. Which one of the regular bowlers is looking for an EITC qualified job? The pay is only $20, but the 20 hours makes you eligible for the $120 weekly EITC payments. You still get to bowl of course, you just have to handle the paperwork and awarding trophies and such."

JamieNYC, upthread, makes the point that AI might bid awfully close to $0 for many jobs. The EITC doesn't solve that problem. The EITC works great for the $1/hr work problem you pose. At least I'm reasonably confident it would work well for the $5/hr problem we're facing today. Surely better than banning those jobs altogether via a price floor.
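The bowling example above is really just wage-subsidy arithmetic, which can be sketched out. All the numbers (a $20 weekly paycheck, a $120 weekly EITC payment, 20 hours of work) come from the hypothetical in the comment, not from any real EITC schedule, and `effective_hourly_wage` is an illustrative helper, not an official formula:

```python
# Sketch of the effective-wage arithmetic in the hypothetical bowling job above.
# Numbers are the comment's made-up figures, not real EITC parameters.

def effective_hourly_wage(employer_pay, subsidy, hours):
    """Total weekly compensation (paycheck + subsidy) divided by hours worked."""
    return (employer_pay + subsidy) / hours

# $20/week from the employer + $120/week EITC over 20 hours of work:
wage = effective_hourly_wage(employer_pay=20, subsidy=120, hours=20)
print(wage)  # 7.0 -- a $1/hr job becomes a $7/hr job from the worker's side
```

The point of the sketch is that the employer only has to value the labor at $1/hr for the worker to end up near a living wage; the subsidy, not the employer, closes the gap.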

It might turn out in 100 years that we just need basic income to support all the flower-eating Eloi because there simply isn't work to be done.

In response:

1. That will be a nice problem when it happens.

2. If we are paying an EITC, the way that we will get to the flower-eating future is by slowly easing into it, rather than trying to jump in all at once.

3. I'm really skeptical that we'll ever find there's no work to be done for most people. Do you think that most people will get to the place where they can't answer the question "if I could hire people for 30 hours a week, what would I have them do?" That seems to meet the definition of paradise -- there is nothing you want -- so, again, it'll be a nice problem to have. We just might not be able to imagine the jobs today, just like someone from 200 years ago couldn't imagine the majority of jobs today.

4. Having the EITC will let us have price signals to say which labor is more valuable, even comparing the 10cent/hour job to the $1/hour job. Those 10cent/hour jobs might just be something like "play Farmville for me." Which sounds nuts, but if someone wants to hire someone for that, well, okay.

I don't want the consideration of some extreme and hypothetical future case to wash out a good solution for the problem in the here and now.

"Different analogy, but what if millions of well-trained Chinese and Indian taxi drivers and truckers emigrated to the US, never slept, and worked for nothing? Would that be a boon for the typical non-taxi-driving American? Certainly, but it would cause a huge dislocation among existing transportation workers."

+1, I agree with this strongly. Self driving cars will be developed and implemented far faster than former professional drivers can find work. There will be transitional issues.

I was actually thinking about this (AI replacing middle class jobs) over the last couple of days.

I was doing a review of my home/auto/umbrella insurance policies. My current vendor's website - ahem - does not provide much support for analyzing the availability and magnitude of the various discounts, which policy features, limits, deductibles, etc. can be changed, and how much that changes policy cost. So I called the agent, and she filled the gaps in the website. I suspect someone with a 115 IQ, a month's training, and a couple of months' experience could easily do this job quite well. I suspect the agent earns a reasonable wage.

As I was doing this, it occurred to me that the insurance company has a fairly high incentive to automate this job away, if they could buy an adequate AI. The capital cost could be pretty high for the AI package - I suspect the hardware costs are trivial - but given 24/365 uptime, no data-entry mistakes, etc., it would likely pay for itself.

If I wanted to shop vendors, even with AI agents, that's still a bit of a pain. Ideally, I could rent a certified insurance buyer bot once a year for a couple of bucks, and have it talk to the vendor bots, and get a really optimized solution.

A lot of customer service positions could be automated. But, even with lots of training and experience, with a human being you at least have a chance to intuit that they are feeding you lines that are against your interest. With AI, you can only assume that the service provider will tweak all the code in a way that is solely focused on maximizing the profits they will get from you - with a human, there is at least some chance that you might be able to extract the conclusion that "actually, that's not the right product for you, even though the company will earn less profits from that choice".

For me, one of the strongest determinants of customer loyalty is receiving advice from a salesperson that I might like to consider an option that earns them less profit, and that they only want to sell the expensive product if it meets my needs. Long & McQuades, a music chain in Canada, has basically earned my lifetime loyalty as a result. I have precisely zero customer loyalty to any other company I've ever done business with.

Maybe the programming can be tweaked to account for such things, but I don't think anyone will ever really trust the advice of a computer.

Reality is actually the opposite. A human salesman is more persuasive selling a human buyer things he doesn't need. The price is not so transparent, and it's difficult to make accurate price comparisons. With an automated system, an insurance provider (or lender, etc) must compete solely on price, since comparisons are immediate and transparent.

Yes, humans can be more persuasive, but at least you can say "if you don't cut with the bullshit then I'm leaving right now and telling all my friends what a bunch of scum you are. Can you provide an honest recommendation?" and evaluate whether they have the ability to do so in the space of the seconds that follow. With AI, it would be essentially impossible to evaluate any such sort of human cues.

Also, the programming of the AI could be changed any time. So, unlike a situation where you always go back to the same shop because you trust the advice of a specific salesperson, you never know when the programmers might decide to start turning on you and milk their good reputation to start increasing margins against your interest.

Yes, computers are good at competing on price. But the people programming the AI will always be motivated to lead you to the options where they make the fattest commissions (accounting for volume, and accounting for the need to build credibility, at least initially), not necessarily the option that is best for you.

What I intended to imply by the "certified buyer bot" was that you could rent/buy a bot that would operate in and protect your interest. For example, Consumer's Report might market a buyer bot for insurance, large appliances, tires, etc.

If you want recommendations for music, art, wine, etc., where there is a larger subjective element, that may not be a good area of application (although Pandora, albeit an algorithm rather than an AI, is presumably trying to find stuff you like). Systemic reputation/satisfaction ratings also serve some of the protection-against-bad-advice function once served by local tradesman/shopkeeper reputations.

"But the people programming the AI will always be motivated to lead you to the options where they make the fattest commissions (accounting for volume, and accounting for the need to build credibility, at least initially), not necessarily the option that is best for you."

Tried financing or refinancing a home lately? Shopped around for the best rate? You talk to a human "loan officer", but that human is purely an intermediary between you and a pricing engine, with the distinction of being an expert BSer. They get your information and plug it into an algorithm that spits out a rate. It's nothing you couldn't do yourself. However, lenders are NOT going to get into the business of automated lending because they know there is simply too much money to be made using tried and tested BS salesman tactics that automation threatens to obviate.

Engineer - Ah, OK. That could be a very interesting market. However, there will always be the temptation to milk a good reputation to start giving suboptimal advice to earn higher commissions.

Cowboydroid - I get your point. The only experience I've had in that sort of market was looking around at car insurance when I was considering buying a car. I think a similar situation applies though. I think the software tools are good at pure price comparisons, but are bad at comparing quality.

Actually, what matters to me more than price is knowing that I've got someone on my side to go to bat for me when the insurer (in your case financier) starts with all the obfuscation, etc. I'm very willing to pay a premium to someone who has expert knowledge in making sure that I get the maximum value out of the financial service that is provided. For similar reasons to what I said above, I'm very skeptical that any company's AI (or simply software) will be coded to do that. However, Engineer's idea about a "certified buyer bot" could plausibly address this sort of situation.

Lots of pessimism here. As usual, progress seems to generate much anxiety about jobs. If most people are out of work, it's not a bad thing. It will be the new normal. The owners of capital (Zuckerberg/Page etc.) will be richer. They'll be taxed and everyone will get a stipend. People can work on their hobbies, go hiking, immerse in VR all day, spend time with their AI friends (likely an improvement over their current friends), probably augment themselves cognitively, physically, emotionally.

Work is overrated; for most of history it has been drudgery/slavery or a rat race.

No, because there's not going to be a true AI (as opposed to powerful, but essentially mindless and innovation-incapable, computers). It's as much a pipe dream as perpetual motion.

There is already an existence proof. It's us, so we know it can be done. The time frame is a different story.

If AI were to find a way to reduce cancer mortality by 10%, I don't think anyone disputes that that would be good for the middle class. The issue is AI replacing middle class jobs. Will THAT be good or bad?

"Similarly AI is not a perfect substitute for NI and there are plenty of opportunities for complementarity. An AI that drives your car, for example, complements your NI because it leaves more time for more productive tasks."

Only if the commuting office drone wants to spend an extra hour of his day on office drone work.

When people say that automation costs jobs they usually think of workers being "replaced," but much of it comes from workers being "complemented" by machines, allowing the employer to do the same task with fewer workers. Accounting software "complements" the accountant, which he welcomes. But the long-term effect of this is to allow a team of 5 accountants to do what it used to take 7 to do, harming the employment prospects of accountants as a class.

"AI, however, is produced by capital. But in our world capital isn’t scarce. The world is awash in capital and computing power is getting ever-cheaper. AI isn’t like an oil field owned by a handful of people. AI will be cheap and ownership will be widespread. Just look at your cellphone—it’s faster and more powerful than a multi-million dollar Cray-2 supercomputer of 1990. Moreover, in 1990 there were only a handful of Cray-2s and today there are billions of cell-phone super-computers including hundreds of millions and soon billions in poor countries. The gains from AI, therefore, will flow not to capital but to consumers. So if anything the gains from more AI are even larger than the gains from more NI."

The cellphone is a technology that people see, and it's a technology where the gains are most likely to flow to the consumer, because the consumer is one who buys it and can put it to work. But the technology that is less widely seen, the robot in the factory, will be owned by the "capitalists."* How much of that gain will go to them and how much will go to the consumer will depend on various factors, but if the last 35 years are any indication, we can expect a lot of it to go to capital.

I think AI replacing jobs will negatively affect the fortunes of the middle class and already has. Though it will not negatively affect all, or even a majority, of that class. When the company buys the new accounting software, it probably will not lay off 2 of the 7 accountants. The decision to do so would be made by a middle-management person who would not directly benefit from saving the company money, and however much libertarians extol "creative destruction," the attitude of most Americans is that a company should not lay off workers unless it is "forced" to do so by economic conditions (the Great Recession gave many companies the excuse to remove the dead weight). In addition, laying off the accountants without cause might create morale problems among the remaining accountants. So, the company waits until one of the accountants retires or leaves for another company and doesn't hire a replacement. It is the recent college graduate, facing a scarcity of job openings, who will suffer the most.

*For lack of a better word.

" When the company buys the new accounting software, it probably will not lay off 2 of the 7 accountants. ... So, the company waits until one of the accountants retires or leaves for another company and doesn’t hire a replacement. It is the recent college graduate, facing a scarcity of job openings, who will suffer the most."

+1, that's my experience as an Automation engineer. Very few direct layoffs. Instead, less hiring in the future. And that's exactly the pattern we've seen in US manufacturing over the last 30 years. Increased production, decreased staff.

I found it to be an appealing idea that AI could be broadly owned/used/accessed. But probably your guesses are more realistic.

In translation, I very rarely use machine translation aids because most of my projects are unique. But some agencies require it because it's part of their system. Well, sometimes it's just because it's part of their system, but sometimes it's also because they use it to extract price concessions from translators.

It goes like this. You have to shell out loads of cash on translation software. You have to spend lots of time learning how to use it. Then, when it comes time to do the translation, you still have to independently translate everything. But the software calculates "matches" and "close matches" and the agency will offer to pay a 50% rate for "close matches" and 0% rate for "matches". The "matches", you still have to verify each and every one of them. The "close matches", actually, they often take even more work than other stuff, because you need to be exceedingly careful that you're not making mistakes in translating merely similar things identically, or that the contexts are too different and a different translation is needed.

The result is the translation software companies (capital holders) gain by selling software and training services, the agencies (capital holders) gain by streamlined project management and the ability to extract concessions from translators, and the translators themselves are required to pay upfront money to play the game and see any efficiency savings entirely eaten up by the capital holders.

Of course, in translators' forums, it is very much agreed that any remotely decent translator who is not desperate will not so much as waste their time communicating with people who offer terms like 0% pay for things that still require work or 50% pay for things that might take more work. A few years ago, these offers were extremely common in translation. But it seems that many of the agencies with better and higher-paying clients are cluing in that they only get the dregs when they try to work on such terms. Such situations will presumably repeat in every brain-intensive profession where AI aids become increasingly relevant.

The assertion of NI from the developing world as economically equivalent to AI in the developed world without making an argument as to why that's a valid thing to do is pretty obnoxious. The main concern with AI is that it undermines the principle of comparative advantage in a way that trade with China does not. There is a huge, huge difference between AI and NI in the near future. That difference is physical location. The growth in NI in the world will be separated from labor markets in Western countries by thousands of miles and by barriers both legal and cultural. You implicitly recognize this when you point out that smart people in China are not perfect substitutes for smart people in the US. By similar argument, dumb people in China aren't perfect substitutes for dumb people in the US.

We seem to be reasonably confident that AI will be a complement rather than a substitute to smart people. That's not what bothers people about AI. What bothers people is the possibility AI might turn out to be a perfect substitute for dumb people. It's more along the lines of "what happens to society when most < $25/hr labor tasks are ones with strict dominance of AI over NI? What happens if that threshold becomes $40/hr? What happens if comparative advantage collapses because there's nothing productive whatsoever for some people to do? What happens when the most valuable asset (i.e. time/labor) of a third of the population utterly collapses in value?" If you're above the threshold, life will be fantastic. Better than the Jetsons. If you're not, then you will be pitied and despised by everyone who is, and politics will be about how much welfare will be required to keep the zero-marginal-value labor on enough oxy to keep it from revolting. The "are-you-better-at-something-than-the-machines" threshold could become an unbridgeable and unmanageable social chasm.

I find it interesting and telling that discussion of AI cars overwhelmingly is about what drivers-becoming-passengers will do with all that lower-stress free time. The effect of that will be mainly expanding the footprint in which the typical suburban American lifestyle is feasible. The bigger economic transformation will be the one that results in "truck driver" being annihilated as a job category. Long-haul truck drivers are limited to 10 hours/day on the road (for fairly sensible and obvious safety reasons). Switching to AI will result in an immediate (more than) doubling of the utilization of the capital goods in question (the trucks). Furthermore, the increase in capacity utilization will tend to occur at the lowest-traffic times of day, since the AI won't fall asleep behind the wheel at 3:30 AM, and so not only the trucks but the roads will also get a huge increase in capacity utilization. The cost, logistical, and safety benefits will be colossal. It's also an advance very much in the foreseeable future. That switchover is closer to 10 years away than it is to 50.

It will also mean that an entire employment category encompassing 850k people earning mean incomes of $43k (http://www.bls.gov/iag/tgs/iag484.htm) vanishes. If most of those people can find alternate productive employment, then NBD, our collective Jetsons future advances apace. The anxiety of AI is that they won't. Very few of these truck drivers will be capable of programming the bots that replace them. Their next-best-option might be a long, long way down. What worries people is not the substitution effect but the wealth effect.
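The stakes above can be put in back-of-the-envelope terms. The 850k jobs and $43k mean income come from the comment's BLS link; the 20-hours-per-day figure for driverless trucks is an illustrative assumption (loading, fueling, and maintenance presumably still idle the truck), not a sourced number:

```python
# Rough sketch of the trucking numbers in the comment above.
# drivers and mean_income are from the cited BLS figures; ai_hours_per_day
# is an assumption for illustration only.

drivers = 850_000
mean_income = 43_000          # mean annual income per driver, USD
human_hours_per_day = 10      # regulatory driving limit cited in the comment
ai_hours_per_day = 20         # assumed: downtime for loading/maintenance remains

# Annual wages at stake if the job category vanishes:
wage_bill = drivers * mean_income

# How much harder each truck (the capital good) works under AI:
utilization_gain = ai_hours_per_day / human_hours_per_day

print(wage_bill)         # 36550000000 -- roughly $36.6B/yr in driver wages
print(utilization_gain)  # 2.0 -- the "more than doubling" claimed above
```

The asymmetry is the point: the utilization gain accrues to truck owners and shippers immediately, while the $36B+ question of what the displaced drivers do next has no comparably automatic answer.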

Of course it will decimate the middle class, because that is the whole point of AI. The 1% will be able to liberate themselves from all these nasty little people who want decent living conditions. Couple that with brown-dwarf pool cleaners, landscapers and nannies and you've got Zuckie's Utopia.

"As I said in my TED talk, the brainpower of China and India in the 20th century was essentially “offline”. Instead of contributing to the world technological frontier the people of China and India were just barely feeding themselves."

I wonder if there might be some diminishing returns in this model too. A large number of 'online' natural intelligences can become a crushing intellectual orthodoxy as well. Some examples of great innovation came from small isolated regions of minds somewhat disconnected from each other. For example, Europe was fragmented politically and linguistically for much of the Industrial Revolution.

Keep in mind the law of diminishing returns. If gathering more and more intelligences together always has a positive return, then human civilization would have probably quickly converged to a single world state.

Perhaps this could be overcome by intelligence that does NOT think the way other human intelligences do and is not susceptible to peer pressure or social conformity in the way humans are.

"If gathering more and more intelligences together always has a positive return, then human civilization would have probably quickly converged to a single world state."

Only if people don't care about sustaining their language and culture. And if political power holders were indifferent to losing their localized power base.

Even if people did care, if more intelligences are always more advantageous then that interest alone would still push towards a single world state and culture. 1 billion English speakers, for example, means 1 billion intelligences pushing that 'innovation frontier'. Sure, Flemish speakers like their language, but how do they counter all that pressure for English, given English speakers like their language at least as much?

It would seem at some point having more intelligences in your corner can serve to trip you up rather than help you out. That would explain how minority speakers and groups could have avoided being all assimilated into one Human Culture Borg ages ago.

Not to mention that 56% of USAers own stocks. Presumably stocks would do very well in a more automated world.


"Americans' self-reports of having money invested in the stock market -- either in an individual stock, a stock mutual fund or in a self-directed 401(k) or IRA -- were routinely higher than 60% prior to the 2009 economic crisis, but they have not yet returned to that level. That pattern is particularly evident among adults in middle-income households with incomes ranging from $30,000 to $74,999: 56% now say they own stocks, consistent with the percentage in 2010 but well below the 72% found in 2007 before the financial crisis. Stock ownership also remains down slightly among lower-income households (under $30,000), while it has held steady near 90% since 2007 among those in households earning $75,000 or more annually."

Of course if the change is too steep it could cause temporary problems. It is always the delta that gets you in economics.

"Own stocks" covers the ground from Warren Buffett, who owns millions/billions of shares, to chaps who happen to just have a single share of a company for some reason.
