Neglected Open Questions in the Economics of Artificial Intelligence

That is my essay in the new NBER volume The Economics of Artificial Intelligence: An Agenda, edited by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.  Here is one excerpt from my piece:

These distribution effects [from more powerful AI] may be less egalitarian if hardware rather than software is the constraint for the next generation of AI.  Hardware is more likely to exhibit constant or rising costs, and that makes it more difficult for suppliers to charge lower prices to poorer buyers [price discrimination].  You might think it is obvious that future productivity gains will come in the software area — and maybe so — but the very best smart phones, such as iPhones, also embody significant innovations in the area of materials.  A truly potent AI device might require portable hardware at significant cost.  At this point we don’t know, but it would be unwise to assume that future innovations will be software-intensive to the same extent that recent innovations have been.

You can buy the book here; it has many notable contributors and other essays of interest.


Maybe this is true, but what we see today are falling costs for manufactured goods, which I assume would include hardware.

Some people even posit that 3D or additive manufacturing processes will be the next wave in reducing manufacturing costs.

I think what he's getting at is the end of Moore's Law, which really does put a crimp on future developments. Either that or he really does not understand manufacturing.

The improvement of computation over the past 50 years really has been largely due to Moore's Law of semiconductor scaling. Software has been very peripheral to this.

The economics isn't there for expensive, one-off hardware. Not too many companies buy a supercomputer, for example. The trend the last half century has been away from big mainframes to cheaper cloud computing. Renting is cheaper than owning and thanks to Big Tech, compute is cheaper than ever. Your teenage nephew has at his beck and call a giant cluster of supercomputers at a good price courtesy of Jeff Bezos. The only real game changer is quantum computing, which is in its pioneering phase, but building an AI on a QC is barely a glimmer at this point.

Well, that's not the question he's asking. Cowen is assuming an important AI task that is so resource and compute hungry (and that is fundamentally single threaded) that it must run on the most expensive and tuned hardware. What happens then?

Fortunately, Cowen is wrong when he says "Hardware is more likely to exhibit constant or rising costs." Hardware hasn't yet, and I don't see why it would start now. Besides, if there is some important chunk of code that is too expensive to run on general-purpose programmable computers, the industry will just map that code into hardware accelerators. That's what we've always done in the past (e.g., floating point and 3D graphics), and that's how it will get handled in the future. I might even have friends who are doing that right now.

Considering that the world's most valuable companies are tech companies, and considering how cash-rich they are, if such an AI were real they would be all over it. It's not like Apple or Google won't pony up for some good hardware. They even have the ability to design it themselves, like what Google did with TPUs to train AlphaZero.

Yes, this is why I have no worry about the hardware for self driving cars being prohibitively expensive.

Trump's trade war is what will make hardware prices go up. Prices are cheap now because the supply chain is in Asia where the competition is ruthlessly deflationary. Unless Africa or Latin America is able to pick up the slack and create an army of chip engineers, expect costs to climb.

Re supply chains: Foxconn just said it could manufacture the iPhone entirely outside of mainland China. Samsung makes phones too.

There are gluts in steel, aluminum, autos and almost any manufactured product you can name.

I think the "only China can provide the supply chain" argument is a red herring.

The multinationals have formed an alliance with the Communist Party of China to create a large manufacturing base. They wish to protect that base at all costs, and will pour unlimited funds into political campaigns, media, academia, think tanks, and lobby groups to put their message across.

Isn't it telling that you wrote, accurately, that 'Foxconn just said it could manufacture the iPhone entirely outside of mainland China.'

Apple does not manufacture iPhones, after all.

And Chip S. did not write China; he wrote that the supply chain is in Asia, which is not contradicted in the least by pointing out that Foxconn and Samsung are not (PRC) Chinese.

China is the cheapest place to get your electronics assembled. If you remove them from your supply chain, then prices automatically move up. Are the multinationals doing the hard sell? Of course, but you still can't deny basic economic reality. Those jobs are also not going to the US. They are going to SE Asia which is still part of Asia.

If true, the cost of hardware would only be a temporary brake on AI; once you have a superintelligent being, it can probably figure out ways to solve the hardware problem pretty quickly.

Yeah, Tyler assumed the hardest part of the question. If a Super-AI were a real thing, there are no limits to the amount of money world governments should pay to obtain it. Print unlimited amounts of money if you have to. The Super-AI will quickly fix your hyperinflation problem because, well, it's smarter than the whole of humanity.

SUPER-AI HYPERINFLATION SOLUTION: Don't print money to buy a super-AI.

One of the most striking aspects of neural networks is the extent to which they can be 'distilled' or 'compressed': training a neural network can entail truly staggering amounts of computing power, but once it has been trained, it can then be simplified to 1/100th or less of the size or FLOPs with little or no degradation in performance. An example is DeepMind's WaveNet: it ran slower than realtime on the best GPUs, but when distilled down into a small flat neural network, it now runs much faster than realtime and so cheaply that it can be used by Google Assistant for free.
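The distillation idea above can be sketched in a few lines. This is a toy illustration, not DeepMind's actual WaveNet pipeline: here the "teacher" is a 100-member ensemble of noisy linear classifiers (expensive: 100 forward passes per prediction), and the "student" is a single logistic model fit to the teacher's soft outputs. All data and numbers are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points whose true label is the sign of x0 + x1.
X = rng.normal(size=(400, 2))

# "Teacher": an ensemble of 100 noisy linear classifiers.
# Each member's weights are the true direction [1, 1] plus noise.
members = np.array([1.0, 1.0]) + 0.5 * rng.normal(size=(100, 2))

def teacher_probs(X):
    logits = X @ members.T              # (n, 100): one column per member
    probs = 1 / (1 + np.exp(-logits))
    return probs.mean(axis=1)           # ensemble average = soft targets

# "Student": a single logistic model distilled from the teacher's
# soft outputs by plain gradient descent on cross-entropy.
w = np.zeros(2)
b = 0.0
soft = teacher_probs(X)
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - soft                     # cross-entropy gradient vs. soft targets
    w -= 0.5 * (X.T @ grad) / len(X)
    b -= 0.5 * grad.mean()

# One cheap 3-parameter model now reproduces the 100-model
# ensemble's decisions on essentially every point.
student = 1 / (1 + np.exp(-(X @ w + b)))
agree = np.mean((student > 0.5) == (soft > 0.5))
print(f"student/teacher decision agreement: {agree:.2f}")
```

The same move scales up: once the expensive model exists, the cheap imitation is what actually ships on the phone.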

Even without the little ASICs now showing up on smartphones, there would still be many NNs running on them these days. TPUs and ASICs are more about saving battery than any real requirement.

This also ignores the role of large neural networks in research progress towards better ones. Compare ImageNet winners like AlexNet to newer architectures like MobileNet, which are 10x or 100x smaller or faster at much better classification performance - *before* any kind of compression or distillation. Or consider style transfer. In 2014, style transfer required a day on a top-end desktop GPU. Within a few years, algorithmic improvements had brought it down to realtime on a smartphone.

Something to think about.

Cowen is making a related point: the boy wonders in tech don't do hardware well. Recall that Steve Jobs intended his devices to be manufactured in Silicon Valley, but the effort failed miserably; when Jobs returned to Apple he hired Tim Cook to develop the supply chain in China. And the rest, as they say, is history. More recently tech set out to build the car that would be driverless, but quickly abandoned the effort when they discovered it's hard to build a reliable car (i.e., hardware). China's experience in building hardware will likely give China the advantage in developing AI.

'the boy wonders in tech'

Remember when tech or Silicon Valley meant companies like Intel or Sun or HP, world leaders in doing hardware well?

Seems like a century ago, doesn't it?

Semiconductors largely left Silicon Valley in the early '90s. The tech crash of the early aughts largely killed tech manufacturing in this country, aside from a few giants like Intel and the OEMs. The OEMs are still all American companies.

'Semiconductors largely left Silicon Valley in the early '90s.'

Hence, seems like a century ago.

I think Intel still has a production line there, and there are probably a few test lines, but fabs have mostly left the Valley. Design has become more diffuse, but the Valley still dominates. When one talks about the "boy wonders in tech", I suspect they mostly refer to design and not manufacturing.

Assumes expensive hardware has to be in your pocket. Expensive hardware only needs to be created once, then distributed globally.

Absolutely correct. Look at Google's new Stadia gaming platform for a simple proof of concept.

For what it's worth, the excerpt is reminiscent of marketing efforts for Google's cloud computing services. Maybe a decade ago there was a very good paper titled "The Unreasonable Effectiveness of Big Data" which argued (between the lines) that victory would go to those who had the largest databases and most capable datacenters. The paper sparked a minor panic outside Google and probably launched a thousand "big data" projects in other companies.
You should probably include "data" in the same breath as "software" and "hardware" when considering the inputs necessary for dominant AI.


The real bottleneck in the use of AI is the data you need to train it. Why do you think so many companies are furiously collecting and archiving huge data sets on as much as they can? Because AI needs big data, and in the future AI will be much more powerful.

We've known how to use neural nets to do image recognition for a long time. What we lacked was a database of millions of photos, all catalogued with subjects identified in them. But once Google started archiving photos, the data to do image recognition training finally became available. And now Google can surveil entire crowds and track people whenever they want. And so can anyone else that now has access to trained Neural Net image recognition software.

We have just scratched the surface of the unholy alliance between big data and AI. And as usual, most commentators are completely missing the significance of this and focusing on the wrong things, such as the never-ending income inequality angle.

For an example of what you can look forward to, imagine a future where you apply for a job, and before you get to the interview the company invests $500 in a 'profile' from big data solutions, inc. The profile comes back and says, "Well, this person may be a liability. He might be a heavy drinker, as location data shows he often stopped at a bar after work at his last job. This also indicates the possibility that he drinks and drives. We also detect that he has several potential health problems based on his eating habits and search history. He doesn't attend church, he's almost certainly a gun owner, and a Republican. He doesn't get enough sleep, and it looks like he did quite a bit of web surfing in his last job. His contact history suggests anti-social tendencies and his cohort appears to be filled with undesirables. Recommendation: Do not hire."

So, you don't get the interview, and you will never know why. If the data company wants to hide how they rated you, they may just give the company who hired them an 'employment score' for you. And since AIs are quite impenetrable in terms of their logic, no one will even know why you got that bad score. They just know that the score is a good indicator.

Think about the value of this kind of research for many different uses. Good luck getting health insurance if your location history shows you do risky things, or that you have a bad marriage or don't get much sleep. Pray you don't become the enemy of someone with access to this level of data, as they can probably construct scenarios from your data that would be embarrassing or harmful to you.

Commercially, a competitor could have a firm use trained AIs to identify your best employees for poaching, or to determine what secret projects you are working on, or to identify weaknesses in your capabilities or high turnover of certain people or whatever else they want to find.

Suspicious husbands or wives could hire firms that could tell them if their spouse is cheating on them. For example, a big data firm looks at all the cell phone numbers of your co-workers of the opposite sex, then runs a correlation query to see if your cell phone ever links up with theirs in suspicious places like bars or hotels. Hey, it looks like Bob's phone was with his co-worker Mary's phone at a bar a few times in 2015, then it became more frequent in 2016, and in late 2016 those phones started showing up in hotel rooms together. Or Mary's phone showed up at your place several times, always when your wife's phone was far away. Busted.
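Mechanically, the "correlation query" described above is just a join on coarse (place, time) buckets of two phones' location histories. A hypothetical sketch, with invented names and data:

```python
from collections import Counter

def co_locations(pings_a, pings_b):
    """Count, per year, how often two phones pinged the same place
    in the same hour. Each ping is a (place, year, hour_of_year) tuple."""
    seen_a = {ping for ping in pings_a}       # index phone A's pings
    hits = Counter()
    for place, year, hour in pings_b:
        if (place, year, hour) in seen_a:     # same place, same hour
            hits[year] += 1
    return hits

# Invented example: the overlap between two phones grows year over year.
bob  = [("bar", 2015, 10), ("bar", 2016, 40), ("hotel", 2016, 90),
        ("hotel", 2016, 120), ("office", 2016, 5)]
mary = [("bar", 2015, 10), ("bar", 2016, 40), ("hotel", 2016, 90),
        ("hotel", 2016, 120), ("gym", 2016, 7)]

# Prints which years the phones co-occurred and how often.
print(co_locations(bob, mary))
```

The query itself is trivial; the only hard part is possessing the archival data, which is exactly the point made below about its retention.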

And because this works with archival data, even if you've learned today to leave your cell phone behind, it won't matter - if you had an affair ten years ago, you are now vulnerable if a fishing expedition by a bad actor with a good AI finds out about it. And the only thing saving you is the hope that no one will try. If you are a politician or another public figure, that's not likely. The future will be very unpleasant for you.

This is the real danger of big data. And it has nothing to do with class or money. In fact, the rich and powerful are more at risk than the rest of us because they don't live in obscurity. But we all face a future of asymmetrical information, which has the potential of great harm to us personally and to markets themselves.

And of course, the biggest danger to liberty comes when governments really get into it. For example, gun owners worry about a 'gun registry', arguing that once the government knows where the guns are, it will be much easier for them to confiscate them. Well, no gun registry is actually necessary. With a combination of location history, your social graph and search history, it would be almost trivial to determine if you own a gun, what kind of gun it is, and what you do with it. If this information had been available in the McCarthy era, finding all the Communists would not have been hard. If the Nazis had it, finding and rounding up the Jews would have been much easier. So marginalized and minority populations better make sure in the future that they stay on the right side of the government. The Chinese people are now finding that out, but it's coming for everyone.

Before we break up the big data companies, here's a modest proposal: The first thing we should do is pass a law requiring the destruction of personal data collected online after, say, one year. The real danger to all of us is the archival nature of the data, which ensures that we will be vulnerable to AI-based fishing expeditions for the rest of our lives. Let's make the possession of archival data of other people without their permission a felony. You can still collect data for advertising purposes, but it has a short half-life.

Sounds reasonable.

Mind you, the only reason I have a job now is because random things I'd left on the internet made my boss think I'd be well suited for a particular task in which I'd had no actual experience.

Good point -- though I'm not sure it matters. I didn't read the blurb as being about some dominant AI but rather about access to any AI services for the lower rung in society (lower rung meaning not in the 1%, 0.0005%, 10%? 50%?? not sure on that).

As you note, it's not quite a hardware-software dichotomy nor am I sure that the design choice of hardware or software to implement the logic is quite as suggested regarding total costs -- but I'm not in that field so....

Hardware is more likely to exhibit constant or rising costs

Seriously, have you looked at the last, oh, eighty years in the computer industry?
