Syverson on Productivity

The FRB of Richmond has a great interview with Chad Syverson:

EF: Some have argued that the productivity slowdown since the mid-2000s is due to mismeasurement issues — that some productivity growth hasn’t been or isn’t being captured. What does your work tell us about that?

Syverson: It tells us that the mismeasurement story, while plausible on its face, falls apart when examined. If productivity growth had actually been 1.5 percent greater than it has been measured since the mid-2000s, U.S. gross domestic product (GDP) would be conservatively $4 trillion higher than it is, or about $12,000 more per capita. So if you go with the mismeasurement story, that’s the sort of number you’re talking about and there are several reasons to believe you can’t account for it.

First, the productivity slowdown has happened all over the world. When you look at the 30 Organization for Economic Co-operation and Development countries we have data for, there’s no relationship between the size of the measured slowdown and how important IT-related goods — which most people think are the primary source of mismeasurement — are to a country’s economy.

Second, people have tried to measure the value of IT-related goods. The largest estimate is about $900 billion in the United States. That doesn’t get you even a quarter of the way toward that $4 trillion.

Third, the value added of the IT-related sector has grown by about $750 billion, adjusting for inflation, since the mid-2000s. The mismeasurement hypothesis says that there are $4 trillion missing on top of that. So the question is: Do we think we’re only getting $1 out of every $6 of activity there? That’s a lot of mismeasurement.

Finally, there’s the difference between gross domestic income (GDI) and GDP. GDI has been higher than GDP on average since the slowdown started, which would suggest that there’s income, about $1 trillion cumulatively, that is not showing up in expenditures. But the problem is that was also true before the slowdown started. GDI was higher than GDP from 1998 through 2004, a period of relatively high productivity growth. Moreover, the growth in income is coming from capital income, not wage income. That doesn’t comport with the story some people are trying to tell, which is that companies are making stuff, they’re paying their workers to produce it, but then they’re effectively giving it away for free instead of selling it. But we know that they’re actually making profits. We might not pay directly for a lot of IT services every time we use them, but we are paying for them indirectly.

As sensible as the mismeasurement hypothesis might sound on its face, when you add up everything, it just doesn’t pass the stricter test you would want it to survive.
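
Those magnitudes are easy to sanity-check. Here is a minimal back-of-envelope sketch; the 13-year window, the roughly $19 trillion GDP level, and the 325 million population are my assumptions, not figures from the interview:

```python
# Back-of-envelope check of the "$4 trillion / $12,000 per capita" claim.
# Assumed inputs (mine, not from the interview): ~13 years of a 1.5-point gap,
# measured GDP of ~$19 trillion, and a U.S. population of ~325 million.
measured_gdp = 19.0e12        # dollars
population = 325e6
gap = 0.015                   # 1.5 percentage points of missing annual growth
years = 13

# If true output had grown 1.5 points faster each year since the mid-2000s,
# today's level would exceed measured GDP by the compounded gap.
shortfall_share = (1 + gap) ** years - 1          # ~0.21
missing_gdp = measured_gdp * shortfall_share      # ~$4.1 trillion
missing_per_capita = missing_gdp / population     # ~$12,500

print(f"Missing GDP: ${missing_gdp / 1e12:.1f} trillion")
print(f"Per capita:  ${missing_per_capita:,.0f}")
```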

And he makes an excellent point about the potential productivity growth from AI:

…it seems that with some fairly modest applications of AI, the productivity slowdown goes away. Two applications that we look at in our paper are autonomous vehicles and call centers.

About 3.5 million people in the United States make their living as motor vehicle operators. We think maybe 2 million of those could be replaced by autonomous vehicles. There are 122 million people in private employment now, so just a quick calculation says that’s an additional boost of 1.7 percent in labor productivity. But that’s not going to happen overnight. If it happens over a decade, that’s 0.17 percent per year.

About 2 million people work in call centers. Plausibly, 60 percent of those jobs could be replaced by AI. So when you do the same kind of calculation, that’s an additional 1 percent increase in labor productivity; spread out over a decade, it’s 0.1 percent per year. So, from those two applications alone, that’s about a quarter of a percent annual acceleration for a decade. So you only need maybe six to eight more applications of that size and the slowdown is gone.
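
For anyone who wants to replicate the arithmetic, here is a minimal sketch using only the figures in the quote; Syverson’s 1.7 percent presumably reflects rounding or a slightly different employment base:

```python
# Rough replication of the two back-of-envelope productivity calculations.
# Job counts and the 122 million private-employment base are from the quote;
# the ten-year diffusion period is Syverson's assumption as well.
PRIVATE_EMPLOYMENT = 122_000_000
YEARS = 10

def level_gain(jobs_replaced: int) -> float:
    """Labor-productivity level gain: same output produced with fewer workers."""
    return jobs_replaced / PRIVATE_EMPLOYMENT

drivers = level_gain(2_000_000)                   # ~1.6% (quoted as 1.7%)
call_centers = level_gain(int(0.6 * 2_000_000))   # ~1.0%

for name, gain in [("Autonomous vehicles", drivers), ("Call centers", call_centers)]:
    print(f"{name}: {gain:.1%} level gain, {gain / YEARS:.2%} per year over a decade")

print(f"Combined: about {(drivers + call_centers) / YEARS:.2%} per year for a decade")
```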

Read the whole thing. There’s no fluff in this interview. Syverson packs every answer with substantive content.

Comments

Reports on the increase of distractions during working hours (which many of us have observed or engaged in) are evidence that working hours are used less effectively and that more distractions are consumed at work. Should that not be measured, so as to get more reliable data on productivity, real salaries, GDP, and even voluntary unemployment?
https://perkurowski.blogspot.com/2017/11/we-need-to-restate-productivity-real.html

Syverson: "Moreover, the growth in income is coming from capital income, not wage income." Add the projected productivity growth from AI, and capital share will be even higher, potentially much higher. Or am I missing something? As for AI and its implementation, here is an essay by Kai-Fu Lee in which he argues that, while the U.S. is the leader in developing technology including AI, China is the leader in implementing it: https://www.nytimes.com/2018/09/22/opinion/sunday/ai-china-united-states.html If he is correct, then much of the potential for productivity and income growth attributable to technology including AI will be realized by China not the U.S.

It's going to be absolutely capital income, unless something very strange and very drastic happens politically. Large companies that understand software keep building larger and larger advantages over smaller competitors by keeping a lot of technology locked in, and having access to treasure troves of private datasets. Add to that 2x the salaries of a startup, and we are facing a cyberpunk-like future where you better work for the megacorp, or own a lot of shares in the megacorp.

The trick is going to be how said megacorps get their young, smart, left-leaning graduates to keep working for them as it becomes clear that they’d be working for the companies that cause the problems they dislike.

Why would a left-leaning person not like having information and control over individuals? That is their heritage.

On: “choices of financial products after Mexico reformed its social security system. What did you find?” Syverson says: “In the 1990s, the Mexican government started from scratch and made some deliberate decisions about what the market was going to look like… they did not want AFORES to try to gain market share by offering high-risk products. So they very strictly limited the kind of assets these companies could use, almost to the point where they were basically homogeneous across companies”
I ask: if you were a young professional and had contracted an investment manager to help you plan your retirement some 30 years from now, would that investment manager not lose his license if he limited your portfolio only to very safe assets?
God make us daring!
https://subprimeregulations.blogspot.com/2010/10/god-make-us-daring.html

The U.S. Social Security system invests only in U.S. bonds.

Isn't the mismeasurement of productivity in the changes in quality? Would you see changes in GDP if the improvement in productivity shows up in better products and in a much quicker dispersion of knowledge to improve products? Combine this with increased competition that keeps prices down.

On the other hand, we have the UK statistical authority basically confessing, at the beginning of this year, that it had mismeasured improvements in telephone services pretty drastically:

https://www.ft.com/content/abc14c66-fb78-11e7-a492-2c9be7f3120a

I suspect that much of the data underlying GDP estimates (and so productivity and so on) are like this: basically calculated by following the old formula rather than by really understanding how the economy is changing.

>Me go to work.
>Answer emails. Is my job.
>Notice blue boxes underneath new emails.
>Ignore.
>Realize blue boxes contain potential replies to emails.
>Notice blue boxes is learning.
>Me automate sending emails with swears to fake addresses. Blue boxes refuse to say swears but quality of blue box replies goes down.
>Is temporary fix only.

Any analysis of GDP over the last 15 years needs to form an opinion about what an iPhone was worth 15 years ago.

From the standpoint of the economy, what value does the iPhone present?

Music distribution. Does the current system generate value equivalent to the music distribution networks that it replaced?

Advertising. Do the iPhone and Google produce the same value as the advertising systems they replaced? I don't think so.

Communication. This has been a net benefit; cheaper and more widely available. The phone companies of old dreamed of getting the equivalent monthly payments from their subscribers, without having to maintain the infrastructure.

What it looks like to me, and this characterizes many of the current economic trends, is less economic activity with a larger proportion going to one or two players. I have been involved in projects where the net result was lower sales, fewer employees, and higher profits. Productivity is a measure of gross output value relative to labor, so it can improve even as total activity shrinks.

"So, from those two applica­tions alone, that’s about a quarter of a percent annual accel­eration for a decade. So you only need maybe six to eight more applications of that size and the slowdown is gone."

So, 3 million job replacements amount to 0.25 percent per year. That would imply it would take around 12 million job replacements, or 1.2 million per year, to add another 1 percent per year to GDP. Of course, those 12 million people need to find equivalent jobs for their own standard of living to remain high.

That's the rub. Not only do we as a society need to automate the jobs, but we also have to find an effective niche for those displaced. Perhaps an expanded EITC paid for with a broad-based consumption tax. But it would need to be substantial, and we would need to constrain the federal minimum wage to keep the baseline low enough to ensure employability.

Well, I don’t know how many would accept manual labor for work, but we could fix an awful lot of roads and sewers.

They were priceless.

AI, call centers, and productivity... but over the last decade and a half, many of the things that previously needed to be done by call centers can now be done by people directly over the Internet. So one would have expected a strong productivity effect from this source to *already have shown up*.

Every on-line purchase by a consumer displaces work that previously would have had to be done by a telephone-based order entry clerk (or, if you look back far enough, by a paper-mail-based order entry clerk) or by a checkout clerk in a store.

Presumably, that improvement is already embedded in the economic numbers. Clearly Amazon exists because customers show a preference for its web-based service.

The example of call centers shows how slippery the tracking of productivity and human progress is. Microsoft has call centers. When they release an easy, problem-free upgrade to their operating system, they get few calls. When they goof, they might get a few million calls asking the same question. Sure, an AI to answer that question a million times might be useful, but it would be a better world if the question were never asked or answered.

Or Apple and a new iPhone, or Comcast and a new cable box.

In other words I am skeptical of a better future in which we all spend more time talking to call centers.

Syverson: "One is that it is not unusual at all to have an extended period — and by extended, I mean measured in decades — of slow productivity growth, even after a major technology has been commercialized and a lot of its potential has been recognized. You saw that with the internal combustion engine, electrification, and early computers. There was about a quarter-century of pretty slow productivity growth before you saw the first acceleration in productivity coming from those technologies."

This is not correct. The rate of diffusion was far faster for "early computers" than for the combustion engine and electricity, which began in the 1880s.

I assume Syverson means office computers by "early computers," so 1980 is a decent starting point, when Commodore, Apple, and Radio Shack began selling a decent number of computers. Solow's joke that "You can see the computer age everywhere but in the productivity statistics" was uttered in 1987, after only seven years. Computers have also been different from the engine or electricity in that the power of computers has increased every couple of years or so for the past 40 years, and it didn't take anywhere near as long to see productivity gains from office computers, which showed up in the acceleration that ran until 2005.

"Syverson: It tells us that the mismeasurement story, while plausible on its face, falls apart when examined."

The strongest-form interpretation of this is that Syverson believes his map is the territory.

A productivity and GDP view of progress is very useful, but it is definitely not the only thing going on. And so I think dismissing mismeasurement is a little like saying forget the world, and let me concentrate on my map.

" And so I think dismissing mismeasurement is a little like saying forget the world, and let me concentrate on my map."

That seems backwards. The critics making a hypothesis of mismeasurement are telling us to use their "map". There's no good evidence to support the mismeasurement hypothesis at this point.

I presume you enjoyed making that comment; explain precisely how it was measured as productivity or GDP.

Your post didn't have a technical analysis. My rebuttal didn't either.

Try harder.

You misinterpreted my original comment, and then refused to explain your own.

When I say the territory is bigger than the map, I mean that there are many more motivations, actions, and achievements in the world than productivity and GDP.

That's obviously the case as you share both your attention and your effort here without any expectation of financial gain.

I'll use simple words for you.

Both sides (everyone, really) have a mental map, because we can't know everything and we have to simplify the incredibly complex processes to make decisions.

So, while your comment is true, it's irrelevant unless you can point to the specific spot where Syverson is wrong.

But I don't think I have a mental map.

I think I have an idea of a space too big and too complex to capture by any rules.

What is the Dalai Lama's impact on productivity and GDP? How can anyone know? But I suspect he himself would not be that concerned.

Note that in my first comment I said:

"A productivity and GDP view of progress is very useful, but it is definitely not the only thing going on."

This is a recognition the economic view is very useful, but is not a complete view of human experience.

That is obvious. They didn't factor in the sun rising in the morning either.

GDP and economic activity are necessary because the possibility of other things that complete the human experience depends on them. If you want people to live long lives and retire at 65, you better have a damn good level of economic activity to pay for it. Same with the arts, the environment, everything.

The ease of communication is wonderful until you have people talking while driving or, in my situation, people interrupting workers doing complex and dangerous tasks. The affordability of that ease of communication depends on the complex and dangerous tasks getting done safely and well, because if they weren't done, the communication stuff wouldn't work.

On the autonomous vehicle front, not only will drivers be made obsolete by AI, so too will the enforcement officers on the road, since driverless vehicles won't violate traffic regulations. Thousands of cops will no longer be needed to collar drivers that won't exist.

The best line in this article is when Syverson says that it is too soon to talk about strong AI. That's true.

Autonomous vehicles are reaching the limits of their Newtonian view of the world. There is only so much you can do by plotting the position and speed of other objects without having a concept of their intent.

https://www.digitaltrends.com/cars/waymo-alleged-self-driving-car-problems-report/

Once again, your link has nothing to do with your comment. No claim that the cars have reached any limit, or that one even exists. Either way, a self-driving car does not need to be perfect, just better than human-driven ones.

You should never show us so directly that you cannot parse what I'm saying.

The Information talked to Arizona residents who live in areas where Waymo tests its autonomous cars. They said the Waymo cars seem to have difficulty in certain traffic situations, such as making left turns across lanes with fast-moving traffic, and merging onto highways in heavy traffic. The cars also stop and brake abruptly, according to the report.

...

Part of the problem seems to be that Waymo’s self-driving cars are more cautious than the human drivers they share the road with. The cars stop for a full three seconds at every stop sign and try to maintain a wide berth when making turns. Anyone who has spent time behind the wheel knows that most human drivers aren’t so conservative.

You have to stop and think about how that relates to a Newtonian model.

You have to think about how much it relates to the AI having no model of intent.

Nothing there remotely suggests they are at their limits. 'Parsing' that from either your comment or the story would reveal a deep misunderstanding, that's all.

If you cannot comprehend what a mental model is or why it's important, I can't help you.

"The best line in this article is when Syverson says that it is too soon to talk about strong AI. That's true."

Syverson didn't say anything about strong A.I. He said: "People have to figure out what sorts of things AI can augment, and we're not anywhere down that road yet."

That is way off. One example is machine translation. Almost all translators, partly due to having no technical background, have laughed at MT for years, but there have been huge improvements that have notably lowered their wages since 2010. A friend in Tokyo recently told me that a new Japanese-to-English MT system is now about 90% accurate, which almost no Japanese-to-English translator would have thought possible just a few months ago.

Computers are already outperforming cardiologists in accuracy of diagnosis 80% of the time. Here is what A.I. expert Geoff Hinton said in 2016:

" “They should stop training radiologists now. It’s just completely obvious within five years that deep learning is going to do better than radiologists. It might take ten years, but we’ve got plenty of radiologists already. I said this at a hospital, and it didn’t go down to well.”

These don't seem to match "we're not anywhere down that road yet."

We are definitely well down that road.

Well, as an old-timer I am seeing this in the context of my experience. Since the 1970s I have seen a lot of different things called AI, and one of the things that makes conversation difficult is that they all have the same name!

On one end we (still) have extremely hard problems like fully autonomous self-driving, and on the (now) lower end we have problems that can be solved by a single neural network.

I would say the successes you name are of the neural network kind. When they translate from Japanese to English, or from an x-ray to a diagnosis, they are replicating the pattern of matches they've been given in thousands of training sessions.

To add a little context, in a self-driving system one component might be saying "I think that is a person" or "I think that is a person on a bicycle," but what do you do from there? Do you simply train on what you've seen bicyclists do in the past and assume this bicyclist behaves in general the same way? That might lead to false positives and false negatives on the question of "will the bicycle turn in front of me?"

A human in that situation deduces a mental model, like "that is a dedicated sport cyclist heading in a straight line" or "wait a minute, that dude could be drunk!"

Here is a bigger expert than I who informed my thinking:

https://rodneybrooks.com/bothersome-bystanders-and-self-driving-cars/

"a bigger expert than I"

Maybe you are coming at it from the other angle: that since the 1970s we have achieved many of the things we called AI. For instance, advanced robotics have replaced many workers in all aspects of manufacturing. And sure, each incremental improvement in this huge space we call AI does displace a few more workers at the margin.

Robotics in advanced manufacturing are not AI. They’re programmed to do X.

Maybe, specifically, camera-based inspection machines with machine learning, where QA engineers train the machine to differentiate good parts from bad.

But that’s rare and largely irrelevant to most robotics in manufacturing and the resulting job losses.

In the 1970s, researchers who considered themselves to be "doing AI" developed exactly the kind of motion control and vision acquisition systems that are now used in manufacturing.

https://en.m.wikipedia.org/wiki/Computer_vision

You replied with a link describing exactly what I was referring to.

This was considered AI research at the time:

What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding. Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation.

Vital now in every camera-equipped manufacturing robot.

More info here:

https://rodneybrooks.com/forai-the-origins-of-artificial-intelligence/

Again.

This is literally the one exception I put in my original comment.

You keep posting "what about x?" links,

when I said the only exception is x.

Unreal.

Are you missing what I'm saying, or are you avoiding it?

Did you read the origins of AI link?

Did you see the work on moving boxes and stacking them?

Dude.

If you no longer see moving boxes as AI, it's because you're picking a custom definition of AI.

And, as is the main theme of that AI history piece, fuzzy definitions are part of the whole problem, and that's where I started before you jumped in to disagree!

Me: "Maybe you are coming at it from the other angle, that since the 1970s we have achieved many of the things we called AI. "

You: "But I don't call that AI."

THAT'S THE POINT.

You really need to work on your reading comprehension skills. That's NOT what he said.

He said that you might use machine learning for camera inspection systems, but that AI isn't used for industrial robot programming.

As someone who's actually in the field, I'd say his point is mostly correct. The only significant exception is robot picking systems, which aren't necessarily programmed but instead use a rudimentary machine learning system. The overwhelming majority of industrial robots don't use any kind of machine learning or AI at all.

There's a lot of money being spent on creating AI systems, but so far, very little of it has affected the factory floor.

I know it is not just inspection. Are you going to pretend otherwise?

(Researchers who thought they were doing artificial intelligence did the earliest work on robot arms, from positioning to motion and kinetics. They certainly did the earliest work on identifying parts, picking them up, and placing them.)

If you want to up your game, read that history of AI post, and say something intelligent about the fuzzy nature of AI and now the fuzzy nature of AGI.

That's really the bottom line.

You are both avoiding a most interesting question of artificial intelligence in order to have a stupid comment fight.

The most important question is: when does rudimentary problem solving become artificial intelligence, especially as compared to the general intelligence of a human being?

Another view of that history:

https://www.forbes.com/sites/mikecollins/2015/01/05/artificial-intelligence-and-manufacturing-part-one/

I don't think it displaces workers. It makes things possible that wouldn't have been economical previously.

We see that in auto manufacturing. All the gizmos in cars are possible because of the automated manufacturing of the gizmos, mostly electronic. So we have more gizmos, which require people to assemble them, maintain them, ship them around, manufacture the lower level components, etc.

The decrease in employment from automation is mostly from moving the labor-intensive aspects somewhere else. It is cheap to ship things back and forth multiple times between specialized production facilities, with the labor-intensive steps in manufacture happening in places where labor is cheap and beatable.

The fact that a worker can't keep doing job A and must shift to job B is the "displacement" we're talking about.

But yes, the idea that general employment and general prosperity rise is the bright side of the bargain.

Maybe bad workspace practices and poor salaries have decreased overall productivity in the US.
