How much are free apps and other on-line services worth?


A recent Wall Street Journal article (Aeppel 2015) argued that measured productivity growth is badly underestimated because measured GDP does not include ‘free’ apps and other online media services. This paper introduces an experimental GDP methodology which includes advertising-supported entertainment like Facebook in final output as part of personal consumption expenditures. This paper then uses that experimental methodology to recalculate measured GDP back to 1998. Contrary to the Wall Street Journal article, including ‘free’ apps in measured GDP has almost no impact on recent growth rates. Between 1998 and 2012, real GDP growth rises by only 0.009% per year.

There is also this:

Some firms have been creating software, which is already captured in the national accounts as investment. Other firms have been creating intangible investments in marketing, customer contact, business know-how or other organizational capital. These intangible investments can pay off either in (1) eventual use of advertising or (2) moving customers to a premium service that does charge a fee. Despite their long-run value, expenditures on organizational capital are not currently captured in the national accounts as output. If we treat these expenditures on organizational capital as intangible capital investment, then the productivity boom from 1995 to 2000 becomes even stronger and the weak productivity growth of the 2000s is nearly unchanged.

That is from a new paper by Leonard Nakamura and Rachel Soloveichik (pdf), and I thank Hal Varian for the pointer.

1 Charlie March 7, 2016 at 12:33 am

Hal Varian would be an interesting guest for Conversations with Tyler.

2 Aidan March 7, 2016 at 12:52 am

Perhaps a better way of making sense of the economic effects of free stuff on the Internet would be to imagine the effects of something else suddenly becoming free, e.g. if a way of supplying free water for domestic consumption were suddenly developed. I would suspect the effects on GDP growth would be negative, but I’m pretty sure we’d be better off. GDP has its limitations as a measuring tool.

3 Stephan March 7, 2016 at 1:12 am

Not an econ major, but if free water suddenly became available, you would either pocket the savings or spend them; either way it is counted in GDP. In addition, your extra spending may stimulate the economy, or free water may lead to an increase in the building of swimming pools, fountains, etc., also likely stimulating the economy.

http://worthwhile.typepad.com/worthwhile_canadian_initi/2016/02/does-saving-count-towards-gdp.html

4 Curt F. March 7, 2016 at 1:23 am

So if *everything* became free, would GDP become zero? Or infinite?

5 Stephan March 7, 2016 at 1:44 am

This sounds like a Singularity scenario where an AI has taken over, and anticipates all of our needs and wishes and can produce them (or emulate them) at trivial cost.

6 mulp March 7, 2016 at 1:39 am

If whatever good or service you produce, Stephan, became free, you would earn how much, exactly? You would earn zero, right?

Given you earn zero, what would you spend the savings from free stuff on? Earning zero, you can only buy what costs zero, so anything that costs $1 you can’t buy until its price falls to zero.

TANSTAAFL. No amount of supply side Reaganomics can change that.

The decades-long effort by conservatives to slash labor costs is merely cutting GDP below what it should be, thanks to the success of conservatives in convincing workers they will be better off earning lower wages and getting fewer benefits.

And GDP would be much lower if it weren’t for the massive deficits and exploding debts that pay for tax cuts and jobs for wars.

7 Nathan W March 7, 2016 at 6:02 am

But then he would contribute to labour supply and capital supply in other sectors, and other people would be able to spend their money on other stuff.

8 Peter Schaeffer March 7, 2016 at 1:24 am

Aidan,

You need to take a few national income accounting courses. If the price of water suddenly went to zero, measured dollar GDP would decline. However, the GDP deflator would also fall. Real GDP would stay unchanged. Of course, if the price of water went to zero, output of water intensive goods would tend to increase. This would show up as an increase in GDP.
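
To make the accounting mechanics concrete, here is a minimal two-good sketch in Python (all quantities and prices are hypothetical, not from any comment above): when water’s price drops to zero and quantities are unchanged, nominal GDP falls, the deflator falls with it, and real GDP at base-year prices is unchanged.

    # Hypothetical two-good economy: water becomes free, quantities unchanged.
    qty = {"water": 10, "bread": 10}
    p_base = {"water": 1.0, "bread": 9.0}   # base-year prices
    p_now = {"water": 0.0, "bread": 9.0}    # water is now free

    nominal = sum(p_now[g] * qty[g] for g in qty)   # 90.0: measured dollar GDP falls
    real = sum(p_base[g] * qty[g] for g in qty)     # 100.0: output at base-year prices
    deflator = nominal / real                       # 0.9: the deflator falls too

    print(nominal, real, deflator)  # real GDP is unchanged at 100.0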

9 Ray Lopez March 7, 2016 at 1:37 am

@PS – you need to bone up on the broader implications of the Cambridge Capital Controversy (C3), see https://en.wikipedia.org/wiki/Cambridge_capital_controversy and why you cannot easily compare either one year to the next, or even real variables in the same year (Microsoft vs General Motors: which produces more real value?). Study also this paper, found online: “Capital accumulation: fiction and reality” by Shimshon Bichler and Jonathan Nitzan, and get back to us when you have. We’re here to help.

10 Peter Schaeffer March 7, 2016 at 1:18 am

This is an important post. There is a vast crowd of techno-utopians out there who are convinced that Tech is creating a vast new prosperity that is somehow being missed by the GDP bean-counters. With considerable regularity, I tell them to get a grip.

A closely related point is that a vast (and overlapping) crowd of health-utopians is convinced that medical nirvana is about to appear (or already has appeared). With considerable regularity, I tell them to get a grip.

11 Ray Lopez March 7, 2016 at 1:39 am

You’re much too pessimistic. I suggest you read the works by futurist and inventor Raymond “Ray” Kurzweil for a different perspective.

12 Stephan March 7, 2016 at 1:53 am

Kurzweil’s message is the Singularity by 2045. I wanted to believe it but I don’t believe it anymore. Moore’s law has slowed down and is becoming uneconomical. Kurzweil always conflates the rate of growth of computational capability with the growth rate of the economy as a whole.
As mentioned on this blog before by Tyler, Nordhaus does not believe we are close to a singularity.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2658259

13 Ray Lopez March 7, 2016 at 2:11 am

+1, will read, as TC would say, but perhaps Nordhaus, a backwards-looking historian, is by definition always pessimistic (‘if man could fly he would have done so by now’). Also check out the work of Didier Sornette, a provocative economist who has a formula (patented) that predicts a sort of negative singularity (‘something bad’, not clear what) around 2050, based on an earthquake-prediction sort of algorithm. I’ve briefly looked at it (five minutes) and it seems sound.

14 anon March 7, 2016 at 5:24 am

I take exception to claims that Moore’s law failed. That always presumed continued investment and interest. Instead, plot twist, low power became the goal instead of transistor density.

Right now we need a week of battery life more than we need more storage or computation. So that is where investment and development goes.

15 Nathan W March 7, 2016 at 6:05 am

Moore’s law was always going to meet physics one day. But it’s been an impressive run, hasn’t it?

16 anon March 7, 2016 at 6:24 am

Samsung introduced a 15 terabyte SSD last week. In a 2.5-inch form factor.

The fact that most people don’t care is more important than the fact that a few do. Moore’s law didn’t fail. It blew past requirements.

17 Nathan W March 7, 2016 at 10:04 am

Stunning.

18 Peter Schaeffer March 7, 2016 at 1:51 pm

All,

Moore’s law is what I do for a living (somewhat). It mostly died a few years ago. See “Future Systems: Pitting Fewer Fat Nodes Against Many Skinny Ones” for a few charts. The number of available cores has soared in recent years. Per-core performance has essentially plateaued. It is somewhat true that algorithms can be modified to exploit multiple cores. However, this is very hard to do (with some exceptions). Almost any programmer can run his/her program on a faster machine (with more memory). Utilizing multiple cores requires an understanding of memory fences, “happens-before”, cache coherency, memory models, etc. For better or worse, these topics are so hard that the vast majority of very senior technical people know nothing about them. Perhaps one programmer in one hundred can write multi-core code. That may be a substantial overestimate.
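
As an illustration of why exploiting multiple cores is hard, here is a minimal sketch of the classic lost-update race (assuming CPython with threads; whether the race actually fires depends on the interpreter and scheduling, which is exactly why such bugs are hard to catch):

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe(n):
        # "counter += 1" is a separate load, add, and store, not an atomic
        # operation, so concurrent threads can overwrite each other's increments.
        global counter
        for _ in range(n):
            counter += 1

    def safe(n):
        # The lock supplies the "happens-before" ordering mentioned above.
        global counter
        for _ in range(n):
            with lock:
                counter += 1

    threads = [threading.Thread(target=unsafe, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # may print less than 400000; swap in `safe` for exactly 400000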

19 anon March 7, 2016 at 4:44 pm

That is interesting, Peter, but it too might be indirectly related to what happened to Moore’s law. I think fundamentally it was what Intel’s customers asked Intel to deliver. If it was higher transistor density, then they were asking for Moore’s law, in the conventional sense, to continue. If they stopped caring about transistor density, perhaps because they worried about power, or perhaps because multicore programming was hard, then they might have started asking Intel for something else. A demand-side change as much as a supply-side one.

I notice that HC27 (the Hot Chips conference for 2015) was not very much about conventionally faster CPUs:

http://www.hotchips.org/archives/2010s/hc27/

20 Peter Schaeffer March 7, 2016 at 8:32 pm

anon,

In actuality, transistor density has continued to soar. Type “transistor density” into Google and you will see what I mean. However, all of those transistors are now used to create more cores, not faster ones. Chip users would vastly prefer fewer, but faster, cores. However, the semiconductor industry simply can’t deliver. It is vastly easier to program one 20 GHz core than ten 2 GHz cores. However, no one knows how to build a 20 GHz core (without ridiculous amounts of power and liquid N2 cooling).
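
One standard way to formalize the “one fast core vs. ten slow cores” trade-off is Amdahl’s law; a minimal sketch with hypothetical parallel fractions (these numbers are illustrative, not from the comment above):

    # Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # One 20 GHz core would give a flat 10x over a 2 GHz core; ten 2 GHz cores
    # only approach 10x as p -> 1.
    for p in (0.5, 0.9, 0.99):
        print(f"p={p}: {amdahl_speedup(p, 10):.2f}x")
    # p=0.5: 1.82x, p=0.9: 5.26x, p=0.99: 9.17x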

21 anon March 7, 2016 at 9:55 pm

Yes, soar. But Moore’s law has a precise form, and

“Intel confirmed in 2015 that the pace of advancement has slowed, starting at the 22 nm feature width around 2012, and continuing at 14 nm. Brian Krzanich, CEO of Intel, announced that ‘our cadence today is closer to two and a half years than two.’”

A bit off pace.
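
How far off pace that is compounds over time; a quick back-of-the-envelope sketch (assuming exact doubling cadences):

    # Density growth over a decade at a 2-year vs. 2.5-year doubling cadence.
    years = 10
    print(2 ** (years / 2.0))  # 32x with the classic two-year doubling
    print(2 ** (years / 2.5))  # 16x at "closer to two and a half years"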

22 Nathan W March 7, 2016 at 11:28 pm

Once you’re talking 14nm, you’re really getting close to butting your head up against hard physical boundaries, where there’s simply too much likelihood of losing fidelity to “random” sub-atomic effects messing everything up.

Hence the interest in quantum computing, but even once the physics problems are solved, you’re into an entirely new era of computing, and precious few minds have a clue about the sorts of logic needed to optimize in that kind of environment (and that’s about as deep as my knowledge goes – this is well beyond any maths, logic or physics I’ve ever gotten into). Here’s the wiki on it: https://en.wikipedia.org/wiki/Quantum_computing

23 Peter Schaeffer March 8, 2016 at 12:50 pm

Anon,

I was (incorrectly) using Moore’s law as a generic term for progress in computing. To be specific, Moore’s law only pertains to transistor density. As I stated, “In actuality, transistor density has continued to soar”. A few folks, notably Robert Dennard (Dennard scaling) and David House, came up with an “extended Moore’s law” that predicted astounding progress in transistor density and CPU speed. For decades they were right.

See “Moore’s law and Clock Speed”

“Dennard Scaling, on the other hand, is an observation, made by Robert Dennard in 1974 that every time you scale down the width of a CMOS gate by a factor of 2, you can reduce both supply voltage and the threshold voltage by a factor of 2, the clock frequency can improve by a factor of 2, and the power density will stay constant. (Or you can scale down the voltage by less than 2, and you’ll get a better than 2x clock frequency boost, but an increase in power.) When Dennard’s observation was made a CMOS transistor was about 5 micron = 5000 nm wide. (So we’ve scaled transistor widths by a factor of about 360 in about 40 years.)”

and

“The death of CPU scaling: From one core to many — and why we’re still stuck”

“To understand the magnitude of what happened in 2004 it may help to consult the following chart. It shows transistor counts, clock speeds, power consumption, and instruction-level parallelism (ILP). The doubling of transistor counts every two years is known as Moore’s law, but over time, assumptions about performance and power consumption were also made and shown to advance along similar lines. Moore got all the credit, but he wasn’t the only visionary at work. For decades, microprocessors followed what’s known as Dennard scaling. Dennard predicted that oxide thickness, transistor length, and transistor width could all be scaled by a constant factor. Dennard scaling is what gave Moore’s law its teeth; it’s the reason the general-purpose microprocessor was able to overtake and dominate other types of computers.”

“The original 8086 drew ~1.84W and the P3 1GHz drew 33W, meaning that CPU power consumption increased by 17.9x while CPU frequency improved by 125x. Note that this doesn’t include the other advances that occurred over the same time period, such as the adoption of L1/L2 caches, the invention of out-of-order execution, or the use of superscaling and pipelining to improve processor efficiency. It’s for this reason that the 1990s are sometimes referred to as the golden age of scaling. This expanded version of Moore’s law held true into the mid-2000s, at which point the power consumption and clock speed improvements collapsed. The problem at 90nm was that transistor gates became too thin to prevent current from leaking out into the substrate. ”

As you can see, transistor density has continued to soar (since 2004). However, clock speed has not. Hence the shift to multi-core architectures. As stated above, multi-core architectures are much harder to exploit.

To be precise, Moore’s law isn’t dead even now. However, from a practical standpoint it died 10 years ago.
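
A minimal numeric sketch of the Dennard-scaling arithmetic quoted above (idealized, modeling dynamic power as C·V²·f):

    # One Dennard scaling step: shrink feature size by s, scale capacitance and
    # voltage by s, raise frequency by 1/s; die area shrinks by s^2.
    def power_density(C, V, f, area):
        return (C * V * V * f) / area   # dynamic power ~ C * V^2 * f

    before = power_density(C=1.0, V=1.0, f=1.0, area=1.0)
    s = 0.5
    after = power_density(C=1.0 * s, V=1.0 * s, f=1.0 / s, area=1.0 * s * s)
    print(before, after)  # equal: power density stays constant while speed doubles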

24 jpa March 7, 2016 at 3:24 pm

If Google beats the best human Go player next week, will you re-evaluate your position on Moore’s law slowing down? Many AI predictors didn’t have AI beating top human players until the 2020s….

25 Peter Schaeffer March 7, 2016 at 4:17 pm

jpa,

“If Google beats the best human Go player next week, will you re-evaluate your position on Moore’s law slowing down?”

No. Google is one of the few companies that can actually hire and keep programmers who understand multi-core programming. Beyond that, the latest Google Go program isn’t AI-based. Google has switched away from AI to data mining. The same trend can be found elsewhere. Traditional AI has been replaced with data mining in several other fields as well.

26 anon March 7, 2016 at 4:54 pm

Moore’s law is about a very specific thing: the density of transistors on a chip. It bears on whether you can beat a Go player, but only indirectly. Perhaps old hardware would do it with better programming, perhaps the cost of cloud computing has fallen enough to allow it, etc.

27 carlolspln March 7, 2016 at 2:20 am

au contraire, Ray

he’s the Harold Hill of the 21st century http://spectrum.ieee.org/computing/software/ray-kurzweils-slippery-futurism

or should I say Lyle Lanley https://www.youtube.com/watch?v=ZDOI0cq6GZM

hmm, getting a bit concerned for you, Ray; you were always a guy who could hold his own in the tough part of town, e.g. Divisoria

😉

28 kimock March 7, 2016 at 2:58 am

+1 for the monorail reference

29 Ray Lopez March 7, 2016 at 9:24 am

Ray K is not wrong, just early. http://invent.org/inductee-detail/?IID=180

He’s impressive, as OCR in the early 70s was unheard of (he was before his time, just like his predictions).

30 Ak Mike March 7, 2016 at 10:56 am

Kurzweil evidently made some advances in OCR, but I worked as an operator of an OCR machine in 1973, and it was not new when I got that job.

31 Mark Thorson March 7, 2016 at 11:19 am

There are two types of OCR: OCR of a special font (typically OCR-A, OCR-B, or MICR) and OCR of arbitrary fonts. OCR of special fonts has been around for much longer. Kurzweil’s contribution has been in OCR of arbitrary fonts, like those used in books.

32 Larry Siegel March 7, 2016 at 6:16 am

If I can buy a penicillin pill for $1 and it cures an infectious disease, so that I would pay $100,000 for it (rather than die) if I had to, where does the $99,999 of consumer surplus go in national income accounting?

It is not medical nirvana, since penicillin is not always effective and I’ll die eventually anyway, but I want someone to answer this question. I think value has been created by cheap penicillin that does not show up in GDP.

33 A Definite Beta Guy March 7, 2016 at 9:57 am

Exactly. Most Consumer Surplus was created before the advent of Free Apps. Free Apps largely substitute for television. Big whoop.

The only major Consumer Surplus is for young men who know what the Internet is really for.

34 Peter Schaeffer March 7, 2016 at 2:07 pm

Larry Siegel,

“where does the $99,999 of consumer surplus go in national income accounting?”

In general it doesn’t. The BLS does make some effort to incorporate “hedonic improvements” into the CPI. However, the impact is modest.

However, don’t read too much into this. Innovations have been generating vast uncaptured consumer surpluses for a long time (long before apps). Anesthesia was invented well before the American Civil War. The cost of the chemicals (ether back then) was always trivial. The gains were huge.

Ignaz Semmelweis came up with a way to drastically reduce maternal mortality at almost no expense (in 1847). Eventually his ideas were widely adopted (the history is complex). The gains were huge.

Chlorinated water was (more or less) invented in the United States after 1900. The cost of chlorination was astonishingly low (and still is). The reduction in mortality was vast.

These are just a few examples of vast, uncaptured consumer surpluses from the last 200 years. A decent list would have dozens, if not hundreds, of entries. Of course, we can go back even further. The printing press yielded very large gains in welfare. The measured economic impact was small.

The bottom line is that innovations have produced rather large uncaptured consumer surpluses for a long time. Free Apps are just the latest incarnation (and not nearly as important as the older innovations).

35 William Cotton March 7, 2016 at 9:07 am

Hey Peter, could you point me to some of those people who are convinced that tech is creating prosperity that isn’t being measured?

I have an alternative theory: because Facebook and Twitter control closed marketplaces for intellectual property, and because things like “animated gifs” and “image memes” are not represented by fungible assets traded in an open marketplace, we’re losing out on a ton of actual prosperity. This would align perfectly with a study that finds only minimal gains in adjusted GDP.

It’s sort of like the post-communist societies described in de Soto’s Mystery of Capital. If you don’t turn your properties, digital or real, into abstract assets, you miss out on releasing a lot of economic potential.

36 Peter Schaeffer March 7, 2016 at 2:41 pm

WC,

“Hey Peter, could you point me to some of those people who are convinced that tech is creating prosperity that isn’t being measured?”

Valid request. See “The Singularity of Fools”

“If utopia has always been a kind of escape clause from the human condition, contemporary techno-utopianism represents a radical upping of the ante. For entrepreneur Peter Diamandis, creator of the X Prize to spur the development of passenger-carrying private spaceships and other innovations, not only will technology make it so that “during our lifetime … we’re moving off this planet,” but it will solve even the gravest problems that confront humanity — climate change, species extinction, water and energy shortages. For futurist Ray Kurzweil, “nonbiological intelligence will match the range and subtlety of human intelligence” by 2030, making it possible to “go beyond the limits of biology, and replace [an individual’s] current ‘human body version 1.0’ with a dramatically upgraded version 2.0, providing radical life extension.” ”

“Technological utopianism” (Wikipedia)

“A movement of techno-utopianism began to flourish again in the dot-com culture of the 1990s, particularly in the West Coast of the United States, especially based around Silicon Valley. The Californian Ideology was a set of beliefs combining bohemian and anti-authoritarian attitudes from the counterculture of the 1960s with techno-utopianism and support for libertarian economic policies. It was reflected in, reported on, and even actively promoted in the pages of Wired magazine, which was founded in San Francisco in 1993 and served for a number of years as the “bible” of its adherents.[10][11][12]

This form of techno-utopianism reflected a belief that technological change revolutionizes human affairs, and that digital technology in particular – of which the Internet was but a modest harbinger – would increase personal freedom by freeing the individual from the rigid embrace of bureaucratic big government. “Self-empowered knowledge workers” would render traditional hierarchies redundant; digital communications would allow them to escape the modern city, an “obsolete remnant of the industrial age”.[10][11][12]

Its adherents claim it transcended conventional “right/left” distinctions in politics by rendering politics obsolete. However, techno-utopianism disproportionately attracted adherents from the libertarian right end of the political spectrum. Therefore, techno-utopians often have a hostility toward government regulation and a belief in the superiority of the free market system. Prominent “oracles” of techno-utopianism included George Gilder and Kevin Kelly, an editor of Wired who also published several books.[10][11][12]

During the late 1990s dot-com boom, when the speculative bubble gave rise to claims that an era of “permanent prosperity” had arrived, techno-utopianism flourished, typically among the small percentage of the population who were employees of Internet startups and/or owned large quantities of high-tech stocks. With the subsequent crash, many of these dot-com techno-utopians had to rein in some of their beliefs in the face of the clear return of traditional economic reality.[11][12]

In the late 1990s and especially during the first decade of the 21st century, technorealism and techno-progressivism are stances that have risen among advocates of technological change as critical alternatives to techno-utopianism.[13][14] However, technological utopianism persists in the 21st century as a result of new technological developments and their impact on society. For example, several technical journalists and social commentators, such as Mark Pesce, have interpreted the WikiLeaks phenomenon and the United States diplomatic cables leak in early December 2010 as a precursor to, or an incentive for, the creation of a techno-utopian transparent society.[15] Cyber-utopianism, first coined by Evgeny Morozov, is another manifestation of this, in particular in relation to the Internet and social networking.”

“The Digital Dictatorship”

“It’s fashionable to hold up the Internet as the road to democracy and liberty in countries like Iran, but it can also be a very effective tool for quashing freedom. Evgeny Morozov on the myth of the techno-utopia.”

“The Real Problem With Techno-Utopianism”

“But then, it’s not like Frase is the only technological utopian out there. My students buy this whole hog. They think they are riding the wave of a paradise of new technological achievements that will make our lives easier and better. They have a very hard time figuring out that technology can sometimes have unexpected bad consequences, not to mention fully intended bad consequences like putting people out of work. I can’t really speak for other countries, though clearly societies like Japan and South Korea share similar love of technological innovation, but this blind faith in technology is deeply embedded in what it means to be an American, going back to the early 19th century and the rise of canals and railroads at the very least.”

I would add John Mauldin as a “mainstream” business writer who has embraced the techno-utopian worldview.

37 Johnny March 7, 2016 at 1:19 am

I must admit I’m pretty economically illiterate, but my first instinct is to say that productivity from free apps would show up elsewhere. Consider this hypothetical: suppose I’m a software developer producing $500,000 of value to a firm per year, and then I start using a new free debugging tool that increases my productivity by 10%. Now I’m producing $550,000 of value per year. So isn’t that free software’s value showing up elsewhere? Someone please tell me if I’m missing something important.

38 Stephan March 7, 2016 at 1:27 am

I would say that’s true in this case: your productivity has increased. But what if the free app just entertains you, or makes you feel better about your job, or is interesting to look at, or distracts you with ads for 10% of your time?

39 Miguel Madeira March 7, 2016 at 5:33 am

What you are missing is that many (probably most) free apps are consumption goods, not capital goods.

40 Shane M March 7, 2016 at 1:48 am

Per Dan Ariely’s research, “free” might make us worse off.
http://www.wired.co.uk/magazine/archive/2011/01/start/dan-ariely

41 anon March 7, 2016 at 6:06 am

Yes. The power of free is that it can drive out better, even when better has a very small marginal cost.

42 Ray Lopez March 7, 2016 at 2:06 am

The problem with these studies (including this one, see below) is that they deal with ‘residuals’ that are fudge factors. For example, Total Factor Productivity is a fudge factor. The best that can be said is that if you’re consistent, year to year, perhaps you are measuring some sort of trend (or perhaps not, if your function is not capturing all the information you need). And the authors of this study make the same mistake, from their paper (p. 8) “We will follow the literature and use the standard TFP formula to estimate the impact of ‘free’ apps on aggregate productivity.”

The problem with using a production function residual like TFP is this (from the online paper “Capital accumulation: fiction and reality”, Shimshon Bichler and Jonathan Nitzan):

Note that economists know the “magnitude” of technology here not by observing it directly (which nobody really can), but only indirectly, through its reflection in the mirror: they deduce it as the residual between market capitalization and the dollar value of plant and equipment. Most economists encounter the technological “residual” in their study of production functions. These functions are intended to explain the level of output by the level of productive inputs – and are notoriously bad at doing so. Usually, they leave out a large unexplained variation in output – the infamous “residual” – whose existence the economists customarily blame on their inability to quantify “knowledge” (calling it a “measure of our ignorance”). This inability has devastating consequences. To illustrate, consider two hypothetical production functions, with physical inputs augmented by technology: (1) Q = 2N + 3L + 5K + T and (2) Q = 4N + 2L + 10K + T, where Q denotes output, N labour, L land, K capital, and T technology. Now, suppose Q is 100, N is 10, L is 5 and K is 4. The implication is that T must be 45 in function (1) and 10 in function (2). Yet, since technology cannot be measured, we can’t know which production function is correct, so both – and, by extension, any technology-augmented function – can claim incontrovertible validity. But then, if production cannot be objectively described, what is left of the supply function, equilibrium and the entire edifice called economics?

Put more concretely (as does the Bichler et al paper): which company is bigger, Microsoft or General Motors? MSFT has more technology ‘residual’ while GM has more ‘real’ assets. There is a measurement error when comparing the two. To get back to this example, perhaps the effects of “social media” have not been captured correctly. If a Third Worlder (and I’ve seen this in practice) prefers Facebook over having indoor plumbing, but Facebook is free while plumbing costs money, then something is wrong with the way we measure GDP.
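
The indeterminacy in the quoted example is easy to verify; a minimal sketch using the quote’s own numbers:

    # Back out the technology "residual" T from the two hypothetical
    # production functions in the Bichler & Nitzan quote above.
    Q, N, L, K = 100, 10, 5, 4

    T1 = Q - (2 * N + 3 * L + 5 * K)    # function (1) => T = 45
    T2 = Q - (4 * N + 2 * L + 10 * K)   # function (2) => T = 10
    print(T1, T2)  # same data, two incompatible estimates of "technology"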

The authors of the paper TC cites, Nakamura et al., on page eight, acknowledge this problem but wave it away when they say:

Obviously, this simplified model is not a perfect match for Silicon Valley firms. For example, the URL business model assumes that lower prices now help companies become more efficient and eventually profitable. This completely contradicts the assumption that current output is unrelated with past prices. However, many other industries also violate some of the assumptions listed above. A researcher who tried to account for all the complexities of each industry would quickly find the calculations impossible. Most researchers use the standard productivity formula as a starting point when comparing TFP across industries, over time or across countries. We will follow the literature and use the standard TFP formula to estimate the impact of ‘free’ apps on aggregate productivity.

Bonus trivia: Nakamura is also the name of the highest rated American chess player, along with the Italian-American Caruana. To tie in with the previous Polgar post of TC’s, Nakamura’s adopted father, a Sri Lankan chess master and popular author, taught his son the game of chess and as an experiment, like the Polgars, tried to make his son into a grandmaster, which worked (Naka is #2 in the world).

43 Nathan W March 7, 2016 at 6:10 am

E.g., the Saudis opening up the oil spigots will show up as improved TFP, whereas this effect does not imply any actual underlying technological improvement (new tech in the shale gas/oil industries may be a different story though…).

44 Ray Lopez March 7, 2016 at 9:30 am

@Nathan W – not sure, but maybe. As a residual, TFP is supposed to measure increased outputs given the same inputs (or that’s the theory), so a lower cost of oil should not increase TFP, but it’s not my area of expertise. In support of your interpretation, the cited authors say “This completely contradicts the assumption that current output is unrelated with past prices”, which implies the TFP calculation ignores past prices, so indeed lowering the price of oil will increase measured TFP. Regardless, as I noted above, the authors do dodge the question on TFP by assuming the conventional calculation of TFP, which biases the answer to come out a certain predetermined way.

45 William Cotton March 7, 2016 at 10:39 am

But Facebook is not a natural resource like oil or sunshine. It’s just a bunch of dudes in hoodies taking up space in San Francisco, trying to: lure people into generating as many units of unproductive consumer digital goods as possible; organize those goods in a way that maximizes interactions over time; and sell those interactions to advertisers who can drive sales to actual economic activity.

The actual economic activity is already captured by economic models and it has very little to do with the net productivity of Facebook. There’s no “everything is different now” version of digital economics. Facebook pays rent in Mountain View and has a team full of public accountants.

There’s no price discovery going on with the digital consumer goods produced on platforms like Facebook. A photo or a post literally has no known value. This is not the case with the traditional media industries built on top of notions like copyright. You can find a market value for a song. The royalty rights to “A Hard Day’s Night” are a fungible asset. If I wrote that song, I could trade you 20%, and a judge will uphold that transaction against future profits in a contract dispute.

If you create a platform built without any notions of copyright, or one that actively promotes notions of copyleft, you basically shut down the economic engine of intellectual property.

Shameless self-promotion of an open source software system for digital media assets: https://github.com/blockai/openpublish

46 John Hall March 7, 2016 at 9:32 am

I think it’s a matter of what GDP is trying to measure. GDP isn’t a measure of consumer surplus, it’s a measure of economic activity.

47 derek March 7, 2016 at 2:28 am

Without ‘free’ software the internet would not exist.

Throw that into your sophisticated models and see what it spits out.

48 TMC March 7, 2016 at 8:38 am

So whoever wrote the free software wouldn’t do it for a buck?

49 derek March 7, 2016 at 10:07 am

First, more than likely they did.

Second, it was and still is the free availability of the internet’s components that has driven its growth. A free web server running on a free operating system using free scripting languages such as Perl, PHP, and now Ruby, Python, Java, JavaScript and the like has driven its growth. MySQL and other databases were free. Facebook and Google, definitely for-profit enterprises, openly publish the nuts-and-bolts libraries and techniques for others to use, with exceptions of course. The closed-source stuff that is sold is characterized by horrible security; the first thing you do to secure your Windows machine is to use one of the free web browsers.

There is lots of money to be made in this space; expertise in building a complex site or app is valuable and people pay for it.

50 Lord Action March 7, 2016 at 11:31 am

It’s hard to still say stuff like this with a straight face. The big lesson of 2014 was that open source software is not very well tested and not very secure.

The idea that thousands of eyeballs were looking at the code so all the problems were already found went away with Heartbleed and Shellshock.

51 prior_test2 March 7, 2016 at 11:59 am

‘The big lesson of 2014 was that open source software is not very well tested and not very secure.’

Thankfully, commercial software keeps filling the gaps that open source leaves – ‘Which software had the most publicly disclosed vulnerabilities this year? The winner is none other than Apple’s Mac OS X, with 384 vulnerabilities. The runner-up? Apple’s iOS, with 375 vulnerabilities.

Rounding out the top five are Adobe’s Flash Player, with 314 vulnerabilities; Adobe’s AIR SDK, with 246 vulnerabilities; and Adobe AIR itself, also with 246 vulnerabilities. For comparison, last year the top five (in order) were: Microsoft’s Internet Explorer, Apple’s Mac OS X, the Linux Kernel, Google’s Chrome, and Apple’s iOS.

These results come from CVE Details, which organizes data provided by the National Vulnerability Database (NVD). As its name implies, the Common Vulnerabilities and Exposures (CVE) system keeps track of publicly known information-security vulnerabilities and exposures.’ http://venturebeat.com/2015/12/31/software-with-the-most-vulnerabilities-in-2015-mac-os-x-ios-and-flash/

52 Lord Action March 7, 2016 at 1:51 pm

If you pay people money to find your bugs, they find them. You think those bugs aren’t there in open source? Or do you think that nobody (except for adversaries) is looking for them?

53 anon March 7, 2016 at 5:18 am

Only a subset of the great wave of free stuff is applied in the workplace, and only that fraction affects “productivity.”

If you don’t have a commercial output, good or service, you aren’t talking productivity. I enjoy MR, but for me there is no productivity involved. I am a dead end, an eyeball.

I don’t get some of the push back above, against Silicon Valley optimists, because many of them are not pushing productivity in the first place. Most are simply in the publishing business, with an advertising model.

But to return to productivity proper: it is an article of faith, isn’t it, that the size of an economy cannot shrink or slow as it digests a bunch of free stuff. Why? At a minimum, attention matters. If you are on Twitter, you are less likely to be at the movies.

54 Miguel Madeira March 7, 2016 at 5:42 am

«Only a subset of the great wave of free stuff is applied in the workplace, and only that fraction affects “productivity.”»

Wrong, I think.

If you download for free an app (for your personal entertainment) that you value at $50, the GDP statistics will show $50 less than if you had paid for the app; given that “productivity” is simply the mathematical ratio between GDP and hours worked, this will mean less measured GDP.

Of course, there is also the other side of the coin – the statistics also don’t count as work much of the time used to create free apps, Firefox add-ons, to update Wikipedia, etc. (unless you do that during your work time…), meaning that perhaps the GDP/hours-worked ratio is not much affected at all.

55 Miguel Madeira March 7, 2016 at 5:44 am

Where I wrote «given that “productivity” is simply the mathematical ratio between GDP and hours worked, this will mean less measured GDP», I wanted to write «…this will mean less measured productivity».
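
A minimal sketch of that corrected arithmetic, with purely illustrative numbers (not from the comment above):

    # Measured labor productivity is GDP / hours worked, so an app consumed
    # free (instead of sold for $50) lowers the measured ratio even though
    # the consumer enjoys the same app. All numbers are hypothetical.
    hours = 1000.0
    gdp_if_paid = 50_000.0          # the app is sold for $50
    gdp_if_free = 50_000.0 - 50.0   # the same app is given away

    print(gdp_if_paid / hours)   # 50.00 measured output per hour
    print(gdp_if_free / hours)   # 49.95: lower measured productivity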

56 anon March 7, 2016 at 6:03 am

I am taking a hard-line view that you need an actual sale, yes.

To me that strikes at the meaning of productivity, and keeps it tied to wages. That is, Tyler and Alex can claim that MR makes them valuable, but it is not directly a revenue stream they can tap (I presume).

And as you say, if we go nuts and start imputing free labor, we commentators become workers generating free content ourselves.

57 Larry Siegel March 7, 2016 at 6:11 am

0.009% per year, hmm. I practice my lucrative trade from my backpack, which contains a laptop, a phone, and a few other gadgets. I am writing this, as I write all the articles and research reports I sell, from the beach or from my backyard. In 1980 I would have had to go to an office, employ two research assistants and a secretary, and live near and go to a major academic library to do the same work. And I do not think I am alone in this improvement in lifestyle.

58 rayward March 7, 2016 at 6:23 am

Nothing is “free”. And that includes apps, Facebook, and Google. While the internet could have made for a more transparent world, it has actually done the opposite. People have become so conditioned that it’s all “free”, economists feel compelled to write research papers that state the obvious. The internet, I fear, has made people stupid.

59 Slocum March 7, 2016 at 7:42 am

One point about apps that should be noted — for the most part, they are repackagings of preexisting services whose benefits consumers were already enjoying in other forms. Consider road-mapping. The basic (and most useful) capabilities have been around for more than 20 years. Here, for example, is a review of the 1996 version of Rand McNally ‘Trip Maker’ (not the first release). In the two decades since, mapping has migrated from the end-user device (a PC) to the server (as a web application) and back to the device (a smartphone or tablet — which, in the case of the Microsoft Surface, is also a PC). Email has had an even longer history, with an extra round of ping-pong — first on servers with dumb terminal clients, then shifted to PCs with email software, then to the web server, and then back to the client in app form. The point is — most apps *couldn’t* be a source of recent ‘hidden’ productivity gains, since they’re not even really new. A lot of it is 20-30 year-old wine in new bottles. A lot of great stuff. But not really new.

60 Ray Lopez March 7, 2016 at 9:43 am

You can say that about anything, and in fact the US Patent Office routinely does. They rejected the laser (optical laser) based on the maser (microwave laser), and in one famously litigated case I read about, they rejected a system for flooding a barn with water to clean out the manure based on one of the mythical labors of Hercules (the Augean Stables, see http://www.perseus.tufts.edu/Herakles/stables.html). There you go: a myth that never happened (but describes something that could be built) is prior art (the inventor did get a narrower patent claim as I recall, just not the broadest claim they wanted). Nothing new under the sun, as the Jewish teacher Ecclesiastes once wrote.

Or, if you prefer a later version: “Everything that can be invented has been invented” – apocryphally sourced to Charles Holland Duell (April 13, 1850 – January 29, 1920), who was the commissioner of the United States Patent and Trademark Office from 1898 to 1901. But in fact Wikipedia says this quote originated with a sort of proto-Tyler Cowen Great Stagnationist, an earlier Patent Office Commissioner, Henry Ellsworth, in 1843. In an 1843 report Ellsworth states, “The advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end.” Note Ellsworth spoke just before the great inventions (radio, automobile, electricity) of the late 19th century, so like TC he was “right short term but wrong long term”.

61 Slocum March 7, 2016 at 11:03 am

“You can say that about anything”

No, you can’t. I’m not arguing that everything has been invented or that nothing is ever revolutionary. Digital mapping and GPS positioning were revolutionary. So was email. And digital photography. And many other related technologies. But moving the same map-rendering and route-planning algorithms from PC to server and then back to small-screen PCs without keyboards (smartphones and tablets) just isn’t. In the first half of the 20th century, the changes in auto travel were revolutionary. In the second half they weren’t. The time between the Wright brothers in 1904 and the 707 in 1958 was 54 years — a period of rapid, revolutionary developments. In the 58 years that have followed, change has obviously been dramatically slower (the 747 is now 46 years old and still being produced). For digital electronics, we’re at least 5-10 years (and arguably more) into the slow, mature phase that we’ve seen in other industries over and over again. Get used to it.

62 Peter Schaeffer March 8, 2016 at 12:17 pm

Slocum,

All true. Check out “This chart shows how incredibly far we’ve come in the past 100 years of human health” (http://www.techinsider.io/rates-of-infectious-disease-deaths-in-the-past-100-years-2015-7). Almost the entire reduction in mortality from infectious disease predates 1955. Since 1980, death rates have actually been going up (perhaps because of AIDS). The “Tech Revolution” of recent decades is mostly happy talk.

Across an astounding number of fields (steel production, ammonia synthesis, ethylene crackers, antibiotics, anti-cancer drugs, space flight, automobiles, hybrid corn, etc.) progress was amazingly fast from 1900 to around 1970. Much less since 1970.

To RL we live in a perfect world because he can have fun with attractive young women in poor countries. I guess everyone has their own idea of utopia.

63 Albigensian March 7, 2016 at 11:41 am

Is something still “free” if it’s ad-supported? TV and radio broadcasting were built on an ad-supported model in which the listener/viewer doesn’t pay for content. The content presumably has value to those consuming it; are they receiving value that’s not captured by what the advertisers paid?

What of a “freemium” model? Typically there are far more users of the free version than of the pay version, and presumably the users of the free version are receiving something of value, yet from the publisher’s viewpoint offering a free version is essentially a marketing expense.

Open-source software is often truly free to users as it is ad-free and is not used to market a for-pay version, yet presumably the creators of this software received non-monetary rewards for creating it or they would not have done so.

64 anon March 7, 2016 at 12:08 pm

Put another way, Craigslist did not increase productivity overall. It delivered value, but reduced both measured inputs and outputs nationally. Even as it was taken up, it reduced the size of the advertising market.

The strange thing about modern tech: usage can increase as markets shrink.

I just read that the number of full-time reporters has fallen from 55k to 32k. Not because of productivity, but because of destroyed markets.

Adding back in some imagined value for the destroyed market is nonsensical.

65 Tom Warner March 7, 2016 at 1:29 pm

Real GDP is a measure of output. It is not a measure of wealth generation, nor is it a measure of living standards.

The argument that living standards are improving faster than GDP is completely valid. But there’s no way to put a credible number on the rate of living standards growth, and somebody who hates everything digital and loves artisan craft could rightly disagree.

Trying to adjust GDP to reflect some real values that GDP methodology misses is silly. The only objective data is prices. Even real GDP must use prices for its relative weightings. Web content is worth whatever is being spent on it, and the fact that much of that spending enters GDP inside the marketing costs of other final goods and services doesn’t mean it is missing from GDP.

66 Swami March 7, 2016 at 1:50 pm

In the late ’80s, a teleconference center cost about half a million dollars, worked poorly, and had very few people in the network. Now everyone has it as a free app on their smartphone, and it works seamlessly.

Somehow this half-million-dollar teleconference center, owned by a hundred million people, is ignored in economics.

A letter used to cost a quarter to mail. Now teens chatting with teens and computers chatting with computers send hundreds of millions or billions of instant electronic notes to each other daily. But nobody counts these at a quarter each, let alone values them at what it would have cost in the ’70s to send anything instantly.

Economics simply does not measure “free” well. And technology thrives on free. Thus the trend is for economics to become less and less effective at measuring progress. A dilemma.

67 spencer March 7, 2016 at 2:01 pm

Social media is no different from radio or TV. In each of the three you get supposedly free entertainment.

But the business model is selling advertising or commercials — the free entertainment is just a way of getting you to look at the advertisement.

Facebook sells commercials for good old-fashioned dollars, just like a radio or TV station.

That is its product, and it is already fully captured in the GDP accounts. There is nothing else out there that should be added to the GDP measurement. Yes, as an individual you may get more pleasure from an hour of social media than from an hour of TV. So you have a larger consumer surplus. But that is not what GDP measures.
