
That is the topic of my latest Bloomberg column, and the argument assumes no major increase in housing supply in the megacities themselves.  Here is one bit:

We live in a special time where clustered activities are unusually important for economic growth. Some activities, such as dentistry and cement production, don’t cluster geographically very much, for obvious reasons. In contrast, finance (New York and London), information technology (the Bay Area), and entertainment (Hollywood and New York) are the most clustered. For whatever reasons, it makes sense to have many of the top decision-makers in one place.

Leading cities have become so expensive in large part because two of these clustering sectors — finance and information technology — have been ascendant. There is no particular reason to expect those trends to continue forever, and that will restrain rents in affected cities.

Even tech will decentralize its gains over time:

If you think of a typical technology project, some of the gains go to the venture capitalists and the intellectual property holders, and some of the gains go to broader society, including consumers. Insofar as the gains are disproportionately reaped by the early project initiators, then yes real estate values in the Bay Area (and other tech clusters) will rise. But the most likely future for information technology is that it will spread its benefits more and more broadly into more sectors of the economy. That scenario suggests a partial convergence of urban futures.

Another way to put the point is that intellectual property returns erode over time. In the early years of smartphones, a big part of the gain goes to Apple. As cheap imitators enter the market, prices fall and more of the gains go to consumers, or business users of the product, who are scattered across the country.

The article contains other points of interest.

…I interviewed Eric Schmidt of Google fame, who has been leading a civilian panel of technologists looking at how the Pentagon can better innovate. He said something I hadn’t heard before, which is that artificial intelligence helps the defense more than the offense. This is because AI always learns, and so it constantly monitors patterns of incoming threats. This made me think that the next big war will be more like World War I (when the defense dominated) than World War II (when the offense did).

Here is the link, by Thomas E. Ricks, via Blake Baiers.

One of Beijing’s busiest public toilets is fighting the scourge of toilet paper theft through the use of technology – giving out loo roll only to patrons who use a face scanner.

The automated facial recognition dispenser comes as a response to elderly residents removing large amounts of toilet paper for use at home.

Now, those in need of paper must stand in front of a high-definition camera for three seconds, after removing hats and glasses, before a 60cm ration is released.

Those who come too often will be denied, and everyone must wait nine minutes before they can use the machine again.

But there have already been reports of software malfunctions, forcing users to wait over a minute in some cases, a difficult situation for those in desperate need of a toilet.

The camera and its software have also raised privacy concerns, with some users on social media uneasy about a record of their bathroom use.

Here is the full story, via Michelle Dawson.
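As described, the dispenser is essentially a per-face quota with a cooldown. Purely as an illustration of that gating rule (hypothetical names, and no real face-recognition code), a minimal sketch might look like this:

import time

COOLDOWN_SECONDS = 9 * 60   # nine minutes before the same person is served again
RATION_CM = 60              # length of paper released per approved scan

last_dispense = {}          # face_id -> time of last successful dispense

def request_paper(face_id, now=None):
    # `face_id` stands in for whatever identifier a real recognition system
    # would return; return the ration in cm, or 0 if the request is denied.
    now = time.time() if now is None else now
    last = last_dispense.get(face_id)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return 0            # same face again within the cooldown window
    last_dispense[face_id] = now
    return RATION_CM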

Cyber weapons are different. If you are a state and you let potential enemies know about your arsenal of cyber attacks, you are giving them the opportunity to fix their information systems so that they can neutralize the threat. This means that it is very hard to use cyber weapons to make credible threats against other states. As soon as you have made a credibly specific threat, you have likely given your target enough information to figure out the vulnerability that you want to exploit.

This means that offensive cyber weapons are better for gathering intelligence or actually taking out military targets than for making threats. In this regard, they are the opposite of nuclear weapons, which are more useful as threats than as battlefield options. Nuclear weapons can create stability because they deter attacks. In effect, they create a stable system of beliefs where no state wants to seriously attack a nuclear power, for fear that this might lead to a conflict that would escalate all the way to nuclear war.

Nuclear weapons and cyber attacks don’t mix well.

Unfortunately, this means that the advantages of cyber operations become an important liability for nuclear deterrence when they are used for “left of launch” attacks on nuclear launch systems. By secretly penetrating another state’s launch system, you may undermine the stable system of beliefs that discourages an attack.

Consider what might happen in a tense standoff between two states that both have nuclear weapons, where one state has penetrated the other state’s launch system, so that it could stop a nuclear counter attack. The state that has penetrated the launch system knows that it has a military advantage. However, if  it reveals the advantage to the target state, the target state will be able to patch its system, destroying the advantage. The target state does not know that it is at a disadvantage, and it cannot be told by the attacker.

The result is that the two states have very different perceptions of the situation. The attacker thinks that there is an imbalance of power, where it has the advantage. The target thinks that there is a balance of power, where both states have nuclear weapons and can deter each other. This means that the first state will be less likely to back down, and might escalate conflict, secure in the knowledge that it can neutralize the other state if necessary. However, the target state, too, may behave in provocative ways that raise the stakes, since it mistakenly believes that at a certain point the other state will have to back down, for fear of nuclear war. This creates a situation where each side may be more willing to escalate the tense situation, making it more likely that one state will decide to move toward war.

That is from Erik Gartzke and Jon R. Lindsay; there is more interesting material at the link.

The Robothespian


When Judy Norman walks on stage for the play Spillikin, she performs beside a somewhat different cast member — a humanoid robot.

Featuring a “robothespian”, the play brings love and technology together for a story about an engineer who builds a robot to keep his wife company after he dies.

Yet accuracy is required from the human thespian:

The robot is connected to the theatre’s control room, where a laptop transmits cues for its performance.

“[There is] a big pressure on the actor…to always have the right lines, always stand in the right place so that the robot is looking at the right direction at that particular moment,” Welch said.

Onstage, Norman talks to the robot and even kisses it. In return, the robot replies, displays facial expressions and moves its hands.


Here is the full story, with more photos and video, via Michelle Dawson.

That is the title of a recent paper in the Journal of Development Economics (NBER version here, 2013 ungated version here), and although the piece does not feel dramatic at first it is one of my favorite articles of the year.  It pins down some critical features of economic underdevelopment better than any study I know.  The subtitle, by the way, is “The Successes and Limitations of Bureaucratic Reform in India,” the authors are Iqbal Dhaliwal and Rema Hanna, and the work is set in rural Karnataka.

It is not easy to excerpt from, so I will summarize the narrative:

1. Using biometric technology — thumbprints — to monitor absenteeism increases staff attendance among public health workers by almost 15 percent.

2. That in turn leads to a reduction in low-birth-weight babies.

3. Yet the government proved not so interested in monitoring attendance on a more regular basis, not even to enforce their pre-existing human resource policies.  Potential penalties against late or absent doctors were not, for the most part, enforced.

4. Following the implementation of monitoring, the doctors showed the least improvement in attendance of all the workers, in fact virtually no improvement.  The entire positive effect came from nurses, lab technicians, and lower level staff.

5. The government was reluctant to continue the monitoring because it feared staff attrition and staff discord, especially from the doctors.  There is growing private sector demand for doctors, and many doctors are considering leaving these clinics for superior pay elsewhere, and perhaps also superior location.  Therefore the doctors are given, de facto, a very lenient absence and lateness policy, in lieu of a pay hike.

6. It is already the case that many of these doctors moonlight on the side, or have separate private practices, and that spending more time at the public clinic is not their major priority.

7. It is not easy for the underfunded local government to pay these doctors more, and thus a high level of lateness and absenteeism continues.  I also wonder what the morale costs on the non-doctors would be, if the monitoring continued to be enforced in this differential manner over a longer period of time.

John Komlos has a new paper on this topic, here is the abstract:

Schumpeter’s concept of creative destruction as the engine of capitalist development is well-known. However, that the destructive part of creative destruction is a social and economic cost and therefore biases our estimate of the impact of the innovation on GDP is hardly acknowledged, with the notable exception of Witt (1996. “Innovations, Externalities and the Problem of Economic Progress.” Public Choice 89:113–30). Admittedly, during the First and Second Industrial Revolutions the magnitude of the destructive component of innovation was no doubt small compared to the net value added to GDP. However, we conjecture that recently the destructive component of innovations has increased relative to the size of the creative component as the new technologies are often creating products which are close substitutes for the ones they replace whose value depreciates substantially in the process of destruction. Consequently, the contribution of recent innovations to GDP is likely upwardly biased. This note calls for further research in innovation economics in order to measure and decompose the effects of innovations into their creative and destructive components in order to provide improved estimates of their contribution to GDP and to employment.

Think of Uber as a relatively close substitute for taxicabs, for instance.  Speculative, as they say, and the paper does not in fact demonstrate these conclusions, but at least we should be asking such questions more often.
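One simple way to write down the point, in my own notation rather than the paper's: let V_c be the gross value created by an innovation and V_d the value destroyed in the close substitutes it displaces. Then

true contribution = V_c - V_d,    measured contribution ≈ V_c,    upward bias ≈ V_d,

and the conjecture is that the ratio V_d/V_c has been rising for recent innovations, so the bias matters more now than it did during the Industrial Revolutions.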

Should we tax robots?


That idea was suggested recently by Bill Gates, though I think you can debate how literally he meant it.  It’s worth a ponder in any case, and here is a recent Noah Smith column on the idea, and here is Summers in the FT, WaPo link here.  And here is Izabella Kaminska.

Put aside the revenue-raising issue (which will require some taxes on capital, most likely, including on robots): if we have taken in optimal revenue, is there a separate argument for an additional robot tax?  In this context, I would consider “robots” to be capital that is especially substitutable for human labor.

Presumably the claim is that there is either a distributional or an “externalities from a happy human being” reason to slow the rate at which capital is substituted for labor.  But if we accept that assumption, should we tax robots or subsidize wage labor?

One reason not to tax the robots is that employers might substitute away from robots and toward natural resources rather than toward domestic human labor.  Maybe that doesn’t sound intuitive, but think of paying the energy costs to outsource to another nation and transport the outputs back home.

But the main issue is probably one of incidence.  A general problem with a wage subsidy is that sometimes much of its value is captured by employers.  For instance, if the subsidy takes an EITC form, employers could pay less to their workers, but perhaps many eager workers still would seek the job to capture the somewhat higher net total wage, namely the employer portion plus the benefit.  If enough workers are keen to get the pay, employers can claw back much of the EITC boost and still get the work force they need.

Now consider the incidence of a tax on robots.  If the elasticity of the demand for robots is high, there will be a big shift away from robots and toward labor (and land and other resources).  It is at least possible that workers capture more of the gains this way than from the direct subsidy to their wages.  On the downside, the employer fares less well under this scheme.

So it depends on how labor and robot elasticities relate to each other.  I don’t know what relationship between the parameter values is likely, but typically in these scenarios just about any result is possible.  The robot tax would seem to do best when the elasticity of demand for robots is high, but the corresponding elasticity of demand for labor is low (and differentials in supply elasticities do not offset this).  As robots and labor become more substitutable, that difference in demand elasticities is likely to diminish.  So if you are going to do this, maybe it is necessary to do it soon, precisely when it does not seem needed.
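The same textbook incidence rule is doing the work in both the EITC discussion above and the robot-tax case: the less elastic side of a market bears more of a tax and keeps more of a subsidy. Here is a minimal sketch of that rule, with purely illustrative elasticities of my own choosing rather than estimates:

def share_to_workers(labor_demand_elasticity, labor_supply_elasticity):
    # Partial-equilibrium rule of thumb: workers' share of a wage subsidy is
    # the demand elasticity divided by the sum of the two elasticities
    # (both entered as positive magnitudes).
    ed, es = labor_demand_elasticity, labor_supply_elasticity
    return ed / (ed + es)

# The EITC worry above: with many eager workers (elastic labor supply),
# employers capture most of the subsidy.  Hypothetical numbers:
print(share_to_workers(labor_demand_elasticity=0.5, labor_supply_elasticity=2.0))  # -> 0.2

# With less elastic labor supply, workers keep more of it:
print(share_to_workers(labor_demand_elasticity=0.5, labor_supply_elasticity=0.5))  # -> 0.5

The robot tax question then turns on the analogous elasticities for robots and for the other inputs employers can shift toward, which is why the paragraph above hinges on how those elasticities compare.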

Your call, but that is the basic set-up of the problem.

Maersk had found that a single container could require stamps and approvals from as many as 30 people, including customs, tax officials and health authorities.

While the containers themselves can be loaded on a ship in a matter of minutes, a container can be held up in port for days because a piece of paper goes missing, while the goods inside spoil. The cost of moving and keeping track of all this paperwork often equals the cost of physically moving the container around the world.

That is by Nathaniel Popper and Steve Lohr, mostly about blockchains, via Ángel Cabrera.

*Deep Thinking*


The author is Garry Kasparov and the subtitle is Where Machine Intelligence Ends and Human Creativity Begins.  I am honored to have had the chance to write a blurb for this book.  It is everything I wanted from this author and title, and it also contains the inside scoop — with some truly interesting and deep revelations — about the match with Deep Blue.

Self-recommending, and interesting throughout!

Here is the review, here is one bit:

“Matchers gain, strivers lose,” he [Cowen] writes in a new book, “The Complacent Class.”

Matchers, aka enthusiasts, are people who are motivated by personal interests, whether that’s record collecting, hiking, cooking, or obsessing about “Game of Thrones.” “The enthusiasts are not trying to come out ahead of everyone else; rather, they seek to have some of their niche preferences fulfilled for the sake of their own internally directed happiness,” Cowen writes.

Strivers, on the other hand, are motivated by beating others. “These are the people who strive to have the biggest office, bed the most mates, earn the most money, or climb whatever the relevant status ladder might be,” Cowen writes.

It’s not hard to see how recent trends have favored matchers. This group has benefitted from technology — from Tinder to Spotify to Google — that makes it easier for them to pursue their interests and find other people who share them. Meanwhile, strivers are suffering, faced with more competition than ever and a greater awareness of how many people around the world are beating them.

An excellent piece.

Late last month, a pair of Islamic State fighters in desert camouflage climbed to the top of a river bluff in northern Iraq to demonstrate an important new weapon: a small drone, about six feet wide with swept wings and a small bomb tucked in its fuselage.

The two men launched the slender machine and took videos from a second, smaller drone that shadowed its movements. The aircraft glided over the besieged city of Mosul, swooped close to an Iraqi army outpost and dropped its bomb, scattering Iraqi troops with a small blast that left one figure sprawled on the ground, apparently dead or wounded.

The incident was among dozens in recent weeks in a rapidly accelerating campaign of armed drone strikes by the Islamic State in northern Iraq.

The terrorist group last month formally announced the establishment of a new “Unmanned Aircraft of the Mujahideen” unit, a fleet of modified drones equipped with bombs, and claimed that its drones had killed or wounded 39 Iraqi soldiers in a single week.

Here is the full story by Joby Warrick.

There’s two versions of this.

1. One or a small group of entrepreneurs owns the robots.

2. The government owns the robots.

I see how we get from where we are now to 1. How would we get to 2, and is 2 better than 1?

That is a comment and request from Mark Thorson.  It’s embedded in a longer thread, but I suspect you can guess the context.

I would focus on a prior question: what is government in a world where everything is done by the robots?  Say that most government jobs are performed by robots, except for a few leaders (NB: Isaac Asimov had even the President as a robot).  It no longer makes sense to define government in terms of “the people who work for government” or even as a set of political norms (my preferred definition).  In this setting, government is almost entirely people-empty.  Yes, there is the Weberian definition of government as having a monopoly on force, but then it seems the robots are the government.  I’ll come back to that.

You might ask who are the residual claimants on output.  Say there are fifty people in the government, and they allocate the federal budget subject to electoral constraints.  Even a very small percentage of skim makes them fantastically wealthy, and gives them all sorts of screwy incentives to hold on to power.  If they can, they will manipulate robot software toward that end.  That said, I am torn between thinking this group has too much power — such small numbers can coordinate and tyrannize without checks and balances — and thinking they don’t have enough power, because if one man can’t make a pencil, fifty together might not do better than a few crayons.

Alternatively, say that ten different private companies own varying shares of various robots, with each company having a small number of employees, and millions of shareholders just as there are millions of voters.  The government also regulates these companies, so in essence the companies produce the robots that then regulate them (what current law does that remind you of?).  That’s a funny and unaccustomed set of incentives too, but at least you have more distinct points of human interaction/control/manipulation with respect to the robots.

I feel better about the latter scenario, as it’s closer to a polycentric order and I suspect it reduces risk for that reason.  Nonetheless it still seems people don’t have much direct influence over robots.  Most of the decisions are in effect made “outside of government” by software, and the humans are just trying to run in place and in some manner pretend they are in charge.  Perhaps either way, the robots themselves have become the government and in effect they own themselves.

Or is this how it already is, albeit with much of the “software” being a set of social norms?

Replacing social norms with self-modifying software – how big of a difference will it make, and for how many things?

That is the theme of my latest Bloomberg column, here is the opening bit:

“Why should it be different this time?” That’s the most common response I hear when I raise concerns about automation and the future of jobs, and it’s a pretty simple rejoinder. The Western world managed the shift out of agricultural jobs into industry, and continued to see economic growth. So will not the jobs being displaced now by automation and artificial intelligence lead to new jobs elsewhere in a broadly similar and beneficial manner?

And:

Consider, for instance, the history of wages during the Industrial Revolution. Estimates vary, but it is common to treat the Industrial Revolution as starting around 1760, at least in Britain. If we consider estimates for private per capita consumption, from 1760 to 1831, that variable rose only by about 22 percent. That’s not much for a 71-year period. A lot of new wealth was being created, but economic turmoil and adjustment costs and war kept down the returns to labor. (If you’re wondering, “Don’t fight a major war” is the big policy lesson from this period, but also note that the setting for labor market adjustments is never ideal.)

By the estimates of Gregory Clark, economic historian at the University of California at Davis, English real wages may have fallen about 10 percent from 1770 to 1810, a 40-year period. Clark also estimates that it took 60 to 70 years of transition, after the onset of industrialization, for English workers to see sustained real wage gains at all.
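To put the 22 percent figure above in annual terms (my arithmetic, not a number from the column): compounded over 71 years, (1.22)^(1/71) - 1 ≈ 0.0028, or roughly 0.3 percent a year of growth in private per capita consumption. That is the sense in which the gains were slow to arrive.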

From that turmoil, we also received Marxism and agricultural subsidies for generations!  Do read the whole thing.

China has banned almost 7m people from taking flights and high-speed trains over the past four years as a penalty for not repaying their debts, the country’s Supreme Court has announced.

The penalty system is part of efforts to build a nationwide “social credit” system that will eventually rate every Chinese citizen by collecting big data on financial, legal or social misdeeds. The debtors’ travel ban has been touted as an important first step for building the structural links needed to implement such a comprehensive monitoring programme.

“We have signed a memorandum . . . [with over] 44 government departments in order to limit ‘discredited’ people on multiple levels,” Meng Xiang, head of the executive department of the Supreme Court, told state media on Wednesday.

…In addition to not paying debts on time, one can also be blacklisted for lying in court, hiding one’s assets and a host of other crimes. The Supreme Court said on Tuesday it was working on adding new forms of penalties.

Here is the FT story by Yuan Yang.  Keep in mind that the country does not have a real personal bankruptcy law, nor well-developed credit institution penalties, so this is viewed as one of the few options available.