Web/Tech

Here is a new paper by Christin, Egelman, Vidas, and Grossklags, entitled “It’s All About the Benjamins”:

We examine the cost for an attacker to pay users to execute arbitrary code—potentially malware. We asked users at home to download and run an executable we wrote without being told what it did and without any way of knowing it was harmless. Each week, we increased the payment amount. Our goal was to examine whether users would ignore common security advice—not to run untrusted executables—if there was a direct incentive, and how much this incentive would need to be. We observed that for payments as low as $0.01, 22% of the people who viewed the task ultimately ran our executable. Once increased to $1.00, this proportion increased to 43%. We show that as the price increased, more and more users who understood the risks ultimately ran the code. We conclude that users are generally unopposed to running programs of unknown provenance, so long as their incentives exceed their inconvenience.

The article is here (pdf); for the pointer I thank Bruce Schneier.

This new paper by Tom Blake, Steven Tadelis, and Chris Nosko is not entirely reassuring for the future of journalism, but it confirms what I have long suspected:

Internet advertising has been the fastest growing advertising channel in recent years with paid search ads comprising the bulk of this revenue. We present results from a series of large scale field experiments done at eBay that were designed to measure the causal effectiveness of paid search ads. Because search clicks and purchase behavior are correlated, we show that returns from paid search are a fraction of conventional non-experimental estimates. As an extreme case, we show that brand-keyword ads have no measurable short-term benefits. For non-brand keywords we find that new and infrequent users are positively influenced by ads but that more frequent users whose purchasing behavior is not influenced by ads account for most of the advertising expenses, resulting in average returns that are negative.

The NBER version is here, an ungated version is here.
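For intuition on why the non-experimental estimates run so high: high-intent shoppers both click ads and buy anyway, so clicks get credit for purchases that would have happened regardless.  Here is a toy simulation of that selection effect (all numbers invented, not drawn from the paper):

```python
# Toy simulation (all numbers invented) of why correlational estimates of
# ad effectiveness overstate the truth: high-intent users both click ads
# and buy anyway, so clicks "predict" purchases the ads did not cause.
import random

random.seed(0)

def simulate_user(ads_on: bool):
    """One shopper: returns (clicked_ad, purchased)."""
    high_intent = random.random() < 0.2            # 20% already intend to buy
    clicked = ads_on and high_intent and random.random() < 0.8
    # Assume the ad's true causal effect is a tiny +2% purchase probability.
    buy_prob = (0.50 if high_intent else 0.02) + (0.02 if clicked else 0.0)
    return clicked, random.random() < buy_prob

N = 200_000

# Naive (correlational) estimate: purchase rate of clickers vs. non-clickers.
obs = [simulate_user(ads_on=True) for _ in range(N)]
clickers = [buy for click, buy in obs if click]
others = [buy for click, buy in obs if not click]
naive_lift = sum(clickers) / len(clickers) - sum(others) / len(others)

# Experimental estimate: randomize ads off, in the spirit of the eBay experiments.
treated = sum(simulate_user(ads_on=True)[1] for _ in range(N)) / N
control = sum(simulate_user(ads_on=False)[1] for _ in range(N)) / N

print(f"naive lift:        {naive_lift:.3f}")        # large, driven by selection
print(f"experimental lift: {treated - control:.3f}")  # small true causal effect
```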

The NYTimes has a good piece today on the increasing use of noncompete clauses: clauses that say that if you leave a firm, you cannot work for a competitor, typically for a period of one or more years.

Noncompete clauses are now appearing in far-ranging fields beyond the worlds of technology, sales and corporations with tightly held secrets, where the curbs have traditionally been used. From event planners to chefs to investment fund managers to yoga instructors, employees are increasingly required to sign agreements that prohibit them from working for a company’s rivals.

Noncompete agreements (NCAs) are dangerous in my view because they put firms into a prisoner’s dilemma: noncompetes benefit individual firms but harm industries by reducing innovation.

Today we all know about Silicon Valley, but in the 1950s and 1960s the place for technology was Route 128 in Massachusetts, which Business Week called “the Magic Semicircle”. The magic semicircle contained technological leaders like DEC and Raytheon and intellectual powerhouses like Harvard and MIT; this was at a time when Silicon Valley was mostly fruit trees.

When William Shockley left Bell Labs for the Valley, it was not considered a promising move. And indeed something strange happened. Shockley wasn’t a very nice person; he couldn’t get any of his former colleagues to come work for him, and within a year of starting his firm in Mountain View, eight of Shockley’s researchers, who called themselves the “traitorous eight,” resigned. The traitorous eight started Fairchild Semiconductor. Two of them, Robert Noyce and Gordon E. Moore, later left Fairchild to form Intel Corporation. Other people leaving Fairchild Semiconductor started National Semiconductor and Advanced Micro Devices. It was in this branching-off process of new-firm creation that Silicon Valley was born.

Now here is the point: if Shockley had started his firm in Massachusetts, or in pretty much any other state, the traitorous eight probably would not have left to start their own firm, because they would have signed a standard noncompete agreement prohibiting them from competing with their former employer for 18 to 24 months. In California, however, the courts have consistently refused to enforce noncompete agreements. An employee who leaves one company can join a new company and start work the next day, regardless of any agreement they may have signed.

Silicon Valley could not operate if noncompete agreements were enforced. Silicon Valley has a hyper-mobile workforce. Moreover, it is precisely in the circulation of workers that Silicon Valley has one of its advantages: the diffusion of new ideas. The key to Silicon Valley, and to much innovation today, is the diffusion, combination, and integration of different sorts of knowledge, and worker mobility has been a big part of this. Not just worker mobility between firms in Silicon Valley but also immigrants, circulation between different countries, university-firm partnerships, and so forth.

Firms that come to Silicon Valley know that they cannot use NCAs to protect their innovations, but they come anyway because the opportunity to learn from other people exceeds the cost of other people learning from you. Thus, worker mobility, and the inability to protect IP by restricting mobility, is bad for an individual firm but good for the industry as a whole, good for innovation, good for workers, and good for consumers.

(Drawn from a talk I gave at a Google Big Tent event in Korea.) Hat tip to Loweeel in comments for some edits.

A programme that convinced humans that it was a 13-year-old boy has become the first computer ever to pass the Turing Test. The test — which requires that computers are indistinguishable from humans — is considered a landmark in the development of artificial intelligence, but academics have warned that the technology could be used for cybercrime.

…Eugene Goostman, a computer programme made by a team based in Russia, succeeded in a test conducted at the Royal Society in London. It convinced 33 per cent of the judges that it was human, said academics at the University of Reading, which organised the test.

It is thought to be the first computer to pass the iconic test. Though there have been claims that other programmes have succeeded, those tests included set topics or questions in advance.

A version of the computer programme, which was created in 2001, is hosted online for anyone to talk to. (“I feel about beating the turing test in quite convenient way. Nothing original,” said Goostman, when asked how he felt after his success.)

The computer programme claims to be a 13-year-old boy from Odessa in Ukraine.

So far I am withholding judgment.  There is more here, lots of Twitter commentary here.  By the way, here is my 2009 paper with Michelle Dawson on what the Turing test really means (pdf).

A Hong Kong VC fund has just appointed an algorithm to its board.

Deep Knowledge Ventures, a firm that focuses on age-related disease drugs and regenerative medicine projects, says the program, called VITAL, can make investment recommendations about life sciences firms by poring over large amounts of data.

Just like other members of the board, the algorithm gets to vote on whether the firm makes an investment in a specific company or not. The program will be the sixth member of DKV’s board.

There is more here, via Gabriel Puliatti.

Machines vs. lawyers


We all know the market for lawyers is shrinking, but not every part of the legal services sector is in retreat.  John O. McGinnis writes:

The job category that the Bureau of Labor Statistics calls “other legal services”—which includes the use of technology to help perform legal tasks—has already been surging, over 7 percent per year from 1999 to 2010.

Much of the rest of the piece details how various legal functions can be taken over, if only slowly, by smart software.  Here is a bit more:

Until now, computerized legal search has depended on typing in the right specific keywords. If I searched for “boat,” for instance, I couldn’t bring up cases concerning ships, despite their semantic equivalence. If I searched for “assumption of risk,” I wouldn’t find cases that may have employed the same concept without using the same words. IBM’s Watson suggests that such limitations will eventually disappear. Just as Watson deployed pattern recognition to capture concepts rather than mere words, so machine intelligence will exploit pattern recognition to search for semantic meanings and legal concepts. Computers will also use network analysis to assess the strength of precedent by considering the degree to which other cases and briefs rely on certain decisions. Some search engines, such as Ravel Law, already graphically display how much a particular precedent affected the subsequent course of law. As search progresses, then, machine intelligence not only will identify precedents; it will also guide a lawyer’s judgment about where, when, and how to cite them.

The entire piece is here, interesting throughout, via B.A.
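The network-analysis point is easy to make concrete.  Here is a minimal sketch, with an invented citation graph and PageRank standing in for whatever proprietary measures tools like Ravel Law actually use; a case scores highly when many later, themselves well-cited, cases rely on it:

```python
# Toy precedent scoring: edge (a, b) means case `a` cites case `b`.
# Case names and citation edges are invented for illustration.
import networkx as nx

citations = [
    ("Case_2010_A", "Landmark_1990"),
    ("Case_2012_B", "Landmark_1990"),
    ("Case_2012_B", "Case_2010_A"),
    ("Case_2015_C", "Landmark_1990"),
    ("Case_2015_C", "Case_2012_B"),
]

G = nx.DiGraph(citations)

# PageRank rewards nodes with many in-links from well-ranked nodes,
# so heavily relied-upon precedents rise to the top.
influence = nx.pagerank(G, alpha=0.85)

for case, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{case}: {score:.3f}")
```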

In response [to the rise of diagnostic algorithms], NNU [National Nurses United] has launched a major campaign featuring radio ads from coast to coast, video, social media, legislation, rallies, and a call to the public to act, with a simple theme – “when it matters most, insist on a registered nurse.”  The ads were created by North Woods Advertising and produced by Fortaleza Films/Los Angeles.  Additional background can be found at http://www.insistonanrn.org.

Here is the link.  Here is an MP3 of the ad.  Remarkable, do give it a listen.  It has numerous excellent lines such as “Algorithms are simple mathematical formulas that nobody understands.”

For the pointer I thank Eric Jonas.

So, I think the net neutrality issue is very difficult. I think it’s a lose-lose. It’s a good idea in theory because it basically appeals to this very powerful idea of permissionless innovation. But at the same time, I think that a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you’re a large telco right now, you spend on the order of $20 billion a year on capex. You need to know how you’re going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you’re not ever going to get a return on continued network investment — which means you’ll stop investing in the network. And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we’re getting today. So the challenge, I think, is to accommodate both of those goals, which is a very difficult thing to do. And I don’t envy the FCC and the complexity of what they’re trying to do.

The ultimate answer would be if you had three or four or five broadband providers to every house. And I think you actually have the potential for that depending on how things play out from here. You’ve got the cable companies; you’ve got the telcos. Google Fiber is expanding very fast, and I think it’s going to be a very serious nationwide and maybe ultimately worldwide effort. I think that’s going to be a much bigger scale in five years.

So, you can imagine a world in which there are five competitors to every home for broadband: telcos, cable, Google Fiber, mobile carriers and unlicensed spectrum. In that world, net neutrality is a much less central issue, because if you’ve got competition, if one of your providers started to screw with you, you’d just switch to another one of your providers.

The entire interview is interesting, including his discussion of the Obama administration and the possibility of a fragmented internet.  By the way here is Marc on EconTalk with Russ Roberts.

From an IGM/Booth survey:

Question B: Information technology and automation are a central reason why median wages have been stagnant in the US over the past decade, despite rising productivity.

Strongly agree, 0%

Agree, 33%

Uncertain, 29%

Disagree, 18%

Strongly disagree, 2%

No opinion, 11%

There are further results of interest here, via Carl Shulman.

Ezra Klein has a very good post on this topic.  He notes that for The New York Times:

…home page traffic has fallen by half over the last two years. This is true even though the NYT’s home page has been beautifully redesigned, and the NYT’s overall traffic is up.

The value of the company is up as well.  And then:

This is the conventional wisdom across the industry now: the new home page is Facebook and Twitter. The old home page — which is the actual home page — is dying a slow, painful death.

I’m skeptical. The thing about “push media” is someone needs to do the pushing. Someone has to post an article to Twitter or Facebook. That can be the media brand. It can even be the journalists. But when articles work it’s really coming from the readers.

Those readers of course are often the dedicated ones who find the article on your home page.  Ezra makes this additional point in passing, which I think is a neat example of how counterintuitive microeconomics can hold in the world of the internet:

Some of the most committed users are still clicking through the RSS feed (which is one reason Vox maintains a full-text RSS feed).

I would put it this way: the fewer people use RSS, the better content providers can allow RSS to be.  There is less fear of cannibalization, and more hope that easy RSS access will help a post go viral through Facebook and other social media.

When a blog is linked to the reputations of its producers, rather than to advertising revenue, the home page remains all the more important.  That is who you are, and many people realize that, even if they are not reading you at the moment.  I call those “shadow readers.”  For MR, I have long thought that the value of shadow readers is quite high.  (“Tyler and Alex are still writing that blog — great stuff, right?  I don’t get to look at it every day [read: hardly at all].  Why don’t we have them in for a talk?”)  In other words, a shadow reader is someone who hardly reads the blog at all, but has a not totally inaccurate model of what the blog is about.  For Vox or the NYT the value of a shadow reader is lower, although shadow readers still may talk up those sites to potential real readers.  For companies which run lots of events, such as The Atlantic, the value of shadow readers may be high because it helps make them focal even without the daily eyeballs.

What if everyone were a shadow reader?  What is the MRS between real readers and shadow readers?  And which are you?  Can a shadow reader sometimes be better to have?  After all, shadow readers don’t get so upset with you and don’t so much expect that you will write to please them!

And the internet is…


Mario Costeja González, the Spanish national who brought the complaint, said it was “not going to be a big problem for Google” because the ruling only required it to remove irrelevant information.

That is from the FT story; note that the scope and applicability of this new ruling do still remain up for grabs.

It suggests a world where an automated guardian manages our lives, taking away the awkward detail, the boring tasks of daily existence, leaving us with the bits we enjoy, or where we make a contribution. In this world our virtual assistants would quite naturally act as barriers between us and some brands and services.

Great swathes of brand relationships could become automated. Your energy bills and contracts, water, gas, car insurance, home insurance, bank, pension, life assurance, supermarket, home maintenance, transport solutions, IT and entertainment packages; all of these relationships could be managed by your beautiful personal OS.

Brands in these categories could find themselves dealing with the digital butler (unless we, the consumer, step in and press the override button), in which case marketing in these sectors could become programmatic in the truest sense.

It’s entirely possible that the influence of our virtual minders could reach far further. What if we tell our OS that we’ll only ever buy products that meet certain ethical standards; hit certain carbon emission targets or treat their employees in a certain way? Our computer may say no to brands for many different reasons.

There is more on that idea here.
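A minimal sketch of how such a gatekeeper might work, with entirely hypothetical field names and thresholds: the user declares standards once, and the assistant screens every offer against them, transacting only with brands that pass:

```python
# A toy "digital butler": the user sets standards once and the assistant
# screens brands against them.  All fields, names, and numbers are invented.
from dataclasses import dataclass

@dataclass
class Brand:
    name: str
    carbon_kg_per_year: float   # estimated emissions attributable to the service
    fair_labor_certified: bool
    annual_price: float

# User-declared standards the butler must enforce.
MAX_CARBON = 500.0
REQUIRE_FAIR_LABOR = True

def butler_approves(brand: Brand) -> bool:
    """The computer says no unless every user-set standard is met."""
    if brand.carbon_kg_per_year > MAX_CARBON:
        return False
    if REQUIRE_FAIR_LABOR and not brand.fair_labor_certified:
        return False
    return True

offers = [
    Brand("UtilityA", carbon_kg_per_year=420.0, fair_labor_certified=True,
          annual_price=900.0),
    Brand("UtilityB", carbon_kg_per_year=610.0, fair_labor_certified=True,
          annual_price=780.0),
]

# The butler auto-renews with the cheapest approved brand; UtilityB is
# cheaper but fails the carbon cap, so it never even reaches the user.
approved = [b for b in offers if butler_approves(b)]
best = min(approved, key=lambda b: b.annual_price)
print(f"Butler selects: {best.name}")
```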

Eli Dourado has a new piece of note:

In much of the world, the net is not neutral, thanks to companies like Facebook and Google. Facebook Zero is an initiative launched in 2010 to give customers of 50 carriers, mostly in the developing world, access to a lightweight version of Facebook on their WAP-enabled feature phones at no charge. Users can post, like, poke, and comment to their hearts’ content, but if they want to view photos or access non-Facebook sites, they incur the usual data charge. The model has been so successful at growing Facebook adoption in Africa that Google followed suit with a competing offering, Google Free Zone, in 2012. Lest anyone think that this is a cruel ploy by evil, for-profit corporations to trap the poor inside their walled gardens, the non-profit Wikimedia Foundation also copied Facebook’s idea with Wikipedia Zero, to great effect.

…non-neutrality can also help to fund necessary network buildouts on an ongoing basis. By giving access to Facebook, Google, and Wikipedia away as a loss-leader, carriers are serving with their basic tier of service those who can’t afford more, and habituating those who can afford to click beyond the walled garden to using the mobile web. This price discrimination not only increases access but also raises more revenue than a neutral strategy would. Developing-world carriers need that revenue if they ever intend to build the kinds of networks that will support widespread Internet use. Net neutrality, in other words, would not only keep the poorest offline, it would keep investment in poor-country telecom infrastructure down for longer.

A similar, but less stark, dynamic is playing out in rich countries. Anyone who has ever used their Kindle’s included 3G service has benefited from network non-neutrality; after all, you can’t use it to access non-Amazon services. Absent Amazon’s non-neutral arrangement with wireless carriers, you’d have to pay a nontrivial monthly fee to access books via the cellular network, which would mean that most people would forgo cellular and stick to Wi-Fi. Again, we observe a non-neutral arrangement expanding access and saving people money.

Read the whole thing.

Melissa Bell surveys three weeks of Vox and asks what you think.  A few things strike me:

1. One of their innovations, which has occasioned lots of hostility, has been to shift the window of what is considered “reportable as accepted truth.”  An MSM article does not put defenders and opponents of evolutionary theory on the same footing.  Vox presents the workability of a health care mandate, if not quite as something to be taken for granted, then as a question on which a pro-mandate journalistic stance can be treated as a matter of fact.  By no means do I agree with all of their judgments, but I see them as ahead of the curve and outflanking their critics.

2. The site looks great, works great, and they are consistently finding interesting topics to report on, at a higher rate than most better-established MSM outlets.  If I go to the site I will find something new I didn’t know about, every day.  I don’t feel a need to push them into an RSS feed.  By the way, the site looks especially good on an iPad.

3. When I was in fifth grade, I was pulled out of some of the more boring classes and given “SRAs” to work with.  SRAs were color-coded materials laid out on a series of cards and boxed tabs, which could be manipulated and re-ordered if the student so chose, and which allowed progression to increasing levels of difficulty.  Vox.com reminds me of SRAs, and of some of the instructional theories of the 1960s, although of course it is on the web and thus has a superior presentation.  I preferred SRAs to class, but anything I like is to be considered suspect from a broader market point of view.  By the way, IBM eventually sold the SRA brand name and content to McGraw-Hill.

4. With any site you have to ask where the “pandering element” comes in.  With MR the TC pandering is to yours truly — the unpaid author — and it comes in the form of puffins, Japan, movie reviews, and obscure Straussian references, among other things which make me giggle.  With Vox the pandering is highly factual and tonally neutral coverage of some hot button issues, such as the racism of Donald Sterling or telling your parents your true profession (porn star).  This strategy likely will succeed, although those articles tend not to interest me personally.  I think they will do pretty well on Facebook and other social media sites.

5. I am most worried about a certain uniformity of voice across the articles.  Think of the headings, photos, and prose style as geared to put the links high in eventual Google searches.  But readers miss the presence of distinctive voices, including Matt and Ezra themselves, who of course have served this role in the past.  I’ve liked all of Matt’s articles for Vox so far, but I miss hearing Matt.  You know, the Matt of mattyglesias.typepad.com and wisecracks about the Wizards.  Slate and Salon are full of voices, and they have found this to be a successful formula, at least relative to the alternatives if not always in terms of net revenue.

I’ve liked Joseph Stromberg’s science coverage, and been impressed by his depth, but he does not (yet?) ring as a distinct voice in my mind.  I don’t even have an illusory picture of what he might be like, and I wonder if their writers can continue to attract readers with such a relatively low level of vividness.  (On the other hand, this limits the bargaining power of the writers!)  Yet can the writers be given greater voice while keeping the Google maximization strategy in place?

Over time this uniformity of tone also will make it hard for them to recruit or keep top writers or writers looking for a path to the top.  And every outlet needs a few of these writers, even if many of the pieces are to be more cookie-cutter in presentation.

6. Costs will rise when they send people outside of the office to do stories, as eventually they must.

7. I am still a pessimist about the long-term economics of media, and I remain unconvinced they have solved the key problem of a weak advertising market for on-line material.  Still, I am keen to see how they will extend the site.

The citation is here:

Matthew Gentzkow has made fundamental contributions to our understanding of the economic forces driving the creation of media products, the changing nature and role of media in the digital environment, and the effect of media on education and civic engagement.

Matt is at the Booth School of Business at the University of Chicago and there is much more at that link.  Here is Matt at scholar.google.com.  Matt’s well-known paper on ideological segregation, with Jesse Shapiro, is here (pdf).  Our class on the economics of the media at MRUniversity.com considers Matt’s work, for instance see this video on ideological segregation.

Here is A Fine Theorem on the Bayesian persuasion paper.

An excellent choice, of course, and hearty congratulations are in order.