Web/Tech

This is for call center operators:

The results are surprising. Some are quirky: employees who are members of one or two social networks were found to stay in their job for longer than those who belonged to four or more social networks (Xerox recruitment drives at gaming conventions were subsequently cancelled). Some findings, however, were much more fundamental: prior work experience in a similar role was not found to be a predictor of success.

“It actually opens up doors for people who would never have gotten to interview based on their CV,” says Ms Morse. Some managers initially questioned why new recruits were appearing without any prior relevant experience. As time went on, attrition rates in some call centres fell by 20 per cent and managers no longer quibbled. “I don’t know why this works,” admits Ms Morse, “I just know it works.”

The rest of the Tim Smedley FT story is here, via Peter Sahui.

There is a new NBER working paper, “A Comparison of Programming Languages in Economics,” by S. Borağan Aruoba and Jesús Fernández-Villaverde. Here is the abstract:

We solve the stochastic neoclassical growth model, the workhorse of modern macroeconomics, using C++11, Fortran 2008, Java, Julia, Python, Matlab, Mathematica, and R. We implement the same algorithm, value function iteration with grid search, in each of the languages. We report the execution times of the codes in a Mac and in a Windows computer and comment on the strength and weakness of each language.

Here are their results:

1. C++ and Fortran are still considerably faster than any other alternative, although one needs to be careful with the choice of compiler.

2. C++ compilers have advanced enough that, contrary to the situation in the 1990s and some folk wisdom, C++ code runs slightly faster (5-7 percent) than Fortran code.

3. Julia, with its just-in-time compiler, delivers outstanding performance. Execution speed is only between 2.64 and 2.70 times the execution speed of the best C++ compiler.

4. Baseline Python is slow. Using the PyPy implementation, it runs around 44 times slower than in C++. Using the default CPython interpreter, the code runs between 155 and 269 times slower than in C++.

5. However, a relatively small rewriting of the code and the use of Numba (a just-in-time compiler for Python that uses decorators) dramatically improves Python’s performance: the decorated code runs only between 1.57 and 1.62 times slower than the best C++ executable.

6. Matlab is between 9 and 11 times slower than the best C++ executable. When combined with Mex files, though, the difference is only 1.24 to 1.64 times.

7. R runs between 500 and 700 times slower than C++. If the code is compiled, it is between 240 and 340 times slower.

8. Mathematica can deliver excellent speed, about four times slower than C++, but only after a considerable rewriting of the code to take advantage of the peculiarities of the language. The baseline version of our algorithm in Mathematica is much slower, even after taking advantage of Mathematica compilation.
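
For readers who want to see what is actually being timed, here is a minimal Python sketch of the algorithm the paper benchmarks, value function iteration with grid search for the stochastic neoclassical growth model. The parameter values, capital grid, and productivity process below are illustrative assumptions of mine, not the paper’s calibration.

```python
# Minimal sketch of value function iteration with grid search for the
# stochastic neoclassical growth model. Parameters, grids, and the Markov
# chain are illustrative assumptions, not the paper's calibration.
import numpy as np

alpha, beta, delta = 0.33, 0.95, 0.1      # capital share, discount factor, depreciation
z_grid = np.array([0.9, 1.0, 1.1])        # productivity states (assumed)
P = np.array([[0.90, 0.10, 0.00],         # Markov transition matrix (assumed)
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
nk, nz = 200, len(z_grid)
k_grid = np.linspace(0.5, 10.0, nk)       # grid for capital today and tomorrow

V = np.zeros((nk, nz))                    # initial guess for the value function
diff, tol = 1.0, 1e-6
while diff > tol:
    EV = V @ P.T                          # EV[ik, iz] = E[V(k_ik, z') | z_iz]
    V_new = np.empty_like(V)
    for iz, z in enumerate(z_grid):
        for ik, k in enumerate(k_grid):
            c = z * k**alpha + (1 - delta) * k - k_grid   # consumption for each k' choice
            u = np.full(nk, -1e10)                        # penalize infeasible choices
            u[c > 0] = np.log(c[c > 0])                   # log utility
            V_new[ik, iz] = np.max(u + beta * EV[:, iz])  # grid search over k'
    diff = np.abs(V_new - V).max()
    V = V_new

print("converged; V at the middle capital point, z = 1.0:", V[nk // 2, 1])
```

The work is concentrated in the nested loops over states with a grid search in the innermost step, which is exactly the kind of code that compiled languages and just-in-time compilers accelerate and that interpreted loops execute slowly.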

There are ungated copies and some discussion here.

It is excellent throughout, here is one good sentence:

The funny thing about Piketty is that he has a lot more faith in returns on invested capital than any professional investor I’ve ever met.

Here is another:

The result of all that is the effective death of the IPO. The number of public companies in the US has dropped dramatically. And then correspondingly, growth companies go public much later. Microsoft went out at under $1 billion, Facebook went out at $80 billion. Gains from the growth accrue to the private investor, not the public investor…

Most American retirement savings is invested in the public stock market. Most Americans can’t invest in private companies and most Americans can’t invest in venture capital and private equity funds. They’re actually prohibited from doing so by the SEC. If you both prohibit them from investing in private growth and wire the market so they can’t get into public growth, then you can’t be invested in growth. That raises the societal question of how are we going to pay for retirements. That’s the question that needs to be asked that nobody asks because it’s too scary.

The full interview is here.

This trend is accelerating:

When Jim Sullivan began working as a waiter at a Dallas restaurant a few years ago, he was being watched — not by the prying eyes of a human boss, but by intelligent software.

The digital sentinel, he was told, tracked every waiter, every ticket, and every dish and drink, looking for patterns that might suggest employee theft. But that torrent of detailed information, parsed another way, cast a computer-generated spotlight on the most productive workers.

Mr. Sullivan’s data shone brightly. And when his employer opened a fourth restaurant in the Dallas area in 2012, Mr. Sullivan was named the manager — a winner in the increasingly quantified world of work.

Here is some of what goes on behind the scenes:

Ben Waber is chief executive of Sociometric Solutions, a start-up that grew out of his doctoral research at M.I.T.’s Human Dynamics Laboratory, which conducts research in the new technologies. Sociometric Solutions advises companies using sensor-rich ID badges worn by employees. These sociometric badges, equipped with two microphones, a location sensor and an accelerometer, monitor the communications behavior of individuals — tone of voice, posture and body language, as well as who spoke to whom for how long.

Sociometric Solutions is already working with 20 companies in the banking, technology, pharmaceutical and health care industries, involving thousands of employees. The workers must opt in to have their data collected. Mr. Waber’s company signs a contract with each one guaranteeing that no individual data is given to the employer (only aggregate statistics) and that no conversations are recorded.

The article by Steve Lohr is here.

In Average is Over I wrote that future jobs will require good “people skills” all the more.  There is a new example of this from Solothurn, Switzerland, where the town is searching for a full-time hermit, to live of course in its hermitage:

Solothurn has updated the job description. “Along with acting as caretaker and sacristan, responsibilities include interaction with the many visitors,” the ad warns potential applicants.

“There’s a bit of a discrepancy between the job title of hermit and the fact he or she has to deal with throngs of visitors,” says Sergio Wyniger, the head of Solothurn’s city council. So far, the city has received 119 applications and expects to make a decision by next week.

The job of a hermit isn’t what it used to be. Tourists can easily reach once-secluded spots and modern technology makes it harder to escape friends and relatives—or strangers looking for advice on how to navigate life’s challenges. Today, many hermits live in city apartments or suburban row houses, often relying on the Internet to make a living or order groceries.

…On top of keeping the gorge and adjacent chapels clean and tidy, the new hermit will have to help out with weddings and baptisms and dole out counsel for visitors suffering heartbreak or family trouble. In return, the city council will pay him or her 1,000 Swiss francs ($1,115) a month, along with free lodging in the wood-shingled hermitage. The hermit works for and is paid by the city of Solothurn.

Perhaps someone should write a book on how the institution of hermit is evolving:

“Hermits usually have a mobile phone, because they can switch it off for prayers,” says Mr. Turina, who wrote his Ph.D. thesis on Catholic hermits in Italy.

The article is here.

Here is a new paper by Christin, Egelman, Vidas, and Grossklags, entitled “It’s All About the Benjamins”:

We examine the cost for an attacker to pay users to execute arbitrary code—potentially malware. We asked users at home to download and run an executable we wrote without being told what it did and without any way of knowing it was harmless. Each week, we increased the payment amount. Our goal was to examine whether users would ignore common security advice—not to run untrusted executables—if there was a direct incentive, and how much this incentive would need to be. We observed that for payments as low as $0.01, 22% of the people who viewed the task ultimately ran our executable. Once increased to $1.00, this proportion increased to 43%. We show that as the price increased, more and more users who understood the risks ultimately ran the code. We conclude that users are generally unopposed to running programs of unknown provenance, so long as their incentives exceed their inconvenience.

The article is here (pdf); for the pointer I thank Bruce Schneier.

This new paper by Tom Blake, Steven Tadelis, and Chris Nosko is not entirely reassuring for the future of journalism, but it confirms what I have long suspected:

Internet advertising has been the fastest growing advertising channel in recent years with paid search ads comprising the bulk of this revenue. We present results from a series of large scale field experiments done at eBay that were designed to measure the causal effectiveness of paid search ads. Because search clicks and purchase behavior are correlated, we show that returns from paid search are a fraction of conventional non-experimental estimates. As an extreme case, we show that brand-keyword ads have no measurable short-term benefits. For non-brand keywords we find that new and infrequent users are positively influenced by ads but that more frequent users whose purchasing behavior is not influenced by ads account for most of the advertising expenses, resulting in average returns that are negative.
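
To see why that correlation matters, here is a toy Python simulation, my own illustration rather than the paper’s data or method: users with high purchase intent both click ads more and spend more anyway, so a naive comparison of clickers versus non-clickers overstates the causal lift that a randomized ads-on/ads-off comparison recovers.

```python
# Toy simulation (illustrative assumptions, not the paper's data or method):
# why observational estimates of paid-search returns exceed experimental ones.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

intent = rng.gamma(shape=2.0, scale=10.0, size=n)             # latent purchase intent ($)
ads_on = rng.random(n) < 0.5                                   # experiment: ads shown to half
clicks = ads_on & (rng.random(n) < np.clip(intent / 40, 0, 1)) # high-intent users click more

true_lift = 1.0                                                # assumed causal effect of a click ($)
spend = intent + true_lift * clicks + rng.normal(0, 5, n)

naive = spend[clicks].mean() - spend[~clicks].mean()           # clickers vs. non-clickers
experimental = spend[ads_on].mean() - spend[~ads_on].mean()    # ads on vs. ads off
per_click = experimental / clicks[ads_on].mean()               # implied causal lift per click

print(f"naive 'return' per click:      {naive:.2f}")
print(f"experimental lift per user:    {experimental:.2f}")
print(f"implied causal lift per click: {per_click:.2f}")
```

With these assumptions the naive estimate comes out many times larger than the experimentally implied lift per click, which is the qualitative gap the paper documents.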

The NBER version is here, an ungated version is here.

The NYTimes has a good piece today on the increasing use of noncompete clauses, clauses that say that if you leave a firm you cannot work for a competitor, typically for a year or more.

Noncompete clauses are now appearing in far-ranging fields beyond the worlds of technology, sales and corporations with tightly held secrets, where the curbs have traditionally been used. From event planners to chefs to investment fund managers to yoga instructors, employees are increasingly required to sign agreements that prohibit them from working for a company’s rivals.

Noncompete agreements (NCAs) are dangerous in my view because they put firms into a prisoner’s dilemma: noncompetes benefit individual firms but harm industries by reducing innovation.

Today we all know about Silicon Valley but in the 1950s and 1960s the place for technology was Route 128 in Massachusetts, which Business Week called “the Magic Semicircle”. The magic semicircle contained technological leaders like DEC and Raytheon and intellectual powerhouses like Harvard and MIT – this was at a time when Silicon Valley was mostly fruit trees.

When William Shockley left Bell Labs for the Valley it was not considered a promising move. And indeed something strange happened. Shockley wasn’t a very nice person; he couldn’t get any of his former colleagues to come work for him, and within a year of starting his firm in Mountain View, eight of Shockley’s researchers, who called themselves the “traitorous eight,” resigned. The traitorous eight started Fairchild Semiconductor. Two of them, Robert Noyce and Gordon E. Moore, later left Fairchild to form Intel Corporation. Other people leaving Fairchild Semiconductor started National Semiconductor and Advanced Micro Devices. So it was in this branching-off process of new-firm creation that Silicon Valley was born.

Now here is the point: if Shockley had started his firm in Massachusetts or in pretty much any other state, the traitorous eight probably would not have left to start their own firm, because they would have signed a standard non-compete agreement prohibiting them from competing with their former employer for 18 to 24 months. In California, however, the courts have consistently refused to enforce non-compete agreements. An employee who leaves one company can join a new one and start work the next day, regardless of any agreement.

Silicon Valley could not operate if non-compete agreements were enforced. Silicon Valley has a hyper-mobile workforce. Moreover, it is precisely in the circulation of workers that Silicon Valley has one of its advantages: the diffusion of new ideas. The key to Silicon Valley and much innovation today is the diffusion, the combination, and the integration of different sorts of knowledge, and worker mobility has been a big part of this. Not just worker mobility between firms in Silicon Valley but also immigrants, circulation between different countries, university-firm partnerships and so forth.

Firms that come to Silicon Valley know that they cannot use NCAs to protect their innovations, but they come anyway because the opportunity to learn from other people exceeds the costs of other people learning from you. Thus, worker mobility and the inability to protect IP by restricting mobility are bad for an individual firm but good for the industry as a whole, good for innovation, good for workers and good for consumers.

(Drawn from a talk I gave at a Google Big Tent event in Korea.) Hat tip to Loweeel in comments for some edits.

A programme that convinced humans that it was a 13-year-old boy has become the first computer ever to pass the Turing Test. The test — which requires that computers are indistinguishable from humans — is considered a landmark in the development of artificial intelligence, but academics have warned that the technology could be used for cybercrime.

…Eugene Goostman, a computer programme made by a team based in Russia, succeeded in a test conducted at the Royal Society in London. It convinced 33 per cent of the judges that it was human, said academics at the University of Reading, which organised the test.

It is thought to be the first computer to pass the iconic test. Though there have been claims that other programmes have had successes, those involved topics or questions set in advance.

A version of the computer programme, which was created in 2001, is hosted online for anyone to talk to. (“I feel about beating the turing test in quite convenient way. Nothing original,” said Goostman, when asked how he felt after his success.)

The computer programme claims to be a 13-year-old boy from Odessa in Ukraine.

So far I am withholding judgment.  There is more here, lots of Twitter commentary here.  By the way, here is my 2009 paper with Michelle Dawson on what the Turing test really means (pdf).

A Hong Kong VC fund has just appointed an algorithm to its board.

Deep Knowledge Ventures, a firm that focuses on age-related disease drugs and regenerative medicine projects, says the program, called VITAL, can make investment recommendations about life sciences firms by poring over large amounts of data.

Just like other members of the board, the algorithm gets to vote on whether the firm makes an investment in a specific company or not. The program will be the sixth member of DKV’s board.

There is more here, via Gabriel Puliatti.

Machines vs. lawyers


We all know the market for lawyers is shrinking, but not every part of the legal services sector is in retreat.  John O. McGinnis writes:

The job category that the Bureau of Labor Statistics calls “other legal services”—which includes the use of technology to help perform legal tasks—has already been surging, over 7 percent per year from 1999 to 2010.

Much of the rest of the piece details how various legal functions can be taken over, if only slowly, by smart software.  Here is a bit more:

Until now, computerized legal search has depended on typing in the right specific keywords. If I searched for “boat,” for instance, I couldn’t bring up cases concerning ships, despite their semantic equivalence. If I searched for “assumption of risk,” I wouldn’t find cases that may have employed the same concept without using the same words. IBM’s Watson suggests that such limitations will eventually disappear. Just as Watson deployed pattern recognition to capture concepts rather than mere words, so machine intelligence will exploit pattern recognition to search for semantic meanings and legal concepts. Computers will also use network analysis to assess the strength of precedent by considering the degree to which other cases and briefs rely on certain decisions. Some search engines, such as Ravel Law, already graphically display how much a particular precedent affected the subsequent course of law. As search progresses, then, machine intelligence not only will identify precedents; it will also guide a lawyer’s judgment about where, when, and how to cite them.
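
One concrete way to picture the network-analysis part is PageRank over a citation graph, which assigns more weight to decisions that other cases lean on heavily. The sketch below uses made-up cases and is only a stand-in for whatever scoring tools like Ravel Law actually use.

```python
# Sketch of "network analysis to assess the strength of precedent": rank cases
# by how much other cases rely on them. The cases and citations are made up.
import networkx as nx

citations = [               # (citing case, cited case) -- hypothetical
    ("Case A", "Case D"),
    ("Case B", "Case D"),
    ("Case C", "Case D"),
    ("Case C", "Case B"),
    ("Case E", "Case B"),
]

G = nx.DiGraph(citations)              # edge u -> v means "u cites v"
scores = nx.pagerank(G, alpha=0.85)    # weight accrues to heavily cited decisions

for case, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{case}: {score:.3f}")
```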

The entire piece is here, interesting throughout, via B.A.

In response [to the rise of diagnostic algorithms], NNU [National Nurses United] has launched a major campaign featuring radio ads from coast to coast, video, social media, legislation, rallies, and a call to the public to act, with a simple theme – “when it matters most, insist on a registered nurse.”  The ads were created by North Woods Advertising and produced by Fortaleza Films/Los Angeles.  Additional background can be found at http://www.insistonanrn.org.

Here is the link.  Here is an MP3 of the ad.  Remarkable, do give it a listen.  It has numerous excellent lines such as “Algorithms are simple mathematical formulas that nobody understands.”

For the pointer I thank Eric Jonas.

So, I think the net neutrality issue is very difficult. I think it’s a lose-lose. It’s a good idea in theory because it basically appeals to this very powerful idea of permissionless innovation. But at the same time, I think that a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you’re a large telco right now, you spend on the order of $20 billion a year on capex. You need to know how you’re going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you’re not ever going to get a return on continued network investment — which means you’ll stop investing in the network. And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we’re getting today. So the challenge, I think, is to accommodate both of those goals, which is a very difficult thing to do. And I don’t envy the FCC and the complexity of what they’re trying to do.

The ultimate answer would be if you had three or four or five broadband providers to every house. And I think you actually have the potential for that depending on how things play out from here. You’ve got the cable companies; you’ve got the telcos. Google Fiber is expanding very fast, and I think it’s going to be a very serious nationwide and maybe ultimately worldwide effort. I think that’s going to be a much bigger scale in five years.

So, you can imagine a world in which there are five competitors to every home for broadband: telcos, cable, Google Fiber, mobile carriers and unlicensed spectrum. In that world, net neutrality is a much less central issue, because if you’ve got competition, if one of your providers started to screw with you, you’d just switch to another one of your providers.

The entire interview is interesting, including his discussion of the Obama administration and the possibility of a fragmented internet.  By the way here is Marc on EconTalk with Russ Roberts.

From an IGM/Booth survey:

Question B: Information technology and automation are a central reason why median wages have been stagnant in the US over the past decade, despite rising productivity.

Strongly agree, 0%
Agree, 33%
Uncertain, 29%
Disagree, 18%
Strongly disagree, 2%
No opinion, 11%

There are further results of interest here, via Carl Shulman.

Ezra Klein has a very good post on this topic.  He notes that for The New York Times:

…home page traffic has fallen by half over the last two years. This is true even though the NYT’s home page has been beautifully redesigned, and the NYT’s overall traffic is up.

The value of the company is up as well.  And then:

This is the conventional wisdom across the industry now: the new home page is Facebook and Twitter. The old home page — which is the actual home page — is dying a slow, painful death.

I’m skeptical. The thing about “push media” is someone needs to do the pushing. Someone has to post an article to Twitter or Facebook. That can be the media brand. It can even be the journalists. But when articles work it’s really coming from the readers.

Those readers of course are often the dedicated ones who find the article on your home page.  Ezra makes this additional point in passing, which I think is a neat example of how counterintuitive microeconomics can hold in the world of the internet:

Some of the most committed users are still clicking through the RSS feed (which is one reason Vox maintains a full-text RSS feed).

I would put it this way: the fewer people use RSS, the better content providers can allow RSS to be.  There is less fear of cannibalization, and more hope that easy RSS access will help a post go viral through Facebook and other social media.

When a blog is linked to the reputations of its producers, rather than to advertising revenue, the home page remains all the more important.  That is who you are, and many people realize that, even if they are not reading you at the moment.  I call those “shadow readers.”  For MR, I have long thought that the value of shadow readers is quite high.  (“Tyler and Alex are still writing that blog — great stuff, right?  I don’t get to look at it every day [read: hardly at all].  Why don’t we have them in for a talk?”)  In other words, a shadow reader is someone who hardly reads the blog at all, but has a not totally inaccurate model of what the blog is about.  For Vox or the NYT the value of a shadow reader is lower, although shadow readers still may talk up those sites to potential real readers.  For companies which run lots of events, such as The Atlantic, the value of shadow readers may be high because it helps make them focal even without the daily eyeballs.

What if everyone were a shadow reader?  What is the MRS between real readers and shadow readers?  And which are you?  Can a shadow reader sometimes be better to have?  After all, shadow readers don’t get so upset with you and don’t so much expect that you will write to please them!