Category: Web/Tech

My Conversation with Balaji Srinivasan

Here is the transcript and audio, and this is the intro:

Marc Andreessen has described Balaji as the man who has more good ideas per minute than anyone else in the Bay Area. He is the CEO of Earn.com, where we’re sitting right now, a board partner at Andreessen Horowitz, formerly a general partner. He has cofounded the company Counsyl in addition to many other achievements.

Here is one excerpt:

COWEN: Why is the venture capital model so geographically clustered? So much of it is out here in the Bay Area. It’s spreading to other parts of the country. Around the world, you see Israel, in some ways, as being number two, per capita number one. But that’s a very small country. Why is it so hard to get venture capital off the ground in so many areas?

SRINIVASAN: That’s actually now changed with the advent of ICOs and Ethereum and crypto. Historically, the reason for it was companies would come to Sand Hill Road. One maybe slightly less appreciated aspect is, if you come to Sand Hill Road and you get VC financing, the VC who invests in your company typically takes a board seat. A VC does not want to fly 6,000 miles for every board seat if they’ve got 10 board seats and four board meetings a year per company.

What a VC would like in general, all else being equal, is for you to be within driving distance. Not only does that VC like it, so does the next VC in the B round and the next VC in the C round. That factor is actually one of the big things that constrains people to the Bay Area, is VC driving distance, [laughs] because VCs don’t want to do investments that are an entire world away.

With the advent of Ethereum and ICOs, we have finally begun to decentralize the last piece, which was funding. Now, that regulatory environment needs to be worked out. It’s going to be worked out in different ways in different countries.

But the old era where you had to come to Sand Hill to get your company funded and then go to Wall Street to exit is over. That’s something where it’s going to increasingly decentralize. It already has decentralized worldwide, and that’s going to continue.

COWEN: With or without a board seat, doesn’t funding require a face-to-face relationship? It’s common for VC companies to even want the people they’re funding to move their endeavor to the Bay Area in some way, not only for the board meeting. They want to spend time with those people.

We’re doing this podcast face to face. We could have done it over Skype. There’s something significant about actually having an emotionally vivid connection with someone right there in the room. How much can we get around that as a basic constraint?

And here is another:

COWEN: Right now, I pay financial fees to my mutual funds, to Merrill Lynch, all over. Anytime I save money, I’m paying a fee to someone. Which of those fees will go away?

SRINIVASAN: Good question. Maybe all of them.

And:

COWEN: Drones?

SRINIVASAN: Underrated.

COWEN: Why? What will they do that we haven’t thought of?

SRINIVASAN: Construction. There’s different kinds of drones. They’re not just flying drones. There’s swimming drones and there’s walking drones and so on.

Like the example I mentioned where you can teleport into a robot and then control that, Skype into a robot and control that on the other side of the world. That’s going to be something where maybe you’re going to have it in drone mode so it walks to the destination. You’ll be asleep and then you wake up and it’s at the destination.

Drones are going to be a very big deal. There’s this interesting movie called Surrogates, which actually talks about what a really big drone/telepresence future would look like. People never leave their homes because, instead, they just Skype into a really good-looking drone/telepresent version of themselves, and they walk around in that.

If they’re hit by a car, it doesn’t matter because they can just rejuvenate and create a new one. I think drones are very, very underrated in terms of what they’re going to do. 

Do read or listen to the whole thing.

Those new service sector jobs, China tech edition

Ms. Shen is a “programmer motivator,” as they are known in China. Part psychologist, part cheerleader, the women are hired to chat up and calm stressed-out coders. The jobs are proliferating in a society that largely adheres to gender stereotypes and believes that male programmers are “zhai,” or nerds who have no social lives.

…He said he was open to the idea of male programmer motivators but somewhat skeptical. “A man chatting with another man, it’s like going out on a date with a guy,” Mr. Feng said. “A little awkward, isn’t it?”

Ms. Zhang, the human resources executive who was part of the panel that hired Ms. Shen, stressed that it is important for a programmer motivator to look good. She said the applicants needed to have “five facial features that must definitely be in their proper order” and speak in a gentle way.

They should also have a contagious laugh, be able to apply simple makeup and be taller than 5 feet 2 inches.

…Ms. Shen said that she does not consider her job to be sexist.

Here is the full story from Sui-Lee Wee at the NYT.

Subliminal education?

The idea of inserting “social-psychological interventions” into learning software is gaining steam, raising both hopes and fears about the ways the ed-tech industry might seek to capitalize on recent research into the impact of students’ mindsets on their learning.

…Publishing giant Pearson recently conducted an experiment involving more than 9,000 unwitting students at 165 different U.S. colleges and universities. Without seeking prior consent from participating institutions or individuals, the company embedded “growth-mindset” and other psychological messaging into some versions of one of its commercial learning software programs. The company then randomly assigned different colleges to use different versions of that software, tracking whether students who received the messages attempted and completed more problems than their counterparts at other institutions.

The results included some modest signs that some such messaging can increase students’ persistence when they start a problem, then run into difficulty. That’s likely to bolster growth-mindset proponents, who say it’s important to encourage students to view intelligence as something that can change with practice and hard work.

But the bigger takeaway, according to Pearson’s AERA paper, is the possibility of leveraging commercial educational software for new research into the emerging science around students’ attitudes, beliefs, and ways of thinking about themselves.

Here is more, via Phil Hill.  Is all education subliminal education?

The value of Facebook and other digital services

Women seem to value Facebook more than men do.

Older consumers value Facebook more.

Education and US region do not seem to be significant.

The median compensation for giving up Facebook is in the range of $40 to $50 a month, based mostly on surveys, though some people do actually have to give up Facebook.

I find it hard to believe the survey-based estimate that search engines are worth over $17k a year.

Email is worth $8.4k, digital maps $3.6k, and video streaming $1.1k, again all at the median and based on surveys.  Personally, I value digital maps at close to zero, mostly because I do not know how to use them.

That is all from a new NBER paper by Erik Brynjolfsson, Felix Eggers, and Avinash Gannamaneni.

The Facebook Trials: It’s Not “Our” Data

Facebook, Google and other tech companies are accused of stealing our data or at least of using it without our permission to become extraordinarily rich. Now is the time, say the critics, to stand up and take back our data. Ours, ours, ours.

In this way of thinking, our data is like our lawnmower and Facebook is a pushy neighbor who saw that our garage door was open, took our lawnmower, made a quick buck mowing people’s lawns, and now refuses to give our lawnmower back. Take back our lawnmower!

The reality is far different.

What could be more ours than our friends? Yet I have hundreds of friends on Facebook, most of whom I don’t know well and have never met. But my Facebook friends are friends. We share common interests and, most of the time, I’m happy to see what they are thinking and doing and I’m pleased when they show interest in what I’m up to. If, before Facebook existed, I had been asked to list “my friends,” I would have had a hard time naming ten friends, let alone hundreds. My Facebook friends didn’t exist before Facebook. My Facebook friendships are not simply my data—they are a unique co-creation of myself, my friends, and, yes, Facebook.

Some of my Facebook friends are family, but even here the relationships are not simply mine but a product of myself and Facebook. My cousin who lives in Dubai, for example, is my cousin whether Facebook exists or not, but I haven’t seen him in over twenty years, have never written him a letter, have never in that time shared a phone call. Nevertheless, I can tell you about the bike accident, the broken arm, the X-ray with more than a dozen screws—I know about all of this only because of Facebook. The relationship with my cousin, therefore, isn’t simply mine, it’s a joint creation of myself, my cousin and Facebook.

Facebook hasn’t taken our data—they have created it.

Facebook and Google have made billions in profits, but it’s utterly false to think that we, the users, have not been compensated. Have you checked the price of a Facebook post or a Google search recently? More than 2 billion people use Facebook every month, and none are charged. Google performs more than 3.5 billion searches every day, all for free. The total surplus created by Facebook and Google far exceeds their profits.

Moreover, it’s the prospect of profits that has led Facebook and Google to invest in the technology and tools that have created “our data.” The more difficult it is to profit from data, the less data there will be. Proposals to require data to be “portable” miss this important point. Try making your Facebook graph portable before joining Facebook.

None of this means that we should not be concerned with how data, ours, theirs, or otherwise, is used. I don’t worry too much about what Facebook and Google know about me. Mostly the tech companies want to figure out what I want to buy. Not such a bad deal even if the way that ads follow me around the world is at times a bit disconcerting. I do worry that they have not adequately enforced contractual restrictions on third-party users of our data. Ironically, it was letting non-profits use Facebook’s data that caused problems.

I also worry about big brother’s use of big data. Sooner or later, what Facebook and Google know, the government will know. That alone is good reason to think carefully about how much information we allow the tech companies to know and to store. But let’s get over the idea that it’s “our data.” Not only isn’t it our data, it never was.

Privacy sentences to ponder

The increasing difficulty in managing one’s online personal data leads to individuals feeling a loss of control. Additionally, repeated consumer data breaches have given people a sense of futility, ultimately making them weary of having to think about online privacy. This phenomenon is called “privacy fatigue.” Although privacy fatigue is prevalent and has been discussed by scholars, there is little empirical research on the phenomenon. A new study published in the journal Computers and Human Behavior aimed not only to conceptualize privacy fatigue but also to examine its role in online privacy behavior. Based on literature on burnout, we developed measurement items for privacy fatigue, which has two key dimensions: emotional exhaustion and cynicism. Data analyzed from a survey of 324 Internet users showed that privacy fatigue has a stronger impact on privacy behavior than privacy concerns do, although the latter is widely regarded as the dominant factor in explaining online privacy behavior.

Emphasis added by me.  That is by Hanbyul Choi, Jonghwa Park, and Yoonhyuk Jung, via Michelle Dawson.

The fox

Sen. Leahy has a Facebook pixel, invisible to users, that gathers data on Facebook users who visit the site. (For a quick primer on what “pixels” do, visit Facebook’s resource guide on the data-gathering tool.)

That’s right, if you visit Senator Leahy’s campaign website, it’s likely your data, including your demographics and what pages you looked at on the site, have been placed into a custom data targeting audience by Leahy’s team.
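Mechanically, a tracking pixel is just a tiny (often 1×1) image whose request URL carries identifiers back to a third-party server when the page loads. Here is a minimal sketch in Python of how such a request URL gets assembled; every parameter name and domain below is invented for illustration and is not Facebook’s actual pixel API:

```python
from urllib.parse import urlencode

# Schematic only: the parameter names ("id", "ev", "dl", "uid") are
# hypothetical stand-ins, not the real Facebook pixel protocol.
def pixel_url(base, pixel_id, event, page, user_cookie):
    """Build the kind of image-request URL a tracking pixel fires."""
    query = urlencode({
        "id": pixel_id,      # which advertiser's pixel this is
        "ev": event,         # event type, e.g. a page view
        "dl": page,          # the page the visitor is reading
        "uid": user_cookie,  # identifier tying the visit to a profile
    })
    return f"{base}?{query}"

url = pixel_url("https://tracker.example/tr", "123456",
                "PageView", "https://campaign.example/issues", "abc123")
print(url)
```

The point is that the “image” itself is irrelevant; the query string is the payload, which is why the pixel can record which pages a visitor viewed without the visitor seeing anything at all.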

Here is more, via @tedfrank.  You will note that Leahy was one of the interlocutors who confronted Zuckerberg over the privacy issue.

The Chinese corporate apology

When does a corporate apology become a political self-confession, or jiantao (检讨), an act of submission not to social mores and concerns, but to those in power? The line can certainly blur in China. But the public apology today from Zhang Yiming (张一鸣), the founder and CEO of one of China’s leading tech-based news and information platforms, crosses deep into the territory of political abjection.

Zhang’s apology, posted to WeChat at around 4 AM Beijing time, addressed recent criticism aired through the state-run China Central Television and other official media of Jinri Toutiao, or “Toutiao” — a platform for content creation and aggregation that makes use of algorithms to customize user experience. Critical official coverage of alleged content violations on the platform was followed by a notice on April 4 from the State Administration of Press, Publication, Radio, Film, and Television (SAPPRFT), in which the agency said Toutiao and another service providing live-streaming, Kuaishou, would be subject to “rectification measures.”

Read through Zhang’s apology and it is quickly apparent that this is a mea culpa made under extreme political pressure, in which Zhang, an engineer by background, ticks the necessary ideological boxes to signal his intention to fall into line.

At one point, Zhang confesses that the “deep-level causes” of the problems at Toutiao included a weak understanding and implementation of the “four consciousnesses.” This is a unique Xi Jinping buzzword, introduced in January 2016, that refers to 1) “political consciousness” (政治意识), namely primary consideration of political priorities when addressing issues, 2) consciousness of the overall situation (大局意识), or of the overarching priorities of the Party and government, 3) “core consciousness” (核心意识), meaning to follow and protect Xi Jinping as the leadership “core,” and 4) “integrity consciousness” (看齐意识), referring to the need to fall in line with the Party. Next, Zhang mentions the service’s failure to respect “socialist core values,” and its “deviation from public opinion guidance”; this latter term is a Party buzzword (dating back to the 1989 crackdown on the Tiananmen Square protests) synonymous with information and press controls as a means of maintaining Party dominance.

Zhang also explicitly references Xi Jinping’s notion of the “New Era,” and writes: “All along, we have placed excessive emphasis on the role of technology, and we have not acknowledged that technology must be led by the socialist core value system, broadcasting positive energy, suiting the demands of the era, and respecting common convention.”

In the list of the company’s remedies, there is even a mention of the need to promote more content from “authoritative media,” a codeword for Party-controlled media, which suggests once again that the leadership has been unhappy with the idea of algorithms that wall users off from official messaging if they show no interest in such content.

Here is the full story, via Anecdotal.

Software engineer and psychologist wanted

AI Grant (aigrant.org) is a distributed AI research lab. Their goal is to find and fund the modern-day Einsteins: brilliant minds from untraditional backgrounds working on AI.

They need a software engineer. The qualifications are twofold: an intrinsic interest in the problem of identifying talented people across the world, and a demonstrated ability to ship software projects without much supervision. This doesn’t have to be through traditional means. It could just be side projects on GitHub.

They’re also looking for a psychologist with experience in personality and IQ modeling.

Bay Area location is a big plus, but not a requirement. If you’re interested in learning more, email team@aigrant.org with information about yourself.

This is not a paid ad, but I am seeking to do a favor for the excellent and highly talented Daniel Gross (and perhaps for you), with whom you would get to work.  Do please mention MR if you decide to apply!

Zeynep Tufekci’s Facebook solution — can it work?

Here is her NYT piece; I’ll go through her four main solutions, breaking up what is one unified discussion paragraph by paragraph:

What would a genuine legislative remedy look like? First, personalized data collection would be allowed only through opt-in mechanisms that were clear, concise and transparent. There would be no more endless pages of legalese that nobody reads or can easily understand. The same would be true of any individualized targeting of users by companies or political campaigns — it should be clear, transparent and truly consensual.

Who can be against “clear, transparent and truly consensual”?  But this reminds me of those conservatives who wish regulations would be shorter, simpler, easier to write; it’s not always that easy, and wishing don’t make it so.  (Try sitting down with someone in the immediate process of writing such a rule.)  That said, let’s think about what maybe will happen.  How about the United States adopting some version of the forthcoming EU GDPR?  That might in fact be an OK outcome (NYT).  But will that be clear and transparent?  Is any EU regulation clear and transparent?  Can anyone tell me, sitting in their seats right now, whether it will outlaw the blockchain or not?  Whichever way that comes out, could either outcome be called “consensual”?  I don’t think Tufekci has given an actual proposal yet.

Second, people would have access, if requested, to all the data a company has collected on them — including all forms of computational inference (how the company uses your data to make guesses about your tastes and preferences, your personal and medical history, your political allegiances and so forth).

This is not feasible, as computational inference is usually not transparent and often is understood by nobody.  But even for the simpler stuff, what exactly is the call here?  That Facebook has to send you a big zip file?  Is the goal to inform people in some meaningful way?  Or simply to deter Facebook from having the information in the first place?  If it’s the latter, let’s have a more explicit argument that people would prefer a Facebook they have to pay for.  Personally, I don’t think they would prefer that, and they have already shown as much.

Third, the use of any data collected would be limited to specifically enumerated purposes, for a designed period of time — and then would expire. The current model of harvesting all data, with virtually no limit on how it is used and for how long, must stop.

“Must”?  Not “should”?  That is a classic example of trying to establish a conclusion simply by word usage.  In this context, what does “enumerated” mean?  Are we back to GDPR?  Or they send you an email with a long list of what is going on?  Or that information sits behind a home page somewhere?  (So much for simple and transparent.)  You have to opt in to each and every use of the data?  So far it sounds like more bureaucracy and less transparency, and in fact this kind of demand is precisely the origin of those lengthy “opt in” statements that no one reads or understands.

Fourth, the aggregate use of data should be regulated. Merely saying that individuals own their data isn’t enough: Companies can and will persuade people to part with their data in ways that may seem to make sense at the individual level but that work at the aggregate level to create public harms. For example, collecting health information from individuals in return for a small compensation might seem beneficial to both parties — but a company that holds health information on a billion people can end up posing a threat to individuals in ways they could not have foreseen.

Maybe, but there is no example given of harm other than an unspecified speculation.  It also seems to be saying I don’t have a First Amendment right to write personal information into a text box.  And who here is to do the regulating?  Government is one of the biggest violators of our privacy, and also a driving force behind electronic medical records, another massive medical privacy violator (for better or worse), most of all after they are hacked and those who have sought mental illness treatment have their identities put on WikiLeaks.  The governmental system of identity and privacy is based around the absurdity of using Social Security numbers.  Government software is generations behind the cutting edge, and OPM was hacked very badly, not to mention that Snowden made away with all that information.  And government is to be the new privacy guardian?  This needs way, way more of an argument.

I do understand that the author had only a limited word count.  But googling “Zeynep Tufekci Facebook” does not obviously bring us to a source where these proposals are laid out in more detail, nor is there any link in the online version of the article to anyone else’s proposal, much less hers.  So I say this piece is overly confident and under-argued.

What instead?  I would instead start with the sentence “Most Americans don’t value their privacy or the security of their personal data very much,” and then discuss all the ways that limits regulation, or lowers the value of regulation, or will lead many well-intended regulations to be circumvented.  Next I would consider whether there are reasonable restrictions on social media that won’t just cement in the power of the big incumbents.  Then I would ask an economist to estimate the costs of regulatory compliance from the numerous lesser-known web sites around the world.  Without those issues front and center, I don’t think you’ve got much to say.

Which companies are likely to be good or bad at public relations?

That is the topic of my latest Bloomberg column; the community banks are likely to be good.  Here is one excerpt:

I think of community banks as enjoying relatively high levels of trust. Millions of Americans have walked through the doors of their local banks and dealt with the loan officers, tellers and account managers, giving the business a human face. A community bank cannot serve a region without sending out a fair number of foot soldiers. Banks tend to have longstanding roots in their communities, and a large stock of connections and accumulated social capital.

In turn, community banks have converted this personal trust into political clout. There are community banks in virtually every congressional district, and these banks have developed the art of speaking for many different segments of American society, not just a narrow coastal elite. When these banks mobilize on behalf of a political cause, they are powerful, as illustrated by the likelihood that they will get regulatory relief from the Dodd-Frank Act, probably with bipartisan support. They have such influence that one member of the Federal Reserve Board must be a community banker, even though few economists see much rationale for this provision.

Given their usefulness, it would be wrong to describe community bankers as a stagnant sector of our economy. Still, the same features that make them trusted and politically powerful also make them unlikely to be major sector disruptors.

Already you can see a problem shaping up, as perhaps the faster-growing, higher-productivity companies will have less of this experience.  And indeed, the very dynamic big tech companies are often not so good at public relations:

Alternatively, let’s say you were designing a business that, whatever its other virtues might be, would not be very good at public relations.

First, you would make sure the business had come of age fairly recently. That would ensure the company didn’t have a long history of managing public relations, learning how the news media work, figuring out what it will or will not be blamed for, and rooting itself in local communities.

The next thing you might do is to concentrate the company’s broader business sector in one particular part of the country. That would ensure that the companies’ culture didn’t reflect the broadest possible swath of public opinion. Better yet, don’t choose a swing state such as Pennsylvania or Ohio, but rather opt for a region that is overwhelmingly of a single political orientation and viewed by many Americans as a bit crazy or out of touch. How about Northern California?

There is much more at the link.  The clincher of course is this:

And we have been building a political system that favors the time-honored company rather than the radical innovator.

Some simple Bitcoin economics

That is a new paper by Linda Schilling and Harald Uhlig, here is the abstract:

How do Bitcoin prices evolve? What are the consequences for monetary policy? We answer these questions in a novel, yet simple endowment economy. There are two types of money, both useful for transactions: Bitcoins and Dollars. A central bank keeps the real value of Dollars constant, while Bitcoin production is decentralized via proof-of-work. We obtain a “fundamental condition,” which is a version of the exchange-rate indeterminacy result in Kareken-Wallace (1981), and a “speculative” condition. Under some conditions, we show that Bitcoin prices form convergent supermartingales or submartingales and derive implications for monetary policy.
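The Kareken-Wallace logic behind that “fundamental condition” can be sketched roughly as follows, in my own notation rather than the paper’s exact statement: let $Q_t$ be the Dollar price of Bitcoin, $\pi_{t+1}$ Dollar inflation, and $m_{t+1}$ a stochastic discount factor. If agents willingly hold both monies in equilibrium, each must offer the same expected real return:

```latex
\underbrace{\mathbb{E}_t\!\left[\frac{m_{t+1}}{\pi_{t+1}}\right]}_{\text{return to holding Dollars}}
\;=\;
\underbrace{\mathbb{E}_t\!\left[\frac{m_{t+1}}{\pi_{t+1}}\cdot\frac{Q_{t+1}}{Q_t}\right]}_{\text{return to holding Bitcoin}}
```

With risk neutrality and a constant real Dollar value, this collapses to $\mathbb{E}_t[Q_{t+1}] = Q_t$: the Bitcoin price is roughly a martingale, and in the deterministic case any initial $Q_0$ clears the market, which is the indeterminacy result. Risk and transactions-demand corrections then tilt the price path toward the super- or submartingale behavior the abstract mentions.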

In this framework, I would attribute the volatility of the recent Bitcoin price to a) sometimes being in the speculative equilibrium or uncertainty about such, b) regulatory uncertainty, and c) uncertainty about the hedging or store of value properties of Bitcoin and other cryptoassets.  If you are interested in other considerations, here is a good Jimmy Song essay on why Bitcoin might be special.  And see this paper by Garratt and Wallace, though unlike with Schilling and Uhlig I am less sure how they are modeling the black/gray market uses for Bitcoin as a transactions medium.

What if we paid for Facebook?

Geoffrey Fowler asks that question, here is one bit from his analysis:

You can actually put a dollar figure on how much we’re worth to the social network. Facebook collected $82 in advertising for each member in North America last year. Across the world, it’s about $20 per member. Facebook the company is valued at about $450 billion because investors believe it will find even more ways to make money from collecting data on its 2 billion members.

You might imagine charging Americans $82 a year, though at that price the overall network would be smaller and of lower value to users.  Alternatively, Zeynep Tufekci wrote (NYT):

Internet sites should allow their users to be the customers. I would, as I bet many others would, happily pay more than 20 cents per month for a Facebook or a Google that did not track me, upgraded its encryption and treated me as a customer whose preferences and privacy matter. [She earlier had cited 20 cents per month as their profit per customer…TC takes all of these numbers with a grain of salt.]
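A quick back-of-envelope check of the figures quoted above, taking them at face value:

```python
# All numbers are the approximate figures cited in the article.
members_worldwide = 2_000_000_000      # monthly Facebook users
ad_revenue_per_member = 20             # dollars per member per year, worldwide

annual_ad_revenue = members_worldwide * ad_revenue_per_member
print(annual_ad_revenue)               # rough worldwide ad revenue, dollars/year

# Tufekci's cited ~20 cents of profit per member per month, annualized:
profit_per_member_per_year = 0.20 * 12
print(profit_per_member_per_year)
```

So worldwide ad revenue runs on the order of $40 billion a year, while the profit figure Tufekci cites works out to only a few dollars per member per year, which is why the gap between the $82 North American figure and the 20-cents-a-month figure deserves the grain of salt.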

Like Jonathan Swift, I have a simple proposal: don’t use “Facebook the service,” and conduct all of your social networking on WhatsApp, which by the way is owned by “Facebook the company.”  WhatsApp is fully encrypted, and it has no algorithms and indeed few bells and whistles of any kind.  From each person, messages are stacked in sequential order.  You can send photos and you can delete content, permanently I believe.  You can set up groups.  There is some kind of microphone function, though I’ve never figured it out.  And did I mention it is totally free?  Zero ads too.  Nor is the page cluttered, nor do you get these little notifications: “You have 37 messages, 49 notifications, 23 friend requests, 81 pokes, and a partridge in a pear tree,” etc.

Everything you are asking for exists now, from “Facebook the company,” though it is not “Facebook the service.”

Problem solved!  Oh, wait, you’re not interested…?  What should I infer from that?

Addendum: I do get that if everyone switched from “Facebook the service” to WhatsApp, the cross-subsidy would diminish and the terms of WhatsApp would change.  But still, at the margin, and in the meantime, plenty of people — including you — could switch and I expect this deal can remain the same.  Be a free rider!  Our democracy may depend on it.

Edward Tenner’s *The Efficiency Paradox*, or are big tech and finance actually the same?

The author is Edward Tenner and the subtitle is What Big Data Can’t Do.  Overall, I prefer to read Tenner on engineering more narrowly construed, but still I found some novel and interesting ideas in this book, as you might expect.

Most notably, I was struck by his claim that the rise of “Big Tech” and the rise of finance are more or less the same thing.  Many of the tech innovations are in fact transactional innovations, and both the “financialization” revolution and much of social network tech promulgate the idea of “life as a portfolio,” albeit portfolios of different kinds.  Both have an ideal of “friction-free commerce,” or social interactions, as the case may be, and of course in both cases this is organized by code.

Furthermore, if you make buying and finding things much easier, finance as a percentage of GDP likely will go up.  Do not forget that Jeff Bezos was first a young star at D. E. Shaw, a hedge fund.  Is it any accident that finance and tech are often, these days, competing for the same pool of talented young quant workers?

Here is one good bit from Tenner:

We have all heard of Jeff Bezos, founder of Amazon.com.  Only technical specialists and historians have heard of Jacobus Verhoeff.  Yet when Bezos planned to transform online retailing, bookselling was a natural beginning because, thanks to Verhoeff’s algorithm, more books had standardized product numbers than any other category of merchandise.

You can buy the book here.

More arguments against blockchain, most of all about trust

Here are more arguments about blockchain from Kai Stinchcombe, here is one ouch:

93% of bitcoins are mined by managed consortiums, yet none of the consortiums use smart contracts to manage payouts. Instead, they promise things like a “long history of stable and accurate payouts.” Sounds like a trustworthy middleman!

And:

Auditing software is hard! The most-heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it, and used it to steal fifty million dollars. If cryptocurrency enthusiasts putting together a $150m investment fund can’t properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in their version to drain your ethereum wallet of all your life savings?

It’s a complicated way to buy a book! It’s not trustless, you’re trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people.
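The “recursion bug” Stinchcombe alludes to is reentrancy: the contract pays out before updating its books, so a malicious payee’s callback can re-enter and withdraw again. Here is a toy simulation of the mechanic in plain Python (not real Solidity; all class and method names are invented for illustration):

```python
# Toy model of a reentrancy bug: withdraw() "sends" funds via a
# callback *before* zeroing the caller's balance, so the callback
# can re-enter withdraw() and be paid again from the shared pot.
class VulnerableVault:
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who, callback):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.total >= amount:
            self.total -= amount      # pay out of the shared pot...
            callback(amount)          # ...invoking the payee, who may re-enter...
            self.balances[who] = 0    # ...before the balance is zeroed

class Attacker:
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0

    def drain(self):
        self.vault.withdraw("attacker", self.receive)

    def receive(self, amount):
        self.stolen += amount
        if self.vault.total > 0:
            # Re-enter withdraw() while our balance still reads 10.
            self.vault.withdraw("attacker", self.receive)

vault = VulnerableVault()
vault.deposit("victim", 90)
vault.deposit("attacker", 10)
attacker = Attacker(vault)
attacker.drain()
print(attacker.stolen)  # far more than the attacker's 10-unit deposit
```

The attacker deposited 10 but walks away with the entire pot, and the victim’s balance still reads 90 on the books even though the vault is empty; the standard fix is to update state before making any external call.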

Here is the full essay, via Chris F. Masse.  Here is Kai’s earlier essay on blockchain.