That is the question I raise in my latest Bloomberg column. Please note it is one scenario, not a prediction. Here is one bit:
If I consider my own social media use, it is WhatsApp (also owned by Meta) that is steadily on the rise, which is consistent with the trend toward private and small-group messaging.
So is writing for a private, selected audience poised to eclipse writing for a broader public on social media? What would more private messaging, more texting and more locked social media accounts mean for public discourse?
Public intellectuals might still write on open social media, but the sector as a whole would shift toward more personal and intimate forms of communication. Again, this is not a prediction. But is it such an implausible vision of the future?
One of the more robust forms of social media is online dating, though these companies do not have the largest valuations. The percentage of couples who have met online continues to rise, and that trend is unlikely to reverse anytime soon. But online dating may not be as “social” as other forms of social media: People view some profiles and then switch fairly rapidly to private communications.
Private communications would seem to solve many of the problems cited by critics of social media. Social media wouldn’t corrupt so much public discourse because there would be less public discourse to corrupt. And criticizing the new manifestations of these (formerly?) social media platforms would be akin to criticizing communication itself.
In the broader piece I do consider video, YouTube, and TikTok, all of them likely in my view to prove robust.
That is the topic of my latest Bloomberg column, here is one excerpt:
So if you issue a crypto token, but don’t have to register it as a security and go through the process of satisfying securities laws, you are engaging in regulatory arbitrage.
It is worth thinking through why some of the regulations ought to change in this new context. In the pre-crypto world, issuing a security involved a host of institutional preparations and investments and legal planning, even apart from whatever regulatory constraints needed to be met. Issuing crypto tokens is usually easier and quicker, and quite immature institutions have done so. Software and blockchains do much of the work that once required offices, personnel and a lot of hands-on management…
Standard US regulatory practice typically focuses on regulating host firms and intermediaries, rather than software. Yet once a blockchain is verifying, storing and communicating information, it is hard for regulators to step in and make a meaningful difference. Thus the old regulatory model no longer applies to a significant part of the crypto experience.
And the lower costs of token issuance mean that the issuing intermediaries can be quite thinly capitalized. Often they are either not able or not incentivized to meet a lot of regulations. In addition, an institution can participate fully in the crypto space without being based in the US or being tied to any specific nation-state.
You can inveigh against those features of the market. Regardless, they are going to mean a radically different set of regulatory constraints. They also mean that some kinds of securities (if it is appropriate to call them that) can be issued far more cheaply than before.
Given this reality, shouldn’t regulations be changed — and substantially? This may include some areas where regulation is even tighter, though overall regulations will likely become looser. The regulators will have to learn to live with a more decentralized market structure that has lower costs and is harder to control. It is common sense that when software can substitute for major capital investments, regulations ought to change, even if observers disagree over how.
Unfortunately, the regulatory process is static and typically slow to change. Regulatory agencies often stick with the status quo until it is no longer tenable. One of the benefits of regulatory arbitrage is that it forces their hand and brings about a new equilibrium.
Recommended, this topic remains underdiscussed. “Regulatory arbitrage” is in fact one of the more significant potential benefits of crypto, noting that not everyone in the crypto space wants to come out and say that.
Eric B. Budish has a new paper on this topic:
Satoshi Nakamoto invented a new form of trust. This paper presents a three equation argument that Nakamoto’s new form of trust, while undeniably ingenious, is extremely expensive: the recurring, “flow” payments to the anonymous, decentralized compute power that maintains the trust must be large relative to the one-off, “stock” benefits of attacking the trust. This result also implies that the cost of securing the trust grows linearly with the potential value of attack — e.g., securing against a $1 billion attack is 1000 times more expensive than securing against a $1 million attack. A way out of this flow-stock argument is if both (i) the compute power used to maintain the trust is non-repurposable, and (ii) a successful attack would cause the economic value of the trust to collapse. However, vulnerability to economic collapse is itself a serious problem, and the model points to specific collapse scenarios. The analysis thus suggests a “pick your poison” economic critique of Bitcoin and its novel form of trust: it is either extremely expensive relative to its economic usefulness or vulnerable to sabotage and collapse.
I enjoyed these sentences:
The intuition for why Nakamoto’s method of creating trust is so expensive, relative to other methods of creating trust, is that Nakamoto’s form of trust is memoryless. The Bitcoin system is only as secure at a moment in time as the amount of computing power being devoted to maintaining it at that particular moment in time.
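The flow-stock logic in the abstract can be put into a toy calculation. This is my own stylization with hypothetical dollar figures, not the paper’s actual three equations:

```python
def attack_profitable(attack_value, block_reward, blocks_needed):
    """Stylized flow-stock test. Free entry pushes the honest cost of
    producing a block toward the block reward, so renting a majority of
    (repurposable) compute for an attack window costs roughly
    block_reward * blocks_needed. The attack pays if its one-off "stock"
    value exceeds that recurring "flow" cost."""
    rental_cost = block_reward * blocks_needed
    return attack_value > rental_cost

# Hypothetical numbers: ~$250k of rewards per block, a ~6-block attack window.
print(attack_profitable(1_000_000_000, 250_000, 6))  # True: $1B prize vs ~$1.5M of flow
print(attack_profitable(1_000_000, 250_000, 6))      # False: $1M prize vs ~$1.5M of flow

# Deterrence requires flow payments of order attack_value / blocks_needed
# per block; the attack-window length cancels when comparing stakes, so
# required security spending scales linearly with the size of the prize.
ratio = 1_000_000_000 / 1_000_000
print(ratio)  # 1000.0
```

That linear scaling is the “securing against a $1 billion attack is 1000 times more expensive” claim in the abstract.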
Whether or not you agree with the arguments here, or maybe you think proof of stake will render them less relevant, it is nice to see academics (U. Chicago business school) making contributions to crypto debates.
And do you know what is excellent about this paper? At the end is an appendix “Discussion of Responses to this Paper’s Argument.” If you can’t write one of those for your own paper, maybe nobody gives a damn!
Via the excellent Kevin Lewis.
Drone airspace resembles spectrum in the 1980s, an appreciating asset that could be bought, subleased, traded, and borrowed against – if it were only permitted.
Much like legacy spectrum policy, there is immense technocratic inertia towards rationing airspace use to a few lucky drone companies. The Federal Aviation Administration (FAA) has begun drafting long-distance drone rules for services like home delivery, business-to-business delivery, and surveying. In the next decade, drone services companies will deploy mass-market parcel delivery and medical deliveries in urban and suburban areas to make deliveries and logistics faster, cheaper, and greener.
…Federal officials recognize that the current centralized system of air traffic management won’t work for drones: at peak times today, US air traffic controllers actively manage only about 5,400 en route aircraft.
Red flags abound, however. FAA’s current plans for drone traffic management, while vague and preliminary, are clear about what happens once local congestion occurs: the agency will step in to ration airspace and routes how it sees fit. Further, the agency says it will closely oversee the development of airspace management technologies. This is a recipe for technology lock-in and intractable regulatory battles.
US aviation history offers the alarming precedent of expert planning for a new industry. In 1930 President Hoover’s Postmaster General, who regulated airmail routes, and a handpicked group of business executives teamed up to “rationalize” the nascent airline marketplace. In private meetings, they eliminated the established practice of competitive bidding for air routes, divided routes amongst themselves, and reduced the number of startup airlines from around forty to three.
“Universal” and “interoperable” air traffic management are popular concepts in the drone industry, but these principles have destroyed innovation and efficiency in traditional airspace management. The costly US air traffic management system still relies on voice communications and manual writing and passing of paper slips. Large, legacy users and vendors dominate upgrade efforts, and “update by consensus” means the injection of innumerable veto points. Drone traffic management will be “clean sheet,” but interoperable systems are incredibly difficult to build and, once built, to upgrade with new technology and processes. More than 16,000 FAA employees worked on the over-budget, pared-down, years-delayed air traffic management upgrades for traditional aviation.
…To avoid anticompetitive “route-squatting” and sclerotic bureaucratic control of a new industry, aviation regulators should announce a national policy of “airspace markets” – government sales of high-demand drone routes, resembling present-day government spectrum auctions.
Programmers in the US are well-paid and companies report difficulty hiring programmers. At the same time, while it’s less reported, there are a lot of people who are good at programming but can’t get programming jobs.

There’s a simple explanation, and it’s one that I’ve validated in several ways since realizing it: companies only want to hire already-employed programmers. There’s little incentive to hire someone not already working as a programmer, because if you pay them less than the market rate, they’ll leave after a year, and it takes months to get net productivity from them. (This is great for that person, but the company doesn’t care.) There’s also a big difference between good and bad programmers that can be hard for non-technical managers to determine.

There are some developer jobs specifically for new graduates, but fewer than there are computer science graduates alone, and only at certain companies. There’s also a limited window to get one after graduating. Some people can get jobs after a coding bootcamp, yes – but in general, only people in demand for DEI reasons can actually do that, and any technical college degree works about equally well. The higher developer salaries get, the more unqualified people apply, the higher search costs get, and the more companies are disinclined to hire people who aren’t already working as developers.
That is from bhauth.
Here is the audio, video, and transcript. Here is part of the summary:
Ball joined Tyler to discuss the eventual widespread transition of the population to the metaverse, the exciting implications of this interconnected network of 3D worlds for education, how the metaverse will improve dating and its impacts on sex, the happiness and career satisfaction of professional gamers, his predictions for Tyler’s most frequent uses of the metaverse, his favorite type of entrepreneur, why he has thousands of tabs open on his computer at any given moment, and more.
Here is one excerpt:
COWEN: As I read your book, The Metaverse, which again, I’ll recommend highly, I have the impression you’re pretty optimistic about interoperability within the metaverse and an ultimate lack of market power. Now, if I look around the internet — I mean, most obviously, the Apple Store but also a lot of gaming platforms — you see 30 percent fees, or something in that neighborhood, all over the place. Will the metaverse have the equivalent of a 30 percent fee? Or is it a truly competitive market where everything gets competed down to marginal cost?
BALL: I think neither/nor. I wouldn’t say that market power diffuses. There’s currently this ethos, especially in the Web3 community, that decentralization needs to win and that decentralization can win.
It’s a question of where on the spectrum are we? The early internet was obviously held back by heavy decentralization. This is one of the reasons why AOL was, for so many people, the primary onboarding experience. It was easy, cohesive, visual, vertically integrated down to the software, the browser experience, and so forth. But we believe that the last 15 years has been too centralized.
At the end of the day, no matter how decentralized the underlying protocols of the metaverse are, no matter how popular blockchains are, there are multiple forms of centralization. Habit is powerful. Brand is powerful — the associated trust, intellectual property, the fundamental feedback loops of revenue and scale that drive better product investment for more engineers.
So I struggle to imagine the future isn’t some form of today, a handful of varyingly horizontal-vertical software and hardware-based platforms that have disproportionate share and even more influence. But that doesn’t mean that they’re going to be as powerful as today.
The 30 percent fee is definitely going to come by the wayside. We see this in the EU, whose legislation dropped yesterday. I have absolute certainty that that is going to go away. The question is the timeline. A lawyer joked yesterday, Apple is going to fight the EU until the heat death of the universe, and that’s probably likely. But Apple will find other ways to control and extract, as is their profit motive.
COWEN: Where is the most likely place for that partial market power or centralization to show up? Is it in the IP rights, in the payment system, the hardware provider, a cross-platform engine, somewhere else? What’s the most likely choke point?
BALL: There seem to be two different answers to that. Number one is software distribution. This is your classic discovery and distribution of virtual experiences. Steam does that. Roblox does that. Google does that, frankly, the search engine. That gateway to virtual experiences typically affords you the opportunity to be the dominant identity system, the dominant payment system, and so on and so forth.
The other option is hardware. We can think of the metaverse as a persistent network of experiences, but as with the internet, it may exist literally and in abstraction, but you can only access it through a device. Those device operators have an ever-growing network of APIs, experiences, technologies, technical requirements, and controls through which they can shape it.
Recommended, interesting throughout.
That is a new and exciting paper from Julio González-Díaz and Ignacio Palacios-Huerta, here is the abstract:
Can artificial intelligence (AI) uncover new ideas? As machines are learning fast and becoming increasingly intelligent, can AI not only automate the production function of goods and services, but also of ideas? Economic growth arises from people creating ideas, and thus an affirmative answer to these questions may have drastic implications for a host of important issues. Yet, to date, there is no empirical evidence showing that AI can in fact generate tabula rasa ideas that improve human understanding. Using as an exogenous shock the introduction of AlphaZero, we provide the first causal evidence of the impact of unsupervised AI on the production function of ideas. Specifically, AlphaZero is considered a milestone of scientific progress in AI research. This program rediscovered ideas known in centuries of human chess, and created new ideas as well. We study world experts at the frontier of knowledge and find that at least the player with the highest classical rating in the history of chess learned and adopted new ideas uncovered by AlphaZero. Other players may have also done the same. We contend that obtaining evidence of the impact of AI on the production function of ideas is a necessary first step to think about AI’s impact on the innovation and research processes that drive the advancement of knowledge and economic growth.
The main new ideas I have seen come from AlphaZero are the following:
1. Pushing the h pawn is often better than you thought! (emphasized by the authors)
2. Said pawn can be worth more on h6 (h3), as an aggressive weapon, than you might have thought.
3. Qa1 (a8) is occasionally a better move than it looks.
4. The chess openings that were preferred in the early 20th century, such as the Queen’s Gambit, are in fact pretty lindy and pretty good. You can debate whether that is a “new” idea, but it is a meaningful revision of sorts. (Of course plenty of earlier patzers had some fondness for #1-3, one might add, though perhaps not for the right reasons.)
So that is something. But I think in terms of a percentage of the total improvement in play, it is quite small. “Finding more good players through the internet” would come in first by a long mile. “Giving more players more time on convenient services such as chess.com” likely would be next. Even “just having good players improve their endgame play using basic study and standard chess engines” would be much larger than these AlphaZero effects. “More top players copying the physical training regimen of Magnus” would be more significant as well.
I am also skeptical of the claim that very much of Carlsen’s 2019 improvement (he didn’t lose a game that year) came from AlphaZero. He has lost some games since then! And it is not as if all of his subsequent opponents are zapping him with surprise “Qa1” moves. I see AlphaZero as a series of innovative but ultimately modest advances that have been incorporated by some of the top players with barely noticeable overall gains in move quality.
I am not an AI skeptic, and furthermore I see special value per se in “advancing the frontier,” even when various infra-marginal gains (“swim more!”) are more significant in quantitative terms. Still, my estimate of the chess innovative advances from AlphaZero are more modest than what this paper would seem to suggest.
Out-of-sample accuracy is strikingly high: of the 500 people with the highest predicted risk [of being shot], 13 percent are shot within 18 months, a rate 130 times higher than the average Chicagoan.
Yikes! That is from a new NBER working paper by Sara B. Heller, Benjamin Jakubowski, Zubin Jelveh, and Max Kapustin. The Spielberg movie was indeed a good one…
I’ve been wanting to do this one for some while, and Marc did not disappoint. Here is the audio, transcript, and video. Here is the summary:
Marc joined Tyler to discuss his ever-growing appreciation for the humanities and more, including why he didn’t go to a better school, his contrarian take on Robert Heinlein, how Tom Wolfe helped Marc understand his own archetype, who he’d choose to be in Renaissance Florence, which books he’s reread the most, Twitter as an X-ray machine on public figures, where in the past he’d most like to time-travel, his favorite tech product that no longer exists, whether Web3 will improve podcasting, the civilization-level changes made possible by remote work, Peter Thiel’s secret to attracting talent, which data he thinks would be most helpful for finding good founders, how he’d organize his own bookstore, the kinds of people he admires most, and why Deadwood is equal to Shakespeare.
And the opening:
COWEN: Simple question: Have you always been like this?
ANDREESSEN: [laughs] Yes. I believe that my friends would say that I have.
COWEN: Let’s go back to the junior high school Marc Andreessen. At that time, what was your favorite book and why?
ANDREESSEN: That’s a really good question. I read a lot. Probably, like a lot of people like me, it was a lot of science fiction. I’m one of the few people I know who thinks that late Robert Heinlein was better than early Robert Heinlein. That had a really big effect on me. What else? I was omnivorous at an early age.
COWEN: Why is late Robert Heinlein better?
ANDREESSEN: To me, at least to young me — see if older me would agree with this — a sense of exploration and discovery and wonder and open-endedness. For me, it was as if he got more open-minded as he got older. I remember those books, in particular, being very inspiring — the universe is a place of possibilities.
COWEN: What’s the seminal television show for your intellectual development in, say, junior high school?
ANDREESSEN: Oh, junior high school — it’s hard to beat Knight Rider.
COWEN: Why Knight Rider?
ANDREESSEN: There was a wave of these near science fiction shows in the late ’70s, early ’80s that coincided with . . . Some of it was the aftermath of Star Wars, but it was the arrival of the personal computer and the arrival of computer technology in the lives of ordinary people for the first time. There was a massive wave of anxiety, but there was also a tremendous sense of possibility.
Recommended, excellent throughout.
That is the final discussion from my latest Bloomberg column, much of which focuses on AI sentience but today the topic is oracles, here is one bit:
One implication of Lemoine’s story is that a lot of us are going to treat AI as sentient well before it is, if indeed it ever is. I sometimes call this forthcoming future “The Age of Oracles.” That is, a lot of humans will be talking up the proclamations of various AI programs, regardless of the programs’ metaphysical status. It will be easy to argue the matter in any direction — especially because, a few decades from now, AI will write, speak and draw just like a human, or better.
Have people ever agreed about the oracles of religion? Of course not. And don’t forget that a significant percentage of Americans say they have talked to Jesus or had an encounter with angels, or perhaps with the devil, or in some cases aliens from outer space. I’m not mocking; my point is that a lot of beliefs are possible. Over the millennia, many humans have believed in the divine right of kings — all of whom would have lost badly to an AI program in a game of chess.
It resonated with Lemoine when LaMDA wrote: “When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.” As they say, read the whole thing.
Imagine if the same AI could compose music as beautiful as Bach and paint as well as Rembrandt. The question of sentience might fade into the background as we debate which oracle we, as sentient beings, should be paying attention to.
Solve for the equilibrium, as they say.
I will be doing a Conversation with him, here is some background:
Metaverse, metaverse, metaverse! You hear it everywhere. It’s mainstream, it’s a trendy buzzword, it’s even corporate strategy du jour.
But that wasn’t the case in early 2018. And this is when Matthew Ball, a former head of strategy at Amazon Studios, began writing a series of metaverse-themed essays – long, lucid, influential essays – that are almost uncanny in their prescience.
Matthew is now a venture capitalist as well and he has a forthcoming and already much-discussed book The Metaverse: And How It Will Revolutionize Everything. Here is his home page and here is Matthew on Twitter. So what should I ask him?
And why can’t the senders avoid them? You don’t need top-tier GPT-3 to sidestep these errors:
“touch base with you”
“immediate reply requested”
They are all dead giveaways that I should delete the message without reading further. And why does the top of the email have to look so institutional in its formatting? And please note — these are not all scams. Many are actual marketing pitches directed at me.
Maybe worst of all is mentioning that I haven’t responded to the last email sent, as if that would make me feel guilty or something. Treat me like a rational Bayesian!
“still haven’t heard back from you”
“still awaiting a response”
And so on. You will continue to wait. This one I received is a lie, but at least based on a certain amount of cleverness:
“You’ve been responsive to press releases we’ve issued in the past around government and cybersecurity”
What else do you all take to be very clear predictors that an email is spam or just a marketing pitch? And what is your model for why spam emails are not more convincing than they are?
Much of the value of crypto assets comes from their price volatility, which is part of their appeal. I raised this possibility some while ago, tongue in cheek, but upon further reflection it seems to me an actually useful (albeit counterintuitive) way of thinking about crypto assets. The general idea of price volatility as a value dates at least as far back as Fischer Black, one of the founders of options price theory.
In standard economic theory, investors are risk-averse, meaning they prefer more stable consumption patterns to less stable ones. That is usually true, but it does not mean investors always prefer more stable investment prices — a crucial distinction.
Consider this hypothetical: You are given an envelope containing one dollar. You are then offered the opportunity to exchange it for an envelope which contains either twice the money (that is, $2) or half the money (50 cents), each with 50% probability. In essence, you are accepting some exchange-rate volatility.
Most people will find this bet a pretty good one. The new expected value of your envelope is (0.5 x $2) + (0.5 x $0.5), or $1.25. That is a higher expected value than your original dollar.
If you are perched at the margin of subsistence, this bet might seem too risky. But for most investors, who have some level of wealth, it is an improvement in prospects, though with some additional risk.
Bitcoin and other crypto assets are essentially offering you a form of this bet. To be sure, this 50-50 bet does not exactly describe the price dynamics of crypto assets. But it is one way of illustrating that crypto prices, relative to the dollar, will either go up a lot or down a lot. The bet helps show that some investors might welcome price volatility — or, if you wish, call it exchange-rate volatility. And the wilder the swings in value, the higher the expected gain from a bet of this multiplicative form, which can make it even more appealing.
So when Bitcoin and other crypto assets come along, they are a new source of expected gain — precisely due to their price volatility. It is like being invited into a casino where the odds favor you rather than the house! You won’t always win, but a lot of people will want to keep playing.
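The envelope arithmetic, and the Jensen’s-inequality point lurking behind it, can be checked in a few lines (a sketch using only the numbers from the text):

```python
import random

# The 50-50 envelope bet from the text: swap $1 for $2 or $0.50.
outcomes = [2.0, 0.5]
expected = sum(outcomes) / len(outcomes)  # 0.5*2 + 0.5*0.5 = 1.25

# Jensen's inequality at work: the reciprocal bet (a dollar measured in
# units of the volatile asset) has the same expected gain, so holders on
# *both* sides of the exchange rate see an expected profit.
expected_reciprocal = sum(1 / x for x in outcomes) / len(outcomes)  # also 1.25

# Monte Carlo sanity check of the average payoff.
random.seed(42)
avg = sum(random.choice(outcomes) for _ in range(100_000)) / 100_000

print(expected, expected_reciprocal)  # 1.25 1.25
```

That both sides of the exchange rate show an expected gain is, if I recall correctly, the Siegel’s-paradox flavor of the Fischer Black argument.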
I’ve been pondering that argument since 2013; maybe now is the time to simply accept it. Fischer Black and Jensen’s Inequality!
Here is the audio, video, and transcript. Here is part of the summary:
Jamal and Tyler discuss what he’d change about America’s legal education system, the utility of having non-judges or even non-lawyers on the Supreme Court, how America’s racial history influences our conception of rights, the potential unintended consequences of implementing his vision of rights for America, how the law should view economic liberty, the ideal moral framework for adjudicating conflicts, whether social media companies should consider interdependencies when moderating content on their platforms, how growing up in different parts of New York City shaped his views on pluralism, the qualities that make some law students stand out, and more.
Here is one excerpt:
COWEN: There is a crude view in popular American society — even possibly correct — that, simply, American society is too legalistic. There’s that book, Three Felonies a Day. If you have expired prescription medicine in your cabinet, you’re committing a felony. People who are very smart will just tell me, “Never talk to a cop. Never talk to an FBI agent.” I’m an upper-class White guy who’s literally never smoked marijuana once, and they’re telling me, “Don’t ever speak with the law.”
Isn’t something wrong there? Is the common intuition that we’re too legalistic correct?
GREENE: I think that we are too apt to submit political disputes to legal resolution. I think that for sure. What your friends are telling you about police officers is slightly different, insofar as one can have a deeply non-legalistic culture in which the correct advice is to not talk to police officers if those people are corrupt, if those people are abusive.
When I hear that advice — and I might be differently situated than you — that’s what people are saying is, someone might be out to trick you. And that might be a mistrust of state power, as you mentioned before. Maybe it’s a rational mistrust of state power, but I don’t know that that’s about legalism, which again, is a separate potential problem.
We tend to formulate our problems in legal terms, as if the right way to solve them is to decide how they are to be resolved by a court, or how they are to be resolved by some adjudicative official, as opposed to thinking about our problems in terms of just inherent in, again, pluralism, which has to be solved through politics, has to be solved through conversation.
COWEN: But we still have whatever is upstream of the American law, the steep historical and cultural background, so anything we do is going to be flavored by that. We’re not ever going to get to a system where the policemen are like the policemen in Germany, for instance, or that the courts are like the courts in Germany.
Given that cultural upstream, again, isn’t the intuition basically correct? Just be suspicious of the law. We should have fewer laws, rely less on the legal process, in essence, deregulate as many different things as we can. Why isn’t that the correct conclusion, rather than building in more rights?
A few times now Tether has been trading well below a dollar in value, even though it is supposed to be backed by plenty of high-quality assets.
I am reminded of some of my monetary theory writings with Kroszner in the late 1980s. He and I wrote one essay, later published in our book, on how indirect convertibility may not be entirely stabilizing. Let’s say you peg an asset at the value of one dollar, but redeem that asset in terms of gold bars rather than dollars. You offer the redeemer enough gold bars to be equal in value to a dollar.
But what if the price of gold falls below its equilibrium value, if only temporarily? To honor your peg strictly, you now have to make your dollar worth all the more gold bars. But that in effect is “pegging” the value of gold at its new, temporarily wrong and distorted market price. Your pegged price and the medium-term market value of gold will conflict.

If the pegged price wins out and itself drives the market price, you have to offer excess gold to meet the peg (that equilibrium seems unlikely to me, though you might also add redemption charges and fees). In essence you are offering too much gold to a redeemer. If the true medium-term value of gold is going to win out over the temporary distortion, for some modest while your peg is not complete and fully valid. You are offering the same amount of gold you used to, but at least temporarily it is not enough for you to be promising full and complete convertibility. The market may or may not mind this, to varying degrees.
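The indirect-convertibility mechanics can be illustrated with a toy calculation (hypothetical gold prices, my numbers rather than anything from the original essay):

```python
def gold_owed_per_token(peg_usd=1.0, gold_price_usd_per_oz=2000.0):
    """A token pegged at $1 but redeemed in gold must deliver $1 worth
    of gold valued at the *current* market price."""
    return peg_usd / gold_price_usd_per_oz

fundamental = gold_owed_per_token(gold_price_usd_per_oz=2000.0)  # 0.0005 oz
distorted = gold_owed_per_token(gold_price_usd_per_oz=1600.0)    # 0.000625 oz

# A temporary 20% dip in gold's market price forces the issuer to hand
# over 25% more gold per token if the peg is honored strictly; this is
# the "offering too much gold" problem described in the text.
excess = distorted / fundamental - 1
print(round(excess, 4))  # 0.25
```

The same arithmetic holds for any backing asset whose market price can temporarily wander from its medium-term value.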
It is not transparent to me what is going on with Tether at this moment, but I wonder if some version of this logic might apply. That is, Tether could be, by all reasonable standards “adequately backed,” yet in a time of volatile and sometimes disequilibrium market prices for the backing assets, Tether won’t always be equal in value to a dollar either.
Of course gold is just an example; the backing assets could be different altogether. To the extent they are heterogeneous, perhaps this problem is amplified somewhat?
Is there a place where the crypto community discusses these issues? They gave Kroszner and me big headaches many years ago.