Category: Web/Tech

Does the O-Ring model hold for AIs?

Let’s say you have a production process in which the AIs operate at IQ = 160 and the humans operate at IQ = 120.  The O-Ring model, as you may know, predicts you end up with productivity akin to IQ = 120.  The model, in short, says a production process is no better than its weakest link.

More concretely, it could be the case that the superior insights of the smarter AIs are lost on the people they need to work with.  Or overall reliability is lowered by the humans in the production chain.  This latter problem is especially important when there is complementarity in the production function, namely that each part has to work well for the whole to work.  Many safety problems have that structure.

The overall productivity may end up at a somewhat higher level than IQ = 120, if only because the AIs will work long hours very cheaply.  Still, the quality of the final product may be closer to IQ = 120 than you might have wished.
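As a rough illustration (a stylized sketch, not Kremer’s exact formulation, and the numbers are made up), think of final quality as multiplicative in the quality of each task in the chain, so even a couple of weaker links drag the whole product down:

```python
# Toy O-Ring-style production: output quality is the product of per-task
# quality, so weak links dominate the chain. Numbers are purely illustrative.

def chain_quality(task_qualities):
    """Multiplicative ('weakest links dominate') production quality."""
    q = 1.0
    for task_q in task_qualities:
        q *= task_q
    return q

all_ai    = chain_quality([0.95] * 6)                 # six AI-level tasks
mixed     = chain_quality([0.95] * 4 + [0.75] * 2)    # two human-level tasks in the chain
all_human = chain_quality([0.75] * 6)

print(f"all AI:    {all_ai:.2f}")     # ~0.74
print(f"mixed:     {mixed:.2f}")      # ~0.46, pulled well toward the weaker links
print(f"all human: {all_human:.2f}")  # ~0.18
```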

This is another reason why I think AI productivity will spread in the world only slowly.

Sometimes when I read AI commentators I feel they are imagining production processes run by AIs only.  That may come eventually, but I do not see that state of affairs arriving anytime soon, if only for legal and regulatory reasons.

Furthermore, those AIs might have some other shortcomings, IQ aside.  And an O-Ring logic could apply to those qualities as well, even within the circle of AIs themselves.  So if, say, Claude and the o1 model “work together,” you might end up with the worst of both worlds rather than the best.

The uneven effects of AI on the American economy

That is the topic of my latest Bloomberg column, here is one excerpt:

To see how this is likely to play out, start with a distinction between sectors in which it is relatively easy to go out of business, and sectors in which it is not. Most firms selling computer programming services, for example, do not typically have guaranteed customers or revenue, at least for long. Employees have to deliver, or they and their company will be replaced. The same is true of most media companies: If they lose readers or customers, their revenue disappears. There is also relatively free entry into the sector in the US, due to the First Amendment.

Another set of institutions goes out of business only slowly, if at all. If a major state university does a poor job educating its students, for example, enrollment may decline. But the institution is still likely to be there for decades more. Or if a nonprofit group does a poor job pursuing its mission, donors may not learn of its failings for many years, while previous donors may pass away and include the charity in their wills. The point is, it can take a long time for all the money to dry up.

Which leads me to a prediction: Companies and institutions in the more fluid and competitive sectors of the economy will face heavy pressure to adopt AI. Those not in such sectors will not.

It is debatable how much of the US economy falls into each category, and of course it is a matter of degree. But significant parts of government, education, health care and the nonprofit sector can go out of business very slowly or not at all. That is a large part of the US economy — large enough to slow down AI adoption and economic growth.

As AI progresses, the parts of the economy with rapid exit and free entry will change quickly.

Recommended, read the whole thing.

Scott Alexander on the Progress Studies conference

Here is one excerpt:

Over-regulation was the enemy at many presentations, but this wasn’t a libertarian conference. Everyone agreed that safety, quality, the environment, etc, were important and should be regulated for. They just thought existing regulations were colossally stupid, so much so that they made everything worse including safety, the environment, etc. With enough political will, it would be easy to draft regulations that improved innovation, price, safety, the environment, and everything else.

For example, consider supersonic flight. Supersonic aircraft create “sonic booms”, minor explosions that rattle windows and disturb people underneath their path. Annoyed with these booms, Congress banned supersonic flight over land in 1973. Now we’ve invented better aircraft whose booms are barely noticeable, or not noticeable at all. But because Congress banned supersonic flight – rather than sonic booms themselves – we’re stuck with normal boring 6-hour coast-to-coast flights. If aircraft progress had continued at the same rate it was going before the supersonic ban, we’d be up to 2,500 mph now (coast-to-coast in ~2 hours). Can Congress change the regulation so it bans booms and not speed? Yes, but Congress is busy, and doing it through the FAA and other agencies would take 10-15 years of environmental impact reports.

Or consider solar power. The average large solar project is delayed 5-10 years by bureaucracy. Part of the problem is NEPA, the infamous environmental protection law saying that anyone can sue any project for any reason if they object on environmental grounds. If a fossil fuel company worries about competition from solar, they can sue upcoming solar plants on the grounds that some ants might get crushed beneath the solar panels; even in the best-case where the solar company fights and wins, they’ve suffered years of delay and lost millions of dollars. Meanwhile, fossil fuel companies have it easier; they’ve had good lobbyists for decades, and accrued a nice collection of formal and informal NEPA exemptions.

Even if a solar project survives court challenges, it has to get connected to the grid. This poses its own layer of bureaucracy and potential pitfalls.

Do read the whole thing.  And congratulations to Jason Crawford and Heike Larson for pulling off this event.

Metascience podcast on science and safety

From the Institute for Progress.  There are four of us: Dylan Matthews, Matt Clancy, Jacob Trefethen, and myself.  There is a transcript, and here is one very brief excerpt:

Tyler Cowen: I see the longer run risks of economic growth as primarily centered around warfare. There is lots of literature on the Industrial Revolution. People were displaced. Some parts of the country did worse. Those are a bit overstated.

But the more productive power you have, you can quite easily – and almost always do – have more destructive power. The next time there’s a major war, which could be many decades later, more people will be killed, there’ll be higher risks, more political disorder. That’s the other end of the balance sheet. Now, you always hope that the next time we go through this we’ll do a better job. We all hope that, but I don’t know.

And:

Tyler Cowen: But the puzzle is why we don’t have more terror attacks than we do, right? You could imagine people dumping basic poisons into the reservoir or showing up at suburban shopping malls with submachine guns, but it really doesn’t happen much. I’m not sure what the binding constraint is, but since I don’t think it’s science, that’s one factor that makes me more optimistic than many other people in this area.

Dylan Matthews: I’m curious what people’s theories are, since I often think of things that seem like they would have a lot of potential for terrorist attacks. I don’t Google them because after Edward Snowden, that doesn’t seem safe.

I live in DC, and I keep seeing large groups of very powerful people. I ask myself, “Why does everyone feel so safe? Why, given the current state of things, do we not see much more of this?” Tyler, you said you didn’t know what the binding constraint was. Jacob, do you have a theory about what the binding constraint is?

Jacob Trefethen: I don’t think I have a theory that explains the basis.

Tyler Cowen: Management would be mine. For instance, it’d be weird if the greatest risk of GPT models was that they helped terrorists have better management, just giving them basic management tips like those you would get out of a very cheap best-selling management book. That’s my best guess.

I would note that this was recorded some while ago, and on some of the AI safety issues I would put things differently now.  Maybe some of that is having changed my mind, but most of all I simply would present the points in a very different context.

Does it matter who Satoshi was?

That is the topic of my latest Bloomberg column, here is one excerpt:

It also matters if Satoshi was a single person or a small team. If a single person, that might mean future innovations are more likely than generally thought: If Satoshi is a lone individual, then maybe there are more unknown geniuses out there. On the other hand, the Satoshi-as-a-team theory would mean that secrets are easier to keep than people think. If that’s the case, then maybe conspiracy theories are more true than most of us would care to admit.

According to many speculations, Satoshi came out of a movement obsessed with e-cash and e-gold mechanisms, dating to the 1980s. People from those movements who have been identified as potential Satoshi candidates include Nick Szabo, Hal Finney, Wei Dai, David Chaum and Douglas Jackson, among others. At the time, those movements were considered failures because their products did not prove sustainable. The lesson here would be that movements do not truly and permanently fail. It is worth experimenting in unusual directions because something useful might come out of those efforts.

If Peter Todd is Satoshi, then it’s appropriate to upgrade any estimates of the ability of very young people to get things done. Todd would have been working on Bitcoin and the associated white paper as a student in his early 20s. At the same time, if the more mainstream Adam Back is involved, then maybe the takeaway is that rebellious young people should seek out older mentors on matters of process and marketing.

I believe that in less than two years we will know who Satoshi is.

Acemoglu interview with Times of India

Here is part of the segment on AI:

Given the potential for AI to exacerbate inequality, how can we redirect technology?

We need to actively steer technological development in a direction that benefits broader swathes of humanity.  This requires a pro-human approach that prioritises enhancing worker productivity and autonomy, supporting democracy and citizen empowerment, and fostering creativity and innovation.

To achieve this, we need to: a) Change the narrative around technology, emphasising societal control and a focus on human well-being. b) Build strong countervailing powers, such as labour unions and civil society organisations, to balance the power of tech companies, and c) Implement policies that level the playing field, including tax reforms that discourage automation and promote labour, data rights for individuals and creative workers, and regulations on manipulative digital advertising practices.

Here is the full interview.

How should strong AI alter philanthropy?

That is the theme of my latest Bloomberg column, and here is one bit:

One big change is that AI will enable individuals, or very small groups, to run large projects. By directing AIs, they will be able to create entire think tanks, research centers or businesses. The productivity of small groups of people who are very good at directing AIs will go up by an order of magnitude.

Philanthropists ought to consider giving more support to such people. Of course that is difficult, because right now there are no simple or obvious ways to measure those skills. But that is precisely why philanthropy might play a useful role. More commercially oriented businesses may shy away from making such investments, both because of risk and because the returns are uncertain. Philanthropists do not have such financial requirements.

And this oft-neglected point:

Strong AI capabilities also mean that the world might be much better over some very long time horizon, say 40 years hence. Perhaps there will be amazing new medicines that otherwise would not have come to pass, and as a result people might live 10 years longer. That increases the return — today — to fixing childhood maladies that are hard to reverse. One example would be lead poisoning in children, which can lead to permanent intellectual deficits. Another would be malnutrition. Addressing those problems was already a very good investment, but the brighter the world’s future looks, and the better the prospects for our health, the higher those returns.

The flip side is that reversible problems should probably decline in importance. If we can fix a particular problem today for $10 billion, maybe in 10 years’ time — due to AI — we will be able to fix it for a mere $5 billion. So it will become more important to figure out which problems are truly irreversible. Philanthropists ought to be focused on long time horizons anyway, so they need not be too concerned about how long it will take AI to make our world a fundamentally different place.
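A toy calculation of the point, with invented numbers: a brighter AI-driven future stretches the horizon over which an irreversible childhood fix pays off, while a reversible problem can simply be fixed later at a lower price.

```python
# Illustrative numbers only: compare an irreversible problem (say, childhood
# lead exposure) with a reversible one whose fix gets cheaper as AI improves.

life_years_baseline = 40    # remaining years of benefit per child helped, today
life_years_with_ai  = 50    # if AI-driven medicine adds ~10 healthy years
annual_benefit      = 1.0   # benefit per child per year, arbitrary units

# Irreversible: the damage can only be prevented now, and the benefit lasts a lifetime.
fix_now_baseline = life_years_baseline * annual_benefit
fix_now_with_ai  = life_years_with_ai * annual_benefit
print(fix_now_with_ai / fix_now_baseline)  # 1.25: the same fix is worth 25% more today

# Reversible: fixing it today costs $10B, but waiting 10 years may cost only $5B.
cost_today, cost_later = 10e9, 5e9
print(cost_today / cost_later)  # 2.0: waiting roughly halves the cost of the fix
```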

Recommended, interesting throughout.

Reflections on Palantir

Here is a new essay by Nabeel Qureshi, excerpt:

The combo of intellectual grandiosity and intense competitiveness was a perfect fit for me. It’s still hard to find today, in fact – many people have copied the ‘hardcore’ working culture and the ‘this is the Marines’ vibe, but few have the intellectual atmosphere, the sense of being involved in a rich set of ideas. This is hard to LARP – your founders and early employees have to be genuinely interesting intellectual thinkers. The main companies that come to mind which have nailed this combination today are OpenAI and Anthropic. It’s no surprise they’re talent magnets.

And this:

Eventually, you had a damn good set of tools clustered around the loose theme of ‘integrate data and make it useful somehow’.

At the time, it was seen as a radical step to give customers access to these tools — they weren’t in a state for that — but now this drives 50%+ of the company’s revenue, and it’s called Foundry. Viewed this way, Palantir pulled off a rare services company → product company pivot: in 2016, descriptions of it as a Silicon Valley services company were not totally off the mark, but in 2024 they are deeply off the mark, because the company successfully built an enterprise data platform using the lessons from those early years, and it shows in the gross margins – 80% gross margins in 2023. These are software margins. Compare to Accenture: 32%.

The rest is interesting throughout.  As Nabeel and a few others have noted, there should be many more pieces trying to communicate what various businesses and institutions really are like.

What surprised you the most this year?

What has surprised you the most this year?

The development of decentralized training for AI models, from DiLoCo and DiPaCo from Google DeepMind to Distro coming up from Nous Research. It completely changes the game in terms of how we ought to think about large models and what we can do to train them. This means pure compute thresholds are not going to be very useful, and that we will have even less of a way to centrally control the means of knowledge production.

That is from Rohit Krishnan, who answers other questions as well.  Interviewed by Derek Robertson, via Mike Doherty.
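For readers who have not followed this literature: the common thread in these methods is that workers take many local optimizer steps between synchronizations, so they can sit far apart on slow links. Here is a heavily simplified sketch of that local-steps-then-sync pattern; real systems such as DiLoCo use an inner AdamW and an outer Nesterov-momentum step, both omitted here.

```python
import numpy as np

# Simplified local-SGD-style sketch: each worker takes many local gradient
# steps on its own data shard, and only the averaged parameter update crosses
# the (slow) network once per round.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])

def make_shard(n=2_000):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

shards = [make_shard() for _ in range(4)]          # 4 workers, 4 data shards
w = np.zeros(3)                                    # shared "global" parameters

for sync_round in range(20):                       # communication rounds
    deltas = []
    for X, y in shards:
        w_local = w.copy()
        for _ in range(50):                        # many cheap local steps
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.01 * grad
        deltas.append(w_local - w)                 # "pseudo-gradient" sent over the network
    w += np.mean(deltas, axis=0)                   # one averaged update per round

print(np.round(w, 3))                              # close to [2.0, -3.0, 0.5]
```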

Dario Amodei on AI and the optimistic scenario

Here is a longish essay, here is one excerpt:

Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one – for example, an air force needs both planes and pilots, and hiring more pilots doesn’t help much if you’re out of planes. I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.

I view human imperfections and current institutional and legal constraints as more binding than Dario does, and thus I expect the speed of progress to be lower than he expects. But there is much in his essay I agree with.
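A toy way to see the limiting-factor point in his planes-and-pilots framing (numbers invented for illustration): if output requires both intelligence and some complementary input, then past the bottleneck extra intelligence earns nothing until the other factor grows.

```python
# Toy complementary production: output needs both "intelligence" and a
# complementary factor (pilots, lab capacity, regulatory approvals, ...).
# A Leontief-style min() makes the limiting-factor logic stark.

def output(intelligence, complementary_factor):
    return min(intelligence, complementary_factor)

for intelligence in [1, 10, 100, 1000]:
    print(intelligence, output(intelligence, complementary_factor=10))
# 1 -> 1, 10 -> 10, 100 -> 10, 1000 -> 10: past the bottleneck,
# extra intelligence has zero marginal return until the other factor grows.
```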

Who benefits from working with AI?

We use a controlled experiment to show that ability and belief calibration jointly determine the benefits of working with Artificial Intelligence (AI). AI improves performance more for people with low baseline ability. However, holding ability constant, AI assistance is more valuable for people who are calibrated, meaning they have accurate beliefs about their own ability. People who know they have low ability gain the most from working with AI. In a counterfactual analysis, we show that eliminating miscalibration would cause AI to reduce performance inequality nearly twice as much as it already does.

That is from a new NBER working paper by Andrew Caplin, David J. Deming, Shangwen Li, Daniel J. Martin, Philip Marx, Ben Weidmann & Kadachi Jiada Ye.
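One toy mechanism consistent with the abstract (my own sketch, not the paper’s model): if people accept the AI’s answer only when they believe it beats their own ability, then low-ability people who know they are low-ability delegate and gain, while overconfident low-ability people refuse the help.

```python
import numpy as np

# My own toy mechanism, not the paper's model: agents accept the AI's answer
# only when they *believe* the AI is better than they are. Overconfident
# (miscalibrated) low-ability agents therefore leave the AI's help on the table.

rng = np.random.default_rng(1)
n = 100_000
ai_accuracy = 0.75

ability = rng.uniform(0.4, 0.9, size=n)            # true probability of being right alone
overconfidence = rng.choice([0.0, 0.2], size=n)    # half the agents overrate themselves
believed_ability = ability + overconfidence

uses_ai = believed_ability < ai_accuracy           # delegate only if the AI seems better
performance = np.where(uses_ai, ai_accuracy, ability)

calibrated = overconfidence == 0
print("calibrated:   ", performance[calibrated].mean().round(3))
print("overconfident:", performance[~calibrated].mean().round(3))
# Calibrated agents do better because the low-ability ones among them delegate;
# the gap is largest in the low-ability group, echoing the paper's finding.
```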

The decline in retail sales jobs

It is steeper than I had thought:

The final labor market trend we uncovered was a very rapid decline in retail sales jobs, shown in the figure below. Retail sales hovered at around 7.5 percent of employment from 2003 to 2013 but has since fallen to only 5.7 percent of employment, a decline of about 25 percent in just a decade. Put another way – the U.S. economy added 19 million total jobs between 2013 and 2023 but lost 850 thousand retail sales jobs. The decline started well before the pandemic.

And STEM jobs truly are on the rise, even though that is what they may be telling you in school:

The figure also shows rapid employment growth in business and management jobs. The fastest growing occupations in that category are science and engineering managers, management analysts, and other business operations specialists. This is especially striking because STEM employment declined slightly between 2000 and 2012.

With a good picture at the link.  That is all from David Deming, with further interesting material throughout.