Category: Web/Tech

Solve for the NIMBY equilibrium?

We are just beginning to think these issues through:

The government’s plan to use artificial intelligence to accelerate planning for new homes may be about to hit an unexpected roadblock: AI-powered nimbyism.

A new service called Objector is offering “policy-backed objections in minutes” to people who are upset about planning applications near their homes.

It uses generative AI to scan planning applications and check for grounds for objection, ranking these as “high”, “medium” or “low” impact. It then automatically creates objection letters, AI-written speeches to deliver to the planning committees, and even AI-generated videos to “influence councillors”.

Kent residents Hannah and Paul George designed the system after estimating they spent hundreds of hours attempting to navigate the planning process when they opposed plans to convert a building near their home into a mosque.

Here is the full story.  Via Aaron K.

In defense of Schumpeter

Factories of Ideas? Big Business and the Golden Age of American Innovation (Job Market Paper) [PDF]

This paper studies the Great Merger Wave (GMW) of 1895-1904—the largest consolidation event in U.S. history—to identify how Big Business affected American innovation. Between 1880 and 1940, the U.S. experienced a golden age of breakthrough discoveries in chemistry, electronics, and telecommunications that established its technological leadership. Using newly constructed data linking firms, patents, and inventors, I show that consolidation substantially increased innovation. Among firms already innovating before the GMW, consolidation led to an increase of 6 patents and 0.6 breakthroughs per year—roughly four-fold and six-fold increases, respectively. Firms with no prior patents were more likely to begin innovating. The establishment of corporate R&D laboratories served as a key mechanism driving these gains. Building a matched inventor–firm panel, I show that lab-owning firms enjoyed a productivity premium not due to inventor sorting, robust within size and technology classes. To assess whether firm-level effects translated into broader technological progress, I examine total patenting within technological domains. Overall, the GMW increased breakthroughs by 13% between 1905 and 1940, with the largest gains in science-based fields (30% increase).

That is the job market paper of Pier Paolo Creanza, who is on the market this year from Princeton.

My excellent Conversation with Sam Altman

Recorded live in Berkeley, at the Roots of Progress conference (an amazing event), here is the material with transcript, here is the episode summary:

Sam Altman makes his second appearance on the show to discuss how he’s managing OpenAI’s explosive growth, what he’s learned about hiring hardware people, what makes roon special, how far they are from an AI-driven replacement to Slack, what GPT-6 might enable for scientific research, when we’ll see entire divisions of companies run mostly by AI, what he looks for in hires to gauge their AI-resistance, how OpenAI is thinking about commerce, whether GPT-6 will write great poetry, why energy is the binding constraint to chip-building and where it’ll come from, his updated plan for how he’d revitalize St. Louis, why he’s not worried about teaching normies to use AI, what will happen to the price of healthcare and housing, his evolving views on freedom of expression, why accidental AI persuasion worries him more than intentional takeover, the question he posed to the Dalai Lama about superintelligence, and more.

Excerpt:

COWEN: What is it about GPT-6 that makes that special to you?

ALTMAN: If GPT-3 was the first moment where you saw a glimmer of something that felt like the spiritual Turing test getting passed, GPT-5 is the first moment where you see a glimmer of AI doing new science. It’s very tiny things, but here and there someone’s posting like, “Oh, it figured this thing out,” or “Oh, it came up with this new idea,” or “Oh, it was a useful collaborator on this paper.” There is a chance that GPT-6 will be a GPT-3 to 4-like leap that happened for Turing test-like stuff for science, where 5 has these tiny glimmers and 6 can really do it.

COWEN: Let’s say I run a science lab, and I know GPT-6 is coming. What should I be doing now to prepare for that?

ALTMAN: It’s always a very hard question. Even if you know this thing is coming, if you adapt your —

COWEN: Let’s say I even had it now, right? What exactly would I do the next morning?

ALTMAN: I guess the first thing you would do is just type in the current research questions you’re struggling with, and maybe it’ll say, “Here’s an idea,” or “Run this experiment,” or “Go do this other thing.”

COWEN: If I’m thinking about restructuring an entire organization to have GPT-6 or 7 or whatever at the center of it, what is it I should be doing organizationally, rather than just having all my top people use it as add-ons to their current stock of knowledge?

ALTMAN: I’ve thought about this more for the context of companies than scientists, just because I understand that better. I think it’s a very important question. Right now, I have met some orgs that are really saying, “Okay, we’re going to adopt AI and let AI do this.” I’m very interested in this, because shame on me if OpenAI is not the first big company run by an AI CEO, right?

COWEN: Just parts of it. Not the whole thing.

ALTMAN: No, the whole thing.

COWEN: That’s very ambitious. Just the finance department, whatever.

ALTMAN: Well, but eventually it should get to the whole thing, right? So we can use this and then try to work backwards from that. I find this a very interesting thought experiment of what would have to happen for an AI CEO to be able to do a much better job of running OpenAI than me, which clearly will happen someday. How can we accelerate that? What’s in the way of that? I have found that to be a super useful thought experiment for how we design our org over time and what the other pieces and roadblocks will be. I assume someone running a science lab should try to think the same way, and they’ll come to different conclusions.

COWEN: How far off do you think it is that just, say, one division of OpenAI is 85 percent run by AIs?

ALTMAN: Any single division?

COWEN: Not a tiny, insignificant division, mostly run by the AIs.

ALTMAN: Some small single-digit number of years, not very far. When do you think I can be like, “Okay, Mr. AI CEO, you take over”?

Of course we discuss roon as well, not to mention life on the moons of Saturn…

Andrej and Dwarkesh as philosophy

If you follow AI at all, you probably do not need another recommendation of the Andrej Karpathy and Dwarkesh Patel podcast, linked to here:

I hardly ever listen to podcasts, but at almost two and a half hours I found this one worthwhile and that was at 1x (I don’t listen to podcasts at higher speed, not wanting to disrupt the drama of the personalities).  What struck me is how philosophical so many aspects of the discussion were.  Will this end up being the best “piece of philosophy” done this year?  Probably.  Neither participant of course is a trained philosopher, but neither were Plato or Kierkegaard.  They are both very focused on real issues however, and new issues at that.  And dialogue is hardly a disqualifying medium when it comes to philosophy.

Some guy on Twitter felt I was slighting this book in my tweet on the matter.  I’ll let history judge this one, as we’ll see which issues people are still talking about fifty years from now (note I said nothing against that book in my tweet, nor against contemporary philosophy, I just said this podcast was philosophical and very good).  I’ve made the point before (pre-LLM) that current academic philosophers are losing rather dramatically in the fight for intellectual influence, and perhaps more of a serious engagement with these issues would help.  I’ve seen plenty of philosophical work on AI, but none of it yet seems to be interesting.  For that you have to go to the practitioners and the Bay Area obsessives.

The American economy is showing its flexibility

That is the theme of my recent Free Press column on AI.  Excerpt:

The more quickly the world changes, the more the quality of our capital markets matters. And the world is indeed changing quickly: AI will soon be present in virtually every job. Many of us already use it informally for legal and medical advice, research, and even companionship.

And to close:

From the point of view of an investor, it matters whether or not we’re in an AI bubble. But if you are seeking to understand long-term social and economic trends, the bubble question is primarily a matter of short-term interest and timing. It will not decide where the economy is headed long-term.

Instead, what we are seeing is that America, at the drop of a hat, can turn on a dime and reallocate capital on an unparalleled scale, to our great and enduring benefit. Unless you were around to witness World War II, none of us have seen anything like this before. Do not expect the ride to be smooth or predictable, but feel free to sit back and enjoy: This is history in the making.

By some estimates, at least three-quarters of the world’s compute is in the United States.

The evolution of Albanian AI governance

Albania’s AI-generated minister, Diella, is “pregnant,” Prime Minister Edi Rama has announced. He revealed plans to create “83 children”, or assistants, one for each Socialist Party member of parliament.

“We took quite a risk today with Diella here and we did very well. So for the first time Diella is pregnant and with 83 children,” he said at the Berlin Global Dialogue (BGD). Rama said the “children,” or assistants, will record everything that happens in parliament and keep legislators informed about discussions or events they miss.

“Each one…will serve as an assistant for them who will participate in parliamentary sessions, and will keep a record of everything that happens and will make suggestions to members of parliament. These children will have the knowledge of their mother,” Rama said.

Here is the full story, bizarre throughout.  At least you cannot say they are anti-natalist.

Should we worry about AI’s circular deals?

The yet once again on target Noah Smith reports:

As far as I can tell, there are two main fears about this sort of deal. The first is that the deals will artificially inflate companies’ revenue, tricking investors into overvaluing their stock or lending them too much money. The second is that the deals increase systemic risk by tying all of the AI companies’ fortunes to each other.

Let’s start with the first of these risks. The question here is whether AI’s circular deals are an example of round-tripping or vendor financing.

Suppose two startups — let’s call them Aegnor and Beleg — secretly agree to inflate each other’s revenue. Aegnor buys ad space on Beleg’s website, and Beleg buys ad space on Aegnor’s website. Both companies’ revenues go up. They’re not making any profits, and they’re not generating any cash flows, because the money is just changing hands back and forth. But if investors are looking for companies with “traction”, they might see Aegnor and Beleg’s topline revenue numbers go up. If they fail to dig any deeper, they might give both companies a bunch of investment money that they didn’t earn. This is called “round-tripping”, and it happened occasionally during the dotcom boom.

Now what I just described is completely illegal, because the companies colluded in secret. But you can also have something a little similar happen by accident, in a perfectly legal way. If there are a bunch of startups whose business model is selling to other startups, you can get some of the “round-tripping” effect without any collusion.

On the other hand, it’s perfectly normal and healthy for, say, General Motors to lend its customers the money they use to buy GM cars. In fact, GM has a financing arm specifically to do this. This is called vendor finance. It’s perfectly legal and commonplace, and most people think there’s nothing wrong with it. The transaction being financed — a customer buying a car — is something we know has value. People really do want cars; GM Financial helps them get those cars.

So the question is: Are the AI industry’s circular deals more like round-tripping, or are they more like vendor finance? I’m inclined to say it’s the latter.
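The round-tripping mechanics in the Aegnor/Beleg example reduce to simple bookkeeping; here is a toy sketch (the firms and the $1M figure are illustrative, not from any real deal):

```python
# Toy ledger for round-tripping: two firms each "buy" the same dollar amount
# of ads from the other. Reported revenue rises for both, but net cash does
# not move, since each payment out is matched by a payment in.
def round_trip(swap: int) -> dict:
    revenue_boost = swap      # each firm books the other's purchase as revenue
    net_cash = swap - swap    # cash received minus cash paid out
    return {"revenue_boost": revenue_boost, "net_cash": net_cash}

ledger = round_trip(1_000_000)
print(ledger)  # {'revenue_boost': 1000000, 'net_cash': 0}
```

The "traction" an undiscerning investor sees is the revenue boost; the zero in net cash is what deeper diligence would reveal.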

Noah stresses that the specifics of these deals are widely reported, and no serious investors are being fooled.  I would note a parallel with horizontal or vertical integration, which also can have a financing element.  Except that here corporate control is not being exchanged as part of the deal.  “I give him some of my company, he gives me some of his — my goodness, that is circular, there must be some kind of problem there!”…just does not make any sense.

When will quantum computing work?

Huge investments are flowing into QC companies today. IonQ has a $19B market cap, Rigetti has a $10B cap, and PsiQuantum recently raised $1B. (D-Wave is not relevant, despite high qubit counts: their machines are annealers, rather than gate-based, and have less computational power than the QCs that IonQ, Rigetti, PsiQuantum, etc. are working on.) This is a lot of money for an industry generating no real revenue, and without an apparent path to revenue over the next 5 years. Qubit counts have not been doubling each year, but even if they did, we’d have 32 kq machines in 2030. (If qubits double each year, 1,000 qubits today grows to 32 kq in 5 years’ time.) There are few – if any – commercial applications for machines of that size. Will these companies keep raising larger rounds until they achieve 100 kq? Or have they got some secret sauce we don’t know about that investors are betting on? If there has been a true breakthrough, we should see much faster growth in qubit count, as well as larger and larger quantum processors, running increasingly massive programs. Note that the QC ecosystem is reasonably public and both private companies and university labs are competitive players. Advances tend to get published rather than stowed away.
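The doubling arithmetic in the excerpt checks out; a quick sketch, assuming annual doubling from a 1,000-qubit baseline:

```python
# Optimistic projection from the excerpt: qubit counts doubling every year,
# starting from roughly 1,000 qubits today (2025).
def projected_qubits(start: int, years: int) -> int:
    """Qubit count after `years` of annual doubling."""
    return start * 2 ** years

print(projected_qubits(1_000, 5))  # 32000, i.e. 32 kq by 2030
```

Even under this best-case assumption, five more doublings still leave the machines well short of the 100 kq scale the excerpt treats as commercially relevant.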

Here is more from Tom McCarthy.

Will there be a Coasean singularity?

By Peyman Shahidi, Gili Rusak, Benjamin S. Manning, Andrey Fradkin, and John J. Horton:

AI agents—autonomous systems that perceive, reason, and act on behalf of human principals—are poised to transform digital markets by dramatically reducing transaction costs. This chapter evaluates the economic implications of this transition, adopting a consumer-oriented view of agents as market participants that can search, negotiate, and transact directly. From the demand side, agent adoption reflects derived demand: users trade off decision quality against effort reduction, with outcomes mediated by agent capability and task context. On the supply side, firms will design, integrate, and monetize agents, with outcomes hinging on whether agents operate within or across platforms. At the market level, agents create efficiency gains from lower search, communication, and contracting costs, but also introduce frictions such as congestion and price obfuscation. By lowering the costs of preference elicitation, contract enforcement, and identity verification, agents expand the feasible set of market designs but also raise novel regulatory challenges. While the net welfare effects remain an empirical question, the rapid onset of AI-mediated transactions presents a unique opportunity for economic research to inform real-world policy and market design.

I call it “AI for markets in everything.”  Here is the paper, and here is a relevant Twitter thread; there is now so much new work for economists to do…

Words of wisdom

Among these changes, the most underrated is not misinformation or kooky conspiracy theories or even populism per se — it’s relentless negativity. One thing that we’ve learned from revealed preferences on the internet is that negativity-inflected stories perform better.

The impact of ultra-negativity is symmetrical in the sense that both sides do it, but it’s asymmetrical in the sense that conservatives outnumber progressives. In practice, oscillating extremism results in a right-wing authoritarian regime, not a left-wing one.

That is from the gated Matt Yglesias.  The important thing is to keep a positive, constructive attitude toward what is possible.  Content creators who do not do that, no matter what their professed views, are supporting the darker sides of MAGA.

So keep up the good work people!

Predicting Job Loss?

Hardly a day goes by without a new prediction of job growth or destruction from AI and other new technologies. Predicting job growth is a growing industry. But how good are these predictions? For 80 years the US Bureau of Labor Statistics has forecasted job growth by occupation in its Occupational Outlook series. The forecasts were generally quite sophisticated, albeit often not quantitative.

In 1974, for example, the BLS said one downward force for truck drivers was that “[T]he trend to large shopping centers rather than many small stores will reduce the number of deliveries required.” In 1963, however, they weren’t quite so accurate about pilots, writing “Over the longer run, the rate of airline employment growth is likely to slow down because the introduction of a supersonic transport plane will enable the airlines to fly more traffic without corresponding expansion in the number of airline planes and workers…”. Sad!

In a new paper, Maxim Massenkoff collects all this data and makes it quantifiable with LLM assistance. What he finds is that the Occupational Outlook performed reasonably well: occupations that were forecast to grow strongly did grow significantly more than those forecast to grow slowly or decline. But was there alpha? A little, but not much.

…these predictions were not that much better than a naive forecast based only on growth over the previous decade. One implication is that, in general, jobs go away slowly: over decades rather than years. Historically, job seekers have been able to get a good sense of the future growth of a job by looking at what’s been growing in the past.

If past predictions were only marginally better than simple extrapolations it’s hard to believe that future predictions will perform much better. At least, that is my prediction.
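The naive benchmark in question is pure persistence: project each occupation's next-decade growth as equal to its previous-decade growth. A minimal sketch, with occupations and rates invented for illustration (not taken from Massenkoff's data):

```python
# "Naive forecast" benchmark: each occupation's projected growth rate is
# simply its growth over the previous decade. The numbers here are
# hypothetical, purely to illustrate the persistence rule.
past_decade_growth = {"truck drivers": 0.10, "pilots": 0.25, "typists": -0.30}

def naive_forecast(past_growth: dict) -> dict:
    """Persistence forecast: next decade's growth = previous decade's."""
    return dict(past_growth)

forecast = naive_forecast(past_decade_growth)
print(forecast["typists"])  # -0.3: shrinking occupations forecast to keep shrinking
```

The paper's finding is that the BLS's far more sophisticated process beat this one-liner only modestly, which is the sense in which jobs "go away slowly."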

Those new service sector jobs

Yes — as of late 2025, several robotics and AI startups are literally paying people to fold their laundry (or perform similar chores) while recording themselves, in order to train robots in dexterous, human-like task performance.

Companies such as Encord, Micro1, and Scale AI have launched paid “data collection” programs aimed at generating real-world video datasets for robotic learning. Participants are compensated to film themselves carrying out everyday household activities — folding laundry, loading dishwashers, making coffee, or tidying up. The footage is then annotated to help AI systems learn how to manipulate deformable objects, coordinate finger movements, and complete multi-step domestic tasks.

That is from Perplexity, original cite from Samir Varma.