Category: Web/Tech

The AI polity that is Albania?

While the rest of Europe bickers over the safety and scope of artificial intelligence, Albania is tapping it to accelerate its EU accession.

It’s even mulling an AI-run ministry.

Prime Minister Edi Rama mentioned AI last month as a tool to stamp out corruption and increase transparency, saying the technology could soon become the most efficient member of the Albanian government.

“One day, we might even have a ministry run entirely by AI,” Rama said at a July press conference while discussing digitalization. “That way, there would be no nepotism or conflicts of interest,” he argued.

Local developers could even work toward creating an AI model to elect as minister, which could lead the country to “be the first to have an entire government with AI ministers and a prime minister,” Rama added.

While no formal steps have been taken and Rama’s job is not yet officially up for grabs, the prime minister said the idea should be seriously considered…

AI is already being used in the administration to manage the thorny matter of public procurement, an area the EU has asked the government to shore up, as well as to analyze tax and customs transactions in real time, identifying irregularities.

Here is the whole Politico story, via Holger.

Data center facts of the day

JLL estimates $170bn of assets will require construction lending or permanent financing this year. Between now and 2029, however, global spending on data centres will hit almost $3tn, according to Morgan Stanley analysts. Of that, just $1.4tn is forecast to come from capital expenditure by Big Tech groups, leaving a mammoth $1.5tn of financing required from investors and developers.

About $60bn of loans are going into roughly $440bn of data centre development projects this year, twice as much debt as in 2024, according to a recent presentation by law firm Norton Rose Fulbright. More than $25bn of loans were underwritten in the first quarter of this year alone, according to a report by Newmark.
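
The figures above hang together as simple arithmetic. A quick sketch (only arithmetic on the numbers quoted; the article's "$1.5tn" reflects rounding of the "almost $3tn" total):

```python
# Back-of-the-envelope check of the financing figures cited above.
# All inputs come from the quoted article (Morgan Stanley / Norton Rose
# Fulbright / Newmark); this is just arithmetic on them, not new data.

total_spend_bn = 3_000        # ~$3tn global data-centre spend through 2029
big_tech_capex_bn = 1_400     # forecast Big Tech capital expenditure
financing_gap_bn = total_spend_bn - big_tech_capex_bn
print(f"Financing gap: ${financing_gap_bn / 1000:.1f}tn")  # roughly the "mammoth $1.5tn"

loans_bn = 60                 # loans into this year's development projects
projects_bn = 440             # value of this year's development projects
debt_share = loans_bn / projects_bn
print(f"Debt share of this year's projects: {debt_share:.0%}")  # still mostly equity-funded
```

The second ratio is the striking one: even after doubling from 2024, debt covers only about a seventh of this year's project value, which is why so much investor and developer financing is still needed.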

Here is more from a very good FT article.

My excellent Conversation with Nate Silver

Here is the audio, video, and transcript.  Here is part of the episode summary:

Tyler and Nate dive into expected utility theory and random Nash equilibria in poker, whether Silver’s tell-reading abilities transfer to real-world situations like NBA games, why academic writing has disappointed him, his move from atheism to agnosticism, the meta-rationality of risk-taking, electoral systems and their flaws, 2028 presidential candidates, why he thinks superforecasters will continue to outperform AI for the next decade, why more athletes haven’t come out as gay, redesigning the NBA, what mentors he needs now, the cultural and psychological peculiarities of Bay Area intellectual communities, why Canada can’t win a Stanley Cup, the politics of immigration in Europe and America, what he’ll work on next, and more.

Excerpt:

COWEN: If you think about the Manifold types in terms of the framework in your book, how they think about risk — is there a common feature that they’re more risk-averse, or that they worry more? Is there a common feature that they like the idea that they hold some kind of secret knowledge that other people do not have? How do you classify them? They’re just high in openness, or what is it?

SILVER: They’re high in openness to experience. I think they’re very high in conscientiousness.

COWEN: Are they? I don’t know.

SILVER: Some of them are. Some of them are, yes.

COWEN: I think of them as high variance in conscientiousness, rather than high in it.

[laughter]

SILVER: The EAs and the rationalists are more high variance, I think. A certain type of gullibility is one problem. I think, obviously, EA took a lot of hits for Sam Bankman-Fried, but if anything, they probably should have taken more reputational damage. That was really bad, and there were a lot of signs of it, including his interviews with you and other people like that. It contrasts with poker players who have similar phenotypes but are much more suspicious and much more street smart.

Also, the Bay Area is weird. I feel like the West Coast is diverging more from the rest of the country.

COWEN: I agree.

SILVER: It’s like a long way away. Just the mannerisms are different. You go to a small thing. You go to a house party in the Bay Area. There may not be very much wine, for example. In New York, if the host isn’t drinking, then it’d be considered sacrilege not to have plenty of booze at a party. Little things like that, little cultural norms. You go to Seattle — it feels like Canada to me almost, and so these things are diverging more.

COWEN: Why is belief in doom correlated with practice of polyamory? And I think it is.

SILVER: If you ask Aella, I guess, she might say, if we’re all going to die or go to whatever singularity there is, we might as well have fun in the meantime. There’s some of that kind of hedonism. Although in general, it’s not a super hedonistic movement.

COWEN: It seems too economistic to me. Even I, the economist — I don’t feel people think that economistically. There’s more likely some psychological predisposition toward both views.

SILVER: I guess you could argue that society would be better organized in a more polyamorous relationship. People do it implicitly in a lot of ways anyway, including in the LGBTQ [laughs] community, which has different attitudes toward it potentially. And if there’s not as much childbearing, that can have an effect, potentially. I think it’s like a not-being-constrained-by-your-own-society thing that is taken very seriously in that group. There’s enough disconnectedness and aloofness where they’re able to play it out in practice more.

That creeps a little bit into Silicon Valley too, which can be much more whimsical and fanciful than the Wall Street types I know, for example.

Recommended.  Here is my 2024 episode with Nate, here is my 2016 episode with him.

The Rising Returns to R&D: Ideas Are Not Getting Harder to Find (one hypothesis)

R&D investment has grown robustly, yet aggregate productivity growth has stagnated. Is this because “ideas are getting harder to find”? This paper uses micro-data from the US Census Bureau to explore the relationship between R&D and productivity in the manufacturing sector from 1976 to 2018. We find that both the elasticity of output (TFP) with respect to R&D and the marginal returns to R&D have risen sharply. Exploring factors affecting returns, we conclude that R&D obsolescence rates must have risen. Using a novel estimation approach, we find consistent evidence of sharply rising technological rivalry and obsolescence. These findings suggest that R&D has become more effective at finding productivity-enhancing ideas, but these ideas may also render rivals’ technologies obsolete, making innovations more transient. Because of obsolescence, rising R&D does not necessarily mean rising aggregate productivity growth.

Here is the paper by Yoshiki Ando (Singapore Management University, TPRI), James Bessen (BU, TPRI), and Xiupeng Wang.  Via Arjun.
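
The mechanism can be caricatured in a stylized sketch (my simplification for illustration, not the authors' estimation; all numbers are hypothetical): each firm's R&D raises its own productivity, but rising technological rivalry means innovations also obsolesce rivals' technology, so private returns can rise while aggregate productivity growth stays flat.

```python
# Stylized sketch of the paper's headline finding (not its actual model):
# higher returns to R&D need not mean higher aggregate productivity growth
# if R&D also obsolesces rivals' technology.

def aggregate_tfp_growth(rd_growth, elasticity, obsolescence):
    """Own-productivity gain from R&D minus losses from rivals' obsolescence."""
    return elasticity * rd_growth - obsolescence

# Hypothetical parameters chosen only to illustrate the offsetting forces.
low_rivalry  = aggregate_tfp_growth(rd_growth=0.04, elasticity=0.5, obsolescence=0.0)
high_rivalry = aggregate_tfp_growth(rd_growth=0.04, elasticity=0.9, obsolescence=0.016)

print(f"Low rivalry:  {low_rivalry:.1%} aggregate TFP growth")
print(f"High rivalry: {high_rivalry:.1%} aggregate TFP growth")
# Same aggregate growth, despite the much higher elasticity in the second case.
```

In the second scenario R&D is nearly twice as effective privately, yet the obsolescence term eats the difference, which is the paper's point about transient innovations.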

David Sacks is correct

A BEST CASE SCENARIO FOR AI?

The Doomer narratives were wrong. Predicated on a “rapid take-off” to AGI, they predicted that the leading AI model would use its intelligence to self-improve, leaving others in the dust, and quickly achieving a godlike superintelligence.

Instead, we are seeing the opposite: 

— the leading models are clustering around similar performance benchmarks;

— model companies continue to leapfrog each other with their latest versions (which shouldn’t be possible if one achieves rapid take-off);

— models are developing areas of competitive advantage, becoming increasingly specialized in personality, modes, coding and math as opposed to one model becoming all-knowing. 

None of this is to gainsay the progress. We are seeing strong improvement in quality, usability, and price/performance across the top model companies. This is the stuff of great engineering and should be celebrated. It’s just not the stuff of apocalyptic pronouncements. Oppenheimer has left the building. 

The AI race is highly dynamic so this could change. But right now the current situation is Goldilocks:

— We have 5 major American companies vigorously competing on frontier models. This brings out the best in everyone and helps America win the AI race.  As @BalajiS has written: “We have many models from many factions that have all converged on similar capabilities, rather than a huge lead between the best model and the rest. So we should expect a balance of power between various human/AI fusions rather than a single dominant AGI that will turn us all into paperclips/pillars of salt.”

— So far, we have avoided a monopolistic outcome that vests all power and control in a single entity. In my view, the most likely dystopian outcome with AI is a marriage of corporate and state power similar to what we saw exposed in the Twitter Files, where “Trust & Safety” gets weaponized into government censorship and control. At least when you have multiple strong private sector players, that gets harder. By contrast, winner-take-all dynamics are more likely to produce Orwellian outcomes.

— There is likely to be a major role for open source. These models excel at providing 80-90% of the capability at 10-20% of the cost. This tradeoff will be highly attractive to customers who value customization, control, and cost over frontier capabilities. China has gone all-in on open source, so it would be good to see more American companies competing in this area, as OpenAI just did. (Meta also deserves credit.)

— There is likely to be a division of labor between generalized foundation models and specific verticalized applications. Instead of a single superintelligence capturing all the value, we are likely to see numerous agentic applications solving “last mile” problems. This is great news for the startup ecosystem. 

— There is also an increasingly clear division of labor between humans and AI. Despite all the wondrous progress, AI models are still at zero in terms of setting their own objective function. Models need context, they must be heavily prompted, the output must be verified, and this process must be repeated iteratively to achieve meaningful business value. This is why Balaji has said that AI is not end-to-end but middle-to-middle. This means that apocalyptic predictions of job loss are as overhyped as AGI itself. Instead, the truism that “you’re not going to lose your job to AI but to someone who uses AI better than you” is holding up well. 

In summary, the latest releases of AI models show that model capabilities are more decentralized than many predicted. While there is no guarantee that this continues — there is always the potential for the market to accrete to a small number of players once the investment super-cycle ends — the current state of vigorous competition is healthy. It propels innovation forward, helps America win the AI race, and avoids centralized control. This is good news — that the Doomers did not expect.

Here is the tweet link.  As you have read here, I am quite pleased with GPT-5.  But it does not indicate that the more extreme (whether destructive or utopian) scenarios for AI development are correct, quite the contrary.  Below the Sacks tweet, you can read some rather unconvincing responses.

Is anyone worth a billion dollars?

That is the topic of my latest Free Press column.  Excerpt:

…in recent years they [Meta] have moved into AI in a big way. Over that same time period, the valuation of the company has risen from about $236 billion in November 2022 to almost $2 trillion at the end of this July.

The reasons for share price movements are not always transparent, but the consensus is that AI investments are a significant reason why Meta is now worth much more. The original metaverse plans did not take off, and Facebook and Instagram are relatively mature products that have not changed much of late.

So the market, responding to Meta’s promises about AI, expects that it will deliver on that $2 trillion value. Yet their current Llama models are not state of the art. Meta needs something better and more competitive.

Meta thus has to justify an extra $1.8 trillion in its valuation, which of course they could lose if markets decide they are not up to the task. Spending some billions on top-quality AI personnel is easy to justify when viewed in terms of the value gain Meta already has been reaping.

And it is not just about justifying the current $2 trillion valuation. Meta possibly could be worth more yet. It probably has not escaped their attention that as of late, both Nvidia and Microsoft have had valuations of about $4 trillion. So the possibility of further upside enters the equation as well.

Keep in mind that better AI also will boost the profits Meta can receive from ads on Facebook and Instagram. Click-through rates on ads typically are small, so even a modest increase in targeting ability can mean a lot more profit. Meta does not have to achieve superintelligence to get its money back on these investments; they just need better AI. There is also a plan to put more ads on WhatsApp (currently the user experience is mostly ad-free), and that too can benefit from better AI and better ad targeting.
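The click-through point is easy to see with toy numbers (all hypothetical; Meta's actual rates and prices are not public in this form):

```python
# Toy illustration of why small targeting gains matter at ad scale.
# All figures are hypothetical, chosen only to show the arithmetic.

def ad_revenue(impressions, ctr, price_per_click):
    """Revenue from a block of ad inventory at a given click-through rate."""
    return impressions * ctr * price_per_click

baseline = ad_revenue(impressions=1_000_000_000, ctr=0.010, price_per_click=1.0)
improved = ad_revenue(impressions=1_000_000_000, ctr=0.012, price_per_click=1.0)

# A 0.2-percentage-point CTR improvement is a 20% revenue lift
# on exactly the same inventory.
lift = improved / baseline - 1
print(f"Revenue lift: {lift:.0%}")
```

Because the baseline rate is so low, a tiny absolute improvement in targeting translates into a large proportional revenue gain, which is why "just better AI" can pay for itself without anything resembling superintelligence.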

The general principle is that top talent is typically undervalued, if only because of egalitarian norms in pay structures.

GPT-5, a short and enthusiastic review

I am a big fan, as on my topics of interest it does much better than o3, and that is saying something.  It is also lightning fast, even for complex queries of economics, history, and ideas.

One of the most impressive features is its uncanny sense of what you might want to ask next.  And it has a good sense for when to give you a (sometimes interactive!) chart or diagram.  It is a much better writer than o3.

I have had early access, and love to just keep on asking it, asking it, asking it questions.  Today I was asking about Irish coinage disputes from 1724 (Swift) and now about different kinds of Buddhism and their historical roots.  It was very accurate on cuisine in northern Ghana.

It is the best learning tool I have.  Furthermore, it feels fun.

Here is a review from Ethan Mollick.

What is superintelligence?

The Free Press ran a symposium on that topic, I was a contributor.  Here is the closing part of mine:

“I asked o3 pro, a leading AI model from OpenAI, “What is superintelligence?” Here is the opening to a much longer answer:

Superintelligence is a term most commonly used in artificial intelligence (AI) studies and the philosophy of mind to denote any intellect that greatly outperforms the best human brains in virtually every relevant domain—from scientific creativity and social skills to general wisdom and strategic reasoning.

Is that a superintelligent answer? No. Is it better than mine? The AI thinks it is a toss-up: “Cowen’s version excels at pithiness and viewpoint; mine excels at breadth and rigor. Neither is universally ‘better’; each serves different reader needs.”

I can breathe easily, at least for today.”

A household expenditure approach to measuring AI progress

Often researchers focus on the capabilities of AI models, for instance what kinds of problems they might solve.  Or how they might boost productivity growth rates.  But a different question is to ask how they might lower the cost of living for ordinary Americans.  And while I am optimistic about the future prospects and powers of AI models, on that particular question I think progress will be slow, mostly through no fault of the AIs.

If you consider a typical household budget, some of the major categories might be:

A. Rent and home purchase

B. Food

C. Health care

D. Education

Let us consider each in turn.  Do note that in the longer run AI will do a lot to accelerate and advance science.  But in the next five years, most of those advances may not be so visible or available.  And so I will focus on some budgetary items in the short run:

A. When it comes to rent, a lot of the constraints are on the supply side.  So even very powerful AI will not alleviate those problems.  In fact strong AI could make it more profitable to live near other talented people, which could raise a lot of rents.  Real wages for the talented would go up too, still I would not expect rents to fall per se.

Strong AI might make it easier to live, say, in Maine, which would involve a de facto lowering of rents, even if no single rental price falls.  Again, maybe.

B. When it comes to food, in some long run AI will genetically engineer better and stronger crops, which in time will be cheaper.  We will develop better methods of irrigation, better systems for trading land, better systems for predicting the weather and protecting against storms, and so on.  Still, I observe that agricultural improvements (whether AI-rooted or not) can spread very slowly.  A lot of rural Mexico still does not use tractors, for instance.

So I can see AI lowering the price of food in twenty years, but in the meantime a lot of real world, institutional, legal, and supply side constraints and bottlenecks will bind.  In the short run, greater energy demands could well make food more expensive.

C. When it comes to health care, I expect all sorts of fabulous new discoveries.  I am not sure how rapidly they will arrive, but at some point most Americans will die of old age, if they survive accidents, and of course driverless vehicles will limit some of those too.  Imagine most people living to the age of 97, or something like that.

In terms of human welfare, that is a wonderful outcome.  Still, there will be a lot more treatments, maybe some of them customized for you, as is the case with some of the new cancer treatments.  Living to 97, your overall health care expenses probably will go up.  It will be worth it, by far, but I cannot say this will alleviate cost of living concerns.  It might even make them worse.  Your total expenditures on health care are likely to rise.

D. When it comes to education, the highly motivated and curious already learn a lot more from AI and are more productive.  (Many of those gains, though, translate into more leisure time at work, at least until institutions adjust more systematically.)  I am not sure when AI will truly help to motivate the less motivated learners.  But I expect not right away, and maybe not for a long time.  That said, a good deal of education is much cheaper right now, and also more effective.  But the kinds of learning associated with the lower school grades are not cheaper at all, and for the higher levels you still will have to pay for credentialing for the foreseeable future.

In sum, I think it will take a good while before AI significantly lowers the cost of living, at least for most people.  We have a lot of other constraints in the system.  So perhaps AI will not be that popular.  So the AIs could be just tremendous in terms of their intrinsic quality (as I expect and indeed already is true), and yet living costs would not fall all that much, and could even go up.

Claims about DOGE and AI

The U.S. DOGE Service is using a new artificial intelligence tool to slash federal regulations, with the goal of eliminating half of Washington’s regulatory mandates by the first anniversary of President Donald Trump’s inauguration, according to documents obtained by The Washington Post and four government officials familiar with the plans.

The tool, called the “DOGE AI Deregulation Decision Tool,” is supposed to analyze roughly 200,000 federal regulations to determine which can be eliminated because they are no longer required by law, according to a PowerPoint presentation obtained by The Post that is dated July 1 and outlines DOGE’s plans. Roughly 100,000 of those rules would be deemed worthy of trimming, the PowerPoint estimates — mostly through the automated tool with some staff feedback. The PowerPoint also suggests the AI tool will save the United States trillions of dollars by reducing compliance requirements, slashing the federal budget and unlocking unspecified “external investment.”

The tool has already been used to complete “decisions on 1,083 regulatory sections” at the Department of Housing and Urban Development in under two weeks, according to the PowerPoint, and to write “100% of deregulations” at the Consumer Financial Protection Bureau (CFPB). Three HUD employees — as well as documents obtained by The Post — confirmed that an AI tool was recently used to review hundreds, if not more than 1,000, lines of regulations at that agency and suggest edits or deletions.

Here is the full story, I will keep you all posted…

*The Economist* on the speed of AI take-off

A booming but workerless economy may be humanity’s ultimate destination. But, argues Tyler Cowen of George Mason University, an economist who is largely bullish about AI, change will be slower than the underlying technology permits. “There’s a lot of factors of production…the stronger the AI is, the more the weaknesses of the other factors bind you,” he says. “It could be energy; it could be human stupidity; it could be regulation; it could be data constraints; it could just be institutional sluggishness.” Another possibility is that even a superintelligence would run out of ideas. “AI may resolve a problem with the fishermen, but it wouldn’t change what is in the pond,” wrote Philippe Aghion of LSE and others in a working paper in 2017.

Here is the full piece, of interest throughout.