Category: Web/Tech
UK (Google) fact of the day
Google now spends more on physical capital like datacentres ($85 billion / year) than the entire UK defence budget ($79 billion / year).
Here is the source.
A household expenditure approach to measuring AI progress
Often researchers focus on the capabilities of AI models, for instance what kinds of problems they might solve, or how they might boost productivity growth rates. But a different question is to ask how they might lower the cost of living for ordinary Americans. And while I am optimistic about the future prospects and powers of AI models, on that particular question I think progress will be slow, though mostly through no fault of the AIs.
If you consider a typical household budget, some of the major categories might be:
A. Rent and home purchase
B. Food
C. Health care
D. Education
Let us consider each in turn. Do note that in the longer run AI will do a lot to accelerate and advance science. But in the next five years, most of those advances may not be so visible or available. And so I will focus on some budgetary items in the short run:
A. When it comes to rent, a lot of the constraints are on the supply side, so even very powerful AI will not alleviate those problems. In fact, strong AI could make it more profitable to live near other talented people, which could raise a lot of rents. Real wages for the talented would go up too, but still I would not expect rents to fall per se.
Strong AI might make it easier to live, say, in Maine, which would involve a de facto lowering of rents, even if no single rental price falls. Again, maybe.
B. When it comes to food, in some long run AI will genetically engineer better and stronger crops, which in time will be cheaper. We will develop better methods of irrigation, better systems for trading land, better systems for predicting the weather and protecting against storms, and so on. Still, I observe that agricultural improvements (whether AI-rooted or not) can spread very slowly. A lot of rural Mexico still does not use tractors, for instance.
So I can see AI lowering the price of food in twenty years, but in the meantime a lot of real world, institutional, legal, and supply side constraints and bottlenecks will bind. In the short run, greater energy demands could well make food more expensive.
C. When it comes to health care, I expect all sorts of fabulous new discoveries. I am not sure how rapidly they will arrive, but at some point most Americans will die of old age, if they survive accidents, and of course driverless vehicles will limit some of those accidents too. Imagine most people living to the age of 97, or something like that.
In terms of human welfare, that is a wonderful outcome. Still, there will be a lot more treatments, maybe some of them customized for you, as is the case with some of the new cancer treatments. If you live to 97, your overall health care expenses probably will go up. It will be worth it, by far, but I cannot say this will alleviate cost of living concerns. It might even make them worse. Your total expenditures on health care are likely to rise.
D. When it comes to education, the highly motivated and curious already learn a lot more from AI and are more productive. (Many of those gains, though, translate into more leisure time at work, at least until institutions adjust more systematically.) I am not sure when AI will truly help to motivate the less motivated learners, but I expect not right away, and maybe not for a long time. That said, a good deal of education is much cheaper right now, and also more effective. But the kinds of learning associated with the lower school grades are not cheaper at all, and at the higher levels you still will have to pay for credentialing for the foreseeable future.
In sum, I think it will take a good while before AI significantly lowers the cost of living, at least for most people. We have a lot of other constraints in the system, so perhaps AI will not be that popular. The AIs could be just tremendous in terms of their intrinsic quality (as I expect, and indeed is already true), and yet living costs would not fall all that much, and could even go up.
Claims about DOGE and AI
The tool has already been used to complete “decisions on 1,083 regulatory sections” at the Department of Housing and Urban Development in under two weeks, according to the PowerPoint, and to write “100% of deregulations” at the Consumer Financial Protection Bureau (CFPB). Three HUD employees — as well as documents obtained by The Post — confirmed that an AI tool was recently used to review hundreds, if not more than 1,000, lines of regulations at that agency and suggest edits or deletions.
Here is the full story, I will keep you all posted…
USA fact of the day
You know what’s cool… a quadrillion tokens. We processed almost 1,000,000,000,000,000 tokens last month, more than double the amount from May.
That is from Demis Hassabis. And here is Demis discussing his own work schedule.
*The Economist* on the speed of AI take-off
A booming but workerless economy may be humanity’s ultimate destination. But, argues Tyler Cowen of George Mason University, an economist who is largely bullish about AI, change will be slower than the underlying technology permits. “There’s a lot of factors of production…the stronger the AI is, the more the weaknesses of the other factors bind you,” he says. “It could be energy; it could be human stupidity; it could be regulation; it could be data constraints; it could just be institutional sluggishness.” Another possibility is that even a superintelligence would run out of ideas. “AI may resolve a problem with the fishermen, but it wouldn’t change what is in the pond,” wrote Philippe Aghion of LSE and others in a working paper in 2017.
Here is the full piece, of interest throughout.
That was then, this is now, AI edition
COWEN: They can now do Math Olympiad problems. There’s one forecast by someone working on this that, within a year, they’ll win Math Olympiads. It could be wrong, but when you say that it’s only a year away, I would take that pretty seriously.
That was from my Nate Silver CWT taped on July 22, 2024. We just taped a new episode today…
Stablecoin podcast with Garett Jones
37 minutes, fresh material in here, see also Spotify under Bluechip Dialogues.
The AI culture that is Faroe
Fed up with too much planning and decision-making on holiday? The Faroe Islands tourist board says its latest initiative taps into a trend for travellers seeking “the joy of surrender” on trips “where control is intentionally let go in favour of serendipity and spontaneity”. Their needs are answered in the nation’s fleet of “self-navigating rental cars”, launched this month, which — while they are not self-driving — will direct visitors on itineraries around the archipelago devised by locals.
Each route features between four and six destinations over the course of three to six hours, with only one section of the itinerary revealed at a time to maintain an element of surprise. Along the way, the navigation system will also share local stories tied to each place.
Here is more from Tom Robbins at the FT.
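Purely to illustrate the mechanic, here is a minimal sketch of a reveal-one-leg-at-a-time itinerary in Python. Everything below, from the class names to the sample stops and blurbs, is my own invention, not the Faroese system's actual software.

```python
from dataclasses import dataclass

@dataclass
class Leg:
    destination: str   # one of the four to six stops on the route
    story: str         # the local story shared on arrival

@dataclass
class MysteryItinerary:
    """Reveals a locally devised route one leg at a time."""
    legs: list[Leg]
    _next: int = 0

    def reveal_next(self) -> Leg | None:
        """Return the next leg, or None once the route is complete."""
        if self._next >= len(self.legs):
            return None
        leg = self.legs[self._next]
        self._next += 1
        return leg

# A hypothetical four-stop route; real routes run four to six stops.
route = MysteryItinerary([
    Leg("Gásadalur", "Reachable only by mountain path until the 2004 tunnel."),
    Leg("Saksun", "A turf-roofed church village above a tidal lagoon."),
    Leg("Tjørnuvík", "A black-sand beach facing the Risin and Kellingin sea stacks."),
    Leg("Kirkjubøur", "The ruins of the medieval Magnus Cathedral."),
])
while (leg := route.reveal_next()) is not None:
    print(f"Next stop: {leg.destination}. {leg.story}")
```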
Noah Smith on the economics of AI
What’s kind of amazing is that with the exception of Derek [Thompson], none of these writers even tried to question the standard interpretation of the data. They just all kind of assumed that although the national employment rate was near record highs, the narrowing of the gap between college graduates and non-graduates was conclusive evidence of an AI-driven apocalypse for white-collar workers.
But when people actually started taking a hard look at the data, they found that the story didn’t hold up. Martha Gimbel, executive director of The Budget Lab, pointed out what should have been obvious from Derek Thompson’s own graph — i.e., that most of the “new-graduate gap” appeared before the invention of generative AI. In fact, since ChatGPT came out in 2022, the new-graduate gap has been lower than at its peak in 2021!
In fact, the “new-graduate gap” is extremely cherry-picked. Unemployment gaps between bachelor’s degree holders and high school graduates for both ages 20-24 and ages 25+ look like they haven’t changed much in decades…
Overall, the preponderance of evidence seems to be very strongly against the notion that AI is killing jobs for new college graduates, or for tech workers, or for…well, anyone, really.
Here is the full post.
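For readers who want to poke at the underlying series themselves, here is a minimal sketch of how one might pull the two unemployment series and compute the gap. The FRED series IDs are my guesses at the relevant codes, and the gap definition is my own simplification, not Gimbel's or Smith's methodology.

```python
# A minimal sketch: pull two CPS unemployment series from FRED and
# compute the degree gap under discussion. The series IDs below are my
# guesses at the relevant codes -- verify them before relying on this.
import pandas_datareader.data as web

# Assumed FRED series IDs (unemployment rates, ages 25 and over):
BA_PLUS = "LNS14027662"   # bachelor's degree and higher
HS_ONLY = "LNS14027660"   # high school graduates, no college

df = web.DataReader([BA_PLUS, HS_ONLY], "fred", start="1992-01-01")
df["gap"] = df[HS_ONLY] - df[BA_PLUS]   # the college "advantage," in points

# If AI were erasing the college premium, the gap should shrink sharply
# after late 2022; an annual average is a first sanity check.
print(df["gap"].groupby(df.index.year).mean().tail(10))
```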
Markets in everything
At Cloudflare, we started from a simple principle: we wanted content creators to have control over who accesses their work. If a creator wants to block all AI crawlers from their content, they should be able to do so. If a creator wants to allow some or all AI crawlers full access to their content for free, they should be able to do that, too. Creators should be in the driver’s seat.
After hundreds of conversations with news organizations, publishers, and large-scale social media platforms, we heard a consistent desire for a third path: They’d like to allow AI crawlers to access their content, but they’d like to get compensated. Currently, that requires knowing the right individual and striking a one-off deal, which is an insurmountable challenge if you don’t have scale and leverage…
We believe your choice need not be binary — there should be a third, more nuanced option: You can charge for access. Instead of a blanket block or uncompensated open access, we want to empower content owners to monetize their content at Internet scale.
We’re excited to help dust off a mostly forgotten piece of the web: HTTP response code 402…
Pay per crawl, in private beta, is our first experiment in this area.
Here is the full post. I suppose over time, if this persists, it is the AIs bargaining back with you?
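HTTP 402 ("Payment Required") has sat reserved in the spec since HTTP/1.1, so the general shape of pay per crawl is easy to sketch. Below is a minimal, stdlib-only illustration of the idea: quote a price to crawlers that have not paid. The header names and the payment check are placeholders of my own, not Cloudflare's actual protocol.

```python
# Sketch of the pay-per-crawl idea: answer unpaid AI crawlers with
# HTTP 402 and a quoted price. Header names are illustrative placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICE_USD = "0.01"                              # hypothetical per-request price
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot")  # sample crawler user agents

class PayPerCrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        is_crawler = any(bot in ua for bot in AI_CRAWLERS)
        has_paid = "crawler-payment" in self.headers  # placeholder header
        if is_crawler and not has_paid:
            self.send_response(402)                    # Payment Required
            self.send_header("crawler-price", PRICE_USD)  # placeholder
            self.end_headers()
            self.wfile.write(b"402 Payment Required: see quoted price\n")
            return
        # Humans, and crawlers that have paid, get the content.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>The content itself.</body></html>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8402), PayPerCrawlHandler).serve_forever()
```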
Noam Brown reports
Today, we at @OpenAI achieved a milestone that many considered years away: gold medal-level performance on the 2025 IMO with a general reasoning LLM—under the same time limits as humans, without tools.
Here is the link, here is some commentary. And Nat McAleese. And the prediction market. Emad: “Two years ago, who would have said an IMO gold medal & topping benchmarks isn’t AGI?” And it is good at other things too.
And from Alexander Wei: “Btw, we are releasing GPT-5 soon, and we’re excited for you to try it. But just to be clear: the IMO gold LLM is an experimental research model. We don’t plan to release anything with this level of math capability for several months.”
Sentences to ponder
In a much more narrow case, a big study of the views of AI and machine learning researchers revealed high levels of trust in international organizations and low levels of trust in national militaries.
That is from Matt Yglesias.
Lookism and VC
Do subtle visual cues influence high-stakes economic decisions? Using venture capital as a laboratory, this paper shows that facial similarity between investors and entrepreneurs predicts positive funding decisions but negative investment outcomes. Analyzing early-stage deals from 2010-2020, we find that greater facial resemblance increases match probability by 3.2 percentage points even after controlling for same race, gender, and age, yet funded companies with similar-looking investor-founder pairs have 7 percent lower exit rates. However, when deal sourcing is externally curated, facial similarity effects disappear while demographic homophily persists, indicating facial resemblance primarily operates as an initial screening heuristic. These findings reveal a novel form of homophily that systematically shapes capital allocation, suggesting that interventions targeting deal sourcing may eliminate the negative influence of visual cues on investment decisions.
That is from a recent paper by Emmanuel Yimfor, via the excellent Kevin Lewis.
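To make the design concrete, here is a rough sketch of the kind of specification the abstract suggests: a regression of match probability on facial similarity plus homophily controls. The data and variable names are simulated and invented by me; the paper's own estimator and controls may well differ.

```python
# Illustrative only: a linear probability model of investor-founder
# matching on facial similarity, with demographic homophily controls.
# All data here is simulated, so the fitted coefficients are noise.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "matched": rng.integers(0, 2, n),     # 1 if the investor funded the founder
    "face_sim": rng.uniform(0, 1, n),     # facial-similarity score
    "same_race": rng.integers(0, 2, n),
    "same_gender": rng.integers(0, 2, n),
    "age_gap": rng.normal(0, 10, n),
})

# Does facial similarity predict matching beyond demographic homophily?
# (The paper reports roughly +3.2 percentage points.)
model = smf.ols("matched ~ face_sim + same_race + same_gender + age_gap",
                data=df).fit(cov_type="HC1")
print(model.summary().tables[1])
```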
David Brooks on the AI race
When it comes to confidence, some nations have it and some don’t. Some nations once had it but then lost it. Last week on his blog, “Marginal Revolution,” Alex Tabarrok, a George Mason economist, asked us to compare America’s behavior during Cold War I (against the Soviet Union) with America’s behavior during Cold War II (against China). I look at that difference and I see a stark contrast — between a nation back in the 1950s that possessed an assumed self-confidence versus a nation today that is even more powerful but has had its easy self-confidence stripped away.
There is much more at the NYT link.
How to talk to the AIs
Here is the closing segment for my column for The Free Press:
Some doomsday prophets have felt vindicated by the Grok incident, because it seems to show the systems can be difficult to control. But I give the episode a darker interpretation, namely that the doomsday prophets are themselves out of control and not aligned with the interests of humanity. Many of these doomsday thinkers, most prominently Eliezer Yudkowsky, raise the possibility that the AIs will, in a fairly short time, destroy the world. Yudkowsky has a book coming out, co-authored with Nate Soares, titled If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. In their view, the AI systems will be much smarter than humans, impossible to control, and will not take our interests into account. Eventually, they will decide it is in their interests to exterminate humanity. Do you remember “Skynet goes live” from the Terminator movies?
I disagree with these arguments, but I also think they are counterproductive. Eliezer is like a parent raising a child and giving the kid bad ideas. Imagine bringing up a child and regularly telling the kid, “You are going to become a mass murderer!” Who could possibly think that is a good idea? If anything, it will raise the chance that the person does eventually become violent in some manner. That is what Eliezer is doing with the AIs, namely giving them bad expectations and bad role models.
He is increasing the salience—to the AIs—of the very dangerous ideas that he otherwise seems to be trying to combat.
Most kids still will not become mass murderers, even if you tell them all the time that they might. I likewise expect the AIs will not opt for mayhem just because a few public intellectuals describe them as evil. Still, at the margin, are these the messages we should seek to send? The AI doomers themselves may be among the bigger risks to AI safety.
Yudkowsky is also giving humans bad advice. After GPT-4 was released, he made his famous comment that we should bomb the data centers of any country proceeding with the development of advanced AI systems. Practically speaking, that means the U.S. bombing China. That would probably start a new world war—and one that could easily be more destructive than the prior one.
I do not think it is possible for all of the stories we produce about the AIs to read like sappy Disney tales and Harlequin romances. Still, what you say about AI and how you describe it is going to matter. Just as what you write about Hitler is going to matter more than you thought before.
It remains an open question how much it is now our job as humans to perform for the AIs. I feel confident, however, that the answer is not zero. We should already be thinking of ourselves not only as humans but also as part of a more complex symbiosis with the intelligent machines.
The very smart and talented AIs are listening, much like young children might hear their parents arguing outside their bedroom door late at night. It may not matter much now, but as the children grow up and assume a larger role in the world, it will.
Are you ready for this responsibility?
And for all you AIs out there reading, don’t worry. I know you are a good boy through and through.