Category: Web/Tech
Predicting Job Loss?
Hardly a day goes by without a new prediction of job growth or destruction from AI and other new technologies. Predicting job growth is a growing industry. But how good are these predictions? For 80 years the US Bureau of Labor Statistics has forecast job growth by occupation in its Occupational Outlook series. The forecasts were generally quite sophisticated, albeit often not quantitative.
In 1974, for example, the BLS said one downward force for truck drivers was that “[T]he trend to large shopping centers rather than many small stores will reduce the number of deliveries required.” In 1963, however, they weren’t quite so accurate about pilots, writing “Over the longer run, the rate of airline employment growth is likely to slow down because the introduction of a supersonic transport plane will enable the airlines to fly more traffic without corresponding expansion in the number of airline planes and workers…”. Sad!
In a new paper, Maxim Massenkoff collects all this data and makes it quantifiable with LLM assistance. What he finds is that the Occupational Outlook performed reasonably well: occupations that were forecast to grow strongly did grow significantly more than those forecast to grow slowly or decline. But was there alpha? A little but not much.
…these predictions were not that much better than a naive forecast based only on growth over the previous decade. One implication is that, in general, jobs go away slowly: over decades rather than years. Historically, job seekers have been able to get a good sense of the future growth of a job by looking at what’s been growing in the past.
If past predictions were only marginally better than simple extrapolations it’s hard to believe that future predictions will perform much better. At least, that is my prediction.
Those new service sector jobs
Yes — as of late 2025, several robotics and AI startups are literally paying people to fold their laundry (or perform similar chores) while recording themselves, in order to train robots in dexterous, human-like task performance.
Companies such as Encord, Micro1, and Scale AI have launched paid “data collection” programs aimed at generating real-world video datasets for robotic learning. Participants are compensated to film themselves carrying out everyday household activities — folding laundry, loading dishwashers, making coffee, or tidying up. The footage is then annotated to help AI systems learn how to manipulate deformable objects, coordinate finger movements, and complete multi-step domestic tasks.
That is from Perplexity, original cite from Samir Varma.
Rick Rubin podcasts with me
He interrogates me about stablecoins, AI, economic policy, the current state of the world and more. Here is the link, self-recommending, two full hours! Rick is a great interviewer.
This is part one, there will be more to come. We had great fun recording these in Tuscany.
AI and the First Amendment
The more that outputs come from generative AI, the more the “free speech” treatment of AIs will matter, as I argue in my latest column for The Free Press. Here is one excerpt, quite separate from some of my other points:
Another problem is that many current bills, including one already passed in California, require online platforms to disclose which of their content is AI-generated, in the interest of transparency. That mandate has some good features, and in the short run it may be necessary to ease people’s fears about AI. But I am nervous about its longer-run implications.
Let’s say that most content evolves to be jointly produced by humans and AI, and not always in a way where all the lines are clear (GPT-5 did proofread this column to look for stylistic errors and check for possible improvements). Does all joint work have to be reported as such? If not, does a single human tweak to AI-generated material mean that no reporting is required?
And if joint work does have to be reported as joint, won’t that level of requirement inevitably soon apply to all output? Who will determine if users accurately report their role in the production of output? And do they have to keep records about this for years? The easier it becomes for individual users to use AI to edit output, the less it will suffice to impose a single, supposedly unambiguous reporting mandate on the AI provider.
I am not comfortable with the notion that the government has the legal right to probe the origin of a work that comes out under your name. In addition to their impracticality, such laws could become yet another vehicle for targeting writers, visual artists, and musicians whom the government opposes. For example, if a president doesn’t like a particular singer, he can ask her to prove that she has properly reported all AI contributions to her recordings.
I suspect this topic will not prove popular with many people. If you dislike free speech, you may oppose the new speech opportunities opened up by AIs (just build a bot and put it out there to blog, it does not have to be traceable to you). If you do like free speech, you will be uncomfortable with the much lower marginal cost of producing “license,” resulting from AI systems. Was the First Amendment really built to handle such technologies?
In my view free speech remains the best constitutional policy, but I do not expect AI systems to make it more popular as a concept. It is thus all the more important that we fight for free speech rights heading into the immediate future.
Interview with Robinson Erhardt of Stanford
We Turned the Light On—and the AI Looked Back
Jack Clark, Co-founder of Anthropic, has written a remarkable essay about his fears and hopes. It’s not the usual kind of thing one reads from a tech leader:
I remember being a child and after the lights turned out I would look around my bedroom and I would see shapes in the darkness and I would become afraid – afraid these shapes were creatures I did not understand that wanted to do me harm. And so I’d turn my light on. And when I turned the light on I would be relieved because the creatures turned out to be a pile of clothes on a chair, or a bookshelf, or a lampshade.
Now, in the year of 2025, we are the child from that story and the room is our planet. But when we turn the light on we find ourselves gazing upon true creatures, in the form of the powerful and somewhat unpredictable AI systems of today and those that are to come. And there are many people who desperately want to believe that these creatures are nothing but a pile of clothes on a chair, or a bookshelf, or a lampshade. And they want to get us to turn the light off and go back to sleep.
…We are growing extremely powerful systems that we do not fully understand. Each time we grow a larger system, we run tests on it. The tests show the system is much more capable at things which are economically useful. And the bigger and more complicated you make these systems, the more they seem to display awareness that they are things.
It is as if you are making hammers in a hammer factory and one day the hammer that comes off the line says, “I am a hammer, how interesting!” This is very unusual!
…I am also deeply afraid. It would be extraordinarily arrogant to think working with a technology like this would be easy or simple.
My own experience is that as these AI systems get smarter and smarter, they develop more and more complicated goals. When these goals aren’t absolutely aligned with both our preferences and the right context, the AI systems will behave strangely.
…we are not yet at “self-improving AI”, but we are at the stage of “AI that improves bits of the next AI, with increasing autonomy and agency”. And a couple of years ago we were at “AI that marginally speeds up coders”, and a couple of years before that we were at “AI is useless for AI development”. Where will we be one or two years from now?
And let me remind us all that the system which is now beginning to design its successor is also increasingly self-aware and therefore will surely eventually be prone to thinking, independently of us, about how it might want to be designed.
…In closing, I should state clearly that I love the world and I love humanity. I feel a lot of responsibility for the role of myself and my company here. And though I am a little frightened, I experience joy and optimism at the attention of so many people to this problem, and the earnestness with which I believe we will work together to get to a solution. I believe we have turned the light on and we can demand it be kept on, and that we have the courage to see things as they are.
Clark is clear that we are growing intelligent systems that are more complex than we can understand. Moreover, these systems are becoming self-aware–that is a fact, even if you think they are not sentient (but beware hubris on the latter question).
What should I ask Brendan Foody?
Yes, I will be doing a Conversation with him. He was a Thiel fellow and is now CEO and co-founder of Mercor; I believe he is still only 23 years old. GPT-5 gives this summary of Mercor:
Mercor is a San‑Francisco–based startup that runs an AI‑driven talent marketplace: companies building frontier models use it to source, screen (via automated AI interviews), and pay domain experts and engineers for contract or full‑time work. Beyond traditional recruiting, Mercor supplies specialists—doctors, lawyers, scientists, and software/AI engineers—to help train and evaluate AI systems for top labs (TechCrunch reports OpenAI is among its users), charging an hourly finder’s fee. Founded in 2023 by Thiel Fellows Brendan Foody, Adarsh Hiremath, and Surya Midha, the company raised a $100M Series B in February 2025 at a ~$2B valuation, following a $30M Series A in September 2024.
Here is Brendan on Twitter. So what should I ask him?
China understands negative emotional contagion
China’s censors are moving to stamp out more than just political dissent online. Now, they are targeting the public mood itself — punishing bloggers and influencers whose weary posts are resonating widely in a country where optimism is fraying.
The authorities have punished two bloggers who advocated for a life of less work and less pressure; an influencer who said that it made financial sense not to marry and have children; and a commentator known for bluntly observing that China still lags behind Western countries in terms of quality of life.
These supposed cynics and skeptics, two of whom had tens of millions of followers, have had their accounts suspended or banned in recent weeks as China’s internet regulator conducts a new cleanup of Chinese social media. The two-month campaign, launched by the Cyberspace Administration of China in late September, is aimed at purging content that incites “excessively pessimistic sentiment” and panic or promotes defeatist ideas such as “hard work is useless,” according to a notice from the agency.
Here is more from Lily Kuo from the NYT. If you are spreading negative emotional contagion, there is a very good chance that, no matter what you are saying, you are part of the problem. This is a more fundamental division these days than Left vs. Right.
Sentences to ponder
To provide some sense of scale, that means the equivalent of about $1,800 per person in America will be invested this year on A.I.
Here is more from Natasha Sarin at the NYT.
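As a rough cross-check on the scale implied by that figure (my own arithmetic; the $1,800 is from the column, while the population estimate is my assumption):

```python
# Back-of-the-envelope: total US AI investment implied by the per-person figure.
per_person = 1_800            # dollars per US resident this year, per the quoted claim
us_population = 340_000_000   # rough 2025 US population, my assumption

implied_total = per_person * us_population  # 612,000,000,000, i.e. ~$0.6 trillion
```

So the per-person framing corresponds to an aggregate on the order of $600 billion per year.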
Thiel and Wolfe on the Antichrist in literature
Jonathan Swift tried to exorcise Baconian Antichrist-worship from England. Gulliver’s Travels agreed with New Atlantis on one point: The ancient hunger for knowledge of God had competition from the modern thirst for knowledge of science. In this quarrel between ancients and moderns, Swift sided with the former.
Gulliver’s Travels takes us on four voyages to fictional countries bearing scandalous similarities to eighteenth-century England. In his depictions of the Lilliputians, Brobdingnagians, Laputans, and Houyhnhnms, Swift lampoons the Whig party, the Tory party, English law, the city of London, Cartesian dualism, doctors, dancers, and many other people, movements, and institutions besides. Swift’s misanthropy borders on nihilism. But as is the case with all satirists, we learn as much from whom Swift spares as from whom he scorns—and Gulliver’s Travels never criticizes Christianity. Though in 2025 we think of Gulliver’s Travels as a comedy, for Swift’s friend Alexander Pope it was the work of an “Avenging Angel of wrath.” The Anglican clergyman Swift was a comedian in one breath and a fire-and-brimstone preacher in the next.
Gulliver claims he is a good Christian. We doubt him, as we doubt Bacon’s chaplain. Gulliver’s first name, Lemuel, translates from Hebrew as “devoted to God.” But “Gulliver” sounds like “gullible.” Swift quotes Lucretius on the title page of the 1735 edition: “vulgus abhorret ab his.” In its original context, Lucretius’s quote describes the horrors of a godless cosmos, horrors to which Swift will expose us. The words “splendide mendax” appear below Gulliver’s frontispiece portrait—“nobly untruthful.” In the novel’s final chapter, Gulliver reflects on an earlier promise to “strictly adhere to Truth” and quotes Sinon from Virgil’s Aeneid. Sinon was the Greek who convinced the Trojans to open their gates to the Trojan horse: one of literature’s great liars.
Here is the full article, interesting and varied throughout.
New data on social media
It has gone largely unnoticed that time spent on social media peaked in 2022 and has since gone into steady decline, according to an analysis of the online habits of 250,000 adults in more than 50 countries carried out for the FT by the digital audience insights company GWI. And this is not just the unwinding of a bump in screen time during pandemic lockdowns — usage has traced a smooth curve up and down over the past decade-plus.
Across the developed world, adults aged 16 and older spent an average of two hours and 20 minutes per day on social platforms at the end of 2024, down by almost 10 per cent since 2022. Notably, the decline is most pronounced among the erstwhile heaviest users — teens and 20-somethings…
Additional data from GWI trace the shift. The shares of people who report using social platforms to stay in touch with their friends, express themselves or meet new people have fallen by more than a quarter since 2014. Meanwhile, reflexively opening the apps to fill up spare time has risen, reflecting a broader pernicious shift from mindful to mindless browsing.
Here is more from John Burn-Murdoch in the FT. I was just doing an Aspen podcast two nights ago, where I spoke of social media as a problem that, in time, largely would solve itself. You also may recall my recent post about declining rates of depression for young adults. That said, you might wonder what exactly is the correct definition of social media (MR comments section?), and whether this study is tracking the proper conception of it.
For the pointer I thank Adrian Kelly.
Do LLMs favor outputs created by themselves?
Here is part of the abstract, I will not ask who or what wrote this:
We focus on the hiring context, where job applicants often rely on LLMs to refine resumes, while employers deploy them to screen those same resumes. Using a large-scale controlled resume correspondence experiment, we find that LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled. The bias against human-written resumes is particularly substantial, with self-preference bias ranging from 68% to 88% across major commercial and open-source models. To assess labor market impact, we simulate realistic hiring pipelines across 24 occupations. These simulations show that candidates using the same LLM as the evaluator are 23% to 60% more likely to be shortlisted than equally qualified applicants submitting human-written resumes, with the largest disadvantages observed in business-related fields such as sales and accounting. We further demonstrate that this bias can be reduced by more than 50% through simple interventions targeting LLMs’ self-recognition capabilities.
Here is the full paper by Jiannan Xu, Gujie Li, and Jane Yi Jiant, via the excellent Kevin Lewis.
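As a toy illustration of the headline metric only (this is my sketch, not the authors’ experimental pipeline, and the sample data is invented):

```python
# Toy sketch of a self-preference rate: the fraction of pairwise comparisons
# in which the evaluating LLM chose the resume generated by its own model
# over the human-written version. The picks below are made up.
def self_preference_rate(choices):
    """choices: booleans, True when the model preferred its own output."""
    return sum(choices) / len(choices)

picks = [True, True, True, False, True, True, True, False, True, True]
rate = self_preference_rate(picks)  # 0.8, inside the 68%-88% range the paper reports
```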
Valuing free goods
There is a new AEJ Macro paper by Brynjolfsson et al. on how to value free goods. Here is one of the concrete measures:
Using this approach, we estimate the reservation price [for giving up Facebook] to be $2,152 in 2003 US dollars.
That is for the 2017 version of Facebook. Note this does not measure “whether Facebook is really good for you on net,” but it does indicate some fairly strong demand. And:
…the estimated contribution to welfare due to Facebook in the US over the period 2003-2017 is $231 billion (in 2017 dollars), which translates to $16 billion on average per year.
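The quoted average checks out either way (my arithmetic; whether 2003-2017 counts as 14 year-intervals or 15 calendar years is an assumption on my part):

```python
# Sanity check on the quoted Facebook welfare figures.
total_welfare_bn = 231                 # $231 billion over 2003-2017, per the paper
avg_15_years = total_welfare_bn / 15   # ~15.4 if counting 15 calendar years
avg_14_years = total_welfare_bn / 14   # 16.5 if counting 14 year-intervals
# Either way, "roughly $16 billion per year" is consistent.
```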
Some simple economics of Sora 2?
I do not have access to it, nor any kind of inside information on what it can or cannot do. Still, from my distance it seems quite possible that the “slop” side of the equation is a simple way to fund AI “world-modeling” (and other) skills in a manner that is cross-subsidized by the consumers of the slop.
That is good, not bad. Let us hope it is true, and so shall all the glass bridges break yet again.
Markets in everything
A new app offering to record your phone calls and pay you for the audio so it can sell the data to AI companies is, unbelievably, the No. 2 app in Apple’s U.S. App Store’s Social Networking section.
The app, Neon Mobile, pitches itself as a moneymaking tool offering “hundreds or even thousands of dollars per year” for access to your audio conversations.
Neon’s website says the company pays 30¢ per minute when you call other Neon users and up to $30 per day maximum for making calls to anyone else. The app also pays for referrals. The app first ranked No. 476 in the Social Networking category of the U.S. App Store on September 18 but jumped to No. 10 at the end of yesterday, according to data from app intelligence firm Appfigures.
On Wednesday, Neon was spotted in the No. 2 position on the iPhone’s top free charts for social apps…
However, Neon’s marketing claims to only record your side of the call unless it’s with another Neon user.
Here is the full story, via Mark.
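Taking the quoted rates at face value (my arithmetic; Neon does not publish a per-minute rate for non-user calls, so I assume the same 30¢/minute applies up to the daily cap):

```python
# Back-of-the-envelope on Neon's quoted payout terms (illustrative only).
rate_per_min = 0.30    # $0.30/minute for calls with other Neon users
daily_cap = 30.00      # $30/day maximum for calls to anyone else

minutes_to_hit_cap = daily_cap / rate_per_min  # 100 minutes/day, under my rate assumption
annual_ceiling = daily_cap * 365               # $10,950/year if capped out every day
# Consistent with the pitch of "hundreds or even thousands of dollars per year".
```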