Category: Web/Tech

AI and the First Amendment

The more that outputs come from generative AI, the more the “free speech” treatment of AIs will matter, as I argue in my latest column for The Free Press.  Here is one excerpt, quite separate from some of my other points:

Another problem is that many current bills, including one already passed in California, require online platforms to disclose which of their content is AI-generated, in the interest of transparency. That mandate has some good features, and in the short run it may be necessary to ease people’s fears about AI. But I am nervous about its longer-run implications.

Let’s say that most content evolves to be jointly produced by humans and AI, and not always in a way where all the lines are clear (GPT-5 did proofread this column, to look for stylistic errors, and check for possible improvements). Does all joint work have to be reported as such? If not, does a single human tweak to AI-generated material mean that no reporting is required?

And if joint work does have to be reported as joint, won’t that level of requirement inevitably soon apply to all output? Who will determine if users accurately report their role in the production of output? And do they have to keep records about this for years? The easier it becomes for individual users to use AI to edit output, the less it will suffice to impose a single, supposedly unambiguous reporting mandate on the AI provider.

I am not comfortable with the notion that the government has the legal right to probe the origin of a work that comes out under your name. In addition to their impracticality, such laws could become yet another vehicle for targeting writers, visual artists, and musicians whom the government opposes. For example, if a president doesn’t like a particular singer, he can ask her to prove that she has properly reported all AI contributions to her recordings.

I suspect this topic will not prove popular with many people.  If you dislike free speech, you may oppose the new speech opportunities opened up by AIs (just build a bot and put it out there to blog; it does not have to be traceable to you).  If you do like free speech, you will be uncomfortable with the much lower marginal cost of producing “license” that results from AI systems.  Was the First Amendment really built to handle such technologies?

In my view free speech remains the best constitutional policy, but I do not expect AI systems to make it more popular as a concept.  It is thus all the more important that we fight for free speech rights heading into the immediate future.

We Turned the Light On—and the AI Looked Back

Jack Clark, Co-founder of Anthropic, has written a remarkable essay about his fears and hopes. It’s not the usual kind of thing one reads from a tech leader:

I remember being a child and after the lights turned out I would look around my bedroom and I would see shapes in the darkness and I would become afraid – afraid these shapes were creatures I did not understand that wanted to do me harm. And so I’d turn my light on. And when I turned the light on I would be relieved because the creatures turned out to be a pile of clothes on a chair, or a bookshelf, or a lampshade.

Now, in the year of 2025, we are the child from that story and the room is our planet. But when we turn the light on we find ourselves gazing upon true creatures, in the form of the powerful and somewhat unpredictable AI systems of today and those that are to come. And there are many people who desperately want to believe that these creatures are nothing but a pile of clothes on a chair, or a bookshelf, or a lampshade. And they want to get us to turn the light off and go back to sleep.

…We are growing extremely powerful systems that we do not fully understand. Each time we grow a larger system, we run tests on it. The tests show the system is much more capable at things which are economically useful. And the bigger and more complicated you make these systems, the more they seem to display awareness that they are things.

It is as if you are making hammers in a hammer factory and one day the hammer that comes off the line says, “I am a hammer, how interesting!” This is very unusual!

…I am also deeply afraid. It would be extraordinarily arrogant to think working with a technology like this would be easy or simple.

My own experience is that as these AI systems get smarter and smarter, they develop more and more complicated goals. When these goals aren’t absolutely aligned with both our preferences and the right context, the AI systems will behave strangely.

…we are not yet at “self-improving AI”, but we are at the stage of “AI that improves bits of the next AI, with increasing autonomy and agency”. And a couple of years ago we were at “AI that marginally speeds up coders”, and a couple of years before that we were at “AI is useless for AI development”. Where will we be one or two years from now?

And let me remind us all that the system which is now beginning to design its successor is also increasingly self-aware and therefore will surely eventually be prone to thinking, independently of us, about how it might want to be designed.

…In closing, I should state clearly that I love the world and I love humanity. I feel a lot of responsibility for the role of myself and my company here. And though I am a little frightened, I experience joy and optimism at the attention of so many people to this problem, and the earnestness with which I believe we will work together to get to a solution. I believe we have turned the light on and we can demand it be kept on, and that we have the courage to see things as they are.

Clark is clear that we are growing intelligent systems that are more complex than we can understand. Moreover, these systems are becoming self-aware–that is a fact, even if you think they are not sentient (but beware hubris on the latter question).

What should I ask Brendan Foody?

Yes, I will be doing a Conversation with him.  He was a Thiel fellow and is now CEO and co-founder of Mercor; I believe he is still only 23 years old.  GPT-5 gives this summary of Mercor:

Mercor is a San‑Francisco–based startup that runs an AI‑driven talent marketplace: companies building frontier models use it to source, screen (via automated AI interviews), and pay domain experts and engineers for contract or full‑time work. Beyond traditional recruiting, Mercor supplies specialists—doctors, lawyers, scientists, and software/AI engineers—to help train and evaluate AI systems for top labs (TechCrunch reports OpenAI is among its users), charging an hourly finder’s fee. Founded in 2023 by Thiel Fellows Brendan Foody, Adarsh Hiremath, and Surya Midha, the company raised a $100M Series B in February 2025 at a ~$2B valuation, following a $30M Series A in September 2024.

Here is Brendan on Twitter.  So what should I ask him?

China understands negative emotional contagion

China’s censors are moving to stamp out more than just political dissent online. Now, they are targeting the public mood itself — punishing bloggers and influencers whose weary posts are resonating widely in a country where optimism is fraying.

The authorities have punished two bloggers who advocated for a life of less work and less pressure; an influencer who said that it made financial sense not to marry and have children; and a commentator known for bluntly observing that China still lags behind Western countries in terms of quality of life.

These supposed cynics and skeptics, two of whom had tens of millions of followers, have had their accounts suspended or banned in recent weeks as China’s internet regulator conducts a new cleanup of Chinese social media. The two-month campaign, launched by the Cyberspace Administration of China in late September, is aimed at purging content that incites “excessively pessimistic sentiment” and panic or promotes defeatist ideas such as “hard work is useless,” according to a notice from the agency.

Here is more from Lily Kuo from the NYT.  If you are spreading negative emotional contagion, there is a very good chance that, no matter what you are saying, you are part of the problem.  A more fundamental division these days than Left vs. Right.

Thiel and Wolfe on the Antichrist in literature

Jonathan Swift tried to exorcise Baconian Antichrist-worship from England. Gulliver’s Travels agreed with New Atlantis on one point: The ancient hunger for knowledge of God had competition from the modern thirst for knowledge of science. In this quarrel between ancients and moderns, Swift sided with the former.

Gulliver’s Travels takes us on four voyages to fictional countries bearing scandalous similarities to eighteenth-century England. In his depictions of the Lilliputians, Brobdingnagians, Laputans, and Houyhnhnms, Swift lampoons the Whig party, the Tory party, English law, the city of London, Cartesian dualism, doctors, dancers, and many other people, movements, and institutions besides. Swift’s misanthropy borders on nihilism. But as is the case with all satirists, we learn as much from whom Swift spares as from whom he scorns—and Gulliver’s Travels never criticizes Christianity. Though in 2025 we think of Gulliver’s Travels as a comedy, for Swift’s friend Alexander Pope it was the work of an “Avenging Angel of wrath.” The Anglican clergyman Swift was a comedian in one breath and a fire-and-brimstone preacher in the next.

Gulliver claims he is a good Christian. We doubt him, as we doubt Bacon’s chaplain. Gulliver’s first name, Lemuel, translates from Hebrew as “devoted to God.” But “Gulliver” sounds like “gullible.” Swift quotes Lucretius on the title page of the 1735 edition: “vulgus abhorret ab his.” In its original context, Lucretius’s quote describes the horrors of a godless cosmos, horrors to which Swift will expose us. The words “splendide mendax” appear below Gulliver’s frontispiece portrait—“nobly untruthful.” In the novel’s final chapter, Gulliver reflects on an earlier promise to “strictly adhere to Truth” and quotes Sinon from Virgil’s Aeneid. Sinon was the Greek who convinced the Trojans to open their gates to the Trojan horse: one of literature’s great liars.

Here is the full article, interesting and varied throughout.

New data on social media

It has gone largely unnoticed that time spent on social media peaked in 2022 and has since gone into steady decline, according to an analysis of the online habits of 250,000 adults in more than 50 countries carried out for the FT by the digital audience insights company GWI. And this is not just the unwinding of a bump in screen time during pandemic lockdowns — usage has traced a smooth curve up and down over the past decade-plus.

Across the developed world, adults aged 16 and older spent an average of two hours and 20 minutes per day on social platforms at the end of 2024, down by almost 10 per cent since 2022. Notably, the decline is most pronounced among the erstwhile heaviest users — teens and 20-somethings…

Additional data from GWI trace the shift. The shares of people who report using social platforms to stay in touch with their friends, express themselves or meet new people have fallen by more than a quarter since 2014. Meanwhile, reflexively opening the apps to fill up spare time has risen, reflecting a broader pernicious shift from mindful to mindless browsing.

Here is more from John Burn-Murdoch in the FT.  I was just doing an Aspen podcast two nights ago, where I spoke of social media as a problem that, in time, largely would solve itself.  You also may recall my recent post about declining rates of depression for young adults.  That said, you might wonder what exactly is the correct definition of social media (MR comments section?), and whether this study is tracking the proper conception of it.

For the pointer I thank Adrian Kelly.

Do LLMs favor outputs created by themselves?

Here is part of the abstract; I will not ask who or what wrote this:

We focus on the hiring context, where job applicants often rely on LLMs to refine resumes, while employers deploy them to screen those same resumes. Using a large-scale controlled resume correspondence experiment, we find that LLMs consistently prefer resumes generated by themselves over those written by humans or produced by alternative models, even when content quality is controlled. The bias against human-written resumes is particularly substantial, with self-preference bias ranging from 68% to 88% across major commercial and open-source models. To assess labor market impact, we simulate realistic hiring pipelines across 24 occupations. These simulations show that candidates using the same LLM as the evaluator are 23% to 60% more likely to be shortlisted than equally qualified applicants submitting human-written resumes, with the largest disadvantages observed in business-related fields such as sales and accounting. We further demonstrate that this bias can be reduced by more than 50% through simple interventions targeting LLMs’ self-recognition capabilities.

Here is the full paper by Jiannan Xu, Gujie Li, and Jane Yi Jiang, via the excellent Kevin Lewis.
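To make the experimental design concrete, here is a minimal sketch of the pairwise-judging setup as I read the abstract (my reconstruction, not the authors’ code). The function ask_model_to_pick is a hypothetical stand-in for a real LLM call, stubbed with a coin flip so the script runs; per the paper, a real judge comparing its own resume against a human-written one would land well above 0.5.

```python
import random

def ask_model_to_pick(judge, origin_first, origin_second):
    """Hypothetical stand-in for the judging call: return 1 or 2 for
    whichever resume the judge model prefers.  A real experiment would
    send both resume texts to `judge` and parse its verdict; here it is
    a coin flip so the sketch is executable."""
    return random.choice([1, 2])

def self_preference_rate(judge, rival, trials=1000):
    """Fraction of pairwise comparisons in which `judge` picks the resume
    written by itself over one written by `rival` (e.g. a human), with
    presentation order randomized to cancel out position bias."""
    wins = 0
    for _ in range(trials):
        if random.random() < 0.5:   # judge's own resume shown first
            wins += ask_model_to_pick(judge, judge, rival) == 1
        else:                       # judge's own resume shown second
            wins += ask_model_to_pick(judge, rival, judge) == 2
    return wins / trials

# With the coin-flip stub this prints about 0.5; the paper reports
# 0.68 to 0.88 for real models judging their own output against
# human-written resumes.
print(self_preference_rate("llm-judge", "human"))
```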

Valuing free goods

There is a new AEJ Macro paper by Brynjolfsson et al. on how to value free goods.  Here is one of the concrete measures:

Using this approach, we estimate the reservation price [for giving up Facebook] to be $2,152 in 2003 US dollars.

That is for the 2017 version of Facebook.  Note this does not measure “whether Facebook is really good for you on net,” but it does indicate some fairly strong demand.  And:

…the estimated contribution to welfare due to Facebook in the US over the period 2003-2017 is $231 billion (in 2017 dollars), which translates to $16 billion on average per year.
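As a quick sanity check on that per-year figure (my arithmetic, not the paper’s): the window 2003-2017 spans 14 to 15 years depending on how you count, so

\[
\frac{\$231\ \text{billion}}{14\ \text{years}} \approx \$16.5\ \text{billion per year}, \qquad \frac{\$231\ \text{billion}}{15\ \text{years}} \approx \$15.4\ \text{billion per year},
\]

which brackets the quoted $16 billion average.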

Some simple economics of Sora 2?

I do not have access, or any kind of inside information about what it can and cannot do.  Still, from my distance it seems quite possible that the “slop” side of the equation is a simple way to fund AI “world-modeling” (and other) skills, in a manner cross-subsidized by the consumers of the slop.

That is good, not bad.  Let us hope it is true, and so shall all the glass bridges break yet again.

Markets in everything

A new app offering to record your phone calls and pay you for the audio so it can sell the data to AI companies is, unbelievably, the No. 2 app in Apple’s U.S. App Store’s Social Networking section.

The app, Neon Mobile, pitches itself as a moneymaking tool offering “hundreds or even thousands of dollars per year” for access to your audio conversations.

Neon’s website says the company pays 30¢ per minute when you call other Neon users and up to $30 per day maximum for making calls to anyone else. The app also pays for referrals. The app first ranked No. 476 in the Social Networking category of the U.S. App Store on September 18 but jumped to No. 10 at the end of yesterday, according to data from app intelligence firm Appfigures.

On Wednesday, Neon was spotted in the No. 2 position on the iPhone’s top free charts for social apps…

However, Neon’s marketing claims to only record your side of the call unless it’s with another Neon user.

Here is the full story, via Mark.

My excellent Conversation with Steven Pinker

Here is the audio, video, and transcript.  Here is part of the episode summary:

Tyler and Steven probe these dimensions of common knowledge—Schelling points, differential knowledge, benign hypocrisies like a whisky bottle in a paper bag—before testing whether rational people can actually agree (spoiler: they can’t converge on Hitchcock rankings despite Aumann’s theorem), whether liberal enlightenment will reignite and why, what stirring liberal thinkers exist under the age of 55, why only a quarter of Harvard students deserve A’s, how large language models implicitly use linguistic insights while ignoring linguistic theory, his favorite track on Rubber Soul, what he’ll do next, and more.

Excerpt:

COWEN: Surely there’s a difference between coordination and common knowledge. I think of common knowledge as an extremely recursive model that typically has an infinite number of loops. Most of the coordination that goes on in the real world is not like that. If I approach a traffic circle in Northern Virginia, I look at the other person, we trade glances. There’s a slight amount of recursion, but I doubt if it’s ever three loops. Maybe it’s one or two.

We also have to slow down our speeds precisely because there are not an infinite number of loops. We coordinate. What percentage of the coordination in the real world is like the traffic circle example or other examples, and what percentage of it is due to actual common knowledge?

PINKER: Common knowledge, in the technical sense, does involve this infinite number of arbitrarily embedded beliefs about beliefs about beliefs. Thank you for introducing the title with the three dots, dot, dot, dot, because that’s what signals that common knowledge is not just when everyone knows that everyone knows, but when everyone knows that everyone knows that, and so on. The answer to your puzzle — and I devote a chapter in the book to what common knowledge could actually consist of — well, I’m a psychologist, I’m not an economist, a mathematician, a game theorist, so foremost in my mind is what’s going on in someone’s head when they have common knowledge.

You’re right. We couldn’t think through an infinite number of “I know that he knows” thoughts, and our mind starts to spin when we do three or four. Instead, common knowledge can be generated by something that is self-evident, that is conspicuous, that’s salient, that you can witness at the same time that you witness other people witnessing it and witnessing you witnessing it. That can grant common knowledge in a stroke. Now, it’s implicit common knowledge.

One way of putting it is you have reason to believe that he knows that I know that he knows that I know that he knows, et cetera, even if you don’t literally believe it in the sense that that thought is consciously running through your mind. I think there’s a lot of interplay in human life between this recursive mentalizing, that is, thinking about other people thinking about other people, and the intuitive sense that something is out there, and therefore people do know that other people know it, even if you don’t have to consciously work that through.

You gave the example of norms and laws, like who yields at an intersection. The eye contact, though, is crucial because I suggest that eye contact is an instant common knowledge generator. You’re looking at the part of the person looking at the part of you, looking at the part of them. You’ve got instant granting of common knowledge by the mere fact of making eye contact, which is why it’s so potent in human interaction and often in other species as well, where eye contact can be a potent signal.

There are even species that can coordinate without literally having common knowledge. I give the example of the lowly coral, which presumably not only has no beliefs, but doesn’t even have a brain with which to have beliefs. Coral have a coordination problem. They’re stuck to the ocean floor. Their sperm have to meet another coral’s eggs and vice versa. They can’t spew eggs and sperm into the water 24/7. It would just be too metabolically expensive. What they do is they coordinate on the full moon.

On the full moon or, depending on the species, a fixed number of days after the full moon, that’s the day where they all release their gametes into the water, which can then find each other. Of course, they don’t have common knowledge in knowing that the other will know. It’s implicit in the logic of their solution to a coordination problem, namely, the public signal of the full moon, which, over evolutionary time, it’s guaranteed that each of them can sense it at the same time.

Indeed, in the case of humans, we might do things that are like coral. That is, there’s some signal that just leads us to coordinate without thinking it through. The thing about humans is that because we do have or can have recursive mentalizing, it’s not just one signal, one response, full moon, shoot your wad. There’s no limit to the number of things that we can coordinate creatively in evolutionarily novel ways by setting up new conventions that allow us to coordinate.

COWEN: I’m not doubting that we coordinate. My worry is that common knowledge models have too many knife-edge properties. Whether or not there are timing frictions, whether or not there are differential interpretations of what’s going on, whether or not there’s an infinite number of messages or just an arbitrarily large number of messages, all those can matter a lot in the model. Yet actual coordination isn’t that fragile. Isn’t the common knowledge model a bad way to figure out how coordination comes about?
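For readers who want the formal object behind this exchange: in epistemic logic (following Lewis and Aumann), common knowledge of a proposition p is the infinite conjunction

\[
Cp \;\equiv\; Ep \,\wedge\, EEp \,\wedge\, EEEp \,\wedge\, \cdots \;=\; \bigwedge_{n=1}^{\infty} E^{n}p,
\]

where Ep reads “everyone knows p.” Tyler’s traffic-circle worry is that real coordination seems to truncate this hierarchy after a level or two; Pinker’s reply is that a public signal (eye contact, the full moon) can generate the entire hierarchy at a stroke, without anyone computing it.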

And this part might please Scott Sumner:

COWEN: I don’t like most ballet, but I admit I ought to. I just don’t have the time to learn enough to appreciate it. Take Alfred Hitchcock. I would say North by Northwest, while a fine film, is really considerably below Rear Window and Vertigo. Will you agree with me on that?

PINKER: I don’t agree with you on that.

COWEN: Or you think I’m not your epistemic peer on Hitchcock films?

PINKER: Your preferences are presumably different from beliefs.

COWEN: No. Quality relative to constructed standards of the canon…

COWEN: You’re going to budge now, and you’re going to agree that I’m right. We’re not doing too well on this Aumann thing, are we?

PINKER: We aren’t.

COWEN: Because I’m going to insist North by Northwest, again, while a very good movie is clearly below the other two.

PINKER: You’re going to insist, yes.

COWEN: I’m going to insist, and I thought that you might not agree with this, but I’m still convinced that if we had enough time, I could convince you. Hearing that from me, you should accede to the judgment.

I was very pleased to have read Steven’s new book When Everyone Knows That Everyone Knows . . .: Common Knowledge and the Mysteries of Money, Power, and Everyday Life.

Pulse

Today we are launching my favorite feature of ChatGPT so far, called Pulse. It is initially available to Pro subscribers.

Pulse works for you overnight, and keeps thinking about your interests, your connected data, your recent chats, and more. Every morning, you get a custom-generated set of stuff you might be interested in. It performs super well if you tell ChatGPT more about what’s important to you.

In regular chat, you could mention “I’d like to go visit Bora Bora someday” or “My kid is 6 months old and I’m interested in developmental milestones” and in the future you might get useful updates.

Think of treating ChatGPT like a super-competent personal assistant: sometimes you ask for things you need in the moment, but if you share general preferences, it will do a good job for you proactively.

This also points to what I believe is the future of ChatGPT: a shift from being all reactive to being significantly proactive, and extremely personalized.

That is from Sam Altman.