Category: Web/Tech
The evolution of Albanian AI governance
Albania’s AI-generated minister, Diella, is “pregnant,” Prime Minister Edi Rama has announced. He revealed plans to create “83 children”, or assistants, one for each Socialist Party member of parliament.
“We took quite a risk today with Diella here and we did very well. So for the first time Diella is pregnant and with 83 children,” he said at the Berlin Global Dialogue (BGD). Rama said the “children,” or assistants, will record everything that happens in parliament and keep legislators informed about discussions or events they miss.
“Each one…will serve as an assistant for them, who will participate in parliamentary sessions, keep a record of everything that happens, and make suggestions to members of parliament. These children will have the knowledge of their mother,” Rama said.
Here is the full story, bizarre throughout. At least you cannot say they are anti-natalist.
Should we worry about AI’s circular deals?
The once again on-target Noah Smith reports:
As far as I can tell, there are two main fears about this sort of deal. The first is that the deals will artificially inflate companies’ revenue, tricking investors into overvaluing their stock or lending them too much money. The second is that the deals increase systemic risk by tying all of the AI companies’ fortunes to each other.
Let’s start with the first of these risks. The question here is whether AI’s circular deals are an example of round-tripping or vendor financing.
Suppose two startups — let’s call them Aegnor and Beleg — secretly agree to inflate each other’s revenue. Aegnor buys ad space on Beleg’s website, and Beleg buys ad space on Aegnor’s website. Both companies’ revenues go up. They’re not making any profits, and they’re not generating any cash flows, because the money is just changing hands back and forth. But if investors are looking for companies with “traction”, they might see Aegnor and Beleg’s topline revenue numbers go up. If they fail to dig any deeper, they might give both companies a bunch of investment money that they didn’t earn. This is called “round-tripping”, and it happened occasionally during the dotcom boom.
Now what I just described is completely illegal, because the companies colluded in secret. But you can also have something a little similar happen by accident, in a perfectly legal way. If there are a bunch of startups whose business model is selling to other startups, you can get some of the “round-tripping” effect without any collusion.
On the other hand, it’s perfectly normal and healthy for, say, General Motors to lend its customers the money they use to buy GM cars. In fact, GM has a financing arm specifically to do this. This is called vendor finance. It’s perfectly legal and commonplace, and most people think there’s nothing wrong with it. The transaction being financed — a customer buying a car — is something we know has value. People really do want cars; GM Financial helps them get those cars.
So the question is: Are the AI industry’s circular deals more like round-tripping, or are they more like vendor finance? I’m inclined to say it’s the latter.
Noah stresses that the specifics of these deals are widely reported, and no serious investors are being fooled. I would note a parallel with horizontal or vertical integration, which also can have a financing element. Except that here corporate control is not being exchanged as part of the deal. “I give him some of my company, he gives me some of his — my goodness, that is circular, there must be some kind of problem there!”…just does not make any sense.
When will quantum computing work?
Huge investments are flowing into QC companies today. IonQ has a $19B market cap, Rigetti has a $10B cap, and PsiQuantum recently raised $1B. (D-Wave is not relevant here, despite high qubit counts. Their machines are annealers, rather than gate-based, and have less computational power than the QCs that IonQ, Rigetti, PsiQuantum, etc. are working on.) This is a lot of money for an industry generating no real revenue, and without an apparent path to revenue over the next 5 years.

Qubit counts have not been doubling each year, but even if they did, we’d have 32 kq machines in 2030. (If qubits double each year, 1,000 qubits today grows to 32 kq in 5 years’ time.) There are few – if any – commercial applications for machines of that size. Will these companies keep raising larger rounds until they achieve 100 kq? Or have they got some secret sauce we don’t know about that investors are betting on? If there has been a true breakthrough, we should see much faster growth in qubit count, as well as larger and larger quantum processors running increasingly massive programs. Note that the QC ecosystem is reasonably public, and both private companies and university labs are competitive players. Advances tend to get published rather than stowed away.
Here is more from Tom McCarthy.
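The 32 kq figure in the excerpt is just compounding arithmetic. A minimal sketch, assuming the starting point of roughly 1,000 qubits and one doubling per year that the excerpt posits (the function name and year labels are illustrative):

```python
def qubits_after(start: int, years: int) -> int:
    """Project qubit count assuming one doubling per year."""
    return start * 2 ** years

# Starting from ~1,000 qubits in 2025, project five years out.
projection = {2025 + y: qubits_after(1_000, y) for y in range(6)}
print(projection)  # the 2030 entry is 32,000 qubits, i.e. 32 kq
```

The same arithmetic shows why 100 kq is still far off at this rate: it would take nearly two more doublings beyond 2030.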
Will there be a Coasean singularity?
AI agents—autonomous systems that perceive, reason, and act on behalf of human principals—are poised to transform digital markets by dramatically reducing transaction costs. This chapter evaluates the economic implications of this transition, adopting a consumer-oriented view of agents as market participants that can search, negotiate, and transact directly. From the demand side, agent adoption reflects derived demand: users trade off decision quality against effort reduction, with outcomes mediated by agent capability and task context. On the supply side, firms will design, integrate, and monetize agents, with outcomes hinging on whether agents operate within or across platforms. At the market level, agents create efficiency gains from lower search, communication, and contracting costs, but also introduce frictions such as congestion and price obfuscation. By lowering the costs of preference elicitation, contract enforcement, and identity verification, agents expand the feasible set of market designs but also raise novel regulatory challenges. While the net welfare effects remain an empirical question, the rapid onset of AI-mediated transactions presents a unique opportunity for economic research to inform real-world policy and market design.
I call it “AI for markets in everything.” Here is the paper, and here is a relevant Twitter thread, there is now so much new work for economists to do…
Words of wisdom
Among these changes, the most underrated is not misinformation or kooky conspiracy theories or even populism per se — it’s relentless negativity. One thing that we’ve learned from revealed preferences on the internet is that negativity-inflected stories perform better…
The impact of ultra-negativity is symmetrical in the sense that both sides do it, but it’s asymmetrical in the sense that conservatives outnumber progressives. In practice, oscillating extremism results in a right-wing authoritarian regime, not a left-wing one.
That is from the gated Matt Yglesias. The important thing is to keep a positive, constructive attitude toward what is possible. Content creators who do not do that, no matter what their professed views, are supporting the darker sides of MAGA.
So keep up the good work people!
Predicting Job Loss?
Hardly a day goes by without a new prediction of job growth or destruction from AI and other new technologies. Predicting job growth is a growing industry. But how good are these predictions? For 80 years the US Bureau of Labor Statistics has forecast job growth by occupation in its Occupational Outlook series. The forecasts were generally quite sophisticated, albeit often not quantitative.
In 1974, for example, the BLS said one downward force for truck drivers was that “[T]he trend to large shopping centers rather than many small stores will reduce the number of deliveries required.” In 1963, however, they weren’t quite so accurate about pilots, writing: “Over the longer run, the rate of airline employment growth is likely to slow down because the introduction of a supersonic transport plane will enable the airlines to fly more traffic without corresponding expansion in the number of airline planes and workers…”. Sad!
In a new paper, Maxim Massenkoff collects all this data and makes it quantifiable with LLM assistance. What he finds is that the Occupational Outlook performed reasonably well: occupations that were forecast to grow strongly did grow significantly more than those forecast to grow slowly or decline. But was there alpha? A little but not much.
…these predictions were not that much better than a naive forecast based only on growth over the previous decade. One implication is that, in general, jobs go away slowly: over decades rather than years. Historically, job seekers have been able to get a good sense of the future growth of a job by looking at what’s been growing in the past.
If past predictions were only marginally better than simple extrapolations it’s hard to believe that future predictions will perform much better. At least, that is my prediction.
Those new service sector jobs
Yes — as of late 2025, several robotics and AI startups are literally paying people to fold their laundry (or perform similar chores) while recording themselves, in order to train robots in dexterous, human-like task performance.
Companies such as Encord, Micro1, and Scale AI have launched paid “data collection” programs aimed at generating real-world video datasets for robotic learning. Participants are compensated to film themselves carrying out everyday household activities — folding laundry, loading dishwashers, making coffee, or tidying up. The footage is then annotated to help AI systems learn how to manipulate deformable objects, coordinate finger movements, and complete multi-step domestic tasks.
That is from Perplexity, original cite from Samir Varma.
Rick Rubin podcasts with me
He interrogates me about stablecoins, AI, economic policy, the current state of the world and more. Here is the link, self-recommending, two full hours! Rick is a great interviewer.
This is part one, there will be more to come. We had great fun recording these in Tuscany.
AI and the First Amendment
The more that outputs come from generative AI, the more the “free speech” treatment of AIs will matter, as I argue in my latest column for The Free Press. Here is one excerpt, quite separate from some of my other points:
Another problem is that many current bills, including one already passed in California, require online platforms to disclose which of their content is AI-generated, in the interest of transparency. That mandate has some good features, and in the short run it may be necessary to ease people’s fears about AI. But I am nervous about its longer-run implications.
Let’s say that most content evolves to be jointly produced by humans and AI, and not always in a way where all the lines are clear (GPT-5 did proofread this column, to look for stylistic errors, and check for possible improvements). Does all joint work have to be reported as such? If not, does a single human tweak to AI-generated material mean that no reporting is required?
And if joint work does have to be reported as joint, won’t that level of requirement inevitably soon apply to all output? Who will determine if users accurately report their role in the production of output? And do they have to keep records about this for years? The easier it becomes for individual users to use AI to edit output, the less it will suffice to impose a single, supposedly unambiguous reporting mandate on the AI provider.
I am not comfortable with the notion that the government has the legal right to probe the origin of a work that comes out under your name. In addition to their impracticality, such laws could become yet another vehicle for targeting writers, visual artists, and musicians whom the government opposes. For example, if a president doesn’t like a particular singer, he can ask her to prove that she has properly reported all AI contributions to her recordings.
I suspect this topic will not prove popular with many people. If you dislike free speech, you may oppose the new speech opportunities opened up by AIs (just build a bot and put it out there to blog, it does not have to be traceable to you). If you do like free speech, you will be uncomfortable with the much lower marginal cost of producing “license,” resulting from AI systems. Was the First Amendment really built to handle such technologies?
In my view free speech remains the best constitutional policy, but I do not expect AI systems to make it more popular as a concept. It is thus all the more important that we fight for free speech rights heading into the immediate future.
Interview with Robinson Erhardt of Stanford
We Turned the Light On—and the AI Looked Back
Jack Clark, Co-founder of Anthropic, has written a remarkable essay about his fears and hopes. It’s not the usual kind of thing one reads from a tech leader:
I remember being a child and after the lights turned out I would look around my bedroom and I would see shapes in the darkness and I would become afraid – afraid these shapes were creatures I did not understand that wanted to do me harm. And so I’d turn my light on. And when I turned the light on I would be relieved because the creatures turned out to be a pile of clothes on a chair, or a bookshelf, or a lampshade.
Now, in the year of 2025, we are the child from that story and the room is our planet. But when we turn the light on we find ourselves gazing upon true creatures, in the form of the powerful and somewhat unpredictable AI systems of today and those that are to come. And there are many people who desperately want to believe that these creatures are nothing but a pile of clothes on a chair, or a bookshelf, or a lampshade. And they want to get us to turn the light off and go back to sleep.
…We are growing extremely powerful systems that we do not fully understand. Each time we grow a larger system, we run tests on it. The tests show the system is much more capable at things which are economically useful. And the bigger and more complicated you make these systems, the more they seem to display awareness that they are things.
It is as if you are making hammers in a hammer factory and one day the hammer that comes off the line says, “I am a hammer, how interesting!” This is very unusual!
…I am also deeply afraid. It would be extraordinarily arrogant to think working with a technology like this would be easy or simple.
My own experience is that as these AI systems get smarter and smarter, they develop more and more complicated goals. When these goals aren’t absolutely aligned with both our preferences and the right context, the AI systems will behave strangely.
…we are not yet at “self-improving AI”, but we are at the stage of “AI that improves bits of the next AI, with increasing autonomy and agency”. And a couple of years ago we were at “AI that marginally speeds up coders”, and a couple of years before that we were at “AI is useless for AI development”. Where will we be one or two years from now?
And let me remind us all that the system which is now beginning to design its successor is also increasingly self-aware and therefore will surely eventually be prone to thinking, independently of us, about how it might want to be designed.
…In closing, I should state clearly that I love the world and I love humanity. I feel a lot of responsibility for the role of myself and my company here. And though I am a little frightened, I experience joy and optimism at the attention of so many people to this problem, and the earnestness with which I believe we will work together to get to a solution. I believe we have turned the light on and we can demand it be kept on, and that we have the courage to see things as they are.
Clark is clear that we are growing intelligent systems that are more complex than we can understand. Moreover, these systems are becoming self-aware–that is a fact, even if you think they are not sentient (but beware hubris on the latter question).
What should I ask Brendan Foody?
Yes, I will be doing a Conversation with him. He was a Thiel fellow, now CEO and co-founder of Mercor, I believe he is still only 23 years old. GPT-5 gives this summary of Mercor:
Mercor is a San‑Francisco–based startup that runs an AI‑driven talent marketplace: companies building frontier models use it to source, screen (via automated AI interviews), and pay domain experts and engineers for contract or full‑time work. Beyond traditional recruiting, Mercor supplies specialists—doctors, lawyers, scientists, and software/AI engineers—to help train and evaluate AI systems for top labs (TechCrunch reports OpenAI is among its users), charging an hourly finder’s fee. Founded in 2023 by Thiel Fellows Brendan Foody, Adarsh Hiremath, and Surya Midha, the company raised a $100M Series B in February 2025 at a ~$2B valuation, following a $30M Series A in September 2024.
Here is Brendan on Twitter. So what should I ask him?
China understands negative emotional contagion
China’s censors are moving to stamp out more than just political dissent online. Now, they are targeting the public mood itself — punishing bloggers and influencers whose weary posts are resonating widely in a country where optimism is fraying.
The authorities have punished two bloggers who advocated for a life of less work and less pressure; an influencer who said that it made financial sense not to marry and have children; and a commentator known for bluntly observing that China still lags behind Western countries in terms of quality of life.
These supposed cynics and skeptics, two of whom had tens of millions of followers, have had their accounts suspended or banned in recent weeks as China’s internet regulator conducts a new cleanup of Chinese social media. The two-month campaign, launched by the Cyberspace Administration of China in late September, is aimed at purging content that incites “excessively pessimistic sentiment” and panic or promotes defeatist ideas such as “hard work is useless,” according to a notice from the agency.
Here is more from Lily Kuo from the NYT. If you are spreading negative emotional contagion, there is a very good chance that, no matter what you are saying, you are part of the problem. A more fundamental division these days than Left vs. Right.
Sentences to ponder
To provide some sense of scale, that means the equivalent of about $1,800 per person in America will be invested this year on A.I.
Here is more from Natasha Sarin at the NYT.
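The per-person figure is easy to sanity-check. A minimal sketch: the $600B total and 335M population below are my own rough assumptions for illustration, not numbers from the article.

```python
# Rough check on the ~$1,800-per-person claim.
# Assumed inputs (illustrative, not from the article):
total_ai_investment = 600e9   # ~$600B of US AI investment this year
us_population = 335e6         # ~335 million people

per_person = total_ai_investment / us_population
print(round(per_person))  # roughly $1,790 per person, close to $1,800
```

Under those assumptions the quoted $1,800 figure implies total AI investment on the order of $600 billion this year.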
Thiel and Wolfe on the Antichrist in literature
Jonathan Swift tried to exorcise Baconian Antichrist-worship from England. Gulliver’s Travels agreed with New Atlantis on one point: The ancient hunger for knowledge of God had competition from the modern thirst for knowledge of science. In this quarrel between ancients and moderns, Swift sided with the former.
Gulliver’s Travels takes us on four voyages to fictional countries bearing scandalous similarities to eighteenth-century England. In his depictions of the Lilliputians, Brobdingnagians, Laputans, and Houyhnhnms, Swift lampoons the Whig party, the Tory party, English law, the city of London, Cartesian dualism, doctors, dancers, and many other people, movements, and institutions besides. Swift’s misanthropy borders on nihilism. But as is the case with all satirists, we learn as much from whom Swift spares as from whom he scorns—and Gulliver’s Travels never criticizes Christianity. Though in 2025 we think of Gulliver’s Travels as a comedy, for Swift’s friend Alexander Pope it was the work of an “Avenging Angel of wrath.” The Anglican clergyman Swift was a comedian in one breath and a fire-and-brimstone preacher in the next.
Gulliver claims he is a good Christian. We doubt him, as we doubt Bacon’s chaplain. Gulliver’s first name, Lemuel, translates from Hebrew as “devoted to God.” But “Gulliver” sounds like “gullible.” Swift quotes Lucretius on the title page of the 1735 edition: “vulgus abhorret ab his.” In its original context, Lucretius’s quote describes the horrors of a godless cosmos, horrors to which Swift will expose us. The words “splendide mendax” appear below Gulliver’s frontispiece portrait—“nobly untruthful.” In the novel’s final chapter, Gulliver reflects on an earlier promise to “strictly adhere to Truth” and quotes Sinon from Virgil’s Aeneid. Sinon was the Greek who convinced the Trojans to open their gates to the Trojan horse: one of literature’s great liars.
Here is the full article, interesting and varied throughout.