The new Stripe stablecoin product

Stripe, a global payments platform, is building a new US dollar stablecoin product for companies based outside the United States, the United Kingdom and Europe in a move that may further expand the footprint of the dollar around the world.

Stripe CEO Patrick Collison confirmed the product on X, posting an invitation for companies interested in testing the solution. The move gained traction after Stripe recently received regulatory approval to acquire the stablecoin payments network Bridge.

Bridge’s network competes with banks and companies that use the SWIFT system, a global financial messaging network that facilitates international wire transfers. Two former Coinbase executives, Zach Abrams and Sean Yu, co-founded the company in 2022.

Here is the full article.  Note that this is a product to ease stablecoin use, not a new stablecoin issued by Stripe.

Is it genetics that will give you freedom from the AIs?

I sometimes wonder how good the AIs will be at predicting our productivity and our future courses of action.

Let’s say an advanced AI has your genome, a bunch of your test scores, and plenty of video of you interviewing.  How well does the AI really understand you?

To be clear, I am not asking about the capabilities of the AI; rather, I am asking about human legibility.  And my intuition is that the AI still will be surprised by you pretty often.  It will not know who is “the next Einstein.”

Some of the freedom you retain may — perhaps counterintuitively — come from your genome.  For purposes of argument, consider the speculative assumption that rare copy number variants are important in genetics, and thus in your individuality.  In that case, the AI likely cannot get enough data to have a very good read on what your genes imply.  Even if the AI has everybody’s genome (unlikely), perhaps there just are not many people around with your rare copy number variants.

It may also be the case — again speculatively — that rare copy number variants are especially important for “top performers” (and mass murderers?).

So when the AIs come to scan you and evaluate you, perhaps it is the very genetic component that will protect you from high predictability.  Of course that scenario is the opposite of what you usually read.  In standard accounts, your genes make you a kind of captive or epistemic prisoner of the AI, à la Gattaca.

But in practice you still might feel quite free and undetermined, even after receiving the report from the AI.  And it might be your genes you need to thank for that.

*Hope I Get Old Before I Die*

That is the new and fun book by David Hepworth.  It focuses on the careers of rock stars who simply keep on going and do not retire.

Can we admit that Paul McCartney and also the Rolling Stones have made the best of this?

Here is one bit:

Of the ten most-visited graves in the USA, just one is the resting place of a president.  The rest are all the graves of entertainers.

I liked this line:

‘Sometimes I feel like I work for Liz Phair,’ she [Liz Phair] says.  ‘And I have years off but then, like, I work for her.’

You can order the book here.

Xenophon’s consultation of the Pythia

1. Statement of prayer-question – Xenophon begins by verbally addressing Apollo, asking “to which of the gods should I sacrifice and pray in order best and most successfully to perform the journey I have in mind and return home in safety?” Only once this plea is uttered does Apollo’s priesthood record the god’s reply.

2. Ritual hymn & payment – Like all individual consultants, he had to buy a pelanos (sacrificial cake) and burn it on the altar while reciting the short Delphic paean in Apollo’s honour; the spoken hymn and the offering together signalled respect and opened the way for prophecy.

3. Sacrificial plea – A goat was sprinkled with water; if it shuddered, Apollo was deemed willing to speak. The consultants (or an attendant priest) then voiced a brief prayer “Hear me, Lord Apollo…” over the animal before it was sacrificed. Only after this spoken plea did the Pythia mount the tripod and deliver the oracle.

That is an o3 answer in response to one of my queries, namely whether you had to make incantations to oracles before they would respond.  You did!  If you scroll down, you will see that the original answer is amended somewhat and improved in accuracy.  For instance “…drop the idea that each visitor had to intone a fixed hymn. At most, priests might intone a brief paean while the cake was burned…”

In any case, you could not do “one shot” with the oracle — you had to put a bit of effort into it.  If you simply approached the oracle and asked for a prophecy of the future (and did nothing else), you would get no meaningful response.  In contemporary terminology, you needed a bit of prompting.

To return more explicitly to the current day, many people complain about the hallucinations of top LLMs, and indeed those hallucinations are still present.  (o3 is much quicker than o1 pro, but probably has a higher hallucination rate.)  If you ask them only once, you are more likely to get hallucinations.  If you ask a follow-up and request a correction of errors, the answer is usually better.

Almost everyone evaluates the LLMs and their hallucinations on a one-shot basis.  But historically we evaluated oracles on a multi-shot basis.  It would be easy for us to do that again with LLMs, and of course many users do.  For the faster models the follow-up query really does not take so long.
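Concretely, the follow-up is just a second turn in the same conversation, in which you feed back the first draft and ask for an error check.  Here is a minimal sketch, assuming the OpenAI Python SDK; the model name, the question, and the wording of the correction request are all illustrative, not a recommendation:

```python
# Minimal sketch of a "multi-shot" consultation: ask once, then ask the model
# to re-check its own draft.  Model name and question are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "Did consultants of the Delphic oracle have to make spoken offerings "
    "or prayers before the Pythia would respond?"
)
messages = [{"role": "user", "content": question}]

# First pass: the one-shot answer.
first = client.chat.completions.create(model="o3", messages=messages)
draft = first.choices[0].message.content

# Second pass: feed the draft back and explicitly request a correction of errors.
messages += [
    {"role": "assistant", "content": draft},
    {
        "role": "user",
        "content": (
            "Re-check your previous answer. Flag anything uncertain or "
            "likely hallucinated, and give a corrected version."
        ),
    },
]
second = client.chat.completions.create(model="o3", messages=messages)
print(second.choices[0].message.content)
```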

Or just start off on the right foot.  Marius recommends this prompt:

Ultra-deep thinking mode. Greater rigor, attention to detail, and multi-angle verification. Start by outlining the task and breaking down the problem into subtasks. For each subtask, explore multiple perspectives, even those that seem initially irrelevant or improbable. Purposefully attempt to disprove or challenge your own assumptions at every step. Triple-verify everything. Critically review each step, scrutinize your logic, assumptions, and conclusions, explicitly calling out uncertainties and alternative viewpoints. Independently verify your reasoning using alternative methodologies or tools, cross-checking every fact, inference, and conclusion against external data, calculation, or authoritative sources. Deliberately seek out and employ at least twice as many verification tools or methods as you typically would. Use mathematical validations, web searches, logic evaluation frameworks, and additional resources explicitly and liberally to cross-verify your claims. Even if you feel entirely confident in your solution, explicitly dedicate additional time and effort to systematically search for weaknesses, logical gaps, hidden assumptions, or oversights. Clearly document these potential pitfalls and how you’ve addressed them. Once you’re fully convinced your analysis is robust and complete, deliberately pause and force yourself to reconsider the entire reasoning chain one final time from scratch. Explicitly detail this last reflective step.

I haven’t tried it yet, but it doesn’t cost more than a simple “Control C.”  Perhaps some of you can do better yet, depending of course on what your purpose is.

There is no reason why you cannot ask for better, and get it.  Beware those who dump on hallucinations without trying to do better — they are the Negative Nellies of LLM land.

And oh — o3 pro is coming soon.

Spain facts of the day

  • In 1990, less than 1% of the Spanish population were foreign residents. The foreign-born population was even smaller, with immigrants accounting for about 0.5% of residents.
  • In 2023, Spain alone accounted for 23% of all naturalizations in the European Union.

As of 2025…

  • 14% of residents in Spain are foreign nationals.
  • Nearly 20% of Spain’s population was born outside the country.
  • 1 in 7 residents of Madrid were born in Latin America.

That is from the Show Notes to Rasheed Griffith’s podcast.

Large Language Models, Small Labor Market Effects

That is a new paper from Denmark, by Anders Humlum and Emilie Vestergaard, here is the abstract:

We examine the labor market effects of AI chatbots using two large-scale adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers, 7,000 workplaces), linked to matched employer-employee data in Denmark. AI chatbots are now widespread—most employers encourage their use, many deploy in-house models, and training initiatives are common. These firm-led investments boost adoption, narrow demographic gaps in take-up, enhance workplace utility, and create new job tasks. Yet, despite substantial investments, economic impacts remain minimal. Using difference-in-differences and employer policies as quasi-experimental variation, we estimate precise zeros: AI chatbots have had no significant impact on earnings or recorded hours in any occupation, with confidence intervals ruling out effects larger than 1%. Modest productivity gains (average time savings of 2.8%), combined with weak wage pass-through, help explain these limited labor market effects. Our findings challenge narratives of imminent labor market transformation due to Generative AI.

Not a surprise to me of course.  Arjun Ramani offers some interpretations.  And elsewhere (FT): “Google’s core search and advertising business grew almost 10 per cent to $50.7bn in the quarter, surpassing estimates for between 8 per cent and 9 per cent.”

Slow takeoff, people, slow takeoff.  I hope you are convinced by now.

The macroeconomics of tariff shocks

There is a new paper by Adrien Auclert, Matthew Rognlie, and Ludwig Straub.  It seems timely:

We study the short-run effects of import tariffs on GDP and the trade balance in an open-economy New Keynesian model with intermediate input trade. We find that temporary tariffs cause a recession whenever the import elasticity is below an openness-weighted average of the export elasticity and the intertemporal substitution elasticity. We argue this condition is likely satisfied in practice because durable goods generate great scope for intertemporal substitution, and because it is easier to lose competitiveness on the global market than to substitute between home and foreign goods. Unilateral tariffs tend to improve the trade balance, but when other countries retaliate the trade balance worsens and the recession deepens. Taking into account the recessionary effect of tariffs dramatically lowers the optimal unilateral tariff level derived in standard trade theory.
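Schematically, and in my own notation rather than the paper’s, the recession condition described in that abstract reads roughly as follows; the weights are illustrative, since the paper derives the exact condition from the model:

```latex
% Schematic version of the condition in the abstract (notation mine, not the paper's):
% temporary tariffs are recessionary when the import elasticity falls below an
% openness-weighted average of the export elasticity and the intertemporal
% elasticity of substitution.
\[
  \varepsilon_{M} \;<\; \omega\,\varepsilon_{X} + (1-\omega)\,\sigma ,
  \qquad 0 < \omega < 1 ,
\]
% where \varepsilon_{M} is the import elasticity, \varepsilon_{X} the export elasticity,
% \sigma the intertemporal elasticity of substitution, and \omega a weight that
% rises with the economy's openness.
```

The exact weight depends on the model’s structure; the point is simply that greater scope for intertemporal substitution (durables) and a higher export elasticity make the condition easier to satisfy.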

I wonder what the policy implications might be.  Here is a good thread on the paper.

Thursday assorted links

1. Are these the most beautiful colleges in America?  Where is the brutalism?  UC Irvine has some.  I feel like I have been to way, way too many of those campuses.

2. Facts about egg supply chains.

3. Good robot photos in this NYT story about China.

4. “Apply to our 1517 garage science cohort – we’re looking for renegade scientists to kickstart a revolution.”

5. Greek primary surplus is 4.8% of GDP, remarkable.  And Argentina forecasts are now for 5.5% growth for 2025, also wonderful.

6. “Australian Radio Network (ARN), the media company behind KIIS, as well as Gold and iHeart, used an AI-generated female Asian host to broadcast 4 hours of midweek radio, without disclosing it.”

Whose disinformation?

Meta, which owns Facebook and Instagram, blocked news from its apps in Canada in 2023 after a new law required the social media giant to pay Canadian news publishers a tax for publishing their content. The ban applies to all news outlets irrespective of origin, including The New York Times.

Amid the news void, Canada Proud and dozens of other partisan pages are rising in popularity on Facebook and Instagram before the election. At the same time, cryptocurrency scams and ads that mimic legitimate news sources have proliferated on the platforms. Yet few voters are aware of this shift, with research showing that only one in five Canadians knows that news has been blocked on Facebook and Instagram feeds.

The result is a “continued spiral” for Canada’s online ecosystem toward disinformation and division, said Aengus Bridgman, director of the Media Ecosystem Observatory, a Canadian project that has studied social media during the election.

Meta’s decision has left Canadians “more vulnerable to generative A.I., fake news websites and less likely to encounter ideas and facts that challenge their worldviews,” Dr. Bridgman added.

You can argue this one all sorts of ways, but perhaps there is a lesson in here…?  Here is the full NYT piece, via the excellent Kevin Lewis.