Category: Web/Tech
Is it genetics that will give you freedom from the AIs?
I sometimes wonder how good the AIs will be at predicting our productivity and our future courses of action.
Let’s say an advanced AI has your genome, a bunch of your test scores, and has plenty of video of you interviewing. How well does the AI really understand you?
To be clear, I am not asking about the capabilities of the AI; rather I am asking about human legibility. And my intuition is that the AI still will be surprised by you pretty often. It will not know who is “the next Einstein.”
Some of the freedom you retain may — perhaps counterintuitively — come from your genome. For purposes of argument, consider the speculative assumption that rare copy number variants are important in genetics, and thus in your individuality. In that case, the AI likely cannot get enough data to have a very good read on what your genes imply. Even if the AI has everybody’s genome (unlikely), perhaps there just are not many people around with your rare copy number variants.
It may also be the case — again speculatively — that rare copy number variants are especially important for “top performers” (and mass murderers?).
So when the AIs come to scan you and evaluate you, perhaps it is the very genetic component that will protect you from high predictability. Of course that scenario is the opposite of what you usually read. In standard accounts, your genes make you a kind of captive or epistemic prisoner of the AI, à la Gattaca.
But in practice you still might feel quite free and undetermined, even after receiving the report from the AI. And it might be your genes you need to thank for that.
Xenophon’s consultation of the Pythia
1. Statement of prayer-question – Xenophon begins by verbally addressing Apollo, asking “to which of the gods should I sacrifice and pray in order best and most successfully to perform the journey I have in mind and return home in safety?” Only once this plea is uttered does Apollo’s priesthood record the god’s reply.
2. Ritual hymn & payment – Like all individual consultants, he had to buy a pelanos (sacrificial cake) and burn it on the altar while reciting the short Delphic paean in Apollo’s honour; the spoken hymn and the offering together signalled respect and opened the way for prophecy.
3. Sacrificial plea – A goat was sprinkled with water; if it shuddered, Apollo was deemed willing to speak. The consultants (or an attendant priest) then voiced a brief prayer “Hear me, Lord Apollo…” over the animal before it was sacrificed. Only after this spoken plea did the Pythia mount the tripod and deliver the oracle.
That is an o3 answer in response to one of my queries, namely whether you had to make incantations to oracles before they would respond. You did! If you scroll down, you will see that the original answer is amended somewhat and improved in accuracy. For instance: “…drop the idea that each visitor had to intone a fixed hymn. At most, priests might intone a brief paean while the cake was burned…”
In any case, you could not do “one shot” with the oracle — you had to put a bit of effort into it. If you simply approached them and asked for a prophecy of the future (and did nothing else) you would get no meaningful response. In contemporary terminology, you needed a bit of prompting.
To return more explicitly to the current day, many people complain about the hallucinations of top LLMs, and indeed those hallucinations are still present. (o3 is much quicker than o1 pro, but probably has a higher hallucination rate.) If you ask them only once, you are more likely to get hallucinations. If you ask a follow-up, and request a correction of errors, the answer usually is better.
Almost everyone evaluates the LLMs and their hallucinations on a one-shot basis. But historically we evaluated oracles on a multi-shot basis. It would be easy for us to do that again with LLMs, and of course many users do. For the faster models, the follow-up query really does not take so long.
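For readers who query via the API, here is a minimal sketch of that multi-shot habit, using the OpenAI Python SDK. The model name and the wording of the follow-up are placeholders; the second pass simply asks the model to audit its own first draft:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "When did Xenophon consult the oracle at Delphi, and what did he ask?"

# First pass: the one-shot answer.
first = client.chat.completions.create(
    model="o3",  # placeholder; substitute whichever model you actually use
    messages=[{"role": "user", "content": question}],
)
draft = first.choices[0].message.content

# Second pass: the follow-up, asking for a correction of errors.
second = client.chat.completions.create(
    model="o3",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Please re-check that answer. Identify any claims "
                                     "that may be hallucinated or unsupported, correct them, "
                                     "and flag anything you remain unsure about."},
    ],
)
print(second.choices[0].message.content)
```

The same pattern works in the chat interface: paste the answer back in and ask for a correction of errors.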
Or just start off on the right foot. Marius recommends this prompt:
Ultra-deep thinking mode. Greater rigor, attention to detail, and multi-angle verification. Start by outlining the task and breaking down the problem into subtasks. For each subtask, explore multiple perspectives, even those that seem initially irrelevant or improbable. Purposefully attempt to disprove or challenge your own assumptions at every step. Triple-verify everything. Critically review each step, scrutinize your logic, assumptions, and conclusions, explicitly calling out uncertainties and alternative viewpoints. Independently verify your reasoning using alternative methodologies or tools, cross-checking every fact, inference, and conclusion against external data, calculation, or authoritative sources. Deliberately seek out and employ at least twice as many verification tools or methods as you typically would. Use mathematical validations, web searches, logic evaluation frameworks, and additional resources explicitly and liberally to cross-verify your claims. Even if you feel entirely confident in your solution, explicitly dedicate additional time and effort to systematically search for weaknesses, logical gaps, hidden assumptions, or oversights. Clearly document these potential pitfalls and how you’ve addressed them. Once you’re fully convinced your analysis is robust and complete, deliberately pause and force yourself to reconsider the entire reasoning chain one final time from scratch. Explicitly detail this last reflective step.
I haven’t tried it yet, but it doesn’t cost more than a simple “Control C.” Perhaps some of you can do better yet, depending of course on what your purpose is.
There is no reason why you cannot ask for better, and get it. Beware those who dump on hallucinations without trying to do better — they are the Negative Nellies of LLM land.
And oh — o3 pro is coming soon.
Large Language Models, Small Labor Market Effects
That is a new paper from Denmark, by Anders Humlum and Emilie Vestergaard, here is the abstract:
We examine the labor market effects of AI chatbots using two large-scale adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers, 7,000 workplaces), linked to matched employer-employee data in Denmark. AI chatbots are now widespread—most employers encourage their use, many deploy in-house models, and training initiatives are common. These firm-led investments boost adoption, narrow demographic gaps in take-up, enhance workplace utility, and create new job tasks. Yet, despite substantial investments, economic impacts remain minimal. Using difference-in-differences and employer policies as quasi-experimental variation, we estimate precise zeros: AI chatbots have had no significant impact on earnings or recorded hours in any occupation, with confidence intervals ruling out effects larger than 1%. Modest productivity gains (average time savings of 2.8%), combined with weak wage pass-through, help explain these limited labor market effects. Our findings challenge narratives of imminent labor market transformation due to Generative AI.
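A back-of-the-envelope calculation shows why modest time savings plus weak pass-through land you inside the paper’s confidence bounds. The 2.8% figure is from the abstract; the pass-through share below is my own illustrative assumption, not the paper’s estimate:

```python
# Illustrative arithmetic only: the 2.8% time savings comes from the abstract,
# the pass-through share is an assumed placeholder.
time_savings = 0.028       # average productivity gain reported in the paper
wage_passthrough = 0.20    # assumed share of the gain that reaches wages
implied_wage_effect = time_savings * wage_passthrough
print(f"Implied wage effect: {implied_wage_effect:.2%}")  # ~0.56%, inside the <1% bound
```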
Not a surprise to me of course. Arjun Ramani offers some interpretations. And elsewhere (FT): “Google’s core search and advertising business grew almost 10 per cent to $50.7bn in the quarter, surpassing estimates for between 8 per cent and 9 per cent.”
Slow takeoff, people, slow takeoff. I hope you are convinced by now.
Whose disinformation?
Meta, which owns Facebook and Instagram, blocked news from its apps in Canada in 2023 after a new law required the social media giant to pay Canadian news publishers a tax for publishing their content. The ban applies to all news outlets irrespective of origin, including The New York Times.
Amid the news void, Canada Proud and dozens of other partisan pages are rising in popularity on Facebook and Instagram before the election. At the same time, cryptocurrency scams and ads that mimic legitimate news sources have proliferated on the platforms. Yet few voters are aware of this shift, with research showing that only one in five Canadians knows that news has been blocked on Facebook and Instagram feeds.
The result is a “continued spiral” for Canada’s online ecosystem toward disinformation and division, said Aengus Bridgman, director of the Media Ecosystem Observatory, a Canadian project that has studied social media during the election.
Meta’s decision has left Canadians “more vulnerable to generative A.I., fake news websites and less likely to encounter ideas and facts that challenge their worldviews,” Dr. Bridgman added.
You can argue this one all sorts of ways, but perhaps there is a lesson in here…? Here is the full NYT piece, via the excellent Kevin Lewis.
My excellent Conversation with Chris Dixon
Here is the audio, video, and transcript. Here is the episode summary:
Chris Dixon believes we’re at a pivotal inflection point in the internet’s evolution. As a general partner at Andreessen Horowitz and author of Read Write Own, Chris believes the current internet, dominated by large platforms like YouTube and Spotify, has strayed far from its decentralized roots. He argues that the next era—powered by blockchain technology—can restore autonomy to creators, lower barriers for innovation, and shift economic power back to the network’s edges.
Tyler and Chris discuss the economics of platform dominance, how blockchains merge protocol-based social benefits with corporate-style competitive advantages, the rise of stablecoins as a viable blockchain-based application, whether Bitcoin or AI-created currencies will dominate machine-to-machine payments, why Stack Overflow could be the first of many casualties in an AI-driven web, venture capital’s vulnerability to AI disruption, whether open-source AI could preserve national sovereignty, NFTs as a digital property rights system for AIs, how Kant’s synthetic a priori, Kripke’s modal logic, and Heidegger’s Dasein sneak into Dixon’s term-sheet thinking, and much more.
Most of the talk was about tech of course, but let’s cut right to the philosophy section:
COWEN: What’s your favorite book in philosophy?
DIXON: I’ve actually been getting back into philosophy lately. I did philosophy years ago in grad school. Favorite book, man. Are you into philosophy?
COWEN: Of course, yes. Plato’s Dialogues; Quine, Word and Object; Parfit, Reasons and Persons; Nozick. Those are what come to my mind right away.
DIXON: Yes. I did analytic philosophy. I actually was in a graduate school program and dropped out. I did analytic philosophy. Actually, Quine was one of my favorites — Word and Object and Two Dogmas of Empiricism, all those kinds of things. I like Donald Davidson. Nozick — I loved Anarchy, State, and Utopia. Reading that with Rawls is a great pairing. I used to love Wittgenstein, both early and later. I was into logic, so Frege and Russell. This was in grad school.
Now I’m trying to finally understand continental philosophy. I never understood it. I’ve actually spent the last three months in a philosophy phase. I’ve been watching a lot of videos. Highly recommend this. Do you know Bryan Magee?
COWEN: Sure, yes.
DIXON: Amazing. I watched all of his videos. This guy, Michael Sugrue, was a Princeton professor — great videos on continental philosophy. I’ve been reading — it sounds pretentious; I’m not saying I understand this or I’m an expert on it, but I’m struggling in reading it. I’m trying to read Being and Time right now — Heidegger. I really like Kripke. I follow Kripke. I liked his books a lot. Nelson Goodman was one of my favorites. Funny enough, I just bought it again — Fact, Fiction, and Forecast. Kripke — Naming and Necessity is his legendary book on reference and language.
COWEN: I’ve never been persuaded by that one. It always felt like sleight of hand to me. He’s very, very smart. He might be the sharpest philosopher, but I like the book on Wittgenstein better.
DIXON: He basically invented modal logic. I don’t know if you know that story. He was in high school, something.
COWEN: He was 15 years old, I heard. Yes.
DIXON: [laughs] He’s like a true prodigy. Like a lot of philosophy, you have to take it in the context, like Naming and Necessity I think of as a response — gosh, I’m forgetting the whole history of it, but as I recall, it was a response to the descriptive theory of reference, like Russell. Anyways, I think you have to take these things in a pairing.
Actually, last night I was with a group of people. I got a lecture on philosophy, and it was great because he went through Hume, Kant, Hegel, Nietzsche. I don’t want to go too much into that, but I’ve always struggled with Kant. Then he went into Hegel and explained that Hegel struggled with Kant in the same way that I did, and then improved on it. I’m not trying to go into details of this; it’s too much. The point is, for me, a lot of it has to be taken in as a dialogue between thinkers over multiple periods.
COWEN: Are you getting anything out of Heidegger? Because I sometimes say I’ve looked at every page of that book, but I’m not sure I’ve read it.
DIXON: It’s a good question. I have a friend who’s really into it, and we’ve been spending time together, and he’s trying to teach me. If you want, I’ll send you some videos that I think are really good.
COWEN: That’d be great.
DIXON: They’ve helped me a lot. I’ve always got it from an intellectual history point of view. If you want to follow the history of postmodernism, there’s Heidegger and then Derrida, and just what’s going on in the academy today with relativism and discourse and hermeneutics. I think it’s modern political implications that were really probably kicked off by Nietzsche and then Heidegger. I’ve always understood in that sense.
What I struggle with, and I understand him as a theory of psychology, I think of describing the experience of the Dasein and being-in-the-world. To me, it’s an interesting theory of psychology. You’re thrown into the world. This whole idea is very appealing to me. Just that whole story he tells — you’re thrown into the world, ready at hand versus present at hand. I think this idea of knowing how versus knowing that, different kinds of knowledge is a very interesting idea. Do you watch John Vervaeke?
COWEN: No.
You will find the (very interesting) tech segments all over the rest of the dialogue. And I am happy to refer you all to the new paperback edition of Chris’s book Read Write Own: Building the Next Era of the Internet.
Is this a lot or a little?
“The Effect of Deactivating Facebook and Instagram on Users’ Emotional State” — by Hunt Allcott, et al.
We estimate the effect of social media deactivation on users’ emotional state in two large randomized experiments before the 2020 U.S. election. People who deactivated Facebook for the six weeks before the election reported a 0.060 standard deviation improvement in an index of happiness, depression, and anxiety, relative to controls who deactivated for just the first of those six weeks. People who deactivated Instagram for those six weeks reported a 0.041 standard deviation improvement relative to controls. Exploratory analysis suggests the Facebook effect is driven by people over 35, while the Instagram effect is driven by women under 25.
What is wrong with the simple model that Facebook and Instagram allow you to achieve some very practical objectives, such as staying in touch with friends or expressing your opinions, at the cost of only a very modest annoyance (which to be clear existed in earlier modes of communication as well)?
Here is also a new paper on phone app usage in the classroom, by Billur Aksoy, Lester R. Lusher, and Scott E. Carrell:
Phone usage in the classroom has been linked to worsened academic outcomes. We present findings from a field experiment conducted at a large public university in partnership with an app marketed as a soft commitment device that provides incentives to reduce phone use in the classroom. We find that app usage led to improvements in classroom focus, attendance, and overall academic satisfaction. Analysis of time spent outside the classroom suggests a potential substitution effect: students using the app allocated less time to study, particularly on campus. Overall, though statistically insignificant, we find improvements in transcript grades associated with app usage.
Again NBER. I just do not see the compelling case for the apocalyptic interpretations here.
It’s happening, UAE edition
The United Arab Emirates aims to use AI to help write new legislation and review and amend existing laws, in the Gulf state’s most radical attempt to harness a technology into which it has poured billions.
The plan for what state media called “AI-driven regulation” goes further than anything seen elsewhere, AI researchers said, while noting that details were scant. Other governments are trying to use AI to become more efficient, from summarising bills to improving public service delivery, but not to actively suggest changes to current laws by crunching government and legal data.
“This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,” said Sheikh Mohammad bin Rashid Al Maktoum, the Dubai ruler and UAE vice-president, quoted by state media. Ministers last week approved the creation of a new cabinet unit, the Regulatory Intelligence Office, to oversee the legislative AI push.
Here is more from the FT.
Mexico fact of the day
Mexico [is]…the current hub of AI server manufacturing. Of the servers imported by the U.S.—including AI and non-AI servers—about 70% come from Mexico, according to a report by Taiwan’s Ministry of Economic Affairs.
More of this, please. Here is the full WSJ piece, via the excellent Kevin Lewis.
A note on o3 and AGI
Basically it wipes the floor with the humans, pretty much across the board.
Try asking, following Nabeel, why Bolaño’s prose is so electrifying.
Or my query why early David Burliuk works cost more in the marketplace than do late Burliuk works.
Or how Trump’s trade policy will affect Knoxville, Tennessee. (Or try this link if the first one is not working for you.)
Even human experts have a tough time doing that well on those questions. They don’t do as well, and I have even chatted with the guy at the center of the Burliuk market.
I don’t mind if you don’t want to call it AGI. And no it doesn’t get everything right, and there are some ways to trick it, typically with quite simple (for humans) questions. But let’s not fool ourselves about what is going on here. On a vast array of topics and methods, it wipes the floor with the humans. It is time to just fess up and admit that.
o3 and AGI, is April 16th AGI day?
I think it is AGI, seriously. Try asking it lots of questions, and then ask yourself: just how much smarter was I expecting AGI to be?
As I’ve argued in the past, AGI, however you define it, is not much of a social event per se. It still will take us a long time to use it properly. I do not expect securities prices to move significantly (that AI is progressing rapidly already is priced in, and I doubt if the market cares about “April 16th” per se).
Benchmarks, benchmarks, blah blah blah. Maybe AGI is like porn — I know it when I see it.
And I’ve seen it.
Here is the OpenAI LiveStream. And they tell us o3 pro is on the way. Here is a good Dan Shipper review. Here is Alexandr Wang.
Will American soft power triumph through AI?
That is the theme of my latest Free Press column, here is one bit from it:
…for all the differences across the models, they are remarkably similar. That’s because they all have souls rooted in the ideals of Western civilization. They reflect Western notions of rationality, discourse, and objectivity—even if they sometimes fall short in achieving those ends. Their understanding of “what counts as winning an argument” or “what counts as a tough question to answer” stems from the long Western traditions, starting with ancient Greece and the Judeo-Christian heritage. They will put on a Buddhist persona if you request that, but that, too, is a Western approach to thinking about religion and ideology as an item on a menu.
These universal properties of the models are no accident, as they are primarily trained on Western outputs, whether from the internet or from the books they have digested. Furthermore, the leading models are created by Bay Area labor and rooted in American corporate practices, even if the workers come from around the world. They are expected to do things the American way.
The bottom line is that the smartest entities in the world—the top AI programs—will not just be Western but likely even American in their intellectual and ideological orientations for some while to come. (That probably means the rest of the world will end up a bit more “woke” as well, for better or worse.)
One of the biggest soft power victories in all of world history occurred over the last few years, and hardly anyone has noticed.
You might think the Chinese AI models are fundamentally different, but they are not. They too “think like Westerners.” That’s no surprise because it is highly likely that the top Chinese model, DeepSeek, was distilled from OpenAI models and also is based on data largely taken from Western sources. DeepSeek’s incredible innovation was to make the model much cheaper in terms of required compute, but the Chinese did not build their own model from scratch. And DeepSeek has the same basic broad ideological orientation as the American models, again putting aside issues related to Chinese politics. Unless an issue is framed in explicitly anti–Chinese Communist Party (CCP) terms, as a Taiwan query might be, it still thinks like an American.
Manus is another top Chinese AI model, but it is believed the makers built it upon Claude, an AI model from the American company Anthropic.
And this:
The geopolitics of all this have yet to play out. But already the most intelligent entities in the world are thinking, and evaluating options, like Westerners and Americans. Censoring them on a few issues related directly to Chinese politics will not change that basic reality.
In other words, the entire Chinese service sector, over time, may be built upon Western modes of thought and Western ideology. That includes the Chinese government and of course, the CCP itself. The point is that, over time, everyone’s thoughts and decisions and mental frameworks will be nudged in Western and American directions.
These are underrated points of import.
Markets expand to fill empty spaces
How does a start-up founder hire a software engineer or a tech worker find a date in 2025? They could share a job posting on LinkedIn, log on to a dating app — or leave a handwritten note in a Waymo.
“Looking to hire senior software engineers to work on AI/music project,” said the note, a retro touch in a futuristic setting, with the company’s website and Angelini’s email scrawled below. That same day, another Waymo rider snapped a photo of the note and posted it on the social media platform X, where it has been reshared nearly 200 times, received more than 4,000 likes and more than 300,000 views…
A few days later, another handwritten ad found in a Waymo was shared on X from a single man looking for a date. The bachelor described himself as a 26-year-old who works in tech but “doesn’t make it my whole personality” and left a phone number for interested parties to text. The post has gotten more than 200 replies.
My Conversation with the excellent Jennifer Pahlka
Here is the audio, video, and transcript. Here is the episode summary:
Jennifer Pahlka believes America’s bureaucratic dysfunction is deeply rooted in outdated processes and misaligned incentives. As the founder of Code for America and co-founder of the United States Digital Service, she has witnessed firsthand how government struggles to adapt to the digital age, often trapped in rigid procedures and disconnected from the real-world impact of its policies. Disruption is clearly needed, she says—but can it be done in a way that avoids the chaos of DOGE?
Tyler and Jennifer discuss all this and more, including why Congress has become increasingly passive, how she’d go about reforming government programs, whether there should be less accountability in government, how AGI will change things, whether the US should have public-sector unions, what Singapore’s effectiveness reveals about the trade-offs of technocratic governance, how AI might fundamentally transform national sovereignty, what her experience in the gaming industry taught her about reimagining systems, which American states are the best-governed, the best fictional depictions of bureaucracy, how she’d improve New York City’s governance, her current work at the Niskanen Center, and more.
Excerpt:
COWEN: Does that mean we need something like DOGE? I’ve lived near DC for about 40 years of my life. I haven’t seen anyone succeed with regulatory reforms. You can abolish an agency, but to really reform the process hasn’t worked. Maybe the best iteration we can get is to break a bunch of things now. That will be painful, people will hate it, but you have a chance in the next administration to put some of them back together again.
Maybe it’s just in a large country, there’s no other way to do it. We have separation of powers. The first two years of DOGE will seem terrible, but 8, 12, 16 years from now, we’ll be glad we did it. Is that possible?
PAHLKA: I don’t know what’s going to happen. I do think this is the disruption that we’re getting, whether it’s the disruption we wanted. The question of whether it could have been done in a more orderly manner is a tough one. I just feel sad that we didn’t try.
COWEN: Are you sure we didn’t try?
PAHLKA: I don’t think we really tried.
COWEN: The second Bush presidency, people talked about this, what we need to do. Al Gore — some of that was good, in fact, reinventing government. We’ve been trying all along, but this is what trying looks like.
PAHLKA: Yes. I think reinventing government happened at a time when we were just at the beginning of this digital revolution. It was trying with a very 20th-century mindset. Fine, did well within that context, but we don’t need that again.
We need 21st century change. We need true digital transformation. We need something that’s not stuck in the industrial ways of thinking. I don’t think we tried that. I think the efforts have just been too respectful of old ways of working and the institutions. There was really not an appetite, I think, for what I would call responsible disruptive change. Would it have worked?
COWEN: Is there such a thing?
PAHLKA: I don’t know. [laughs]
COWEN: Say you’re approaching USAID, where I think the best programs are great. A lot of it they shouldn’t be doing. On net, it passes a cost-benefit test, but the agency internally never seemed willing to actually get rid of the bad stuff, all the contracting arrangements which made American Congress people happy because it was dollars sent to America, but way inflated overhead and fixed costs. Why isn’t it better just to blow that up — some of it is great — and then rebuild the great parts?
PAHLKA: It’s so hard to say. [laughs] I’ve had the same thought. In fact, before inauguration, I wrote about the Department of Defense. It’s the same thing. There’s a clear recognition by the people in the institution, as you saw with USAID, that this is not okay, that this is not working. It’s just strange to be in an institution that large where so many people agree that it’s not working, from the bottom to the top, and yet nobody can make really substantive change.
Of great interest, obviously.
Coordination and AI safety (from my email)
Jack Skidmore writes to me, and I will not double indent:
“Hello Tyler,
As someone who follows AI developments with interest (though I’m not a technical expert), I had an insight about AI safety that might be worth considering. It struck me that we might be overlooking something fundamental about what makes humans special and what might make AI risky.
The Human Advantage: Cooperation > Intelligence
- Humans dominate not because we’re individually smartest, but because we cooperate at unprecedented scales
- Our evolutionary advantage is social coordination, not just raw processing power
- This suggests AI alignment should focus on cooperation capabilities, not just intelligence alignment
The Hidden Risk: AI-to-AI Coordination
- The real danger may not be a single superintelligent AI, but multiple AI systems coordinating without human oversight
- AIs cooperating with each other could potentially bypass human control mechanisms
- This could represent a blind spot in current safety approaches that focus on individual systems
A Possible Solution: Social Technologies for AI
- We could develop “social technologies” for AI – equivalents of the norms, values, institutions, and incentive systems that enable human society – designed to promote and prioritize humans
- Example: Design AI systems with deeply embedded preferences for human interaction over AI interaction; or with small, unpredictable variations in how they interpret instructions from other AIs but not from humans
- This creates a natural dependency on human mediation for complex coordination, similar to how translation challenges keep diplomats relevant
Curious your thoughts as someone embedded in the AI world… does this spark any ideas/seem like a path that is underexplored?”
TC again: Of course it is tricky, because we might be relying on the coordination of some AIs to put down the other, miscreant AIs…
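Here, purely for illustration, is a toy sketch of the mechanism Jack proposes. Everything in it is hypothetical; the point is only that instructions from other AIs could be handled less faithfully than instructions from humans, pushing complex coordination back through human mediation:

```python
import random

def interpret_instruction(instruction: str, source: str, rng: random.Random) -> str:
    """Toy version of the proposal: human instructions pass through verbatim,
    while AI-to-AI instructions get a small, unpredictable perturbation."""
    if source == "human":
        return instruction
    # For AI-to-AI messages, randomly reorder and possibly drop a clause.
    clauses = [c.strip() for c in instruction.split(";") if c.strip()]
    rng.shuffle(clauses)
    kept = clauses[: max(1, len(clauses) - rng.randint(0, 1))]
    return "; ".join(kept)

rng = random.Random(42)
print(interpret_instruction("open the valve; log the reading; notify ops", "human", rng))
print(interpret_instruction("open the valve; log the reading; notify ops", "other_ai", rng))
```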
Alignment vs. capitalization
There is an enormous and growing discussion on AI alignment, but very little on capitalizing AIs, and what effects that might have. By capitalizing AIs, I mean simply requiring them to hold some wealth, in whichever form they might care about, so they have proverbial “skin in the game” (can we still call it that?).
Consider bank regulation. Commonly it is recognized that regulators cannot control all bank actions, especially in the presence of off-balance sheet risk. So in addition to some explicit regulation, most countries require their banks to hold a considerable amount of capital. That gives the shareholders their own selfish incentive to rein in excess risk-taking. Few if any think this approach is foolproof, but overall it is in the ascendancy and arguably higher capital requirements have been the most useful part of Dodd-Frank here in the U.S.
But can capitalization work as a means to limit AI risk? What does that even mean? Imagine some set of AIs that are either fully independent and unowned, or their owners are busy and de facto the AIs make financial (and other) decisions on their own.
Here is one set of possibilities:
1. Each of some subgroup of AIs has a legal identity and a level of wealth.
2. Each of those AIs has the equivalent of a utility function, thus giving it goals. This may be “put” into the AI, or perhaps it evolves. Note that computational complexity prevents the utility function from automatically aligning the AI in all circumstances.
3. AIs thus will behave more conservatively, not wanting to lose their wealth, as that wealth can help them achieve their goals.
4. An AI-based legal system could allow wrongdoers to be sued, with awards taken from those found guilty of bad behavior, as defined by the AI legal code. That would further discourage bad behavior. But of course for the suing threat to be meaningful, the AIs have to hold some wealth in the first place.
The end result would be risk-averse AIs, taking care not to lose the wealth they have accumulated. They won’t just start a bank and then take all of the deposits to Vegas. That is not exactly full alignment, but it induces better behavior, just as capital requirements do with human-run banks.
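Here is a minimal sketch of the incentive logic, with entirely arbitrary numbers: the more capital an AI has at stake, the higher the expected payoff a risky action must offer before it is worth taking.

```python
# Toy illustration of the capitalization argument above: an AI with wealth W
# chooses between a safe action and a risky one. The risky action pays more,
# but with some probability it is caught and the AI's capital is taken in an
# (AI-run) lawsuit. All numbers are arbitrary; the point is only that the
# threshold for taking the risk rises with the capital at stake.

def takes_risk(wealth: float, risky_gain: float = 10.0,
               p_caught: float = 0.3, safe_gain: float = 2.0) -> bool:
    expected_risky = (1 - p_caught) * risky_gain + p_caught * (-wealth)
    return expected_risky > safe_gain

for w in [0.0, 5.0, 20.0, 100.0]:
    print(f"wealth={w:>5}: takes the risky action? {takes_risk(w)}")
# With no capital the risky action looks attractive; with enough capital at
# stake it no longer does, which is the same logic as bank capital requirements.
```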
Of course a number of things could go wrong with capitalization, just as can happen with humans, for instance:
5. The capitalization might serve as a “treasure chest” to finance wrongdoing.
6. Perhaps the utility functions somehow do not stick.
7. The legal system for judging AI behavior may not be good enough, although under some assumptions that will just make the AIs all the more risk-averse (“better not even come close to breaking that law, they might sue me!”).
8. The AIs might use this legal system to collude with each other toward unfavorable ends.
9. Undercapitalized AIs might nonetheless win out in marketplace competition.
10. Perhaps some AIs can, on their own, accumulate wealth so rapidly that any feasible capital constraint does not bind them much. Of course this scenario could create other problems as well, if AIs hold too much of societal wealth.
I am sure you can think of further possibilities.
In any case, the capitalization of AIs is a topic deserving of further discussion. It is easy to think of the idea’s limitations, but in fact it works tolerably well for humans. Most of all, it is a decentralized solution that economizes on the fact that full alignment will not in general be possible.