Brian Slesinsky on AI taxes (from my email)

My preferred AI tax would be a small tax on language model API calls, somewhat like a Tobin tax on currency transactions. This would discourage running language models in a loop or allowing them to “think” while idle.
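To make this concrete, here’s a minimal sketch of how such a levy might be tallied in an API wrapper. Everything in it is illustrative: the classes and the rate are made up for the example, not a real provider API or a proposed tax schedule.

```python
TAX_PER_CALL = 0.001  # hypothetical levy: a tenth of a cent per call


class EchoModel:
    """Stand-in for a real language model client."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


class TaxedClient:
    """Wraps a model client and accrues a small levy on every call."""

    def __init__(self, client):
        self.client = client
        self.tax_owed = 0.0

    def complete(self, prompt: str) -> str:
        # The levy is per call, so an agent running the model in a tight
        # loop pays in proportion to how often it acts.
        self.tax_owed += TAX_PER_CALL
        return self.client.complete(prompt)


if __name__ == "__main__":
    client = TaxedClient(EchoModel())
    for _ in range(1000):
        client.complete("next move?")
    print(f"levy owed after 1,000 calls: ${client.tax_owed:.2f}")
```

The point is that the charge scales with how often an agent acts, so it bears hardest on exactly the tight loops that are hardest to supervise.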

For now, we mostly use large language models under human supervision, as in AI chat. This is relatively safe because the AI is frozen most of the time [1]: you get as much time as you like to think about your next move, and the AI doesn’t get the same advantage. If you don’t like what the AI is saying, you can simply close the chat and walk away.

Under such conditions, a sorcerer’s apprentice shouldn’t be able to start anything they can’t stop. But many people are experimenting with running AI in fully automatic mode, and that seems much more dangerous. It’s not yet as dangerous as experimenting with computer viruses, but that could change.

Such a tax doesn’t seem necessary today because the best language models are very expensive [2]. But making and implementing tax policy takes time, and we should be concerned about what happens when costs drop.

Another limit that would discourage dangerous experiments is a minimum reaction time. Today, language models are slow; it reminds me of using a dial-up modem in the old days. But we should be concerned about what happens when AIs start reacting to events much faster than people do.

Different language models quickly reacting to each other in a marketplace or forum could cause cascading effects, similar to a “flash crash” in a financial market. On social networks, message volume is already far higher than people can keep up with, and it could get worse when conversations between AIs start running at superhuman speeds.

Financial markets don’t have limits on reaction time, but trading hours and circuit breakers give investors time to think about what’s happening in unusual situations. Social networks sometimes have rate limits too, but enforcing a latency floor at the language model API seems more comprehensive.
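As a sketch, a latency floor could be enforced directly in an API wrapper: hold the reply until a minimum time has elapsed, no matter how fast the model runs. The function and the floor value here are hypothetical, not any provider’s actual interface.

```python
import time

MIN_LATENCY_SECONDS = 2.0  # hypothetical floor, not a proposed value


def complete_with_floor(model_complete, prompt: str) -> str:
    """Call the model, then hold the reply until the floor has elapsed."""
    start = time.monotonic()
    reply = model_complete(prompt)
    remaining = MIN_LATENCY_SECONDS - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)  # pad out replies that came back too fast
    return reply


if __name__ == "__main__":
    def fast_model(prompt: str) -> str:  # instant stand-in model
        return f"echo: {prompt}"

    t0 = time.monotonic()
    complete_with_floor(fast_model, "hello")
    print(f"reply took {time.monotonic() - t0:.1f} seconds")  # about 2.0
```

Unlike a rate limit, which caps how many replies arrive per hour, a floor on each reply directly bounds how fast a conversation between AIs can run.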

A per-call tax and a latency floor won’t make AI safe, but they should reduce some risks better than attempting to keep AIs from getting smarter. Machine intelligence isn’t defined well enough to regulate: there are many benchmarks, and it seems unlikely that researchers will agree on a one-dimensional measurement like IQ in humans.

[1] https://skybrian.substack.com/p/ai-chats-are-turn-based-games

[2] Each API call to GPT-4 costs several cents, depending on how much input you give it. Running a smaller language model on your own computer is cheaper, but the quality is lower, and there’s an opportunity cost since it keeps the computer busy.
