AI and economic liability

I’ve seen a number of calls lately to place significant liability on the major LLMs and their corporate owners, and so I cover that topic in my latest Bloomberg column.  There are numerous complications, and I cover only a smidgen of them, but still more analytics are needed here.  Excerpt:

Imagine a bank robbery that is organized through emails and texts. Would the email providers or phone manufacturers be held responsible? Of course not. Any punishment or penalties would be meted out to the criminals…

In the case of the bank robbery, the providers of the communications medium or general-purpose technology (i.e., the email account or mobile device) are not the lowest-cost avoiders and have no control over the harm. And since general-purpose technologies — such as mobile devices or, more to the point, AI large language models — have so many practical uses, the law shouldn’t discourage their production with an additional liability burden.

Of course there are many more complications, and I am not saying zero corporate liability is always correct.  But we do need to start with the analytics, and a simple fear of AI-related consequences does not settle the matter.  There is this:

On a more practical level, liability assignment to the AI service just isn’t going to work in a lot of areas. The US legal system, even when functioning well, is not always able to determine which information is sufficiently harmful. A lot of good and productive information — such as teaching people how to generate and manipulate energy — can also be used for bad purposes.

Placing full liability on AI providers for all their different kinds of output, and the consequences of those outputs, would probably bankrupt them. Current LLMs can produce a near-infinite variety of content across many languages, including coding and mathematics. If bankruptcy is indeed the goal, it would be better for proponents of greater liability to say so.

Here is a case where partial corporate liability may well make sense:

It could be that there is a simple fix to LLMs that will prevent them from generating some kinds of harmful information, in which case partial or joint liability might make sense to induce the additional safety. If we decide to go this route, we should adopt a much more positive attitude toward AI — the goal, and the language, should be more about supporting AI than regulating it or slowing it down. In this scenario, the companies might even voluntarily adopt the beneficial fixes to their output, to improve their market position and protect against further regulatory reprisals.

Again, these are not the final answers, but I am imploring people to explore the real analytics on these questions.
