Who should own the robots?

There are two versions of this.

1. One or a small group of entrepreneurs owns the robots.

2. The government owns the robots.

I see how we get from where we are now to 1. How would we get to 2, and is 2 better than 1?

That is a comment and request from Mark Thorson.  It’s embedded in a longer thread, but I suspect you can guess the context.

I would focus on a prior question: what is government in a world where everything is done by the robots?  Say that most government jobs are performed by robots, except for a few leaders (NB: Isaac Asimov had even the President as a robot).  It no longer makes sense to define government in terms of “the people who work for government” or even as a set of political norms (my preferred definition).  In this setting, government is almost entirely people-empty.  Yes, there is the Weberian definition of government as having a monopoly on force, but then it seems the robots are the government.  I’ll come back to that.

You might ask who the residual claimants on output are.  Say there are fifty people in the government, and they allocate the federal budget subject to electoral constraints.  Even a very small percentage of skim makes them fantastically wealthy, and gives them all sorts of screwy incentives to hold on to power.  If they can, they will manipulate robot software toward that end.  That said, I am torn between thinking this group has too much power — such small numbers can coordinate and tyrannize without checks and balances — and thinking they don’t have enough power, because if one man can’t make a pencil, fifty together might not do better than a few crayons.
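To see why even a tiny skim rate suffices, here is a rough back-of-the-envelope sketch; the budget figure and skim rates are my own illustrative assumptions, not numbers from the post:

```python
# Back-of-the-envelope sketch of the "skim" point above.
# All figures are illustrative assumptions: a federal budget on
# the order of the current U.S. one (~$6 trillion per year) and
# a residual-claimant group of fifty officials.

FEDERAL_BUDGET = 6e12   # dollars per year (assumed, roughly current U.S.)
OFFICIALS = 50          # size of the hypothetical governing group

for skim_rate in (0.0001, 0.001, 0.01):  # 0.01%, 0.1%, 1%
    per_person = FEDERAL_BUDGET * skim_rate / OFFICIALS
    print(f"skim of {skim_rate:.2%} -> ${per_person:,.0f} per official per year")
```

Under these assumptions, even a skim of one hundredth of one percent comes to $12 million per official per year, which is the sense in which a “very small percentage” still makes the group fantastically wealthy.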

Alternatively, say that ten different private companies own varying shares of various robots, with each company having a small number of employees, and millions of shareholders just as there are millions of voters.  The government also regulates these companies, so in essence the companies produce the robots that then regulate them (what current law does that remind you of?).  That’s a funny and unaccustomed set of incentives too, but at least you have more distinct points of human interaction/control/manipulation with respect to the robots.

I feel better about the latter scenario, as it’s closer to a polycentric order and I suspect it reduces risk for that reason.  Nonetheless it still seems people don’t have much direct influence over robots.  Most of the decisions are in effect made “outside of government” by software, and the humans are just trying to run in place and in some manner pretend they are in charge.  Perhaps either way, the robots themselves have become the government and in effect they own themselves.

Or is this how it already is, albeit with much of the “software” being a set of social norms?

Replacing social norms with self-modifying software: how big a difference will it make, and for how many things?
