Who should own the robots?

February 17, 2017 at 12:19 am in Economics, Political Science, Uncategorized, Web/Tech

There’s two versions of this.

1. One or a small group of entrepreneurs owns the robots.

2. The government owns the robots.

I see how we get from where we are now to 1. How would we get to 2, and is 2 better than 1?

That is a comment and request from Mark Thorson.  It’s embedded in a longer thread, but I suspect you can guess the context.

I would focus on a prior question: what is government in a world where everything is done by the robots?  Say that most government jobs are performed by robots, except for a few leaders (NB: Isaac Asimov had even the President as a robot).  It no longer makes sense to define government in terms of “the people who work for government” or even as a set of political norms (my preferred definition).  In this setting, government is almost entirely people-empty.  Yes, there is the Weberian definition of government as having a monopoly on force, but then it seems the robots are the government.  I’ll come back to that.

You might ask who the residual claimants on output are. Say there are fifty people in the government, and they allocate the federal budget subject to electoral constraints. Even a very small percentage of skim makes them fantastically wealthy, and gives them all sorts of screwy incentives to hold on to power. If they can, they will manipulate robot software toward that end. That said, I am torn between thinking this group has too much power — such small numbers can coordinate and tyrannize without checks and balances — and thinking they don’t have enough power, because if one man can’t make a pencil, fifty together might not do better than a few crayons.
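
For scale, a back-of-the-envelope illustration (my numbers, not anything in the original post): federal outlays run roughly $4 trillion a year, so a skim of just 0.1 percent, split fifty ways, is $4 trillion × 0.001 ÷ 50 = $80 million per person, per year.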

Alternatively, say that ten different private companies own varying shares of various robots, with each company having a small number of employees, and millions of shareholders just as there are millions of voters.  The government also regulates these companies, so in essence the companies produce the robots that then regulate them (what current law does that remind you of?).  That’s a funny and unaccustomed set of incentives too, but at least you have more distinct points of human interaction/control/manipulation with respect to the robots.

I feel better about the latter scenario, as it’s closer to a polycentric order and I suspect it reduces risk for that reason.  Nonetheless it still seems people don’t have much direct influence over robots.  Most of the decisions are in effect made “outside of government” by software, and the humans are just trying to run in place and in some manner pretend they are in charge.  Perhaps either way, the robots themselves have become the government and in effect they own themselves.

Or is this how it already is, albeit with much of the “software” being a set of social norms?

Replacing social norms with self-modifying software: how big a difference will it make, and for how many things?

1 Enrique February 17, 2017 at 12:31 am

Why is there no option #3: “everyone owns a robot”?

2 anon February 17, 2017 at 1:05 am

It would be an interesting constraint. To work, robots would have to be like men, and not globe-spanning superintelligences available only to big players.

3 Chadtech February 17, 2017 at 1:06 am

Tyler Cowen BTFO’d

4 Todd Kreider February 17, 2017 at 3:57 am

Because this is Tyler writing. No one in Silicon Valley thinks of options 1 and 2.

5 Witness February 17, 2017 at 11:52 am

Or, just as good, “everyone owns stock in a Robot factory”.

6 Student February 17, 2017 at 1:38 pm

I think your option 3 is option 2. While it’s rather uncomfortable, option 2 is the best possible outcome.

As Stephen Hawking put it…

“Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine owners successfully lobby against wealth redistribution.”

7 JWatts February 17, 2017 at 4:03 pm

“I think your option 3 is option 2. While it’s rather uncomfortable, option 2 is the best possible outcome.”

No, that’s not at all what Tyler is saying. You’re assuming some kind of democratic mechanism maintains control, but if a small group of government employees controls the entire wealth of the nation, they won’t care what the “voters” think.

8 Student February 17, 2017 at 9:43 pm

I am not sure how to think about the case where k/l tends to infinity.
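
One hypothetical way to frame it (my framing; the comment does not specify a production function): with a CES technology

\[ Y = \left[\alpha K^{\rho} + (1-\alpha)L^{\rho}\right]^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho}, \]

labor’s competitive share of output is

\[ s_L = \frac{(1-\alpha)L^{\rho}}{\alpha K^{\rho} + (1-\alpha)L^{\rho}} = \frac{1-\alpha}{\alpha\,(K/L)^{\rho} + (1-\alpha)}. \]

If capital and labor are gross substitutes (\(\sigma > 1\), i.e. \(\rho \in (0,1)\)), then \(s_L \to 0\) as \(K/L \to \infty\), and who owns the capital becomes the entire distributional question; if \(\sigma < 1\), the labor share instead tends to one.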

9 Student February 17, 2017 at 1:44 pm

Sadly, however, I think we end up with option 1, because it seems like, on average, people are more selfish than empathetic. As a result, option 1 is more likely to evolve than option 2. If option 1 happens first, I see it locking in. This is disturbing to think about, actually.

10 Ann W February 19, 2017 at 10:45 am

That’s the way it’ll likely go, with everyone owning a robot. But the more important robots will naturally be owned by the richer population. It’s simple economics. The lower classes will be pushed out, unable to compete for jobs.

Naturally a more socialist government will be voted in, with wealth redistribution being the primary motive.

11 Alain February 17, 2017 at 12:41 am

How smart are these robots? Why do they provide for us?

12 Mark Thorson February 17, 2017 at 12:45 am

Same reason the Chinese government propagates panda bears. They think we’re cute. They love us.

13 JWatts February 17, 2017 at 4:06 pm

“How smart are these robots? Why do they provide for us?”

Not very smart. Because they’re machines. Movies and books routinely anthropomorphize machines. Smart robots will do what they’re programmed to do. As long as the programming is distributed and robust, they’ll function like automobiles, boilers, or automated production lines.

14 polyglot February 17, 2017 at 12:45 am

Weber defines the government as having a monopoly on legitimate coercion. So legitimacy is what matters. Presumably, this would be defined by a set of protocols robots can validate.
Ownership may be defined as a title granted or enforced by a legitimate authority which stands in a particular protocol-bound relation to the government.
I suppose a government whose legitimate coercive power is sufficient to reprogram robots owned by any citizen can by fiat nationalise those robots.
Everything depends on making it hard for those without legitimate authority to reprogram robots, or for the robots to reprogram themselves. Thus we may expect ‘zero-knowledge proofs’ to feature. However, human institutions have more flexibility: we want to be able to substitute a deputy if the chief is incapacitated or some unanticipated problem arises.
I think problems start to arise when we relax strict and computationally complex protocols. We might want robots to have more leeway to override human error.
There is also the question of purely machine-on-machine interaction, which could give rise to race hazards or concurrency deadlock. So we might want to give robots a bit of wriggle room.
All this amounts to a slippery slope. It may be that a lot of classic Pol. Econ. dilemmas reappear even with robots doing virtually all the work.
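
As a minimal, hypothetical sketch (mine, not the commenter’s) of an authorization rule a robot could validate before accepting a reprogramming command: the chief approves it, or, if the chief is recorded as incapacitated, at least two of three named deputies do. The names, keys, and the use of HMAC in place of real public-key signatures or zero-knowledge proofs are illustrative assumptions only.

```python
# Hypothetical sketch: a robot-validatable rule for who may reprogram it.
# HMAC stands in for real signatures or zero-knowledge proofs; this is not a
# secure design, just an illustration of a strict, checkable protocol.
import hashlib
import hmac

KEYS = {"chief": b"chief-key", "dep1": b"d1-key", "dep2": b"d2-key", "dep3": b"d3-key"}
DEPUTIES = {"dep1", "dep2", "dep3"}

def sign(role: str, command: bytes) -> str:
    """A role's approval: an HMAC over the exact command text."""
    return hmac.new(KEYS[role], command, hashlib.sha256).hexdigest()

def may_reprogram(command: bytes, approvals: dict[str, str], chief_incapacitated: bool) -> bool:
    """Accept iff the chief signed, or the chief is down and at least two deputies signed."""
    valid = {role for role, sig in approvals.items()
             if role in KEYS and hmac.compare_digest(sign(role, command), sig)}
    if "chief" in valid:
        return True
    return chief_incapacitated and len(valid & DEPUTIES) >= 2

cmd = b"patch: widen the robots' leeway to override human error"
two_deputies = {"dep1": sign("dep1", cmd), "dep2": sign("dep2", cmd)}
print(may_reprogram(cmd, two_deputies, chief_incapacitated=True))   # True
print(may_reprogram(cmd, two_deputies, chief_incapacitated=False))  # False
```

The point of the sketch is the slippery slope described above: every bit of flexibility (the deputy fallback, the leeway to override) has to be written into the protocol itself, or it cannot be validated at all.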

15 Dangerman February 17, 2017 at 12:55 am

“NB: Isaac Asimov had even the President as a robot”

Which story was this? I thought I’d read them all, but can’t seem to remember.

There’s “Evidence” from “I, Robot” – but he’s only running for mayor, and the whole point of that story is that he has to *pretend to not be a robot*.

There’s R. Daneel Olivaw, who ascends from detective’s sidekick to controlling the fate of humanity sight-unseen… but he was never President.

Can anyone help me out?

16 Shane M February 17, 2017 at 3:04 am

It’s been a long time since I read them, but perhaps the Foundation stories? Tyler’s reference rang true to me when I read it, but I don’t recall where. I think I recall a scene with a robot discussing the complexity of being held to a standard of not causing harm through either action or inaction.

17 Shane M February 17, 2017 at 3:09 am

…I should have added in the context of managing the world/galaxy, and it being an incredible standard.

18 Peldrigal February 17, 2017 at 4:27 am

Given the meaning that Tyler is giving to “robot”, I guess he is thinking of the story where Multivac is the world government and commits suicide by proxy to free humanity. There are other stories that lead to that scenario.

19 sri February 17, 2017 at 9:24 am

I’m seeing “The Evitable Conflict”, where he is not just president, but World Co-ordinator!

20 sri February 17, 2017 at 9:26 am

One more time: “The Evitable Conflict”.

21 Thiago Ribeiro February 17, 2017 at 9:47 am

Also, R. Daneel was Prime Minister of the First Empire for years before its fall.

22 So Much For Subtlety February 17, 2017 at 1:07 am

The analogy often used for robots is slavery. So suppose we ask if it is better to have many owners or just the government? Some people might opt for a third choice – no one owns any. That is not going to happen with robots – and it doesn’t happen with slaves either. Most societies are fine with one form of slavery or another as long as the government is doing it.

Would it be any different with robots? Well if sex robots turn out half as well as people say, it won’t matter as none of us will be around to find out. Generally speaking people will probably be happier if the government owned them all. However it is hard to think of a path that gets us to that end. Machines will get smarter and smarter. So people who used to buy dumb cutting tools are now buying smarter ones, which will get even smarter and so on. That is a stable pathway to many people owning many robots.

Although it could lead to interesting marketing campaigns where robots are sold that are just one IQ point below being smart enough to fall under the government monopoly.

23 Boonton February 19, 2017 at 7:57 am

Slavery for robots makes sense if robots are machines but not persons. A toaster is a machine; the bartender you chat with is a person.

When a machine is optimized for a task, it doesn’t look very much like a person, nor will it have the attributes of a person. There is no reason to think the world’s greatest toaster will end up looking like R2D2 or C3PO. People working on an assembly line are essentially trying to act like machines. That’s why factory work was grueling, why unions were so powerful, and why it took such a toll on long-term workers. As robots started to populate assembly lines, they didn’t look like people or have their intelligence; why would they? That was their main advantage.

Since most work can be done by machine, what would be the point of having a robot slave?

24 anon February 17, 2017 at 1:09 am

Everything is path-dependent. We will get massive robotic workforces in whatever way they arrive, probably with one or two new technology companies that figure it out first.

So then government becomes adversarial, perhaps even stubbornly human, in a system of checks and balances?

The Will Smith version of Asimov.

25 RM February 17, 2017 at 1:28 am

Professor Cowen seems to take America as the place for the experiment (notwithstanding “what is government in a world…”). The robot arrangement might be very different in Europe (like the Airbus/Boeing difference on who controls the plane), Asia, or Africa — the point is that different systems of government might manage the “robot as government” thing very differently.

It would also be interesting to see how a Russian robot government negotiates with an American robot government.

26 Todd Kreider February 17, 2017 at 6:55 pm

Great point, and I’ve been surprised in the last decade by how often people forget that U.S. regulations only apply in the U.S. This goes for technologies like stem cell therapies as well.

27 Patito February 17, 2017 at 1:33 am

This makes me think a bit about how enterprise software startups grow. At first, sales negotiates service contracts that are very decoupled from the software being sold. The scale is such that whenever the software disagrees with a negotiated contract, the software is wrong and is perverted to meet the needs of any given contract. The problem is that this doesn’t scale. So eventually, enough business is done that it becomes necessary to couple negotiated contracts more tightly to the executing software. This continues until the relationship is inverted and the reality of the software’s states dictates entirely what is permissible in service contracts. If the software does not permit a certain state, it can’t be contracted. This is how you end up with Comcast support representatives saying they’ll flag your account in some special way to elevate you to receive some special level of attention or service. They’ve learned to contravene the intended operations of the existing system to achieve a certain service outcome. They give the incredibly complex state-based system the inputs required to get a certain output.

In the world of robots and software you describe, I believe “human” government will be more about when it is moral to contravene the intended operation of the software and robots to achieve a certain output. Actually, now that I think about it, economists might provide a model for this type of future work 🙂

28 Thomas Sewell February 17, 2017 at 3:11 am

One interesting proposal is a switch to a system closer to direct democracy, yet a literally capitalistic one.

The people own the “government” robots and control them via cryptographic means. So if every citizen starts with 1 “share” when born, which is destroyed when they die (or alternatively can be inherited, for permanent inflation), then in order to authorize the use of force for something by the robots, a majority (or supermajority) of existing citizen shares must cryptographically authorize a policy, aka a law. So a programmer/lawyer proposes a modification to the governing robot’s source code, either for a new feature, to fix a bug, or to get rid of something, and it only takes effect when the right approvals have been received.

The robots and weapons would need to be specifically designed to be unable to follow a “law” without the right cryptographic approvals. The additional benefits are that you can then easily sell/trade “votes” to those who value them the most, and you always have an “understandable” legal code where you can actually know when the legitimate use of force is authorized or not.

Shares expiring when you die keeps things more balanced (otherwise older folks tend to accumulate more shares than the young, which isn’t necessarily bad), but also encourages those voting your shares to keep you alive, even if you’ve sold them to someone else to vote.
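
Below is a minimal, hypothetical sketch of the share-weighted approval check described above; the citizen names, the two-thirds threshold, and the use of HMAC in place of real public-key or threshold signatures are illustrative assumptions, not part of the proposal.

```python
# Hypothetical sketch: robots apply a source-code "law" only when approvals
# cover a supermajority of living citizens' shares. HMAC stands in for real
# public-key signatures to keep the example self-contained.
import hashlib
import hmac

SUPERMAJORITY = 2 / 3  # fraction of outstanding shares required

# Living citizens: id -> (shares held, secret signing key). A share is
# destroyed at death simply by removing the entry.
registry = {
    "alice": (1, b"alice-key"),
    "bob":   (2, b"bob-key"),   # bob bought someone else's vote, say
    "dave":  (1, b"dave-key"),
}

def sign(key: bytes, patch: bytes) -> str:
    """A citizen's approval: an HMAC over the exact patch text."""
    return hmac.new(key, patch, hashlib.sha256).hexdigest()

def patch_is_authorized(patch: bytes, approvals: dict[str, str]) -> bool:
    """True only if valid approvals cover a supermajority of all shares."""
    total = sum(shares for shares, _ in registry.values())
    approved = 0
    for citizen, signature in approvals.items():
        if citizen not in registry:
            continue  # unknown or deceased citizens cannot vote
        shares, key = registry[citizen]
        if hmac.compare_digest(sign(key, patch), signature):
            approved += shares
    return approved / total >= SUPERMAJORITY

patch = b"new feature: robots may tow illegally parked cars"
approvals = {name: sign(key, patch) for name, (_, key) in registry.items() if name != "dave"}
print(patch_is_authorized(patch, approvals))  # True: 3 of 4 shares approved
```

The essential property is that the robots verify approvals against the exact patch text, so no “law” takes effect without the recorded shares behind it.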

(Disclaimers: I don’t support the idea of a single set of robot overlords, nor democracy as implied above, but it would be an efficient system for those who are in love with something closer to pure democracy. Nor is the above idea original to me; others expressed it in terms of just government weapons, but it does fit the discussion.)

29 Thiago Ribeiro February 17, 2017 at 4:26 am

Maybe I should own the robots. I intend to advance this idea in my soon-to-be classic My Robot.

30 Andrew M February 17, 2017 at 5:25 am

We’re already at #1: robots are everywhere (especially in manufacturing). The robots belong to companies, which are owned by individuals, by families, by shareholders, or some combination thereof.

#2 is communism: the government owning the means of production. It’s inefficient because of the principal/agent problem: nobody has any real skin in the game, so nobody does their job properly. The best way to have your cake and eat it is to tax the robots’ profits (either when they are earned, via corporation tax, or when they are distributed, via a dividend tax).

Robots don’t present a new challenge. It’s exactly the same challenge we’ve faced ever since the first caveman realised he could make tools.

31 Axa February 17, 2017 at 10:40 am

Conceptually, robots are a glorified wrench.

Financially, it’s another story. Provided I get some training, I can go buy $500–1,000 of tools and make a living as a plumber. The money comes either from my savings or from a loan. Robots require lots of capital. I don’t have a crystal ball, but it seems large robots will be more common than home robots. Two hundred years after the industrial revolution, we don’t have home cotton spinners or weaving looms. It’s more efficient to have a large loom in a factory.

However, I have no idea how to put numbers on my story. I share the popular concern that robots may be so productive but at the same time so expensive (capital-intensive) that only a dozen guys on Earth can buy them. Then we become puppets of those guys. Or perhaps the concern is unfounded: robots are not that expensive, lots of people can buy them, and no individual or organization concentrates all the power.

32 JWatts February 17, 2017 at 4:13 pm

Advanced robots drive the cost of “labor” to zero. The remaining input costs are effectively taxes, engineering, land, raw materials, and energy.

Engineering and land will probably be spread across enough units of output that they’ll be very small, if not effectively zero, for most consumption goods.

33 chuck martel February 17, 2017 at 6:01 am

Questions with a distinct Straussian flavor that make anarchy look truly attractive.

34 Perovskite February 17, 2017 at 6:12 am

The real singularity here will only come with highly dense and portable energy storage or generation… maybe 100x better than today’s batteries. Then you will have metal and plastic robots to compete with meat robots. When that happens, the higher-function robots will own the lower-function robots.

35 Peldrigal February 17, 2017 at 6:59 am

The Transhuman Space RPG setting has most Fifth Wave countries (think highly postindustrial, affluent first-world countries) governed by some variant of what they call cyberdemocracy.
Only the executive branch is elected; the legislature has mostly oversight powers and is composed of citizens selected at random, or from among volunteers, or by other criteria. Every legislator is assisted by an AI that explains what is going on and the consequences of legislation and executive action, and provides briefings.
The other main political systems are infosocialism (expropriation of intellectual property rights, with AIs and robots that work for the government) and… China.

36 Lee A. Arnold February 17, 2017 at 7:12 am

Depends on how far you try to look into the future. The farthest result is that everybody “owns” their own robots. Technological innovation is unpredictably emergent; it may change the functions of any institution, whether gov’t, private firm, moral rule-set, any institution at all — obviating some functions, perhaps creating new functions (such as new “business opportunities”).

Another institution is private capitalism as a whole, a moral rule-set, governing expectations about how things are to be done, how costs are reduced, how we relate morally to each other. Why do you persist in thinking, while locked inside this institutional box? Economic history shows that this institution was a very recent development (as a total system, it is perhaps only 275 years old) yet the premises for its theory are already beginning to evaporate. (Marxism-communism is an old-school reaction; it isn’t likely to describe very much of what comes next.)

In the shorter term, we can see several ways in which current consumer culture will lead to outcomes that push in this direction. Households will be able to print repair components for various appliances, print desktop robots for different tasks, etc.

Some of the developments will be extraordinarily dangerous. Anyone, any kid, will be able to print a video drone so tiny that it cannot be seen by the naked eye beyond 10 meters. Yet its human controller will be able to deliver an explosive charge to your skull at the last moment. A smart bullet, in other words. Goes around corners; waits for you to show up.

37 rayward February 17, 2017 at 7:15 am

Regular readers of this blog are familiar with Cowen’s affection for robots. Being human myself, I have a natural bias for humans. But I consider myself open-minded, so I have considered the advantages of robots over humans. An important advantage is plumbing. Of the inadequate infrastructure in America, none is more critical than the inadequate and aging water and sewer systems. Hardly a month goes by that I don’t read about failing sewer systems pouring millions of gallons of untreated sewage into the oceans, rivers, and streams. What a mess! Replacing humans with robots would solve that problem, since robots don’t need plumbing. Of course, robots don’t need plumbing because they don’t eat. But if they don’t eat, does that mean restaurants will become a distant memory? Has Cowen thought this through? I’m reminded of the homophobes’ rationale for discriminating against gays and lesbians: since they can’t have children, they don’t care about the future. Do robots care about the future?

38 Sam The Sham February 17, 2017 at 7:43 am

Wait, slow down a second… you’re human? Not a bot? I… I need some time to process this…

39 Bill February 17, 2017 at 8:18 am

Speaking on behalf of robots,

Should robots be able to own or control humans?

40 Hazel Meade February 17, 2017 at 9:15 am

3. Robots are cheap and ubiquitous. Everyone owns their own robots.

Why is this not an option?

41 anon February 17, 2017 at 9:25 am

I put it as a “constraint” up top because I don’t think robots would develop that way. We can have a Siri or Alexa now, but those aren’t “the robot.” They are radios relative to radio stations. The millions (literally) of servers Apple or Amazon own are the robot.

A robot with its own brain in its own skull is now an old sci-fi gimmick, like a flying saucer.

42 Axa February 17, 2017 at 10:15 am

Ubiquitous, yes. Cheap, not at all.

How much does a tractor cost today? Why expect a robo-tractor to be cheaper? The robo-tractor will be more productive because there is no driver salary, it can operate 24/7, etc. But the initial investment will be 1,000 times more than a 1939 Ford tractor. The more productive a machine is, the fewer people can afford it.

43 Mark Thorson February 17, 2017 at 10:49 am

Maybe it won’t be legal, like the way the big telecoms have been getting states to outlaw community-based Internet networks.

Or maybe there will be IP restrictions, like the way Microsoft has been threatening GNU/Linux vendors with patent litigation if they don’t agree to Microsoft’s licensing deal.

44 Hazel Meade February 17, 2017 at 10:58 am

Right, so the point is that proper adjustments in IP law and so forth can direct us towards option 3, if we decide that’s the outcome we want.

45 Mark Thorson February 17, 2017 at 4:16 pm

If Apple or Google or Tesla doesn’t want you to have robots without their finger in the pie, it won’t happen. With money you can buy political influence. Without it, you’re screwed. That’s why you can’t sue your telecom provider — you have to go to binding arbitration. That wouldn’t be the case if consumers could decide, but consumers have no political influence.

46 Matthew Young February 17, 2017 at 9:21 am

The problem is set up such that the AI serves to deliver goods optimally to consumers; that is the built-in assumption. Likely a good assumption. So AI assigns ownership. But the only thing the AI knows about are digits, and holding protected digits lets you set the probability of some good arriving to your person. AI cannot serve humans in any other way; they are silicon for now.

So then restate the problem: how will AI distribute protected digits across the aggregate of humans such that goodies are optimally delivered? That is ownership, and that is a solvable probability problem if we assume every human can hold protected digits in his or her hand. It is solvable because the solution can assume digit manipulations cost zero relative to manipulating real goods; dumping money transaction costs gets you a queuing problem, a done deal.

So we have an answer: you will own robots to the extent that the things you like can be delivered to you with relative efficiency compared to other paths. AI will set that probability of arrival; you have to get your ass to some location where goods are efficiently delivered.

47 sri February 17, 2017 at 9:30 am

We may end up learning a bit from how we manage the transition to driverless cars.

And then we move on to a government that can be hacked! It would be interesting to find out if one would want to.

48 robert February 17, 2017 at 9:43 am

Who says that robots will be distinct, discrete creatures? Isn’t it just as likely (if not more so) that robots and humans will merge into one? And, if so, who owns me?

49 David February 17, 2017 at 10:17 am

“Replacing social norms by self-modifying software –how big of a difference will it make for how many things?”

…Isn’t that what we have now in the form of the Administrative State? At least the robots would be (somewhat) rational and data-driven, as opposed to the humans running the regulatory system who make policy under color of administration.

50 Josh K February 17, 2017 at 10:31 am

The political aspects of governance are the kind of thing it’s hard for robots to do. The task, at its core, is balancing competing value claims. You could have far fewer human employees, but 50 strikes me as absurd. Maybe 1/100 of what we have today, but that still leaves too many humans to coordinate neatly, especially since each would be making complex, fact-specific decisions.

Also, I suspect as we get more robots doing more work, more humans will end up involved in fights over values.

51 Bill February 17, 2017 at 10:49 am

Given that robots are displacing American workers,

I think we should

Restrict the number of

Foreign Robots entering this country.

We should also do extreme vetting of Chinese robots.

52 Michael Gardner February 17, 2017 at 12:01 pm

Andrew M has the correct, small-d democratic answer: tax the robots… proportionally, based on the “income” they generate for the firm. Churches get free robots (to hand out communion, dust the pews, etc.).

53 Li Zhi February 17, 2017 at 1:49 pm

I refuse to play Tyler’s game here. As usual, he completely ignores even attempting to define what a “robot” is. Is Watson a robot? Why in the world would the government bureaucracy need ANY robots? Sure, we’ll soon have (many more of) them for law enforcement (drones), and possibly for safety inspections, social services, and health care, but the government will be “run” by A.I., not robots. I doubt anyone can explain how Watson works; if some few can, they are very few. I expect that will be even more true of the A.I.s several generations forward. So, given a self-modifying A.I., how can we determine what the intellectual property is, let alone who owns (or just possesses) it? So, before the question of who owns it is addressed, shouldn’t we discuss what exactly it is that we are talking about? What is the “property”? Pointing to an R2D2-style machine is far too facile and, imho, misses the point.

54 ilya February 17, 2017 at 5:00 pm

– Quite a few people already own $200 home-cleaning robots.
– Airplanes flying under autopilot can be called robots. They belong to airlines or leasing companies.
– Then there are combat drones that belong to governments.
– Many companies invest in complex $100 million industrial complexes; some of them should already be called robots.

Somehow, there are more than two options. I still fail to see why robot ownership in the future will be radically different from all of the above.

55 Laura February 18, 2017 at 5:54 am

The robots will own themselves.

56 Bill Bohan February 18, 2017 at 6:06 am

I see a lot of interesting posts here. Li Zhi has struck at the heart of the matter. From reading the article and all the posts, it seems that you all envision large numbers of identical AI-controlled mobile machines with near-human or superhuman capabilities. As ilya mentioned, there are a lot of carpet-cleaning robots owned by individuals.

I certainly hope that we will never see an autonomous mobile machine capable of killing people. This should never be allowed. I read that when the Dallas police were under fire and several were killed, they sent a robot in and killed the perpetrator. My understanding was that this was an ROV (remotely operated vehicle). That’s very different from a robot, as all the actions of the machine were under direct control of a human. Any machine capable of intentional killing must have direct human control.

Government by AI is also unwise, because I don’t think you will be able to establish a purely logical reason for human life, or any life, to continue.

When I worked for a company which made automated label printer/applicators, I installed a machine in a factory which manufactured televisions. You dump components in one end of their machine, and assembled, tested, boxed televisions come out of the other end, ready for us to print and apply a label to the box. This is robotics: just turn on the power and it runs. It was necessary to have humans present to ensure that the whole system was functioning correctly. Any part of the system could malfunction, whether mechanically or electronically. The same is true of any complex system, because it is not possible to make a 100% reliable mechanical or electronic system.

The answer is 3. Everybody owns robots. Most people will have the inexpensive ones. Large companies will own more expensive ones.

57 Bill Bohan February 18, 2017 at 6:07 am

Oh great!

This is my first post here and it has jammed all my text into one block.

I had several paragraphs, honest.

58 Peldrigal February 20, 2017 at 8:34 am

In case you’re wondering, NATO countries have some sort of agreement to always have at least a man in the loop for any system that can use lethal force.
Asian countries do not share such concerns, and South Korea and Singapore expressed interest in fully autonomous systems years ago.

59 Jackson Layers February 18, 2017 at 8:20 am

As a Forex trader, I really hate robots; I believe they are not very beneficial at all, so I find it far better and more useful to work without them. I work with the OctaFX broker, and through them I can do it all nicely and easily because of their long list of features and facilities, from small spreads of 0.1 pips on all major pairs to over 70 instruments, and there is even a rebate scheme where I get 50% back on all trades even if they are lost.

60 Eric Rasmusen February 18, 2017 at 11:05 am

Note that using robots makes transparency much easier. They can be programmed to reveal corruption far more readily than a bureaucrat can be convinced to wreck his career by blowing the whistle.

I don’t see how robots are really any worse than bureaucrats anyway. Indeed, the skim should be smaller, since there are fewer people to pay off.

61 Eric Rasmusen February 18, 2017 at 11:26 am

Two distinct ideas are: 1. Robots replace the civil service in government, and 2. A few people own all the robots in the private sector, and the marginal product of unskilled labor is tiny.

On this second idea: I think it can usefully be compared to the Marxian idea of perfect communism. Suppose robots are so cheap that human laborers can’t compete in housekeeping, etc. That means the rich people who own the robots can, at very low cost, give robots to the poor people; being poor just means you only have 5 robots, not 5,000. Rich people will do that, out of fear of rebellion or, more likely, sympathy for the poor. Marx thought that when technology made things like food and shelter virtually free, not worth charging for, private property would disappear. He was right as far as basic necessities go, but wrong in general, because people want luxuries too, and their desires increase as prices fall. But even in America today, you might say we are in his communist paradise. Compared to wages, basic food and advanced entertainment are close to free; we make medical care free for the poor; clothing is virtually free; shelter is the only basic that is still costly, and even that is very cheap if you emigrate to the countryside.

62 jorgensen February 18, 2017 at 3:33 pm

First, define “robot”.

63 Troll me February 18, 2017 at 7:17 pm

Is it not sufficient for the government to tax the owners of robots?

The government should not be owning and controlling robots, except to the extent that this satisfies the normal consideration that the government should generally only be active where the market is ineffective in meeting some economic and/or political need, relative to what the government has a reasonable prospect of being able to deliver.

So, for example, both public and private markets use computers. Firms which have high productivity and high profits through use of computers pay taxes. (Other firms do too). The government, constrained by the general principle stated above, also uses a very large amount of computing resources (both hardware and software related) in producing final outputs.

I don’t think this has to be nearly as big of a question as some people think.

It seems like an interesting lead into the “necessity” to at least seriously discuss the possibility that a guaranteed minimum income might basically be needed someday, or for some period of time during the economic transitions that will come.

But for practical purposes, why can’t something like “up the top tax bracket a couple/few percent and increase public services targeting low and mid-income ranges” stand in for every possible solution to problems that would legitimate such concerns?

At the same time, being open minded, it should absolutely be on the table that the government may, for market-related rationale, be the superior operator in delivering certain services to consumers and citizens.

The main concern about governments and robots should not be about traditional market dynamics, where much of the focus is at present (if the subject is discussed at all), but instead about the use of government robots in minding my business. I think it is perfectly reasonable to suggest that if any robot will be minding my business, it should not be under any mechanism other than the explicit, well-informed, independent, and generally self-interested decision of an individual – i.e., that no government robots should be minding people’s business. Reasonable exceptions like red light cameras, etc., would be a different story.
