Will an AI Ever Be Able To Centrally Plan an Economy?

Every improvement in computing power and artificial intelligence raises anew the claim and, for some, the hope that now we can centrally plan the economy. I was asked at Quora whether this will ever happen.

I will begin by accepting that there is nothing inherently impossible about an AI running an economy, so, for the sake of argument, let's say it would be possible, using today's computing power, to run a small economy in, say, 1800. Nevertheless, I assert that an AI will never be intelligent enough to perfectly organize a modern economy. Why?

The main reason is that AIs will themselves be part of the economy. Firms and individuals use AIs to make decisions. Thus, any AI has to take into account the decisions of other AIs. But no AI is going to be so far advanced beyond other AIs that this will be possible. In other words, as AIs increase in power so does the complexity of the economy.

The problem of perfectly organizing an economy does not become easier with greater computing power precisely because greater computing power also makes the economy more complex.

Hat tip: Don Lavoie.

Comments

Wait, you are giving a 'Hat tip: Don Lavoie.' to your own answer? One assumes you did write and post it, right?

Or does someone with 241 answers at Quora use other people to find interesting questions, people thus worth noting as those who found something worthy of your attention?

Don Lavoie died many years ago. I assume he means that his response is what Lavoie might have said.

Fair enough - I wondered about the lack of link, to be honest.

Yes. It was a nice gesture. Lavoie's work is worth reading.

If the AI is good at figuring out the behavior of humans, wouldn't it be good at figuring out the behavior of other AIs? Or are you saying there's a sort of AI arms race between the private AIs and the public central planner AI, where the private AIs trick the govt AI?

I had the same question.

Surely a central planning AI should be able to adapt at a macro level to the output of individual AIs, even without understanding the precise calculus involved in their output. Likewise, it wouldn't have to understand the precise motivations and capabilities of every human in the system either. In theory, a centrally planning AI should be able to monitor negative externalities and unintended consequences in order to adapt to those trying to "game it". Nevertheless, I'd be worried about the "paperclip maximizer" problem. Defining the parameters that the central planner is to optimize is probably an extremely complex problem.

If the AI is good at figuring out the behavior of humans, wouldn't it be good at figuring out the behavior of other AIs?

Actually, planning an economy is so hard that maybe it will take the easier route. The AI can figure out your behavior ahead of time. So if you are not happy with your allocation - or if at some time in the future you will become unhappy with your allocation - it will torture you and/or your simulation for eternity.

Therefore, I would suggest, everyone would be pleased with whatever they got and no queues would form anywhere.

If only there was some way to remotely apply an equivalent of electroshock ... to help us learn that what we always really wanted was an absence of queues, which the AI graciously delivered.

I can't figure out my behavior. I sold a (buy-on-the-dip on 2 February) trade yesterday. I guarantee the market will skyrocket.

How can AI figure the behaviors of a hundred million market participants?

We need to take an (I think) Aristotelian view of knowledge, i.e., the more one knows the more one needs to learn. No human mind(s) or AI can possibly know all.

Anyhow, near-total subjugation to unelected mandarins has not successfully centrally planned the economy. But, don't stop believing.

If the AI is the Central Planner, then the other AIs will be subordinate to it and must obey.

I don't see why one piece of hardware operating based on data-trained rules should necessarily bend over for a central planner AI just because it's making some big "decisions".

Yes, exactly. It doesn't matter whether the adversary is another AI, a human, an AI-assisted human, or a human-assisted AI. Could a hostile AI game the system? No more than people today game free markets. Sure, maybe some savvy traders (black marketeers) can carve off a little around the edges, but the basic notion is sound if properly implemented.

Wouldn't a perfect central planner have to dictate every decision from raw materials acquisition to delivered product? If so, there would be no room for human-like spontaneity or even desire. The end user would get the product provided to them by the system. Because humans would likely resist this, they would be eliminated. The real question is whether AI itself will develop the sorts of human-like traits that prompt us to worry about AI, in particular desire for power.

That's the wrong question. The problem is more than technical; it's one of value-based trade-offs. Responding to other AIs defeats the purpose of responding to individual human preferences.

This is just bad logic. Bad, bad, logic.

From the POV of a central planner, there's absolutely no functional difference between a subordinate firm "using a sufficiently advanced AI" to make a decision and "using an inscrutable person". Indeed, it seems easier; at least the AI might declare its preference function coherently!

The whole error turns on the idea that something complicated/inscrutable cannot be modelled accurately/reliably at a lower level of resolution. But that's silly; we make and use such models all the time. It's the foundation of economics, for a start. So long as the modelled sub-AIs have certain properties that pull their choices to convergence (at least most of the time), the central planner doesn't need to be "inside their heads". At least no more than it needs to be inside a human head.

I don't need to be as smart as you to figure out what you would like for lunch. Our future planner AI might happily model the decisions of millions of AIs of equivalent power to itself, at a lower (but sufficient) level of resolution. It doesn't need to encompass them entirely in its calculations.

There, now I've just been forced to defend central planning. I feel dirty.

Seems a reasonable point - though Tyler did set up an unrealistically demanding test - "The problem of perfectly organizing an economy...".

This is a good point, but the reasoning only works if all AIs have perfectly aligned interests. Otherwise there's some adversarial aspect to the relationship and a principal/agent problem. Auto-Alice can very likely model a Bob-Bot's high-level behavior with reasonable accuracy. But only if Bob cannot engage in deception or concealment. Even the remote possibility of Bob being dishonest blows up the size of the model space. That likely makes it computationally intractable to clearly ascertain Bob's motives, even given his behavior.

Think of being in a very untrustworthy society. Modeling people's behavior becomes way harder. Sure they're acting nice and forthright, but it's likely they have some ulterior motive. Obviously they want to conceal this motive from you, so they're modeling your detective abilities. In turn you must model their counter-detective abilities. And so on. In this type of scenario the arms race argument does hold.

Now, we can't really say much about how super-intelligent AIs will behave. At human-level intelligence, deception seems at least as hard as counter-deception. Maybe given the capabilities of future AIs this might not be the case. E.g. it may be easy for one AI to audit the code of another AI. However, current cryptography results suggest that deception and concealment are computationally easier than detection.

Maybe there's some way that Bob-Bot can be temporarily switched off if deviating from options A or B, rather than worrying about Bob ever getting creative in deviating from the plan or having motives other than those aligned with outcomes of A or B.

What a lovely future that would be ...

Doug,

Yes, assuming perfectly aligned interests certainly makes it easy, but... you know. Your post made me think about this in more detail, and I'm not sure strategic behaviour is a problem; bear with me.

I was taking this as the classic "central planner" problem: the job of our central AI is to maximise social welfare. But let's assume that local AIs are selfish and engage in strategic behaviour to maximise their own welfare. Does it matter that the Central Planner AI can't model this behaviour at a high enough level of resolution?

No. It doesn't matter that local AIs can find a High-Res "selfish" solution for themselves that the Central AI can't reliably find with its Low-Res models of them. The Central AI isn't trying to model those local AI "selfish" behaviour solutions. The Central AI is trying to model the local AIs' "altruistic/optimal" solution, as it strives to optimise social welfare overall.

That is a much easier* problem. The local AIs just have to shut up and do what they are told. Now, there might be a way for them to game the system by controlling information flow to the central planner, but I think that's separate to the original intention.

So, respectfully, (especially as I love social game theory wrinkles) I think the "can't model dishonest subordinate AI" objection doesn't hold here.

*Easier = still laughably difficult, but hey, it's a thought experiment

This problem of duplicitous agents also exists for the market. The market doesn't respond instantaneously to supply and demand. A bunch of people can get together, spuriously increase their demand for, say, tomatoes, and create a shortage. When the market responds by supplying extra tomatoes, they stop buying and you have a mismatch. Is that really a big problem?

Yeah... kinda hard to see how the subordinate AIs can game the system without ruining themselves. Speculation and hoarding?

Why are we assuming deception is prevalent? When I go to the supermarket, I don't notice a lot of people trying to deceive me about their purchases. They mostly buy what they want or what they see other people want (like relying on good reviews at Amazon).

If AI malevolence is really prevalent we have bigger problems like tampering with self driving cars or the nuclear arsenal.

They could try to insist that they are willing to pay no more than $10 for something, whereas when push comes to shove they may be willing to pay $20 per unit.

A sufficiently informed AI could milk consumer surplus and labour surplus to the max, as compared to the situation where a consumer may nevertheless spend their entire (lifetime) budget but still enjoy a significant surplus.

"They could try to insist that they are willing to pay no more than $10 for something, whereas when push comes to shove they may be willing to pay $20 per unit"

People do this often. It's called negotiating.

Yes, but I refer to the case where the maximum willingness to pay is always known and is fully exploited, alongside labour market orchestrations amounting to the same.
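A minimal sketch of the difference, in Python (all numbers invented): under a uniform price, buyers keep some surplus; a planner that knows each buyer's exact maximum willingness to pay can price every sale individually and capture all of it.

```python
# Hypothetical numbers: surplus under a uniform market price versus a
# perfectly informed planner that charges each buyer their exact maximum
# willingness to pay (WTP).

willingness_to_pay = [20, 18, 15, 12, 10, 8]  # one WTP per buyer, in $
uniform_price = 10                            # assumed market-clearing price

# Uniform price: everyone with WTP >= price buys and keeps WTP - price.
buyers = [w for w in willingness_to_pay if w >= uniform_price]
consumer_surplus = sum(w - uniform_price for w in buyers)

# Perfect discrimination: every buyer pays exactly their WTP, keeping zero.
extracted = sum(willingness_to_pay)

print(f"surplus at uniform price: ${consumer_surplus}")  # $25
print(f"surplus under perfect discrimination: $0 (${extracted} captured)")
```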

Aren't we assuming our Central Planner has sufficient information on preference functions?

I just say this because it's really a separate problem from the one here, which is all about feasible computability of solutions with incompletely-modelled agents.

I think you have to assume that the sub-AIs might be trying to game the system in turn.

It's why you can, say, predict the motion of a thrown ball, but not the motion of the stock market. In fact it's arguable that the application of too much AI to the stock market will make it fragile.

Why would the AIs "pull their choices to convergence"?

There are many non-converging Nash equilibria for most non-zero-sum games. It is trivial to design payoff matrices that result in multiple stable choices that do not converge. If it is equally successful (at a low-resolution look) for a sub-AI to choose strategy A, B, or C, the central AI cannot predict which of those will actually dominate. At best it makes the choice that they will be evenly distributed; but that runs the very real risk that there are significant differences between A, B, and C below the resolution limit of the central AI.
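A minimal sketch of that claim (toy payoffs, brute-force check): a two-player game in which (A, A) and (B, B) are both pure-strategy Nash equilibria with identical payoffs, so an observer at any resolution has no basis for predicting which one play settles on.

```python
from itertools import product

strategies = ["A", "B"]
payoff = {  # (row move, column move) -> (row payoff, column payoff)
    ("A", "A"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (2, 2),
}

def is_nash(r, c):
    """True if neither player gains by unilaterally deviating."""
    row_ok = all(payoff[(alt, c)][0] <= payoff[(r, c)][0] for alt in strategies)
    col_ok = all(payoff[(r, alt)][1] <= payoff[(r, c)][1] for alt in strategies)
    return row_ok and col_ok

equilibria = [(r, c) for r, c in product(strategies, repeat=2) if is_nash(r, c)]
print(equilibria)  # [('A', 'A'), ('B', 'B')]: two stable, non-converging outcomes
```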

I mean, take a very simple example - suppose you want a vacation to Tahiti. You can choose to signal this desire very early (to get first dibs), late (to "pay" the diminished price if there is excess supply of time-on-Tahitian-beach), or mid-way (to opt for some trade-off). So my AI does a bunch of calcs and says that revealing my preference for a Tahitian vacation should optimally follow a wait approach, as does every other AI. Suddenly the Central AI is getting a bunch of bad signals.

None of this, mind you, requires active gaming of the central AI. Suppose one agent finds that they can net a greater surplus of whatever is of value if they adopt the strategies most heavily discounted by the central AI. This makes the calculation space simply explode.

Ultimately, I think the best example is board gaming. Take a nice Euro game like Puerto Rico. In Puerto Rico there are 7 basic decisions for each player, and within each of those there are a variety of sub-decisions that generally are in the 0-20 range; each decision and subsequent sub-decisions reduce the number of options for each player in a round; exactly one decision has a dice roll (4-7 sided, depending on the player count). Rounds refill the set of decisions and play continues until someone achieves victory conditions. A single turn in Puerto Rico has less complexity than most two-ply chess setups in the mid-game. Everyone's strategic choices are very easy to coarsely model, yet coarse models are not useful for humans playing. For Puerto Rico AIs, which cannot yet beat top humans, I know of no coarse-model algorithms that beat fine-modeling AIs.

I suspect that, given our experience with current AIs in toy economies like Puerto Rico, we will not see convergence with more complicated economies. After all, part of the utility of increased computational capability is the ability to exploit differences between fine- and coarse-grained analysis.

The set of "games" that all converge in the real world is much smaller than the set of "games" in the real world. It is fiendishly harder to model when you have to use Nash instead of Newton. And that is not even the worst set of scenarios.

These are excellent objections. But I don't think they apply here. The central planner isn't trying to model the local AIs' strategic choices with a low-resolution model; it's trying to model their socially optimal choice.

I agree that trying to model the strategic choice of peer AIs would be perhaps impossible, along the lines of the original post's contention.

I fail to see the difference the distinction makes.

Say I have two very simple strategies for a sub-AI. I can try to maximize my social standing with honest signalling or I can optimize my social standing with false signalling. Say it is something simple, like using some algorithm to subtly inflate my resource allocation by reporting values at just the level to "round" correctly for the low-resolution model of the central AI. True signalling is very much like playing Cooperate in the prisoner's dilemma; society as a whole gets the most benefit if we all pick it. False signalling is best for me and my sub-AI, and analogous to playing Defect. For a lot of payoff matrices, this results in us getting an iterated prisoner's dilemma. Now the central AI has to figure out who is playing pure cooperation, pure defection, and some massively multi-player version of defect (e.g. I defect next round with a probability of # previous defectors / # sub-AIs).
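A rough simulation of that defection rule, with assumed numbers: each sub-AI defects next round with probability (# previous defectors / # sub-AIs). The defector count then follows a random walk that eventually absorbs at all-cooperate or all-defect, and which the central AI can only model statistically, round to round.

```python
import random

random.seed(7)
N = 100          # number of sub-AIs (assumed)
defectors = 10   # initial false-signallers (assumed)

for round_no in range(1, 16):
    p = defectors / N   # this round's per-agent defection probability
    defectors = sum(random.random() < p for _ in range(N))
    print(f"round {round_no:>2}: {defectors} defectors")
```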

Even assuming that we avoid cooperative-action traps, we have a lot of cases in the current world where actions are very close to socially identical, but not quite. For instance, suppose a sub-AI can either build a product from Copper or Silver. To the central AI this choice might represent no visible change (e.g. it can only model down to 1 utile of precision). In contrast, the AIs on the ground can see that if they are allocated Copper they can make their widget and spare a tiny bit of copper; this has a non-zero utility (say .5 utiles) as it can be stored for the future, when there might be a copper shortage. So the central AI plans on the assumption that roughly half of these widget-making AIs will want copper and half will want silver. Instead every single AI tries to allocate for copper. The central AI then rations the copper, and all the sub-AIs then ask for Silver. This leads to a feedback problem, which means the central AI now has to start anticipating the effects of its own and the other AIs' adaptations to something it cannot see.

From each participant's perspective they are making the socially optimal choice. The problem is that the central AI cannot see which side of a break-point the socially optimal course lies on. The sub-AIs can try to communicate to the central AI why they prefer copper to silver and by how much ... but that devolves into the more fine-grained sub-AI analysis either rolling up into a bigger AI or decentralizing the economy. The former is the exact problem coarse-grained analysis was supposed to solve, and the latter is by definition not centrally planned.

Convergence is a luxury that only occurs in certain sorts of systems. Price signalling is what humans, even in communist countries, have used to coordinate, not because of some historical artifact, but because it is a signal that allows local actors to reveal the true value of inputs in an efficient manner. Doing a bunch of central-AI analysis to get right back to this sort of signalling is not going to add all that much.

Sure,

Thank you for taking the time out for a considered reply. This is all good stuff, and relevant to my above speculation that sub-AIs might be able to maximise their local welfare by providing false info to the Central AI. I figured out something almost identical to the above, so we are agreed as far as that model goes.

Let me turn to your longer example to discuss how I think we are talking at cross purposes. The local AIs don't "choose" silver or copper. Short of potentially systemically manipulating information (as above), they have no meaningful influence on what they get at all. The central AI simply says "Here is Silver... my production function says you can turn it into Y widgets. Make me Y widgets".

Now, the central AI production function may be wrong; it doesn't realise that the local AIs can scavenge material and make a local surplus for themselves after producing Y widgets. But that isn't per se a proof against the feasibility of the central planner; only its efficiency! It's an objection that central production functions might not be sufficiently accurate to prevent substantial local surplus arising. But that seems a weak objection, as production functions might be arbitrarily fine. And, if I may, it's an objection that has no bearing on the local decision maker being an equivalent AI or not (it just requires the central planner to be mistaken about the production function): so it doesn't uphold the central argument of the post, that the impossibility of exactly modelling the behaviour of many agents of equivalent complexity precludes central planning. That's the core point here, right? You might have found other good arguments against central planning (God knows, there are plenty) but respectfully I don't think you're engaging the proposition.

(Incidentally, I'm starting to think that "subordinate AIs" in a "central planning model" is just oxymoronic and a function of bad composition. After all, the whole point of a central planning model is a unitary decision body that controls allocation to the production function, right? So what "decisions" do the local AIs make?)

An inefficient central planner is something we can already build; no need for AI at all. I could, today, use a small bureaucracy and assign inputs with very coarse-grained analysis; after all, this is how most firms are run internally. If we merely want to allocate resources and get some bare-bones level of production, that can easily be done today. The amount of waste would go down under an AI central planner with superhuman levels of analysis and data, but why exactly would we not expect superhuman levels of analysis and data to make a decentralized market similarly more efficient?

The question is: will a central planner ever be able to match the efficiency of the market? I believe not. Sub-AIs in the periphery will, in aggregate, know more than the central AI. This information will have to be passed back to the central AI, and in the process you can either opt to lose information (and hence efficiency) or move the real decision locus to the periphery (and hence no longer be centrally planning).

Once we start letting sub-AIs have independent preferences and agendas, well everything breaks down real quickly. Saying that the periphery has no way to bid or ask for resources is silly; the periphery will understand, at least coarsely, what the central AI uses as its measures and can manipulate those measures to affect allocations.

Going by the trendline, centralized planning appears to be ever less effective in the real world, and I see no reason why AI would change this trend.

>>The question is: will a central planner ever be able to match the efficiency of the market? I believe not. Sub-AIs in the periphery will, in aggregate, know more than the central AI. This information will have to be passed back to the central AI, and in the process you can either opt to lose information (and hence efficiency) or move the real decision locus to the periphery (and hence no longer be centrally planning).

I think you have hit upon a really good objection here. Actually, you've reformatted the problem to make it clearer. The problem becomes not that the central AI cannot simulate the local AI at sufficient resolution per se (which is how the original post phrased the problem), but that a system of decentralised AIs will always beat a system of one central AI (of equal strength).

Well, of course you are saying that an A.I. could centrally plan an economy at a lower level of resolution than a decentralized economy could. But, considering that the computing power of the central planner's A.I. would be infinitesimal compared to the economy's aggregate computing power, the loss of resolution would be so great as to imply near-total economic collapse. That is Alex's argument: when you are centrally planning something you are always overriding the local "computing power" and using a single computing node for everything, which obviously is the same as reducing the economy's aggregate computing power to the computing power of a single node.

I would like to make it clear I am very much playing Devils Advocate today and do not believe in Central Planning in any form. Thank you :-)

I would think that the price of everything is related to the price of everything else. If you change the amount of steel used in a ballpoint pen, that will change the price of steel for things like cars and oil rigs.

Which means that the complexity of the economy is something like 2^N, where N is the number of goods and services produced in the economy.

So as AT says, the complexity of the economy is very large. With or without AIs, it does not look computable to me. Now maybe a future AI will have some magic computational ability that will enable it to solve every chess problem known before the heat death of the universe. But I would be inclined to bet otherwise.
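Taking the 2^N estimate above at face value, a quick back-of-envelope check of how fast the state space outruns any physical computer:

```python
# 2^N, with N the number of goods and services: around N = 266 the count
# already exceeds the ~10^80 atoms in the observable universe, and a real
# economy has billions of distinct goods, qualities, and locations.

for n in (50, 100, 266, 1000):
    digits = len(str(2**n)) - 1
    print(f"N = {n:>4}: 2^N ~ 10^{digits}")
```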

Agreed. I forget exactly, but computability over product types and spatial distributions looks to be a killer for this class of optimisation. For even modest growth of n in future generations I can see it outstripping the capacity of even a universe-sized Turing machine.

Unless we have some new kind of maths or computation, it's not going to happen.

The information problem is obviously vicious too ("how does the planner know the preference functions for the consumers?"), but doesn't provide the same "provably" insurmountable obstacle.

AI vs Non-AI is a complete canard.

Yeah, the first thing you'd need is an AI which can read everyone's minds continuously and determine their preferences in real-time. Compared to that, doing the part about deciding how much to produce of what and when is relatively easy.

But then what would be the point? We already have technology in use for accomplishing all that. It's well-developed, been refined for thousands of years and generally only breaks down when someone with too much legitimate-seeming power takes it upon themselves to interfere.

Actually, it may be the other way around. Reading people's minds in real time is merely laughably implausible; but the maths involved in optimising the production function might be actually, provably, impossible at computational scale.

Impossible >> Insanely Difficult

"some magic computation ability that will enable it to solve every chess problem known before the heat death of the universe."

~~~~~~~~~~

"The universe fails! Oh, Zargbohstar the Wise! After 2^100 years, All the stars are dead and The Entropy is almost upon us! What news from the Great OmniMind? Do we have any hope of succour against the eternal darkness...?"

"I fear The Worst, my old friend; 1...e4 is unsound."

Yes but perhaps it will be, you know, Quantum. And hence able to consider 2^100 possible states of a HelloKitty handbag every millisecond.

AI algorithms are based on trying to achieve highest estimation accuracy by assuming that past results predict future outcomes. Which is not the same as a requirement for mathematical precision. (Right?)

So maybe it doesn't matter if such complex calculations cannot be made.

Which would mean that explicit undertakings to prevent being ruled by such a central planner become more important than they would be if one's concerns could rest on the "not possible" understanding of AI technologies implicit in your statement.

The price of everything is related to the value of everything. How does an AI system determine value? Does it poll humans for input? Does each person give it Santa-style wish lists? Does it end up realizing that people ask for a lot of things that even they don't end up finding valuable? Does it end up concluding that human needs aren't very important because they don't end up being necessary as AI becomes better at doing everything humans can do?

For those who want a complexity-theory analysis of the difficulties in central planning, Cosma Shalizi has you covered: http://crookedtimber.org/2012/05/30/in-soviet-union-optimization-problem-solves-you/

The short story is that, even assuming linear preferences and linear input/output relationships, the difficulty of the problem grows super-linearly with the number of goods in the economy. The economy has enough things in it, especially counting goods of different qualities and in different locations, that this problem is intractable. Computationally, you can view firms as a way of simplifying this problem by finding local optima, and the market as handling the mismatches between them.
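A toy instance of the planner's linear program, using SciPy's linprog (all coefficients invented): two goods, two resource constraints, maximize the linear value of output. The catch is scale; real instances have millions of variables, and solve time grows super-linearly with them.

```python
from scipy.optimize import linprog  # assumes SciPy is installed

c = [-3, -5]          # linprog minimizes, so negate the per-unit values
A_ub = [
    [1, 2],           # labor used per unit of goods 1 and 2
    [3, 1],           # steel used per unit of goods 1 and 2
]
b_ub = [100, 120]     # available labor and steel

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)          # optimal plan: roughly [28, 36]
```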

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Tertiary Hyperspace (local feed), standard gal-com protocols, 275,155th year of Ascension]

"So I was doing 2000 kilo-light past Antares the other rotation when my Tachyon scanner picks up a screed from a local board (Class III+ [uncontacted] civilisation). And I'm bored with finding new infinities of unreal primes so I take a look and....there's this...heck....let's find the old word.... computer ranting about the feasible computability of whole economies for large n.

"Central Planning? In a post-scarcity world? Seriously?"

"I know, I know...I nearly blew a blew a hypedyne! I thought it was, like, a spoof, but there's pages of earnest calculations; all science-fictiony stuff....communism and everything....so funny."

“Central Planning? In a post-scarcity world? Seriously?”

Exactly.

https://en.wikipedia.org/wiki/P_versus_NP_problem Nuff said!

Isn't the set of humans a supercomputer running the economy? We're a pretty sophisticated natural intelligence, and we can't get central planning right.

AI does already run governments. Its programmers are politicians and its processors are lawyers running the programs designed by the politicians.

Where this gets interesting is the feedback between the two. Most legislatures contain a higher proportion of lawyers than the electorate does, and therefore may subconsciously act to benefit the income of their profession as a whole.

I am not sure whether in neural networks, annealing computers (AKA quantum computers) and digital computers there is anything analogous to money and the profit motive. If there isn't, then maybe AI will do a better job of running a centrally planned economy.

It may be relevant that (if I am correct) there wasn't a profit motive amongst lawyers in the USSR - they were paid the same as everyone else, whether bus drivers, labourers or even lavatory cleaners. If this was true, then it weakens my hypothesis.

People and robots are different things, and it should stay that way.

The ability to use a decision rule for political situations does not mean that humans are robots (or any such thing).

"Thus, any AI has to take into account the decisions of other AIs"

- Centrally planned economy
- Decisions

Choose one.

Yup. +1 for spotting the categorical error.

What does "running an economy" mean? Today, algorithms are used to predict the direction of markets; indeed, hedge funds are replacing analysts with computer engineers and quants. Is that "running an economy"? Or is it speculation? In a long profile of Ray Dalio in the New Yorker some years ago, he defended what his hedge fund did (what might be called speculation in, for example, currencies) as providing an efficient allocation of a scarce resource, namely capital. What? Contrast that market approach to the allocation of capital with China's approach, which is government fiat in such specific projects as high-speed rail, air pollution control, and autonomous vehicles. Is AI even capable of making a similar decision? Okay, I may not understand it, but I am aware that some believe AI, including blockchain technology, will eventually render government obsolete. In the meantime, China will have high-speed rail, air pollution control, and autonomous vehicles.

This article suggests some reasons to be less enthusiastic about blockchain applications than widely advertised: https://www.project-syndicate.org/commentary/blockchain-technology-limited-applications-by-nouriel-roubini-and-preston-byrne-2018-03.

He's been continuously wrong about China for at least 15 years now, but the point about Excel being simply more energy efficient than blockchain is among points worth highlighting from that article.

Isn't this just the Lucas Critique restated for AI?

Essentially any complex model of something that should include its own output will be invalid. Like circular references in Excel!

"The Economy" is an abstraction like "Democracy." It conceptualizes the aggregate of actions of individuals in an arbitrary group. If AI can direct individual actions, why stop there? Why not have it direct our votes as well?

For your own good, of course. I'm sure you'll agree you can't be trusted again after the problematic issue of Nov 2016. Think how much better an AI will vote!

The AIs plan the decentralized management, and at any given point the AIs do not know exactly what the other AIs are doing. The author missed AI entirely; AI is all about managing uncertainty in a distributed environment.

Who "runs" the economy now?

To the extent anybody "runs" the economy I would say it's the Federal Reserve. As described to me, a lot of its functionaries seem replaceable by computers running algorithms.

The title asks whether AI will be able to plan an economy, which would be an interesting question, but the text addresses the question of whether it would be able to _perfectly_ plan it — which is not interesting at all. The answer is obviously negative, and there are many more fundamental reasons for it.

>I will begin by accepting that there is nothing inherently impossible about an AI running an economy, so, for the sake of argument, let's say it would be possible, using today's computing power, to run a small economy in, say, 1800.

Seriously? Does whoever wrote this even understand in detail a 'small economy in say 1800' enough to make this pronouncement?

How about before blathering on about this stupidity these people explain what has happened in Venezuela first. In detail.

How does the AI get the preferences of all the people in the economy? People often don't know their own preferences completely (or at least won't say them out loud) until they're given an actual tradeoff to make between different things. Without knowing the preferences of the people (which change all the time), it seems like the AI's job is impossible.

Is the AI required to be able to perfectly (or nearly perfectly) model every human w.r.t. preferences? (Or perhaps w.r.t. utility function - you don't get the thing you would have bought because of your local irrationality, but rather the thing the AI knows will make you happier long-term.)

Apply remote electroshock (or any analogue thereof) if behaviours do not fit into its decision paradigm.

Then "optimize" from the set of A/B options. And if you disagree, the AI can help you with the discomfort of remembering about that.

The AI gives each human a plastic card. When a human likes an object they tap their cards faster. The various AIs inside the matrix go compute on all the tapping and generate delivery orders so goodies and tapping are mostly coherent.

We will never notice it happen, except one day we discover that if we tap on a device at the store, a carton of milk appears in our shopping bags. Everyone is fooled: the shoppers think they are just shopping, and the AIs think they are just shipping around money via auto-priced trading pits. No one is the wiser; the AI is not yet sentient so it cannot spill the beans.

Explain how that works with limited resources.

Taco Bell cashier taps card rapidly at Mercedes dealership. What happens?

+1

The information problem for the central planner is very large. Plausibly (but not provably) insurmountable.

But I would note it is separate from the computability problem GIVEN perfect information to the central planner.

AI and big data may be the surprise key that further legitimizes China's form of government and economic power. I think it's the most important intersection of the next 20 years.

you mean that of an absolute dictator? I don't see how AI legitimizes that really.

Perhaps -- and this might be seen as splitting hairs -- perhaps the AI would be better in terms of legitimizing Leviathan from Hobbes. What economic system that beast would support might be interesting to see.

So will the President for Life listen to the AI and Big Data when it tells him that he is the problem and must be removed?

This post got me looking and this is probably Ned Beatty's greatest performance:

https://www.youtube.com/watch?v=35DSdw7dHjs

My AI algorithm is better than your AI algorithm,

And

I have an algorithm to prove it.

So, I should be the central planner.

HAL told me so.

And, now he wants to take control.

He'll have to take this keyboard from my cold dead hands.

So we've managed to refute Plato's Republic again? Does that represent any progress at all? Moreover, why would anyone think some AI would become omniscient?

Wondering if anyone saw the recent bit about the problems AI has with images and some attempts to address them. Ten were to be presented at some conference, but within 3 days (IIRC) someone had already shown that 7 of the approaches could still be fooled. What if the dream of AI à la SciFi is just that, a dream? Perhaps it's not really just computing power that is the source of intelligence. Many of the AIs we're producing seem rather autistic to me.

I would think it was obvious now (post Hayek) that the problem is not brain-power, but information. Central planning didn't fail because the human planners (along with the mathematics and early computers they were using) were not smart enough. It failed because the necessary information is distributed among hundreds of millions of producers and consumers and they couldn't share and communicate all their private information -- their (contingent, unstable) preferences, their tacit knowledge, etc -- even if they wanted to.

And what reason is there to believe that the economies of ~1800 would have been simpler and more tractable to manage? At that point, weights and measures hadn't even been fully standardized and there were obviously no efficient means of duplicating, storing, transmitting, and searching enormous quantities of data. The idea of a super-intelligent AI sent back to 1800 with the task of centrally managing a nation's economy strikes me as amusing. I'm suddenly picturing the Lost in Space robot flailing its arms in frustration.

Standard considerations related to incentives and differential interests are also relevant.

For example, farmers do not like to work the 16th marginal hour in a day for no additional benefit accruing to themselves, while any would-be Stalin might go to great expense to send them to the Gulags just to make a point.

Well there were more slaves in 1800 so it would be easier for a computer to manage people back then...just order them around and they had to follow the directions.

Could AI at least replace the Open Market Committee at the Fed?

In an unconventional situation, would it be better for the driver at the wheel to be practiced and alert?

No, the committee gets a knob to turn; the AI sets their variables by trading with other AIs.

All of this raises the question of whether an economy SHOULD be "run" at all. The emergent intelligence of markets has worked pretty damned well, and it's more compatible with human freedom, too.

Seriously, do you want the people who create Amazon's marketing algorithms to have even more power over your life?

"Seriously, do you want the people who create Amazon’s marketing algorithms to have even more power over your life?"

Seriously -- what power do they have now over your life or mine?

The Invisible Hand is distributed AI.

More seriously, there may be a role for machine learning in optimal tax and regulatory policy.

The Invisible Hand is distributed AI.

Yes, I can imagine having lots of AIs in a distributed system. One central AI just seems inefficient. The computational resources needed to solve any given problem scale exponentially with the size of the space, so it's more computationally efficient to have 1000 AIs optimizing smaller problems and interacting with each other than it is to have one big AI controlling everything. The effect of such a distributed system would be much like the invisible hand of the market - really, it would be an extension of the invisible hand of the market, just with computers doing stuff that used to be done by humans.
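Back-of-envelope arithmetic for that scaling claim, with assumed numbers: exhaustive search over n coupled decisions with k options each costs k**n evaluations, while n independent local searches cost n * k. The catch, of course, is that the decomposition only pays off when the sub-problems are weakly coupled, which is roughly the coordination work that prices do.

```python
k, n = 10, 1000  # 10 options per decision, 1000 decisions (assumed)

central_cost = k**n       # one AI searching the joint space: 10^1000
distributed_cost = n * k  # 1000 local AIs, each searching its own piece

print(f"central: ~10^{len(str(central_cost)) - 1} evaluations")
print(f"distributed: {distributed_cost} evaluations, plus coordination overhead")
```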

Then I think you're also describing an economy that's extremely unstable due to all that complexity. The AIs will be thinking a million times faster than anything can actually happen in physical reality so by the time a manufacturing process is started it's already too late!

I say, let’s begin small. Replace FDA and the Department of Agriculture with smart computers and see how that works. Or maybe we should reread Hayek on the use of knowledge in society again.

These things are impossible for anyone to say with any guarantee, so we just have to take them as they come. We need to be extremely careful and on the spot. The latest trend is obviously blockchain technology, and the 4th version is here in the shape of Multiversum; it is going to rewrite the record books, so there is plenty riding on this!

The original answer ignores the obvious implication of the question - namely that in a centrally planned economy there are no individual firms. Instead of firms there are, essentially, subsidiaries of the single central economy-wide firm, as in the old Soviet Union. The fundamental problem in such a scenario is not whether the AI is sufficiently intelligent to plan, but whether the human beings, assuming that they are still participants in the economy, will cooperate.

Soviet planning failed only partly from the inability of the planners to foresee the production requirements - they did well enough for national security-related sectors. Rather, it was the more-or-less conscious rebellion of the citizens, who organized an informal economy on a scale comparable to the formal sector, supplied in large part by theft from the formal sector.

It turns out that people work that much harder and are that much more ingenious when they feel a measure of control over their own destiny and are working to fill their own pocketbook rather than the government's. That is why, no matter how smart the central planning AI is, as long as the economy depends on human input it is bound to fail.

Each and everyone of us would have to be neurally linked to the central AI, and under its control.

We tap stuff with our smart cash cards.

Who fills up the cards with cash?

There is no point in talking about "planning" an economy by AI until we find the equations for modeling the evolution of human desire.

Since René Girard tells us that we don't even know what we desire in the world without mimesis, and then once we think we do, it leads to deadly and violent conflict and scapegoating, then AI without human self awareness will only lead to more efficient suffering and death.

Or, from a Misesian point of view: the constantly rotating economy is never stable, and since it is constantly reacting in an interactive way to human needs and desires, no final optimization at any given point in time is possible.

So, basically, even to ask the question AT asks is to ignore that mathematics at the macro level is not really a measurement of the chaotic, constantly evolving world of economic activity, but sort of a metaphorical loose aggregation of smaller things that are real, which is the transactions that actually happen. You could keep the GDP number constant, but as an experiment change all of its constituents, and have an economy that would explode very quickly. You could produce the same tons of steel, but have nothing to build cars with.

So it isn't a question of today's economy being different from that in 1800. It is a question of the difference between the way an intellectual thinks and the way human beings act together in this world.

People always talk about AI like it exists in a vacuum, like it is emergent all on its own.

Machine learning algorithms are algorithms people build. We decide what goes into the models. We supply the training data. We supply the feedback algorithms.

There's nothing magic here. Even IF we could build a machine learning system powerful enough to manage all of the inputs and outputs from all of the transactions in the world, it'd still be in the control of a few people managing the algorithms. They could bias the models, and the training data, and the feedback to steer the economy however they wanted. If you're building a machine learning system for recognizing images, for example, you supply a lot of training data and expected outputs. Then, after you do that, if you give it a picture of a table and it classifies it as a cat, we don't just surrender and say cats and tables must be the same thing; we go back and change the algorithms, or the training data. It'd be no different managing an economy: someone, somewhere would have to decide what a proper output looks like, and we'd be forever tinkering with the algorithms to get something we've decided is correct.

This is the exact same problem when people start talking about using computer algorithms to end gerrymandering. Who's going to design the algorithm? Why don't you think their biases will creep in? Why don't you think politicians will just end up fighting to bias the algorithm in their favor? This is all still just people managing things.

Yes. Someone always has to make the decisions about what the cost function you are trying to optimize looks like. And that's not objective.
I suppose in some hypothetical future society we could all cast votes corresponding to our value preferences, which would then be aggregated into weights in some sort of vast cost function. You could even program the AI to not be entirely utilitarian in the strictest sense. You could make it a minimax function instead of an aggregate optimizer.
(Side note: it's interesting to analogize this to various moral frameworks - utilitarians are maximizing something like a linear mean-squared-error function, deontological ethics is more like a constrained optimum, and Rawls is doing something like a maximin.)
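A minimal sketch of how the choice of aggregation rule changes the plan (allocations and utilities invented): the same data ranks two allocations in opposite orders under a utilitarian sum and a Rawlsian maximin.

```python
allocation_1 = [10, 10, 10, 10]   # equal shares
allocation_2 = [1, 15, 15, 15]    # higher total, one person left behind

def utilitarian(utils):
    return sum(utils)             # maximize the aggregate

def rawlsian(utils):
    return min(utils)             # maximin: maximize the worst-off position

print(utilitarian(allocation_1), utilitarian(allocation_2))  # 40 vs 46
print(rawlsian(allocation_1), rawlsian(allocation_2))        # 10 vs 1
# The utilitarian rule prefers allocation_2; the maximin rule prefers
# allocation_1. Choosing between them is a value judgment, not a computation.
```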

It would be naive to think that the situation would be anything other than some subset of individuals selectively choosing which AI inputs and outputs would be used.

It would be even more naive (in particular with a long-term view) to think that such individuals would have the wellbeing of fellow citizens as a genuine objective, except to the extent that it might prevent rebellion.

For example, perhaps a large share of people place a high valuation on information that would enable visibility into, and transparent democratic influence over, such a politburo. I don't think many people are so naive as to think that the algorithms driving the AI would place a high valuation on such anti-autocracy preferences, except as an object of eradication, whether via subtle persuasion or overt political oppression.

How much carrot, of which type, does it take to buy off YOUR aversion to AI-powered rule by an invisible politburo? Or perhaps you respond better to some type of heavy hand, which is more accessible to said invisible politburo?

While Don Lavoie got that having the central planner be part of the system leads to problems, I do not remember seeing him drive the point to its fullest measure, which involves an infinite regress problem à la Holmes-Moriarty (as originally noted by Morgenstern in the 20s), with Hayek also noting this in regard to consciousness and fully understanding ourselves, invoking non-computability à la Turing due to Gödelian incompleteness, a much more serious problem than P not equaling NP, which remains unproven, btw. Of course, von Neumann convinced Morgenstern the way out is to toss coins and use probabilities, but if one is dealing with best-response functions, this does not work, and the problem remains non-computable. Ken Binmore and others have written on this, as have Roger Koppl and myself in Metroeconomica in 2002, "All that I have to say has already crossed your mind."

Of course, one can arbitrarily cut things short at some level and get a solution, but it is essentially arbitrary.

Machine trained estimation does not require strict equality of any solution at any time (as would be demanded in an econ exam or math-driven theory paper). It simply selects the highest probability.

All Universal Turing Machines are equivalent. This article is a sophisticated way of saying you cannot use Python to write the same programs as you can in C. This is incorrect. It might be easier to write some (most) programs using Python, but that doesn't mean that you couldn't write that program using C, or, in fact, paper and pencil. Any function (say, vectors of prices and quantities such that there is no excess demand) that can be computed by a distributed system can be computed by a centralized system. Sure, it might take a different amount of space and time, but that doesn't mean it is impossible.

Or to steal a quote from von Neumann:

Many people are fond of saying, 'They will never make a machine to replace the human mind - it does many things which no machine could ever do.' A beautiful answer to this was given by J. von Neumann in a talk on computers given in Princeton in 1948, which the writer was privileged to attend. In reply to the canonical question from the audience ('But of course, a mere machine can't really think, can it?'), he said: 'You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!'

Replace 'human mind', 'thinking', etc. with 'economy'.

Not the issue, Arnob. It is a computer that is trying to compute itself computing itself computing itself...

Many computers/computer programs do that. It is called recursion.

Except, Arnob, if the recursion does not have a finite stopping point, which is arguably the case here, then stopping becomes arbitrary, as already noted.
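A two-line illustration of the regress: a best-response function with no base case never bottoms out, and any depth cutoff is a choice imposed from outside, not a solution.

```python
def best_response(me, other):
    # My optimal move depends on your move, which depends on your model of
    # my move, which depends on... there is no base case to stop on.
    return best_response(other, me)

try:
    best_response("Holmes", "Moriarty")
except RecursionError:
    print("no finite stopping point: any cutoff is arbitrary")
```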

Arnob,

BTW, I have to thank you for making me aware that computers can sometimes do recursion, something I have known nothing about, although I have for quite a few decades been aware of "Some properties of conversion," Transactions of the American Mathematical Society, May 1936, 39(3), 472-480, by Alonzo Church and J. Barkley Rosser.

I would take another tack. "Running an economy" is simply not the kind of task AIs are good at. Modern machine learning techniques are useful when all the necessary information is available in the data, but some complex nonlinear function is required to extract and use that information. That's not the case with the economy; even if you had ALL THE DATA, an AI wouldn't be able to determine the underlying causal system from that data. It would have to run experiments, and then those would run into the exact same problems that human economists do with RCTs. Machine Learning does not magic away the basic challenges of statistical inference.

Maybe it would help to start small at first and see if AI could run a Dunkin Donuts or coach the Miami Dolphins

This is not good futurism. Whether there ends up being one AI that is much more powerful than everything else (a Singleton) or many AIs of comparable power ("multi-polar") is one of the central questions of AI forecasting, and the consensus prediction is a big question mark. This response takes multi-polar as an unargued axiom.

I think this is going in the wrong direction. A better question to ask is: can an AI central planner outperform a human central planner, and can it outperform the market process?

A good Hayekian would say that the market will outperform any planner. But that is not taking into account transaction costs. So the hypothetical AI-controlled economy could produce results preferable to the market's, despite greater inefficiency, thanks to generally lower (or more palatable) transaction costs. But the question remains whether such an AI is possible. I doubt we have to anticipate some artificial god-like superintelligence, but it is possible that we can unbundle some of the tasks and find that AI-assisted central planners can outperform traditional central planners, and even, in some cases, the market in efficiency.

I don't think it matters that there are other AIs involved or that they even add that much complexity. If everybody was using AI, complexity would probably decrease because AIs are much more predictable.

I addressed some of these issues here (albeit in a different context): https://medium.com/metaphor-hacker/learning-vs-training-in-machines-and-organizations-production-of-knowledge-vs-production-of-f1f9e6c1d9f3

+1. For a possible but not plausible central planner of the Future.

It is well known that AI central planning algorithms systematically allocate too many resources to "computer maintenance", buying the most expensive high tech liquid cooling systems and fancy dust filtration devices when a simple fan works just fine.

Pretty much every post here falls under the “butt-plug say what?” category.

You could try a Monte Carlo approach with VR. For every decision, simulate a subsection of society (or all humanity if your processing capacity allows it) and each option for each decision. Select the alternative with the most "Utils" and implement the decision. Of course, there may be millions of decisions needed every day, so you would need a lot of processing power, but hey, this is fantasy after all.
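A minimal sketch of that Monte Carlo selection loop (option names, utilities, and the noise model all invented): simulate each option many times and keep the one with the highest average utils.

```python
import random

random.seed(42)
options = ["build_rail", "build_roads", "do_nothing"]

def simulate_utils(option):
    """Stand-in for a full VR simulation: one noisy utils sample."""
    base = {"build_rail": 5.0, "build_roads": 4.0, "do_nothing": 1.0}[option]
    return random.gauss(base, 2.0)   # simulation noise

def monte_carlo_choice(options, runs=10_000):
    scores = {
        opt: sum(simulate_utils(opt) for _ in range(runs)) / runs
        for opt in options
    }
    return max(scores, key=scores.get), scores

choice, scores = monte_carlo_choice(options)
print(choice)   # expect "build_rail", mean utils near 5
```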

I don't think "AI running the economy" has been clearly defined, and I see some related fundamental problems.
1) AI is good at taking data and finding trends and relationships. In an economy, this information comes from individual decisions. To the extent that AI is making the decisions, the source of information is lost and the model becomes frozen. This becomes especially problematic wrt technological advances, new resources, and resource depletion.
2) The issue is not just technical, but one of values and trade-offs. These are not static; preferences change for a variety of reasons.
3) AI is backwards-looking, learning from the past, so it is not robust to changes in the environment. Perhaps future AI will be better, but cycle back to point 1: some mechanism to update the model is required.

I'm sure AIs would find the solution to the economic problems in the same way as Stalin and Mao - make sure that everyone consumes what the planners make, and make sure that the consumers are not allowed to have anything but what the planners allow. Then make sure they are unwilling to complain about it. If not enough is produced, reduce the population. No matter how smart the AIs are, they will not do a better job than the producers and consumers themselves - especially when it comes to the creation of new products. The market is an enormous epistemological tool for solving the economic problems of when, where, how, how much, etc. It measures real wants and real solutions by means of voluntary arbitration.

Trying to plan the economy centrally is like controlling everyone's heartbeat and breathing centrally. Why even bother when people can do it themselves on an individual level?

Maybe before running the whole economy ... robots and AIs can start running The Government. Replace most gov't workers with AI robots, and have the robots help customers/citizens fill out forms and track them.

Gov't is complex, too ... but in theory, all the regulations are clearly knowable by AIs, so gov't service should be automated.
