AIs Quickly Learn to Collude

AIs are better than humans at Chess and Go, so why shouldn’t they also be better at the game of collusion? Calvano, Calzolari, Denicolò and Pastorello show that they are (here quoting a VOXEU summary by the authors):

[In Calvano et al. 2018a] we construct AI pricing agents and let them interact repeatedly in controlled environments that reproduce economists’ canonical model of collusion, i.e. a repeated pricing game with simultaneous moves and full price flexibility. Our findings suggest that in this framework even relatively simple pricing algorithms systematically learn to play sophisticated collusive strategies. The strategies mete out punishments that are proportional to the extent of the deviations and are finite in duration, with a gradual return to the pre-deviation prices.

Figure 1 illustrates the punishment strategies that the algorithms autonomously learn to play. Starting from the (collusive) prices on which the algorithms have converged (the grey dotted line), we override one algorithm’s choice (the red line), forcing it to deviate downward to the competitive or Nash price (the orange dotted line) for one period. The other algorithm (the blue line) keeps playing as prescribed by the strategy it has learned. After this exogenous deviation in the first period, both algorithms regain control of the pricing.

Figure 1: Price responses to deviating price cut

Note: The blue and red lines show the price dynamic over time of two autonomous pricing algorithms (agents) when the red algorithm deviates from the collusive price in the first period.

The figure shows the price path in the subsequent periods. Clearly, the deviation is punished immediately (the blue line price drops immediately after the deviation of the red line), making the deviation unprofitable. However, the punishment is not as harsh as it could be (i.e. reversion to the competitive price), and it is only temporary; afterwards, the algorithms gradually return to their pre-deviation prices.

…The collusion that we find is typically partial – the algorithms do not converge to the monopoly price but a somewhat lower one. However, we show that the propensity to collude is stubborn – substantial collusion continues to prevail even when the active firms are three or four in number, when they are asymmetric, and when they operate in a stochastic environment. The experimental literature with human subjects, by contrast, has consistently found that they are practically unable to coordinate without explicit communication save in the simplest case, with two symmetric agents and no uncertainty.

What is most worrying is that the algorithms leave no trace of concerted action – they learn to collude purely by trial and error, with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude.
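The paper’s agents are Q-learners. For intuition, here is a minimal sketch of the kind of experiment described above; the learning rule matches the paper’s approach, but the demand curve, price grid, and parameters are illustrative assumptions of mine, not the authors’ specification.

```python
import random

# Two Q-learning agents repeatedly set prices in a duopoly. With this toy
# demand the one-shot Nash price is 1.75 and the jointly optimal price is
# 2.0, so each agent is always tempted to undercut a high-pricing rival.

PRICES = [1.0, 1.25, 1.5, 1.75, 2.0]          # discrete price grid
ALPHA, GAMMA, PERIODS = 0.1, 0.95, 200_000    # learning rate, discount, length

def profit(p_own, p_rival):
    """Toy linear demand with unit cost 1: undercutting steals share."""
    demand = max(0.0, 2.0 - 1.5 * p_own + p_rival)
    return (p_own - 1.0) * demand

# State = both firms' price indices last period; one Q-table per agent.
Q = [{}, {}]

def q_row(agent, state):
    return Q[agent].setdefault(state, [0.0] * len(PRICES))

state, eps = (0, 0), 1.0
for t in range(PERIODS):
    eps = max(0.01, eps * 0.99995)            # decaying exploration
    acts = []
    for i in (0, 1):
        if random.random() < eps:
            acts.append(random.randrange(len(PRICES)))
        else:
            row = q_row(i, state)
            acts.append(row.index(max(row)))
    nxt = (acts[0], acts[1])
    for i in (0, 1):
        r = profit(PRICES[acts[i]], PRICES[acts[1 - i]])
        row = q_row(i, state)
        row[acts[i]] += ALPHA * (r + GAMMA * max(q_row(i, nxt)) - row[acts[i]])
    state = nxt

print("final prices:", PRICES[state[0]], PRICES[state[1]])
```

Whether a toy run like this ends above the static Nash price depends on the parameters; the paper’s contribution is showing that, across many sessions, the learned strategies reliably do, and that they embed the punishment scheme shown in Figure 1.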

Tacit collusion isn’t actually illegal since it’s virtually impossible to prove, at least among humans. Tacit collusion by AIs is going to be much more common but perhaps also easier to prove if the antitrust authorities can demand access to the algorithms. No need to torture the data when you can torture the AIs. It’s going to be a strange world.

Hat tip: Ankur Delight.

Comments

"Tacit collusion by AIs is going to be much more common but perhaps also easier to prove if the antitrust authorities can demand access to the algorithms."

Or if the AI's tax returns are made public.

Serious question about that comment: if two companies independently implement AI algorithms that converge on "collusion", without actually communicating with each other:

1) Is that illegal?

2) Should it be illegal? If so, why?

"2) Should it be illegal? If so, why?"

No, because you've just made market actions illegal. Indeed, in the example given, the machines ended up at a lower price than the one they started with.

Also, how can researchers not realize that red/green color blindness is the most prevalent kind? And yes, having different line types would help, if you didn't use the same line types for the same shades.

For people who don't really understand what color blindness is: it doesn't generally mean you can't see the color. It means you have a hard time distinguishing variations in the color, particularly when it's shades of the same color or it's overshadowed by other colors, such as in composites or thin lines.

A red/green color blind person will often have a hard time telling the difference between lime and yellow, because they can't see the green very well, or between purple and blue, because they can't see the red. And of course, thin lines exacerbate the issue.

Yeah, that was my intuition, as well. However, Alex seems to assume it would (should?) be illegal. I wonder what the argument for that would be.

Seems like the invisible hand working efficiently.

got it,
sometimes it's the invisible hand working efficiently
sometimes it's the black hand working behind the scenes
there are many objective criteria that tell you the difference
there is also a shortcut
that shortcut, ladies and gentlemen, is
John Prine
https://www.youtube.com/watch?v=jar8wiaE0dw
also there is John Prine

It depends on what the AIs have done. They cannot do something their companies themselves are banned from doing.

As Adam Smith pointed out, "AIs of the same trade seldom meet together, even for playing go or chess, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices."

Yeah, within the bounds of "tacit collusion" (converging on a price above the minimum, without actually communicating), what, if anything, would be illegal?

Everything. If two or more AIs are in cahoots, even tacitly, it is no different from the CEO of General Motors and the CEO of Ford being in cahoots.

Any other way would make a mockery of antitrust laws.

But surely to be "in cahoots" they would have to actually communicate and coordinate their activity in some way?

They know what they are doing. https://super.abril.com.br/comportamento/desafio-solidario-o-raciocinio-incrivelmente-logico-das-galinhas-de-penacho/.

AIs are also likely a lot better at blowing themselves up, too.

Selflessness is seemingly not really built into humans, though it is certainly something we have built into everything we have ever constructed (to avoid any confusion over the term 'create').

Why is this so hard to model? It's the classic Prisoner's Dilemma game, as discussed by Robert Axelrod, where the best strategy is tit-for-tat: cooperate, retaliate once after a defection, and then cooperate thereafter (sketched below).

Bonus trivia: The prisoner's dilemma game was invented around 1950 by Merrill M. Flood and Melvin Dresher.
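A minimal sketch of the strategy that comment describes, i.e. Axelrod's tit-for-tat. The payoff numbers are the standard textbook Prisoner's Dilemma values, not anything from the paper:

```python
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on move 1; afterwards copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)   # each sees the other's past
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # punished defection: (9, 14)
```

Tit-for-tat never defects first, punishes a defection exactly once, and then returns to cooperation, which is roughly the pattern in the article's Figure 1, except that the algorithms' punishment is milder and the return more gradual.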

The paper is confusing, as it calls the "competitive price" the "Nash price" when in fact, if this is a Prisoner's Dilemma problem, as it clearly appears to be (if the agents cooperate, they are both better off as a whole), the Nash price is the grey line, not the orange line. And with a 'genetic algorithm' the strategy I posted above is trivial to discover, even if you don't explicitly program it into the software. Much ado about nothing, but AlexT knows how to work a crowd to get page views, so this post will get a lot of attention.

The price resulting from perfect competition is the Nash price because the Nash equilibrium is the state when "no player can benefit by changing strategies while the other players keep theirs unchanged".

If companies A and B are colluding on a higher price, this is not a Nash equilibrium, because either A or B would benefit from lowering their price (and taking away market share from their competitor), if the other did not change their price.
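A tiny check of that logic with a made-up 2x2 pricing game; the payoffs below are illustrative assumptions, not the paper's:

```python
# Each firm picks "high" or "low"; (high, high) is best jointly, but each
# firm is tempted to undercut.

PROFIT = {  # (my price, rival's price) -> my profit
    ("high", "high"): 4, ("high", "low"): 0,
    ("low",  "high"): 6, ("low",  "low"): 1,
}
ACTIONS = ("high", "low")

def is_nash(a, b):
    """Neither firm can gain by unilaterally switching its price."""
    best_a = all(PROFIT[(a, b)] >= PROFIT[(x, b)] for x in ACTIONS)
    best_b = all(PROFIT[(b, a)] >= PROFIT[(y, a)] for y in ACTIONS)
    return best_a and best_b

for a in ACTIONS:
    for b in ACTIONS:
        print(a, b, "Nash" if is_nash(a, b) else "not Nash")
# Only (low, low) -- the "competitive" outcome -- survives the test,
# even though (high, high) gives both firms more profit.
```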

Yes, competition vs. collusion on price is a classic example of a prisoner's dilemma, but you appear quite confused about the implications of that. A prisoner's dilemma is a dilemma precisely because it's a situation where cooperation is optimal, but parties typically don't cooperate. Humans typically don't cooperate in prisoner's dilemmas, and the Nash equilibrium in a prisoner's dilemma is non-cooperation. The fact that an AI is able to cooperate in this situation (and do so better than humans in analogous situations) is, indeed, news.

@dan1111 - thanks for that amplification. That machines cooperate more than humans is not imo news, but a programming artifact.

"It's the classic Prisoners Dilemma game,"

A critical assumption in the Prisoner's Dilemma game is no ability to communicate. If the prisoners can communicate and hear each other, they can agree not to talk and be sure the other prisoner isn't talking too, which voids the whole scenario.

So no, this is not a Prisoner's Dilemma game.

This is a prisoner's dilemma, because there is no communication beforehand to set a price. They can only see the results after the fact (the price their competitor has set).

There are many iterations, because each agent has many opportunities to observe their competitor's price and adjust their strategy, but the dilemma characterises each iteration. Multiple iterations of the dilemma is something discussed extensively in the literature.

Once they've colluded to bribe politicians, what's to stop them?

We should all just be grateful that after two years and hundreds of millions of dollars, SOMEONE was able to find collusion SOMEWHERE.

So, two algorithms pick up the same signals and arrive at similar answers.

Doesn't this just reflect well-established economic principles? Both responded to competitive prices and arrived at what the market would bear.

For the record, "AI" in this case is just a statistically based algorithm that, through trial and error, arrived at an equation that gives results similar to a pattern produced by people or another process. There's no intelligence involved, and you certainly can't always trust it to learn the pattern you thought you were teaching it.

And I won't exclude the possibility that the models were affected by something known as "leakage", where data fed to the models inadvertently includes the answers the modelers are looking for.

In that case the headline should have been "Poorly Crafted AI Produces Unsurprising Results".

"For the record, "AI" in this case is just a statistically based algorithm(s) that through trial and errror arrived at an equation to gives results similar to a pattern produced by people or another process. "

Yes, normally referred to as machine learning, which is probably what they used here. And it's not really AI; it's generally very one-dimensional in its "intelligence".

What is this "tacit collusion"? How does it differ from standard market participation? It sounds like an oxymoronic, hand-wavy excuse for markets not behaving as predicted by some model, but maybe there is more to it than that.

Agree. I don't see how it's collusion if you never, well, collude. If what you do is set your price in the market to maximise your profit, and never talk to the other participants, how is that colluding at all, rather than just being free market? Are we saying that every market participant has an obligation to sell at their lowest profitable price, and if their price is any different than that then it's "collusion"?

I cannot agree more. “Collusion” sells more ads, I guess.

They concluded properly that an oversupply of a good was detected and was likely permanent, barring any other shock. The bots are not Euler.

What is missing from the experiment are bots that produce the good, i.e. sellers to match the buyers.

Tacit collusion isn't legal because it's impossible to prove; those two things are separate. Tacit collusion is legal because it doesn't, in the legal sense, involve an actual agreement between competitors.

"Tacit collusion isn’t actually illegal since it’s virtually impossible to prove, at least among humans. Tacit collusion by AIs is going to be much more common but perhaps also easier to prove if the antitrust authorities can demand access to the algorithms. No need to torture the data when you can torture the AIs. It’s going to be a strange world."

This is oddly inaccurate. Top-of-the-line modern deep learning techniques are almost completely opaque. Note how almost all commentary on top-level computer chess or Go programs works backwards from the observed moves: "the computer seems to value..." This is a known issue and I would expect unprovable collusion to be a huge problem if AIs are set up with this level of control.

(For some problems, such as predicting criminal recidivism for justice-system purposes, there's a strong case that the algorithms used are primarily valued for their opacity and illegibility. They don't seem to improve accuracy but do effectively diffuse decision-making blame.)

Good news for OPEC?

The best aspect of this is that it helps reinforce the nonexistence of God.

Yeah but god always gets the last move

The Goddess disagrees.

Which comes first: AI learning how to collude, or how to cuddle?

I can't run a full-blown AI. My unit is too small.

Seems like instead of trying to deal with price collusion at the communication level, it needs to be addressed at the market power level.

If there are only two gas stations in town, they can communicate with their signs.

If there's only one hospital and two health insurers serving your area...

"torture the AIs"

Alex just wiped out all LessWrong's work with one sentence. We're doomed.

Guess NSA was right to name their AI "Skynet".

Wait! Maybe they'll just torture Alex!

"Tacit collusion by AIs is going to be much more common but perhaps also easier to prove if the antitrust authorities can demand access to the algorithms. "

On the contrary: if the algorithms aren't traditional software artifacts but statistical/ML models, access to them may not help. At least when humans collude they communicate their intentions with language; a black box which 'merely' maximizes some objective function could well be impossible to reason about, legally.

Exactly, so regulation will move to the outcomes, along with requirements to put in fail-safes.

And this will lead to an endless cat and mouse game of workarounds and revised regulations...

Absolutely - it can often be virtually impossible to understand why ML processes do what they do except by their output. There's no equation output which says "If price rises by X%, match by Y%".

Netflix's AlphaGo documentary pointed out this problem really well. AlphaGo was dominating Lee Sedol with strategies that had never been seen before and making decisions that nobody could understand, but it was clearly winning. Even the computer scientists running the program had to work hard to figure out whether it was making mistakes or smart decisions when they were trying to optimise its performance.
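One practical consequence: about the only way to audit such a policy is the way the paper's Figure 1 experiment does it, by forcing a deviation and watching the response. A hypothetical sketch, where black_box_policy is a stand-in for whatever opaque model a regulator might be handed:

```python
def black_box_policy(rival_price_history):
    """Stand-in model: match the rival's last price, floored at cost."""
    return max(1.0, rival_price_history[-1])

def probe(policy, baseline=1.8, deviation=1.0, periods=6):
    """Force one period of rival deviation, then log the policy's responses."""
    history = [baseline] * 3 + [deviation]     # rival undercuts once
    responses = []
    for _ in range(periods):
        responses.append(policy(history))
        history.append(baseline)               # rival returns to baseline
    return responses

print(probe(black_box_policy))
# A punishment-then-recovery pattern in the output would be evidence of a
# learned retaliation strategy, even if the model's internals stay opaque.
```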

There have been other papers on the same thing: using algorithmic models licensed to competitors to assist in pricing and market intelligence.

Or, they could all hire the same consultant.

Here's an antitrust law review article on the subject: http://www.minnesotalawreview.org/wp-content/uploads/2016/04/Mehra_ONLINEPDF1.pdf

"without communicating with one another"

What do they think "prices" even ARE?
