An Interview with Preston McAfee

Here’s a good interview by the Richmond Fed of Preston McAfee. McAfee was one of the designers of the FCC’s spectrum auctions and used that experience to move from academia to technology firms. He has held top positions at Yahoo, Google, and Microsoft. Here’s one issue that I have discussed before: tacit collusion among AIs.

EF: What are the implications of machine learning, if any, for regulators?

McAfee: It is likely to get a lot harder to say why a firm made a particular decision when that decision was driven by machine learning. As companies come more and more to be run by what amount to black box mechanisms, the government needs more capability to deconstruct what those black box mechanisms are doing. Are they illegally colluding? Are they engaging in predatory pricing? Are they committing illegal discrimination and redlining?

So the government’s going to have to develop the capability to take some of those black box mechanisms and simulate them. This, by the way, is a nontrivial thing. It’s not like a flight recorder; it’s distributed among potentially thousands of machines, it could be hundreds of interacting algorithms, and there might be hidden places where thumbs can be put on the scale.

I think another interesting issue now is that price-fixing historically has been the making of an agreement. In fact, what’s specifically illegal is the agreement. You don’t have to actually succeed in rigging the prices; you just have to agree to rig the prices.

The courts have recognized that a wink and a nod is an agreement. That is, we can agree without writing out a contract. So what’s the wink-and-a-nod equivalent for machines? I think this is going somewhat into uncharted territory.

Comments

Sort of like everyone in the same industry hiring the same consultant to give them pricing advice and a sense of the market.

You can also use this a different way--to show that it was inevitable.

In one investigation, I used game theory: I assigned information sets to the investigators and asked them to play out the competition game. What they discovered was that the game ended up exactly where they had thought a meeting of the minds, a collusive outcome, would land. But how could they be conspirators?

The other side of this is that the government could use a firm's own model to show that a merger between two firms would likely lessen competition--even in markets with more players, when many of them use AI or the same off-the-shelf model.

Simulation experiments using the firm's own software might be the tool that stops further mergers in an industry.

... AI and advanced information technology will allow the Federal Government to regulate markets at an even more spectacular level ...

Yet again, tech novelties have sped ahead (and have been permitted to speed ahead) of prudent commercial regulation, attentive and active government oversight, and a robust legal framework for managing tech-sped enterprise.

Since our tech tyrants continue to be permitted to stay ahead of legal regulation to do whatever they want (not chiefly FOR us, TC, but TO us), perhaps we should give up on regulating tech tyrannies and wait for them to conjure models of legality and jurisprudence that they think might suit the rest of us.

Perhaps nothing would get tech tyrants' attention like the promise (should the ability present itself) to unplug the pernicious internet and all electronic "communications" media (apart from landline telephones) for an entire eight hours out of every day. (This would help trim vital electricity consumption across the very decades in which we are getting our first tastes of unalterable Technogenic Climate Change.)

From McAfee's discussion you would almost never suspect that tacit collusion (i.e., conscious parallelism) is entirely legal. And an actual collusive agreement is even harder to enforce among machines, because they'll cheat every single time.

If the game-theoretic model arrives at some sort of tit-for-tat strategy on pricing, where all the firms at first set their prices high, and that leads the algorithms to set high prices, is that collusion?

If the model is stable, that would suggest that the market structure (few competitors) supports it, whereas if it were a competitive market, the optimal strategy would be to be the first to cheat.
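To make the dynamic in the two comments above concrete, here is a toy simulation (mine, not from the interview): two pricing bots that open high and then match the rival's last price hold the high price forever, while a single defector unravels it. The price levels and strategies are purely illustrative assumptions.

```python
# Toy sketch: tit-for-tat pricing sustains high prices without any
# agreement; one "cheater" drags the market down. HIGH and LOW are
# assumed, purely illustrative price points.
HIGH, LOW = 10.0, 6.0

def tit_for_tat(rival_history):
    """Open at the high price, then copy the rival's previous price."""
    return HIGH if not rival_history else rival_history[-1]

def defector(rival_history):
    """Always undercut, starting from the first period."""
    return LOW

def simulate(strategy_a, strategy_b, periods=5):
    a_hist, b_hist = [], []
    for _ in range(periods):
        # Both firms price simultaneously, seeing only past periods.
        a = strategy_a(b_hist)
        b = strategy_b(a_hist)
        a_hist.append(a)
        b_hist.append(b)
    return a_hist, b_hist

print(simulate(tit_for_tat, tit_for_tat))  # both stay at 10.0 forever
print(simulate(tit_for_tat, defector))     # defection drags prices to 6.0
```

If the learning converges to the tit-for-tat pair, prices sit at the high level with no communication at all, which is exactly the "is that collusion?" question.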

Usually it's Tyler who posts a repeat (perhaps inevitable given the volume of his posts) but in this case Alex is posting the same link that Tyler did a day or two ago.

As I commented in Tyler's thread, my big takeaway from the article was something that McAfee mentioned off-hand: efforts to use machine learning techniques to identify and estimate causal relationships; specifically, he mentioned double machine learning. That tipped me off to the work that has proliferated over the last few years, including a "What Can Machine Learning Do for Economics" seminar at Chicago's Becker Friedman Institute.
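For anyone who hasn't met the term: double machine learning, in the partialling-out form of Chernozhukov et al., uses flexible ML models to strip the confounders out of both the outcome and the treatment, then estimates the causal effect from the residuals. A minimal sketch on simulated data (everything here, data and learners alike, is illustrative and not from the article):

```python
# Minimal double-ML sketch: cross-fitted partialling-out to recover
# a treatment effect despite nonlinear confounding.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, theta = 2000, 0.5                      # true treatment effect is 0.5
X = rng.normal(size=(n, 5))               # confounders
D = np.sin(X[:, 0]) + rng.normal(size=n)  # treatment depends on X
Y = theta * D + X[:, 1] ** 2 + rng.normal(size=n)

resid_Y, resid_D = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # Cross-fitting: nuisance models are fit on one fold and
    # residualized on the other, to avoid overfitting bias.
    m_Y = RandomForestRegressor().fit(X[train], Y[train])
    m_D = RandomForestRegressor().fit(X[train], D[train])
    resid_Y[test] = Y[test] - m_Y.predict(X[test])
    resid_D[test] = D[test] - m_D.predict(X[test])

# Regress outcome residuals on treatment residuals (no intercept).
theta_hat = (resid_D @ resid_Y) / (resid_D @ resid_D)
print(f"estimated treatment effect: {theta_hat:.3f}")  # close to 0.5
```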

Or maybe Alex did this deliberately; Tyler merely posted it as one of his daily links with, as is his wont, no comment. Whereas Alex highlighted the snippet about machine learning and regulation, presumably inviting commenters to focus on that part of the article.

" It is likely to get a lot harder to say why a firm made a particular decision when that decision was driven by machine learning"

My own experience of applying machine learning to match or accelerate human decision-making is that humans are at best able to volunteer only qualitative explanations for their decisions (and "The Mind Is Flat" has a lot to say about our lack of insight into our own thinking). By contrast, techniques for understanding models are developing quite quickly. In the limit, surely silicon-based decision-making systems are fundamentally easier to study than biological ones?

Utter bull!@#$. The ultimate black box model is the human brain. So let's walk this through:

A corporation makes a decision we don't like:

Human made it - we go ask them why. "Oh yeah, he just didn't strike me as trustworthy so I didn't give him the loan". Could be lying. Could be racist. Could not even remember properly. In general, people are terrible at explaining their decisions.

ML algo made it - we pull up the data used to make the decision and call the algorithm to repeat the prediction, verifying that, for this data, the model makes the same bad decision. Worst case, for a true black-box algo, you begin perturbing the feature space to iteratively estimate gradients in the model output (see the sketch below). If desired, this can be repeated for a very large sample of data to characterize general model behaviour.

Now we have model response vs features. Maybe it's monotonic. Maybe it's super noisy and bounces around and we think this is unfair. Regardless, we *can* understand how the model makes decisions with the data.
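A minimal sketch of that perturbation procedure, with a hypothetical stand-in model and made-up loan-style features (in a real investigation the "black box" would be the firm's deployed system, called through whatever interface it exposes):

```python
# Probe a black-box scorer by replaying a contested decision and
# perturbing each feature to estimate local sensitivities.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Stand-in "black box": an opaque classifier trained on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))  # e.g. income, debt, age, credit score
y = (X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

def local_sensitivity(predict, x, eps=1e-2):
    """Finite-difference estimate of how each feature moves the output."""
    base = predict(x[None, :])[0]
    grads = []
    for j in range(len(x)):
        x_pert = x.copy()
        x_pert[j] += eps
        grads.append((predict(x_pert[None, :])[0] - base) / eps)
    return np.array(grads)

# Replay one contested decision, then perturb its features.
score = lambda a: model.predict_proba(a)[:, 1]
x0 = X[0]
print("decision score:", score(x0[None, :])[0])
print("per-feature sensitivity:", local_sensitivity(score, x0))
```

Averaging these sensitivities over a large sample of past decisions gives the "general model behaviour" mentioned above.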

ML is way, way, way, way more transparent and easier to scrutinize than human decisions.
