Fact Checking Increases Fake News

Florian Ederer and Weicheng Min have an interesting new paper called Bayesian Persuasion with Lie Detection, which shows that under some conditions fact checking can increase fake news.

How does lie detection constrain the potential for one person to persuade another to change her action? We consider a model of Bayesian persuasion in which the Receiver can detect lies with positive probability. We show that the Sender lies more when the lie detection probability increases. As long as this probability is sufficiently small, the Sender’s and the Receiver’s equilibrium payoffs are unaffected by the presence of lie detection because the Sender simply compensates by lying more. However, when the lie detection probability is sufficiently high, the Sender’s equilibrium payoff decreases and the Receiver’s equilibrium payoff increases with the lie detection probability.

The paper is difficult, so here’s my stab at the intuition.

Suppose that politicians always want war but war is not always good. If voters could always detect a lie, politicians would report war-is-good only when war was good and war-is-bad when war was bad. Now suppose that voters can never detect a lie and that the true probability of war being good is low. In this case, if politicians always reported war-is-good, the voters would simply ignore the politicians and choose no-war. But suppose that politicians always report war-is-good when war is good and sometimes report war-is-bad when war is bad (and, of course, sometimes report war-is-good when war is bad). In this situation, voters could be better off listening to politicians than ignoring them completely, and the politicians would also be better off because they would get war more often: always when it is good and sometimes when it is bad. So, given that they always want war, how often should politicians report war-is-bad when war is bad? Just often enough that the voters are indifferent between following the politicians’ advice and ignoring it altogether. In other words, voters have a threat point: ignore the politicians completely. Knowing this threat point, politicians tell the truth just often enough that voters are willing to follow their advice rather than ignore them. Thus, even though voters can never detect a lie, politicians sometimes tell the truth.
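To make this concrete, here is a minimal numerical sketch of the no-detection case. The numbers are mine, not the paper’s: assume the prior probability that war is good is 30 percent and that voters choose war only if they believe war is good with probability at least one half.

```python
# A stylized numerical sketch of the no-lie-detection case (my illustration,
# not the paper's model). Assumed numbers: prior P(war is good) = 0.3, and
# voters choose war only if P(good | message) is at least 0.5.

p_good = 0.3          # prior probability that war is good
threshold = 0.5       # voters choose war iff P(good | message) >= threshold

# Politicians always say "war is good" when it is good, and say "war is good"
# when it is actually bad with probability q (the lying rate). They pick q as
# high as possible while keeping voters just willing to follow the advice,
# i.e. the posterior after hearing "war is good" equals the threshold exactly:
#   P(good | "war is good") = p_good / (p_good + (1 - p_good) * q) = threshold
q = p_good * (1 - threshold) / ((1 - p_good) * threshold)
print(f"lying rate q = {q:.3f}")                            # 0.429

posterior = p_good / (p_good + (1 - p_good) * q)
print(f"posterior after 'war is good' = {posterior:.3f}")   # exactly 0.5

# Probability of war: always when war is good, with probability q when bad.
print(f"P(war) = {p_good + (1 - p_good) * q:.3f}")          # 0.6, vs 0.3 under full honesty
```

With these made-up numbers, politicians lie about 43 percent of the time that war is bad, war happens 60 percent of the time rather than 30, and voters end up exactly indifferent between following the advice and ignoring it.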

Now assume that there is (imperfect) lie detection. It’s immediately obvious that, holding all else equal, voters would be better off because they would now catch some of the times politicians say war-is-good when in fact war is bad. But precisely for this reason, all else will not be held equal: politicians will now report war-is-good when it is bad more often, i.e. they will lie more often. Working “backwards”, as long as politicians can lie enough to keep voters just indifferent, voters are held at their threat point, ignoring the politicians altogether, and that threat point hasn’t changed. So voters’ expected utility won’t change, which means lie detection can’t make them better off, and the reason is that politicians lie more often.
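Continuing the same toy numbers (again my own sketch, with a simplified detection technology I am assuming rather than the paper’s exact setup): suppose a lie, saying war-is-good when war is bad, is caught with probability d, in which case voters learn the truth and choose no-war. To keep voters just willing to follow an undetected war-is-good message, politicians have to raise their lying rate as d rises.

```python
# Extending the same stylized sketch (my illustration, with an assumed
# detection technology): a lie, i.e. saying "war is good" when war is bad,
# is caught with probability d, in which case voters learn the truth.
p_good, threshold = 0.3, 0.5

def lying_rate(d):
    """Lying rate that keeps the posterior on an *undetected* 'war is good'
    message exactly at the voters' threshold, capped at 1."""
    q = p_good * (1 - threshold) / ((1 - p_good) * threshold * (1 - d))
    return min(q, 1.0)

def voter_payoff(d):
    """Expected voter payoff: 1 for taking the right action, 0 otherwise."""
    q = lying_rate(d)
    return (p_good                          # war is good and voters choose war
            + (1 - p_good) * (1 - q)        # honest 'war is bad' -> no war
            + (1 - p_good) * q * d)         # caught lie -> no war

for d in [0.0, 0.2, 0.4, 0.6, 0.8]:
    print(f"d = {d:.1f}  lying rate = {lying_rate(d):.2f}  voter payoff = {voter_payoff(d):.2f}")
```

With these numbers the lying rate climbs from 0.43 toward 1 as detection improves, while the voter payoff stays pinned at 0.70, which is exactly the threat point of always choosing no-war. Only once d passes roughly 0.57, the point where politicians are already lying every time war is bad, does detection start to help voters.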

In fact, lie detection doesn’t help the voters unless it is very accurate, perhaps more accurate than in our world.

Note that in this model voters cannot punish politicians for lying, nor is there any opportunity for politicians to signal, that is, to take costly actions that could separate truth tellers from liars. The model is all about “senders” and “receivers” of information; it’s a model of Bayesian persuasion, not Bayesian punishment. In a model with punishment, the ability to detect lies, even ex post, could result in more optimistic scenarios. Similarly, signaling might help, at the expense of some cost.

Nevertheless, the lesson I take is that information revelation is rarely pure. Information revelation is strategic: what is revealed and when it is revealed are choices in a game that may have complex and counterintuitive equilibria.

For more, see Peter Coy’s interesting article on Bayesian persuasion in the New York Times.
