Algorithm Aversion

People don’t like deferring to what I earlier called an opaque intelligence. In a paper titled “Algorithm Aversion,” the authors write:

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In five studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

People who defer to the algorithm will outperform those who don’t, at least in the short run. In the long run, however, will reason atrophy when we defer, just as our map-reading skills have atrophied with GPS? Or will more of our limited resource of reason come to be better allocated according to comparative advantage?
