Against human-AI collaboration
From a new NBER working paper by Nikhil Agarwal, Alex Moehring, Pranav Rajpurkar, and Tobias Salz:
Radiologists do not fully capitalize on the potential gains from AI assistance because of large deviations from the benchmark Bayesian model with correct belief updating. The observed errors in belief updating can be explained by radiologists’ partially underweighting the AI’s information relative to their own and not accounting for the correlation between their own information and AI predictions. In light of these biases, we design a collaborative system between radiologists and AI. Our results demonstrate that, unless the documented mistakes can be corrected, the optimal solution involves assigning cases either to humans or to AI, but rarely to a human assisted by AI.
I am more optimistic, though I grant there may well be contexts, such as radiology, where the collaboration fails. I collaborate with Google’s AI all the time, and I am pretty sure that joint effort does better than either myself unaided or “Google with no human.” Still, this is a cautionary note of some import, as many humans are not good enough to work well with AIs.