Tag: chess

Computers are Better at Recognizing Faces than Cyborgs

There was a brief window of time when computers could beat humans at chess, but a human and a computer working together could beat a computer alone. In other words, there was a window of time when cyborgs could beat computers at chess. That window closed years ago (as Tyler predicted it would): computers now beat both humans and cyborgs. Humans aren’t especially evolved to be good at chess, which is why only a few of us play it well, but we are evolved to recognize faces. Humans are incredibly good at recognizing faces. But computers are better. Even more surprisingly, computers are better at recognizing faces than cyborgs.

Psycnet: Automated Facial Recognition Systems (AFRS) are used by governments, law enforcement agencies, and private businesses to verify the identity of individuals. Although previous research has compared the performance of AFRS and humans on tasks of one-to-one face matching, little is known about how effectively human operators can use these AFRS as decision-aids. Our aim was to investigate how the prior decision from an AFRS affects human performance on a face matching task, and to establish whether human oversight of AFRS decisions can lead to collaborative performance gains for the human-algorithm team. The identification decisions from our simulated AFRS were informed by the performance of a real, state-of-the-art, Deep Convolutional Neural Network (DCNN) AFRS on the same task. Across five pre-registered experiments, human operators used the decisions from highly accurate AFRS (> 90%) to improve their own face matching performance compared with baseline (sensitivity gain: Cohen’s d = 0.71–1.28; overall accuracy gain: d = 0.73–1.46). Yet, despite this improvement, AFRS-aided human performance consistently failed to reach the level that the AFRS achieved alone. Even when the AFRS erred only on the face pairs with the highest human accuracy (> 89%), participants often failed to correct the system’s errors, while also overruling many correct decisions, raising questions about the conditions under which human oversight might enhance AFRS operation. Overall, these data demonstrate that the human operator is a limiting factor in this simple model of human-AFRS teaming. These findings have implications for the “human-in-the-loop” approach to AFRS oversight in forensic face matching scenarios.
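The abstract reports gains as Cohen’s d, the difference between two group means divided by their pooled standard deviation. As a reminder of what that statistic measures, here is a minimal sketch; the accuracy scores are made-up illustrative numbers, not data from the paper:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference between two groups,
    scaled by the pooled (sample) standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical face-matching accuracy: AFRS-aided vs. unaided baseline
aided = [0.88, 0.85, 0.90, 0.83, 0.87]
baseline = [0.78, 0.74, 0.80, 0.76, 0.72]
print(f"d = {cohens_d(aided, baseline):.2f}")
```

A d of 0.7–1.5, the range the paper reports, is conventionally read as a medium-to-very-large effect, which is why the aid helps, yet still leaves the human-AFRS team short of the AFRS alone.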

Hat tip: The excellent KL.