Pathbreaking paper on AI simulations of human behavior
By Benjamin Manning and John Horton:
Useful social science theories predict behavior across settings. However, applying a theory to make predictions in new settings is challenging: rarely can it be done without ad hoc modifications to account for setting-specific factors. We argue that AI agents placed in simulations of those novel settings offer an alternative way to apply theory, requiring minimal or no modifications. We present an approach for building such "general" agents that draw on theory-grounded natural language instructions, existing empirical data, and knowledge acquired by the underlying AI during training. To demonstrate the approach in settings where no data from the relevant data-generating process exists (as is often the case in applied prediction problems), we design a highly heterogeneous population of 883,320 novel games. AI agents are constructed using human data from a small set of conceptually related but structurally distinct "seed" games. In preregistered experiments on a random sample of 1,500 games from the population, the agents predict human play better, on average, than (i) game-theoretic equilibria and (ii) out-of-the-box agents. For a small set of separate novel games, these simulations predict responses from a new sample of human subjects better than even the most plausibly relevant published human data.
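To make the idea concrete, here is a minimal sketch of what a theory-grounded agent prompt of this kind might look like. Everything in it is an illustrative assumption rather than the authors' implementation: the `Game` class, `build_agent_prompt`, the level-k theory hint, and the seed observation are all hypothetical stand-ins, and the resulting prompt would be sent to an LLM to sample simulated play.

```python
# Hypothetical sketch (not the authors' code) of a "general agent" prompt:
# it combines (i) theory-grounded instructions, (ii) human observations from
# structurally distinct "seed" games, and (iii) the rules of a novel game.

from dataclasses import dataclass


@dataclass
class Game:
    description: str  # natural-language rules of the game
    actions: str      # description of the action space


def build_agent_prompt(novel_game: Game, theory_hint: str,
                       seed_observations: list[str]) -> str:
    """Assemble a theory-grounded prompt for an LLM simulating a human player."""
    seeds = "\n".join(f"- {obs}" for obs in seed_observations)
    return (
        "You are simulating a human participant in an economic game.\n"
        f"Relevant behavioral theory: {theory_hint}\n"
        "Observed human play in related but structurally different games:\n"
        f"{seeds}\n\n"
        f"New game: {novel_game.description}\n"
        f"Available actions: {novel_game.actions}\n"
        "Choose the action a typical human participant would choose."
    )


if __name__ == "__main__":
    game = Game(
        description=("Everyone picks a number from 0 to 100; the winner is "
                     "whoever comes closest to two-thirds of the group average."),
        actions="any integer in [0, 100]",
    )
    prompt = build_agent_prompt(
        game,
        theory_hint=("Players reason only a bounded number of steps ahead "
                     "(level-k), rather than playing the Nash equilibrium."),
        seed_observations=[
            "In a related guessing game, most subjects chose well above "
            "the equilibrium prediction of zero.",
        ],
    )
    # In practice, the prompt would be sent to an LLM repeatedly
    # to sample a distribution of simulated human play.
    print(prompt)
```

The toy game here illustrates the abstract's two baselines: an "out-of-the-box" agent would see only the game description, and the equilibrium baseline would predict zero, while the general agent additionally conditions on theory and seed-game behavior.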
Here is a good Twitter thread. A broader AI lesson here is that you often have to put in a lot of work to get the best from your LLMs. And these results ought to have implications for the methods of psychology and some of the other social sciences as well.