Abstract: |
Understanding how people behave in strategic settings--where they make
decisions based on their expectations about the behavior of others--is a
long-standing problem in the behavioral sciences. We conduct the largest study
to date of strategic decision-making in the context of initial play in
two-player matrix games, analyzing over 90,000 human decisions across more
than 2,400 procedurally generated games that span a much wider space than
previous datasets. We show that a deep neural network trained on these data
predicts people's choices better than leading theories of strategic behavior,
indicating that there is systematic variation that is not explained by those
theories. We then modify the network to produce a new, interpretable
behavioral model, revealing what the original network learned about people:
their ability to respond optimally and their capacity to reason about others
depend on the complexity of individual games. This context-dependence
is critical in explaining deviations from the rational Nash equilibrium,
response times, and uncertainty in strategic decisions. More broadly, our
results demonstrate how machine learning can be used not only for prediction
but also to help generate novel explanations of complex human behavior. |