A key research challenge in human-AI systems is the current lack of data, models, and theories that explain their dynamic behavior, coordination, and performance. We do not yet fully understand the dominant socio-cognitive processes that determine the dynamic, adaptive, and learning behavior of human-AI teams. Of special interest are decision-making problems in intellective tasks with uncertainty and limited resources: what rational, efficient, or irrational strategies and heuristics do humans tend to adopt in such circumstances? Useful socio-cognitive models should inform the design of efficient AI agents that improve overall human-AI team performance. In other words, empirically validated models and theories are needed to model and build the human-AI teams of the future and to intervene when their performance deteriorates.

The project's broad objective is the development and experimental validation of a theory of coordination for human-AI teams in complex intellective tasks. We plan to combine fundamental insights and models of team behavior from the social sciences with state-of-the-art machine learning and dynamical-systems methods. Specifically, our objectives include:

1. modeling socio-cognitive structures in human-AI teams, including transactive memory systems, influence systems, and prospect theory;
2. identifying the leading cognitive processes, heuristics, and biases that underlie the formation of socio-cognitive structures and affect the accuracy of human-AI team decision making;
3. designing supervisory/coordinating AI agents for human-AI teams, based on concepts from applied psychology and machine learning, and testing and validating them in sequential, risky, and uncertain decision-making tasks; and
4. modeling how human-AI teams cope with limited training data acquired over short sessions, including how they react to various manipulations and intervention schemes.
Funded by: Army Research Office.