Computational Cognitive Modeling of a Socio-Cultural Perspective on Human-AI Cooperation with the Pig Chase Task
Research Poster, Engineering 2025 Graduate Exhibition
Presentation by Swapnika Dulam
Exhibition Number 157
Abstract
Despite the continued anthropomorphizing of AI systems, the potential impact of racialization during human-AI interaction is understudied. In this poster, we explore how human-AI cooperation may be affected by the belief that an AI system is racialized, that is, trained on data from a specific group of people. In this study, participants completed a human-AI cooperation task using the Pig Chase game, a popular variant of the Stag Hunt game. Participants from different self-identified demographics interacted with AI agents whose perceived racial identities were manipulated, allowing us to assess how socio-cultural perspectives influence participants' decision-making in the game. After the game, participants completed a survey questionnaire explaining the strategies they used while playing and rating the perceived intelligence of their AI teammate. Statistical analysis of the task behavior data revealed a statistically significant effect of the participant's self-identified demographic and of the interaction between this demographic and the treatment condition (i.e., the perceived demographic of the agent). We built a cognitive model of the task in the ACT-R cognitive architecture, incorporating a game-theoretic theory-of-mind model, to provide a cognitive-level, process-based explanation of participants' perspectives grounded in the study's results. This model helps us better understand the factors affecting the decision-making strategies of the game participants while interacting with presumably racialized AI agents.
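The cooperation dilemma underlying a Stag Hunt variant like Pig Chase can be sketched as a simple payoff matrix: mutual cooperation (trapping the pig together) pays best, but cooperating alone pays worst, so defecting to the safe option is tempting when a player distrusts their partner. The sketch below is a minimal illustration; the action names and reward values are assumptions for exposition, not the actual Pig Chase implementation.

```python
# Illustrative Stag Hunt-style payoff matrix (values are assumed, not
# the actual Pig Chase rewards). Each entry maps a pair of actions to
# the (player, partner) rewards for one round.
PAYOFFS = {
    ("cooperate", "cooperate"): (25, 25),  # jointly trap the pig: best for both
    ("cooperate", "defect"):    (0, 5),    # lone cooperator gets nothing
    ("defect",    "cooperate"): (5, 0),    # defector takes the safe exit
    ("defect",    "defect"):    (5, 5),    # both take the safe exit
}

def payoff(player_action: str, partner_action: str) -> tuple[int, int]:
    """Return (player, partner) rewards for one round of the game."""
    return PAYOFFS[(player_action, partner_action)]
```

Because the safe option dominates whenever the partner is expected to defect, a participant's belief about their AI teammate (including beliefs induced by its perceived racial identity) directly shapes whether cooperation is the rational choice.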
Importance
Human-AI interaction is rarely studied through a socio-cultural lens, especially when humans interact with racialized AI. Given the sustained impact of racism at various levels of society, it is more important than ever to understand its influence on human-AI interactions. We used data from the experiment to analyze the strategies participants employed under the various treatment conditions, and we built a cognitive model to examine the cognitive processes involved in these decision-making strategies. The model helps us better understand how and when people choose not to cooperate with AI agents, and we further examine whether these factors transfer to real-world scenarios. This work brings us one step closer to building socioculturally competent AI models.
DEI Statement
AI systems are only as good as the data used to train them, and these datasets are far from perfect: they reflect the biases present in society. AI models developed for a biocentric man may not account for the needs of the entire population, so these systems overlook subtle nuances necessary for them to be widely acceptable to people of all races. This study aims to understand how humans cooperate with racialized AI and how they perceive it. Understanding the factors that contribute to implicit cognitive associations with certain types of racialized AI gives us deeper insight into human perspectives on AI agents and can help us build more socioculturally competent AI systems.