22 April 2025


Cultural differences may lead to varied reactions towards interactive artificial agents. Our present understanding of people's general attitudes towards AI, and of their actual cooperation with and trust in fellow humans, offers mixed predictions for how interactions between humans and AI-powered agents—for example, between human traffic participants and the much anticipated fully automated vehicles—may unfold in different parts of the world.
In this work, EMERGE partners from the Ludwig Maximilian University of Munich examined people’s willingness to cooperate with artificial agents and humans in two classic economic games requiring a choice between self-interest and mutual benefit. The authors observed that participants in the United States cooperated with artificial agents significantly less than they did with humans, whereas participants in Japan exhibited equivalent levels of cooperation with both types of co-player.
They also found a notable difference in how people felt about exploiting a cooperative partner: people in Japan responded emotionally to artificial agents and humans alike, whereas people in the United States felt bad about exploiting humans, but not machines. These findings underscore the necessity for nuanced cultural considerations in the design and deployment of such technology across diverse societies.
Read the paper at the link below.