09 January 2025


The widespread use of the “trustworthy AI” label by manufacturers and legislators has sparked debate. On the one hand, several philosophers have argued that it is irrational to place trust in a machine: trust should be placed not in the machines themselves, but in the humans responsible for their development and deployment. Others argue in favour of trusting machines, emphasizing that, as social animals, we already live in a world structured by trusting relationships; on this view, trust is a pre-reflective condition of human experience and can be unproblematically extended to machines.
Since today’s machines satisfy none of the conditions philosophers typically require for genuine trust, many AI ethicists advocate employing the notion of reliability rather than trust when dealing with machines. Reliability, decoupled from trust, entails solely performance-based or outcome-based standards of evaluation.
In this work, EMERGE partners from the Ludwig Maximilian University of Munich explore whether labelling AI as either “trustworthy” or “reliable” influences user perceptions and acceptance of automotive AI technologies. Using a one-way between-subjects design, the researchers presented online participants (N = 478) with guidelines for either trustworthy or reliable AI, then asked them to evaluate three vignette scenarios and complete a modified version of the Technology Acceptance Model covering variables such as perceived ease of use, human-like trust, and overall attitude.
While labelling AI as “trustworthy” did not significantly influence participants’ judgements on specific scenarios, it increased perceived ease of use and human-like trust, namely benevolence, suggesting a facilitating effect on usability and an anthropomorphizing effect on user perceptions. The study offers insight into how such labels shape perceptions of AI technology.
Read the paper at the link below.