03 May 2024


Trust is an essential factor in human decision-making, one that shapes and constrains our actions towards individuals, organizations, and institutions. The increasingly pervasive role of intelligent machines has culminated in calls from both industry leaders and regulatory bodies to develop so-called “trustworthy AI”.
In this work, EMERGE partners from the Ludwig Maximilian University of Munich argue that the pursuit of morally trustworthy AI is not only unethical, since it promotes trust in an entity that cannot be trustworthy, but also unnecessary for optimal calibration. Instead, the authors show that reliability, without moral trust, carries the normative constraints needed for optimal calibration and mitigates the vulnerability that arises in high-stakes hybrid decision-making environments, without demanding, as moral trust would, the anthropomorphization of AI and the epistemically dubious behavior that follows from it.
The authors argue that the normative demands reliability places on inter-agential action are met by an analogue of procedural metacognitive competence (i.e., the ability to evaluate the quality of one’s own informational states in order to regulate subsequent action). Drawing on recent empirical findings suggesting that providing reliability scores to human decision-makers improves how well they calibrate their reliance on an AI system, the authors contend that reliability scores offer a good index of competence and enable humans to determine how much they wish to rely on the system.
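To make the calibration point concrete, the following is a minimal sketch (not taken from the paper) of how a per-prediction reliability score might guide reliance decisions. All names, values, and the reliance threshold are illustrative assumptions: the AI reports a reliability score for each prediction, the score is well calibrated by construction, and a simple policy relies on the AI only when the score is high enough.

```python
"""Hypothetical illustration: a reliability score as an index of competence
that lets a human decide when to rely on an AI system.
All names and thresholds are assumptions, not from the paper."""

import random

random.seed(0)

# Simulated AI outputs: each prediction carries a reliability score in [0.5, 1.0],
# and the chance the prediction is correct equals that score (well calibrated by construction).
predictions = []
for _ in range(1000):
    score = random.uniform(0.5, 1.0)      # reported reliability
    correct = random.random() < score     # outcome tracks the score on average
    predictions.append((score, correct))

RELY_THRESHOLD = 0.8  # illustrative policy: rely on the AI only above this score

relied = relied_correct = total_correct = 0
for score, correct in predictions:
    total_correct += correct
    if score >= RELY_THRESHOLD:           # the human chooses to rely on the AI here
        relied += 1
        relied_correct += correct

print(f"Relied on the AI in {relied} of {len(predictions)} cases")
print(f"Accuracy when relying:        {relied_correct / relied:.2f}")
print(f"Accuracy relying in all cases: {total_correct / len(predictions):.2f}")
```

Because the reported score tracks actual accuracy, conditioning reliance on it yields better outcomes than relying on the system indiscriminately; this is the sense in which reliability scores can support calibrated reliance without requiring moral trust in the system.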
Read the paper at the link below.