27 July 2023

Publication: Intelligence brings responsibility - Even smart AI assistants are held responsible

Who gets blamed when an accident happens? The artificial intelligence (AI) system or the human relying on it? The nascent field of experimental AI ethics has found strong evidence that AI systems are judged to be as responsible as humans when they make traffic decisions, whether independently or with humans as co-actors. Fully autonomous medical AI systems share responsibility with the supervising clinician. AI is held similarly responsible in legal and medical cases when it provides social or moral guidance on whether a defendant can be released or whether a risky medical procedure should be performed.

But what happens when AI is merely an enhanced detection device, most closely resembling a simple instrument or tool? Would such purely instrumental use of AI let the technology off the responsibility hook, or is the involvement of some form of intelligence enough to trigger attributions of responsibility?

To find out, EMERGE partners from Ludwig Maximilian University of Munich examined whether purely instrumental AI systems escape responsibility. The authors compared AI-powered with non-AI-powered car warning systems and measured the responsibility ratings assigned to each system and to its human user.

Their findings show that responsibility is shared when the warning system is powered by AI, but not when it is purely mechanical, even though people consider both systems mere tools. Surprisingly, whether the warning actually prevents the accident introduces an outcome bias: the AI system receives more credit when the human driver manages to avert the accident than it receives blame when the driver fails to do so.

Source: Deroy, O., Longin, L., & Bahrami, B., iScience, 26(8), 107494, 2023. DOI: 10.1016/j.isci.2023.107494

Access EMERGE publications via the link below.