EMERGE led the organisation of the first Awareness Inside Portfolio workshop, titled 'Inside the Ethics of AI Awareness', which took place in Brussels on September 20th, 2023, and was attended by representatives of the EC and of all Portfolio projects. The event comprised two sessions, one dedicated to the Portfolio and one Public. The Portfolio session featured presentations from representatives of the Portfolio projects on the themes of ethics and awareness, followed by focused discussions in three brainstorming groups, each tackling one of the following pressing questions for the development of a joint Portfolio vision of the ethics of awareness:

  1. What are the ethical red lines when it comes to awareness/consciousness in artificial agents?
  2. Does awareness offer some opportunities to improve on the ethical guidelines for AI?
  3. What are the activities/initiatives that the Ethics portfolio could host in the next 12 months?

The Public session centred on two keynote speeches by invited speakers: Alan Winfield, from the University of Bristol, and Katleen Gabriels, from Maastricht University. The abstracts of the presentations delivered by each project representative and by the keynote speakers are provided below.

Keynote Abstracts

Towards an Ethical Governance Framework for (self) aware systems - Alan Winfield (University of Bristol)

AI systems or intelligent robots that we might suspect have some degree of (self) awareness will need careful ethical governance. This is not only because of the possible impact of awareness on interactions with human users, but also because of the need to monitor the operation of the systems themselves, especially if they exhibit signs of self-awareness. In previous work we suggested an ethical governance approach for explicitly ethical machines*. This paper will propose a framework, based on that approach, for the strong ethical governance of (self) aware systems.

*A. F. Winfield, K. Michael, J. Pitt and V. Evers (2019), 'Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems' [Scanning the Issue], Proceedings of the IEEE, vol. 107, no. 3, pp. 509-517, March 2019, doi: 10.1109/JPROC.2019.2900622.

AI Enhancement: Can human intelligence be cognitively and morally enhanced by artificial intelligence? - Katleen Gabriels (Maastricht University)

The advantages artificial intelligence (AI) has over humans include efficiency, speed, and consistency. Today, machines perform many sophisticated analyses as well as or even better than humans, for instance in the context of medical decision-making, and sometimes in ways humans would not themselves anticipate. In 2016, AlphaGo defeated world champion Lee Sedol at the Chinese board game Go. By playing millions of matches, AlphaGo discovered novel strategies; during its match with Sedol, it made a brilliant move that surprised human observers and eventually led people to play Go differently by trying out new moves. As AI is human-made, it is fair to say that it has learned from us, from our human intelligence. Yet the example of AlphaGo seems to suggest that human intelligence can also learn from artificial intelligence. The guiding question of this presentation is: to what extent can AI enhance us cognitively (see e.g. Nyholm, 2023) and morally (based on Volkman & Gabriels, 2023)? I will argue, among other things, that a modular system of multiple AI interlocutors could play a valuable role in enhancing human moral awareness.

Abstracts of Portfolio Seminars

The Distributed Adaptive Control of Moral Reasoning in Aware Agents: The CAVAA perspective - Paul Verschure (CAVAA)

The creation of synthetic aware agents raises two interlocking ethical questions: how should we treat them, and what minimal capacity for moral reasoning must they possess to interact effectively with humans? In this presentation, I will focus on the second question, starting from the necessary link between morality and volition: an aware agent must have a moral reasoning capability proportional to its ability to be driven by its own free choice. Based on these considerations, I will sketch an initial model of volition grounded in our growing understanding of its neuronal substrate. From this compatibilist perspective, I will then argue that morally aware agents will incorporate a hierarchically structured moral reasoning architecture embedded in their broader control architecture. I will provide examples of this perspective from both behavioural research and synthetic psychology.

Perspectives on the Ethics of Collaborative Awareness - Anna Monreale and Bahador Bahrami (EMERGE)

This talk addresses the ethical issues raised by multi-dimensional and multi-agent awareness. In particular, we will discuss different aspects or dimensions of awareness that could be instantiated in AI systems, and examine how they could map onto different kinds of explanations. We will then analyse the relationship between these forms of awareness and the opportunities they offer for making AI agents, both individual and collective, explainable.

Artificially Conscious Chatbots - Aida Elamrani (ASTOUND)

The ASTOUND project aims to improve on current chatbot technologies by implementing a module of artificial consciousness based on Michael Graziano's Attention Schema Theory. This is a challenging aim not only from a technical perspective, but also from a scientific and ethical one. There is currently no scientific consensus on the definition of consciousness, and, because the field of artificial consciousness is highly interdisciplinary, discussions are somewhat hindered by the lack of a unified approach. This talk is structured in two segments. The first provides an overview of progress in artificial consciousness, establishing foundational definitions and clarifying core concepts. The second addresses the ethical challenges around this technology, with a focus on the case of conversational agents.

Artificial Awareness: moving from theoretical possibility towards empirical actuality - Michele Farisco (CAVAA)

To make progress in discussions about the theoretical possibility, logical coherence, conceptual plausibility, technical feasibility and empirical realism of artificial awareness, it is necessary to start from a preliminary theoretical reflection, including the clarification of the relevant terms and of the logical, theoretical and empirical conditions for artificial awareness to arise. In this talk we first sketch a conceptual analysis of key terms from consciousness research (including how the concept of awareness relates to that of consciousness), and then focus on the multidimensionality of consciousness and on the concept of consciousness profiles, introducing a relevant taxonomy. Against this background, we analyse the theoretical possibility of artificial awareness and reflect on how to identify (and possibly quantify) the dimensions composing a particular profile, arguing for the need to develop empirical, notably ecological, indicators of consciousness for artificial systems.

Investigating the ethical dimension in the SymAware project - Ana Tanevska and Arabinda Ghosh (SymAware)

The SymAware project addresses the fundamental need for a new conceptual framework for awareness in multi-agent systems (MASs) that is compatible with the internal models and specifications of robotic agents and that enables the safe simultaneous operation of collaborating autonomous agents and humans. The SymAware framework will use compositional logic, symbolic computations, formal reasoning, and uncertainty quantification to characterise and support the situational awareness of MASs in its various dimensions: sustaining awareness by learning in social contexts, quantifying risks based on limited knowledge, and formulating risk-aware negotiation of task distributions. Finally, as new concepts of awareness can lead to a reimagining of the relation between humans and artificial agents, there is a need to investigate how this awareness framework can affect the ways in which people interact with AI systems. Given the ethical challenges and opportunities this implies, SymAware will also aim, within the scope of the project, to identify the requirements for ethical and trustworthy awareness in human-agent interaction.