About the “Awareness Inside” Challenge
Awareness and consciousness have been high on the Artificial Intelligence (AI) research agenda for decades. Progress has been difficult because it has been hard to agree on exactly what it means to be aware. Most researchers would agree, though, that we do not yet have any truly aware artificial system, that awareness is much more than sensory sophistication, and that it is much more than any Artificial Intelligence as we know it. But what, then, would a user expect from a service or device that has ‘awareness inside’?
Most scientific and philosophical accounts of awareness are based on a human subject perspective and at an individual level. They address the question of what it means for an individual human subject to be aware of, e.g., the environment, time or oneself, and how one can assess awareness in this context. The problem is certainly relevant, since many clinical and cognitive conditions can be linked to awareness issues. The concept is also relevant to emerging technologies, as it has been argued, for instance, that humans will not accept robots (or chatbots, or decision support systems) as trustworthy partners if they cannot ascribe some form of awareness and true understanding to them.
The individual, human-centric concept of consciousness hinders the application of awareness as a measurable feature of any sufficiently complex system. The study of awareness in other species and artefacts, or of even more elusive concepts such as social awareness, requires a new perspective applicable to many kinds of systems. Such a perspective can then also serve to address the inter-subjective state and experience of awareness (i.e., what is it like to interact with an aware robot that, most probably, does not have the same kind of awareness as a human?), or to include non-conscious objects in the sphere of awareness (e.g., becoming aware of the time without looking at a watch).
For technologies, awareness principles would allow a step up in engineering complex systems, making them more resilient, self-developing and human-centric. Awareness is a prerequisite for a real, contextualised understanding of a problem or situation and for adapting one's actions (and their consequences) to the specific circumstances. Ultimately, awareness underpins the coherent and purposeful behaviour, learning, adaptation and self-development of intelligent systems over longer periods of time.
Project Portfolio Members
The projects approved in this EIC Pathfinder Challenge “Awareness Inside” were expected to address each of the following three outcomes:
- New concepts of awareness that are applicable to systems other than humans, including technological ones, with implications for how awareness can be recognised or measured.
- Demonstrate and validate the role and added value of such awareness in an aware technology, class of artefacts or services, for which the awareness features lead to a truly different quality in terms of, e.g., performance, flexibility, reliability or user experience.
- Define an integrative approach to awareness engineering, its technological toolbox, and its needs, implications and limits, including ethical and regulatory requirements.
In addition to EMERGE, seven other projects were approved in this EIC Pathfinder Challenge “Awareness Inside” call, listed below.
Improving social competences of virtual agents through artificial consciousness based on the Attention Schema Theory
ASTOUND proposes an Integrative Approach For Awareness Engineering to establish consciousness in machines. The approach consists of an AI architecture for Artificial Consciousness based on the Attention Schema Theory (AST), a novel approach to social cognition that reconciles some of the currently most debated cognitive-neuroscience theories of consciousness. According to the AST, the brain constructs subjective awareness as a schematic model of the process of attention, suggesting that an information-processing machine could attribute consciousness properties to others in a similar way. The AST-based architecture proposed by ASTOUND will combine an Attention Mechanism, provided by the attentional layers in a deep neural architecture, and a Long Term Memory module allowing interplay between internal and external stimuli (data), with an Attention Schema that will determine empathic and trustworthy decision-making.
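The core AST idea summarised above, namely a system that both attends to stimuli and maintains a simplified internal model (a "schema") of its own attention, can be sketched in a few lines. The following is a hedged toy illustration only, not the ASTOUND architecture; all class names, features and learning rules are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class ToyAttentionSchema:
    """Toy sketch of the Attention Schema Theory idea (illustrative only):
    the agent attends via softmax over stimulus salience, and separately
    learns a coarse predictive model of its own attention from stimulus
    features -- the 'attention schema'."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # schema weights (hypothetical)
        self.lr = lr

    def attend(self, saliences):
        # The actual attention process: normalised weighting of stimuli.
        return softmax(saliences)

    def schema_predict(self, features_per_stimulus):
        # The schema's simplified prediction of where attention will go,
        # computed from observable features, not from attention itself.
        scores = [sum(w * f for w, f in zip(self.w, feats))
                  for feats in features_per_stimulus]
        return softmax(scores)

    def update_schema(self, features_per_stimulus, attention):
        # Nudge the schema toward the attention distribution it observed.
        predicted = self.schema_predict(features_per_stimulus)
        for feats, a, p in zip(features_per_stimulus, attention, predicted):
            err = a - p
            for i, f in enumerate(feats):
                self.w[i] += self.lr * err * f
```

After repeated updates, the schema's prediction of the agent's attention converges toward the attention actually deployed, which is the minimal sense in which the system "models its own attention".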
Counterfactual Assessment and Valuation for Awareness Architecture
The CAVAA project proposes that awareness serves survival in a world governed by hidden states: it lets agents deal with the "invisible", from unexplored environments to social interaction that depends on the internal states of agents and on moral norms. Awareness reflects a virtual world, a hybrid of perceptual evidence, memory states, and inferred "unobservables", extended in space and time. The CAVAA project will realise a theory of awareness instantiated as an integrated computational architecture and its components, to explain awareness in biological systems and engineer it in technological ones. It will realise the underlying computational components of perception, memory, virtualisation, simulation, and integration; embody the architecture in robots and artificial agents; and validate it across a range of use cases involving the interaction between multiple humans and artificial agents, using accepted measures and behavioural correlates of awareness.
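The abstract's framing of awareness as inference over hidden, "invisible" states can be illustrated with a single textbook Bayesian update step. This is a generic sketch under our own assumptions, not CAVAA's architecture; the scenario and numbers are made up.

```python
def bayes_update(prior, likelihoods):
    """One inference step over a hidden state: posterior ∝ prior × likelihood."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical scenario: is an unobserved room occupied?
prior = [0.5, 0.5]        # P(occupied), P(empty) before any evidence
likelihoods = [0.9, 0.2]  # P(noise heard | occupied), P(noise heard | empty)
posterior = bayes_update(prior, likelihoods)  # belief after hearing noise
```

The agent never observes the hidden state directly; it maintains and updates a belief over it from perceptual evidence, which is the minimal form of "reflecting a virtual world" of unobservables.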
A metapredictive model of synthetic awareness for enabling tool invention
METATOOL aims to provide a computational model of synthetic awareness to enhance adaptation and achieve tool invention. This will enable a robot to monitor and self-evaluate its performance, ground and reuse this information for adapting to new circumstances, and finally unlock the possibility of creating new tools. Under the predictive account of awareness, and based on both neuroscientific and archaeological evidence, the project will: 1) develop a novel computational model of metacognition based on predictive processing (metaprediction) and 2) validate its utility in real robots in two use-case scenarios: conditional sequential tasks and tool creation. METATOOL will provide a blueprint for the next generation of artificial systems and robots that can perform adaptive and anticipative control with and without tools (improved technology), self-evaluation (novel explainable AI), and the invention of new tools (disruptive innovation).
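Metacognitive self-evaluation of the kind described, a system monitoring its own prediction error as a confidence signal, can be illustrated with a minimal second-order predictor. This is a generic sketch under our own assumptions, not METATOOL's metaprediction model; names and thresholds are hypothetical.

```python
class MetaPredictor:
    """Toy metacognition sketch (illustrative only):
    a first-order predictor tracks a signal with a running mean, while a
    second-order ('meta') estimate tracks the predictor's own absolute
    error, serving as a self-evaluation / confidence signal."""

    def __init__(self, alpha=0.2):
        self.est = 0.0      # first-order prediction of the signal
        self.err_est = 0.0  # meta-level estimate of own prediction error
        self.alpha = alpha  # learning rate for both levels

    def step(self, x):
        err = abs(x - self.est)                      # how wrong was I?
        self.err_est += self.alpha * (err - self.err_est)  # update self-model
        self.est += self.alpha * (x - self.est)            # update prediction
        return self.est, self.err_est

    def confident(self, thresh=1.0):
        # Self-evaluation: low expected error means high confidence.
        return self.err_est < thresh

mp = MetaPredictor()
for _ in range(100):
    mp.step(5.0)  # a steady, predictable signal
```

On a predictable signal, the error estimate decays and the system "knows" it is performing well; a sudden change would raise the error estimate, flagging the need to adapt.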
Smart Building Sensitive to Daily Sentiment
SUST(AI)N derives theoretical and experimental underpinnings to combine novel distributed intelligence, unprecedented sensing accuracy, and reconfigurable hardware in a smart-building context into a conscious organism that achieves self-awareness through probabilistic reasoning across its connected sustainable devices. SUST(AI)N constitutes the first concentrated effort to explore novel advances in distributed intelligence, reconfigurable hardware, and environmental sensing to establish awareness for smart buildings that reaches global availability of information (C1; through data aggregation across connected reconfigurable hardware) and self-monitoring (C2; via distributed probabilistic intelligence and the sensing of group sentiment).
Symbolic logic framework for situational awareness in mixed autonomy
SymAware addresses the fundamental need for a new conceptual framework for awareness in multi-agent systems (MASs) that is compatible with the internal models and specifications of robotic agents and that enables safe simultaneous operation of collaborating autonomous agents and humans. The goal of SymAware is to provide a comprehensive framework for situational awareness to support sustainable autonomy via agents that actively perceive risks and collaborate with other robots and humans to improve their awareness and understanding, while fulfilling complex and dynamically changing tasks.
Context-aware adaptive visualizations for critical decision making
SYMBIOTIK envisions an effortless interaction dialogue between humans and Information Visualization (InfoVis) systems to support decision-making processes, inspired by known biological principles and guided by artificial intelligence (AI). Critically, this dialogue requires AI solutions with context awareness and with capabilities for sensing and expressing emotion. The project proposes a novel framework in which both the human and the machine cooperate towards a common goal and evolve together. Awareness principles will allow the engineering of complex systems that are more resilient and more human-centric. SYMBIOTIK will define an integrative approach to awareness engineering and propose a specific open-source implementation. Finally, it will demonstrate and validate the role and added value of such an awareness framework in two scenarios: supporting novice-to-expert transitions, and critical decision making.
Value-Aware Artificial Intelligence
The VALAWAI project will develop a toolbox for building Value-Aware AI resting on two pillars, both grounded in science: an architecture for consciousness inspired by the Global Neuronal Workspace model, developed on the basis of neurophysiological evidence and psychological data, and a foundational framework for moral decision-making based on psychology, social cognition and social brain science. The project will demonstrate the utility of Value-Aware AI in three application areas for which a moral dimension urgently needs to be included: (i) social media, where we see many negative side effects such as disinformation, polarisation, and the instigation of asocial and immoral behaviour; (ii) social robots, which are designed to be helpful or to influence human behaviour in positive ways but potentially enable manipulation, deceit and harmful behaviour; and (iii) medical protocols, where VALAWAI tries to ensure that medical decision-making is value-aligned.