Project
In the ADVISOR project, we intend to advance the current state of the art to fulfil the need to integrate robotic technologies for improving the quality, efficiency, and success of telehealth and at-home healthcare, as expressed in the PNRR M6C1 milestone, Investment 1.2. In particular, we aim to investigate how to build Trustworthy and Transparent Socially Assistive Robots that can promote healthy lifestyle habits by engaging people in social interactions, using behavioural and social cues (verbal and non-verbal) and emotional and cognitive abilities adapted to each individual.
People have an innate ability to easily read others’ intentions and behaviours. This ability, however, does not transfer well to reading and understanding the behaviours of a robotic interaction partner. Most of the time, lay users cannot interpret the current state of a robot, even at a basic level. Therefore, an increasingly important issue for the acceptance of robots in human homes is not only the pertinence of a robot’s behaviours and application, but also the transparent interpretability of those behaviours and of the underlying decision-making processes. To fulfil this need, current approaches must be extended to develop systems that can analyse and modulate the behaviours of a robot so as to provide transparent, interpretable information to users. This is particularly relevant in a healthcare scenario, where we envisage robots that adapt their behaviours to the users’ needs in terms of personality, cognitive profile, medical records and requirements, and requests. Without clear and transparent behaviours, robots become unpredictable; people may refuse to trust them and, as a consequence, the robot’s attempts to guide them towards a healthy lifestyle become ineffective and unacceptable.
To endow people with the ability to understand and predict a robot’s behaviours, we will investigate how to develop a robotic Cognitive Architecture that integrates several emerging techniques for making robots legible and trustworthy, such as the robot’s ability to talk to itself (i.e., Inner Speech) and mechanisms that improve the quality and accuracy of the user’s mental model of the robot (i.e., Theory of Mind). However, while these techniques increase people’s perceived trust in a robot, the robot may still exhibit unintended behaviours that contrast with what is expected of it, endangering people’s physical and mental safety. To ensure safe robot behaviours and preserve the users’ well-being, we will develop robots that can monitor their own state and the surrounding environment to dynamically adapt and recover from any violations (i.e., Verification Methods).
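To illustrate how the Verification Methods component could interact with Inner Speech, the sketch below shows a minimal runtime monitor in Python that checks each snapshot of the robot’s state against declared safety properties and, upon a violation, triggers a recovery action and verbalises it as inner speech. This is only an illustrative sketch: all names (RobotState, SafetyProperty, RuntimeMonitor) and the example property are hypothetical and do not describe the actual ADVISOR architecture.

    # Illustrative sketch only: all class, field, and property names are
    # hypothetical, not part of any existing ADVISOR codebase.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class RobotState:
        """Snapshot of the robot's internal state and sensed context."""
        current_goal: str
        speed: float          # base velocity in m/s
        user_distance: float  # distance to the interacting person in m

    @dataclass
    class SafetyProperty:
        """A runtime-checkable invariant over the robot state."""
        name: str
        holds: Callable[[RobotState], bool]
        recovery: Callable[[RobotState], str]  # returns a recovery action label

    class RuntimeMonitor:
        """Checks each state snapshot against all properties and, on a
        violation, selects a recovery action and verbalises it as inner
        speech so the user can follow the robot's reasoning."""
        def __init__(self, properties: List[SafetyProperty]):
            self.properties = properties

        def step(self, state: RobotState) -> List[str]:
            actions = []
            for prop in self.properties:
                if not prop.holds(state):
                    action = prop.recovery(state)
                    # Inner speech: make the violation and the chosen
                    # recovery transparent to the user.
                    print(f"I notice that '{prop.name}' is violated "
                          f"while pursuing '{state.current_goal}'; "
                          f"I will {action}.")
                    actions.append(action)
            return actions

    # Example property: keep a low speed when close to the user.
    monitor = RuntimeMonitor([
        SafetyProperty(
            name="keep a safe speed near the user",
            holds=lambda s: s.user_distance > 0.5 or s.speed < 0.1,
            recovery=lambda s: "stop and explain my intention",
        ),
    ])
    monitor.step(RobotState(current_goal="remind medication",
                            speed=0.4, user_distance=0.3))

In this toy example, the monitor detects that the robot is moving too fast while close to the user, halts the current behaviour, and announces the recovery step aloud, combining runtime verification with the transparency goals described above.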