Abstract
The new humanoid robots not only perform tasks but can also initiate interactions and social relationships with other robots and with humans. In this view, the diffusion of humanoid robots with a physical structure reminiscent of the human body, endowed with decision-making abilities, and capable of externalizing and generating emotions, is opening a new line of research whose main objective is understanding the dynamics of the social interactions generated by encounters between robots and humans. However, this process is not easy. To be accepted by society, robots have to “understand” people and to adapt themselves to complex real-life social environments. This goal underlines the importance for research of aspects such as communication, acceptance, and ethics, which require collaboration among multiple disciplines, including psychology, neuroscience, design, mechatronics, computer science, philosophy, sociology, anthropology, biomechanics, and roboethics. This special issue seeks to gather knowledge from these disciplines with respect to human–robot confluence (HRC) in the application of robots to everyday life, including robot training partners and industrial collaborative robots (Cobots). It covers a wide range of topics related to HRC, involving theories, methodologies, technologies, and empirical and experimental studies. The final goal is to support researchers and developers in creating robots that not only have a humanoid body but are really “humane”: accessible, sympathetic, generous, compassionate, and forbearing.
The new humanoid robots not only perform tasks but can also initiate interactions and social relationships with other robots and with humans. In particular, the increasing use of humanoid robots is influencing many daily contexts—cooperative work, assistive living, monitoring, security, education, and entertainment—generating frequent human–robot interactions (HRIs) in unstructured environments.
In this view, the diffusion of humanoid robots with a physical structure reminiscent of the human body, endowed with decision-making abilities, and capable of externalizing and generating emotions, is opening a new line of research whose main objective is understanding the dynamics of the social interactions generated by encounters between robots and humans.1
As underlined by Vignolo et al.,2 “The success of the integration of robots in our everyday life is then subordinated to the acceptance of these novel tools by the population. The level of comfort and safety experienced by the users during the interaction plays a fundamental role in this process.”
However, this process is not easy. As underlined by different authors, humanization3—mimicking human appearance and behavior, including the display of humanlike cognitive and emotional states—is not enough.4 Many individuals who have not yet experienced direct contact with humanoid robots consider them a possible threat, and this negative attitude plays an influential role in their intention to interact with them.5 Yet even individuals who have interacted directly with humanoid robots often report feelings of discomfort, eeriness, and revulsion (feelings of “uncanniness”6). One view is that the ultimate goal is to have robots able to express their own values and interests.7 Apparently, to be accepted by society, robots have to demonstrate a real social experience6: to “understand” people and to adapt themselves to complex real-life social environments. In other words, they have to be really “humane”: accessible, sympathetic, generous, compassionate, and forbearing. Recently, Sandini and Sciutti8 suggested a possible agenda for achieving humane robots:
1. Gain intuition, becoming partners rather than just sophisticated tools.
2. Think beyond real time.
3. Use an anthropomorphic imagination for HRI.
A further point has been suggested by Leveringhaus9: an ethical framework that makes a commitment to human rights, human dignity, and responsibility a central priority for developers and researchers working with humanoid robots.
These goals underline the importance for research of aspects such as communication, acceptance, functionality, ethics, and effectiveness, which require collaboration among multiple disciplines, including psychology, neuroscience, design, mechatronics, computer science, philosophy, sociology, anthropology, biomechanics, and roboethics.
This special issue has gathered knowledge from these disciplines with respect to human–robot confluence (HRC) in the application of robots in everyday life, including assistive and rehabilitation robotics. It covers a wide range of topics related to HRC, involving theories, methodologies, technologies, and empirical and experimental studies.
The opening article by Fox and Gambino challenges the classical vision of many HRI studies, which apply to this field the rules and theories of interpersonal interaction research. In their view, the starting point should be our knowledge about personal relationships, and in their article they present the predominant interpersonal theories whose primary claims can be foundational to our understanding of human relationship development (social exchange theories, including resource theory, interdependence theory, equity theory, and social penetration theory). Moreover, they discuss whether interpersonal theories are viable frameworks for studying HRI and human–robot relationships, given their theoretical assumptions and claims. The article closes by providing suggestions for researchers and designers, including alternatives to equating human–robot relationships with human–human relationships.
The next article, by Jung and colleagues, uses the uncanny valley model to analyze the affective and cognitive responses to service humanoids. In particular, the article focuses on the effect of affective responses on trust, which is regarded as a critical cognitive factor influencing technology adoption, in two service contexts characterized by different levels of expertise: hotel reception (low expertise) and tutoring (high expertise). The results suggest that affective and cognitive responses are more positive for the high-expertise humanoid (tutoring) than for the low-expertise humanoid (hotel reception). This finding suggests that when people form impressions of humanoids conducting certain tasks for them, their assessments differ based on the task type. Moreover, the impact of the uncanny valley is attenuated by trust in robots. For this reason, people's attitudes are less influenced by humanoids' peripheral cues (e.g., appearances) in tasks requiring higher levels of expertise.
The first article by Manzi and colleagues examines the different psychological effects generated by two commercial humanoid robots: NAO and Pepper. In particular, their study assessed how the attribution of mental states, expectations of robotic development, and negative attitudes vary after observing a real interaction between the robot and a human (an experimenter). Their results suggest that both the observation of the interaction and the physical appearance of the robot affect the attribution of mental states, with a greater attribution of mental states to the Pepper robot compared with NAO. People's expectations, instead, are influenced by the interaction and are independent of the type of robot. Finally, negative attitudes are independent of both the interaction and the type of robot.
In their second article, Manzi and colleagues developed and tested the “Scale for Robotic Needs” to explore the diverse expectations that people have about humanoids. Using latent profile analysis, they describe five profiles of expectations that can be placed along a continuum of humanization of robots, ranging from those who consider robots pure technological tools at the service of humans (i.e., mechanical properties) to those who expect robots to be part of our society in the near future (i.e., self-determination). The study also suggests that negative attitudes toward robots are strongly related to people's expectations.
The article by Banks and colleagues explored the impact of two different heuristics (mental shortcuts that quickly but nonoptimally facilitate decision-making)—the machine heuristic (technology is systematic/unbiased, therefore its products are good) and the nature heuristic (natural things are pure/innate, therefore anything natural is good)—on our evaluation of humanoid robots. Specifically, their study explored (1) if invocation of agent-cued heuristics is inherently tied to activities and (2) whether either/both heuristics are evoked when agents exhibit both organic and machinic properties (as with cyborgs). Findings indicate that the nature heuristic may be dominant over the machine heuristic, but this primacy may be driven by operational contexts. Moreover, agent-category cues function as frames for interpreting agent behavior, which in turn influences perceptions of behavioral outcomes. However, ambiguous category membership may cause equivocation in this process.
Nijssen and colleagues, in their study, tried to answer a critical question for HRI: do we take a robot's needs into account during a common task? In two experiments, they investigated whether individuals take the needs of a robotic task partner into account to the same extent as those of a human task partner, and whether this is modulated by participants' anthropomorphic attributions to the robot. Their results suggest that humanizing a task partner indeed increases our tendency to take someone else's needs into account in a social decision-making task. However, this effect was found only for a human task partner, not for a robot. Thus, if studies on HRI want to investigate specific behavioral parameters and draw conclusions about how those behaviors compare with human–human interaction, an experimental condition in which those parameters are measured vis-à-vis another human is required.
The article by Hanoch and colleagues explored a classical topic of social psychology: peer pressure. A large body of evidence has shown that peer pressure can impact human risk-taking behavior, but we do not yet know whether the presence of a robot can have a similar impact. The study therefore evaluated participants' risk-taking behavior either alone, in the presence of a silent robot, or in the presence of a robot that actively encouraged risk-taking behavior. The results revealed that participants who were encouraged by the robot did take more risks, whereas the presence of a silent robot did not entice participants to show more risk-taking behavior.
Zhu used two studies to investigate the effects of social anxiety on the adoption of robotic training partners among university students. The first study confirmed that university students with higher social anxiety are more likely to choose robotic training partners than human training partners. The second study underlined the mediating role of a sense of relaxation, suggesting that training robots can improve the quality of life of socially anxious people.
Rossato and colleagues explored the subjective experience of younger and senior workers interacting with an industrial collaborative robot (Cobot). The results suggest that workers' acceptance of Cobots is high, regardless of age and of the control modality used. However, differences emerged between senior and younger adults in the evaluation of user experience, usability, and perceived workload: senior workers reported a slightly lower evaluation of usability and higher levels of frustration and of perceived physical and temporal demand. Nevertheless, senior workers evaluated the system as more supportive and reported a good perceived performance.
The final article, by Gaggioli and colleagues, discusses the existing limitations of humanoid robots that emerge when robots are faced with real-life contexts and activities occurring over long periods. In their view, these limitations arise because collaboration is a complex relational process that entails mutual understanding and reciprocal adaptation. To overcome this issue, they suggest a change of paradigm: shifting from “human–robot interaction” to “human–robot shared experience.” In their view, HRI research should focus on the emergence of such a shared experiential space between humans and robots. On one side, this requires the introduction and use of new concepts such as coadaptation, intersubjectivity, and individual differences. On the other side, it implies a significant change in current mainstream design approaches, which are still focused on the functional dimension of HRI.
In conclusion, the contents of this special issue constitute a sound foundation and rationale for future research aimed at exploring HRC in the application of robots to everyday life, including assistive and rehabilitation robotics. In particular, this special issue provides strong preliminary evidence to justify future research on developing a new generation of humanoid robots that can acquire and demonstrate a real social experience. The challenge for researchers and developers in the next 5–10 years is to design and develop social robots able to “understand” people and to build a shared communicative and relational experience.10 Only in this way will it be possible to experience “humane” robots—accessible, sympathetic, compassionate, and forbearing—able to really support their human counterparts.
Footnotes
Author Disclosure Statement
No competing financial interests exist.
Funding Information
This special issue was funded by Università Cattolica del Sacro Cuore (D3.2—2018—Human–Robot Confluence project).
