Abstract
In recent years, the field of collaborative robots has been developing fast, with applications ranging from health care to search and rescue, construction, entertainment, sports, and many others. However, current social robotics is still far from the general abilities we expect in a robot collaborator. This limitation is more evident when robots are faced with real-life contexts and activities occurring over long periods. In this article, we argue that human–robot collaboration is more than just being able to work side by side on complementary tasks: collaboration is a complex relational process that entails mutual understanding and reciprocal adaptation. Drawing on this assumption, we propose to shift the focus from “human–robot interaction” to “human–robot shared experience.” We hold that to enable the emergence of such a shared experiential space between humans and robots, constructs such as coadaptation, intersubjectivity, individual differences, and identity should become the central focus of modeling. Finally, we suggest that this shift in perspective would imply changing current mainstream design approaches, which are mainly focused on functional aspects of the human–robot interaction, to the development of architectural frameworks that integrate the enabling dimensions of social cognition.
Introduction
In the novel “Machines like Me” by Ian McEwan, the new-generation android named Adam becomes part of Charlie's life. An intense relationship, full of the contradictions typical of human relationships, is established between the two main protagonists and Miranda, the third vertex of the love triangle: Adam profoundly influences Charlie's life. At the same time, Adam, set up by Charlie (and Miranda) with some initial personality parameters, changes his way of interacting with Charlie through the daily experience of human relationships, developing new ideas about the world and himself. The author depicts the phenomenon of coadaptation between humans and robots: the two agents, one human and the other artificial, share different experiences, modifying their way of experiencing themselves and the world.
This scenario forces us to ask ourselves what we want for future robotics. Do we desire that robots become passive prostheses that extend our natural capabilities under our direct control, or do we wish to develop artificial entities that are capable of autonomy, mutual understanding, empathy, and ultimately relational skills?
In this article, we will argue that the second stance is necessary if we are to build robots that can actively collaborate with us, rather than just passively work next to us. Our main proposition is that future robots should progressively become autonomous collaborative agents, rather than simple executors of our explicit commands.
Although interest in human–robot interaction and in the development of socially competent machines is growing, it often remains limited to highly contextualized, short-term instances.
Consider, for example, the bartender robot, the museum guide robot, the sales-assistant robot, or the robot receptionist: in all these cases the interaction is by definition cursory and bound to domain-specific competencies.
A common belief about social robots is that their realization is limited by sensor performance or hardware/processing capabilities. By now, however, substantial advances have been attained in materials, actuators, sensors, and computational power. These advances have brought about important improvements in the physical abilities of current robots and have led to computational systems able to solve highly complex logical challenges, such as winning against the best players of chess, Go, and StarCraft, by exploiting recent Artificial Intelligence solutions.
However, despite all these advances, current social robotics is still far from the general abilities we expect a robot collaborator to be equipped with. Effective collaboration in humans stems from “growing together,” that is, from building a mutual understanding, which evolves over long periods through shared experiences.
Consistently, we argue that the introduction of effective robot collaborators hinges upon the development of an intersubjective space between humans and machines. In particular, we suggest that this is a requirement for shifting from a vision of robots as tools, or prostheses, to one where robots are autonomous agents able to collaborate with us.
Although our approach to developing robots endowed with social capabilities necessarily stems from a human-centered epistemology, we contemplate the possibility that the collaboration of humans and robots may lead to the emergence of a novel, “porous” epistemology—one contaminated by the perspective of the robots themselves.
Building Shared Experiences Between Humans and Robots: Why and How?
In recent years, the interest in the development of collaborative robots has grown significantly, as it has become evident that even application areas traditionally populated by robots alone could benefit from the shift toward human–robot collaboration. This is happening in particular in the manufacturing industry, where so-called co-bots (an abbreviation of co-operative robots) are replacing classical independent robots. Applications of collaborative robotics today extend to health care, caregiving, search and rescue, construction, entertainment, sports, and many others.
However, despite this growing interest in collaborative robots, it is questionable whether the current robotic platforms fall into the category of collaborative machines. Collaboration is a broad concept, which is used to describe a wide variety of behaviors where more than one agent works on a single task. 1 What we want to propose here is that collaboration cannot be limited to agents working side by side on complementary tasks, but also involves the establishment of mutual understanding and coadaptation.
In robotics, coadaptation is generally regarded as the adaptation to the skills of the user over time, potentially triggering a corresponding adaptation in the human operator. 2 However, skills and actions are only one component of the relational processes involved when two humans collaborate. Similarly, human–robot interaction should embrace a more complex coadaptation, in which the perceptual, affective, and cognitive dimensions also change dynamically and, to some extent, merge in a mutually transformative shared experience. 3 Following this reasoning, the bidirectionality of the process becomes central, as the unit of analysis should not be the individual but the emerging system represented by the dyad or group. 4
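The bidirectional dynamic described above can be made concrete with a deliberately minimal toy model. In this sketch (entirely our own illustration, not a model from the cited literature), each agent's behavioral “style” is a single number, and each agent nudges its own style toward the partner's last observed behavior, so that the dyad, rather than either individual, determines the outcome:

```python
# Toy coadaptation model (hypothetical, for illustration only): two agents
# whose behavioral "styles" are one-dimensional parameters; each observes
# the partner's last behavior and nudges its own style toward it.

def coadapt(style_a: float, style_b: float,
            rate_a: float = 0.2, rate_b: float = 0.1,
            steps: int = 50) -> tuple[float, float]:
    """Run `steps` rounds of mutual adaptation and return the final styles."""
    for _ in range(steps):
        # Each agent sees the other's behavior from the previous round.
        observed_a, observed_b = style_a, style_b
        style_a += rate_a * (observed_b - style_a)  # A adapts toward B
        style_b += rate_b * (observed_a - style_b)  # B adapts toward A
    return style_a, style_b

# Starting from very different styles, the dyad converges on a shared one.
final_a, final_b = coadapt(0.0, 1.0)
```

Because the updates run in both directions, the point of convergence is a property of the pair (here weighted by the two adaptation rates), which is precisely why the dyad, not the individual, is the natural unit of analysis.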
However, we do not consider coadaptation as the only key dimension to developing collaborative robots. Insights in developmental science, philosophy of the mind, and behavioral economics point to further relevant dimensions in collaboration, which include intersubjectivity, individual differences, and identity.
Intersubjectivity
Lifespan studies show that preferences for, and the acceptability of, robots in different contexts 5 are related to the like-me nature of social robots, as a function of developmental level, with respect to physical features and behaviors.6–8
The construction of intersubjectivity is an essential step toward developing more human-like exchanges between humans and robots. As in humans, repeated interactions build a common history through the sharing of successive experiences, characterized also by errors, mismatches, and relational repairs.5,9 This process, in turn, creates a relational memory and generates expectations about relational experiences. To envisage a human–robot relationship as imagined by Ian McEwan, it is essential that a first form of intersubjectivity 10 be established, so that a sharing of experiences between humans and machines can be imagined.
We suggest that the robotic agent represents a cultural and material artifact, which influences the individual's psychological development. Vygotsky's cultural-historical theory 11 explained well how culture shapes the lines of development of our intelligence, through cultural and material artifacts. Social robots are not exempt from this type of influence, as they are themselves cultural and material artifacts and, therefore, can contribute to directing the psychological development of the individual in an original and, in many aspects, innovative way. In this perspective, the possible relationship between humans and robots and the emergence of intersubjectivity becomes a natural outcome.
Another important author in developmental psychology who helps shed light on the possible coadaptation between humans and robots is Daniel Stern, one of the foremost experts in the ontogeny of intersubjectivity. Stern 10 suggests that intersubjectivity is a need and, at the same time, a fundamentally human condition: our mind, by its nature, constantly seeks other people with whom to resonate and share experiences.
Two aspects emerge from Stern's hypothesis: the interpersonal dynamics that regulate intersubjectivity, and the motivational elements that drive human beings to enter into relationships with others. The first concerns how, within this coadaptation logic, the robot would position itself in interpersonal terms with respect to the human.
Research has shown that a provisional assimilation of the robot into a range of intersubjective dynamics is possible, as highlighted by a recent study by Manzi et al. 12 In this study, a robot simulating salient social behaviors, such as eye gaze, triggered social expectations in humans from the first months of life, thereby generating an intersubjective space in infants who have not yet experienced the complexity of relational dynamics. The second issue concerns the “fundamental human condition” highlighted by Stern.
Indeed, humans are born with a set of capacities that are modified through experience and learning. Two key questions arise in this context: What basic equipment should be implemented into the robot to establish an intersubjective space with humans? And: Is it only necessary for the robot to simulate human skills or should the robot be equipped with some basic skills that can be developed autonomously through experience and learning?
Again, research has shown that some typically human processes, such as trust, are fundamental for building and maintaining relationships with a robot even in early childhood, at least in short-term interactions. 9 However, studies have shown that behavior simulation is sufficient to elicit relational engagement only in short-term interactions with robots, 13 whereas for long-term interactions it is desirable to equip the robot with skills that develop autonomously through experiences with human partners and the world.14,15
Here we come back to Vygotsky who, as Marchetti et al. 6 underline, introduced the concept of the Zone of Proximal Development, 11 which represents an intersubjective space between the two subjects of the relationship. In this zone, the cognitive discrepancy of one subject can become a powerful motivator of coadaptation. From this perspective, one of the most intriguing challenges for human–robot interaction is the possibility for robots to provide their human partners with stimuli, inputs, and interpretations that moderately exceed the partner's current capabilities, building bridges to more advanced forms of shared understanding and capability.
Dumouchel and Damiano 16 and Damiano and Dumouchel 17 show that dialogue is the fundamental structure and basic pattern of how humans act and think. “The real anthropomorphism in social robotics derives from basic cognitive structures and in particular from our tendency to teleological thought and dialogue as the main form of interaction”. 16 (p110, our translation) This may explain why humans tend to treat artifacts (and in particular humanoid artifacts) as interlocutors/partners. This is especially true for artifacts such as social robots, which should be able (in the framework of collaborative robots expressed before) to express and/or perceive emotions; communicate with high-level verbal dialogue; learn/recognize models of other agents; establish/maintain social relationships; use natural cues (gaze, gestures, etc.); exhibit distinctive personality and character; learn/develop social competencies. 18
The recent behavioral economics literature shows that the decision to trust a partner, or to act in a trustworthy way, is rather common across experimental subjects, despite being incompatible with the rationality assumption of standard economic theory.
Social robots are very useful in devising experiments able to explain the emergence of co-operation beyond the traditional approaches based on self-regarding preferences (e.g., repeated games with an infinite horizon or with a finite but uncertain horizon) or other-regarding preferences (e.g., equity- and/or fairness-based preferences). Some preliminary experimental results by Maggioni and Rossignoli 19 show that, in a simple repeated co-operation game, a verbal dialogic interaction with a robotic partner who verbally reacts to the actions of the human player reduces the otherwise negative bias that human subjects show toward robot partners when compared with human partners.
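A repeated game with a finite but uncertain horizon, one of the settings mentioned above, can be sketched in a few lines. The strategies, payoff values, and continuation probability below are our illustrative assumptions, not the design of the cited experiment; the point is only that when the game may continue after each round, conditional cooperation becomes viable:

```python
import random

# Illustrative sketch (our assumptions, not the cited study's design):
# a repeated prisoner's-dilemma-style co-operation game in which, after
# each round, play continues with probability `delta` (uncertain horizon).

PAYOFFS = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(partner_history):
    """Cooperate first, then mirror the partner's last move."""
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return "D"

def play(strategy_a, strategy_b, delta=0.9, rng=None):
    """Return the total payoffs of A and B over a randomly terminated game."""
    if rng is None:
        rng = random.Random(0)
    hist_a, hist_b = [], []  # each agent's record of the partner's moves
    score_a = score_b = 0
    while True:
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
        if rng.random() > delta:  # stop with probability 1 - delta
            break
    return score_a, score_b
```

With a high continuation probability, two conditional cooperators sustain mutual cooperation and outperform a pair of unconditional defectors over the same (random) number of rounds, which is why the uncertain horizon matters for explaining observed trusting behavior.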
Individual factors
In the process of coadaptive development of the human–robot relationship, it is important to consider that different outcomes are influenced by multiple variables. At the individual level, personality has been identified as a core factor for understanding the nature and quality of this relationship. 20 Personality refers to “those characteristics of the person that account for consistent patterns of feelings, thinking, and behaving”21(p6) and it explains the way people respond and interact with others in social settings.
In brief, researchers have found a positive impact of human personality, especially personality traits according to the Big Five taxonomy, 22 on various human–robot interaction outcomes, including interpersonal distance and approach direction, perceptions of and attitudes toward the robot, emotions toward the robot, anthropomorphism, and trust.23–26
Furthermore, research has also considered the contribution of different robot personality characteristics to the human–robot relationship. Specifically, several studies have highlighted that extroverted and socially intelligent robots are more often preferred in terms of acceptability, trustworthiness, and enjoyability.27–29 Finally, a third area of investigation is the analysis of human–robot personality similarities and differences (match and mismatch). In this regard, several studies found a positive effect of personality match on the quality of interaction, in terms of enjoyment and engagement, social attraction, credibility, trust, and compliance.30–33
Research shows that, besides personality traits, other individual differences affect both interactions and interpersonal relationships with robots, as well as expectations about which technical and interactional features should be implemented in robots.
Different studies have analyzed the effect of people's negative attitudes toward robots on human–robot interaction, revealing that repeated interactions can reduce the anxiety experienced toward robots 34 and that attitudes moderate the effects of social presence. 35 Specifically, the greater a person's negative attitudes toward robots, the less social influence robots exert on the human partner in interactive games. 36 In other words, people's attitudes toward robots can positively or negatively influence coadaptation between humans and robots, shaping the specific intersubjective space that is fundamental for sharing experiences and for relational dynamics.
Attitudes toward robots are also an important predictor of people's expectations about implementing relational skills in robots. A recent study showed that the expectations of young adults can be placed along a continuum of humanization of the robot, and that negative attitudes toward robots can reveal the type of expectations in terms of humanization. 37 The results showed that more positive attitudes toward robots are not necessarily associated with a greater desire to implement relational skills in robots. These findings stress that individual traits are crucial for understanding the different forms of coadaptation between humans and robots.
Thus, although human–robot relationships develop dynamically even after only short interactions, 38 it is important to take into account the individual factors that come into play and contribute, coherently and adaptively, to the co-construction of the human–robot relationship.
Identity
Although a key component of coadaptation is the ability to change according to the context, the events, and the needs of the partner, this poses a risk: if an agent constantly changes, it becomes impossible to define its identity. In other words, the human will no longer be able to build an understanding of the partner and anticipate its behavior, effectively destroying the possibility of interaction. 39 Indeed, identity is a key component of a relational architecture. On the one hand, it cannot be conceived as a static, preprogrammed feature, as this would hinder coadaptation; on the other hand, granting the robot unconstrained adaptability would prevent the evolution of stable behavioral features, which are necessary building blocks to develop and sustain the relationship by eliciting positive familiarity. 40 Since identity in humans derives from a complex interplay of genetic, environmental, social, and cultural factors, determining how it should be understood for robots goes beyond the scope of this short reflection. However, this problem will need to be tackled if we are to build human–robot shared experiences.
The intrinsic dialogic nature of the human being has long been debated in philosophy. Martin Buber in his book I and Thou 41 argues that the full understanding of one's own identity (the I) is strictly dependent on the dialogue with another presence (the Thou). The heteronomous revelation of a singular presence calls the subject into an open-ended relationship. At the core of this model of existence is the notion of “encounter” as a revelation of “presence” (Gegenwart). In contrast to “object” (Gegenstand), the presence revealed by an encounter occupies the space “in-between” the subject and another. This “in-between” space is defined as “mutual” (Gegenseitig).
This stance prompts a deeper exploration of the concept of “encounter” in the context of human–robot shared experiences.
Conclusion
In this article, we propose a shift in focus from establishing human–robot interactions to achieving human–robot shared experiences. We argued that to trigger this change in perspective, the following dimensions should be taken into closer consideration: the coadaptation between agents, the emergence of intersubjectivity, and the role of individual differences and identity.
Most current cognitive architectures are designed to allow a robot to operate intelligently in the environment, 42 with limited or nonexistent modeling of the components of social and affective cognition. This gap becomes evident when robots are faced with real-life contexts and activities, which often span extended periods of time. While the human partners grow, change, and learn from the interaction with each other, the robot is left with its initial capabilities, lacking the skill to adapt and form shared experiences with its partners.
From a methodological viewpoint, aiming at human–robot shared experiences would imply, on the one hand, changing the existing “solipsistic” approach to architecture design; on the other hand, developing more inclusive evaluation frameworks, which extend the focus from functional aspects of the human–robot interaction to its social experiential dimension.
Footnotes
Author Disclosure Statement
No competing financial interests exist.
Funding Information
This article was supported by Università Cattolica del Sacro Cuore (D3.2—2018—Human–Robot Confluence project).
