Abstract
Humanoid social robots (HSRs) are human-made technologies that can take physical or digital form, resemble people in form or behavior to some degree, and are designed to interact with people. A common assumption is that social robots can and should mimic humans, such that human–robot interaction (HRI) closely resembles human-human (i.e., interpersonal) interaction. Research is often framed from the assumption that rules and theories that apply to interpersonal interaction should apply to HRI (e.g., the computers are social actors framework). Here, we challenge these assumptions and consider more deeply the relevance and applicability of our knowledge about personal relationships to relationships with social robots. First, we describe the typical characteristics of HSRs available to consumers currently, elaborating characteristics relevant to understanding social interactions with robots such as form anthropomorphism and behavioral anthropomorphism. We also consider common social affordances of modern HSRs (persistence, personalization, responsiveness, contingency, and conversational control) and how these align with human capacities and expectations. Next, we present predominant interpersonal theories whose primary claims are foundational to our understanding of human relationship development (social exchange theories, including resource theory, interdependence theory, equity theory, and social penetration theory). We consider whether interpersonal theories are viable frameworks for studying HRI and human–robot relationships given their theoretical assumptions and claims. We conclude by providing suggestions for researchers and designers, including alternatives to equating human–robot relationships to human-human relationships.
Introduction
Defining social robots as “humanoid” implies that these machines are intended to be perceived similarly to people. Both designers and researchers of human–robot interaction (HRI) often rely on human-human interactions as models or standards for how to build and study robots.1–4 Although advances in engineering and artificial intelligence have made robots more human-like, their communicative and social capacities are still relatively primitive, inhibiting human–robot relationship development. A common belief is that robots will eventually become sophisticated enough to be indistinguishable from humans, but this assumption begets two questions. First, is existing theorizing about interpersonal, human-human relationships applicable to studying human–robot relationships? And second, given what we know, should mimicking human-human relationships be the goal of HRI design at all?
To address these issues, first we explicate humanoid social robots (HSRs), their features, and the common social affordances they currently offer. Then, we explore the extent to which these robots meet the assumptions of theories of interpersonal relationship development. We consider whether interpersonal theories are viable frameworks for studying HRI and human–robot relationships now and in the future. Finally, we provide some suggestions for researchers and designers.
Humanoid Social Robots
We define humanoid social robots as human-made technologies that can take physical or digital form, resemble people in form or behavior to some degree, and are designed to communicate with people.5–7 Examples of HSRs include conversational agents (e.g., chatbots or voice assistants such as Siri and Alexa 8 ), embodied conversational agents (e.g., virtual coaches or health care providers 9 ), consumer robots that specialize in education and home care (e.g., Zora robot), and robots that are designed primarily to interact with humans (e.g., Pepper). This conceptualization excludes robots that lack resemblance to humans or do not interact socially and semiautonomously with humans, such as industrial robots, robotic home appliances (e.g., Roomba), self-driving cars, and telepresence robots (e.g., Beam, Double Robotics).
The anthropomorphic characteristics of HSRs may prompt human users to treat HSRs in human-like ways.10,11 One of the most popular theoretical frameworks adopted in the study of human-computer interaction (HCI) and HRI is the computers are social actors perspective (CASA), derived from the media equation. 12 According to these perspectives, technology has outpaced biological evolution: human brains have not evolved to identify and distinguish mediated simulations. Instead, humans react mindlessly and naturally, responding to a media representation in the same way they would respond to its natural counterpart. 12
CASA argues that computers can demonstrate the potential for social interaction through anthropomorphic appearance cues (e.g., having a human-like face or form) or behavior (e.g., using language or bipedal locomotion). Human users respond naturally and mindlessly to these cues, treating the computer like another social being. Consequently, CASA claims that any rules or findings about human-human interactions should carry over to human-computer interactions if the computer demonstrates social cues. 12 Some studies have tested CASA's claims with HSRs and found support. For example, an HSR with gendered facial cues can lead people to apply gender stereotypes. 13 In addition, if an HSR is put on the same team as a human, the human will like it more than an HSR from a different team. 4
The CASA paradigm has been used to justify hypotheses suggesting HRIs are comparable with interpersonal interactions and relationships;1,3,14 however, results from empirical studies do not consistently support CASA's predictions. 15 Media technologies have evolved considerably since the bulk of the original research and theorizing (including CASA's predecessor, the media equation 16 ) emerged in the 1990s, presenting two challenges. First, early research within the paradigm was focused on interactions with simple computer interfaces or singular aspects of interfaces (e.g., voice). Extrapolating CASA's thesis to more complex, dynamic HSRs may not be appropriate. A second argument is that since the 1990s, people have considerably more experience with computers and robots. Thus, they have developed more specified social scripts for human-computer and human–robot interactions and are not applying human-human interaction scripts as CASA suggests. 15 Here, we suggest a third reason that findings and theories about human-human interactions do not necessarily apply to current HCI or HRI: modern social technologies such as HSRs are simply not sophisticated enough to fulfill the roles or perform the complex tasks that human social interactants do naturally.
Although robots will certainly become more flexible, sophisticated, and intelligent in the future, the types of HSRs that the average human consumer is likely to encounter in the coming years remain limited in their capacities due to complexity, cost, and technological constraints. Social scientists seeking to understand and explain HRI must consider the current state of HSRs and adopt a practical perspective of the foreseeable future, rather than assuming that advances in artificial intelligence and robotics will be so swift as to obviate the need for theorizing in the intervening years or decades. For this reason, we outline the typical characteristics and affordances of current, prevalent forms of HSRs rather than the most cutting-edge robotic technologies that very few people have experienced or are likely to experience in the near future.
Characteristics and social affordances of modern HSRs
Although modern HSRs are defined by some ability to make decisions and take actions on their own, they are not fully autonomous.17,18 Modern HSRs require a human user to initiate actions (e.g., through pressing buttons, writing a script, or launching a program) or interact, supervise, or intervene in the process (i.e., the human in the loop). Humans are also required to handle maintenance, such as recharging or cleaning, and manage any obstacles or technological problems the robot encounters.
Anthropomorphism and social affordances indicate potential to communicate with a user, 19 which is a definitive function of HSRs. 20 Modern HSRs vary in their anthropomorphic characteristics, or the extent to which they resemble and are perceived as human.21–24 Form anthropomorphism entails sensory cues that make a robot seem human-like. 25 For example, HSRs may have a human-like voice or appearance. Even with high levels of form anthropomorphism, modern HSRs are unlikely to be mistaken for a human due to low levels of behavioral anthropomorphism, or the extent to which an HSR's actions resemble a human's (e.g., gestures, spoken messages, nonverbal expressions). 25
Limitations in their social affordances and technological capacities hamper modern HSRs' ability to communicate in a human-like manner.6,24,26 One major issue is that modern HSRs lack the ability to attend to, recall, and apply relevant information from previous interactions with a human user, which diminishes perceptions of them as social actors.26,27 Robot memory lacks the persistence and sufficiently refined searchability needed for sensible ongoing social interactions: most HSRs do not maintain a memory of previous interactions, and those that do constrain retrieval to a few task-relevant queries. Modern HSRs cannot make sense of interactional history in the same way that humans do, and they are limited in their ability to apply such knowledge to novel social situations. 26
Because modern HSRs are limited in the tasks they are programmed to perform, interactivity can be difficult to navigate. The robot is constrained to a small set of possible responses, limiting responsiveness and contingency, which can violate users' expectations and diminish feelings of closeness and trust. 24 Because HSRs are designed to cater to their human user and are limited in their interactional abilities, they have minimal conversational control. 15 They lack the autonomy to change topics or tasks, or to interrupt or terminate interactions with human users. HSRs are created to satisfy the human user's needs, which leaves no room for deviation or defiance.
Without a persistent memory and the ability to execute contingent actions, the robot is limited in its capacity for personalization, or tailoring an interaction to a specific individual. 15 In personal relationships, humans shape their messages based on their previous knowledge and interactions with a target, which enhances feelings of closeness. 28 Similar personalization is expected and desired from HSRs.24,29 Most HSRs, however, can neither identify nor distinguish among different users; they treat all users the same, regardless of individual variations or previous interactions. Even for robots that can recall some parameters for a specific user, this knowledge does not help them personalize or tailor messages in real time based on the target's verbal and nonverbal cues. 26
Collectively, these current limitations suggest that modern HSRs lack many of the fundamental social capabilities of humans. Even if HSRs can engage in some forms of social interaction, these limitations have implications for how interactions transpire over time and, importantly, the viability of developing relationships with humans.
Considerations for human–robot relationship research
Despite the long-standing shortcomings of HSRs,6,24,26 a considerable amount of social scientific research, such as that from the CASA perspective, 12 is designed, hypothesized, and conducted assuming HSRs are perceived similarly to humans. Another issue is that the majority of HRI research involves a single session of interaction, often with a technology that is novel to the user.5,22 If users have no experience with a particular HSR, or with HSRs in general, they may be more likely to apply human-human scripts in their initial interactions due to what they perceive as the robot's social affordances. In this way, one-shot studies may misleadingly suggest that claims about interpersonal relationships do carry over.
Importantly, however, a relationship is not a one-shot experience. As Hinde clarified, “A relationship involves a series of interactions between two individuals known to each other.” 30 Relationships are characterized by familiarity established through multiple engagements, which indicates studies with a single session are not well-suited for determining an HSR's potential for a human-like relationship. 5
Moreover, perceptions of social affordances change over time as a user becomes more familiar with a technology, 15 which has been noted in several longitudinal studies with social robots.22,24 Often, these shifting perceptions lead to expectancy violations when social robots cannot maintain human standards for communication and acquaintanceship. 22 Here, we explore the extent to which modern HSRs are capable of developing relationships similar to human-human relationships and whether predominant theories of relational development can be applied to studying human–robot relationships.
Applicability of Interpersonal Theories of Relationship Development
The predominant paradigm for understanding interpersonal relationships is based on the concept of social exchange. 31 The fundamental assumptions of social exchange are that people need resources to survive, other people can provide resources, and sharing and trading resources is a fundamental aspect of relationships. 31 Although this perspective has been proposed as a valid foundation for understanding human–robot relationships, 3 a closer examination of various theories indicates that the nature of modern HSRs may challenge their assumptions and claims.
Resources
According to the resource theory of social exchange, the resources people exchange in relationships range in how tangible or abstract they are and what function they serve. 32 For example, money or goods are tangible economic resources, whereas love, status, and information are more abstract and social. Tangible resources are transferred from one person to another; the giver must relinquish a resource such as money or goods, and ownership shifts from the giver to the recipient. Intangible resources, in contrast, are shared between two people. 31 Resources also vary on particularity, or how much a resource's value is contingent on who is providing it. 32 For example, money spends the same whether it is received from a bank teller or a spouse, whereas love is particular and presumably more valuable coming from one's spouse rather than a bank teller.
HRI challenge
HSRs have some resources to provide and perhaps exchange with humans, particularly services and information. A robot itself, however, is a tangible good that is owned by someone, and as such does not have ownership over resources such as money or other goods and cannot transfer them to a human recipient. As HSRs are subservient to humans, they have little to offer humans in terms of status. Although it is possible for a human to form an attachment 10 and perhaps love a robot, this is a unidirectional offering rather than a shared resource. Therefore, most social exchange resources outlined by resource theory may not be pertinent to evaluating human–robot relationships. In addition, a lack of persistent, personalized memory indicates that resources are not particularized from the robot's perspective and it is likely the human would not perceive a robot's resources as particularized either. Thus, as relational partners, robots would be perceived as interchangeable and relationships with them impersonal rather than special.
Costs, benefits, and equity
Social exchange theories also posit that humans are fundamentally self-interested and evaluate the costs and benefits they incur in a relationship.31,33 Within a relationship, individuals become interdependent through their exchange of resources, which may be more or less symmetrical. 33 As such, relationships may be evaluated on whether these exchanges are relatively balanced, or if one partner is incurring more costs or receiving more benefits. These evaluations may be based on perceptions of equity, or whether an individual is receiving benefits or output proportional to the amount of input or costs they are incurring, particularly compared with the social norm. 34 According to interdependence theory, one way to assess the costs and benefits of one's relationship is to compare it with other relationships, or to compare the relationship with the current partner to alternatives.33,35 If partners feel underbenefited, they are likely to experience dissatisfaction and seek to restore equity in the relationship. Over time, underbenefited partners may become dissatisfied with the relationship and terminate it, particularly if there are desirable alternatives.
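The proportionality comparison described above is often formalized, following the classic equity-theory formulation (a standard rendering we add here for clarity; the source does not spell it out), as a comparison of outcome-to-input ratios across partners:

```latex
% Equity holds when each partner's ratio of perceived outcomes (O, benefits)
% to perceived inputs (I, costs) matches the other's:
\[
  \frac{O_{\text{self}}}{I_{\text{self}}} \;=\; \frac{O_{\text{partner}}}{I_{\text{partner}}}
\]
% A partner is underbenefited when their ratio falls below the other's,
% and overbenefited when it exceeds it; both states are inequitable.
```

In this notation, the human–robot challenge discussed next amounts to the right-hand ratio being undefined in any experiential sense: a robot neither perceives its outcomes and inputs nor acts on the comparison.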
HRI challenge
A major challenge to the applicability of social exchange theories is considering the nature of costs and benefits to robots. HSRs lack the motivation and desire that characterize human needs; they do not experience rewards, punishments, benefits, and costs in the ways that humans do. Furthermore, HSRs are servile to human controllers; there is a permanent inequity as they are designed to maximize benefits for humans with minimal, if any, consideration of the costs they might incur. They do not make autonomous evaluations of equity with their human controllers. They do not experience dissatisfaction, make comparisons, nor have the ability to act based on these assessments. They cannot leave their human controllers. Knowing this, humans do not have to consider their robot partner's costs or benefits, only the costs and benefits to themselves. They can make unilateral decisions without concern for the robot's wishes or well-being based on one self-serving principle: maximize my benefits and never mind the robot.
Self-disclosure
Social penetration theory (SPT) 28 suggests that relationships develop through reciprocal self-disclosure. Individuals consider the costs and benefits of ongoing disclosure and determine whether they want to intensify the relationship.
Altman and Taylor 28 used an onion metaphor to explain how individuals maintain many layers of self rooted in their experiences, in which the outside layer is the publicly observable self and private information is stored in deeper layers that must be uncovered. In a developing relationship, individuals peel back these layers through reciprocal self-disclosure, proceeding through stages characterized by expanding breadth and growing depth. Breadth is characterized as the range of topics or categories that comprise the self, such as social identities, interests, and experiences. Depth involves the beliefs and values that are central to the self. 28
According to SPT, the earliest stage of a relationship, orientation, is characterized by small talk and governed by social norms of appropriateness. 28 In the exploratory affective stage, slightly deeper self-disclosure occurs across a broader range of topics. In the affective exchange stage, feelings of intimacy escalate as partners reveal deeper facets of the self, including values, goals, or fears. The stable exchange stage is characterized by mutual understanding, and partners are comfortable disclosing deep private matters.
HRI challenge
Given modern HSRs are constrained in their tasks and abilities, they do not have much breadth. HSRs also do not have a unique cluster of beliefs, values, and self-image that characterize depth. Although HSRs may share information, it is not based on personal experience or self-image; thus, exchanges with HSRs arguably do not qualify as self-disclosure, and they could not engage in the reciprocal self-disclosure required in a developing relationship.
It should be noted that a human partner may make false attributions about the social potential of HSRs, particularly in short-term interactions. As SPT notes, social norms guide early interactions and disclosures are shallow. 28 Humans are more likely to follow scripts of socially acceptable behavior that may be easier for HSRs to mimic, and researchers may then observe effects similar to what would be expected of a human-human interaction. Over time, however, humans would recognize an HSR's lack of personalized persistent memory, which would be necessary to build a relationship.
In summary, common HSRs are not human-like enough at this time to meet the fundamental assumptions and claims of key interpersonal theories, and it is unclear when, or if, they ever will be. This challenges the popular practice of working from the assumption that human-human findings will carry over to HSRs and of applying our understanding of interpersonal interaction wholesale to HRI and human–robot relationships.
Discussion
Collectively, these issues indicate that researchers must carefully consider their theoretical options for studying contemporary human–robot relationships. One is to propose and test modifications of or extensions to existing interpersonal theories to accommodate HSRs. By examining intervening variables such as perceived agency, behavioral anthropomorphism, and perceived social affordances, researchers may be able to broaden the utility and scope of existing interpersonal theories. Alternatively, given modern HSRs violate interpersonal theories' fundamental assumptions of humanity, scholars could consider human-human interaction an inherent boundary condition of these theories and shift to developing and testing new models founded in human–robot relationships that may or may not explain human-human relationships. Either way, to ensure the validity of human–robot relationship research, it is crucial for scholars to conduct studies with multiple interactions over time and to account for experience and existing familiarity.18,36
Regardless of their current application, existing interpersonal theories could serve as a form of Turing test in the future study of human–robot relationships. If HSRs are eventually designed to meet theoretical assumptions, these theories can then be tested to see if they are upheld in human–robot relationship development. The fit of human-human relationship theories would indicate that HSRs have achieved greater similarity to humans.
Yet, should being perceived as human be the goal of HSR design? Generally, HSRs are created to complement or augment human capacity, performing tasks to serve humans. 37 By design, such HSRs will never have the same power or autonomy as the humans who program, own, and control them. Theoretically, it seems problematic to adopt an agnostic perspective about this inherent difference between humans and robots. Ethically, it seems problematic to humanize robots and encourage human-like relationships with objects that are under the control of, exist at the whim of, and are designed to satisfy their human owner. One concern is that if users develop social scripts with humanoid robots, these scripts could be applied to human-human interactions and lead to the objectification, dehumanization, or mistreatment of other people.11,38,39
Indeed, humans can be quite terrible, which also calls into question whether human-human interactions are always optimal models. Social exchange theories claim that humans are inherently self-interested; must robots be? Humans rely on oft-detrimental stereotypes and implicit biases to evaluate other people; could robots overcome this deficit? Lacking human-like characteristics could also be beneficial in particular contexts. For example, if a person is disclosing sensitive or stigmatizing information, they may fear being harshly judged, stereotyped, or rejected by a human due to existing social norms. Studies have shown that people disclose more to an HSR than a human, perhaps because an HSR may seem less judgmental or people may feel more anonymous interacting with a robot.40–42
Given their fundamental differences, one possibility is designing and studying social robots through different lenses than human-human relationships. One suggested model has been human-pet, or more specifically human-dog, relationships.29,43 Another proposed approach is conceptualizing robots more broadly as human companions. 44 Although these models may be viable in some contexts, social robots are distinct and warrant their own theorizing, particularly given the growing variation in the roles they play.
If we accept robots as unique social beings, we do not need to refer to them as “humanoid.” Indeed, HRI designers should explore novel ways social robots could interact, relate, and bond beyond human abilities and norms. Designers should consider ways robots may be uniquely suited to maximize positive social outcomes 45 or minimize negative ones. Such advancements may expand and illuminate not only human–robot relationships but also human-human relationships.
In conclusion, theories for understanding human-human relationships are likely unsuitable for examining modern human–robot relationships, given the current HSRs' shortcomings as social actors. These approaches may in fact be restrictive, as social robots may be able to compensate for human shortcomings or exceed human capacity in some ways. Going forward, researchers must continually reevaluate the emerging features and social affordances of robots to understand human–robot relationships now and in the future.
Footnotes
Author Disclosure Statement
No competing financial interests exist.
Funding Information
This research was not funded.
