Abstract
Robot service failure and users' subsequent behavioral responses have emerged as a prominent scientific issue warranting attention from multiple disciplines. A review of the existing literature is crucial to synthesizing and comprehensively evaluating these studies. To this end, the present study undertook a structured systematic literature review to assess research on the concepts, dimensions, user responses (cognitive and behavioral), and recovery strategies related to service robot failure. Prior studies have largely followed concepts from interpersonal service interaction and have identified several major consequences of service robot failure, including emotional and cognitive responses, negative attitudes, attributions of failure, and related behavioral and action-based responses. Notably, recovery strategies for robot service failure can be categorized into two main types: robot-initiated strategies and human intervention strategies. Further research on robot service failure is recommended in five key areas: exploring the uniqueness of robot service failure, psychologically investigating user responses to robot failure, identifying novel remedy strategies for robot service failures, evolving the concepts of robot service failure and its remedies, and employing mixed-method and complementary research approaches.
Introduction
With the continuous advancement of digitalization, artificial intelligence (AI) service robots are widely used in service industries closely related to ordinary users. 1 According to a recent report by Fortune Business Insights, the global service robotics market is expected to grow from $19.52 billion in 2022 to $57.35 billion by 2029, a compound annual growth rate (CAGR) of 16.6% over the forecast period. Service robots, defined as a special form of social robots, 2 offer many advantages over human staff in various service applications, including increased service efficiency, scalability, and reduced operating costs. 3 Moreover, researchers even envision that AI-embedded robots can capture dynamic micro-changes in user behavior and “intervene” when the service process deviates from normal, improving the overall user experience. 4 For example, Becker et al. 5 argued that robots with AI processors that store historical user data and agile facial expression capture systems can swiftly anticipate users’ potential needs and communicate with them better emotionally.
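As a quick illustrative check (ours, not part of the cited report), the growth figures are internally consistent: the CAGR implied by the 2022 and 2029 endpoint values can be recomputed directly.

```python
# Illustrative sanity check (not from the cited report): recompute the
# CAGR implied by growth from $19.52B (2022) to $57.35B (2029).
start, end, years = 19.52, 57.35, 2029 - 2022
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 16.6%, matching the reported rate
```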
An early and notable service robot is Sony’s Aibo robot dog. 6 This robot resembles a pet and provides a pleasurable experience to the user. Later editions of Aibo formed an emotional bond with members of the household while providing them with love, affection, and the joy of nurturing and raising a companion. Another example is the robot staff of the Henn na Hotel. 7 These robots perform all the service work at the hotel without the assistance of human employees, including check-in, check-out, cooking, and delivery of luggage.
However, current service robots may still cause unintended failures when dealing with subjective and unique service content, as opposed to more objective and procedural tasks. Specifically, service robots may give users mismatched responses to their service requests, leading to customer dissatisfaction or even anger. Therefore, it is important to study and address the factors contributing to robot service failures to ensure the successful implementation of service robots in the future. 8 A significant amount of research has focused on users’ attitudes toward and evaluations of service robots, with some journals dedicating special issues to this topic. For example, the International Journal of Social Robotics plans to publish a special issue titled “Social Robots for Personalized, Continuous and Adaptive Assistance” with a deadline of December 20, 2022, while the Journal of Business Research will also publish a special issue titled “Unanticipated and Unintended Consequences of Service Robots in the Frontline” with a deadline of March 31, 2023. Although these works have contributed to a comprehensive understanding of service robots in certain contexts, a comprehensive understanding of the failure and recovery aspects of service robots is still lacking, especially with regard to robots’ unique capabilities in experiential service provision compared to traditional self-service technologies. Despite the emotional nature of robot service interactions, there is still a risk of misunderstanding user intentions and providing incorrect or mismatched responses, leading to service failure. 8 Given the current technological limitations of robots, service failures are inevitable, making it urgent to explore the undesirable outcomes of robot service failure systematically.
To the best of our knowledge, although some literature reviews on similar topics (e.g. conversational breakdowns, AI) have been published, 9,10 no systematic review has been conducted in service settings from the users’ perspective. Therefore, this study aims to summarize the existing literature on failures of the service robots recently deployed in service settings and to propose future research directions. The study is driven by four research questions: (RQ1) How is service robot failure conceptualized and categorized in the literature? (RQ2) How do users respond behaviorally, cognitively, and emotionally to robot service failures? (RQ3) What can be done to remedy robot service failure, and how effective are recovery strategies? (RQ4) What future research directions arise from this review?
Method
In this study, a systematic review method was used to collect, identify, screen, and review related research papers. Unlike traditional, methodologically strict systematic reviews, we did not exclude literature on the basis of quality, as the field under study is a relatively new one. 11 Therefore, to collect more literature, we did not require that papers be published in a journal or conference of a certain quality (e.g. SCI-indexed).
To address the four RQs stated in the last section, we included only robot service studies conducted from the users’ perspective. For example, some studies, while conducted in service scenarios, examined human employees’ perceptions of service robots. 12 These studies were excluded as they do not fit the criteria of this study. We set a 20-year time limit (2003–2023), as early research on service robots began in the 21st century. Research papers that were not published in English, focused on technical aspects or the design of robotic systems, or studied basic elements (e.g. self-service technologies, SSTs) of service robots were excluded.
Identification
This study aimed to explore the current status of research on robot service failure and recovery. As this is an emerging topic, scholars in various disciplines and fields have paid close attention to robot service failure and recovery. Thus, this study limited its data sources to English-language journals and conference proceedings, as these published works have undergone peer review and are likely to meet our criteria. It should be noted that conference papers are often not included in literature review studies. In this review, however, they are, because many related papers in the computer science (CS) and human–computer interaction (HCI) fields are presented at conferences and also undergo peer review. Thus, this study retrieved papers from databases in the Web of Science (WoS). For conference papers, this study additionally searched IEEE Xplore and the AIS eLibrary as a supplement.
In this literature review, the term “service robot” may have multiple variants (e.g. chatbot, AI, and personal assistant), and these terms were used in the search to ensure that all related studies were included. Additionally, the objective of this study is to identify the outcomes and recovery strategies of robot service failure. Consequently, this study used the terms service robot failure/recovery, chatbot failure/recovery, AI failure/recovery, and personal assistant failure/recovery (e.g. TS = (service robot failure/recovery)) as selection criteria for the topic (title, keywords, or abstract) in the initial search. Since the failure of service robots is a multidisciplinary and growing concern, this study was not restricted to particular subject areas, in order to gather as much literature as possible. We captured an initial sample of 295 papers in the WoS database, including 280 articles, 13 review articles, 1 editorial material, and 1 meeting abstract. In a similar vein, 491 articles were captured in IEEE Xplore and the AIS eLibrary.
Screening and inclusion
In this study, the abstracts and full texts of papers were meticulously examined to ensure the selection of relevant literature. Notably, title screening alone showed that the vast majority of papers retrieved from the conference databases did not meet the study’s inclusion criteria, and only the relevant conference papers were manually retained. For the WoS database, only articles that specifically investigated the use of robots in service scenarios, with a focus on robot service failure and its impact on users, were considered.
Hence, research papers that explored robots in industrial production, non-failure scenarios, or their effects on service employees were excluded. It is worth mentioning that review articles collected in the initial screening phase were also eliminated; to date, no reviews on the topic of robot service failure have been published. Additionally, some papers were added during a manual Google Scholar citation check. As a result of this rigorous selection process, the initial pool of papers was substantially reduced, and ultimately 92 papers were included in the study. The PRISMA diagram can be seen in Figure 1.

PRISMA diagram.
Selected papers
The objective of this literature review is to provide a comprehensive understanding of the current state of research on robot service failure. Information on the selected papers was entered into an Excel sheet, including the year of publication, author details (e.g. name, country, and affiliation), the type of robot used (e.g. chatbot, humanoid robot), users’ reactions to failure, recovery strategies, and the methods used (e.g. survey, experiment). Owing to the heterogeneous nature of the papers, such as their different antecedent variables, this study did not conduct meta-analyses.
Descriptive analyses
This section provides a descriptive analysis of the selected papers identified through the systematic literature review queries.
Publication by year
The present study investigated temporal trends in research publications on robot service failure and recovery from 2000 to March 2023. As illustrated in Figure 2, scholarly interest in this topic has grown significantly and rapidly over the past 3 years, indicative of an exponential expansion. Moreover, the majority of publications on the subject, 60.87% of the 92 articles, appeared within the period from 2021 to March 2023.
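As an illustrative check of the reported share (our calculation, not part of the review data), 60.87% of the 92 selected papers corresponds to 56 papers in the 2021–March 2023 window:

```python
# Illustrative check (ours): the share of the 92 selected papers
# published between 2021 and March 2023, assuming 56 papers in that window.
total_papers, recent_papers = 92, 56
share = recent_papers / total_papers
print(f"{share:.2%}")  # prints 60.87%
```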

Numbers of publications per year.
Publication by discipline
Figure 3 displays the distribution of journal and conference papers across disciplines. The analyzed publications encompass business journals, notably the Journal of Business Research, as well as journals with a specific focus on HCI, such as Computers in Human Behavior. Additionally, publications were identified in tourism journals (e.g. Journal of Hospitality & Tourism Research), society journals (e.g. Technology in Society), CS journals (e.g. Advanced Robotics), and other journals (e.g. Sustainability).

Numbers of publications per discipline.
Of note, papers discussing robot service failure and recovery have predominantly been published in business, HCI, and CS journals or conferences such as the ACM/IEEE International Conference on Human-Robot Interaction and the IEEE/RSJ International Conference on Intelligent Robots and Systems. Publications in tourism journals, followed by psychology and society journals, have also contributed to the advancement of research in this subject area.
What is service robot failure?
Definition and dimensions
In existing research, service robots take various forms, each of which assists users in accomplishing a service task goal. Although the forms vary, humans make similar social judgments about the different forms of service robots as nonhuman agents. Thus, this study treats the different forms of service robots as a single category of robots, and the following sections address RQ1.
Robot service failure refers to any form of actual or perceived misfortune, error, or problem that occurs during the user experience 13,14 ; it is a degraded state that causes a system to perform a behavior or service deviating from the desired function. 15 Accordingly, it includes both perceived failures, which can arise even from the correct behaviors of the robot as programmed, and actual failures. An example of the former (i.e. perceived failure) is when a robot gives a response as specified by its programmers, but the user is not satisfied with the response; an example of the latter (i.e. actual failure) is when a robot incorrectly processes a user’s order or message, resulting in a non-answer. That is, the determination of robot service failure is based on the user’s judgment or perception: the robot’s service can be considered a failure as long as it does not meet the user’s expectations. 14
In terms of divisions, existing research has mainly imported the two-dimensional division of process failure and outcome failure from interpersonal service interactions into the robot service domain. The former refers to defects or deficiencies in service delivery, while the latter refers to failure to deliver the basic service content. In the robot service scenario, a process failure mainly manifests as problems in the robot’s service delivery, and an outcome failure mainly manifests as the robot’s failure to meet the user’s basic needs for the service. 16 Meanwhile, based on the technical characteristics of human–machine interaction, different dimensional divisions of service failure have also been proposed from the perspective of the robot’s information processing. For example, Brooks 17 proposed a two-dimensional division of communication failure and processing failure. The former means that there are failures in data transfer between modules, including data omissions, incorrect information, untimely information, and redundant information; the latter means that the robot is unable to process the information correctly and output the desired results. Based on this division, Honig and Oron-Gilad 18 further refined processing failure into the robot’s perception and understanding of input information and its output of results, proposing communication failures, perception and comprehension failures, and solving failures. Drawing on research related to the service failure of e-commerce websites, Sun et al. 19 proposed a three-dimensional division of information failure, functional failure, and system failure, using AI personal assistants as the research context. Information failure and functional failure mean that the information and functions provided by robots, respectively, prevent users from completing their transactional activities or goals. Giuliani et al. 20 classified failures as social norm violations and technical failures.
It should be noted that the above divisions of service failure are based on the technical aspects of the robot and may not be applicable to the AI robots handling service tasks today.
Recent research has begun to focus on nonperformance errors caused by robots when dealing with social problems. One study classified the social errors caused by robots as breaches in empathic and emotional reactions, insufficient social skills, misunderstanding of the user, insufficient communicative functions, and breaches in collaboration and prosociality. 21 Similarly, another study of chatbots proposed a division into functional and nonfunctional failures. 22 Furthermore, Castillo et al. 23 characterized value co-destruction between user and robot in service interactions, encompassing dimensions of authenticity, cognitive challenges, emotional problems, functional problems, and integration conflicts.
In conclusion, while early studies treated robot failures as machine failures and focused on failures in information processing, as robots are increasingly endowed with AI elements, scholars have begun to focus on robot failures in performing nonfunctional tasks.
Characteristics of robot service failure
In contrast to service failures caused by human agents, robot service is based on pre-written scripts executed sequentially by programs. 15 In current robot service practice, there is accordingly greater uncertainty surrounding robot service failures. Therefore, this study summarizes the following three characteristics of robot service failure based on current related studies.
Unpredictability
Technical producers use advanced AI technology to endow robots with human characteristics, which in part increases users’ perception that robots have minds and the ability to act on users’ requests and intentions. 24 As robots move from perceptual intelligence to cognitive intelligence, they need to learn a large amount of information and adapt to heterogeneous users’ needs during service interactions. However, heterogeneous user needs are difficult to record and script comprehensively and exhaustively, so users’ requests may exceed what the written program script can handle, leading to incorrect responses from the service robots. This makes robot service failures unpredictable. This is especially true when businesses treat service robots merely as service gimmicks 25 and do not take the initiative to update, upgrade, or replace the robots’ internal scripts, which subsequently results in unpredictable service failures.
Lack of service initiatives
In interpersonal service interactions, a human employee can anticipate service failures through work experience, adopt corresponding proactive recovery strategies, or even prevent potential risks. For example, in a hotel service scenario, a human employee will realize that a customer who falls silent is not satisfied and will then engage in proactive behaviors (e.g. asking further questions about the customer’s needs) to compensate for the possible failure, even without explicit information input. Although current AI technologies enable robots to make judgments from micro-changes in users’ expressions and tone, 5 robots’ responses are still passive and cannot preempt service problems before users act 26 ; that is, the robot responds to the user’s input by executing the corresponding script. Krämer et al. 27 argued that interpersonal interactions are distinguished from human–machine interactions by the following characteristics: social perspective taking, common ground, information exchange, and assumed intentions. Robots do not possess these characteristics in service interactions and thus lack appropriate service initiative.
Lack of mutual empathy
While robots performing nonfunctional tasks is a promising direction, users currently still treat robots as just another “species.” 28 In the mutual interaction between users and human staff, there is a high level of empathy for each other as human beings, 29 and this empathy is not related to the service employee’s job skills. For example, users may be more tolerant of service errors and failures when the human employee providing the service is being reprimanded by a superior, because the user’s empathy for the employee as a fellow human being is activated. In contrast, robots were introduced to increase standardization and provide users with a consistent service experience. Users have relatively little empathy for robots, even when a robot is in a degraded state (e.g. low power). As a result, users’ cognitive inferences about robots in failure contexts are relatively independent and stable, relying little on external and environmental cues, and users’ attitudes toward robot-caused failures are even more negative. This inability to perceive a robot as a human counterpart can be attributed to the absence of the mental inferential perception that would develop from recognizing the robot as a fellow member of the human species. 28
Consequences of robot service failure
For RQ2, across the selected literature, this study summarized a comprehensive model of the consequences of robot service failure, as shown in Figure 4. The overall model is a framework for robot service failure in which the antecedents are factors related to robot failure, the mediating variables are mechanisms, and the dependent variables are attitudes, behaviors, and actions. Robot-level factors (e.g. robot gender), failure-level factors (e.g. type of failure), social factors (e.g. artificial support), and user factors (e.g. technical efficacy) all play moderating roles. The following sections discuss the different parts of this framework in turn.

Framework of robot service failures’ consequences.

Framework for robot service recovery.
Antecedents
Despite the vast amount of literature regarding robot service failure, there is significant variation among scholars with respect to the antecedents considered in their studies. Twelve studies take the appearance of robots, that is, the degree of anthropomorphism, as the antecedent variable. 30–42 Within the academic discourse, several scholars have proposed alternative terminology related to anthropomorphism, for example, robot cuteness 36 and robot design (warmth vs. competence). 42
Several studies have examined the variance between human and robot service failures by utilizing a classification distinguishing human versus robot failure as an antecedent. 16,34,43,44 Moreover, as noted in the preceding section, several other investigations have classified robot service failures and examined their distinct outcomes. 16,20–22,34 Some of these investigations have examined the interplay between various antecedent variables. For example, Meyer et al. 42 probed the interplay of service outcome and robot design.
The comparison of service success and failure has also been an important concern in the literature. 45–47 Additional investigations of robot service failure concentrate on factors that generate stress, 19 potential errors (i.e. error-free vs. clarification vs. error), 48 and the nature of the voice (i.e. human voice vs. machine voice). 49
Mechanisms
Throughout the scholarly literature, researchers have examined three distinct mechanisms to comprehensively understand how individuals perceive robot service failure. The dominant themes in the body of research are (1) emotions, (2) cognition, and (3) failure attribution.
Emotions
Users exhibit negative perceptions in response to robot service shortcomings, akin to the way they react to interpersonal service failures. 43 Anthropomorphization, the attribution of human-like qualities to nonhuman objects, has been observed to intensify users’ negative reactions to service failure, which is believed to occur because this feature adds social cues to users’ perception of robots. Additionally, when robots exhibit more human-like characteristics, users tend to hold higher service expectations of them than of mechanized robots. 30 Chen et al. 16 found that users are more likely to express anger toward robot failures than toward failures of human staff, owing to negative experiences that contradict their expectations; these emotional obstacles hinder acceptance of blame, amplifying user anger. Hadi and Block 50 also found that the immediate emotional reactions to robot service failure were discouragement and anger. Through text analysis of 9707 hotel reviews on Ctrip.com and TripAdvisor, Filieri et al. 51 found that the main adverse emotional reactions to robot service failures are anger, discouragement, sadness, and dissatisfaction. Another study examined the potential for users to experience empathetic responses toward robots in the aftermath of a mishap. 36
Cognition
Extensive research has explored the cognitive responses of individuals who encounter a failure in the provision of robotic services. While maintaining the perception of competence, anthropomorphic robots increase the perception of warmth, 40 leading users to believe that the service robot can deliver friendly and approachable service. 32 The issue of trust has garnered significant attention, as evidenced by four studies that investigated user perspectives on unidimensional or multidimensional (e.g. goodwill trust, qualification trust) trust subsequent to robot service failures. 37,40,52,53 Lteif and Valenzuela 47 argued that robot service failures are perceived by users as a form of rejection and reinforce individuals’ desire to establish social connections with other human beings. Users’ rapport cognition after failure can also be found in Becker et al.’s study. 39 The occurrence of a robotic service failure is perceived to affect the robot’s likeability 41 and users’ perceptions of the robot’s humanness/uncanniness, 54 credibility, 41 stability, 43 authenticity, 55 and performance expectancy. 35
A study by Gompei and Umemuro 56 found that robots with few communication errors increase users’ familiarity but decrease sincerity perceptions; a related effect holds for learning companion robots, where robots that make errors increase a user’s intention to establish a long-term relationship with them. 57 Users liked robots that made errors more than robots without errors, but users’ ratings of anthropomorphism and intelligence did not differ significantly between the errant and error-free robot groups. 58 Robots with errors can also stimulate users’ positive states (e.g. favorable impressions). 59,60 In addition, two studies 39,41 investigated the perception of social presence after robot service failure.
Failure attribution
Concerning the identification of the responsible party after a service failure, an earlier study by Kim and Hinds 61 found a positive correlation between the robot’s degree of intelligence and the extent to which users attributed the failure to the robot. According to attribution theory, users attribute less responsibility to robots for service failures than to human employees or humanoid robots. 33,34,38,42 However, when the attribution agent switches to the service provider (i.e. the firm), users attribute more responsibility to the provider in robot service failure scenarios than in human employee failure scenarios, as users believe that the service provider configured the programs embedded in the robots. 44,45 A study by Furlough et al. 62 compared user attributions to different factors in robot service failure situations and found that users attributed responsibility, in order, to human employees, robots, and environmental factors. Previous studies have also compared the interaction effects of different failure types and failure initiators (e.g. human vs. robot). 43 Attribution of service failure also depends on the robot’s form design, with warm (vs. capable) robots causing users to attribute more of the service failure to themselves. 45 Further, it has been suggested that users’ attributions of responsibility are more stable for robot service failures than for human service failures. 63
Moderators
Across the current literature, this study summarized four corresponding sets of moderating variables: (1) robot factors, (2) failure factors, (3) social factors, and (4) user factors.
Robot factors
This term denotes a specific element or design feature of the robot, such as the robot’s gender, 50 race setting, 55 form design, 45 communication style, 64 intelligence level, 22 and even clothing (e.g. formal vs. casual). 55 One study by Lee et al. 65 argued that expectation-setting strategies can pre-warn users of the robot’s limitations before it performs service tasks. In a similar vein, disclosing the robot’s process of understanding environmental and task information during human–robot collaboration can help humans better resolve failures and achieve task goals. 66 Lastly, robots’ expression of uncertainty also reduces response failures in human–robot conversations. 67
Failure factors
Failure factors refer to the moderating effects of elements of the failure itself on the consequences caused by the robot. A commonly researched aspect is the extent of failure, though it goes by different names across studies: failure severity, failure criticality, and failure magnitude are the terms used by Choi et al., 32 Mozafari et al., 52 and Sands et al., 43 respectively, to describe the degree to which a failure has significant impacts. Another frequently studied factor is the type of failure (e.g. process failure vs. outcome failure). 32
Social factors
Studies have also paid attention to social aspects that change the perception of robot service failure. Scholars have incorporated task type into their research, as users’ thresholds for accommodating failure vary across task scenarios. 18 In addition, two studies 52,53 identified the moderating influence of service outcomes, as diverse service results evoke varying social evaluations of chatbots’ actions from their users, while one study 38 investigated the significance of relationship norms.
User factors
As in user studies in the field of HCI, the investigation of individual variance is a critical aspect for researchers. Some studies focus on explicit demographic differences, for example, gender 35 and chronological age. 41 Other research efforts have directed attention toward contrasting psychological predispositions. For instance, a study by Fan et al. 33 focused on the roles of interdependent self-construal and technological efficacy, while Um et al. 46 examined novelty seeking and the need for interaction. Beyond chronological age, David-Ignatieff et al. 41 also examined subjective age.
Outcomes
The present study summarized two principal classifications of outcome variables: (1) attitudes and (2) behaviors/actions.
Negative attitudes
Regarding attitudes after a robot service failure, scholars mostly agree that robot service failure reduces users’ overall evaluation. 41,68 A study by Wang et al. 30 argued that anthropomorphic robots accelerate the formation of adverse attitudes in failure situations, because the degree of anthropomorphism activates human schemas, leading individuals to expect human-like performance from intelligent service. Similar findings appear in Fan et al.’s study, 33 in which anthropomorphism increased users’ perception of the robot’s social role, making users more dissatisfied with the service failures caused by the robot. Other studies have comprehensively examined overall user responses to robot service failures, including higher levels of negative word-of-mouth, 16,39,43 (dis)satisfaction, 34,39,69 forgiveness, 34,35,38 and intention to switch away from robots, 16 as well as lower levels of engagement, 39 loyalty, 53,55 and positive response. 40 Furthermore, it has been contended that robotic services are created from an individual’s past behaviors, rendering them akin to a digital avatar crafted for that individual and fostering a greater self-AI association between user and robot than in human staff service contexts. This phenomenon makes users less prone to negative word-of-mouth communication because of impression management concerns. 70
Behaviors/actions
Regarding the most important behavioral consequences, existing research holds contrasting views on whether failed or unsuccessful robots ultimately influence user adoption or use. On the one hand, a robot that can resolve and clarify error messages during communication, as opposed to an utterly error-free robot, can identify the source of errors and demonstrate its efforts to improve performance, making it easier for users to perceive the robot as a personalized service subject and enhancing reuse intention. 48 On the other hand, in the existing human–robot interaction literature, users may also experience resource depletion (e.g. emotional, relational, and informational resources) as a result of substandard robot service, resulting in avoidance or resistance coping strategies, including refusing to use the robot for further service 23 or churning, 53 lowering retention, 42,52 usage intention, 54 or tolerance, 36 and switching to the service or product of the firm’s competitor. 23 Sun et al. 19 categorized robot failure as a stressor and pointed out that functional, systematic, and informational failures of robots can cause a “technological invasion” of intelligent services, leading to technological exhaustion and ultimately weakening individuals’ reuse intentions. In their study, Fan et al. 49 centered their attention on the impact of failures on future human–robot interaction. Additionally, it has been proposed that inadequate robot performance may prompt users to operate the robot manually, in lieu of relying on automatic mode. 71 The utilization of robots as an auxiliary tool has been shown to foster users’ leniency toward robots that experience service failure, thereby enhancing their willingness to use the robot again in the future. 72 Other studies took an experimental approach to explore human face and head movements after robot failure. For example, Giuliani et al. 
20 found that head movement and smiling were the most common human actions after a robot service failure, and Hayes et al. 73 noted that the most common human actions in response to robot failure were frowning and headshaking.
Recovery strategies for robot service failure
After a service failure, much research has focused on how to remedy it. Service recovery refers to a series of response actions taken by the service provider to eliminate user dissatisfaction and complaints after a service failure. It generally includes compensation, responsiveness, apology, user choice, and restart, 65 aiming to correct the service mistake and induce user forgiveness and satisfaction. For (RQ3), recovery strategies in the existing research fall into two main categories. On the one hand, the robot itself can perform appropriate verbal remediation after the service failure. This includes both the robot’s apology for and explanation of the service failure event and responsive remediation using different tones, communication styles, and textual elements unique to robots. On the other hand, owing to the specificity of robot service failures, a recovery strategy unique to this context, relative to human service interactions, is the intervention of human service staff (i.e. human intervention strategies). In summary, this study summarized a comprehensive research framework of different recovery strategies and their effectiveness, as shown in Figure 5. In this diagram, the independent variables are those associated with different robot recovery strategies, the mediating variables are users’ perceptions of the robot (e.g. social judgment) and changes in their state (e.g. emotional state), and the outcomes are users’ evaluations of recovery or overall satisfaction.
Antecedents
As mentioned, prior research has identified two predominant recovery strategies: robot-initiated and human intervention approaches. The ensuing section provides a detailed explication of these two strategies, each discussed independently.
Robot-initiated strategy
The most effective way to retain users after a service failure is for the robot itself to remedy the undesirable situation. Three studies 32,74,75 proposed that prompt apologies by robots after service failures are an effective recovery strategy. Offering service options and explanations are effective ways to gain user favor because they demonstrate the robot’s proactivity in remedying service. 32,76,77 According to Mahmood and colleagues, 77 robots should assume accountability for their errors and offer genuine expressions of regret following their shortcomings. A similar recovery strategy is also called the politeness strategy in Song et al.’s study. 77 Different communication styles between robots and users after service failure can impact the effectiveness of service remediation (e.g. social-oriented vs. task-oriented 78 ; gratitude vs. apology 79 ; whimsical vs. kindchenschema 80 ).
A robot’s voice tone and response text can be set in its built-in programs, a perspective that has also been usefully explored in prior studies. Drawing on benign violation theory, Yang et al. 81 examined humorous responses in recovering from service failures. Green et al. 64 delved deeper into the realm of humor and researched the varying impacts of diverse forms of humorous responses (e.g. self-deprecating, affectionate, aggressive, and self-improvement). A study by Liu et al. 82 migrated humor into the realm of emojis, suggesting humorous emojis as a remedy when chatbots fail. Lv et al. 83 explored the role of cuteness elements (appearance, tone, and manner of speech) in service failure recovery. Alternatively, empathic response strategies for robots have been proposed, 84,85 in which robots retell the failure event from the user’s perspective, showing the robot’s emotional intelligence. An empathic robot can use perspective-taking and empathy to mitigate negative emotions, 86 as robots store historical data and can more acutely capture users’ real-time emotional states, allowing them to offer solutions to failures more quickly. Other works explored the “seamless” inquiry strategy 87 and the refinement strategy 88 initiated by robots in service recovery.
Human intervention strategy
Human intervention represents the repair and rebuilding of satisfaction after service failure through human–robot cooperation 32,89 and is a unique recovery strategy in robotic service failure scenarios. Three studies 22,90,91 have examined the contrast between recovery providers, such as human employees and robots, and identified human intervention as a superior and more efficacious approach.
Meanwhile, some scholars argued that the advantages of human intervention strategies are not significant. Ho et al. 92 concluded that there was no significant difference in the effectiveness of human and robot interventions for repairing the service experience after a robot service failure, and both were superior to fellow-consumer interventions. This is because fellow-consumer interventions are extra-role behaviors, while service repair by the firm’s service representatives (e.g. human, robot) is an in-role behavior. Therefore, in-role behavior allows the focal user to feel a higher degree of role congruence than extra-role behavior conducted by fellow customers. Several studies suggested that the service recovery approach implemented by human intervention falls short in comparison to the self-recovery technique adopted by robots. 93 This discrepancy could be attributed to the fact that robot self-recovery affords users uninterrupted service and circumvents deviations, such as suboptimal service attitudes, that may arise during human service.
In addition, Huang and Dootson 94 focused on the timing of disclosure of human intervention after robot service failure (early vs. late). Jones et al. 55 explored the impact of the digital avatar’s gender on user engagement in the service recovery situation. Chen et al. 96 suggested that service providers adopt a co-creation recovery strategy to cope with robot service failures, that is, involving users in the service recovery process.
Mechanisms
Similar to the prior section, this study provided a synopsis of the corresponding mechanisms and categorized them into two components: (1) perceptions toward the robot and (2) user state change.
Perceptions toward the robot
Prior research concurred that the implementation of robot recovery can alter the attitudes and beliefs users hold regarding robots. It was found that service recovery restored or influenced users’ social judgments 32 and social perceptions (i.e. warmth, competence, and partner perceptions). 64 Trust has emerged as a critical factor in the empirical literature, with a noteworthy focus on two distinct aspects in particular. Specifically, four studies have investigated trust as a mediator, with two exploring the general concept of trust 84,85 and two further distinguishing between cognitive and emotional trust. 78,96 Moreover, certain investigations have transferred factors integral to interpersonal service to the context of robotic service recovery. These studies have proffered theories regarding perceived sincerity, 81,90,91,95 perceived naturalness, 91 perception of role congruity, 92 and likeability. 77 Perceived intelligence has been identified by three studies 77,82,95 as a unique aspect of user perception that distinguishes robot recovery from human service recovery. Furthermore, humor appreciation served as a mediator in a study by Yang et al. 81 that examined the function of humor recovery strategies.
User state change
This concept pertains to the influence that robot recovery has on an individual’s cognitive and emotional states. Scholarly investigations into emotional phenomena tend to adopt a more selective approach, often encompassing themes such as empathy, 52 tenderness, 56 nurturing instinct, 83 relational/efficacy needs, 79 negative emotions, 80 and emotion/problem-focused coping. 94 Conversely, in research that positions cognition or behavior as intermediaries, scholars concentrate on performance expectancy, 56,83 psychological distance, 84,85 face concern, 97 recovery choice, 22 and perceived governance. 22 A study by Song et al. 93 focused on value and risk, examining perceived functional value, perceived experience value, and perceived privacy risk as mechanism variables.
Moderators
Within this phase of the research, two classifications of moderating variables are identified and characterized: (1) user factors and (2) robot/failure factors.
User factors
In two separate studies, demographic variables (i.e. age, 74 gender 80 ) were introduced as moderating variables. An increasing number of studies have directed their attention toward examining implicit individual differences, including constructs such as relationship orientation, 78 interdependent self-construal, 85 indulgence, 85 sense of power, 75,95 face concern for human interaction, 90 customer participation, 94 implicit personality, 82 and perceived diagnosticity of lay beliefs. 91 Additional research has incorporated moderating variables related to perceptions of robots and experiences with failure. These variables include technology anxiety, 80 failure experience, 95 and willingness to accept AI as social partners. 86
Robot/failure factors
Similar to the investigation of robot service failure, the center of attention in the study of robot service recovery is either the robot or the elements constituting the origin of the failure. Four studies 56,80,81,83 examined the degree of failure severity, while two 56,97 focused specifically on the effects of time pressure. Additionally, two studies 81,86 delved into the intricacies of robot design, while two more 22,93 explored the impact of the degree of robot intelligence. Furthermore, one study demonstrated that recoveries incorporating a wider array of interaction methods are more effective in providing service remediation, ultimately leading to favorable recovery outcomes. 84 A study by Brooks et al. 15 demonstrated that both human support and task support are effective tools for mitigating users’ negative reactions to robot service failures. Finally, investigations have additionally scrutinized the moderating influence of recovery type (i.e. instrumental vs. informational), 92 performance level, 98 and linguistic form (i.e. quantitative vs. qualitative). 98
Outcomes
Similar to robot service failure research, current studies have paid close attention to future use, 32,84 reuse, 82 and discontinuance 98 after a service failure recovery. Among the other key literature, the final outcome variable for eight studies was satisfaction, 75,78,80,90,91,93,95,97 four studies opted for tolerance, 56,83,85,86 and two studies measured evaluation. 81,92 Liu-Thompkins and colleagues 86 investigated the effects of robot service recovery on well-being, user loyalty, and user equity concurrently. The final resultant variables examined in the realm of robot service recovery include forgiveness 22,79 and aggression. 94
Future research avenues
With the continuous advancement of robotics in service marketing practice and academic research, studies on robotic applications have received focused attention and discussion from scholars in various disciplines in recent years. Many journals have released special issues discussing related research questions. However, research focusing on robot service failure and recovery strategies is still limited. Based on the research mentioned above, this study suggests that future scholars can extend and advance this topic in the following respects. In this section, we answer (RQ4) by highlighting multiple avenues for future research.
The deep excavation of robot service failure’s uniqueness
Existing research has revealed differentiated user responses between robot service failure and human employee service failure and has also explored user cognitive coping strategies in the context of failure from the perspective of robot design (e.g. degree of anthropomorphism). Studies have also explored different types of failure (process vs. outcome) in the context of robot service failure. Although these studies confirmed and explained the negative consequences for users of experiencing robot service failure, current research has yet to capture the underlying logic and uniqueness of robot failure in depth.
Based on AI job replacement theory, Huang and Rust 1 proposed four AI robot service models, ranging from low to high AI involvement: mechanical, analytical, intuitive, and empathic. They distinguished the roles that human employees and service robots could play in accomplishing service tasks. As AI technology continues to evolve, the earlier mechanical and analytical robots have gradually been replaced by intuitive and empathic robots in widespread user-facing services. The current AI technology paradigm is actively reshaping the structure of personal beliefs, social paradigms, and even economic systems. 99 In a recent piece of robotic services research, Esmaeilzadeh and Vaezi 100 even argued that “…consciousness is an emergent phenomenon in artificial intelligence….” Belk 4 pointed out that although algorithmically controlled, AI robots are also beginning to exhibit emotions, which challenges the conventional notion that robots lack the ability to think (i.e. a lay belief) and introduces a new era of service applications in human–robot interaction research. 12 This suggests that service robots that have incorporated AI technologies in recent years are very different from traditional service robots (e.g. self-service technologies), triggering a paradigm shift in future research on robotic services.
Given the current research on robot service failures, we suggest that a breakthrough can be made by digging deeper into the uniqueness of robot service failure in the following two respects. On the one hand, the study of robot service failure should move beyond the comparison of human and robot services. In many service practices, robots provide services to users alone (e.g. hotel door greeting). Therefore, a deeper understanding of the practical use of robot services can be facilitated by exploring other aspects of robot failures instead of the mere comparison with human employees. On the other hand, the understanding and delineation of robot service failure in existing research continues from previous research on human service contexts or from a technical perspective on robots’ information processing. Such divisions may not be fully applicable to the study of robot service failure. For the former, a classical division is between process failure and outcome failure. However, process failure and outcome failure may not be independent of each other in robot service contexts; process failure inevitably leads to outcome failure in some cases. For example, if a user locates or orders food through a voice robot and the robot fails to identify or misidentifies the user’s needs or information while communicating with the user (process failure), the robot will inevitably fail to provide an accurate service outcome (outcome failure). For the latter, dimensional divisions based on the technological perspective overemphasize the role of AI information processing. Users see only the actual service results and may not have the ability or sufficient motivation to discern which part of the robot’s process has erred. Therefore, what exactly is the nature of robot failure, and what are its manifestations? Future research could explore this deeply.
Exploring the psychological mechanisms of users’ reactions to failure
Existing studies have revealed users’ cognitive, emotional, attributional, and attitudinal responses after robot service failures and have validated and explained the possible mechanisms. Although these studies have provided valuable insights for a deeper understanding of users’ reactions to robots, there is still much room for expansion in future research. The degree of anthropomorphism of robots is a frequently mentioned topic in current research on robot service applications. As far as existing research goes, anthropomorphism can lead users to hold more negative attitudes toward robot service failure. 30 There also exists a biological species distinction between robots and humans. 16 Does the anthropomorphic design of the robot in the service failure context challenge the user’s self-identification with the human species? Does the tendency of users to compare themselves to robots as a species lead to resistance and aversion to robots? Similarly, the presence of robots affects users’ perceptions of themselves; based on self-enhancement motivation, do robots that fail at service enhance or diminish the clarity of users’ self-concept? From a decision-making perspective, existing studies concern service failures experienced by the robot service users themselves, so what are the cognitive responses of bystander users who witness a robot service failure when the service target is someone else? From an emotional perspective, existing research has focused on users’ emotional responses to robots in a single service interaction. Future research could also probe whether a dynamic contagion of emotions exists across multiple service interactions between users and robots 101 and whether this differs from users’ emotional responses in single failure situations. Do users’ negative emotions intensify or moderate when faced with a robot that fails multiple times? 
Similarly, robots have memory capabilities that far exceed those of humans, and their services are accompanied by the risk of personal privacy data disclosure, 102 so do users’ cognitive processing mechanisms differ for service failures associated with different levels of privacy?
In addition, existing studies have examined user reactions to robot service failures in different service scenarios (e.g. hotels and restaurants). However, there may be differences in users’ reactions to failure across social contexts. Schepers et al. 103 investigated the effects of different types of robots on users’ emotions in different service contexts (low cost vs. full service). Nevertheless, does user acceptance of failure differ across service contexts? For example, are users more intolerant of robot failure in credit-based services than in experience-based services? Further, do differences in individuals’ uncertainty avoidance and individualistic tendencies in different social contexts lead to different perceptions of robot service failure among users in different countries? For example, in societies with a higher level of uncertainty avoidance (e.g. Japan), users are more cautious about technology products and may therefore deal with robot service failures more cautiously, whereas in societies with a lower level of uncertainty avoidance (e.g. Singapore), users are more willing to try new technology products; does this promote tolerance of robot service failures? However, existing studies have not yet compared the differential psychological mechanisms of user responses to robot service failures in different social contexts, and future scholars can expand on this accordingly.
The unique recovery strategies for robot service failures
In research on recovery strategies for robot service failures, many studies have explored the effectiveness of human intervention or robotic apology/explanation. However, as the academic community has not yet developed a unified understanding of the uniqueness of robot service failure, in-depth analysis of recovery strategies has been limited to a certain extent. At present, human–robot collaboration in service scenarios is an essential means for service firms to accomplish their service goals. Therefore, having human employees compensate for a service failure made by robots is also an effective recovery strategy. However, human intervention after a robot’s service failure not only increases service costs but can also mean that remediation is missed during the “golden time” after the failure. Thus, it is crucial to attempt to propose more practical recovery strategies based on the robot’s characteristics as a direction for future research. Among the existing studies, Lv et al. 83 analyzed the role of cuteness in service failure recovery. Indeed, unlike adult human service staff, robots can change their vocal tones through built-in programs to let users perceive different speech features. Similarly, concrete acoustic properties (e.g. the zero-crossing rate) can “refract” abstract speech features at different levels. 104 AI programs can help robots communicate with users with stable speech features. Future research may also attempt to investigate the effects of other speech features (e.g. soft moaning, solemn accent) on the effectiveness of robot service failure recoveries.
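To make the notion of a concrete acoustic property tangible, the zero-crossing rate can be computed directly from a sampled waveform. The following minimal Python sketch is our own illustration (the example signals are invented, not drawn from any cited study) of one common definition:

```python
import numpy as np

def zero_crossing_rate(signal: np.ndarray) -> float:
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(signal)
    # Count sign changes between neighboring samples.
    crossings = np.count_nonzero(signs[:-1] != signs[1:])
    return crossings / (len(signal) - 1)

# A signal that flips sign at every step crosses zero maximally often,
# while a monotone positive signal never does.
print(zero_crossing_rate(np.array([1.0, -1.0, 1.0, -1.0])))  # 1.0
print(zero_crossing_rate(np.array([1.0, 2.0, 3.0])))         # 0.0
```

Higher values loosely correspond to noisier or higher-pitched speech, so a robot's speech synthesizer could monitor such features to keep its vocal characteristics stable across utterances.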
In addition, the built-in algorithm can also “guide” the robot’s output text, and only a few scholars have conducted preliminary explorations of the effects of robot textual replies (e.g. empathic replies) in existing studies. 85 Future scholars can also explore the effectiveness of other forms of textual replies for remedying robot service failures. For example, AI technologies could allow robots to mimic users’ linguistic styles based on real-time and historical conversation data. According to communication accommodation theory, the matching of both parties’ linguistic styles affects the effectiveness of communication. 104 A robot that mimics the user’s linguistic style when communicating after a service failure may strengthen the user’s recognition of, and trust toward, the robot, thus positively influencing service recovery evaluations. Future research can also explore the effects of different linguistic strategies in robot service remediation as a practical guide to future robot service practices.
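One common way to operationalize linguistic style matching is to compare the rates at which two speakers use function-word categories. The Python sketch below is a simplified illustration under our own assumptions: the tiny word lists and helper names are hypothetical, and published work relies on validated lexica with far more categories and entries per category.

```python
import re

# Hypothetical function-word categories; real style-matching work
# uses validated lexica with many more words per category.
CATEGORIES = {
    "pronouns": {"i", "you", "we", "it", "they"},
    "articles": {"a", "an", "the"},
    "negations": {"no", "not", "never"},
}

def category_rates(text: str) -> dict:
    """Per-category usage rate (share of all tokens)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)
    return {name: sum(t in words for t in tokens) / total
            for name, words in CATEGORIES.items()}

def style_matching(text_a: str, text_b: str) -> float:
    """Mean per-category score 1 - |pa - pb| / (pa + pb),
    averaged over categories; 1.0 means identical style."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    scores = []
    for name in CATEGORIES:
        pa, pb = ra[name], rb[name]
        scores.append(1 - abs(pa - pb) / (pa + pb + 1e-9))
    return sum(scores) / len(scores)
```

A score near 1 indicates closely matched styles; in principle, a robot could compute such a score over a conversation window and adapt its replies accordingly.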
Evolutionary study of robot service failure and recovery
Individuals’ knowledge and lay beliefs about robots change in parallel with changes in science and technology, individual experience, and environmental development. In particular, today’s robots have deep learning algorithms and capabilities, and as individuals interact with robots more frequently, the robots become more aware of individuals’ interests, preferences, intentions, and behavioral responses, improving service performance. Since the mainstream stereotype of robots is a “high level of competence and low level of intelligence,” users’ perceptions of algorithms will change as they experience robot services or as robots continue to be promoted and used commercially. Belk 4 noted the importance of emotional factors in robot services and suggested that robots will “replace” humans in some service industries in the near future. Especially for robots with deep learning capabilities, each interaction increases the user’s familiarity with the robot, and robots may learn more about users as interactions accumulate, thereby increasing the utility of the robot as a product over continued use. Therefore, future research can be further expanded in the following two respects. On the one hand, from a longitudinal AI technology development perspective, do rapid changes in robots give users differentiated perceptions of robot failure? It has been demonstrated that occasional inadvertent mistakes increase users’ liking toward robots. 48 Why do users develop such attitudes toward robots? In the longitudinal dimension, has technology changed users’ perceptions of the role of robots, thus helping them view the service failures of robots in a differentiated way? On the other hand, from the perspective of individual robot use, as user–robot interaction activities increase, robots become more like digital avatars of users. 
70 Thus, how do user acceptance and failure tolerance of robots change dynamically as users use them more frequently?
Conducting complementary studies with a mix of methods
Existing studies on robot service failure and recovery are mainly based on behavioral experiments. Possible reasons are that, on the one hand, experiments can effectively exclude external environmental noise and yield more accurate causal conclusions; on the other hand, the real-world use of robots in the service field is still on the rise, and data suitable for research are more challenging to obtain. Meanwhile, a few studies have also explored related issues using interviews, induction, 25 and surveys. 19 However, with the increasing popularity of robots, existing studies have begun to call for an in-depth exploration of issues in robotic services using mixed methods. For example, Filieri et al. 51 used a human–robot hybrid approach to explore user emotional responses in human–robot interactions. In their study, the authors compared the advantages and disadvantages of XLNet, support vector machine, naive Bayes, and random forest methods for analyzing reviews. To investigate the effect of robot use on happiness, studies have begun to experiment with macro data (e.g. US state-level data) 105 for interpretation, while some scholars have begun to introduce more realistic field experiments (e.g. restaurant experiments) 103 for analysis. Thus, we suggest that future research could further explore the use of multiple mixed methods, including secondary data analysis, behavioral experiments, field experiments, and qualitative analysis, to explain and validate the effectiveness and accuracy of different types of real-world robotic service failures and recovery strategies across consumption scenarios and cultural contexts.
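As a rough illustration of the kind of classifier comparison described above, the Python sketch below fits three of the mentioned model families with scikit-learn (which does not provide XLNet; that requires a separate transformer library) on a handful of invented review sentences. The texts and labels are purely hypothetical and far too small for a real study.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labelled reviews; real studies train on large
# scraped corpora rather than four toy sentences.
reviews = [
    "the robot waiter was fast and friendly",
    "great service and the robot understood my order",
    "the robot dropped my food and ignored me",
    "terrible experience, the robot failed to answer twice",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

results = {}
for clf in (MultinomialNB(), LinearSVC(), RandomForestClassifier(random_state=0)):
    model = make_pipeline(TfidfVectorizer(), clf)  # bag-of-words features
    model.fit(reviews, labels)
    # Training accuracy only; a real comparison would use held-out data.
    results[type(clf).__name__] = model.score(reviews, labels)

for name, acc in results.items():
    print(f"{name}: {acc:.2f}")
```

On realistic data, such a loop would be wrapped in cross-validation and report out-of-sample precision and recall, which is how studies of this kind weigh the trade-offs between the model families.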
Conclusion and contributions
This study sought to explore the intricacies of robot service failure and recovery research, which is characterized by its fragmented and complex nature. To this end, a systematic review method was employed to identify and analyze the various dimensions, attributes, implications, and strategies associated with robot service failures. The review of 92 relevant articles provided insight into four distinct groups of outcomes that can result from robot service failures, as well as two categories of recovery strategies. These findings hold potential value for both practitioners and scholars in future research and practice.
Theoretical contributions
This study makes several key theoretical contributions to research on robot service failure. First, the present study examined the failure of robots to provide satisfactory service across the domains of HCI, CS, business, and psychology, in an effort to offer a more comprehensive approach to exploring the phenomenon of robot service failure across disciplinary boundaries. This literature review represents the first known systematic investigation into the consequences of failed service provision by robots to users, including remedial strategies employed to address such failures. Our findings suggest a notable increase in publications pertaining to robot service failure within the interdisciplinary areas of interest in recent years, with a majority of studies published in HCI journals, followed by journals focusing on computer and information systems, as well as those pertaining to business research, marketing, and specific industrial sectors. The results of this review emphasize the potential for cross-disciplinary collaboration between these areas of study and highlight the importance of exploring the phenomenon of robot service failure through diverse theoretical lenses.
Second, in addition to employing a systematic technological approach, this study also focused on the overarching intellectual structure that emerges from different streams of the current literature. In doing so, this study was able to transcend traditional disciplinary boundaries and instead examine the multidisciplinary linkages and dialogues between three fields. Characterized by its multidisciplinary nature, this study employed a comprehensive and synthetic framework to understand the consequences of robotic service failures, ultimately summarizing the triple mechanism of emotion, cognition, and attribution. Through a bibliographic coupling method, this study identified relevant topical areas and elucidated relationships within the literature.
Third, in tandem with the preceding point, the present investigation demarcated two primary recovery strategies for tackling instances of robot service failure, designated the robot-initiated strategy and the human intervention strategy. The study expounded upon the systemic outline of the potency of these strategies and their underlying mechanisms. This theoretical orientation contributes to the integration of discrete research efforts and facilitates cross-pollination, augmenting the efficacy of robotic technology within HCI and allied domains.
Finally, the current study posits that the failure of robot services is distinct from that of interpersonal services. Hence, the examination of the former ought not to proceed by a mere transference of the conventions governing the study of the latter. In light of this, the research advocates a more all-encompassing and distinct outlook toward the investigation of robot service failure and recovery. 82 Additionally, the study calls for a more thorough and exacting approach to scrutinizing these issues.
Practical implications
Given the widespread use of robots in the service industry and their propensity for service failures, investigating the consequences of such failures and the strategies for recovering from them is imperative for service management in companies. Research indicates that users experience negative effects with robot services more frequently than positive ones, underscoring the need for service firms to weigh carefully where robots are deployed, accounting for both the service context and users' personality traits. To this end, companies should exercise caution, deploying robots in service settings that do not demand high levels of emotional interaction or personalization, and implement effective user disclosure strategies to manage expectations. Notably, research suggests that endowing robots with human-like qualities such as anthropomorphism, cuteness, and empathy positively shapes user perception and improves remediation efficacy after service failures. Timely communication with users and supplementary human assistance are also crucial strategies for enhancing robot services. Hence, this study proposes granting human employees final decision-making authority in service delivery so that customers retain a sense of personalized attention. Lastly, companies should strengthen customer relationship management and cultivate humanistic approaches toward their users to increase forgiveness and tolerance in instances of robot failure.
Limitations
Even though a systematic literature review was conducted, this study, like any systematic review, has several limitations that must be noted. First, the reviewed topic spans multidisciplinary studies (e.g. marketing, information systems, industrial design), so the analysis had to remain at a relatively general level. Although this study is dedicated to user research in this domain, we also included some emergent studies grounded in other perspectives, which limits the depth of this literature review. Second, the publications included in this review were identified by searching keywords in specific databases, which may have omitted potentially relevant literature. For example, we mainly focused on journal and conference articles published in English and may have overlooked studies written in other languages, although we believe the articles listed represent the current state of the art in robot service failure research.
Footnotes
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Dewen Liu is grateful for research grants from the Social Science Foundation of Jiangsu Province, China (Grant No. 23GLC016), and the Talent Introduction Project of NJUPT, China (Grant No. NYY222011).
