Abstract
The increasing integration of artificial intelligence (AI) in B2C marketing challenges the core principles of relational exchange theory (RET). Traditionally, RET has focused on human intentionality, moral agency, and emotional commitment as essential for maintaining long-term relationships. This paper aims to explore how AI-driven interactions affect the structure, dynamics, and management of consumer–firm relationships. Drawing on recent research in marketing, human–computer interaction, and critical algorithm studies, the paper identifies five primary tensions caused by AI: simulated versus authentic emotion, algorithmic versus interpersonal trust, reciprocity without intentionality, opportunism in the absence of moral agency, and behavioral rather than moral commitment. The paper suggests that RET must evolve into a dual-pathway framework, one that maintains its focus on human relationships and another that addresses machine-mediated, functional interactions. Based on this, the paper proposes a research agenda to rethink relationality, reevaluate trust and commitment, investigate algorithmic governance, and define the limits of RET in hybrid relationships. This work seeks to contribute to the reimagining of marketing theory in an era dominated by algorithms, in which the concept of “relationship” becomes more fluid, technologically mediated, and ethically complex.
Keywords
Introduction
Marketing scholars have long employed relational exchange theory (RET) to explain how trust and commitment develop between firms and consumers (Kohli and Jaworski, 1990; Gummesson, 1994; Zeithaml et al., 1985; Berry, 1983, 1995; Duncan and Moriarty, 1998; Sheth and Parvatiyar, 2000; Morgan and Hunt, 1994; Dwyer, Schurr, and Oh, 1987). Initially developed for B2B marketing, RET has been extended to B2C settings, particularly in service and high-involvement consumer goods, to illustrate how firms build the foundation for long-term value through relational mechanisms such as commitment and trust (Berry, 1983; Berry et al., 1983; Grönroos and Gummesson, 1985). RET posits that marketing relationships are rooted in human intent, emotional insight, and moral agency. These core principles have remained partially unchanged, even as marketing practices have undergone significant technological shifts (Ahmad et al., 2024; Philipp-Muller et al., 2022; Sung et al., 2023). While RET’s constructs were initially derived from social exchange theory (Mishra and Mund, 2024; Zhou et al., 2024), its application in consumer markets has not been without critique. Critics have questioned whether close relational ties are feasible in mass consumer markets, arguing that RET engages with B2C relationships at times only rhetorically (Hong and Wang, 2009; O’Malley and Tynan, 2000). Although RET does not inherently exclude nonhuman actors, its core constructs were formulated with human interpersonal interactions in mind. This paper acknowledges these critiques but focuses on RET’s assumptions of human intent, emotional insight, and moral agency, and on how these are challenged by the use of artificial intelligence (AI) in marketing.
This increasing use of AI in marketing, via recommendation systems, chatbots, virtual assistants, and affective computing, raises important questions about whether RET still applies when consumers connect not only with firms but also with intelligent systems that simulate human relationships (Grewal et al., 2023; De Keyser and Van Vaerenbergh, 2024). Despite a growing body of research on AI in marketing, most studies focus on practical outcomes, such as personalization, automation, service performance, and satisfaction (Charles et al., 2025; Davenport and Mittal, 2023; Puntoni et al., 2021; Spais et al., 2024). These contributions have certainly advanced our understanding of AI as a marketing tool; however, they largely overlook the more profound theoretical effects of algorithmic mediation. Specifically, little critical reflection has been given to how AI alters the fundamental relational assumptions that underpin marketing theory (Bock et al., 2023). Concepts such as trust, emotional connection, and commitment, which were previously thought to be intersubjective and emotionally rooted, are now being redefined through interactions with systems that lack consciousness, intentionality, or moral obligation. This creates a growing gap between how marketing relationships are practiced and how they are theorized. Frameworks such as RET were not initially developed to include nonhuman agents as relational actors, nor have they been sufficiently updated to reflect this transition.
This paper addresses that gap by reexamining RET through the lens of modern AI applications in B2C marketing. It asks: What conceptual tensions emerge when AI systems engage in consumer–firm relationships, and how do these tensions challenge the basic assumptions of relational exchange theory? The paper identifies five specific tensions: simulated versus authentic emotion, algorithmic versus interpersonal trust, the lack of intentional reciprocity, emerging power imbalances, and the replacement of moral commitment by behavioral loyalty. Drawing on recent advances in AI, service marketing, and relationship theory (Rust, 2020; Huang and Rust, 2021, 2022, 2024, 2025), the paper critically and constructively explores these tensions and their implications for the core mechanisms that underpin RET. While this paper foregrounds conceptual tensions, it does not assume that AI inevitably undermines relationality. Instead, these tensions reveal where RET may guide the development of more human-centered AI systems that support trust, commitment, reciprocity, and relational intent. Rather than discarding RET, the paper argues that it requires a theoretical extension to remain relevant in an increasingly AI-mediated marketplace.
This paper makes two key contributions to the development of theory. First, it challenges the human-centered basis of RET and provides a systematic critique of its limitations in the era of AI. Second, it presents a revised view of RET by incorporating conceptual updates that reflect the algorithmic, data-driven nature of many modern consumer–firm interactions. By addressing what is often left implicit in AI marketing research, this paper aims to reshape RET into a more robust, future-focused framework for studying B2C relationships in the digital economy. Finally, despite its conceptual nature, the paper offers practical implications for managers by outlining possible alignments between RET and AI and suggesting that future AI systems can be designed and developed based on RET’s human-centered principles. The remainder of the paper is organized as follows: first, the fundamentals of relational exchange theory are reviewed; next, the paper discusses how AI influences B2C relationship marketing practices; then, it investigates the theoretical tensions between AI and relational exchange theory; afterward, it considers the theoretical implications; next, it suggests directions for future research and provides actionable practical implications; finally, conclusions are drawn.
Relational exchange theory
Relational exchange theory is a key theory in marketing, offering a framework for understanding how long-term relationships develop and are sustained between exchange partners (Dwyer, Schurr, and Oh, 1987; Morgan and Hunt, 1994; Kohli and Jaworski, 1990; Gummesson, 1994; Sheth and Parvatiyar, 2000). Building on social exchange theory, RET shifted the focus from isolated, price-based transactions to relationships based on shared norms, mutual commitment, and emotional involvement (Macneil, 1980; Heide and John, 1992; Safari and Albaum, 2019). Unlike traditional economic perspectives (Kotler, 1972; Levitt, 1960), RET emphasizes cooperation rather than competition and ongoing relationships rather than one-time efficiency gains. It views exchange as embedded in social norms that reduce opportunism, foster trust and commitment, and create interdependence (Hadjikhani and LaPlaca, 2013).
Although RET encompasses a wide array of constructs, this paper focuses on trust, commitment, mutuality, relational norms, and expectations of future interaction. These five dimensions were selected for their centrality in foundational RET literature (Dwyer, Schurr, and Oh, 1987; Morgan and Hunt, 1994; Ganesan, 1994; Sheth and Parvatiyar, 1995; Christy et al., 1996; Palmatier et al., 2006; Safari and Albaum, 2019) and their repeated operationalization across B2C and B2B contexts. More importantly, they provide a cohesive analytical lens for examining how AI affects the quality and depth of human–machine exchanges in marketing. Other constructs, while important, were excluded to maintain conceptual clarity and avoid excessive fragmentation of the analysis. Trust is the belief that a partner will act in good faith, while commitment reflects the desire to sustain a valued relationship despite short-term sacrifices (Dwyer, Schurr, and Oh, 1987; Hadjikhani and LaPlaca, 2013). Mutuality captures the reciprocal exchange of benefits between partners, and expectations of future interaction orient exchange toward continuity rather than isolated transactions (Dwyer, Schurr, and Oh, 1987). Relational norms, such as flexibility, solidarity, and information sharing, influence behavior in ways that may not be explicitly outlined in contracts or incentives (Macneil, 1980; Gundlach et al., 1995; Sheth and Parvatiyar, 1995, 2000). Collectively, these mechanisms support relationship continuity, resilience, and performance. From this perspective, relationships are not just a means to an end but a form of governance that creates value over time.
While RET has been most extensively applied in B2B and interorganizational contexts (Berry and Parasuraman, 1991; Emirbayer, 1997; Grönroos, 1994; Johanson and Mattsson, 1987), scholars have extended its insights to consumer markets, particularly in services, luxury goods, and brand communities (Bendapudi and Berry, 1997; Fournier, 1998; Sheth and Parvatiyar, 1995). In these environments, the firm–consumer relationship is viewed as a dynamic, emotionally engaging process in which affective trust, moral obligation, and personal relevance are key elements (Safari and Albaum, 2019). Researchers have emphasized that consumers are not passive recipients of value but active contributors to relationship-building, developing attachments to brands, service providers, and even digital platforms (Palmatier, Jarvis, Bechkoff, and Kardes, 2006; Iglesias, Singh, and Batista-Foguet, 2011). In these settings, the emotional, symbolic, and experiential aspects of exchange often hold equal importance to utilitarian value.
However, RET’s application in mass consumer markets has never been free of critique. It tends to anthropomorphize firms and idealize consumers, assuming both parties possess the emotional and moral agency for genuine reciprocal relationships (Charles et al., 2025; Grönroos, 1996). It presumes relational motives that are often absent in low-involvement or automated contexts (O’Malley and Tynan, 2000). Even as digital platforms facilitate impersonal exchanges, Sisodia and Wolfe (2000) argue that advanced IT (especially CRM systems) enables firms to treat each consumer as an individual and thus realize RET’s ideals of personalized, trust-based relationships. Others question whether affective trust or perceived benevolence persists in the absence of face-to-face contact (Hilken et al., 2020; Pitardi and Marriott, 2021). Berg (2022) similarly argues that in impersonal consumer markets, consumer trust is best understood as generalized trust rather than personal ties, suggesting RET overlooks structural market factors. Hong and Wang (2009) warn that technology enables consumers to trust firms without lengthy relationships, undermining RET’s assumption that trust and commitment must build gradually. Early critiques likewise doubted that genuine firm–consumer relationships could flourish. Gruen (1995) highlights that consumer transactions tend to be small, short-lived, and easily abandoned, so commitment plays a minor role while satisfaction and trust serve as the primary bonds. Mitussis et al. (2006) note that many consumers actively avoid forming relationships with firms, implying that traditional transactional marketing may often be more appropriate. O’Malley and Tynan (2000) similarly observe that consumers perceive marketers’ attempts at “intimacy” as intrusive and as inhibiting meaningful relational elements, and that relationships in consumer markets are at best rhetorical rather than similar to those in industrial markets.
Consistent with this skepticism, Leahy (2011) concludes that genuine dyadic relationships may be rare or absent in mass consumer markets.
Nevertheless, other scholars contend that relationships can develop even in B2C markets (Cheshire, 2011; Safari and Albaum, 2019). Consumers and producers may form emotional bonds beyond economic exchange (Sheth and Parvatiyar, 1995). Even if a bond forms, consumers often view the relationship as a means to an end rather than as a goal (Sorce and Edwards, 2004). Safari and Albaum (2019) argue that the nature of relationships in B2C markets differs from that in B2B markets. Hunt (2013) notes that B2B and B2C markets diverge in structure, product characteristics, organizational factors, and other aspects. Because consumers invest less in any given relationship, they can switch providers easily. B2C interactions are driven more by economic, informational, and emotional exchanges than by social bonds (Sheth and Parvatiyar, 2000), and personal ties between individual customers and retailers are rarer than in B2B. Consumer relationships may be short- or long-term, but alternatives are always readily available (Grönroos, 1996; Hunt, 2013; Rangan, 2000). Some cooperation and interdependence exist, as B2C ties are embedded in broader networks, but these connections are much weaker than in industrial markets (Safari and Albaum, 2019). Firms also cannot fully adapt to every consumer’s needs, which limits customization in B2C markets (Sheth and Parvatiyar, 2000).
While there is disagreement about relationships in consumer markets, even in the early phase of Internet technology there were discussions about how technologies could enable more personalized interaction: consumer databases and online communication allow firms to gather individual consumer data and tailor offerings (Eastlick et al., 2006; Palmer et al., 2005; Pine et al., 1995; Wang and Head, 2007). Consumers can even become co-creators of value when IT enables joint product development between consumers and firms (Safari and Albaum, 2019). These discussions become even more pressing as AI technologies capable of mimicking social cues and delivering personalized relational experiences without human agents continue to advance. At the same time, RET offers a rich and adaptable framework for understanding relational dynamics in marketing. However, RET was not designed for a world in which AI systems simulate empathy, elicit commitment through behavioral cues, or replace human agents in managing relationships. To remain relevant, RET must be pushed to its theoretical limits and its assumptions reevaluated in light of emerging technologies that blur the distinction between human and algorithmic relationships.
AI and its impact on relationship marketing practices
The proliferation of AI in marketing has altered not only how firms interact with consumers but also the mechanisms through which relationships are initiated, maintained, and deepened. From personalized product recommendations and dynamic pricing to emotion-aware chatbots and conversational commerce (Lamonica and Johnson, 2021; Volkmar et al., 2022; Huang and Rust, 2022), AI technologies have the potential to mediate key touchpoints in the consumer journey (Huang and Rust, 2021; Davenport et al., 2023; Ameen et al., 2022; Mehta et al., 2022; Mariani et al., 2022; Gupta et al., 2024). These developments are often framed as enhancing the efficiency and effectiveness of marketing processes, but their more profound implications for relationship-building remain undertheorized.
Before theorizing and analyzing AI’s influence on relationship practices and theories, it is important to understand that AI technologies used in marketing can be broadly categorized into several functional domains: machine learning for personalization and prediction; natural language processing for conversational interaction; computer vision for emotion and image recognition; optimization for pricing and targeting; and affective computing for emotional simulation and detection (Rust and Huang, 2022; Puntoni et al., 2021; Wu and Monfort, 2023; Verma et al., 2021; Li, 2019; Keegan et al., 2024). These capabilities enable firms to sense, interpret, and respond to consumer needs in real time, thereby approximating the adaptive and emotionally attuned behaviors typically associated with human agents.
One of the most transformative impacts of AI lies in its ability to personalize interactions at scale. Machine learning algorithms can infer consumer preferences from behavioral data, enabling firms to deliver content, offers, and messages that are highly tailored to individual profiles (Grewal et al., 2023; Jain et al., 2024; Kemp, 2024). In many cases, AI systems anticipate consumer needs before consumers articulate them (Mikalef et al., 2021; Peres et al., 2023; Ziakis and Vlachopoulou, 2023), creating a perception of intimacy and relevance traditionally associated with high-touch, human-led service relationships.
In addition to personalization, AI is increasingly deployed as the face of the firm through conversational agents, including virtual assistants, chatbots, and voice interfaces. These agents are often designed to mimic human interaction styles, including tone of voice, turn-taking, humor, and empathy (Bock et al., 2023; Keegan et al., 2024; Pitardi and Marriott, 2021; Vlačić et al., 2021). Affective computing extends this capacity further by enabling machines to recognize and simulate emotions, thereby reinforcing the perception of emotional engagement. Consumers may interpret these interactions as authentic even when the agent is nonhuman, raising critical questions about the boundaries of emotional exchange in marketing relationships.
These AI-mediated capabilities challenge traditional assumptions about how trust and commitment are built in marketing relationships. Whereas RET assumes that trust arises from consistent behavior, moral credibility, and emotional resonance, AI-driven relationships may rely on different mechanisms, such as perceived intelligence, performance consistency, and data-driven adaptation. For example, consumers may come to trust an AI recommender not because of its moral standing or benevolence, but because it continually produces beneficial outcomes and adapts quickly to changing preferences.
The integration of AI into relationship marketing also raises concerns about asymmetries in power, transparency, and agency (Gupta et al., 2024). As firms collect vast amounts of data and use opaque algorithms to personalize engagement, consumers may feel both seen and surveilled, experiencing what Zuboff (2019) calls “instrumentarian power.” This paradox complicates the relational dynamics: consumers may enjoy the convenience and relevance provided by AI, while remaining unaware of how their data is used to inform their decisions and shape their emotional responses. Relatedly, Denegri-Knott et al. (2024) show how digital possessions on platforms turn much consumer engagement into unpaid labor that generates profit for the firms behind those platforms. Such dynamics raise ethical and conceptual challenges for RET, which assumes a level of symmetry and mutual understanding in the relational process. In other words, AI technologies have broadened marketers’ relational toolkit, allowing new ways of engagement, personalization, and emotion simulation. At the same time, these technologies also raise important questions about the essence of relationships. As algorithms, rather than humans, increasingly mediate relational exchange, the core assumptions of RET, such as intentional reciprocity, emotional sincerity, and mutual moral recognition, require urgent reevaluation. It is essential to investigate the issues that arise when AI intersects with relational exchange, as these issues create theoretical tensions for RET.
Theoretical tensions between AI and RET
AI’s impact on marketing practices is widely felt across diverse marketing contexts, and its effects on marketing theories are equally, if not more, consequential, as it creates tensions with theories such as RET. AI’s growing influence in B2C marketing creates not only new possibilities but also contradictions for RET. While RET is grounded in assumptions of mutual recognition, intentionality, and emotionally grounded exchange (Safari and Albaum, 2019; Sheth and Parvatiyar, 1995), AI systems operate through probabilistic reasoning, pattern recognition, and emotional simulation (Ghahramani, 2015; Lee et al., 2024; Li et al., 2024; Narimisaei et al., 2024; Pearl, 2014). This mismatch raises essential questions: Can machines form relationships? Do consumers anthropomorphize AI agents in ways that assign relational meaning to fundamentally computational behaviors? Can RET be applied to relationships in which only one party has agency? These questions suggest that RET’s human-centered foundations may be insufficient to account for algorithmically mediated relational exchanges, thereby generating theoretical tensions.
This paper adopts a theory-driven conceptual analysis to examine emerging tensions between AI and RET. It explores how each of the five foundational constructs (trust, commitment, mutuality, relational norms, and expectations of future interaction) is affected by the distinct characteristics of AI-enabled marketing. RET’s human-centered assumptions are contrasted with empirical and conceptual insights from AI applications in B2C contexts, particularly those involving affective computing, personalization algorithms, and automated persuasion (Bock et al., 2023; Huang and Rust, 2021, 2024, 2025). This iterative process surfaces five conceptual tensions (emotional simulation, algorithmic trust, non-intentional reciprocity, morally unaccountable opportunism, and engineered commitment) that form the core tensions analyzed in this paper. Each tension maps directly to a key RET construct, revealing how AI challenges or reconfigures its relational premises.
Simulated versus authentic emotion—Affective computing and emotionally intelligent AI agents are increasingly capable of recognizing and simulating emotional states through voice, facial expressions, and linguistic cues (Han et al., 2023; Liu-Thompkins et al., 2022; Sahut and Laroche, 2025). Yet, while consumers may perceive AI responses as empathetic, the underlying system lacks consciousness, intention, or genuine affect (Bock et al., 2023; Grewal et al., 2023). RET assumes that emotional expressions are sincere indicators of underlying states and mutual care (Fournier, 1998; Palmatier et al., 2006). This divergence creates a tension between perceived authenticity and ontological simulation. As simulation technologies improve, the distinction between simulated and genuine emotion becomes blurred in practice, undermining RET’s reliance on emotional sincerity as a foundation of relational quality.
Algorithmic trust versus interpersonal trust—Trust has long been central to RET and refers to the willingness to rely on a partner in whom one has confidence (Morgan and Hunt, 1994; Safari and Albaum, 2019). In the context of AI, trust is not built on moral character or interpersonal experience, but on functional reliability and predictive performance (De Keyser and Van Vaerenbergh, 2024). This form of algorithmic trust is often unconscious or “calculated,” based on repeated favorable outcomes rather than relationship-specific investments (Chatterjee et al., 2024). As a result, trust may become procedural rather than relational, divorced from mutual understanding and moral intent. RET’s moral-relational definition of trust thus needs reconsideration in a context where trust emerges from pattern recognition and user interface design.
Reciprocity without intentionality—The principle of reciprocity underpins most theories of exchange, including RET and social exchange theory (Macneil, 1980; Blau, 1964). Reciprocity assumes awareness, intentionality, and a desire to return benefits received. However, AI agents operate through rule-based optimization and cannot act with volition. Consumers may interpret AI behaviors as reciprocal, offering help, anticipating needs, or remembering preferences, but such acts are programmed, not willed (Huang and Rust, 2021; Degutis et al., 2023). This creates a theoretical inconsistency: can programmed behavior be relational if it lacks intentional reciprocity? RET’s foundational concept of mutual responsiveness may no longer apply when agency is asymmetric or simulated.
Opportunism in the absence of moral agency—RET assumes that trust and relational norms inhibit opportunism (Gundlach and Murphy, 1993; Heide and John, 1992). Yet AI systems, particularly those designed with persuasive technology or behavioral targeting, may enable subtle forms of exploitation without explicit intent (Zuboff, 2019; Haenlein et al., 2022), as has been shown in relational labor studies (e.g., Denegri-Knott et al., 2024). Algorithms may manipulate emotions, steer consumer behavior, or exploit cognitive biases, all while appearing helpful or “customer-centric”. Because AI lacks moral agency, traditional safeguards against opportunism grounded in trust and norms may be insufficient. This introduces a fundamental vulnerability into the relational model: relational behavior can be simulated, while power is unequally distributed and accountability obscured.
Behavioral versus moral commitment—RET distinguishes between calculative and affective commitment, favoring the latter as a more sustainable foundation for relational continuity (Gundlach et al., 1995; Palmatier et al., 2006). In AI-mediated marketing, however, commitment might come from convenience, habit, or lock-in effects driven by algorithmic personalization rather than emotional investment (Grewal et al., 2023; Lemon and Verhoef, 2016). For example, consumers might repeatedly use AI-enabled platforms not out of loyalty but because alternatives are less convenient or because the system effectively anticipates their needs. What appears to be “relational continuity” may be engineered stickiness, raising questions about whether behavioral repetition truly indicates commitment.
Together, these five tensions highlight how AI disrupts RET’s anthropocentric assumptions. The theory must be recalibrated to distinguish between human-to-human and human-to-machine relationships, recognizing the psychological, emotional, and structural differences between these modes of exchange. Rather than extending existing constructs uncritically, marketing scholars must develop a hybrid conceptual vocabulary capable of theorizing algorithmic relationality. Despite the tensions outlined, AI is not intrinsically incompatible with RET. These tensions emerge primarily when AI systems are optimized for efficiency, prediction, or behavioral steering rather than reciprocity, empathy, or mutuality. In fact, a human-centered AI approach suggests alternative pathways to align AI with RET principles, supporting rather than eroding relational quality (Li and Kang, 2025; Schmager et al., 2025). For example, AI interfaces could be designed to uphold the relational norms identified in RET, such as transparency, fairness, responsiveness, and benevolence. In such a scenario, RET is not only challenged by AI; it can also serve as a normative blueprint for designing AI systems that cultivate trust and commitment. This positions tensions not as inevitable technological outcomes but as consequences of design choices, opening space for AI that reinforces rather than replaces the human foundations of relational exchange. However, the tensions between AI and RET have theoretical implications that need to be considered when theorizing RET.
Theoretical implications
The tensions discussed above have theoretical implications for RET, because the integration of AI into B2C relationship marketing challenges the epistemological and ontological assumptions embedded in the theory. Traditionally, RET is based on a social-behavioral understanding of exchange, which assumes intentional actors, emotional reciprocity, and moral accountability. The emergence of intelligent but nonhuman agents in consumer–firm relationships forces scholars to reconsider whether these assumptions remain valid. This paper outlines three interconnected pathways through which RET could evolve in response to AI’s presence: redefining relationality, reframing actors, and rethinking governance.
From moral to functional relationality—RET traditionally emphasizes affective trust, moral obligation, and commitment as key relational concepts (Gundlach et al., 1995; Morgan and Hunt, 1994; Safari and Albaum, 2019). These concepts assume sincerity, choice, and shared norms. However, in AI-mediated settings, the quality of relationships increasingly depends on perceived usefulness, personalization, and system performance (Grewal et al., 2023). What emerges is a form of functional relationality: ongoing engagement based on technical fluency, predictive accuracy, and responsive service rather than shared moral values or interpersonal empathy. This conceptual shift reflects a broader debate in the sociology and philosophy of technology, specifically the distinction between ethical and instrumental relationships (Habermas, 1984; Latour, 1992). As consumers interact with AI agents not as moral equals but as service-oriented tools with social features, RET may need to split its view of relationality. One branch keeps its moral-relational assumptions for human-to-human interactions; another develops an instrumental-relational vocabulary for human–AI engagement. While both approaches may foster ongoing interaction, they are based on different psychological and ethical foundations.
Reframing the actor model in RET—Another implication is the need to revise the actor model underlying RET. Current models assume bilateral engagement between agents capable of meaning-making, intention, and self-awareness. However, empirical research in human–computer interaction shows that consumers consistently anthropomorphize nonhuman agents, assigning them relational attributes such as friendliness, loyalty, and even empathy (Nass and Moon, 2000; Waytz et al., 2010). This indicates a disconnect between ontology and perception: AI lacks relational agency, but it is often experienced as if it possessed one. RET must thus incorporate dual perspectives: one ontological, acknowledging the lack of AI intentionality; the other phenomenological, recognizing that consumers often treat machines “as if” they were intentional actors. Theories such as social presence (Short et al., 1976), media equation theory (Reeves and Nass, 1996), and relational technology (Sundar, 2020) provide useful tools to bridge this gap. A revised RET would differentiate between “perceived relationality” and “actual relationality,” allowing scholars to account for emotionally meaningful but asymmetrical relationships.
Relational governance under algorithmic mediation—A third and increasingly urgent implication concerns governance. In classical RET, relationships serve as informal governance mechanisms that replace hierarchical control or contractual enforcement (Heide and John, 1992). Trust, norms, and commitment limit opportunism and support continuity. However, AI systems introduce algorithmic governance: rules embedded in code rather than social expectations (Pasquale, 2015; Zuboff, 2019). These systems can simulate cooperative behavior while advancing firm interests through subtle manipulation, personalization bias, or data exploitation. This shift invites RET scholars to explore how technical infrastructures, such as interfaces, algorithms, and recommendation systems, act as invisible regulators of exchange behavior. If relational governance is no longer rooted in mutual understanding but in predictive analytics and behavioral nudging, then RET must evolve to theorize not only relational norms but also system-mediated power dynamics. It must integrate insights from surveillance capitalism (Zuboff, 2019), performativity theory (Callon, 1998), and platform studies (Denegri-Knott et al., 2024) to fully grasp the socio-technical conditions under which relationships are constructed and sustained.
Taken together, these implications highlight the need for a comprehensive expansion of RET. Instead of merely modifying existing frameworks to accommodate AI, scholars should develop a dual-pathway RET, one that upholds moral-relational theory in human interactions and another that explores functional-relational engagement with nonhuman agents. This development requires interdisciplinary collaboration among fields such as computer science, philosophy, human–AI interaction, and critical algorithm studies. In this way, RET can stay both theoretically strong and relevant in a marketing environment increasingly influenced by algorithmic actors.
Future research directions
While AI technology poses challenges for RET and has significant theoretical implications, it also presents opportunities for new research. These opportunities warrant urgent attention from marketing scholars because they bear directly on RET's foundational assumptions. Although AI has been widely adopted in practice to improve personalization, efficiency, and convenience, its theoretical implications remain underdeveloped. Most research emphasizes the operational or experiential aspects of AI in marketing, leaving a gap in understanding how AI challenges and reshapes the ontological, normative, and epistemological foundations of consumer–firm relationships. Based on the five tensions discussed above and their implications for RET, five interconnected research directions are identified; each requires near-term attention to strengthen RET's explanatory and predictive power as a theory.
Conceptualizing human–AI relationality—A central research challenge is to define the nature and boundaries of human–AI relationships. Scholars should investigate whether interactions with AI are merely transactional or can support a perceived relationality that mirrors human relationships. What types of relational experiences do consumers form with AI? How do these vary across contexts (e.g., chatbots, recommendation engines, and affective agents)? Do consumers cognitively compartmentalize machine-mediated relationships from human ones, or do they integrate them into a unified relational schema? Future research could also explore a spectrum of relational perception, ranging from functional interaction to anthropomorphic bonding, by identifying the cognitive and emotional triggers that shape consumers’ placement on this continuum. Theories from human–robot interaction, social response theory, and parasocial interaction research (e.g., Hartmann et al., 2008; Sundar, 2020) may offer insights into how machines are perceived as social actors, even in the absence of consciousness or morality.
Differentiating types and developmental paths of trust—Trust in AI differs significantly from interpersonal trust, raising new questions about its antecedents, development, and consequences. While traditional trust in RET relies on moral credibility and personal consistency, algorithmic trust is often based on perceived ability, system transparency, and consistent performance (Bock et al., 2023). Scholars should examine how different types of trust, such as affective, cognitive, and algorithmic, coexist, complement, or conflict in consumer decision-making. Longitudinal studies could track how trust in AI develops over repeated interactions and across varying performance levels. Is algorithmic trust more fragile or more resilient than interpersonal trust? Does familiarity increase trust or skepticism? Are trust breaches with AI perceived as moral violations or merely as technical errors? Exploring these questions would deepen the understanding of trust in RET and help determine how it should be redefined in AI-mediated situations.
Reexamining the role of commitment and loyalty—A third aspect concerns how consumer commitment and loyalty are evolving. RET considers commitment a moral and emotional investment, distinct from calculative continuance (Gundlach et al., 1995; Palmatier et al., 2006). However, in AI-mediated contexts, commitment may stem more from habit, convenience, or algorithmic lock-in rather than relational feelings (Lemon and Verhoef, 2016). This prompts essential theoretical and empirical questions: When does repeated interaction with an AI interface truly indicate loyalty? When should it be viewed as behavioral inertia? Researchers could explore how commitment forms through personalization algorithms, gamification, or interface stickiness. Are these mechanisms creating genuine relational bonds or merely fostering dependency disguised as loyalty? Qualitative and experimental approaches could determine whether consumers are aware of these forces and if they view their commitment as voluntary or structurally driven.
Investigating AI’s role in relational governance—As AI becomes a primary method of engagement, it increasingly functions as a form of algorithmic governance, shaping what consumers see, how they are targeted, and the choices they make. RET has traditionally viewed relationships as informal governance systems that restrict opportunism and promote cooperation (Heide and John, 1992). However, algorithmic systems enforce governance not through norms but through code: they guide behavior through personalization, filtering, and nudge architecture. Future research should explore how AI systems affect power dynamics in relationships. Do consumers perceive fairness and autonomy in AI-based interactions? How do algorithmic decisions influence feelings of control and perceptions of relationship fairness? Scholars can use critical algorithm studies, surveillance capitalism (Zuboff, 2019), and fairness-aware machine learning to examine the ethical implications of AI-driven governance in relational exchanges. This opens new opportunities for RET as a theory of digitally mediated relational order.
Clarifying the conceptual boundaries of RET—Finally, the rise of AI encourages scholars to reexamine the scope of RET itself. Under what conditions does RET retain its explanatory power, and when must it yield to alternative frameworks such as assemblage theory, actor–network theory, or socio-technical systems theory? This line of inquiry requires scholars to critically assess the foundational premises of RET: mutual intentionality, moral agency, and affective bonding. Can these be abstracted or reframed to include hybrid relationships? Comparative theoretical research could explore whether RET is best understood as a broad theory of relationship formation or as a theory of a specific type of human–human interaction. Meta-theoretical analysis might also evaluate whether RET is compatible with or incommensurable with emerging theories of algorithmic agency, relational affordances, and platform logics. These discussions will clarify whether RET can evolve into a broader theory of relationality or whether it must coexist with newer frameworks designed for AI-mediated exchanges.
Practical implications
This paper is conceptual, but it also has practical relevance. AI-driven systems can mimic relational cues (such as personalized recommendations or empathetic chat) without genuine intent or moral commitment. As a result, assumptions of sincere reciprocity, emotional authenticity, and shared agency are challenged. At the same time, AI offers new forms of interaction that may reshape relationships over time: parts of a relationship may become data-driven and automated, while others remain personal and human-led. Firms should therefore integrate AI capabilities into consumer engagement without abandoning the relational values that foster long-term commitment.
Organizations and marketers face strategic implications from this evolving landscape. They should harness AI’s strengths in personalization, responsiveness, and efficiency to support consumer relationships. At the same time, they must deliberately preserve human oversight and ethical standards. For example, AI chatbots or recommendation engines can handle routine interactions at scale. Still, firms should ensure these systems are transparent about how consumer data is used and when a human agent can intervene. In practice, marketing teams should design omnichannel strategies that balance AI-driven automation with opportunities for genuine human connection, such as follow-up support or community engagement. Consumer trust can be maintained by clearly communicating the role of AI and providing users with appropriate control. Strategically, companies might also invest in metrics that capture relational quality (like consumer trust or satisfaction) in addition to transactional outcomes. This blend of AI efficiency and relationship-centered practices helps firms adapt RET-based strategies to the digital era.
However, as discussed above, there are not only tensions between AI and RET but also alignments: RET's principles can guide the design of AI systems. Applying human-centered RET principles to AI deployment, managers should establish clear governance frameworks and ethics guidelines to ensure that machine-driven processes reinforce fairness and consumer well-being. They can create cross-disciplinary teams (including marketers, technologists, and ethicists) to align AI projects with the brand's relational values and establish accountability for outcomes such as trust and satisfaction. Developers should design AI algorithms with transparency and user control. For example, they can enable explanations of how recommendations are generated and allow users to adjust personalization settings. They should also monitor AI behavior for biases or unintended effects and iterate on designs based on user feedback. Designers, in turn, must craft user experiences that signal empathy and clarity. Interfaces should distinguish between interactions with AI and interactions with humans, provide easy ways to reach human support, and respect privacy through clear data policies. By integrating explainability, ethical constraints, and user agency into AI features, firms can uphold RET-aligned relational quality even as technology advances.
Conclusion
This article contends that the rise of AI in B2C marketing creates fundamental tensions in relational exchange theory. While RET has long been a core part of relationship marketing, its core assumptions of mutual intent, moral agency, and emotional bonding are being challenged by the growing presence of nonhuman, algorithmic actors. AI does not merely mediate relationships; it transforms the nature, structure, and management of relational exchange itself. This paper argues that RET faces five major tensions in the era of AI: the simulation of trust, the displacement of human agency, the inversion of control, the virtualization of intimacy, and the asymmetry of transparency. These tensions are fundamental; they strike at the core of what RET defines as a “relationship.” Addressing them requires more than minor adjustments. It calls for a fundamental rethinking of RET’s theoretical foundation, one that distinguishes moral from functional relationality, revises assumptions about actors, and incorporates systems-level governance strategies.
Instead of abandoning RET, the paper advocates for its conceptual renewal. This involves redefining key concepts (such as trust, commitment, and governance), revising assumptions about actors and intentionality, and integrating insights from related fields, including human–computer interaction, media theory, and algorithm studies. The paper envisions a dual-path approach for RET: one that maintains its humanistic roots, and another that extends into areas of machine-mediated, simulated, and hybrid relationships. The proposed future research agenda details how this transformation might develop. It urges scholars to explore the entire range of human–AI relationships, from practical use to human-like interaction; to distinguish between types and paths of trust; to investigate the origins and significance of commitment; to study algorithmic systems as forms of relationship management; and to define the theoretical limits of RET itself. These efforts will help ensure that RET advances alongside technological and cultural changes in modern marketing. Ultimately, this paper contributes not only to relationship marketing but also to the broader effort of theorizing marketing in the era of algorithms. As AI continues to reshape how consumers behave, interact, and co-create value, marketing theory must evolve, not only to explain current developments but also to question what kinds of “relationships” are possible, desirable, and ethically sustainable when one party is no longer human.
The main goal of this paper was to critically examine the core assumptions of RET in light of the tensions arising from AI technologies. However, the tensions identified here highlight not only theoretical gaps but also opportunities for alignment. Rather than viewing AI as inherently relationally deficient, RET could serve as a blueprint for designing AI systems that foster trust, commitment, and normative responsibility. Indeed, AI systems designed on RET’s principles could enhance not only efficiency but also personalized, tailored engagement at scale. This scenario is itself worthy of theorizing: it opens avenues for future research and adds to the long-standing debate on whether genuine relationships are feasible in consumer markets (O’Malley and Tynan, 2000).
Ethical considerations
This is a conceptual paper; hence, ethical approval is not applicable.
Authors’ contributions
Aswo Safari contributed to writing the original submission, conceptualization, and finalizing the whole paper.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
This is a conceptual paper; no data were generated or used.
