Abstract
Generative AI is upon us and is changing organizations and organizing. In this essay, we extend the relational perspective on technology, which argues for moving away from an entity-based view of technology to one that focuses instead on the evolving relations and functions between people, technologies, and organizations. We do so by introducing the concept of “progressive encapsulation,” which captures GenAI’s potential to increasingly expand the “black box” and reduce visibility into and control over the relations and functions performed. We argue that progressive encapsulation is critical in our theorizing about GenAI. As an illustration and thought experiment, we consider how GenAI and progressive encapsulation may necessitate changes in our theorizing about groups and teams in organizations.
Introduction
Emerging technologies, especially AI, have been heralded as a major disruptive force in organizations and their presence invites us, as organizational scholars, to evaluate the sufficiency of our existing theories to predict modern-day organizational processes and outcomes (Berg et al., 2023; Cornelissen et al., 2024; Faraj et al., 2018). In this article, we consider how generative AI (GenAI) in particular changes the way we can and arguably should theorize about organizations and organizing. We contend that the emergence of GenAI necessitates even more than ever that we focus on theory and theorizing because of its pervasiveness and the velocity of change. To this end, we build on the relational perspective introduced by Bailey et al. (2022) and consider how recent advances in GenAI necessitate extensions to existing theory.
A relational view fundamentally argues for a move away from treating emerging technologies as entities in favor of attending to a continuously evolving set of relations and functions associated with their use. A relational view, we argue, is even more important with GenAI because of GenAI’s dynamic properties and how GenAI can learn, adapt, and absorb functions and relations in ways that may often be invisible to users and organizations, and which furthermore may evolve its functionality in interaction with and independent of users. If we focus on AI as an ‘entity’ rather than bringing into focus the unfolding relations that are involved in its use, we risk losing insight into organizational functions and behaviors because the operations of the AI reflect iterative learning that remains hidden, thus illegible to organizational actors. That is, it is not the technology (GenAI) itself that requires interrogation, but the relations and functions it performs and how those evolve. If we focus on the relations, we are able to consider the properties and functionalities of GenAI, how others engage with it, and the services it renders to other entities—thus creating overall a better vantage point from which to understand organizations and organizing. We argue that such an extension of the relational view is necessary because this illegibility fundamentally changes the relationship between the technology, the organization, and the way that work is organized and understood. We advocate for using the relational view to better grasp GenAI, but then also to theoretically advance the relational view by incorporating the concept of progressive encapsulation. Importantly, our stance toward GenAI is neutral. That is, it may have both positive and negative effects, but those will emerge and only become apparent in practice. 
So, rather than adopting a normative stance, we make the case for changing our theories and theorizing to help predict how and with what effects GenAI will unfold in organizations.
The relational perspective has to date not considered how existing and novel relations, and the existing and novel functions performed within them, become increasingly internalized by technology because of how that technology is designed and operates. To meet this challenge, we extend the relational perspective by introducing the concept of progressive encapsulation to capture the way that GenAI is able to learn and adapt in ways that are difficult to observe, but nonetheless have discernible and material consequences. We argue that this encapsulation is key to theorizing about the role of GenAI in organizations and organizing.
We use examples of team coordination and team dynamics to illustrate how a relational approach that treats progressive encapsulation seriously can enhance our ability to theorize about organizations and organizing. That is, as an illustration and thought experiment, we articulate how extending the relational perspective on emerging technologies is useful in updating how we theorize about certain aspects of teams and team coordination in the context of GenAI. We use this as a foundation for suggesting broader implications for organizational theorizing and research.
Generative AI
“Generative AI” (GenAI) is a broad term denoting computational methods that can generate images, text, sound, or other content based on the data on which they were trained (Feuerriegel et al., 2024). Contemporary GenAI is largely based on deep learning techniques employing sophisticated algorithms rooted in computational networks designed to emulate what humans might produce. Whereas more traditional AI could memorize, replicate, and extrapolate, GenAI can learn to produce highly novel outputs (Grimes et al., 2023), which some scholars have found to be on a par with, or even exceeding, the quantity and quality of human-produced ideas (Eshraghian, 2020; Haase & Hanel, 2023). One of the key mechanisms in GenAI is its ability to learn patterns from vast amounts of data. Whether analyzing historical data on organizational structures, market trends, or employee behavior, these AI systems can identify intricate connections and generate potentially useful new ideas or solutions. Scholars have also found evidence that individuals who draw on GenAI in creative tasks perform better and enjoy the task more, with effects that are particularly strong for lower-skilled individuals (Noy & Zhang, 2023).
A prominent example of GenAI is the use of large language models (LLMs) such as GPT, BERT, Megatron, and Sparrow. Although human instruction plays a role in fine-tuning, such systems rely primarily on a largely autonomous, self-supervised approach to learning. Given enough training data, a sufficient model size (number of parameters), and computational resources, LLMs today can process and generate text with minimal task-specific fine-tuning and perform a broad range of complex tasks. Importantly, chatbots such as ChatGPT (based on GPT-4) capture more of the context surrounding words (tokens) and can produce longer responses to prompts in sequences that emulate human communication. They can also answer queries without being trained on a particular task.
LLMs, diffusion models, and related approaches can summarize and abbreviate content, retrieve novel insights from patterns deeply embedded within huge vaults of data, translate, transform sounds, text, or videos into new forms of content (e.g., text to image), expand content such as creating synthetic data, or dialogue with users. Today, such systems write or revise emails, draft corporate strategies, assign work to employees, assess patterns of customer behavior, create novel artwork, write social media posts, transform text into distinct styles, compose music, author movie scripts, create video clips, make short movies based on users’ scripts, design products with or without the collaboration of human designers, write software code, and so on. Importantly, GenAI systems have the ability to produce realistic outputs by learning through feedback. Due to their unique architectures and capacities to learn and adapt, they have also become invaluable tools for advancing scientific findings (Wang et al., 2023). ChatGPT has even achieved a passing score on the US Medical Licensing Examination, which according to many observers makes it suitable for a range of tasks in medicine, such as clinical practice, research, and education (Thirunavukarasu et al., 2023).
A Relational Perspective on Generative AI
In this paper, we focus on how GenAI changes the way that we can and should theorize about emerging technologies in organizations and organizing. To articulate how theory needs to be updated to address changes shaped by GenAI, we focus on and leverage the relational view of technology as articulated by Bailey et al. (2022).
A relational approach is particularly relevant with GenAI, we argue, because it is imperative to capture functions “becoming” in relation to other entities and how such becoming gives entities capacity to build and sever relations with existing or new entities. The relational view was built on a long history of organizational research that treats technology and technology use as intertwined with the social context (e.g., Barley, 1990; Orlikowski, 2000). Bailey et al. (2022) argue that “existing theories are insufficient to explain the magnitude and dynamism of relational possibilities involving emerging technologies” (p. 5) and offer the relational view to emphasize the co-constitution of technology, organization, and organizing. A relational view assumes a fluid and constantly evolving constellation of relations and functions. Bailey et al. (2022) assert that emerging technologies are infusing all aspects of organizational life and occupy an ever-increasing role in shaping all aspects of organizing. They also entreat us as organization scholars to shift away from a pure “entity” view of technology to one that incorporates a relational perspective—that is, “rather than viewing technologies as fixed entities in fixed relations, we offer that it is more fruitful to approach them as made up of relations and entwined in relations that are constantly evolving” (p. 4). Instead of studying technologies as stand-alone objects, they encourage us to “instead focus on the relations through which technologies are constituted and through which they interact with other processes and entities around them” (p. 4). In doing so, we are better positioned to capture how people, organizations, technologies, data, and other entities in organizational life and their effects are intertwined and co-created. A relational view is expedient for investigating GenAI, we argue, because of the dynamic and fluid way that GenAI interacts with organizations and various processes of organizing. 
Yet, as we begin analyzing GenAI through the relational perspective, we also uncover the need to extend this theory.
Fundamental to the relational view of emerging technology are relational dynamics characterized by constellations of relations in which new entities emerge or are discarded and novel relations between entities surface. In GenAI, for example, we may observe new relations between the GenAI, a messaging system, and a manager eager to tailor a written message that requires sensitivity and precision for an audience of employees. The GenAI sits at the center of these relations. In Bailey et al. (2022), the focus on relations was predominantly between the emerging technology and other actors in and around organizations, for example between a robot apple picker, the orchard, and the farmer. With GenAI, however, emerging relations can also reflect a design element inherent in the technology. For example, GenAI is in a state of constant flux as it forges new connections (e.g., in transformer architectures, vector embeddings of tokens that relate them to other tokens in a multidimensional space) that in turn feed its own modeling through self-supervised learning. Thus, the system is generating new relations and potentially new functions that need not rely on, or interact directly with, other organizational actors. The relational view also introduces the idea of relational cascades, that is, the effects of technologies on one set of relations may trigger changes in relations throughout a constellation of relations. We can consider, for example, GenAI in a theme park. The park can provide customers with a smart device that tracks their movements, purchases, and preferences (all new functions). In doing so, it has the potential to continually generate new relations (cascades) between customers and stores, admission gates, rides, restaurants, and hotels. For example, the GenAI might place food orders based on a surge of customers, dynamically route workers to busier rides to avoid lines, and trigger discounts for toys that are not selling well.
Similarly, if GenAI is used by managers to augment their ability to motivate and evaluate employees, such as determining job assignments or pay increases, new relations and functions may cascade between managers, GenAI, and employees, as well as within the AI as it learns. Cameron (2021), for example, describes how ride-sharing drivers play games with the algorithm in order to achieve their goals. With GenAI, as compared with previous emerging technologies (including those studied by Cameron), rather than being fixed, these functions and relations are likely to evolve and expand as the AI learns. In this process, it may ultimately obscure managers’ and employees’ visibility into the basis on which the AI is making decisions and the behaviors it is autonomously incentivizing.
Note that this obscuring of visibility is a process, so we need a process theory that maps relations and functions at the outset and at the output stage so that we can better understand “organizing with AI” as a dynamic division of labor between AI systems, humans, and organizations. GenAI, for example, can record and help summarize meetings, produce records of doctors’ visits for increased accuracy which may allow doctors to be more present with patients, recommend and perform statistical analyses that can then be evaluated by data scientists, or design new protein structures as an essential step in streamlining drug discovery. Each of these reflects a new function in a new relation between workers, GenAI, and often other constituents, such as customers and pharmaceutical companies. If an AI summarizes meetings, for example, participants may be better able to focus on the content of the meeting. The AI may generate a draft of meeting notes and a designated individual may be assigned responsibility for reviewing those notes for accuracy (a new function) and potentially iterate with the GenAI to produce higher quality notes, perhaps with some nuance related to status structures (a new function and new relation), then ask the AI to distribute the final meeting notes to the attendees (a new function for the AI and a new relation between the AI and meeting attendees). GenAI may also create novel relations by generating a synthesized summary of the meeting using text, images, sound, or videos that it generates based on the patterns observed. Or perhaps the AI can produce an assessment of the quality of the meeting based on the nature of the contributions, who participated, who spoke to whom, and the emotional tone of the meeting, thus introducing a new function (assessing meeting quality) and relations (e.g., if the assessment is shared with participants).
Constellations of relations, as described by Bailey et al. (2022), thus depict a complex compounding of relations between GenAI and humans and of the boundaries those compounding relations in turn create toward other constellations of relations. We posit that GenAI has the potential to significantly increase the complexity of relations, especially if the AI is engaging with these multiple systems without human intervention. It is therefore imperative that we focus on these constellations of relations to understand the continual unfolding of organizations and organizing. At the same time, as GenAI learns and absorbs more and more functions, the lack of direct supervision and legibility means that users and researchers have less visibility into the ways that these functions and relations are evolving.
Generative AI and Progressive Encapsulation
GenAI systems today are fundamentally based on deep learning enabled by neural network designs. At the level of the neural architecture, GenAI operates by rapidly strengthening and weakening relations between nodes in the network as the system gets exposed to new data. These architectures, capacities, and flexibility raise critically important questions of how GenAI progressively begins to encapsulate functions and relations in organizations and in processes of organizing.
Our concept of “encapsulation” is inspired by the idea of encapsulation in computer science. In that field, encapsulation refers to legacy code that remains intact (wrapped) and is then connected via interfaces to new layers that allow it to function (Sneed, 2000). This resonates with the system architecture of GenAI, in which novel inputs are increasingly encapsulated within the model by its learning mechanisms. It also alludes to the difficulty of “opening up,” inspecting, and understanding how the model learns. We borrow the term “encapsulation” to capture the way that GenAI can continually expand the “black box” around the relations on which it relies and the functions it performs, thus increasingly reducing the visibility of the relations and functions being enacted and the relational cascades being triggered. Due to the learning mechanisms underlying GenAI, this encapsulation is likely to be progressive. We conjecture that in a work environment with human users who build relations with GenAI and exchange services with the system, the system gradually learns about tasks. As the system services the user or organization on a broader range of tasks, it becomes increasingly convenient to accept these services in exchange for more task-related data and support (Von Krogh, 2018).
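The computer-science sense of encapsulation that inspires our concept can be illustrated with a minimal, hypothetical Python sketch (the class and method names are our own invention, not drawn from Sneed, 2000): legacy logic is kept intact but wrapped behind a new interface layer, so callers retain the function while losing direct visibility into how it is performed.

```python
class LegacyPayroll:
    """Legacy code left intact ('wrapped'), in the spirit of Sneed (2000)."""

    def compute_pay(self, hours, rate):
        return hours * rate


class PayrollService:
    """New interface layer: callers interact only with this class and
    never see or touch the encapsulated legacy internals."""

    def __init__(self):
        self._engine = LegacyPayroll()  # hidden behind the interface

    def monthly_pay(self, hours, rate):
        # The function is still rendered, but how it is rendered
        # is no longer visible to the caller.
        return self._engine.compute_pay(hours, rate)


print(PayrollService().monthly_pay(160, 50))  # callers see only the interface
```

The analogy is deliberately loose: in this sketch the black box is fixed at design time, whereas progressive encapsulation describes a boundary that keeps expanding as the GenAI absorbs additional functions and relations through learning.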
Specifically, we theorize that where GenAI promotes progressive encapsulation, more and more functions and relations become collapsed and encased. Encapsulation is a relational accomplishment. Inherently, the GenAI is likely ingesting data through multiple relations reflecting multiple functions and, from that, generating novel insights. The ways GenAI learns and generates insights are functions unlikely to be well defined or transparent. Thus, layers of relations and functions being enacted become increasingly obscured—that is, the AI typically encapsulates in a way that cannot be observed or recreated—so we can study relations and functions as input and as output, but the process for getting there is opaque (a black box). We argue that as encapsulation happens, the distinction between human organizing and AI organizing may become blurred and, over time, with the ever-expanding role and functionality of GenAI, inaccessible to humans. If a manager uses GenAI for hiring, for example, the AI would likely continually improve its ability to select promising candidates. Over time, as managers and applicants interact with the system, the evaluation algorithm may become increasingly refined and inaccessible to the manager, thus making the functions and relations employed in the task of hiring inaccessible as well. We argue that this is progressive, meaning that more and more functions are encapsulated as the GenAI learns and updates its model, thus increasingly doing more without supervision or intervention—but twinned to the effect that how it does so will become increasingly obscure.
To keep track of such progressive encapsulation, we need a theoretical apparatus to understand the importance of this process. Such theorizing needs to capture how relations are built and evolve between the AI and the organization’s members, teams, and other units. It begins with identifying the site of practice and work (work bench, team meeting, decision process, etc.), by looking not only at how GenAI becomes embedded in practice and used to perform work, but also explaining how that use changes over time as the GenAI offers novel functions. This means attending to and describing the functions of AI and humans, teams, and organization units as they evolve in this constellation of relations, and identifying the drivers and conditions that alter these functions over time. Thus, a relational perspective on GenAI will aim to explain the causes and conditions of progressive encapsulation as well as the changing constraints on the process. It seeks to explain how progressive encapsulation internalizes, builds, adapts, breaks, and reproduces relations in organizations.
Implications for Theory through the Illustration of Team Coordination
To ground our understanding of a relational view of GenAI and articulate how we might exploit the relational perspective to make it adept at this task, we draw on the study of groups and teams in organizations. Our intent is to provide an illustration as well as a thought experiment in which we speculate about possible (as yet unknown) evolving relations and functions. Specifically, we examine implications for theorizing about team coordination. We selected team coordination as an illustrative example because we anticipate that this area within organizations is ripe for GenAI. That is, demands for agile, effective, and rapid coordination in teams such as those tasked with drug delivery and technology development are escalating (Ekins et al., 2013), yet coordination on these teams is complex and emergent. GenAI could potentially enable team members to transcend coordination challenges and more effectively apply their expertise to the work itself. Were that to happen, it would likely demand that, as such new relations and functionalities take shape, we extend existing theories or develop new theories to explain team coordination and GenAI-fostered team dynamics.
Team coordination
The mechanisms and processes of coordination, that is, the management of task interdependencies, are among the most studied aspects of organizing in the field of management and organization theory. Coordination has both a formal character, such as found in role descriptions, organization reporting lines, and information flows, and an informal character where organization members evolve social practices that help them cope with interdependencies arising from challenging situations at work and in their assigned tasks (Ben-Menahem et al., 2016). During the past few decades, studies have demonstrated how digital technology assists organizations in more effectively and efficiently managing formal interdependencies between organizational units, teams, tasks, roles, and individuals. For example, digital project management systems can help plan activities, define milestones, alert project participants to upcoming deadlines, or store and update work outputs (e.g., Massey et al., 2003). While such conventional systems enable organization members to perform their work in a coordinated manner, they are rarely designed to actively mediate or manage interdependencies.
A novel capacity of GenAI is learning about patterns of work performed in organizations (Jarrahi et al., 2023; Larson & DeChurch, 2020; Ooi et al., 2023). For example, by monitoring messaging patterns between organization members, work rhythms, work inputs and outputs, shifting responsibilities, objectives, and performance, such systems may increasingly enter the domain of informal coordination, hitherto reserved for human interaction. In their work on drug discovery teams, Ben-Menahem et al. (2016), for example, found that team participants complemented formal coordination structures by evolving a set of informal coordination practices: each team member repeatedly conferred with other specialists, and anticipated and adapted their own work to the needs of those specialists. Thereby, team participants could ensure that the work output contributed to the overall performance and objectives of the team. We might conjecture that as GenAI learns about the patterns of work and deduces the needs of various team participants over time from these patterns, specialists’ needs could be synthesized into simple statements that prompt each participant to reflect on their work as it relates to the activities of other participants. As a side effect, team participants may be able to focus more attention on their own work while still remaining vigilant to the emerging needs of their colleagues.
To theorize about the changes in team coordination as a result of GenAI, a relational perspective would suggest examining the changes in functions and relations on the team, between the team and the larger organization, and with the GenAI. In the study described above, team members were aware of what others were doing and adjusted their behavior accordingly. It is well known that coordination problems arise due to bounded rationality and satisficing among organizational actors managing interdependencies (Puranam et al., 2012). Fed with structural and processual data, GenAI could potentially capture and analyze a vast amount of strong and weak recorded interdependencies that otherwise would be “invisible” to the human observer. Thus, GenAI might absorb entirely the function of informal coordination and minimize the relations between team members. Were this to happen, what effects might we see in terms of cohesion, team development, psychological safety, conflict, and other dynamics that are intertwined with team coordination?
In her study of medical teams in a children’s hospital, Mayo (2022) examined emergent interdependence, especially between a core team and a shifting group at the periphery. Importantly, emergent interdependence was organic and associated with better patient outcomes. With the introduction of GenAI, might emergent interdependence be disrupted because team members blindly follow guidance from the AI? Or might emergent interdependence be less necessary as the AI reliably identifies more nuanced requirements for coordination inside and outside the team? Using a relational perspective, we might want to theorize the constellation of functions being performed by the GenAI and how that interacts with the remaining functions of the human team members. Doing so would provide greater insight into how this new constellation of functions, shared between the team members (core and periphery) and the GenAI, would impact team processes and team effectiveness.
GenAI challenges us to rethink our theories of team coordination and how we theorize about it (i.e., processually rather than cross-sectionally). In addition to the above, it inspires us to ask what parts of the formal and informal management of interdependencies may become subject to progressive encapsulation. As team participants gradually make use of GenAI to formally coordinate their work, anticipate the team’s needs, and synchronize work, they may also build models that allow for even smoother formal and informal coordination of teamwork in the future. We need theorizing that captures what shapes team participants’ dynamic relations with AI to coordinate their work. Employing a relational perspective, we can theorize the conditions shaping how various informal coordination practices, beyond need anticipation and synchronization, change the functions of team members and the AI as tasks become gradually encapsulated and participants dynamically relate to the AI and other team members. Moreover, GenAI may create a “digital twin,” a digital representation of work processes over time that helps preemptively avoid problems of coordination (with models informed by past teamwork). An important question for future theory is how such twins fare when coordination needs change due to increasing uncertainty, new glitches, or crises.
Another pertinent question relates to teams’ ability to evolve critical coordination practices when other practices become encapsulated within the GenAI. As GenAI assumes an increasingly important role in teams, scholars could investigate how the encapsulation of some practices shapes the capacity for teams to evolve other critical practices that might mitigate uncertainty or crises. In many types of teams (emergency wards, SWAT teams, firefighting teams), the impulse and efforts of team members to anticipate needs and synchronize activities may bring forth a sort of “heedful interrelating” (Weick & Roberts, 1993): collective mental models, built in situ and held by team participants, of the distribution and interdependencies of tasks facing the team. As Weick and Roberts (1993) show, such heedful interrelating is necessary for teams to cope with unexpected and hazardous situations at work, in particular when some team member is no longer able to perform specific tasks due to an accident or a breakdown of equipment. To what extent is the emergence of such mental models, which revolve around relations between teams, individuals, tasks, time, and GenAI, shaped by a progressive encapsulation that gradually hides coordination practices from the view of team members? More generally, how would the progressive encapsulation of coordination practices affect organizations’ capacity for rapid response to major crises, for which, as Weick and Roberts show, heedful interrelating is indispensable?
We also speculate that GenAI may affect the way we need to theorize about teams and team boundaries. Even in the absence of GenAI, research has begun to document the fluidity of team boundaries in modern-day work (e.g., Mortensen & Haas, 2018). A body of research on multi-team systems (e.g., Zaccaro et al., 2020), has described how networks of teams in organizations coordinate to achieve goals and has advanced theory to explain their processes and the emergent states of these multi-team systems. Flexible team structures therefore enable organizations to allocate specialists to projects whenever and wherever needed (e.g., use of star scientists) and often manifest in multi-team structures in which sub-teams are embedded within teams, or in which teams cross each other’s boundaries (Ben-Menahem et al., 2016). With the advent of GenAI, team boundaries may become increasingly porous, as GenAI has the potential to reduce coordination costs across boundaries.
This conjecture on team boundaries raises important questions for theorizing the impact of GenAI on teams. First, as we argued above, GenAI may smooth formal coordination structures and informal coordination practices, enabling a dynamic, flexible, and task-oriented allocation of team participation. While past technology allowed participants to schedule their work (e.g., Outlook), GenAI can now monitor work processes across teams in the organization and guide participants’ dynamic transitions between teams. GenAI may be able to assist participants in preparing their work and trigger their reflection on what type of input is needed as they embark on their work together. As GenAI learns about work across teams and participants, and gradually encapsulates coordination, it may also propose novel team compositions and thereby shape team membership and dynamically rearrange boundaries. As scholars advance theory on GenAI and teams, we anticipate that considerations of team boundaries will be central. We may even need to ask what it means to be a team in the world of GenAI. Mortensen and Haas (2018) argue that contemporary teams often display fluid participation and a sharing of participants across many teams. Modern teams, they argue, should therefore be conceptualized as “dynamic participation hubs.” GenAI may be central in assembling (and reassembling) and organizing dynamic participation hubs within and across organizations.
In arguing for a more flexible and dynamic view of team boundaries, Mortensen and Haas (2018) recommend we move theorizing away from conventional boundary spanning towards “resource brokerage.” GenAI may be ideally placed to broker human resources, physical resources (laboratories, pilot manufacturing facilities, meeting rooms), and digital resources (computation, storage) between teams as needs for resources arise. Such brokerage may become increasingly encapsulated as GenAI learns about the utilization and allocation of resources over time and combines this knowledge with team objectives and work schedules. Thus, we need theory on how GenAI relates to resource holders and how coordination unfolds between humans and digital, data-driven systems. A relational approach is well positioned to examine these relational cascades. We also need to understand the intricate social dynamics around digital brokerage. For example, GenAI may reduce the level of conflict between resource holders and resource takers through progressive encapsulation, but this may come at the expense of resource holders (i.e., managers or team members) feeling disempowered and their resources being expropriated. It may also come at the cost of weaker social ties and a reduced sense of team identity, and perhaps undermine a sense of psychological safety (as relations within the team degrade and members’ actions are constantly recorded and under surveillance). Mortensen and Haas (2018) underscore such dynamics by pointing to competing demands as a result of deteriorating team boundaries. GenAI is also likely to accelerate the “boundary blurring” between teams that Mortensen and Haas documented. An examination of the changing relations and relational cascades, especially power relations and shared identity, between participants, including the AI, provides a starting point for further theory development.
Ideally, building new theory would result from access to algorithms and foundation models, but barriers to access may make this difficult. Alternatively, scholars can examine the forces that shape GenAI, including those that shape inputs, the selection of algorithms, responses to AI-generated guidance (e.g., resistance to algorithmic management, as documented by Cameron, 2021), and outcomes, including how (visible) functions evolve for individuals, teams, and organizations.
Discussion
We began this article by noting that the relational perspective on emerging technology and organizing has yet to sufficiently incorporate GenAI’s malleable features of learning, adaptation, and, especially, encapsulation in its theorizing. More precisely, it remains unclear how novel and existing relations, and the functions performed within these emergent relations, become increasingly internalized by GenAI as a consequence of how that technology is designed and operates. To meet this challenge, we began by offering an introduction to the design and operation of GenAI as a particular type of AI built on recent advances in computer science and computational linguistics. Drawing on the idea of cascading relations from Bailey et al. (2022), we theorized how current and future functions performed by GenAI across relations in organizations may lead to a process of progressive encapsulation, whereby an increasing set of tasks could be performed and internalized by GenAI. Based on this conjecture, we proposed how a future theory of progressive encapsulation may inform the understanding of changes in team coordination, as teams and individual members evolve relations and utilize existing and new services offered by GenAI. We pointed to the possible positive effects of progressive encapsulation on the coordination of team processes, such as reduced effort by individuals to schedule and synchronize their work, but also to potential unintended consequences, such as less satisfying social interactions, fleeting team identity, and a reduced sense of power, ownership, and meaning at work. As mentioned earlier, our stance on GenAI is neutral and we anticipate both positive and negative effects, many of which will be enmeshed with practice.
The theory of progressive encapsulation needs refinement in future work. Broadly speaking, we need more conceptual understanding of the forces in and around organizations that accelerate and arrest such encapsulation. While these forces may be technological, such as novel versions of transformer architectures and more targeted foundation models, they may also be organizational, such as the potential loss of intellectual property stemming from widespread use of GenAI, or even societal, such as the alienation of organizational members from their organizations. We propose that theorizing such forces begins by answering some salient research questions in management and organization studies and considering such questions in five domains of inquiry. First, since power may in essence be understood as a relational concept (Bailey et al., 2022; Reed, 2012), our theory raises the question of how progressive encapsulation shapes and reshapes power structures in organizations. As coordination can be increasingly performed by GenAI across a broader set of roles, functions, units, and tasks, traditional power stemming from actors’ relations to and control over scarce resources may shift towards AI systems. What are the effects of such changes on power holders and resource brokers, and how do they respond to these changes? Are there alternative sources of power that workers seek or that organizations can provide? When is resistance to the implementation of GenAI in organizations a result of perceived power loss, rather than the lack of trust in such systems that is often assumed? Does progressive encapsulation over time drive inclusion and empowerment of marginalized groups, or does it instead reinforce the exclusion of these groups and fossilize social strata with ever decreasing visibility into the relational cascades? How do the relations between sources of data, algorithms, decision makers, and organizations factor into these actions?
Do hitherto marginalized and disempowered groups in organizations gain a broader influence on organizational decision making? Or does the largely impersonal coordination by GenAI based on historical data on participation in decision making sever relations and further marginalize such groups?
Second, connecting to an old debate on the effects of automation in organization and management studies (Acemoglu & Restrepo, 2019; Faunce, 1965; see also the debate in Raisch & Krakowski, 2021, and Tschang & Almirall, 2021), as progressive encapsulation evolves across levels and functions in organizations, how does the process shape the division of labor? As we discussed earlier, patterns of coordination may change dramatically with GenAI supporting teamwork. Over time, coordination by GenAI will shape work as well as where work gets done in organizations. Such coordination may shape the specialization of work in ways we have not seen before. Could organizations that deploy GenAI for coordination purposes evolve a sort of “hyper-specialization,” where general tasks are internalized by GenAI (e.g., translation, accounting, auditing) and more specialized tasks (e.g., R&D, equipment sales, geothermal analysis) are performed by humans? Or will we see the onset of “hyper-generalization,” in which such specialized tasks are performed by GenAI and are no longer available or visible to organizational members, who increasingly attend to general tasks, such as systems engineering, supplier negotiations, strategy, and leadership? Or will these patterns of the division of labor amalgamate?
Third, foundation models built on vast and diverse data give GenAI enormous potential to generate novel and diverse ideas (Epstein et al., 2023), making them a potentially valuable source of creativity in any team or organization. However, how can GenAI become a valuable source of ideas within team creative processes, and how do the relations and functions between GenAI and other team members evolve during idea generation? While GenAI can actively generate novel ideas and interact with other team members to foster creativity at the group level, there are many other subtle but important functions fulfilled by creative processes in teams, such as creating a shared sense of purpose, sharing tacit knowledge, fostering a group climate conducive to collaboration, or building commitment to the ideas generated. What is the impact of progressive encapsulation on these auxiliary processes?
Fourth, GenAI can be simultaneously present in many teams throughout an organization, which should enable improvements in knowledge sharing between teams. Such omnipresence, enabled through progressive encapsulation, could help organizations reuse and share knowledge across teams more effectively and, at the same time, counter the wasteful duplication of work. Yet these brief ideas need solid theorizing to be well understood. Under what conditions does GenAI work effectively in multi-team systems, and what experiences do human participants have as these systems increasingly encapsulate teamwork? What alternative sources of power emerge if knowledge confers less power?
Fifth, there are many micro-level questions that follow from a relational perspective and progressive encapsulation. As an increasing set of services around tasks is internalized by GenAI, what are individuals’ psychological responses to the new relational fabric in which they are embedded? How do changes in functions and relations shape employee outcomes such as stress, creativity, sense of control, psychological safety, turnover intentions, and so forth? Under what conditions does progressive encapsulation lead to a loss of orientation for individuals—an unmaking of meaning at work—and under what conditions does it not?
As these five areas of inquiry indicate, we call for process theories on emerging technologies to complement variance theories on AI. Where the latter traditionally stipulate features of technology and their relations to individual, team, and organizational outcomes, process theories seek to unravel the emerging nature of technology and how it relates to the processes of organizing. By following the introduction and diffusion of GenAI over time, scholars will be able to investigate how relations and functions between the technology and organizational actors become subject to organizational, societal, and technical forces. This may demand a distinct style of process theorizing that Cloutier and Langley (2020) refer to as “recursive,” where reasoning around the forces becomes increasingly embedded within the “process ontology.” Given the functioning and design of GenAI, we recommend focusing on inputs and outputs in the relations between GenAI, individuals, teams, and other units, and on observing and explaining the forces that accelerate or arrest progressive encapsulation. The theoretical focus would be on the various conditions that may explain the speed, stability, amplification, evolution, and so on, of progressive encapsulation.
Since ChatGPT was launched late in 2022, most of us have been awestruck by the versatility and potential of this technology, but also alerted to its undeniable risks. As a scholarly community, we have reacted very differently to these developments. Some scholars argue that we should avoid theorizing about and researching AI because it is a technology undergoing such rapid change; the findings we offer today are likely obsolete tomorrow, so why bother? We take a different position in this paper and contend that more rather than less scholarly work is needed on AI as an emerging technology, because it already impacts and will increasingly shape organizing. We are now at the cusp of digital technologies transitioning from inert entities to relational agents building, maintaining, and severing relations with social actors in organizations. GenAI is just the latest example of such emerging technologies that, due to their massive computational capacities and the breadth of functions they offer, will fundamentally alter the practice of organizing. To prepare for a future in which such emerging technologies are increasingly present, we need “prospective theorizing” with speculative vigor (e.g., Gümüsay & Reinecke, 2024)—future-oriented, imaginative, and perhaps even value-led (but logically developed) frameworks that allow us to understand both the opportunities and the risks of GenAI. Advancing a theory of progressive encapsulation, as we have done in this paper, is just one attempt in this direction.
As with most disruptive innovations, there is of course much uncertainty surrounding the uptake of GenAI in organizations. However, unlike with many digital innovations in the past, organizations are now poised to benefit from these technologies at scale. During the past decade, organizations have streamlined processes, intensified data capture, implemented analytics, created interfaces between systems, rolled out novel and user-friendly applications, and greatly expanded their data storage and processing capabilities, whether in-house or with cloud providers. All of this and more has prepared many organizations for the rapid introduction and deployment of GenAI across a range of activities (see also Wachter & Brynjolfsson, 2024).
Acknowledgements
We are grateful for comments and support from Rudolf Maculan, and comments by Joep Cornelissen, Angelos Kostis, and Mark Mortensen.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work is based on research supported by the National Science Foundation under Grant No. 2211943 to the first author and a research grant from the Swiss National Science Foundation (Grant no. 100013_197763) to the second author.
