Abstract
The past decade has witnessed a surge of interest in how AI will transform work and organizations, presenting scholars with the difficult task of studying emergent technologies. This article advances a relational perspective that emphasizes how firms and workers are situated in, and respond to, extra-organizational pressures and forces. We examine four sets of relationships that can frame scholarly engagement with AI: finance and technology, monopoly and competition, producers and adopters, and high-status and low-status workers. We argue that this perspective illuminates connections between levels of analysis that might otherwise seem unrelated, revealing how AI tools and practices are situated within, and contribute to, broader social, financial, and industrial dynamics.
The past decade has witnessed a surge of interest in how AI will transform work and organizations, once again presenting scholars with the difficult task of studying emergent technologies. As tech companies and their critics forecast purportedly unprecedented benefits or harms, analysts of organizations have endeavored to distinguish rhetoric from reality. Indeed, AI hype has become so prevalent that its contours and consequences for organizations have themselves become objects of study (Hoffman et al., 2022; Vinsel & Russell, 2020). A further challenge is identifying what, exactly, we are examining. “AI” is not a tangible, coherent thing; its meanings can vary across contexts, and its boundaries are subject to continual contestation (Joyce et al., 2021).
What do we mean by AI? Although the term has been in use since the 1950s, understandings of AI have taken different forms. The current usage typically refers to AI based on machine learning, a process in which software, instead of being explicitly programmed to follow a set of steps to generate an output, makes predictions, executes decisions, and generates content based on the analysis of large volumes of data (Gutierrez Lopez & Halford, 2024). According to some critics, much of what is touted as AI is a form of “snake oil,” shrouded in hyperbole and misinformation (Narayanan & Kapoor, 2024). Leaving aside the contentious issue of whether something is “really” AI—along with its exaggerated capabilities and overvalued companies—it is clear that the platformization of software systems implies some material shifts. These include the extraction of massive amounts of data, huge aggregations of computational power, and the rise of highly modular computing tools aimed at prediction, automation, and content generation.
How does one go about studying nascent and nebulous socio-technical developments? In this essay, we offer one approach. Rather than presenting a list of potential research areas or reflections on theoretical debates, our aim is to outline a critical orientation that involves multi-level, relational thinking. New technologies, after all, are always situated within larger structures and logics (Narayan, 2024). We examine four relationships that play a profound role in shaping the production and proliferation of AI tools and related technologies. We argue that this perspective illuminates connections between levels of analysis that might otherwise seem unrelated or external to the scope of study (Hart, 2018). Our approach builds on prior work including Irani (2015, 2019), Bailey and Barley (2020), Lindtner (2020), Dorschel (2022), Anthony et al. (2023), and Nieborg et al. (2024).
When it comes to ascertaining the material practices and effects of AI, we can start by uncovering how new technologies interact with the organizational contexts in which they appear–attending, for example, to the distribution of roles and authority (Brayne, 2017; Christin, 2017), or the impact of analyst predictions on actual adoption (Pollock et al., 2022). The organization need not always be the focus of analysis, nor the unit of inquiry. Rather, organizations offer themselves as a useful starting point, a place to locate oneself in order to render processes and technologies visible (Monteiro et al., forthcoming). Analyses of the organizational realm allow for a mid-level positioning between market and ecosystem actors (e.g., sellers of AI, consultants, investors) and workplace-level dynamics (e.g., skills, roles, workflows, teams). A relational perspective also considers different organizational forms and the linkages between them. For instance, the unique structures, financial logics, and organizational practices of asset-light startups are dependent on technological infrastructures maintained by the large, multidivisional corporations now colloquially termed ‘Big Tech.’
The firms driving AI development are embedded in unstable webs of relationships with investors, regulators, labor organizations, competitors, large tech platforms, consortia, and so on. For companies both within and beyond the tech sector, forces that may appear to be “extra-organizational” can have a substantial impact on when and how AI is developed and put to use. Indeed, the idea of firms as independent, bounded domains of planning and control is somewhat outdated, making relational thinking essential. Analyses that situate organizations within their institutional environments—acknowledging the multitudinous and at times contradictory pressures they face—can capture dynamics of AI's development and consequences that other approaches may miss (Bailey & Barley, 2020). The relationships we explicate here are relevant to the study of any of the technological movements associated with the rise of AI (e.g., platforms, algorithms, cloud computing).
In the remainder of this essay, we highlight four sets of relationships that can frame scholarly engagement with AI: finance and technology, monopoly and competition, producers and adopters, and high-status and low-status workers. We believe that each holds promise for advancing studies of AI, work, and organizations.
Relationship 1: Finance and Technology
Financial practices and investor logics have come to play an increasingly important role across the economy (Krippner, 2011). Researchers have examined how investors—both individual and institutional—wield their power to shape the priorities and management of firms. For example, scholars investigating the restructuring of publicly traded corporations have shown how the imperative to redirect firms' resources to investors leads to the erosion of wages, benefits, and job security for workers (Fligstein & Goldstein, 2022; Lazonick & O'Sullivan, 2000).
The emergence of the tech sector as we know it has gone hand in hand with the rise of a distinct form of financing known as venture capital (VC), making it important to revitalize this line of theorizing. In contrast with the wealth of studies on the “shareholder value” approach to governing publicly traded corporations, scholars have only recently begun to document the rise of venture capital and its consequences for privately held organizations and the economy more broadly (Klingler-Vidra, 2016; Langley & Leyshon, 2017; Rahman & Thelen, 2019). Venture capital investors build portfolios of startups with the potential to achieve rapid and exponential growth. These companies frequently burn through substantial stores of funding in their quest to displace incumbents. When startups successfully scale, profits and growth are often decoupled: a firm's valuation can skyrocket and generate windfall returns for investors regardless of whether it is able to generate operating profits (Kenney & Zysman, 2019).
Platform companies like Google, Facebook, and OpenAI—all of which accepted venture capital funding to fuel their growth—have been at the forefront of innovations in machine learning and AI. However, the relationship between corporate ownership structures and processes of technology development has yet to receive sufficient scholarly attention (Shestakofsky & Petre, 2024). Because startups are a major site of AI innovation, it is important to understand the distinctive consequences of the VC ownership structure for the development of technologies and labor relations inside firms.
In early-stage, venture-backed platform companies, managers face intense pressure to deliver rapid growth. This often results in efforts to increase key metrics through continual experimentation with the platform's rules and features. Work is organized around supporting experiments and managing their consequences for users and the firm, and decisions about which product features to pursue may be determined in large part by managers’ perceptions of investors’ preferences (Shestakofsky, 2024a). Nascent firms’ business models are often ephemeral and unsustainable (Vertesi et al., 2020). The users who rely on these platforms may be financially incentivized to adopt the product, only to find that rewards are ratcheted back as the startup evolves and investors shift their focus away from user growth and toward revenue generation (van Doorn & Chen, 2021). Careful examination of the relationship between investors and firm managers is thus critical to understanding how platform companies and their ties with users are organized.
Companies with other ownership models may face different pressures from investors, which may influence AI development—as well as the relationship between firms, employees, and users—in different ways. Braun (2022) argues that our models of shareholder value capitalism are in need of an update: the emergence of large asset managers that profit from extracting fees from clients may signal the decline of active investor involvement in the governance of publicly traded firms. Some have noted that, in the wake of Amazon's long road to profitability, investors in public markets are increasingly tolerant of startups that remain unprofitable long after their IPO (Driebusch & Farrell, 2018). Private equity firms may purchase an established company with the intention of consolidating a fragmented market or selling off assets (Olson, 2022). Future research on how particular investment logics are “imprinted” onto capitalist firms (Cooiman, 2024) can help us better understand the variety of contexts in which new technologies are generated, as well as their consequences for workers and organizations.
Relationship 2: Monopoly and Competition
Making sense of the rise of AI and digital platforms requires us to theorize the relationship between monopoly and competition. Although startups are a source of much AI development, AI's contemporary emergence is directly tied to Big Tech. Large tech companies often seek to acquire upstart rivals to cement their own monopolistic position in the market; these acquisitions in turn support the VC ecosystem by providing investors and entrepreneurs with opportunities for highly profitable “exits.” In some instances, large firms directly support startup growth through corporate venture capital funds (Rossi et al., 2020).
Meanwhile, tech behemoths like Amazon, Google, and Microsoft have established themselves as crucial providers of AI infrastructure such as cloud computing (van der Vlist et al., 2024). For companies like Meta, too, there is a clear link between the firm's massive scale as an owner of social media platforms, its internal cloud infrastructure, and its ability to develop AI tools. As such, the advancement of AI is predicated on monopolistic players with deep pockets, data access, and infrastructural ownership (Ferrari, 2023; Luitse, 2024). Cloud architecture is the infrastructural bedrock that makes AI possible by providing the enormous storage and processing capacity essential for data- and compute-intensive AI technologies (Narayan, 2022, 2023b). Without this infrastructure, AI tools—including generative AI—would simply not exist. This compels us to think more closely about the heavy concentration of assets within the tech sector, and to re-engage age-old issues of monopoly power, corporate strategies, and their effects.
The monopoly-competition relationship influences the structure of the AI industry and also reveals the internal dynamics of the tech industry at large. Centering the tech industry is useful—not only for its own sake, but also, as the next section elaborates, because it introduces new competitive and organizational logics into the rest of the economy (Narayan, 2024). Tech companies deploy new organizational strategies of growth and expansion that include data-intensive business models, the use of speculative finance, cloud infrastructure, and platform ownership. The tech industry as a whole concentrates assets and resources in ways that have significant consequences for industries and organizations.
Monopolistic trends within the tech industry fuel the rise of AI. But this is only one of their effects. Others relate to the new entrepreneurial risks and volatilities created by centralized ownership of critical resources (Cutolo & Kenney, 2021). Platform workers, too, experience the concentrated power of platform owners in the form of new modes of control and precariousness enacted via algorithmic management (Wood et al., 2019). The continuous introduction of new technologies into workplaces—often poorly tested and lacking in safeguards—is another key consequence. Finally, the steady stream of new technologies and business models pressures incumbents in other sectors toward organizational change. This is fertile terrain for revisiting theories of organizational and workplace change in the age of digital capitalism.
In analyses of the contexts and conditions in which organizations operate, the structural dynamic between monopoly and competition is likely to be particularly relevant. Here a relational perspective might see monopoly more as a powerful corporate fantasy or goal than as a static, achieved state. As tempting as it can be to see successful capitalist firms as islands of power and rigid control, a more dynamic view understands the boundary between the capitalist enterprise and its volatile market as porous. Few (if any) organizations can render their environment entirely predictable and stable. As a corollary, competition can be seen as both a condition and an organizational activity born out of this goal (Christophers, 2016). It represents an ever-changing set of practices that emerge from a fundamental rivalry between capitalists over their share of the market.
In the context of the digital economy, monopolies are always in a state of becoming (Narayan, 2023b). Why? Because on their own, platforms have little value. Platform owners depend on a host of semi-independent third parties (Jacobides et al., 2018). Platform ecosystems, which include AI-producing companies, are now recognized as increasingly important sites of innovation and work. While platform owners might attempt to compete with these third parties, absorbing them entirely can disincentivize third-party use of the platform. For this reason, tech platforms must maintain the facade of being neutral and fair intermediaries. Large tech firms do encroach on smaller players’ territory, but eliminating the space for third parties risks a detrimental impact on their monopolistic aims, as the value created by third parties is crucial to the growth of the platform. Moreover, hostile platform policies can lead to a revolt among third parties (Meaker, 2024) and invite antitrust action. A platform ecosystem's level of “openness” and monopolistic character is thus negotiated and contested.
Relationship 3: Producers and Adopters
Whereas the previous section outlines a relationship that structures the production of AI, this one focuses on the logics enabling AI to proliferate. A relational approach considers the tech sector's entanglement with a wide gamut of other domains—the vast “outside” of the digital economy. The tech industry holds a unique position within the economy: because it produces technologies and business models that travel well beyond the remit of its own sector, it should be understood in relation to the full range of social domains and sectors upon which it impinges. Cooiman (2024, p. 587) uses the term “imprinting” to discuss venture capital's “power to structure” startup business practices. The ability to imprint can be used more generally to frame the tech sector's influence on other domains: it activates experiments in new practices, products, organizational forms, and markets among non-tech companies. The platform, AI, and data-intensive practices of the tech sector have a tendency to proliferate.
This catalytic aspect—here the influence of technologies, finance capital, business models, and organizational practices reverberating outward from the tech sector—can be viewed in either spatial or industrial terms. In the spatial sense, tech products and platforms have a profoundly transnational impact and scope: patterns of adoption and use cut across national borders. For example, researchers studying micro-work platforms show how the boundaries between labor pools have been undone, with employers in advanced industrialized countries gaining almost instantaneous access to digital workers in Africa and Asia, enabling a race to the bottom with regard to wages (Gray & Suri, 2019; Irani, 2015). Cloud computing providers such as Amazon Web Services have altered the market for traditional IT services, with a strong impact on India's IT sector, triggering restructuring and change (Narayan, 2023a, 2023b). Uber's platform is available in 70 countries, Netflix in 190, Google in 219. What enables such portability and scale? How do these developments alter our understanding of the global economy?
Four decades of economic globalization precede the current conjuncture. This means that there are existing debates and frames we can repurpose to sharpen analysis of the global connections between the tech industry, regional economies, labor pools, and markets. For example, new technologies travel via relations forged through pre-established global supply chains. We must therefore position the digital economy in relation to the wider terrain of global political economy–and to do this, we can engage the rich literature on global production networks and value chains (Coe & Yeung, 2015; Grabher & van Tuijl, 2020). In spite of the novelty of emerging digital technologies, it is important to recognize that their proliferation is facilitated by existing industry structures that are decidedly global and transnational in scope, justifying a rejection of methodological nationalism.
Apart from a spatial and geographical reading of relationality, an industrial reading is also necessary. There is now ample evidence of platform models and technologies interrupting longstanding industry structures in domains as varied as transport, music, journalism, advertising, and banking. With AI-based shifts, too, we see a range of firms and users laying claim to AI's functions and uses. How do new technological developments gain momentum? Scholarship might either adopt a Schumpeterian focus on disruptive new entrants as engines of change (Schumpeter, 1950), or else focus on how large incumbent firms adopt new practices and technologies. For instance, traditional brick-and-mortar retailers have introduced digital platforms in response to the rise of Amazon's retail platform. Netflix introduced a streaming platform, which then compelled traditional media houses to adopt platform strategies of their own. The tech sector puts pressure on traditional firms to alter their practices. Here we might borrow from theories of market-making that see the market as a site of struggle, where there are coalitions, movements, and battles over domination (Fligstein & McAdam, 2019). A relational approach might therefore engage the fraught terrain of the AI market, the practices of new entrants, and the processes through which both technological affordances and “hype” coalesce to build momentum around AI. Here, the focus would shift to decision-making and organizational change within firms that adopt new AI, platform, or data-intensive business strategies, with the role of organizational politics in shaping adoption being key.
In either case, it is important to observe the specific processes that contribute to change. After all, technological change is not predetermined (MacKenzie & Wajcman, 1999), but unfolds in messy ways through organizational and market-focused power struggles. What is happening within the organization that chooses to adopt AI tools? Who is making these choices, and how is this authority developed? Organizations are internally divided and rife with conflict between divisions and interest groups (Narayan, 2023a; Vidal, 2022). Thinking relationally involves moving out of the tech sector per se and looking inside organizations that devise new practices as they consider and engage with AI.
Relationship 4: High-Status and Low-Status Workers
A relational perspective can also be fruitfully applied to the analysis of work and workforces in the age of AI. The term “tech sector” typically calls to mind well-paid technical and managerial professionals. Tech companies operating on the front lines of innovation are known as sites of high-risk, high-velocity change (Hoffman & Yeh, 2017; Koning et al., 2022). Academic accounts of tech work often investigate the experiences and trajectories of entrepreneurs and high-end developers (Sorenson et al., 2021). Many studies examine the everyday practices of the high-status workers who manage emerging firms and design and market their products (Ross, 2004; Stark & Girard, 2009; Neff, 2012; Turco, 2016).
Yet, as in the past, today's technological innovations are facilitated by the efforts of low-status workers who occupy marginal positions within both firms and the global economy. With the widespread adoption of machine-learning systems in the 2010s, researchers turned their attention toward the behind-the-scenes “data workers” whose labor supports the smooth functioning of algorithmic systems. AI companies in the Global North often outsource or offshore essential tasks—such as labeling images, screening out inappropriate content, or rating the output of models—to contract workers located in countries with lower prevailing wages (Gray & Suri, 2019; Roberts, 2019). Advances in AI have gone hand in hand with new techniques for sourcing, organizing, and managing low-wage workers—for example, through novel digital platforms that create planetary markets for labor (Graham & Ferrari, 2022). These developments highlight the importance of investigating the connections between high-status and low-status work in and around AI systems. Studies of AI development that erase the contributions of peripheral workers risk obscuring key processes through which value is created and appropriated in tech companies.
A crucial insight emerging from the literature on data work is that high-status workers’ experiences and identities are both enabled and shaped by their ties to marginalized workers (Dorschel, 2022; Lindtner, 2020)—even when low-status workers are, by design, hidden from view. For example, “crowdwork” platforms like MTurk shape software developers’ subjectivities: by anonymizing and agglomerating workers into application programming interfaces, they allow engineers to view themselves as “innovators” rather than “employers” (Irani, 2015). Indeed, tech companies’ much-heralded office cultures—characterized by “openness” (Turco, 2016) and “speculative optimism” (Shestakofsky, 2024a)—are predicated on a geographic division of labor that separates workers with different degrees of authority, compensation, and voice rights.
While the vast inequalities dividing elite workers in the Global North from their poorly compensated counterparts in the Global South are clear, the consequences of this interdependence for low-wage workers are not monolithic. As the number of consumer-facing AI applications grows, firms have become more concerned with the accuracy and consistency of data workers’ output. Instead of sourcing data workers from crowdwork platforms, AI companies have increasingly turned to outsourcing firms to organize and manage their offshored workforces. These arrangements may offer data workers greater job stability, along with more opportunities to build skills, relationships with colleagues, and even ties with software developers (Le Ludec et al., 2023; Shestakofsky, 2024b). When companies seek subject-matter experts to train generative AI models, wages and skill requirements may increase further, and firms may source more data-annotation workers from countries in the Global North (Lu, 2024). Meanwhile, whereas technologists may find it invigorating to experiment with and iterate emergent AI systems, frontline customer support workers can face substantial stressors from users who are dissatisfied with confusing or fast-changing systems (Shestakofsky, 2024a). Thus, how firms organize the lower-status labor in and around AI systems can vary, with different consequences for workers inhabiting different roles and structural positions within both specific organizations and the global economy.
As AI continues to evolve, further research can shed light on shifting practices of data work. For example, some have noted the recent emergence of AI “red teaming,” where data workers help software developers identify weaknesses in LLM-powered chatbots by engaging in adversarial testing. In these instances, workers may be tasked with goading software into producing harmful content (Zhang et al., forthcoming). The consequences of these practices for data workers—and for the ties between high-status and low-status workers more generally—remain to be seen.
Conclusion
This essay has outlined four relationships that matter to the study of AI and related technologies, making the case that AI tools and practices are situated within, and contribute to, broader social, financial, and industrial dynamics. Relational thinking can help researchers see connections that might otherwise be rendered invisible due to disciplinary priorities or pre-existing typologies (Hart, 2018; McMichael, 1990). At a time when it can be difficult to separate AI fact from fiction, organizational analyses that center the multifarious relationships in which firms are embedded represent a promising avenue for inquiry.
We argue that AI is an unstable, inchoate technology–a loose industry term to which various meanings and products are affixed. Instead of studying outcomes by attempting to analytically isolate “AI,” researchers should examine it alongside an entangled set of socio-technical developments (e.g., big data, algorithms, cloud computing, platforms) and market relations. Moreover, identifying structuring relationships helps bring to light where organizational and technological outcomes—within the tech industry and among its broad-ranging customers—come from.
We recommend prioritizing the influence of financial logics (R1) and market-level conditions (R2) in analyses of organizations’ internal practices–and, indeed, viewing the tech sector's inventions and ideologies as inducing structural pressures on other industries as well (R3). As AI evolves, and as organizations adapt to these changes, the interdependent work practices and labor relations defining technology creation (R4) will also require our attention. Combating rampant speculation about AI's effects on work and organizing requires both granular analysis of how specific technological tools are used in situ and more ambitious, multi-level theorizing (Bailey & Barley, 2020; Shestakofsky, 2020; Thompson & Laaser, 2021). This essay has outlined four relationships that can guide this process.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
