Abstract
Digital twins—virtual representations tied to physical objects, processes, people, or environments—are touted by think tanks, consultancies, tech giants, and startups as crucial to what the World Economic Forum calls the “Fourth Industrial Revolution.” With real-time data generated via sensors, digital twins promise to autonomously monitor, simulate, and even modulate the “real” world. While the market for digital twins is estimated at $10bn globally, the value they create beyond their commodity status can be difficult to ascertain. Examining digital twins at scales from the individual to the warehouse to the planet, this commentary argues that digital twins exploit existing but disaggregated or noninteroperable systems by assembling them into new arrangements that intensify digital enclosures. Digital twins invert the traditional hierarchy between managerial planning and the material world of bodies, labor, and enterprise operations because the coordination of decision making is posited as a site of value creation that can be undertaken by new digital twin platform architectures.
Digital twins are virtual replicas of physical systems built, operated, and optimized using data from database records, computer-aided design (CAD) models, and sensor and Internet-of-Things (IoT) devices. Unlike other representations, digital twins enable a two-way flow of information such that the virtual environment can control the physical system. With real-time data generated via sensors, digital twins feed powerful machine-learning operations capable of autonomously monitoring, simulating, and modulating their referent “real” world. While the global market for digital twins is projected to reach $74bn by 2027 (McKinsey, 2023), the value they provide beyond their status as commodities is a function of the forms of data collection and control they enable. Value from digital twins thus takes multiple forms: as a salable product, increasingly in the commercial structure of the platform; as a driver of performance, efficiency, and productivity gains; as the coordinating technology for bodies, processes, and environments; and as a mechanism for harnessing probabilistic futures through data-driven simulation. This commentary offers a necessarily brief account of these shifting forms and mechanisms of value production as digital twins assemble bodies, processes, and environments into cybernetic platform architectures.
Digital twins are the fruition of decades of research and investment in computationally complex but historically and technically disaggregated systems (networks, sensors, and computer graphics), bolted together into commercial products that seek to capitalize on billions of dollars of already sunk investment in computational infrastructure. The development of digital twins extends and consolidates the ongoing “sensorization” of lived space and bodies (Andrejevic and Burdon, 2015). They combine increasingly rich and comprehensive sensor data to reconstruct spaces and bodies so that these can be monitored and modulated in real time. By redoubling the physical world in virtual form, they promise to compress the temporal dimension of feedback-based control. As big tech increasingly shapes the digital twin market, digital twin products are becoming platform architectures that structure value generation (Sanchez-Cartas and Leon, 2021) by coordinating and managing material information networks and environments (Mackenzie, 2017). As we argue here, traditional hierarchies that situate managerial planning above the material world of labor are inverted by the digital twin, as the automated coordination of labor itself becomes the site for value production.
Originating in engineering, manufacturing, and building information management (Grieves and Vickers, 2017), digital twins simulate, monitor, and optimize machines, processes, or infrastructure in contexts where precise data-driven simulations of equipment or assets are key to lifecycle management. In other words, they are directed toward the digital capture and control of assets within a defined system rather than immediate operational efficiencies or short-term system optimization. Digital twins are posited as paying dividends in the maintenance and adaptation of systems over time, enabling optimized and efficient responses to changing environmental conditions and contexts rather than privileging swift and immediate gains. Real-time data streams from proliferating IoT devices and sensors keep the digital twin synchronized with the physical system, powering the feedback loop and providing the continuous monitoring that allows digital twins to be touted as technologies of automated optimization and performance enhancement. Data—securing its smooth and uninterrupted flow between real and digital environments—is the key to claims regarding real-time monitoring, predictive analysis, and continuous improvement. Constituting data in new ways by assembling, aggregating, integrating, and coordinating existing data streams is thus central to the value proposition of digital twins themselves.
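This feedback loop can be stated schematically. The sketch below is our own minimal illustration, not any vendor's implementation: sensor readings update the virtual state (physical to virtual), and deviations from the model's setpoint flow back as control commands (virtual to physical). The class, the proportional correction rule, and the polling period are all illustrative assumptions.

```python
# Schematic sketch of twin-referent synchronization (illustrative only):
# readings flow from the physical asset into the virtual model, and
# corrective commands flow back out on a fixed polling period.
import time

class DigitalTwin:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # desired state held by the virtual model
        self.state = setpoint     # twin's current estimate of the asset

    def ingest(self, sensor_reading: float) -> None:
        self.state = sensor_reading  # physical -> virtual: replica tracks real

    def command(self) -> float:
        # virtual -> physical: a simple proportional correction stands in
        # for whatever optimization logic a real system would run.
        return 0.5 * (self.setpoint - self.state)

def control_loop(twin: DigitalTwin, read_sensor, actuate, period_s: float = 1.0):
    """Continuously synchronize the twin and act on its commands."""
    while True:
        twin.ingest(read_sensor())
        actuate(twin.command())
        time.sleep(period_s)
```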
Digital twins proceed from a desire to integrate human and nonhuman elements into singular, operative systems. Even when these systems don’t work as anticipated, their value is part of a larger project of creating environments that enable digital enclosure—the process whereby activities, interactions, and communications are folded into digital interactive contexts that render them monitorable, trackable, and recordable (Andrejevic, 2007, 2022). “Platforming,” in this context, represents a form of digital enclosure: the capture of activities and interactions in a form that can then be modeled and acted upon. This, of course, is the business model outlined by Srnicek (2016) in his discussion of platform capitalism: the capture, storage, and manipulation of activities that once took place in nonmonitored or differently monitored contexts. Data doubles (Deleuze's “dividuals”) are incipient digital twins: reproductions of individual activity that can be captured, modeled, and acted upon. The advent of the twin industry comes with the recognition that these models can be used both to record real-time activity and to simulate possible scenarios for the purposes of prediction, pre-emption, and control. As William Bogard (2012) puts it in his discussion of the simulation of surveillance, “Simulations do not ‘represent’ real events so much as manufacture hypothetical ones for specific control contexts” (pp. 31–32).
We follow Srnicek (2016) in approaching platforms as digital infrastructures for interaction—infrastructures that are, in turn, associated with forms of governance, control, and rationalization enabled by the platforming of interactions and the data that encode them. In this respect, the “twin” is inseparable from the platform that enables it to appear and be acted upon. “Twins” don’t exist on their own as entities any more than, for example, a digital avatar does. To describe them as a platform is to use the term metonymically, invoking the platform that assembles twins and connects them, via informated processes, to a referent. Originally bespoke in nature, digital twins are now increasingly sold by platform companies (Srnicek, 2016) such as Amazon (with TwinMaker), Nvidia (with Omniverse), and Palantir (with Foundry and Gotham) as products that enable organizations to implement their own cybernetic platforms to yoke together various combinations of bodies, processes, environments, and institutions. Across three short cases that zoom out progressively from the human body to the warehouse to the firm, we demonstrate how digital twins not only collect and reconstruct data from their referent subjects—biometric data from humans or financial data from businesses—but also transform their subjects into “environments” within which elements can be captured and coordinated as part of their constitutive systems. Digital environments are more malleable, and thus more amenable to experimentation and modeling, than their analog referents. Digital twins thus emerge from the forms of sensorization, monitoring, and tracking enabled by digital enclosure (or platformization). They aggregate forms of information collected from a growing range of sensors into an approximation of an analog system, whether this be an individual body, a room, a building, or a city. Understanding how digital twins generate value from data requires attending to how they extract and coordinate data streams through operations that constitute a rationale for reconfiguring pre-existing commercial and industrial arrangements. This dependence on existing infrastructures means that inefficiencies and frictions within those systems are not removed by the digital twin but rather mitigated or overcome through a new platform architecture whose value lies in coordinating existing systems.
Twinning bodies
Digital twins posit the human as both a data source and an asset within complex systems. These platforms optimize the management of environments like warehouses, electrical grids, or building infrastructure, but they also increasingly twin the human actors within those environments. In these industrial settings, humans are integrated into systems in the name of efficiency: their movements are tracked, their behavior and wellbeing monitored, and their tasks optimized, whether by controlling lighting in a building or directing workers in an Amazon warehouse or port dispatch system. In such contexts, capturing and using human-asset data aims to improve overall performance. What is distinctive about human digital twins is the range of data that can be collected: information about physical movements is supplemented by data about “workers’ physical, emotional, and cognitive aspects” (Davila et al., 2024). As one account puts it, “a PDT [personal digital twin] could represent a person as human by reflecting the different aspects of their life from birth throughout their lifetime” (Sahal et al., 2022). Digital twins thus rely on increasingly comprehensive forms of data collection across a growing range of dimensions to influence how resources and labor are organized in various environments and how the human body itself is datafied and integrated into cybernetic environments (Figure 1).

Figure 1. “Datafying the body.” Source: https://www.mdpi.com/1424-8220/22/15/5918.
A key challenge is securing sufficient background data for meaningful predictions, such as identifying potential heart attacks or pacemaker failures. For these simulations to work, comprehensive data pools are required for comparison and predictive accuracy. This requirement points to the speculative nature of digital twins: they must first gather extensive data to build the knowledge base necessary for simulations. Emerging sensor technology extends the available dimensions for data collection: wearables like smartwatches and earbuds provide real-time biometric information that is used to monitor both physical and emotional states. From the perspective of modeling outcomes and managing workflows, the mantra is “the more data the better.” As this formulation implies, the goal of digital twins is to reproduce in data form the entirety of the entity being modeled: to render the physical world in digital, modulatable form. Without such data, individual simulations lack predictive power, raising the question of how to acquire or create these datasets. While synthetic data generation is one option, it may not be reliable for life-critical systems, underscoring the challenge of finding or building these necessary data pools (Figure 2).

Figure 2. “Life-critical digital twin architecture.” Source: https://www.mdpi.com/1424-8220/22/15/5918.
It is telling that two of the primary domains for the deployment of human digital twins are healthcare and the workplace—and the latter comes to serve as a metaphor for the former. That is, the body is reconstructed digitally as a complex of functions to be managed in the name of enhanced performance—whether in terms of health outcomes or workplace productivity. As in the case of workplace control, the value of the information generated depends upon the promise of enhanced efficiency and the pre-emption of costly malfunction and failure. In keeping with logics of pre-emption, the realized value has a speculative element: replacing a part before it reaches the point of failure avoids the cost of failure while virtualizing the moment of failure. Moreover, to the extent that relevant dimensions remain beyond the reach of existing sensor systems, interventions may fail to take into account complex interactions that affect performance. Thus, the process of digital reconstruction is, in principle, endlessly generative. Every new sensor produces a new data dimension. In the workplace, for example, one approach envisions creating an “emotional profile through facial expressions, speech emotion recognition and conversational methods [with chatbots]. The system employs artificial intelligence (AI) models to detect signs of stress, fatigue, or potential mental and emotional complications” (Davila-Gonzalez and Martin, 2024). The resources for the creation of new data, and the forms of value that accrue to it, are limited only by the available sensing capacity: reality can be mined endlessly, thanks to the unbridgeable gap between representation and referent (Figure 3).

Figure 3. “Framework of HDT from a human factors perspective.” Source: https://doi.org/10.1186/s10033-024-00998-7.
A central challenge is making sense of the growing volume of data in real time. AI plays a key role in processing the vast amounts of data generated, enabling real-time interventions. This use of AI to interpret complex data feeds into the logic of simulation, where predictive models anticipate future events, helping to avoid undesirable outcomes. Recent developments in generative AI are seen as a way to process this data effectively, making it actionable for immediate intervention. This represents a shift toward pre-emption, where actions are taken to prevent problems before they occur. In this sense, digital twins involve more than just creating accurate models; they also rely on logics of “pre-emption”: constant, dynamic intervention in systems based on anticipated future needs.
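The underlying pattern is simple to state, even if production systems are far more elaborate. The following sketch is our own illustration of the pre-emptive loop under stated assumptions: the predictor is a naive linear extrapolation standing in for any machine-learning model, and the window, horizon, and threshold values are arbitrary.

```python
# Sketch of a pre-emptive monitoring loop: ingest a biometric stream,
# forecast a short horizon ahead, and intervene before a threshold is
# crossed. The linear extrapolation is a placeholder for any predictor.
from collections import deque

WINDOW, HORIZON, THRESHOLD = 10, 5, 120.0  # samples, steps ahead, alert level

def forecast(history: deque, horizon: int) -> float:
    """Naive linear extrapolation from the recent trend (illustrative only)."""
    if len(history) < 2:
        return history[-1]
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * horizon

def monitor(stream):
    """Yield a decision for each reading, keyed to the predicted future state."""
    history = deque(maxlen=WINDOW)
    for reading in stream:  # e.g., heart rate from a wearable
        history.append(reading)
        if forecast(history, HORIZON) > THRESHOLD:
            yield ("intervene", reading)  # pre-empt the anticipated event
        else:
            yield ("ok", reading)
```

Run against a rising series (e.g., `list(monitor([80, 95, 110]))`), the loop flags an intervention before the threshold is ever observed, which is precisely the inversion that pre-emptive logics name: action keyed to a simulated future rather than a recorded present.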
We can see, then, that a significant aspect of the digital twin phenomenon is its generative potential. By collecting and processing vast amounts of data, digital twins can simulate a range of possible futures. This “virtuality” allows for proactive planning, but it also relies on the ability to generate usable insights from incomplete data. It taps into a space between the known and unknown, where knowledge is both speculative and concrete, producing an ongoing flow of data that drives value creation. In healthcare, the data generated by digital twins could help optimize personalized treatment plans, adjusting for individual needs in real time. Similarly, in workplaces, digital twins could continuously adapt the work environment to support optimal performance based on an individual's current emotional, physical, and cognitive states. Ultimately, the value of human digital twins lies not just in the individual models themselves but in the customizable environments they help create. The models allow for automated adjustments at the level of the individual—whether worker or patient—through the platforming of virtual profiles within adaptable “milieus.”
Twinning labor
The integration of worker data and biometric monitoring into twin systems translates these techniques to the management of labor, seeking to align planning and action in real time. The world's largest tech companies, including Amazon Web Services (AWS), Microsoft, and NVIDIA, market digital twins as platforms that can leverage a client's existing technological infrastructure through the application of advanced computational models that simulate work environments and the labor therein. Amazon's in-house experiments have been a springboard to develop and sell digital twin products and associated services to its cloud customers. Off-the-shelf products like TwinMaker, available as an IoT service through AWS, promise to “optimize building operations, increase production output, and improve equipment performance” by allowing cloud customers to use their existing IoT, video, and enterprise data “where it already lives” to compose interactive three-dimensional (3D) scenes of factory floors and other environments. These products are designed to widen and deepen the platformization of labor within cloud-based ecosystems. Digital twins thus integrate and distribute the decision making, planning, and control of labor to purportedly enable holistic management and oversight via dashboards accessed through web applications and IoT devices.
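To give a sense of how low the barrier to entry is pitched, the sketch below shows roughly what bootstrapping such a twin looks like through the AWS SDK for Python (boto3) and its iottwinmaker client. It is a minimal illustration under stated assumptions: all names, ARNs, and identifiers are placeholders, and the component types that would bind entities to live data sources are assumed to be defined separately.

```python
# Minimal sketch: standing up a TwinMaker workspace and registering a
# physical referent as an entity. Identifiers and ARNs are placeholders.
import boto3

twinmaker = boto3.client("iottwinmaker", region_name="us-east-1")

# A workspace is the container for a twin: it points to an S3 bucket for
# scene assets and an IAM role TwinMaker assumes to read customer data.
twinmaker.create_workspace(
    workspaceId="warehouse-demo",
    s3Location="arn:aws:s3:::warehouse-demo-twin-assets",
    role="arn:aws:iam::123456789012:role/TwinMakerWorkspaceRole",
)

# Entities mirror elements of the physical site (a conveyor, a dock door,
# a picking station); components later connect them to existing data
# streams "where they already live."
twinmaker.create_entity(
    workspaceId="warehouse-demo",
    entityName="conveyor-line-1",
)
```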
Digital twins enhance the post-Fordist factory model by coordinating flows of data, action, and oversight, and by distributing them across work sites along global supply chains. Workers laboring in dynamic work environments supply constant data streams that update models and generate decision points for automated systems. Uncertainty—arising from supply chain or labor market volatility, or even the inherent unpredictability of the human body—becomes the rationale for a new calculable ontology, transforming the past, present, and future of labor into manageable parameters. Promoted as a means to automate monitoring, predictive maintenance, virtual inspections, and the optimization of planning, the value of digital twins to companies is often framed in terms of gains in operational efficiency and cost savings, particularly in fields like logistics, where competition is increasingly defined by firms seeking to capture larger and larger shares of the total value generated by labor along global supply chains (Cox et al., 2001; Reimann and Ketchen, 2017).
From warehouses to ports, managerial decisions have long been made on the basis of cost-benefit analyses, reshaping labor to fit those objectives. Yet digital twins invert the traditional hierarchy between managerial planning and the material world of labor because the cybernetic and automated coordination of decision making is posited as a mechanism of value extraction, both for firms and for the tech companies providing the infrastructure and digital-twin platform services. For logistics firms, then, the value of digital twins is premised on conjoining human labor forces with their digital counterparts to extract performance enhancements via techniques of virtual experimentation, testing, and optimization implemented in work sites. Their appeal, and selling point, typically lies in how they leverage investments a firm has already made in its computational infrastructure, such as cloud services, sensors, and data operations. For major tech companies, such as the cloud service and enterprise software providers Amazon and SAP, the value of digital twins lies in the rationale they generate for translating these investments into new use cases, products, and services for the data operations, machine learning models, and technical architectures they supply—or hope to supply—to already highly automated environments.
This counters the widespread assumption that “data is value.” Rather than generating smoother, more precise, or even more useful flows of labor data for managers, digital twins provide a new kind of technological platform for marrying older infrastructures to new commercial integrations, digital twin products, and existing (and expensive) machine learning models. Generative AI, for instance, is now being promoted as a key technology for enhancing digital twins at scale. More than 500 published research papers have explored the convergence of these two technologies, with most use cases focusing on how large language models (LLMs), like GPT-4, can assist in interpreting digital-twin outputs and help human decision-makers understand their data. In practice, this often takes the form of an AI-powered “copilot” that interacts with the digital twin through a chatbot interface, assisting with communication and scenario planning.
In one project highlighted in a 2024 McKinsey report (Cosmas et al., 2024), LLM reasoning was integrated with digital twins for industrial planning and control in smart factories. In this system (Xia et al., 2023), human users provide a text prompt for an open-ended planning task, and AI agents, orchestrated through a multiagent framework similar to Microsoft's AutoGen, carry out the task. Each AI agent is given specific expertise, and a system manager routes the task to the most appropriate agent. After that agent provides an initial response, other agents refine it based on their own expertise. An AI manager coordinates these inputs, ensuring consensus before delivering the final response to the human manager. AI systems thus not only generate responses but also become managers in the distribution of cognitive labor between human and machine agents. In this setup, agency becomes more fluid, with AI playing a soft managerial role that supports and oversees human managers.
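The routing-and-refinement pattern this describes can be reduced to a few dozen lines. The toy sketch below is our own paraphrase, not the system published by Xia et al. (2023) and not AutoGen's actual API: call_llm is a hypothetical stand-in for any chat-completion service, and the agents, refinement rounds, and consensus step are simplified assumptions.

```python
# Toy sketch of manager-routed, expert-refined task execution: a manager
# agent picks a lead expert, the lead drafts a response, and the other
# agents refine it over a fixed number of rounds.
from dataclasses import dataclass

def call_llm(system: str, prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    return f"[{system}] response to: {prompt.splitlines()[0]}"

@dataclass
class Agent:
    name: str
    expertise: str  # used by the manager to route tasks

    def respond(self, task: str, draft: str = "") -> str:
        prompt = f"Task: {task}\nCurrent draft: {draft or '(none)'}"
        return call_llm(f"You are an expert in {self.expertise}.", prompt)

def run_planning_task(task: str, agents: list, rounds: int = 2) -> str:
    # The "AI manager" selects a lead agent by matching expertise to task.
    choice = call_llm(
        "You are a manager. Reply with exactly one agent name.",
        f"Task: {task}\nAgents: {[(a.name, a.expertise) for a in agents]}",
    )
    lead = next((a for a in agents if a.name in choice), agents[0])
    draft = lead.respond(task)
    # The remaining agents refine the draft from their own expertise.
    for _ in range(rounds):
        for agent in agents:
            if agent is not lead:
                draft = agent.respond(task, draft)
    return draft  # final, "consensus" response for the human manager

plan = run_planning_task(
    "Reschedule line 2 around tomorrow's maintenance window",
    [Agent("scheduler", "production scheduling"),
     Agent("safety", "industrial safety compliance")],
)
```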
These experiments with digital-twin management of labor, though complex, are not revolutionary in terms of how they shift the politics of algorithmic management. While they introduce new computational and algorithmic techniques, they largely extend the familiar use of automated systems to centralize data-driven decision making and to enhance worker surveillance, optimization, and control. They do, however, offer new opportunities for tech firms and companies along supply and delivery chains to coordinate, consolidate, and leverage data streams, computational and cloud services, and expensive AI models into products and services that entrench labor as a flexible asset within the hyperautomated environments that digital twins—and the companies that develop and sell their infrastructural technologies—can create and maintain.
Twinning enterprises
Scaling up from the twinning of workplace labor—in warehouses, factories, ports—to the coordination of enterprises marks a significant shift in the domain of application for digital twins. In the move from site to organization, the value of digital twins depends upon the endurance of existing inefficiencies, frictions, and disjunctures in the large array of data inputs, services, sensors, and systems that proliferate in large-scale contemporary institutions. The very production of value, in other words, depends upon the proposition that inefficiencies exist and will continue to do so without the introduction of a new technological layer. Rather than replace these existing arrays, digital twins are platform products that promise to make manageable—even valuable—these disparate data streams. Promising not only to do the work of data fusion but also to make unified data systems operative in the material functioning of the enterprise, digital twins are sold as platform architectures that can be adapted to the needs, context, and extant architectures of any organization—and then leveraged to enable environmental modulation and control.
This shift requires digital twins to be platform products rather than bespoke technologies developed to fulfill specific tasks by specialists familiar with CAD or building information management. Emblematic of this platform conception of the digital twin are Palantir's Gotham and Foundry, sold to government and commercial clients, respectively. In its marketing material, Palantir describes Foundry's core operational layer—the pretentiously named Ontology—as “a digital twin of the organization, containing both the semantic elements (objects, properties, links) and kinetic elements (actions, functions, dynamic security) needed to enable use cases of all types” (Palantir, 2023). Through Ontology, Palantir's Foundry twins the more “abstract, informational and distributed systems that make up the firm or institution” and thus serves as the “computational representation of the spatial, temporal, and relational arrangement of an enterprise that enables analysis, simulation, and implementation of decisions” (Horn and Richardson, 2025).
In one of its advertised case studies, Foundry integrates billions of data points across the electrical grid of the California utility Pacific Gas and Electric (PG&E), standardizing and organizing information across different sources to create a comprehensive model of the electrical network. Through Foundry, the organization can manage power plants, power lines, and other grid elements despite the disparate data streams and informational systems that animate them. Its value proposition lies quite precisely in the difficulty of integrating these systems and, even if effective data fusion were achieved, in the even greater challenge of operationalizing predictive data analysis in real time. Crucially, Foundry and its Ontology do not replace these disaggregated data systems but rather integrate them so as to generate new forms of aggregated data. Aggregated data is the necessary foundation of simulations that identify, capture, and bring into being organizational futures that generate value in the form of stability, profitability, and risk mitigation. For PG&E, twinning its enterprise means reducing the risk of, and limiting liability for, the kinds of infrastructural disasters—namely, the wildfires sparked by failing electrical wires—that drove the company into bankruptcy in 2019. Implementing Foundry—and in doing so making Palantir systems inseparable from the ongoing operation of the enterprise itself—serves a logistical function: coordinating, orientating, and integrating existing data systems and their implications, findings, and decisions.
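Stripped of its scale and marketing, the data model being described is a typed graph. The sketch below is purely illustrative and assumes nothing about Palantir's actual implementation; it simply shows how records from separate source systems might be unified as objects, properties, and links, echoing the “semantic elements” of the Ontology quoted above.

```python
# Illustrative only: a toy object-properties-links model in the spirit of
# the "semantic elements" Palantir attributes to its Ontology. Not
# Palantir's implementation; all types and values are invented examples.
from dataclasses import dataclass, field

@dataclass
class OntologyObject:
    object_type: str                      # e.g., "Transformer", "PowerLine"
    properties: dict = field(default_factory=dict)
    links: list = field(default_factory=list)  # relational arrangement

# Records from separate source systems become nodes in a single graph
# that a simulation or decision workflow can traverse.
transformer = OntologyObject("Transformer", {"id": "T-42", "load_kw": 310})
line = OntologyObject("PowerLine", {"id": "L-7", "last_inspection": "2023-05-01"})
transformer.links.append(line)

# A trivial "analysis" over the unified graph: flag assets linked to
# lines not inspected since a cutoff date (placeholder logic; ISO dates
# compare correctly as strings).
stale = [l.properties["id"] for l in transformer.links
         if l.properties["last_inspection"] < "2024-01-01"]
```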
At stake in all this, however, is a larger ambition: to constitute and thereby capture the future through predictive analytics grounded in real-time sensing and simulation. This future capture is made operational through modulations in the present that seek systemic stability over time—or what cyberneticists called homeostasis. While the digital twin certainly promises to enable more effective optimization and efficiency in the short term, its goal is not so much profit maximization in the moment as durability into futures beyond the horizon. Just as digital twins of the body and of labor seek to take control of emergence, to pre-empt the frailties of individual and population alike, so too does the twin of the enterprise mobilize ontopower (Massumi, 2015). Understood as pre-emptive technologies, digital twins are positioned as thriving within the frictions, discontinuities, failures, and system gaps that plague the contemporary enterprise or institution. They are ontopowerful because they intervene pre-emptively in the emergence of shifts within the architecture, operation, and environment of the enterprise. Their very value depends upon the maintenance of uncertainties and disjunctures. Generating value from the digital twin, then, depends upon the inclusion of more and more of the enterprise, its operations, and its environment within the domain of the system.
Conclusion
While originally bespoke tools developed for specific applications, digital twins are increasingly sold as platforms for the automated decision making and management of bodies, factories, warehouses, cities, and even the planet. Shifting the locus of control from human decision to virtual coordination, digital twins have been enthusiastically embraced—and hyped—by major tech companies and consultancies despite their relatively low adoption in the field. As with many expensive, new, and technologically complex systems, there is an apparent gap between the perceived and realized value of the system in its application beyond the realm of testing and experimentation. Yet we find all the more reason to pay critical attention to the financial and discursive investments in digital twins across a widening range of industrial and social sectors. It is precisely their ambition to enable action in the present through the continuous capture of potential futures that makes digital twins worthy of scrutiny. Their promise is automated, continuous self-management: remaking the world in response to futures captured but not yet arrived.
While optimization and efficiency are core principles of digital twins, their more fundamental value proposition resides in the mobilization of past and real-time data with the aim of adapting and maintaining the system into the future. Machine learning techniques that enable predictive forecasting of system (in)stability are deployed with the aim of simulating and capturing futures that enhance the efficient operation and endurance of the system. In this sense, then, digital twins—whether of bodies and populations, warehouses and factories, or organizations—are geared toward environmental management and modulation. This environmentality targets both internal and external milieus, with the virtual replica taking precedence over the physical system itself. Digital twins promise to realize longstanding visions of environmental enclosure, with humans, sites, and organizations translated for the system as assets and elements for management and modulation. The twin model is contiguous with the ambition of technologies like augmented and virtual reality to redouble the physical world in digital form—what tech guru Kevin Kelly describes as a “mirrorworld” that “platforms” reality itself. As he puts it, somewhat breathlessly, “Whoever dominates this grand third platform will become among the wealthiest and most powerful people and companies in the world.”
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was conducted by the ARC Centre of Excellence for Automated Decision-Making and Society (CE200100005), and partially funded by the Australian Government through the Australian Research Council.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
