Abstract
In crucial sectors like healthcare, education, and housing, policymakers are turning to the tools of market design to incentivize public and private actors to more efficiently and effectively produce the public good. Although market design has been a key policymaking tool for decades, datafication is increasingly central to this technocratic tinkering. This article explores a project of datafied market redesign in the U.S. healthcare industry, demonstrating that emerging federal health data regulations are designed to enable the state to more precisely quantify, and thereby incentivize, the production of “valuable” care. This case study demonstrates how both the public good and crucial data infrastructures are constrained through their enactment within market-based modes of governance. As this data-solutionism for extractive markets becomes a more prevalent mode of governance—particularly in areas like climate change—we must find alternative mechanisms for collectively defining the public good, and for achieving corporate accountability beyond financial incentive structures.
Introduction
At the February 2023 “Health Datapalooza” conference in Washington, D.C., Food and Drug Administration Commissioner Robert Califf framed the fundamental problem of the U.S. healthcare industry for the policymakers, healthcare executives, and health IT professionals in the room: despite leading in medical innovation, the United States has low life expectancy, stark health disparities, and high per capita spending compared to other wealthy nations (Davis et al., 2014). The attendees were intimately involved in building improved data infrastructures to address these systemic failures.
Health Datapalooza, launched in the 2010s, coincided with federal efforts to digitize health records and standardize data sharing. These regulations have largely been seen as apolitical interventions to modernize data infrastructures in the United States, enabling the easy transfer of patients’ healthcare records between doctors’ offices. Yet as Califf's keynote highlighted, these federal data regulations also explicitly aim to help “fix” a healthcare market that is failing to efficiently and effectively produce accessible, high-quality, cost-effective care. Across dozens of subsequent panels and discussions, Health Datapalooza attendees delved into the obscure intersections of data standards, payment models, and administrative processes to treat an ailing healthcare market.
This article situates emerging health data regulations within broader federal efforts to redesign healthcare markets. Rather than implement price controls or other direct forms of control over private health insurance and healthcare providers, the U.S. federal government has preferred to redesign markets, implementing new payment models and privatizing administration of public insurance programs. This project of healthcare market reform has spanned federal administrations going back to President Nixon. Depending on the administration, the exact goals of these market reforms have shifted—from outright cost-cutting to the “triple aim” of improved population health, improved patient experience, and reduced costs, to improved “health equity”—but the basic premise of market redesign has remained constant. Since the 2010s, this project has been renewed through intensified datafication, which promises a more precise quantification and incentivization of the public good.
This data-solutionism is not unique to the U.S. healthcare market. In a wide variety of industries and sectors, policymakers are turning to intensified datafication to grapple with the failure of markets to efficiently and effectively produce the public good. Most prominently, contemporary environmental governance attempts to calculate the production of “negative externalities”—that is, the environmental harms produced by a company but not accounted for on its ledger sheets—and to “internalize” these harms within the market. For example, cap-and-trade markets and carbon taxes rely on complex quantification processes (Liu, 2017; MacKenzie, 2009), instantiated to incentivize corporate behaviors aligned with the public good. The impacts of this mode of governance, characterized by datafied market design, warrant further scrutiny, both in terms of whether they achieve state actors’ explicit goals and in the precise ways that crucial public data infrastructures and the public good are enacted and constrained through market design.
In what follows, I provide a detailed case study of this intertwined project of state-led datafication and market design as a mode of producing the public good. First, I explore literature that situates the historical project of governing the public good through market design and its intersections with the broader phenomenon of datafied governance. I then present an empirical case study showing how emerging federal health data policies are designed to support the shift toward “value-based care,” a multidecade project of reforming healthcare markets toward paying for health outcomes rather than care inputs. Finally, I draw out the social implications of this specific mode of data-driven, market-solutionist governance—in healthcare and beyond.
The findings outlined in this article draw from over two years of ethnographic research in the healthcare industry, including as a participant observer at health industry and health regulator conferences and at formal and informal social events and meetings. Through involvement in these spaces, I was better able to see how those engaged in developing interoperable infrastructures articulate and present their work to the broader industry, and to participate in informal discussions about ongoing industry shifts.
In addition, the research findings presented here draw upon analysis of a corpus of discursive materials. This includes policy documents issued by federal and state governing bodies, particularly Health and Human Services (HHS), the Center for Medicaid and Medicare Services, and the Office of the National Coordinator for Health IT, pertaining to value-based care and health data interoperability. It also includes industry coverage (blogs, white papers, and recorded webinars/presentations) about value-based care policies and health data interoperability regulations, standards, and technologies, in the periods where these policies were developed and implemented. These materials span the years in which interoperability regulations were being conceptualized and initially advanced (2009–2016) and the aftermath of the Cures Act and subsequent federal rulemaking around health data interoperability (2016–2024). Through an analysis of these documents, I was better able to trace the articulated goals of health data policies and to specifically understand the role of these policies within a broader project of healthcare market reform.
The age of datafied governance
States, of course, have always relied heavily on information and quantification as a central aspect of governance. Indeed, information is central to the construction of the state as a stable entity—its people, its borders (Espejo, 2014; Mitchell, 2002; Scott, 2020; TallBear, 2015; Tooze, 2001). Recent scholarship has explored the ways that intensified processes of datafication, enabled by new forms of computation, are reshaping governance itself (Braman, 2009; Burrell et al., 2024; Cohen, 2016; Dencik and Kaun, 2020; Dencik et al., 2019; Johns, 2021; Purtova and van Maanen, 2024; Singh, 2019). This scholarship addresses the shifting relationships between states and big tech companies, which often control the means of datafication; the increasing opacity of algorithmic and data-driven modes of governance; the performance of increased rationality or efficiency enabled through computation/datafication; and the ways that publics are constructed and self-construct through data.
Previous Foucauldian scholarship on techniques of governmentality (Barry, 2001; Foucault, 2008), as well as postcolonial and STS scholarship on the role of information in constructing and extracting value from racialized populations, has emphasized these techniques and information infrastructures as foundationally political technologies. By scrutinizing processes of datafication, we may excavate unexamined assumptions about the role of the state, its modes of action, and its relationship and accountability to its publics (Costhek Abílio and Cruz, 2024; Dencik and Kaun, 2020). For instance, the Data and Society anthology “Keywords of the Datafied State” (Burrell et al., 2024) explores how our core understandings of phenomena like “bureaucracy” or the “public interest” are reconfigured via practices of state governance organized around the capture, analysis, and utilization of digital data. In this article, I demonstrate how market design, a key aspect of contemporary market-based governance, is both enabled and reproduced via intensified processes of datafication.
Specifically, the case study I present in this article examines the politics of datafication as it relates to a specific type or mode of governance: market design.
Market design in the platform economy
“Market design” describes the process of carefully constructing the rules surrounding markets to optimize their efficiency and effectiveness as markets. This practice is often deployed as a public policy intervention to “fix” markets that do not sufficiently increase social welfare or the public good. For example, market design is a central policy tool currently used to address the negative impacts of corporations on the environment. Instead of directly controlling carbon emissions, states use carbon taxes or credits to internalize negative externalities. In a wide variety of public policy domains, economists are called upon to design incentives and market mechanisms as a central tool for advancing the public good (Collier, 2017).
Market design is a crucial technique within neoliberal modes of governance, which situate the market as the ideal mechanism for efficiently and effectively producing the public good. Rather than rely on centralized, bureaucratic decision-making, subject to corruption or hierarchical favoritism, the “invisible hand” of the market in neoliberal accounts enables an apolitical, rationalized, and decentralized mode of distributing goods and resources (Davies, 2016).
Crucially, scholars have pointed out that this narration of the market as an apolitical, decentralized tool of governance has remained in place despite the growth since the 1970s of market design practices, through which economists and policymakers design and curate markets towards particular ends (Birch, 2020; Lepage-Richer and McKelvey, 2022; Nik-Khah and Mirowski, 2019b; Viljoen et al., 2021).
Nik-Khah and Mirowski argue that the ascendancy of market design as a practice coincided with a changing understanding of markets, from a kind of natural force to information processors or computing devices that benefit from intentional design or engineering (MacKenzie, 2008; Nik-Khah and Mirowski, 2019a). On this account, economists realized they could leverage market design and the information-processing capacities of markets to “go the market one better”: to design outcomes that are favorable—for the public, for policymakers, or otherwise.
Viljoen et al. (2021) trace how this practice has become central within the platform economy, particularly via automated or algorithmic market design. Digital platforms are market designers par excellence. Uber uses information about rider behavior to design real-time pricing algorithms (Chen et al., 2015); Google collates multiple streams of information about users to automate real-time digital advertising auctions (Srinivasan, 2019); and social media platforms continuously redesign the newsfeed and content recommendation algorithms to optimize for increased user engagement, thereby increasing advertising revenue (Ruckenstein and Granroth, 2020). Platforms can both structure the flows of information they capture and use this information to continuously tinker with the design of the markets they mediate in advantageous ways (Pistor, 2020; Srnicek, 2017; van Dijck et al., 2019; Zuboff, 2019).
This article argues that, through intensified processes of datafication and market design, the state strives for a “platform-like” mode of governance: a capacity to use insights from comprehensive, centralized, and standardized data to reprogram markets in ways beneficial to the state (Sadowski, 2022; van Dijck, 2020; Yuan and Zhao, 2025). Although scholars have explored the impacts of increasingly datafied, platformized governance (Kitchin, 2023), this article specifically theorizes market design as central to the function of the state-as-platform. Crucially, this mode of platform-like governance relies on constructing “the public good” in ways that can be quantified and enacted within the framework of market redesign. In the next section, I explore how other scholars have critically evaluated the project of computing the public good.
Governing value
In contrast to the practice of market design through platforms, which are organized around the production of value for investors, market design in the context of public policy is in theory organized around the production of “the public good.” Following Foucault, the construction of the public good, visible within the construction of particular policies, technical infrastructures (Gross and Geiger, 2023; Metcalf and Sadowski, 2024), and market designs, is an area for interrogation if we wish to understand contemporary politics and governance.
Following this line of inquiry, Sharon (2018) draws from Boltanski and Thevenot's (2021) theorization of “orders of worth” to analyze the many ways that the common good is articulated in discourses surrounding the integration of Big Tech within digital health research practices. Specifically, she argues that a dichotomization of “public”/“private” or “civic”/“market” values is overly simplistic and does not enable a precise critique of Big Tech involvement that goes beyond privacy concerns (Sharon, 2018). Sharon instead articulates the “plurality” of ideas of “the public good” latent within these discourses, including not only doing good for society or enhancing wealth creation, but also increasing efficiency, increasing innovation and experimentation, and “proliferating life.” Through this multiplicity, Sharon complicates the equation of “private profit” and “public good” as self-explanatory frameworks for the morality of a given set of policies: as Sharon notes, a more nuanced approach is key for evaluating the growing “public–private partnership” nexus through which public goods and services are delivered.
The case studies below illustrate the idea that the “public good” as instantiated within emerging health data-sharing policies in the United States is substantively constrained by its enactment within a market-based mode of governance. This line of argumentation extends from Foucauldian examination of modes of rationality (or “governmentalities”) that are implicitly embedded within and enacted through particular practices, language, and technical instruments (Barry, 2001; Rose et al., 2006). Constructions of the public good are enacted within and through data infrastructure policies, in the practices of patients, providers, and other actors engaged in the everyday production of care.
Mennicken and Muniesa (2017), for instance, highlight how governance of healthcare, higher education, and correctional services within France and the United Kingdom has moved away from an approach rooted in public investment toward an “asset rationale,” focused on the optimization of return on investment. They stress how this mode of “governing through value” influences the “role and subjectivity … of public service users and providers.” They describe how, in the 1980s, the United Kingdom's National Health Service responded to a perceived funding crisis by redefining the role of clinicians as managers, responsible for optimizing their unit's performance on costs:

These reforms transformed clinical managers into ‘asset managers’ who are responsible for the provision of good care, efficient working capital management, and for the management of the resources/assets entrusted to them, including patients. Patients are no longer merely recipients of care, but also sources for economic ‘value creation’ as their treatment has come to be linked to specific, variable financial returns. (Mennicken and Muniesa, 2017: 9)
This shift toward governing health through the lens of the “asset rationale” is pervasive, particularly within the United States. Yet intensified datafication is also transforming the way that the public value of care is conceptualized and managed by the state. Despite robust scholarship on the impact of datafication on the imaginaries and enactment of the welfare state (Dencik and Kaun, 2020), this literature has not closely attended to the intersection of datafication and practices of market design. In this piece, I explore how the public value of care is specifically constrained by and enacted through datafied market design: through the increased speed and scale of health data flows, the state seeks to redesign healthcare markets to more precisely optimize for the production of care as a public good.
Background: From managed care to value-based care: A history of healthcare market governance
The United States is unique in the degree to which healthcare goods and services are provisioned through markets rather than direct investment by the state. A little over half of the U.S. population is covered under health insurance plans subsidized by employers (US Census Bureau et al., 2024). In contrast, among the 38 OECD member countries, almost all health insurance is directly paid for by the state. 1 This is not to say that in the U.S. context, the state is absent from healthcare markets: since the 1940s, the U.S. federal government has maintained a regulatory infrastructure to support and manage this market-led provisioning of care—for instance, by exempting employer-sponsored health insurance from wage controls and taxes. Additionally, federal and state governments directly fund Medicare and Medicaid, public insurance programs that cover the elderly, the disabled, and low-income children and families, populations largely outside typical employer-based insurance markets. These programs cover approximately 40% of the U.S. population. Even these public health insurance funds are increasingly routed through private insurance companies: roughly 70% of Medicaid and 50% of Medicare enrollees are covered by a private insurance company contracted by the state. 2
Despite the extent to which private insurance companies dominate the U.S. healthcare market, this industry has been the site of extensive regulatory tinkering to manage costs and improve health outcomes. One major benchmark in the history of healthcare market governance is the shift toward “managed care” models. In the 1970s, shortly after the implementation of public insurance programs via Medicare/Medicaid, the Nixon administration sought to address rapidly rising healthcare costs (Mechanic, 2004). During this period, there were growing calls for a national health insurance plan that would centrally address issues of both cost and coverage in healthcare (Starr, 2017). Ultimately, these calls were abandoned in favor of “market-based reforms” (Agrawal and Veit, 2002). The Nixon administration supported a shift toward managed care models, which sought to control costs through market redesign. This included moving away from fee-for-service models, which pay providers directly for services rendered, toward prepaid plans, which (in theory) incentivize providers to manage the costs of care.
Although the term “managed care” is no longer making headlines, the basic problematization—insufficiently disciplined care markets—and the proposed solution—“market-based reforms”— have remained consistent. As evidenced by the FDA Commissioner's keynote speech at the 2023 Health Datapalooza, regulators still agree that healthcare costs are too high, while health outcomes and access to care are too low. The contemporary framework for solving this problem is called “value-based care.” Value-based care describes attempts to redesign healthcare payment models to more directly incentivize providers to improve “value”: that is, to reduce healthcare costs and improve healthcare outcomes. The 2010 Affordable Care Act was the biggest stride toward implementing these alternative payment models. A wide variety of actors—including not only policymakers and pundits but also many of the healthcare practitioners and technologists I encountered in my research—see these payment model reforms as the best chance for achieving a healthcare market aligned with the public good.
After nearly 50 years of attempting to redesign healthcare markets to produce quality care that is accessible and affordable, the federal government has turned toward intensified datafication to revitalize the promise of designing markets aligned with the public good. In the next section, I describe the way that policymakers in the United States have problematized healthcare markets, and the role of intensified datafication in addressing this problem. I explore how the state has sought to fix healthcare markets through two intersecting regulatory projects: market redesign, in the form of “value-based” payment models that use data to quantify and incentivize the public good; and intensified datafication, through regulations to advance standardized, scalable data-sharing infrastructures.
Findings
Problematization: Misaligned incentives, inaccessible information
Policymakers and pundits cite multiple reasons for the U.S. healthcare industry's high costs and poor outcomes, including an aging population, structural racism and other “social determinants of health,” and inconsistent access to care due to an employer-based healthcare system. Nonetheless, there is near-consensus around “misaligned incentives” as a major problem within the industry. Since the managed care era of the 1970s, policymakers have pointed to the harms of fee-for-service payment models, which incentivize providers to offer more treatment or services regardless of their utility for patients. This incentive structure has consistently been faulted for driving up the costs of care, through to the present day.
In this problematization, providers are imagined as providing care that either directly advances their own financial interests—while neglecting larger concerns about efficient resource utilization—or is simply “undisciplined,” not aligned with best practices for providing rational or efficient forms of care. To address this problem, federal regulators have spent the past few decades attempting to shift away from fee-for-service payment models toward payment models that incentivize improving healthcare outcomes while reducing costs.
In addition to these “misaligned” incentive structures, the problem of high costs and poor health outcomes is also attributed to insufficient access to and use of information. In this account, due to fragmented, siloed, and nonstandardized data infrastructures, providers and patients do not have access to the information they need as rational actors to manage care effectively. This leads to duplicative treatments and “uninformed purchasing behavior,” since providers are not aware of the costs of any given treatment. Likewise, neither health insurers nor federal or state policymakers feel that they have optimal access to information with which to govern the provisioning of care.
The solution: Redesigning incentives, redesigning information infrastructures
Throughout the 2010s and until today, the federal government has sought to more efficiently and effectively produce the public good—conceived of as reduced costs and improved quality of care—through two intertwined regulatory projects: revised healthcare payment models and scalable, standardized data-sharing infrastructures. This section demonstrates how these projects are intertwined, highlighting the centrality of state-led datafication to the project of calculating “value” and “fixing” healthcare markets.
In 2006, Michael Porter and Elizabeth Teisberg coined the term “value-based care” in their book, “Redefining Health Care: Creating Value-Based Competition on Results.” They characterize the healthcare system as in “crisis,” reflected via “high costs, unsatisfactory quality, and limited access to healthcare” (Porter and Teisberg, 2006). They conclude that previous decades of reforms have done nothing to address the underlying issue, and that the market should be redesigned around competition on the production of tangible healthcare outcomes: “Competition on value must revolve around results. The results that matter are patient outcomes per unit of cost at the medical condition level. Competition on results means that those providers, health plans, and suppliers that achieve excellence are rewarded with more business, while those that fail to demonstrate good results decline or cease to provide that service” (Porter and Teisberg, 2006).
This book came during a period of increased attention to measuring and rewarding “quality” in the healthcare industry, or the outcomes of care based on adherence to best practices. The Center for Medicare and Medicaid Services (CMS) began to experiment with “pay-for-performance” schemes (Jha, 2017), measuring and financially rewarding healthcare providers for meeting established “quality metrics.” In 2010, the Affordable Care Act (ACA) formally instantiated value-based care policies at the federal level. These policies included setting up the regulatory framework for “accountable care organizations” (ACOs), groups of provider organizations that can coordinate care, and the “Medicare shared savings program,” through which ACOs are able to keep any financial savings that result from more efficient and effective care. The ACA also launched the Center for Medicare and Medicaid Innovation, which is responsible for designing, piloting, and evaluating new value-based payment models.
In their book, Porter and Teisberg (2006) also emphasized that competition on value is impossible without broad access to information about patient results and prices. They urge the federal government to develop policies enabling more universal access to information about “results,” including by “defining outcome measures,” enacting “mandatory results reporting,” and “establish[ing] information collection and dissemination infrastructure” (p. 342).
Shortly before President Obama signed the ACA into law, the administration approved the HITECH Act, which allocated more than $20 billion to incentivize hospital and provider systems to digitize their paper records and adopt electronic health record (EHR) systems (Blumenthal, 2009). Over the course of 2010–2018, the Department of HHS set specific definitions for multiple phases of adoption and “meaningful use” of EHRs. As part of Phase 1, providers only had to meet basic adoption criteria—but by the end of Phase 3, they were required to demonstrate the use of electronic health information to meet improved quality measures. Representative Pete Stark (CA), summarizing the intent of these phases, suggested that “simply handing out money to help providers pay for a health IT system was not good enough. Incentives should be provided only to those who use health IT in ways that markedly improve patient care” (Stark, 2010).
In the years after HITECH's enactment, policymakers became increasingly interested in advancing not only digitization of health records but also health data “interoperability,” or standardized data sharing. In 2013, the federal government commissioned a report from a set of independent experts (“the JASON report”) about the state of health data in the United States. The JASON report argued that merely digitizing health data would not be sufficient to address the inefficiencies in the healthcare market: “The twin goals of improved health care and lowered health care costs will be realized only if health-related data can be explored and exploited in the public interest” (The MITRE Corporation, 2013: 13). They argued that to design a more efficient health market, health data would need to be broken out of siloed EHRs and not simply digitized. Taking cues from the JASON report, in 2016 President Obama signed into law the 21st Century Cures Act, which set forth the initial parameters and requirements for interoperability in U.S. health data infrastructures.
Through this history, we can see how state regulatory interventions into health data digitization and interoperability are situated within a broader project of quantifying and incentivizing “quality” and enabling competition around “results,” as in Porter and Teisberg's (2006) definition of value-based care. These projects of state-led datafication were understood not only as a way to help move data from one doctor's office to another, but also as a new approach to the problem of reducing costs and increasing quality in healthcare. The next section demonstrates in greater detail how these new data infrastructures are implicated in quantifying and paying for “value” in healthcare.
Datafication and the quantification of “value”
Datafication plays a key role in the shift from paying for “care”—in the form of treatments or services—toward paying for “health,” including individual or population outcomes resulting from those treatments or services. Decades of organizational and technical infrastructure have been built around quantifying and paying for care inputs: this largely relies on tagging treatments or services with structured billing codes to comprehensively document the services rendered. Documenting, quantifying, and paying for health outcomes rather than care inputs requires a fundamentally different, and arguably much more complex, informational infrastructure. The next two sections outline two key examples of these changing modes of valuing and paying for health outcomes via data: the encapsulation of value via quality measures, and the role of interoperable data in practices of risk adjustment.
Quality measurement
Most of the early “pay-for-performance” models advanced through CMS relied on “quality measures,” or standardized performance metrics for care delivery and outcomes. Quality measures play an increasingly important role in value-based payment models and comprise several subtypes. Process measures track whether providers adhere to recommended best practices for certain individuals and conditions—for example, whether patients undergo recommended preventative screenings. Outcome measures capture results such as the total rate of surgical complications, as well as “intermediate” outcomes, such as the number of patients diagnosed with hypertension whose blood pressure remained controlled over the course of a year following treatment. Value-based payment models offer financial incentives to provider organizations able to demonstrate certain thresholds of performance on these quality metrics.
There are numerous ways that quality measures are incorporated into different value-based care schemes. For instance, many ACOs participate in “shared savings programs,” where the provider organization is given a benchmark for expected total cost for the year. If the ACO spends less than that financial benchmark, it can keep all or a percentage of those savings. To avoid cost-cutting at the expense of patients, however, ACOs must also demonstrate that they adhered to a broad range of quality measures over the same period. These quality measures include process and outcomes measures, as well as “structure” measures (whether a hospital has adequate beds or hospital staff, for instance) and more.
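The shared-savings mechanics described above can be sketched in a few lines of code. This is a deliberately simplified illustration with hypothetical figures: actual CMS formulas involve regional benchmarking, risk adjustment, and tiered sharing rates, and the function name, quality floor, and sharing rate below are all assumptions made for illustration, not the program's real parameters.

```python
# Illustrative sketch of a shared savings program (hypothetical numbers,
# not an actual CMS formula): an ACO shares in savings only if it spends
# below its benchmark AND clears a quality-performance threshold.

def shared_savings(benchmark: float, actual_spend: float,
                   quality_score: float, quality_floor: float = 0.75,
                   sharing_rate: float = 0.5) -> float:
    """Return the portion of savings the ACO keeps.

    benchmark: expected total cost of care for the year
    actual_spend: what the ACO actually spent
    quality_score: composite performance on quality measures (0-1)
    quality_floor: minimum quality score required to share in savings
    sharing_rate: fraction of any savings the ACO may retain
    """
    savings = benchmark - actual_spend
    if savings <= 0 or quality_score < quality_floor:
        return 0.0  # overspent the benchmark, or quality gate not met
    return savings * sharing_rate

# An ACO benchmarked at $100M that spends $92M with a 0.85 quality score
# keeps half of the $8M in savings under these assumed parameters.
print(shared_savings(100_000_000, 92_000_000, 0.85))  # 4000000.0
```

The quality gate in the conditional is the key design feature: it is what ties the financial reward to the data infrastructures for capturing and reporting quality measures discussed above.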
Importantly, this requires healthcare organizations to have robust data collection infrastructures capable of capturing and reporting on these broad and complex sets of measures. It may even require the provider organization to capture data about care that patients receive outside of its healthcare system. For instance, if patients receive their vaccinations at a retail location, the provider organization may seek a way to receive and incorporate that vaccination information to improve its performance on a particular vaccination-related quality metric. Likewise, the provider organization needs a mechanism not only to report this data at the end of the year but also to regularly track its performance in order to intervene or make improvements in real time. Making these improvements scalable may look like developing technical infrastructure to enable text-based notifications reminding patients about preventative care practices, or integrating prompts for providers within the EHR workflow about steps required to meet certain quality measures.
Broadly, policymakers imagine that the shift to value-based payment models will create a demand for more rationalized, information-driven forms of care. The 2015 “Interoperability Roadmap,” developed by the Office of the National Coordinator for Health IT (ONC), envisioned a growing need for provider access to more and more complete patient records: “When providers are rewarded for value, interoperability can be a significant tool to help them …”
Through intensified datafication, federal policymakers hope that quality metrics can become an ever more robust tool for exerting “market discipline” on healthcare providers. At a major health IT conference in 2022, former ONC National Coordinator Don Rucker spoke about the limitations of legacy quality measures as a tool for calculating value in healthcare, and the impact of API-based exchange on advancing the project of calculating and incentivizing value: “We came up with what was, at the time, a state-of-the-art pastiche of quality measurement as a proxy for consumer value, which I think is a pretty slim proxy … We've had these things that are half-solutions, because we don't actually have the data. We don't actually know everything … so we'll say, ‘Let's measure these things.’ They're really narrow … Now with big data we have the opportunity that retail has had—big data, volume, variety, velocity. That is what makes FHIR [APIs] powerful. We finally have a way to answer the value question in a totally different way.” (Don Rucker, HIMSS 2022, quoted by Jeff Lagasse in Healthcare Finance, March 17, 2022)
Risk adjustment
Along with quality measures, “risk adjustment” is a central part of many value-based payment models. Shifting from paying for individual services to paying for the efficient and effective production of health outcomes requires accounting for aspects of health outcomes that are not directly within providers’ control. Risk adjustment describes the practice of documenting and offsetting the financial accountability for health risks. For instance, within the Medicare shared savings program described earlier, providers report on key variables within their patient population—including age, sex, disability, and chronic health conditions—and CMS uses this data to adjust the expected costs and financial performance of the provider organization based on the expected health risks of the population being served. Populations that are “riskier,” or higher-needs, may justify a higher rate of healthcare expenditure.
Using data to justify higher expenditures on higher-needs populations seems, on its face, like a socially good practice. However, the realities of risk adjustment highlight the difficulties of using data and market design to perfectly align market incentives with the production of the public good. Under risk adjustment regimes, healthcare provider organizations are incentivized to document a patient's risk factors—their chronic illnesses, their previous substance abuse issues—in comprehensive detail. This is known as increased “coding intensity.” To mitigate the incentives toward maximizing patients’ apparent risk scores, CMS has taken steps to constrain the allowable rate of increase in “risk scores” for providers participating in value-based care payment models. Arguably, however, no amount of technocratic tinkering can minimize the violence of a market framework that seeks to circumscribe social health needs as a financial risk to be moved from ledger to ledger.
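The mechanics of risk adjustment, and of capping risk-score growth to blunt coding intensity, can be sketched schematically. The weights below are invented for illustration and only loosely mimic an HCC-style additive model; they are not actual CMS-HCC coefficients, and the 3% growth cap is likewise a placeholder for CMS's real coding-intensity constraints.

```python
# Invented illustrative weights for documented risk factors.
RISK_WEIGHTS = {"age_75_plus": 0.5, "diabetes": 0.3,
                "chf": 0.4, "disability": 0.25}

def patient_risk_score(factors, base=1.0):
    """Sum a base score with a weight for each documented risk factor:
    more documentation means a higher apparent risk score."""
    return base + sum(RISK_WEIGHTS.get(f, 0.0) for f in factors)

def adjusted_benchmark(base_benchmark, patients, prior_avg_score,
                       growth_cap=0.03):
    """Scale the cost benchmark by average population risk, capping the
    allowed year-over-year increase in the average risk score."""
    avg = sum(patient_risk_score(p) for p in patients) / len(patients)
    capped = min(avg, prior_avg_score * (1 + growth_cap))
    return base_benchmark * capped

patients = [["age_75_plus", "diabetes"], ["chf"], []]
# Average documented score is 1.4, but the cap limits the recognized
# score to 1.3 * 1.03 = 1.339, damping the payoff of coding intensity.
print(round(adjusted_benchmark(10_000_000, patients, prior_avg_score=1.3)))
```

Even in this toy version, the structural incentive is visible: each additional documented condition raises the benchmark, which is precisely why a cap on recognized risk-score growth becomes necessary.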
Through risk adjustment, certain aspects of health outcomes are circumscribed and deemed outside of the provider's direct control. To maximize the accounting of this risk, some insurers are increasingly interested in collecting “social needs” data, including through the integration of mental health records and data from community-based organizations. Sometimes the provider or insurer may implement programs to address these datafied health needs, such as referrals to social care to address housing or food instability. Other times, this data is collected purely as a financial accounting exercise, documenting health needs and then setting them aside as someone else's problem. And certainly, the hospital cannot be accountable for provisioning for all possible social care needs. Yet there is violence in a practice of datafication undertaken not to address needs, but to render them as an “externality,” outside of what either the healthcare provider or payer can be held accountable for.
Discussion
The state-as-platform
Through the story of the shift to value-based care, we can see how contemporary governance is rooted in intertwined processes of datafication and market redesign. This approach to governance draws explicit inspiration from the modes of control that platform companies exert as market- and data-intermediaries. As Viljoen et al. (2021) explain, “mechanism design”—the theoretical underpinnings for the practical art of market design—has come to be “regarded as an authoritative and generalizable science for engineering choice and distributing value in society,” undergirding and justifying policymaking in a wide variety of domains. Viljoen et al. explore how the ethical narratives of mechanism design, as a tool to achieve the optimal distribution of resources, are “dispensed with” in the context of platform design. They trace how platforms, in particular online advertising auctions and gig work platforms, use computational techniques to engineer markets “to achieve information asymmetry, to distribute social costs in ways that benefit designers, and to orchestrate behaviors and choices in their systems” (ibid.: 2).
The state turns to intensified datafication, instantiated through regulatory interventions like the HITECH and Cures Acts, not only to improve provider and patient access to care, but also to enable insurers, including CMS, to exert a more platform-like mode of control over healthcare providers. Policymakers like former ONC head Don Rucker see APIs and other modern health data infrastructures as enabling a more direct insinuation of market discipline into providers’ daily care practices, thereby aligning their behaviors with public value, in the sense of improved “return on investment”: lower costs, higher quality of care. This is precisely the mode of intensified, algorithmic management that impacts platform-based workers and, increasingly, the professionals who are “indirectly” managed through platform mechanisms.
Petre (2020) describes, for instance, the way that journalistic practices are reengineered to better align with management goals through platforms that quantify journalists’ performance on a wide variety of audience metrics. Introducing the idea of “engineering consent,” a play on Burawoy's classic notion of “manufacturing consent,” Petre argues that journalists do not necessarily experience this process as “algorithmic control.” Rather, the incentives and choices of the professional are reengineered through algorithmic feedback mechanisms. Likewise, media and platform studies scholars have explored the ways that social media platforms reengineer the behavior and subjectivity of their users to increase their legibility to advertisers. Wendy Chun's Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition highlights how platform interfaces and recommendation algorithms are constructed to reproduce stable, machine-legible subjectivities and communities (Chun, 2021). In the shift from Nielsen audience scores to platform-based audience calculation, both the platform user and the content produced are reprogrammed. Through a recursive interaction with algorithmic interfaces, both user behaviors and the content they are exposed to are nudged toward that which is most valuable for the platform itself.
Likewise, the shift in healthcare systems from quality metrics reported via chart abstraction to quality metrics reported in real time via APIs enables a recursive reprogramming of behavior. As calculations about the value of a provider's behavior shift from the “rearview mirror” to a real-time dashboard, the healthcare provider begins to reconfigure their behavior in real time as well. In their critique of the practice of “governing through value,” Mennicken and Muniesa (2017) highlight how accountability shifts as the clinician becomes the “asset manager.” Through changing modes of management, they describe the clinician as adopting a “financial imaginary” that shapes their everyday care practices. Likewise, via datafied value-based care, the healthcare provider is reprogrammed as a mini-insurer, internalizing the logics of population-level cost/benefit analyses.
This is the dream of “aligned incentives”: reprogramming care practices and care choices according to the reduced costs and improved population health outcomes in which the state is interested. This dream is what motivated the shift to managed care policies and what motivates the shift towards standardized, scalable health data-sharing infrastructures, enabling an intensified, platform-like mode of visibility and control over market actors. Just as platforms have adopted mechanism design techniques from economists seeking to design policy interventions (Viljoen et al., 2021), states seek to achieve the levels of datafication necessary to enact platform-like modes of market design. While other scholarship has examined the characteristics of “the platform state” in a wide variety of contexts (Kitchin, 2023; Sadowski, 2022; Singh, 2019; Törnberg, 2023), this article specifically argues that datafied market design is central to this emerging mode of platform-like governance.
Just as society has sought to grapple with the implications and impacts of platform control across many industries, we must grapple with the political implications of state governance that increasingly adopts techniques of platform control. For instance, platforms make decisions about optimizing for value in a very centralized way. They also exert total control over the data that is collected to calculate this value, and over the algorithms and interfaces that are constructed to shape user behavior: this operationalization of “value” and engineering of behavior toward it also tends to be opaque. Platform governance can be understood as a practice of centralizing and depoliticizing decisions about the definition of value, while intensifying the mechanisms of discipline and control in constructing user behaviors according to those definitions. Ultimately, this platform-like mode of governance exacerbates tensions and problems inherent in market design as a mode of governance.
Governing the public good beyond market design
Intensified datafication revitalizes the promise of market design to help states efficiently and effectively produce the public good. However, market design requires the public good to be operationalized in a very narrow, economized sense. When public data infrastructures are designed to enable market design practices, these infrastructures likewise reflect this narrow definition. To more clearly draw out the harms and limitations of datafied market design, this section compares value-based care policies with carbon markets designed to incentivize more socially responsible climate practices.
Problematization—Through the lens of market design
Seen through the market design mode of governance, the failure to produce the public good efficiently and effectively is understood as a problem contained within the market itself. Questions about whether markets are the best tool to address a problem, or are in fact the source of the problem, are excluded from this problem frame. In the case of the U.S. healthcare industry, for instance, the high costs and poor outcomes of care are problematized as a matter of “misaligned incentives” for healthcare providers. Likewise, a market-design lens on climate change points us towards poor market design, which fails to account for the negative externalities of climate impact. In both instances, it is assumed that markets can be optimized by incorporating additional variables as financial incentives: for example, the quantification of health outcomes, or the quantification of carbon emissions. In this account, the failure of markets to produce the public good is a problem of misaligned incentives, remediable through financial incentives that motivate corporate actors to innovatively pursue the efficient production of the public good.
Yet there are many other ways to problematize the high costs and low outcomes of care beyond the misaligned incentives of providers, and many other ways to understand climate change beyond the failure to sufficiently capture “negative externalities” within the market itself.
Within the healthcare industry, one might argue that making healthcare a matter of profit means that the production of health can only ever be a secondary concern relative to the production of profit. Likewise, in the climate change domain, one might argue that the very production of profit always hinges on the appropriation of “cheap natures” (Moore, 2016), and that no complete accounting or accountability of the environmental impact of markets is possible. Market design requires states to precisely calculate and quantify the public good and to make market actors financially accountable for its production: to make “the public good” and “profit” as close to synonymous as possible. This framework requires more and more intensified datafication to capture the public good in an increasingly precise way.
In their paper, “The organization of markets for collective concerns and their failures,” Frankel et al. (2019) note that a market's failure to produce the public good “does not pave the way for the introduction of non-market forms of solving collective problems.” Market design relies on an inherently conservative problematization: rather than consider nonmarket modes of governance, it assumes that the market can stay in place but must be reorganized, redesigned. Datafication opens up a new set of techniques for market designers to “assess, identify and repair market failures” (ibid.: 155), thereby extending and expanding the practice of technocratic tinkering instead of imagining other modes of governance beyond markets.
The “public good” as a market incentive
As markets are redesigned in this way, the public good is likewise constrained to a matter of technocratic engineering. In the market design mode of governance, the market designer is the ultimate arbiter of the public good. For instance, within value-based care payment models, valuable care is defined as the production of maximum health outcomes at a minimum of cost. Although patient experience is one dimension of CMS quality measures, quality metrics and the payment models and data infrastructures that support them undergird a definition of health that privileges population health outcomes and the efficiency of care. This definition of health deprioritizes the needs of the individual patient and the judgment of the individual provider in favor of a population-level calculus.
This is not to say that the state's definition of valuable care should be simply substituted with the definitions of patients or providers; there are likely significant disjunctures and fractures across the ways that “valuable care” would be defined by these groups. Any definition of “valuable care,” or the ideal outcomes produced through healthcare markets, is intensely political. Yet rather than seek to address these fractures and disagreements through a political process, market design replaces political decision-making with decentralized decision-making by self-interested rational actors. The market designer defines an ideal end state, designs financial instruments intended to make that end state more profitable, and leaves the rest of the decision-making to rational actors pursuing those financial incentives creatively and competitively.
For instance, in the case of cap-and-trade markets, the state defines a specific benchmark of acceptable emissions but also lets corporations buy and sell “allowances” for excess emissions. This theoretically creates a financial incentive for corporations to creatively and competitively reduce their emissions. Decision-making about what level of emissions to target, and how best to meet those emission standards, is devolved to individual corporate actors. Achieving the public good is operationalized not as a matter of collective concern, but as a matter of optimizing individual corporate finances. Likewise, under value-based care, the public good is centrally defined by states as a set of specific quality metrics and target acceptable costs, and provider organizations are individually responsibilized for finding creative ways to increase their performance on these metrics. In both instances, the public good is depoliticized, financialized—rendered as a technocratic benchmark rather than a matter of collective concern or debate.
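The financial logic of cap-and-trade described above can be reduced to a single line of accounting. The allowance price and cap below are invented for illustration; real carbon markets involve auctions, banking of allowances, and fluctuating prices.

```python
ALLOWANCE_PRICE = 30.0  # hypothetical price per tonne of CO2

def compliance_cost(emissions, allocated_cap, price=ALLOWANCE_PRICE):
    """Positive result = cost of buying allowances to cover excess
    emissions; negative result = revenue from selling surplus
    allowances."""
    return (emissions - allocated_cap) * price

# A firm 1,000 tonnes over its cap must pay to comply...
print(compliance_cost(11_000, 10_000))  # 30000.0
# ...while a firm under its cap earns revenue from its surplus.
print(compliance_cost(9_000, 10_000))   # -30000.0
```

This is the entire behavioral mechanism: the public good (lower emissions) appears to the firm only as a line item, a cost to minimize or a revenue stream to capture, which is precisely the depoliticized, financialized rendering the passage above describes.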
The economization of public data infrastructures
Although datafication always entails some degree of abstraction, categorization, and imperfect representation, it can nonetheless be an important tool to advance the governance of the public good. But when these data infrastructures are built to enable market design and a financialized accounting of the public good, the data and its utility for actual care practices are likewise constrained. For instance, the EHRs that were widely adopted as a result of HITECH are criticized as being cumbersome for healthcare providers, a “billing machine” (Paull, 2022) more aligned with producing documentation necessary for reimbursement from insurers than with the needs of clinical care. Likewise, designing data infrastructures around the capture of quality metrics—or carbon emissions for the purposes of cap-and-trade—means that the possibility of reusing this data for alternative modes of accountability or governance beyond market design is limited.
Broadly, data that quantifies health outcomes, disparities, and the inequitable distribution of resources has immense potential social use. What might it look like to design health data infrastructures for purposes other than building efficient, optimized markets? Ottinger et al. (2023) argue that while “patterns of alienation characterize big data,” there are also opportunities for data to be collected and used in ways that go beyond specific “regimes of value” like surveillance capitalism or data colonialism: “The questions for analysts and activists, then, are how particular practices of data collection and use may be weighted toward attunement or toward alienation; to what phenomena they attune or alienate us; and who is helped or harmed by these tendencies” (Ottinger et al., 2023: 3).
Beyond “computational accountability”
Market design, in both the health and environmental domains, attempts to make market actors accountable for the production of the public good. Yet this is a highly constrained form of accountability, rooted only in what can be calculated and accounted for via predetermined metrics and market devices. This “thin,” market-based accountability is readily subject to gamification and the “production of ignorance” (Kleinman and Suryanarayanan, 2013). Lippert, for instance, describes how this gamification unfolds in the context of carbon markets: “Once a carbon footprint is established, the organisation can ‘outsource’ the emissions it has calculated as having emitted (Ninan 2011): it can buy negative emissions via the carbon market and add these to its footprint, calculatively reducing its footprint by e.g., 20%, even by 100% to be carbon ‘neutral’ or by even more to be carbon ‘negative’.” This financial gamification of the “carbon footprint” implies “the risk of misjudging emissions at a global scale: subprime carbon (Chan 2009).” (Lippert, 2022)
It is this aspect of carbon markets that leads Lohmann (2005) to characterize them as successful—not in terms of reducing carbon emissions or environmental impact, but of “diverting financial and intellectual resources away from political actions and technological innovations that could stem the flow of fossil carbon from below- to above-ground” (Lohmann, 2005). In short, Lohmann argues that governance of climate change through carbon markets enables a diversion of accountability.
The same critique might be applied to the practice of value-based care: to what extent does datafication enable the continued displacement of forms of accountability for care beyond financialization or market discipline? How do datafied quality metrics substitute increasingly complex, nontransparent forms of computation for collective accountability that would ensure all health needs are met? And what might data infrastructures look like that would support alternative modes of accountability or relationality?
Conclusion
To date, scholars have critiqued both the politics of market design (Frankel et al., 2019; Nik-Khah and Mirowski, 2019b; Viljoen et al., 2021) and datafied and platformized modes of control (Chun, 2021; van Dijck, 2020). Through the case study of value-based care, this article explores the convergence of these phenomena via datafied market design. Specifically, this article demonstrates how the U.S. federal government has sought to implement standardized data-sharing infrastructures to enable a platform-like reprogramming of healthcare markets. In the face of pervasive market failures, datafication enables the eternal project of technocratic tinkering, designing markets that internalize “negative externalities” and make the production of profit synonymous with the production of the public good.
Parallels between the project of value-based care and climate change governance demonstrate the limits of how public value is conceptualized when decided by centralized market-makers and operationalized as a set of financial incentives or other market devices—and the limits on vital public data infrastructures when they are designed and instantiated with this kind of governance-through-market-design in mind. This analysis also suggests that the “computational accountability” that these policies seek to enact is necessarily thin, subject to gamification, and ultimately, a diversion from alternative forms of governance or accountability.
Acknowledgments
Thanks to colleagues in the Platform Economies Research Network and in the University of Michigan STS community for reviewing early drafts of this article. Immense appreciation to the many health IT professionals in Michigan and nationally who generously shared their time and insights with me as part of this research.
Ethical approval and informed consent statements
This study (#HUM00227935) was reviewed and approved by the University of Michigan Institutional Review Board on 13 March 2023. The requirement for written informed consent was waived, and interviewees provided verbal consent at the start of each interview.
Funding
Research that has informed this writing was in part supported by the National Science Foundation (Grant Nos. 1901171, 1513596, 1617898, and 1744359), the Lieberthal-Rogel Center for Chinese Studies at the University of Michigan, and The China-US Scholars Program (CUSP).
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability statement
The data underlying this article cannot be shared publicly due to the risk that sharing qualitative interview data may pose for the privacy of the individuals that participated in the study.
