Abstract
Sociological research on artificial intelligence (AI) is flourishing: sociologists of inequality are examining new and concerning effects of AI on American society, and computational sociologists are developing novel ways to use AI in research. The authors advocate for a third form of sociological engagement with AI: research on how AI can be publicly governed to advance equity in American society. The authors orient sociologists to the rapidly evolving AI policy landscape by defining AI and by contrasting two leading approaches to AI governance: a safety-based and an equity-based approach. The authors argue that the safety-based approach is the predominant one but is inadequate. They suggest that sociologists can shift AI policymaking to prioritize equity by examining corporate political power in AI policy debates and by organizing research around four sociological questions centered on equity and public engagement. The authors conclude with recommendations for supporting and coordinating policy-oriented research on AI in sociology.
Artificial intelligence (AI) is becoming more powerful and pervasive in everyday life. Whether knowingly or not, individuals now encounter AI technology and applications in quotidian and casual ways, such as when they search for something online using Google or write autocorrected documents and e-mails using Microsoft Office. Individuals also encounter AI in highly consequential ways, such as when they apply for jobs, housing, and social services or face loss of liberty when they are suspected of posing threats to public safety. Increasingly, sociologists are seeking ways to make sense of these rapid developments.
Within American sociology, engagement with AI currently takes two forms: inequality research on AI and computational research using AI. Inequality research on AI, which is rooted in a collection of agenda-setting studies in diverse fields (Benjamin 2019; Eubanks 2018; Noble 2018; O’Neil 2016), applies a range of methods to examine the proliferation of data surveillance and algorithm-based decision-making in American society, particularly as it relates to social services provision, employment, and the criminal legal system (e.g., Brayne 2020; Johnson and Zhang 2022; Kiviat 2023; McMillan Cottom 2020; for a review, see Burrell and Fourcade 2021). Computational research using AI, which is part of the flourishing field of computational social science (Edelmann et al. 2020; Lazer et al. 2009; Salganik 2017; see also Kesari et al. 2024), identifies novel ways to use AI, particularly machine learning and large language models, to address long-standing sociological concerns about inequality, social categorization, and political power (Bonikowski, Luo, and Stuhler 2022; Hwang and Naik 2023; Nelson 2021; Nelson et al. 2021; for reviews, see Brand, Zhou, and Xie 2023 and Molina and Garip 2019).
We suggest the need for a third form of sociological engagement with AI: research on how AI can be feasibly regulated through public policy to ensure that this technology advances equity in American society. In making this call, we follow DiPrete and Fox-Williams (2021), who advocate for “feasibility research,” or research on “the feasibility of change whether through policy intervention or through actions by individuals, organizations, and communities acting individually and collectively” (p. 24). DiPrete and Fox-Williams (2021) contrast this with a tendency within sociology to prioritize “frame-shifting research,” or research concerned with describing the extent of a problem and explaining how it is produced and reproduced. We will refer to important frame-shifting research on AI in sociology and other social science disciplines throughout this article, but our focus will be on introducing sociologists to emerging policy debates concerning AI and proposing critical avenues for further sociological research to intervene in these debates.
Specifically, we aim to demonstrate that this third form of sociological engagement is needed for two main reasons. First, it is needed because current policymaking efforts concerning AI are unduly constrained by the prominent involvement of technology companies pursuing private interests. To be sure, this dynamic is not limited to the AI industry, and we will describe the ways that tech firms are similar to and different from firms in other industries in this respect. Second, scholarly research on AI policy is, to date, led primarily by computer scientists, legal scholars, and economists. This is not to ignore the fact that influential sociologists have made crucial interventions to expand the purview and purpose of policymaking on AI toward more equitable and empowering ends for the public at large. This is especially evident in the development of the landmark Blueprint for an AI Bill of Rights (White House OSTP 2022), which was led by Alondra Nelson in her capacity as the Acting Director of the White House Office of Science and Technology Policy. Rather, we underscore the need to better understand how the Blueprint fits into the broader policymaking landscape concerning AI and how sociological research can reinforce and build upon the interventions initiated by the Blueprint.
We begin in the next two sections by orienting sociologists to the rapidly evolving policy landscape on AI. We discuss the challenges of defining AI and the importance of doing so in a way that reveals the technological as well as the social stakes involved in policy development around AI. Then, we compare two leading approaches that are emerging in AI governance: an equity-based approach anchored by the Blueprint and a safety-based approach anchored by the National Institute of Standards and Technology’s (2023) Artificial Intelligence Risk Management Framework. We argue that the latter is currently the predominant approach but is inadequate for governing AI, and thus the following two sections consider how sociologists can help shift AI policymaking to prioritize equity. We first discuss the political behavior of tech firms as one domain in which further research is needed to identify how corporate political power shapes the predominant focus of current policy debates on AI, and how that might change. We then turn our attention to identifying four sociological questions that can help sociologists to center AI as an issue of equity that concerns the public at large. We conclude by summarizing our argument and providing practical suggestions to help expand and strengthen the infrastructure for research on AI, equity, and policy in sociology.
Defining AI to Guide Public Policy
We suggest that any working definition of AI should aim to do four basic things, with the first three centered on the technology itself and the fourth centered on the technology’s social implications. First, a working definition should recognize that “AI is an umbrella term, comprised by many different techniques” such as large language models, computer vision, and speech recognition (Calo 2017:405). Second, a working definition should emphasize that AI is currently highly context-specific, which can both constrain and enable oversight and regulation, though this is changing rapidly with the advent of foundation models such as OpenAI’s GPT-4, which are models trained on broad data that can be used to complete a range of tasks (and which often use large language models as part of their architecture) (Bommasani et al. 2022; see also Jones 2023). Third, a working definition should be flexible enough to incorporate ongoing developments in AI.
Lastly, a working definition should make evident the potential of AI to intervene in the everyday life of Americans, often in an imperceptible manner. For instance, a recent Pew Research Center survey (Kennedy, Tyson, and Saks 2023) found that although most Americans report interacting with AI at least several times a week, they are largely uncertain of what AI is, with only 30 percent of Americans being able to correctly identify common applications that use AI. The survey also found that men, White and Asian Americans, and individuals with higher levels of education and income are more likely to correctly identify common uses of AI. This limited and skewed understanding among the public may hinder the safe and effective use of AI-based products and services. This limited understanding is also detrimental to the public interest because it precludes the public from engaging in “debates about the appropriate role—and boundaries—for AI” (Kennedy et al. 2023) and the democratic development of an “appropriate policy infrastructure” for AI (Calo 2017:407; see also Allen et al. 2024; Benjamin 2019; Noble 2018). 1
With these criteria in mind, we draw on the work of Joyce et al. (2021) and Calo (2017) to offer a working definition of AI: an approach seeking to mimic and augment human cognition, decision making, and expression that applies diverse computational techniques to massive data 2 in order to develop and deploy algorithms and models in (specific) social contexts and potentially in ways that are imperceptible and obfuscated to the public. 3 To date, the technology described in this definition is largely developed by private sector companies, which, as we discuss in the next two sections, are central to the current AI policymaking process in the United States. Table 1 provides an overview of the seven tech firms that have been identified by the Biden Administration as leaders in AI development: Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI. As Table 1 shows, these tech firms, which we define as organizations focused on developing and monetizing AI models and products, differ in terms of structure and activities (e.g., model development versus model access acquisition), which can present challenges for uniform policy development. We highlight generative AI activities among these tech firms because generative AI has become increasingly central to conversations on AI public policy (for a review, see Bail 2023).
Table 1. Leading Tech Firms in AI Development.
Note: AI = artificial intelligence; API = application programming interface.
Publicly traded companies are required by the U.S. Securities and Exchange Commission to report their finances on an ongoing basis. Data for Amazon, Google, Meta, and Microsoft come from their publicly available quarterly reports for the most recent quarter, which is the fourth quarter of 2023.
Safety versus Equity: Emerging Approaches to AI Policy in the United States
Although American tech companies are leaders in developing AI technology, AI policy in the United States is quite nascent. Prior to 2021, there were very few formal efforts around AI governance at the federal level. Researchers studying AI and society have argued that in the absence of national standards for AI governance, perhaps stemming from earlier unsuccessful efforts at regulation of social media and online search engines, 4 tech firms in the United States have largely operated on a de facto policy of self-regulation when it comes to AI (Allen et al. 2024; Benjamin 2019; Calo 2017; Noble 2018). Although it is true that tech firms ought to establish professional standards and ethics, 5 such standards cannot and should not substitute for policymaking, given that they have historically been unsuccessful in regulating industries due to a lack of enforcement and accountability mechanisms (Calo 2017). Moreover, such standards are limited to regulating behavior within industries and are inadequate for regulating the broader social impacts of AI beyond those accruing to direct users (Benjamin 2019; Noble 2018).
Currently, there is growing interest in public oversight and regulation of AI among federal lawmakers and officials, as well as among state lawmakers (see Appendix A for recently enacted state legislative bills on AI). Drawing on our own review of recent advances in AI governance at the federal level, we identify two competing approaches that are emerging in AI policy development: (1) a safety-based approach anchored by the National Institute of Standards and Technology’s (2023) AI Risk Management Framework and (2) an equity-based approach anchored by the White House Office of Science and Technology Policy’s (2022) Blueprint for an AI Bill of Rights. These approaches, which were both developed by the Biden administration around the same time, are not mutually exclusive but instead have different emphases. Both approaches recognize the novel implications of AI for American society and the importance of public oversight and regulation. However, they focus on achieving different outcomes and identify different targets for intervention.
Specifically, the two approaches reflect a long-standing tension in U.S. public policy between a dominant “economic style of reasoning” that prioritizes efficiency and market-based solutions and a rights-centered logic that was more common in the postwar period (Popp Berman 2022). This tension has manifested in policy debates over whether efficient management of risks or affirmation of rights is the preferred strategy for governing issues of public concern such as health care and the environment (Popp Berman 2022). For our analysis of AI policy debates, we label these two approaches using prominent terms from the main AI policy documents themselves, terms that represent the ultimate goals of the leading initiatives: safety (as the end goal of managing risks) versus equity (as the end goal of assuring rights). We discuss the two approaches, which are summarized in Table 2, to orient sociologists to the rapidly evolving AI policy landscape as well as to initiate conversations on how we can collectively organize our research on AI to advance current policy debates.
Table 2. Overview of Emerging Approaches to AI Governance.
Note: AI = artificial intelligence.
Safety-Based AI Governance
In 2023, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (AI RMF). NIST, which is an agency in the U.S. Department of Commerce with roots in the physical sciences, is principally concerned with creating AI systems that are “trustworthy” and identifies efficient risk management as a foremost strategy for governing AI. In particular, managing risks to human safety and national security is an essential feature of trustworthy AI systems. Accordingly, the AI RMF is designed to help organizations that design, develop, deploy, and use AI to define and manage risks. The AI RMF generally defines risk as “the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event” (NIST 2023:4). Risks are also defined specifically in relation to an organization’s “risk tolerance,” or its “readiness to bear the risk in order to achieve its objectives,” and “risk prioritization,” or its allocation of available resources to address risks (NIST 2023:7). To manage risks, the AI RMF recommends that organizations adopt four “functions”: (1) “govern” through “a culture of risk management,” (2) “map” risks within contexts, (3) “measure” risks through assessment, analysis, and tracking, and (4) “manage” risks by prioritizing and acting upon them (NIST 2023:20).
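To make the AI RMF’s logic concrete, consider a minimal sketch (in Python) of how an organization might operationalize the framework’s composite risk measure and risk tolerance. NIST (2023) does not prescribe a formula or any implementation; the multiplicative scoring, field names, and threshold logic below are our own illustrative assumptions.

```python
# A minimal, hypothetical sketch of the AI RMF's "composite measure" of risk.
# NIST (2023) prescribes no formula; the multiplicative scoring, field names,
# and threshold logic here are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Risk:
    event: str
    probability: float  # estimated likelihood that the event occurs (0-1)
    magnitude: float    # severity of the consequences (e.g., on a 0-10 scale)

    @property
    def score(self) -> float:
        # Composite measure: the event's probability of occurring times the
        # magnitude of the corresponding consequences (cf. NIST 2023:4).
        return self.probability * self.magnitude


def prioritize(risks: list[Risk], tolerance: float) -> list[Risk]:
    """Return the risks exceeding the organization's risk tolerance,
    ordered so that the largest composite scores are addressed first."""
    exceeding = [r for r in risks if r.score > tolerance]
    return sorted(exceeding, key=lambda r: r.score, reverse=True)


risks = [
    Risk("model leaks personal data", probability=0.10, magnitude=9.0),
    Risk("chatbot gives off-topic answers", probability=0.60, magnitude=1.0),
]
for r in prioritize(risks, tolerance=0.5):
    print(f"{r.event}: composite score = {r.score:.2f}")
```

Even this toy example makes visible a political feature of the framework: what counts as an acceptable “tolerance” is set by the organization itself rather than by the public.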
Since the release of the AI RMF, the Biden Administration has taken steps to implement this safety-based approach to AI governance, with growing support from tech firms. In 2023, the Biden Administration secured voluntary commitments from leading tech firms—including Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI—to “help advance the development of safe, secure, and trustworthy AI” (White House 2023b). Participating tech firms commit to taking specific steps when developing new foundation models, such as (1) “ensuring that products are safe before introducing them to the public,” (2) “building systems that put security first,” and (3) “earning the public’s trust” (White House 2023b). 6 They also agree to increase information sharing with one another to address safety risks stemming from AI, with the AI RMF named specifically as a potential tool for guiding such collaboration (White House 2023d). Relatedly, several leaders of tech firms advocated for a more precise and flexible risk-based approach to regulation in recent public hearings about AI organized by the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
President Biden provided further support for a safety-based approach to AI governance through a recent executive order. The 2023 Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directs federal action toward several priorities aligned with a safety-based approach to AI governance, such as establishing “new standards for AI safety and security” and “promoting innovation and competition” (White House 2023c; see also White House 2023a). Specifically, the executive order calls for the establishment of an AI Safety and Security Board and for safety reporting requirements for AI developers, and it allocates new national security investments to protect the country from AI-enabled attacks. As the executive order demonstrates, the safety-based approach is a particularly attractive strategy for federal lawmakers and officials because it reinforces a “dual focus on national security and global economic competitiveness,” which have been “the twin peaks of U.S. policy for seventy years” (Allen et al. 2024:6).
Equity-Based AI Governance
In 2022, the White House Office of Science and Technology Policy (OSTP) outlined an equity-based approach to AI policy in The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. The White House OSTP, which was led at the time by sociologist Alondra Nelson, sought to create equitable AI systems by protecting the rights of Americans, particularly in terms of “civil rights, civil liberties, and privacy” (White House OSTP 2022:4). Equity is defined as “the consistent and systematic fair, just, and impartial treatment of individuals” that “must take into account the status of individuals who belong to underserved communities that have been denied such treatment” (White House OSTP 2022:10). To ensure that AI systems are equitable, the Blueprint identifies five affirmative principles to guide AI development and use and provides recommendations for organizations across various sectors to implement these principles in practice. The five principles affirm and support Americans’ rights to (1) “safe and effective systems,” (2) “algorithmic discrimination protections,” (3) “data privacy,” (4) “notice and explanation,” and (5) “human alternatives, consideration, and fallback.”
There have been some federal efforts to implement the Blueprint and its equity-based approach to AI governance, though these efforts are fewer and less concrete compared with those supporting the AI RMF and its safety-based approach. Namely, President Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of AI calls for federal action on several priorities aligned with an equity-based approach, such as “protecting Americans’ privacy,” “advancing equity and civil rights,” “standing up for consumers, patients, and students,” and “supporting workers” (White House 2023c; see also White House 2023a). The executive order also requests that federal agencies explore how they can protect civil and labor rights from potential infringements stemming from AI systems.
Although tech firms have been less vocal in their support for the Blueprint than for the AI RMF, despite having been engaged in the Blueprint’s development, this equity-based approach to AI governance has found support from other groups. Namely, the Blueprint has been lauded by civil sector organizations, such as Data & Society and the Center for Democracy and Technology (e.g., Haven 2022; Reeve Givens 2022). The Blueprint is also mostly aligned with several AI governance efforts occurring outside of the United States (for a review of normative frameworks for governing AI from around the world, see Allen et al. 2024). For example, the European Union (EU) has established protections for citizens’ data rights through the 2018 General Data Protection Regulation and is currently considering the EU AI Act, which would “ensure that AI systems are safe and respect fundamental rights and EU values” (Gibney 2024). This alignment of the Blueprint with international initiatives may prove increasingly beneficial as the United States engages other countries and governing bodies to develop policies and processes for global AI governance.
Our discussion of the two leading approaches to AI governance—a safety-based approach and an equity-based approach—illustrates that there are clear tensions in current policy debates about public oversight and regulation of AI. For sociologists concerned about AI policy, this means that it is no longer enough to call for more public governance of AI, a need that is now increasingly recognized and understood by federal lawmakers and officials, civil sector groups, and even tech firms. Rather, it is necessary to articulate what exactly public governance of AI should entail.
Our review suggests that AI policy development is increasingly consolidating around a safety-based approach. Although human safety and national security are important concerns, the safety-based approach focuses on risks while underappreciating the implications of AI for equity in American society, which, as the Blueprint recognizes, are quite profound. Moreover, the market-based orientation of this approach defers to tech firms and does little to check their growing power. We suggest that sociologists draw on their long-standing expertise in inequality, politics, and knowledge production to intervene in current policy debates about AI, specifically by challenging the predominant focus on a narrowly construed concept of “safety” and by asserting equity as a public priority. We propose two immediate research tasks, which we discuss in the next sections: (1) describing and examining the power of private corporations in shaping AI policy and (2) organizing existing and new AI research around a coherent framing in which AI is an issue of equity that concerns the public at large (rather than merely an economic and technocratic issue of private interest).
Examining Technology Firms as Political Actors
In order for sociologists to shift AI policymaking from its current narrow focus on safety to a broader focus that includes equity, it will be helpful to understand how tech firms behave as political actors. Tech firms have been adept at limiting policy debates about AI, ensuring that such debates advance their economic and political interests, as exemplified by their current efforts to advocate for a safety-based approach to AI governance. When policy debates on AI reflect the undue influence of tech firms, they are unlikely to yield equitable solutions and, more generally, should not be considered genuine debates conducted with the public interest in mind. Thus, addressing the role of corporate political power in the AI policymaking process is a necessary first step toward expanding and improving policy debates about AI.
To begin to address the role of corporate political power in the AI policymaking process, which is currently shaping policy developments to focus narrowly on risk management, sociologists can conduct theoretical and empirical research on the political activities of leading tech firms. Sociologists have a long tradition of examining the political power of corporations in American society (for a review, see Walker and Rea 2014), from analyses of the early influence of the railroad industry (Dobbin 1994) to analyses of the contemporary banking industry (Carruthers and Kim 2011). Sociologists are also well versed in theorizing about the peculiarities of American corporations more generally (e.g., Fligstein 1993; Knight 2023; Roy 1999). In recent years, sociologists have contributed valuable insights into how social media platforms can contribute to polarization and misinformation in political discourse (e.g., Bail et al. 2018; Rafail, O’Connell, and Sager 2024), and how tech companies’ often uncritical uses of data can further disempower marginalized communities (e.g., Benjamin 2019; Brayne 2020; see also Bender et al. 2021). However, there is yet to be a sociologically grounded articulation of how tech firms behave as political actors in AI policymaking.
Drawing on recent work in political economy by Rahman and Thelen (2019) and McMillan Cottom (2020), who examine the relationship between digital platforms and capitalism, and Young, Banerjee, and Schwartz (2018), who examine contemporary corporate political strategies during the Obama administration, we conceive of tech firms as political actors by identifying and unpacking three key features of these organizations. These three features are: (1) profitability based on distinctive brokerage of data, (2) advocacy of economic interests via diverse political tactics, and (3) maintenance of power by reconfiguring political relations. The first feature distinguishes tech firms from corporations previously examined by sociologists, while the second and third features are extensions of long-standing corporate political behaviors that have been adapted by tech firms.
Profitability through Data Brokerage
Leading tech firms, particularly those that are publicly traded, have a foremost interest in maximizing profits, and in this way, they are not so different from more traditional companies. Still, tech firms are also distinct in some respects, and Rahman and Thelen’s (2019) examination of “platform firms” is useful for understanding their particularities. What distinguishes “platform firms,” such as Amazon and Uber, from firms dominant throughout the twentieth century, including “stakeholder firms” such as General Motors and “network of contracts” firms such as Nike, is that these newer firms have the ability to create new forms of value. Specifically, platform firms typically leverage data to create profitable markets and then also serve as critical intermediaries in those markets. That is, data and the ability to privatize data are key to their competitiveness and profitability. For example, Uber collects and uses data on available passengers and drivers within cities to create local markets in which it brokers ridesharing. We think these platform dynamics also apply to AI-focused tech firms: many platform firms, such as Amazon and Meta, are themselves heavily invested in AI, and AI-focused organizations, such as Anthropic and OpenAI, likewise leverage data to develop consumer-facing chatbots that they seek to monetize.
Adapted and Expanded Corporate Political Tactics
Like other profit-maximizing companies, tech firms develop and use political tactics to advance their economic interests. However, as their products and activities face growing public scrutiny, tech firms are working to expand their political strategy to include not only long-standing corporate political activities but also more novel tactics. Tech firms regularly seek to assert their interests through traditional political activities such as lobbying. For example, leading tech firms reportedly spent nearly $70 million combined on lobbying activities in 2021 (Zakrzewski 2022), a much larger amount than was invested by two groups that are historically top spenders on federal lobbying: the National Association of Realtors ($44 million) and the Pharmaceutical Research and Manufacturers of America ($30.4 million) (O’Connell and Narayanswamy 2022).
Although tech firms’ use of lobbying and other traditional political activities merits more scholarly attention, we want to highlight more subtle political tactics that many tech firms are using in current policy debates about AI. Here, we seek to extend the work of Young et al. (2018), who argue that “scholars have overemphasized overt mechanisms [for business political influence] such as campaign finance and lobbying” while underexamining other ways that businesses assert political power, such as through “capital strikes” that leverage their “structural control over the economy” (p. 4). We discuss two such strategies: (1) strategic obfuscation and (2) aligned action.
One political tactic that is used by tech firms and that remains empirically underexamined is what McMillan Cottom (2020) conceptualizes as strategic obfuscation. Building on Pasquale’s (2015) work on corporate data secrecy, McMillan Cottom (2020:443) argues that platform firms use obfuscation to protect their economic and political power by privatizing information that would be necessary for public deliberation and accountability as well as by promoting “obfuscation as a logic” in and of itself. When it comes to AI, tech firms likewise appear to exercise strategic obfuscation by, for example, making their models “closed” or proprietary; indeed, among the seven leading tech firms, all but one (Meta) have developed generative AI models that are proprietary as opposed to open source.
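The practical meaning of “closed” versus open-source models is easiest to see in how each is accessed. The sketch below (in Python) contrasts the two modes of access; the openai and Hugging Face transformers packages are real, but the specific model identifiers are placeholders, and even open models such as Meta’s Llama family typically require accepting a license before the weights can be downloaded.

```python
# Illustrative contrast between a proprietary ("closed") model and an
# open-weights model. Model identifiers below are placeholders.

# Closed model: weights, training data, and architecture details remain
# proprietary; every query is mediated by the vendor's servers.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable
response = client.chat.completions.create(
    model="gpt-4",  # served remotely; internals are not inspectable
    messages=[{"role": "user", "content": "Describe your training data."}],
)
print(response.choices[0].message.content)

# Open-weights model: researchers can download the weights, inspect the
# architecture, and run the model locally (subject to the model's license).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; requires license acceptance
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
print(model.config)  # architecture details are openly inspectable
```

The asymmetry matters for governance: auditing a closed model depends entirely on what the vendor chooses to disclose, which is precisely the leverage that strategic obfuscation protects.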
One development that differentiates tech firms’ current use of strategic obfuscation from past corporate uses of this political tactic is that tech firms frame opacity in their public statements as driven not by self-interested business concerns (e.g., intellectual property, consumer privacy) but by the novelty and complexity of the technologies themselves. Although AI interpretability, or the ability of researchers to understand exactly how AI models work, is a well-documented challenge given models’ complex architecture (e.g., Murdoch et al. 2019; Stoyanovich, Van Bavel, and West 2020), and tech firms such as Anthropic have invested in addressing this challenge, it is unclear exactly how tech firms are seeking to make their models more interpretable, or even how they define interpretability. Tech firms’ calls for the public to trust their ability to improve the interpretability of their models also inhibit discussion of whether models should simply not be deployed when levels of interpretability are unacceptable.
Aligned action is another strategy used by tech firms. Young et al. (2018) extended the work of neoinstitutionalists (e.g., DiMaggio and Powell 1983; Zajac and Westphal 2004) by showing in their case study of large banks and corporations during the Obama Administration that firms often coordinate with one another in responding to regulatory challenges, far more often than would be predicted by theories defining firms primarily as rational economic actors. We suspect that tech firms likewise coordinate with one another often to respond to challenges from regulators as well as from social movements. For example, during recent Senate hearings on AI oversight, leaders of several tech firms shared similar recommendations for the development of narrow, risk-specific regulations of AI. A leader at IBM called for “Congress to adopt a ‘precision regulation’ approach to artificial intelligence,” which they defined as “establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself” (Montgomery 2023:4). Two months later, a leader at Anthropic testified that biological and national security threats should be foremost concerns for AI regulation (Amodei 2023).
Tech firms show similar alignment in responding to social protests. For example, Hamilton (2020) analyzed 63 public statements made by tech companies in response to nationwide protests against anti-Black police violence during the summer of 2020 and found similar linguistic tendencies, namely, a hesitancy “to even use the word ‘race’” and few references to White Americans. These examples suggest that tech firms are likely aligned in their political responses, but more empirical examination is needed to document the extent to which tech firms coordinate political action and to explain how such coordination occurs (e.g., through interlocking directorates, continuous exchange of talent).
Influence through Reconfigured Political Relations
The data-driven AI technology developed by tech firms enables them to not only gain profits but also maintain their influence by reconfiguring political relations. Rahman and Thelen (2019) and McMillan Cottom (2020) argue that platform firms differ from their corporate predecessors in terms of how they situate three core groups of economic stakeholders: investors, workers, and consumers. For instance, as McMillan Cottom explains, platform firms have largely succeeded in transforming consumers into “prosumers” (or productive consumers) and normalizing expectations that consumers will not only purchase products and services but also serve as “site[s] of extraction” (p. 443), providing valuable yet uncompensated data and content for firms to further monetize (see also Benjamin 2019; Noble 2018). Additionally, Rahman and Thelen explain that although late twentieth-century firms relied on mobilizing managers and investors against labor to advance their interests, platform firms have instead focused on mobilizing investors and consumers against laborers, a strategy that harkens back to the postwar period (Jacobs 2005). This strategy is particularly effective in the United States because of a weak labor movement and strong consumer protections.
We believe that this reconfiguration of political relations ushered in by platform firms—in which consumers are simultaneously empowered and exploited, labor is marginalized, and investors can focus on extraction—is also salient for understanding the political behaviors of AI-focused tech firms. There is likewise a need to examine stratification within the AI tech workforce and its implications for labor organizing (e.g., Twine 2022).
In sum, we have extended recent work in political economy to sketch the beginnings of a sociologically grounded articulation of how tech firms behave as political actors in AI policymaking—that is, how they define their interests, how they select, develop, and use political tactics, and how they mobilize (and demobilize) various stakeholders. Although there are some novel features to tech firms and their political activities that require new research, the large and rich body of research on American corporate political power that sociologists have built over decades remains useful for understanding them. Data about tech firms have been notoriously difficult for researchers to obtain, but more opportunities for research may emerge as tech firms engage in ongoing political activity and that activity is publicly documented.
Centering Equity and Public Engagement
As we have argued, AI policy development in the United States is coalescing around a safety-based approach that focuses on the efficient management of risks. This approach, while important for addressing human safety and national security, is inadequate for ensuring the development of equitable AI systems that can assure and strengthen the rights of Americans as well as check the undue influence of tech firms. In addition to describing and examining how tech firms behave as political actors in AI policymaking, as discussed in the previous section, sociologists can attempt to shift current AI policy debates by organizing existing and new AI research around a coherent framing of AI as an issue of equity that concerns the public at large rather than merely an economic and technocratic issue of narrow private interest.
In this section, we draw on research on AI and society as well as work in democratic innovation to outline four sociological questions that can begin to help sociologists reframe AI as a matter that is of pressing and fundamental interest to the public and to explore possibilities for equitable governance of AI through more robust democratic deliberation. We intend for these questions to motivate new areas of sociological research on AI as well as to advance core areas of research in the discipline on inequality, politics, and knowledge production.
How Does AI Interact with Social Structure?
Tech firms developing AI technology tend to focus on the benefits and risks of their products as they relate to users. This tendency to focus on users results in what Brayne, Lageson, and Levy (2023:478) aptly describe as “the individualization of social problems,” in which social problems are understood as individual problems that can often be fixed with technological products and services (see also Benjamin 2019). But as discussed previously, AI affects not only direct users but also the broader groups, communities, institutions, and networks to which they are inextricably tied (Benjamin 2019; Joyce et al. 2021; Noble 2018; see also Allen et al. 2024).
As such, understanding the impacts of AI, as well as how AI can be governed, requires structural analysis. As one example of how to shift the lens from the individual to the social, sociologists might examine how the advent of OpenAI’s ChatGPT intervenes in the college application process by altering the ability of college applicants across different social groups to seek feedback on essays and guidance on financial aid as well as spurring universities and colleges to adopt different responses to suspected and actual chatbot use. The goal of this research would be to understand how an intervention that could widen public access to information about the college application process might mitigate or even possibly exacerbate existing and well-known disparities in college admissions and enrollment (for example, see Alvero et al. 2024).
How Do Different Social Groups Define and Advocate for Inclusion and Equity in AI?
Research on AI and society has demonstrated how tech firms prioritize equal access to their products as a primary benchmark of social inclusion, mirroring early efforts to close the “digital divide” (Benjamin 2019; McMillan Cottom 2020; Noble 2018). Yet this research also makes the crucial point that this form of inclusion goes hand in hand with the extraction of value and exposure to additional harms among marginalized users of the technology.
Exploring possibilities for equitable governance of AI will require, at minimum, critical examination of AI expansion. If vulnerable populations are disproportionately and often unwillingly subject to AI technology, one could ask, to take one example, whether the increasing use of privately developed AI technology by public agencies to streamline (and often curtail) the provision of social services (Alegria and Yeh 2023; Eubanks 2018) indicates inclusive or equitable expansion of AI applications (as tech firms may claim). More broadly, sociologists can examine how diverse social groups define their interests as they relate to AI. As Benjamin (2019) emphasizes, marginalized communities are not simply subject to AI but instead have innovated their own visions and uses for this technology. For example, she describes how renters and landlords might find different (and opposing) uses for AI (Fong 2021). Sociologists could also examine the social, economic, and political factors that explain why one strategy prevails over another at a given time.
What Constitutes Public Information versus Proprietary Data?
As previously discussed, the tech firm business model is centered on the privatization of data: firms collect droves of data, including information on activities occurring on their platforms and applications as well as publicly available information, and then generate profit from their private use of these data. For example, OpenAI’s GPT models are trained in part on publicly available information from Wikipedia, yet the company restricts access to its models as well as information about how these models were developed because the models themselves are proprietary (Gertner 2023). The growing datafication or privatization of public information not only circumscribes the public’s pursuit of knowledge but also prevents individuals harmed by AI technology from obtaining information needed to seek legal recourse (Brayne et al. 2023; McMillan Cottom 2020; Noble 2018).
Future sociological research on AI must necessarily take up Noble’s (2018) prescient call for “a much-needed reassessment of information as a public good” (p. 5). For example, sociologists could extend Noble’s (2018:152) call for “public search engine alternatives” to call for open-source AI model development and examine the infrastructure, investments, processes, and actors needed for public AI. More ambitiously, sociologists could explore how public knowledge should be defined today and what affirmative rights to such knowledge would entail.
How Can Democracy Be Practiced and Affirmed through AI Governance?
Finally, we ask sociologists concerned about AI policy to consider not just how our existing institutions might govern AI but also how AI might require the development of new processes of democratic governance. Scholars of inequality often adhere to a narrow reading of Rawlsian justice, prioritizing the right to a minimum degree of material security, but a more expansive approach to justice and freedom encompasses much more (Sen 1999). Here, we draw in particular on the work of political philosopher Danielle Allen (2023), who calls for innovations in democratic governance that are centered on “power sharing” as a benchmark for full social, political, and economic inclusion in society. When it comes to AI, Allen and her co-authors argue that answering questions of how we should govern this emerging technology “is a chance not merely to categorize and manage risk but also to construe the risks and opportunities much more broadly, and to make correspondingly large investments in public goods, personnel, and democracy itself” (Allen et al. 2024:1).
Future sociological research should consider what genuine power sharing means within the context of AI. One such question that sociologists could pursue is whether greater investments in open-source generative AI models in the academy will truly result in more power sharing, given that computational expertise and resources are currently unequally distributed across institutions (Kesari et al. 2024), and academic institutions have themselves often engaged in extractive research practices that harm marginalized communities (D’Ignazio and Klein 2020). Another question that sociologists could address is how AI oversight and regulation can be designed to protect not only the “negative” rights of citizens (e.g., rights to privacy) but also their “positive” rights to meaningfully participate in the governance and benefits of this technology.
Conclusion
This moment in the evolution of AI is an opportune one for sociologists to examine the full extent and consequence of AI’s growing embeddedness in society. But sociologists will need to go beyond descriptive and explanatory studies of AI’s impact on society and beyond development of novel methodological applications of AI, as important and necessary as this work is, to engage in policy-oriented discussions of a future in which AI both avoids harmful impacts on society and serves the public in an expansive way. We have drawn from an emerging literature across a disparate set of fields to provide a starting point for sociologists to think about how they can engage in such research and public discussion. The ultimate aim of this work is to shift the current AI policy landscape from one that is focused on mitigating a narrow set of risks and highly deferential to the private interests of tech firms, to one that is centered on advancing equity and in which every American has a vested stake.
As a final note, we wish to underscore two ways in which equity-focused research on AI can become more firmly established both inside and outside the academy. First, sociologists need to recognize both the novelties of AI and the ways in which it reflects long-standing patterns of social, political, and economic inequality. We have attempted to strike that balance, believing that much existing scholarship, advocacy, and activism—within and outside the arena of technology—can be harnessed to better understand the current policy terrain vis-à-vis AI. Key features of this terrain are apparent in other domains, most especially in the power of corporations in U.S. politics and the lack of democratic representation and participation in the policymaking process across a wide range of issues. We, like the many scholars whose research we have cited, see these features as foremost impediments to the transformations called for above. Within the academy, there may be fewer impediments to doing the research needed to better understand how to transform the current policy landscape concerning AI, but they still exist. We encourage senior researchers, journal editors, funders, and academic administrators to take up that challenge by creating accessible spaces for scholars of all backgrounds and levels of experience to pursue that path.
Second, we echo and extend Ciocca Eller’s (2024) call for social scientists to address the “coordination problem” in evidence-based federal policymaking. She is referring to the need for social scientists in the academy, the private sector, and the public sector to work in a more coordinated fashion to advance social science research across a wide range of issues. In the domain of AI, it is notable that computational social scientists are increasingly opting to work in the private sector, including at many of the leading AI tech firms we have discussed in this article (Kesari et al. 2024). Dedicated and regular coordination is a serious task; there are real differences in organizational objectives, and the challenges to improving coordination between the academy and the public sector (e.g., staff capacity) differ from those impeding coordination between the academy and the private sector (e.g., protection of intellectual property). Yet if we as social scientists share a fundamental belief in the value of social science in advancing social, economic, and political well-being for all, we must commit to and take steps toward ensuring that our research is and remains a truly collective endeavor.
Appendix A
Enacted State Legislative Bills on Artificial Intelligence in the United States, 2019 to 2022.
| Year Enacted | State Legislative Bill |
|---|---|
| 2022 | Colorado H 1335: Producer Responsibility Program for Recycling |
| 2022 | Colorado S 113: Artificial Intelligence Facial Recognition |
| 2022 | Idaho H 720: Personhood Designations |
| 2022 | Maine H 1199: Equity in Policy Making |
| 2022 | Maryland H 1205: State Government Information Technology |
| 2022 | Vermont H 410: Artificial Intelligence Commission |
| 2022 | Washington S 5693: Supplement Operating Appropriations |
| 2021 | Alabama S 78: Council on Advanced Technology |
| 2021 | Alabama SJR 104: Recognition Resolution |
| 2021 | California A 1228: Supervised Persons: Release |
| 2021 | Colorado S 169: Restrict Insurers Use of External Consumer Data |
| 2021 | Illinois H 645: Future of Work Act |
| 2021 | Maryland H 658: Digital Economy Workgroup |
| 2021 | Maryland S 44: Emerging Digital Economy Study |
| 2021 | Mississippi H 633: Computer Science Curriculum |
| 2021 | New Jersey S 2723: 21st Century Integrated Digital Experience Act |
| 2021 | Ohio H 110: Biennium Operation Budget |
| 2021 | Oregon H 3284: Collecting, Using or Disclosing Personal Data |
| 2021 | Virginia H 2154: Hospitals and Nursing Facilities |
| 2021 | Washington S 5092: Fiscal Biennium Operating Appropriations |
| 2020 | Alabama H 187: Public Education Appropriations |
| 2020 | Maryland H 49: Pretrial Risk Assessment Instruments |
| 2020 | Massachusetts H 5250: Partnerships for Growth |
| 2020 | Utah S 96: Emerging Technology Talent Initiative |
| 2019 | Alabama SJR 45: Honorary Resolution |
| 2019 | Alabama SJR 71: Artificial Intelligence and Associated Technologies |
| 2019 | Arkansas S 656: Data Sharing and Decision-Making Task Force |
| 2019 | California A 485: Local Government: Economic Development Subsidies |
| 2019 | California S 36: Pretrial Release: Risk Assessment Tools |
| 2019 | California SJR 6: Artificial Intelligence |
| 2019 | Delaware HCR 7: Artificial Intelligence Resolution |
| 2019 | Hawaii SR 142: Artificial Intelligence |
| 2019 | Idaho H 118: Pretrial Risk Assessment Algorithms |
| 2019 | Illinois H 2557: Artificial Intelligence Video Interviews (amended in 2021) |
| 2019 | New York S 3971: Commission to Study Artificial Intelligence |
| 2019 | Oregon S 138: Prescription Drug Treatment of Mental Health Disorders |
| 2019 | Texas S 64: Cybersecurity for Information Resources |
Note: Data on state legislative bills are from the National Conference of State Legislatures (2023). We exclude eight bills in which the role of artificial intelligence was unclear.
