Abstract
After the Dobbs decision ended federal abortion protection in the United States, experts raised concerns about digital data collected from people seeking abortions. U.S. technology corporations—Google, Apple, Microsoft, Meta, and Amazon (GAMMA)—were conspicuously silent on the decision itself. Instead, they released statements and/or policies articulating commitments to data privacy seemingly incongruous with their surveillance-based business models. We examine GAMMA’s policies, statements, and associated news coverage post-Roe through the lenses of commodity activism and the politics of care. We reveal recurring discourses that cast technical privacy features, paired with scrupulous data practices by users, as sufficiently protective, and that constrain the purview of company responsibility to full-time employees. A focus on responsible data management sidesteps critiques of data collection, framing GAMMA’s policy changes as corporate care while furthering the commodification of individual privacy, reproducing the neoliberal subject, and upholding surveillance capitalism.
After the leak of the Dobbs v. Jackson Women’s Health Organization decision that overturned Roe v. Wade, and with it the federal right to abortion in the United States, technology companies made a series of (officially unrelated, but conspicuously timed) public statements in support of user privacy: Apple released an advertisement showcasing privacy features (O’Flaherty, 2022); Google promised to delete location data of abortion clinic visitors (Grant, 2022); and Meta announced testing of default end-to-end encryption (E2EE) on Messenger and Instagram Direct (Newman, 2022). On the surface, these declarations cast Big Tech as entities that take care to protect their users by placing users’ safety over profits. Deleting users’ location history or instituting E2EE threatens a business model based on selling user data to advertisers (Zuboff, 2015). Such data-collection practices ultimately put both companies and users at legal risk, as in the widely publicized case of the Nebraskan mother and daughter who were arrested after police were given access to Facebook data about the daughter’s self-managed abortion in response to a warrant (Bhuiyan, 2022).

Apple billboard in San Francisco, August 2023 (photograph author’s own).
Given the controversial nature of abortion, Big Tech’s strategic actions supporting user privacy appear, in some ways, to be acts of activism. The entanglement of corporations and politics has a long history in the United States but has proliferated under neoliberalism. Because neoliberalism operates under the belief that “human well-being can best be advanced by liberating individual entrepreneurial freedoms” (Harvey, 2007, p. 2), the market becomes a site of expression for political ideals. Mukherjee and Banet-Weiser (2012) proposed the concept of “commodity activism” to “assess the cultural resonance and tactical significance of marketized modes of ‘fighting back’” within this economic era and in light of rapid changes to media production and use (p. 3).
We utilize the framework of commodity activism to investigate how American Big Tech corporations responded to the Supreme Court ruling that ushered in the post-Roe era. Specifically, we engage with Google, Apple, Microsoft, Meta, and Amazon—or GAMMA—the five American tech giants. We examine GAMMA’s public statements, policy changes, and respective news coverage and evaluate them through a feminist framework of care. We ask: How is commodity activism enacted by technology corporations to both challenge and uphold surveillance capitalism?
When the U.S. Supreme Court overturned the federal right to abortion with the Dobbs decision, abortion activists, security experts, and privacy researchers immediately expressed concern over the role online data would play in the criminalization of pregnant people, activists, and doctors (Kelly et al., 2022). Data in the form of healthcare information, location, advertising, financial transactions, Google searches, and social media activity are in many cases accessible for free or for purchase by the public and marketers (Sherman, 2022), putting individuals at risk of accidental or malicious leakage of abortion-related information. Given that abortion is now banned or restricted in 21 U.S. states (The New York Times, 2024), private information about abortion seekers, abortion providers, and those who have had abortions could—and, in the aforementioned Nebraska case, did—contribute to the criminalization of such individuals. Relatedly, America’s patchwork of data privacy laws leaves troubling holes in data protection (Andersen, 2023). Privacy experts and lawmakers have thus called for tech companies to take action to protect users post-Roe (Famularo & Wong, 2022; Gordon, 2022). Given Big Tech’s emphasis on protecting user privacy in advertising campaigns, this conjuncture reveals the hollowness of their promise and a reality of “leaky” data (Chun & Friedland, 2015, p. 9). To protect their users and their image, Big Tech was forced to respond.
The Limits and Potential of Corporate Care
Even as technology companies have quietly exerted their power through donating to political causes (Bass, 2022; Robins-Early, 2023) and advocating for political policies that align with their financial interests (Karsten & West, 2016; Van Hoboken, 2013), they have until recently presented themselves as politically neutral platformers of information, rather than publishers (Gillespie, 2010). Yet 2016 marked a shift in external communications, as tech companies sought to satisfy consumer expectations around corporate consciousness—taking public positions on political issues like immigration (Gaither & Austin, 2022), information manipulation (Malik, 2022), and racial justice (Toh, 2020). By taking a stand on social issues, platforms may bolster perceptions of their companies as responsible and normatively “good” rather than as neutral institutions (Gillespie, 2010).
Communication scholars have developed at least two related but rarely intersecting frames for understanding the political activity of commercial institutions: corporate social responsibility (CSR) in public relations and commodity activism within critical/cultural studies. Broadly, CSR describes business practices through which companies fulfill societal expectations to contribute to the public good (O’Connor, 2023). Instrumental approaches to CSR primarily use the practice as a tool to improve the reputation of companies in pursuit of economic gains and financial returns (Kim, 2023). At its most hollow, CSR can be merely promotional as companies “inaccurately, vaguely, or irrelevantly communicate” about social issues through marketing, advertising, and public relations (Lekakis, 2022, p. 29). Greenwashing, for example, describes corporations’ displays of environmentalism in advertising without requisite action (Ongkrutraksa, 2007). Conceptions of CSR have expanded—moving from an exclusive focus on maximizing benefits for “shareholders” to a wider attention to distributing benefits among “stakeholders” (May, 2022, p. 24). For example, May’s (2022) 12 expectations for ethical CSR practice include “fostering fair, equitable, and inclusive treatment of employees,” “establishing fair treatment of customers,” and “supporting human rights for all people” (p. 24).
Care, which in its ideal refuses to reduce humans to capital (Martin et al., 2015), is increasingly an explicit part of corporate discourse. A variety of technology companies, including Meta’s Instagram and Facebook, launched “caring initiatives” in the wake of Covid-19 that exist within their “profit-making architecture” (Chatzidakis et al., 2020, p. 891). Initiatives like these not only seek to absolve corporations of ethically ambiguous practices but also allow individuals to comfortably act upon social problems through cause-related marketing that “commodifies compassion” (Golob & Verk, 2022, p. 148). Scholars have further critiqued such corporate practices as “woke-washing” or “woke racial capitalism,” a process by which corporations produce marketing campaigns that forward “desirable brand identity packaging socially progressive affects in consumer form” (Rossi & Táíwò, 2020, p. 1; Sobande et al., 2022, p. 1582). Kanai and Gill (2021) argue that such branding exploits historically marginalized populations by roping them into the extant neoliberal logic without disrupting the hegemonic order that oppresses them. Accordingly, social theorists Martin et al. (2015) observe that care is a “slippery” idea which can be used to describe opposing agendas and practices of concern, obligation, solidarity, and discipline, among others (p. 625). Corporate care discourses allow corporations to profit in a neoliberal economic system that is in direct conflict with an ethos meant to resist individuality in favor of community.
The policy changes undertaken by GAMMA in response to the highly controversial topic of abortion position these actions at the most radical end of the CSR spectrum, a phenomenon known as “corporate social activism” (Gaither et al., 2018). Corporate social activism refers to “tangible corporate actions or initiatives to drive change relative to an issue”—especially when the issue is bound to alienate some consumers (Gaither & Austin, 2022, p. 181). In fact, according to a Bentley-Gallup poll, fewer than 30% of Americans believe that corporations should take a stand on the topic of abortion (Marken & Nicola, 2023). GAMMA risked reputational damage and, should users stop using their platforms in response to these policies, substantial loss of profit in the absence of consumer data. However, contemporary marketing experts emphasize the need for companies to align their communication (marketing messages) with their actions (corporate policies) to achieve “authentic brand activism” (Vredenburg et al., 2020).
Alternatively, a feminist care ethics (Abu-Laban, 2015) requires particular attention to situated context and how power and vulnerability shape relationships of care. Emerging in contrast to neoliberal rationality’s “atomistic view of human nature” and the devaluation of gendered care work, a feminist ethics of care advances that we are relational selves entangled in webs of care-ing (Parton, 2003, p. 10). Feminist scholars have noted, though, that neoliberal ideals have infused themselves into conversations about care in feminism and technology culture. On the one hand, Rottenberg (2014) posits that liberal feminism has transformed into a new form of “neoliberal governmentality” that recasts systemic gender inequality as individual women’s self-care problem (p. 55). At the same time, technology culture and gendered labor are becoming entangled in our “post-mom economy,” wherein traditionally devalued care work is becoming technologized, redistributed, and repackaged as liberatory (Sharma, 2018).
Those concerned with the politics of care—particularly as it relates to technoscience (Star, 1990)—take this as a starting ethical premise to investigate power (who benefits and who doesn’t) in care-based relationships (Martin et al., 2015). It is thus necessary to view care not as a universal category, but as a malleable, circumstance-specific form of relating (Martin et al., 2015). While the commercial relationship between user and platform is anything but care-based, West (2022) observes that some instances of technologizing care, like instituting E2EE as Facebook did on WhatsApp after the Snowden revelations, can have liberatory effects. The intellectual lineage of encryption is often traced through cis-hetero White libertarianism (Hellegren, 2017; West, 2022). Yet when these protective technical features are combined with community practices grounded in feminist, Black, and queer ideologies—which prioritize the cultivation of safe and inclusive space—the unequal power dynamics between users and tech companies and between citizens and states can be subverted (West, 2022). In this article, we take up the question of where GAMMA’s post-Roe policies and discourse exist along the continuum between care and commodity activism.
Commodity Activism
The concept of “commodity activism” introduced by Mukherjee and Banet-Weiser (2012) has been widely used to analyze how activist commitments are taken up by brands and attached to products (Duvall & Guschwan, 2013; Littler, 2013). Through commodity activism, corporations and individuals express political goals through the “ideological and cultural frameworks of consumption” (Mukherjee & Banet-Weiser, 2012, p. 3). Under this framework, political resistance is possible, yet market value is inherently created in the process. When political will is expressed through the market, there is a limit to the degree it can challenge problems bound up in the current economic order. Without community involvement and requisite action, corporatized activism ultimately “contributes to and reproduces corporate power” rather than that of the movement itself (Place et al., 2021, p. 102085).
Importantly, Banet-Weiser (2012) positions commodity activism as an era of consumerism enabled by shifts in the media landscape, operating through the construction of particular kinds of selves. The overlapping historical progression from mass to niche markets culminates in a “relentless focus on [the] individual person” enabled by the participatory media of Web 2.0 (p. 44). Here, contemporary corporations recast the political ideals of freedom and choice as expressible through consumer behavior—including activity on social media. Critiques of commodity activism as “slacktivism” mirror larger critiques of contemporary activist practices as conveniently allowing individuals to uphold their self-image as caring citizens while doing little to effect change (Christensen, 2011). Mukherjee and Banet-Weiser (2012) acknowledge this tendency toward nullity yet hold open a space of possibility in the consumer realm, a contradiction Banet-Weiser (2012) calls “the politics of ambivalence.” Through the lens of commodity activism, we consider the political potential of corporate responses to governmental threats against reproductive rights and focus on identifying “the causes and consequences of cultural forms,” rather than determining whether they are the cause for celebration or condemnation (Streeter, 2015, p. 3107).
Data as Commodity Under Neoliberalism
More than a decade ago, Jarrett (2008) argued that the interactivity of Web 2.0 disciplined users into “a neoliberal ideal of subjectivity based around notions of freedom, choice and activity” through the seductive power of “apparent free choice and affective pleasure” (p. 6). In the years since Mukherjee and Banet-Weiser’s (2012) and Banet-Weiser’s (2012) foundational works on commodity activism, the proliferation of social media and development of big data have ushered in a new era of “data capitalism” (Sadowski, 2019). Our current age is marked by the “logic of accumulation in the networked sphere” whereby the ultimate commodity is users’ online behavior (Zuboff, 2015, p. 75). By transforming users’ digital activities into data, technology companies can sell buyer profiles and users’ attention to advertising companies (Cox, 2015). Commodifying audiences is not a new phenomenon (Cox, 2015; Smythe, 2009), but it has intensified in a datafied society. Sadowski (2019) conceptualizes data as more akin to capital than a commodity, such that it continuously generates profit through circulation rather than terminating its value in consumption.
Platforms position users as both agents who accrue social capital from their digital self-expression and laborers who produce commodifiable behaviors and data (Briziarelli, 2019; Han, 2022). Papacharissi (2010) argues, “all Web-accessible platforms offer services, mostly social, in exchange for personal information. This simple step, taken by many, transforms our personal information into currency, and our privacy into a commodity” (n.p.). This dynamic has only become more visible in the years since, with platforms increasingly collecting “intimate data” or “information about, and access to, our bodies, health, innermost thoughts (browsing, reading, searching, texting, emailing, and the like), sexual orientation, sex, gender, sexual activities, and close relationships” (Citron, 2023, p. 3). Such data may be used to exploit users not only for corporate profit (Fuchs, 2013) but also for state surveillance and governance (Zuboff, 2015).
Scholars have argued that intimate data collection is particularly insidious in the post-Roe era given the risks that the leakage or sale of these data pose to abortion seekers and activists, including a lack of access to reproductive care and criminalization (Citron, 2023; Clark, 2024; Martin et al., 2023; McDonald & Andalibi, 2023). The confluence of commercial and state data has effectively eliminated the boundary between data collection for the sake of profit and data collection for the sake of surveillance (Andrejevic, 2019; Greenwald, 2014; Shepherd, 2015). Yet despite the concerns raised by researchers and civil society, the prevailing logic of Big Tech has been one of all-encompassing data collection, “‘frameless’ in scope” (Andrejevic, 2019, p. 8). Southerton and Taylor (2020) argue that whether users are informed about such data collection is beside the point, as the framework of an informed user who chooses to generate data when engaging with social media platforms fails to acknowledge the ways in which “platforms script and incline users toward habitual and routine . . . disclosure” (p. 7).
The production, exchange, and withholding of data in our current world is now the basis of many economic activities, such that where we take our data is a form of activism itself (Edmond, 2023). Data-based commodity activism against social media companies has taken the form of boycott campaigns, such as 2017’s #DeleteUber. Uber’s 2019 IPO filing revealed “hundreds of thousands” of people deleted their app during the campaign (Feldman, 2019, n.p.), resulting in an apology, a multi-million-dollar fund from Uber supporting drivers facing immigration issues, and a new CEO (Williams et al., 2021). A similar movement occurred when the Facebook-Cambridge Analytica data scandal broke in 2018, causing users—and even WhatsApp’s co-founder Brian Acton—to tweet #DeleteFacebook in droves (Bright et al., 2019; Knibbs, 2018). Scholars argued that despite not impacting Facebook monetarily, #DeleteFacebook shifted collective acceptance of the reigning model of platform capitalism (Bright et al., 2019; Mills, 2021).
In the following article, we build upon this work to consider how technology companies navigate controversy to retain producers of their most valuable product: user data. We argue that policy changes framed by public relations and media discourses as corporate care ultimately further the commodification of privacy and uphold surveillance capitalism. By voluntarily undertaking responsible—but baseline—data practices, corporations are recast as protecting users’ rights, yet in actuality, they are reifying systemic inequality between those who can afford to protect their data and those who cannot. Surveillance capitalism thus remains intact through furthering neoliberal visions of privacy as individual, technological, and apolitical, ensuring only data management becomes part of the cultural discourse, while data collection and extraction continue unencumbered.
Methods
Data Collection
While there were few official statements addressing the controversial Dobbs decision directly, there was ample discussion of Dobbs’ implications in technology reporting, often including statements from company spokespersons and leaked internal memos. News articles further included detailed information about GAMMA’s policy and design decisions, which we cataloged as a set of corporate actions within the larger field of corporate discourse. Each GAMMA company has separate organizational structures and “core products” (e.g., search engine, social networking site), but they all profit from triangulating and selling user data. Rather than focusing on a single corporation, we analyze GAMMA as an entity to better reveal the systemic nature of ideology propagated by Big Tech.
We analyzed 76 news articles published from the date of the Dobbs leak to the year’s end that include information on GAMMA responses to Dobbs. We collected articles from the top-20 Google News results for each platform’s name searched alongside “abortion.” Articles were from English-language digital editions of newspapers or technology press based in the United States or United Kingdom. 1 After we removed duplicates and non-pertinent articles, the initial set of 100 articles was reduced to 76, which we assessed as sufficient after reaching saturation of descriptions of the companies’ actions and persistent themes. Our dataset additionally includes the only two official press releases made after the Dobbs decision: Meta’s response to their involvement in a Nebraska abortion prosecution and Google’s description of privacy practices, including user location history and data protection. Both were publicly available on their respective websites. The collection of press releases and news articles represents cultural ideology surrounding the role of Big Tech companies as social agents. By analyzing the messaging from technology companies, as well as from those who report on them, we come to understand the social position that tech companies aim to fill as well as the position that citizens perceive they should fill.
Data Analysis
We analyze this compilation of public utterances (78) using Charmaz’s (2006) grounded theory. The authors engaged in a first round of open coding, in which one stays very close to the data; a second round of “focused” coding, in which one utilizes the initial codes to analyze the data; and, finally, “axial” coding to relate codes to one another. Charmaz’s interpretivist version of grounded theory has been reliably employed throughout feminist media studies (Prins & Wellman, 2021) and platform studies (Cotter, 2023), including studies with datasets similar to ours (Cabas-Mijares & Jenkins, 2023). We recontextualize these documents by highlighting the infusion of ideology by their producers, including journalists and tech employees, thus troubling the notion that these texts consist of “objective” data.
Findings
After Dobbs, GAMMA put into effect a slew of data-management policies. Although several GAMMA corporations donated to groups that fund anti-abortion lawmakers (Google, Microsoft, and Amazon 2 ), their actions and statements publicly demonstrated coded support for those impacted by Dobbs. Specifically, they emphasized corporate support of individual privacy, healthcare, data protection, and data ownership. For instance, after complying with a warrant from Nebraska police to release a mother and daughter’s Facebook Messenger conversations surrounding the daughter’s abortion, Meta promised to speed up default E2EE on Facebook Messenger and Instagram Direct to promote private communication (New York Post, 11 August 2022). Apple implemented E2EE on iCloud backups (and thus, iMessage)—though to protect conversations, this setting must be enabled by both users in the exchange through turning on Advanced Data Protection in Settings and is not a default (Apple, 07 December 2022). 3 Google claimed it would delete users’ abortion clinic location data, though this was imperfectly applied (CNBC, 01 July 2022; Fowler, 2023). Internally, all GAMMA corporations announced financial coverage for abortion-related travel for full-time employees (CNBC, 24 June 2022; The Guardian, 26 June 2022; New York Times, 19 August 2022).
GAMMA’s post-Dobbs privacy decisions can be read as a negotiated form of commodity activism in which corporations act in relation to the varied and conflicting political concerns of their consumer users. By collectively pushing a narrative of individual data ownership and protection, GAMMA aimed to retain users who would continue to produce value for these companies through their use of platforms and thus production of other forms of (unprotected) data. Yet, their policy decisions also placed GAMMA at reputational and financial risk. Anti-abortion activists spoke of boycotting organizations that guaranteed financial support for abortion-related travel, which could result in profit loss as boycotters withdrew their platform activity. Implementing E2EE or deleting location data could additionally cause profit loss, given companies lose access to potentially marketable data. Yet the response from GAMMA ultimately constructs a limited sphere of responsibility and care through neutralizing the values associated with the practice of data collection, casting the production of technical privacy features as beneficent, and framing healthcare for full-time employees as philanthropic.
Neutralizing the Threat: Removing Big Tech From the Abortion Conversation
Rather than situating GAMMA as active agents whose data-collection practices contributed to the unraveling of abortion access, discourse often painted them as victims that had to deal with the hassle, messiness, and uncertainty of data requests, potential legal liabilities, and public outcry. An article from Protocol, for example, wrote that “overturning Roe v Wade could force tech companies to help states punish people seeking abortions” (emphasis added, 03 May 2022). In these accounts, Big Tech companies are depicted as purveyors of neutral platforms, drawn into a “legal mess” and forced against their will to help enforce regressive policies that limit reproductive rights (Bloomberg, 07 July 2022).
This victimhood narrative worked alongside discourse that situated Big Tech’s expansive data-collection practices—and data itself—as neutral, and the government’s, advertisers’, and third-party apps’ use of it as malicious or potentially harmful. In the past, these corporations have broken from their stance as politically neutral (Gillespie, 2010) by releasing statements in support of the Black community after George Floyd’s murder (Pichai, 2020; Smith, 2021) and LGBTQ+ safety (Meta, n.d.). Yet post-Roe, no corporation released a statement in support of people who can get pregnant outside of their own employees. Instead, lawmakers, tech companies, and journalists spoke of “other actors that will try to access and leverage personal information” to “threaten the well-being of those exercising their right to choose” (The Hill, 27 May 2022), marking a shift from constructing platforms as neutral to constructing data collection and extraction themselves as neutral practices.
A Washington Post article covering Lockdown Privacy’s report detailed how Planned Parenthood put abortion seekers at risk by sharing potentially identifying data (like IP address) from their online scheduling tools with third-party corporations, including Google and Facebook (29 June 2022). In response, a Meta spokesperson stated that sending sensitive information is “against our policies and we educate advertisers on properly setting up business tools to prevent this from occurring” (29 June 2022). Planned Parenthood responded by suspending abortion-related marketing analytics “out of an abundance of caution” and said they will “be engaging with Meta/Facebook and other technology companies about how their policies can better protect people seeking abortion care.” In this way, data leaked in transit are constructed as merely a fumbled pass between corporations, each blaming one another, while the respective companies’ data collection and storage are framed as non-repudiable.
Concurrently, GAMMA removed and discouraged abortion-related content on internal and external platforms. TechCrunch reported on an investigation from Motherboard that revealed that on the day Roe v. Wade was overturned, users noticed inordinately swift removal of abortion-related content on Meta platforms (28 June 2022). Facebook’s automated moderation system removed posts related to abortion pills within seconds, and in some instances, accounts were suspended. Similar actions were reported on Instagram. In response to the report (and impending public outcry), Meta Policy Communications Director Andy Stone cited an existing policy banning content related to the exchange of pharmaceuticals of any kind. However, “content that discusses the affordability and accessibility of prescription medication is allowed” and may have been taken down incorrectly, Stone conceded (NBC News, 27 June 2022).
Internally, Meta stated in a company memo that their decision to censor and remove abortion-related content on their internal messaging platforms (e.g., Workspace) was due to “a heightened risk of creating a hostile work environment” (TechCrunch, 06 December 2022). Media reports helped platforms frame heavy-handed management policies like limiting employees’ conversations about abortion on company messaging threads as necessary for the bigger project of ensuring amicable company culture. For instance, a leaked internal memo from Meta revealed that employees are not allowed to speak about abortion at work to maintain a “respectful, productive” company culture (TechCrunch, 06 December 2022). Similarly, Google released a memo asking employees to “Please be mindful of what your co-workers may be feeling and, as always, treat each other with respect” (CNBC, 24 June 2022). While companies framed these decisions as acts of care toward employees, users, and investors, these decisions ultimately did the work of caring for and protecting Big Tech from liability should their platforms become conduits for illegal activity.
“Working Hard to Protect You” with “Easy-to-Use Privacy Tools”
Taken together, this three-fold strategy serves to discursively distance the possession of data (by GAMMA) from both the production of data (by employees and users) and the potential use of that data (by governments and third parties). A Bloomberg article lauded Apple’s extant privacy protections: “Apple said it builds privacy protections into its products and is particularly careful with software and devices that relate to healthcare” (07 July 2022). While extolling the privacy features, the words “reproductive rights” and “abortion” do not appear, instead emphasizing privacy writ large in an apolitical appeal to a widely held American value. At the end, the journalist notes: “The company didn’t comment on how it might respond to an abortion-related warrant or subpoena for user data.”
When Google decided to show only verified abortion providers in abortion clinic search results, 4 they presented it as a routine, planned update: “We continue to update our Local Search services for local health-related queries, including those related to abortion services, to improve the accuracy and relevance” (emphasis added, CNBC, 25 August 2022). 5 Post-Dobbs, Meta doubled down on prior commitments to institute default E2EE on Facebook Messenger and Instagram Direct by returning to testing and said they are “working hard to protect your personal messages and calls,” the “you” being the normative and ubiquitous user, not abortion-seekers (New York Post, 11 August 2022).

Apple onboarding screen for iOS 17, September 2023 (screenshot author’s own).
By emphasizing privacy as an inherent and overarching good for the general public of all users, rather than merely those who can get pregnant, GAMMA attempts to sidestep the cultural moment while also responding to pressure from policymakers and the public. While Google positioned their search results update as an extension of ongoing improvements to their platform, it occurred directly after policymakers called for Google to crack down on fake health clinics appearing in search results (CNN, 17 June 2022). Similarly, Meta announced planned testing of default E2EE just 2 days after they released a short, defensive memo in response to a firestorm of reporting on the Nebraska abortion case, much of which they stated was “plain wrong” (Meta, 09 August 2022). The E2EE announcement contained no mention of the abortion case or media fallout, which included condemnation from privacy and pro-choice groups, as well as the trending hashtag #DeleteFacebook on X (formerly Twitter) (Reed, 2022).
In an effort to disentangle themselves from the legal and reputational risks brought on by the Dobbs decision, GAMMA offered a solution in “easy-to-use privacy tools and settings that put people in control of their data” (Google, 01 July 2022). In doing so, GAMMA places responsibility on users and positions itself to avoid blame should private information be revealed, since it has provided the technological tools to prevent such occurrences. When researchers revealed that location data could be obtained through a hack requiring only brief access to users’ devices, Google responded, “We make it easy for you to check and manage the accounts associated with your device from any Google app, including removing any unwanted or unknown account” (The Guardian, 21 July 2022). This builds upon the construction that the misuse of data, rather than the data collection itself, is the problem. Yet rather than pointing at the “bad actor,” Google reminds users to individually protect themselves with the tools and features Google offers.
Healthcare as Activism: Big Tech “Supports” You
All corporations in our dataset publicly announced policies (some already in existence) to pay for abortion-related travel for employees located in states where abortion became illegal immediately after Dobbs via “trigger laws” 6 (CNBC, 24 June 2022; The Guardian, 26 June 2022; New York Times, 19 August 2022). Though GAMMA corporations are based in states where abortion remains accessible, they have offices across the country, and many of their employees—who collectively total over two million people—stood to be affected by Dobbs. Post-Roe, Google employees would be allowed to move to a state where abortion is legal, “no questions asked.” In the pseudo-private but actually public email announcing this program, Google positioned itself as an advocate for its employees, seeking to “support Googlers and their dependents” (CNBC, 27 June 2022). This policy stands out in our dataset, as other corporations did not mirror it. For instance, Apple did not unilaterally allow workers in abortion-restrictive states to work remotely or transfer but did offer travel assistance (CNBC, 24 June 2022).
Corporate moves that facilitated abortion access were generally lauded by journalists in the news articles we analyzed (The Guardian, 26 June 2022; USA Today, 30 May 2022). It is worth noting, however, that Google and Amazon extend this benefit only to full-time employees, not contract or temporary workers. Google’s contract workers (an estimated 121,000 in 2019) outnumber its full-time employees. 7 Apple, Microsoft, and Meta did not state whether contract workers would be included, yet employees at companies owned by Microsoft—like Bethesda—were not offered the same benefits as Microsoft employees (GameRant, 01 July 2022).
By focusing inward, GAMMA failed to address the ways in which the overturning of Roe is a systemic issue affecting most Americans and instead protected only those under their direct employment. Their private and public statements were not paired with public expressions of support for reproductive justice. Moreover, corporate donations to groups that support anti-abortion politicians are incongruent with policies ensuring employees retain abortion access regardless of geographic location. While limited in scope, these company policies were critiqued by some in our data, including the founder of a biblically based fund who was quoted saying he did not want shares of a company that went against his values (Fortune, 12 July 2022). The same article reported religious investors’ attempts to make companies walk back said policies.
Messaging that came directly from GAMMA continuously leaned on “the law” to define the boundary of appropriate action. Microsoft and Meta spokespeople repeatedly listed their intended actions, followed by the condition that they would “do everything [they] can under the law” to ensure their workers’ well-being and access to “lawful medical services” (Microsoft spokesperson in TechCrunch, 24 June 2022). Using the law as a discursive crutch allowed GAMMA to paint themselves as progressive brands doing everything they can to stand up for reproductive justice (e.g., offering insurance benefits for out-of-state abortion care) while remaining law-abiding institutions. Meta’s official statement about their involvement in the Nebraska abortion case similarly demonstrated that their hands were tied: “We received valid legal warrants from local law enforcement [which] . . . did not mention abortion at all [and] . . . were accompanied by non-disclosure orders, which prevented us from sharing information about them” (Meta, 09 August 2022). Despite Meta’s history of battling the U.S. government when it is in their interest (Paul & Bartz, 2022), they claim they were simply forced to comply. These commonplace statements—even more critical in light of the fact that GAMMA seldom commented in an official capacity on their stance or reactions to the overturn—did the work of protecting platforms from public outcry and liability.
Discussion
By providing settings that let users manually disable tracking, or by instituting E2EE for all communication, GAMMA’s design decisions advance an approach to privacy as individually, corporately, and technologically attained and managed. This perspective is nothing new. Indeed, it is typical of the mindset espoused by the cypherpunks in the 1990s, who turned to technology—and specifically encryption—as a solution to state and corporate surveillance that would restore individual freedoms (Hellegren, 2017; West, 2022). Yet critically, cypherpunks conceptualized encryption as open-source and publicly accessible, while technology corporations have historically challenged the government on the topic of encryption to maintain competition in global markets (Van Hoboken, 2013). While privacy was “becoming” a commodity in the late 1990s (Davies, 1998, p. 160), it has since been fully commoditized as something that can be purchased (Papacharissi, 2010).
The development—and sale—of strong encryption has become increasingly beneficial for corporations hoping to profit in a global market concerned about state surveillance after the Snowden leaks in 2013, with Apple and Google at the forefront of this charge (Karsten & West, 2016; Van Hoboken, 2013). Even as strong encryption might ultimately prevent government and corporate overreach alike, it remains a profitable tool for technology corporations to offer for two reasons. First, it signals to American, privacy-valuing users that the corporations are aligned with their values, thus increasing profit through the purchase or use of products that continue to collect (unprotected) data. Second, it allows corporations to figuratively throw up their hands when law enforcement requests (protected) data, helping them deflect both consumer fallout and profit loss when they comply with warrants, and potentially avoid costly legal proceedings over delivering data. In cases where corporations are aware that resisting the state may be costly or troublesome, they comply “to the letter of the law.” Yet in situations where opposing the government benefits the platforms—if only in philosophy, as the U.S. government still aims to eliminate E2EE for the public but has not yet succeeded in doing so (McKinney, 2023)—they do not hesitate to do so. Nor do they miss an opportunity to frame these design changes as philanthropic acts of care for users.
Under commodity activism, freedom and choice are attained through consumption and purchasing power (Banet-Weiser, 2012). GAMMA sells technical privacy to consumers, who attain it by buying the best and most expensive tech, maintaining the most up-to-date software, and possessing enough digital literacy to adjust their settings through a user-friendly interface. These are “corporate-friendly approaches to privacy” that retain users (who will likely produce unprotected and marketable data alongside protected data) and promote profit (Waldman, 2021, p. 5). As Papacharissi (2010) noted, this transforms privacy into a “luxury commodity,” in that it is largely inaccessible, “disproportionately costly,” and “associated with social benefits inversely,” such that forgoing digital participation to protect one’s data results in social disadvantage (n.p.). Individuals in abortion-restrictive states, who are overwhelmingly women of color, must trust corporations to protect them by protecting their information—problematically “positioning American tech firms as providers of social goods” (Rider & Revoy, 2022, p. 34).
Furthermore, GAMMA sells the value of data privacy by purporting to support individual data privacy efforts; as such, they capitalize upon user bases that both do and do not support abortion, since over three-quarters of Americans express distrust in technology companies’ responsible use of data (McClain et al., 2023). Platforms thus appear to be enacting great care for users by putting users’ privacy needs over platforms’ need for unceasing data collection (Sadowski, 2019). In this way, platforms maintain their ability to exploit users for profit (Fuchs, 2013), in part by making privacy accessible only to some (Papacharissi, 2010), while sidestepping responsibility for the ways in which they prescribe and seduce users into data generation (Jarrett, 2008; Southerton & Taylor, 2020)—all while appearing apolitically beneficent.
Ong (2006) argues that neoliberalism is defined by the necessity to “self-manage according to market principles of discipline, efficiency, and competitiveness” (p. 4). To survive post-Roe, people who can get pregnant must treat their data as their most valuable asset, for instance, by moving conversations to end-to-end encrypted messaging apps. But who can ensure they are protected? Those who have the digital literacy to switch to an encrypted messaging app or turn on E2EE on their Apple device, but also only those who can afford a device that offers such protections. Given the mass network of data collection that we are subjected to (Andrejevic, 2019; Citron, 2023; Marwick, 2023), these technological solutions are at best partially protective.
Corporations thus produce an individually-responsible neoliberal subject stripped of the collective political consciousness that might encourage advocacy for a comprehensive data privacy law that would hold corporations accountable for the very collection of data, rather than merely its safety in storage or transit. This individualization brings about the “darker side” of care that Martin et al. (2015) speak of, as it privileges and protects those with the power to protect themselves yet neglects and abandons those without it (p. 627). By selling and purchasing products that seem to challenge establishment ideals, corporations and consumers are not threatening the system but ensuring its success (Heath & Potter, 2004).
Advanced capitalism and social media have turned the self into the product and the production of that self into labor (Banet-Weiser, 2012, p. 73). Yet the locus of power in participatory media can be difficult to pin down, as it enables users (and women, especially) to move from consumers to producers through content creation online (Banet-Weiser, 2012). This tension exists in almost all aspects of the datafied society: Users are both “laborers” exploited economically through processes of data extraction and “agents” engaging in the production of new forms of capital (Briziarelli, 2019, p. 598; Sadowski, 2019). Social media companies make policy decisions that seemingly support the best interests of their users but ultimately maintain corporate control, conceptualized alternately as the “platform-as-daddy” and “platform paternalism” (Han, 2022, p. 1; Petre et al., 2019, p. 1). Platforms paternalistically exercise control over the collection, management, and extraction of user data, while purporting to give users agency through technological fixes. As such, protection of abortion-related data becomes a neoliberal act, converting the systemic issues of abortion access and data protection into individual problems—especially women’s and queer people’s (Rottenberg, 2014).
This same mechanism of entrapment through support is visible in GAMMA’s paternalism toward its own employees. Indeed, Marchand (2001) argues that corporations have a history of buying goodwill in the public eye through “paternalistic display[s] of kindness” for their employees that have the added benefit of creating a more productive workforce (p. 15). Media discourse that positions healthcare as activism and altruism is troubled by the requisite linking of reproductive rights to full-time employment, which reinforces companies’ authority over their employees. Healthcare is attained and maintained through productivity and upholding the corporate culture. The travel policies announced by the GAMMA companies were not, in any instance, paired with a corresponding statement vocally supporting those seeking abortion care and denouncing the overturn of Roe. In this corporation-user/consumer/employee relationship, these “caring” policies benefit only those already in relative positions of power and privilege (Martin et al., 2015; Star, 1990), who are statistically the least likely to have issues accessing abortion (Abrams, 2023). Taken together, Big Tech’s acts of commodity activism manage the public and the private at once: Publicly, the corporations have said enough to retain their users and employees; privately, they maintain control over their data.
Concurrently, these publicized privacy moves allow corporations to “frame their data-driven activities . . . as innocuous and distant from state surveillance practices” (Stevens & Allen-Robertson, 2021, p. 2). Discourse and policy actions worked together to proliferate the idea that platforms are also victims of the Dobbs ruling, and that the onus of caring for sensitive data is on the state, advertisers, and apps that pay for access to their data clouds. By casting only the misuse of mass data as problematic, technology corporations and journalists frame surveillance as something that is necessary—even desirable—under certain conditions. Despite the fact that targeted surveillance disproportionately impacts marginalized people and communities of color, it continues to be framed as a necessity for national security (Gürses et al., 2016; Rider & Revoy, 2022). Our analysis demonstrates that this narrative persists in the realm of Big Tech’s mass surveillance—data collection on a massive scale must continue (Andrejevic, 2019) yet should be deleted when it could harm individuals who deserve privacy. The articles in our corpus—both from journalists and technology corporations—overwhelmingly failed to consider a scenario in which these data are simply not collected. In so doing, they fail to imagine a digital media framework beyond surveillance capitalism and limit our societal ability to do so as well.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
