Abstract
This article explores responsibility claims by private tech companies. While the business literature has extensively discussed the notion of corporate social responsibility, it does not fully grasp the political significance of responsibility claims. This article proposes a novel conceptual understanding of responsibility by drawing on the concept of representative claims. It argues that by claiming responsibility for an issue or a community, companies are claiming to act on behalf of someone or some purpose—while avoiding democratic oversight. Thereby, responsibility claims not only provide reputational benefits but help companies legitimize and demarcate their political role. Empirically, it uses a representative claims analysis to compare responsibility claims of three companies—Meta, Microsoft, and the NSO Group. Companies either embrace, reorient, or refuse responsibility but frequently define the criteria to measure it. This article thus contributes to our understanding of the political significance of responsibility and tech business power.
Introduction
Private tech companies take on tasks previously associated with public authorities: They weigh up the right to free speech of one individual against the privacy and integrity of others. They create digital currencies. Or they navigate due process rights in cross-border criminal investigations. Partially in response to these developments, both companies and regulators have increasingly emphasized the notion of tech company responsibility. Meta reflects on “our responsibility to maintain the safety, security, and privacy of our 2.7 billion users around the world” (Facebook, 2019: 4), and Alphabet’s President Sergey Brin (2017) highlights “the need for tremendous thoughtfulness and responsibility as technology is deeply and irrevocably interwoven into our societies.” Seemingly in stark contrast to “move fast and break things”—the internal motto used by Facebook until 2014—this shift has prompted some scholars to stipulate a “responsibility turn” in tech (Katzenbach, 2021: 3). But how significant is this turn? And what are the implications for public–private relations?
I argue that the responsibility discourse of tech companies is significant, because it not only provides companies with reputational benefits but also helps them legitimize and define the scope of their political role. While the literature on responsibility (Vetterlein and Hansen-Magnusson, 2020) and corporate social responsibility (Campbell, 2007) is well established, it does not fully grasp the power relations in the digital space (van der Merwe and Al Achkar, 2022). The pervasive global influence of tech companies requires a political lens to understand their current positions as state partners (Srivastava, 2023: 4)—in education, national security, or public discourse. Existing research has critically examined the “ethification” of tech (Avnoon et al., 2023; van Dijk et al., 2021), wherein both companies and scholars discussed measures to foster the development of ethical and responsible artificial intelligence (AI) technology (Bietti, 2021; Hermann, 2022). However, scholars have rarely conceptualized responsibility claims as an interrelated but more encompassing phenomenon that includes not only ethical aspects but also legal or economic dimensions (Herden et al., 2021), leaving blind spots in our understanding of their significance.
This article thus makes three contributions. First, it extends existing scholarship on responsibility in the digital space (Djeffal et al., 2022; Katzenbach, 2021; Srivastava, 2023; van der Merwe and Al Achkar, 2022) by developing a novel conceptual understanding of responsibility that draws on the concept of representative claims, coined by Saward (2010) and further developed by de Wilde (2013). Responsibility is similar to representation in that it is always for someone or something. By claiming responsibility, for example, for misinformation or diversity, tech companies also claim to act on behalf of the interests of a community or an issue. However, they strategically and selectively construct the object, that is, for what, whom, and how they are claiming responsibility. As companies also take on functions that were formerly restricted to public—and often democratically elected—representatives, whether by determining the boundaries between desirable and undesirable speech, or developing software to spy on citizens, they potentially alter public–private relations. Thus, understanding responsibility claims as representation claims sheds light on the political significance of tech companies in the public sphere.
Second, by highlighting the increasing relevance of responsibility, it also shows that tech business practices have transformed. Earlier scholarship has demonstrated how tech companies attempt to convey their business approach as “soft capitalism” (Dror, 2015: 542)—driven by ideological and emotional motives rather than just economic benefits. Haupt (2021) demonstrates how Facebook’s imaginaries of global connectivity and community back its alleged contribution to creating a better world. Nachtwey and Seidl (2020) show how Silicon Valley’s elites use “solutionist” rhetoric to suggest that they can not only make profits but also provide a technical fix to societal problems. The responsibility discourse is a modified, more selective approach with a dual function: While tech companies admit that they cannot successfully solve every problem, they still imply that they are the right ones to do it. It is framed as a response to the unignorable reality of problems that tech companies caused or at least accelerated, such as the spread of illegal content. However, it also reinforces their position as representatives of people’s interests and needs.
Third, empirically, this article shows the dimensions and strategies of responsibility that tech companies employ. It complements the literature on institutional (Busemeyer and Thelen, 2020) and infrastructural business power (Kemmerling and Trampusch, 2023), which outlines how companies benefit from their key positions but sheds little light on how they legitimize them. This article compares the responsibility claims of three companies—Microsoft, Meta, and the NSO Group—in cases involving data and content governance. Through a representative claims analysis following Guasti and Geissel (2019), it analyzes how companies claim responsibility—and for whom and what. I identify three approaches toward responsibility: embracing, reorienting, or refusing—depending on whether companies proactively claim responsibility, shift their responsibility from one issue or community to another, or reject it.
I argue that in their responsibility claims, tech companies tend to emulate public language around human rights, the rule of law, and sovereignty. While they thereby entrench such public criteria for understanding and measuring responsibility, they also define where sovereignty begins and their responsibility ends, or how their responsibility toward human rights should be interpreted. Not being embedded in political institutional processes also means that they face less scrutiny (see also Gillespie, 2018). The article thus sketches responsibility claims as a key mechanism through which private tech companies render their political power positions legitimate, de-emphasize the need for regulation, and demarcate their political significance by selectively claiming responsibility for some issues but not others.
The article is structured as follows. First, I review the literature on responsibility and tech companies and develop the analytical framework by conceptualizing responsibility as representation before, second, sketching the claims analysis approach of the article. Third, I zoom in on three cases involving major tech companies before, finally, concluding with a discussion of the implications of increasing responsibility claims.
Understanding responsibility claims by tech companies
Limitations of corporate social responsibility and tech companies
Responsibility as a concept is inherently relational. Responsibility is established through negotiation processes while also creating a relationship between actors and/or issues (Vetterlein and Hansen-Magnusson, 2020: 9)—who is responsible for what and when? Similar to the related concept of accountability, responsibility is a matter of degree, but it is often vaguer and more encompassing. Responsibility comprises not only an ex post assertion of blame but also ex ante and general future obligations, thus functioning “like a behavioral guide” (Vetterlein, 2018: 553).
In the business literature, responsibility has been extensively discussed in relation to Corporate Social Responsibility (CSR; Campbell, 2007)—“the actions of corporate actors that address social and ethical values beyond legal requirements” (van Aaken et al., 2013: 351). While this literature has become more attentive toward political CSR (Frynas and Stephens, 2015; Scherer and Palazzo, 2011), current conceptions fail to account for the complex responsibility dimensions that emerge in the digital transformation (Herden et al., 2021; van der Merwe and Al Achkar, 2022).
The literature on tech companies establishes their influence as unprecedented, affecting a broad variety of issues from housing, to work, to social wellbeing, to law enforcement (van Dijck, 2021). Control over digital power resources, such as data or digital infrastructures, allows companies to shape the behavior of others (Kemmerling and Trampusch, 2023). Tech companies do not only lobby to shape their institutional environment (Uzunca et al., 2018) but also mobilize their—emotionally attached—consumers to fight for their interests (Culpepper and Thelen, 2019). They have positioned themselves as “different,” suggesting that their products can make the world a better place by drawing on emotional rhetoric and ideological commitments while also demonstrating a business case (Dror, 2015; Nachtwey and Seidl, 2020). However, tech companies have not lived up to many of these promises—and have even contributed to new problems. As they faced challenges concerning the broader societal impact of their products, tech companies emphasized the alignment between business behavior and ethical standards (Bietti, 2021) and set up ethics boards and review processes (Taylor and Dencik, 2020; van Dijk et al., 2021).
Alongside and interconnected with this development, scholars have described a growing emphasis on the concept of responsibility (Djeffal et al., 2022; Katzenbach, 2021; Srivastava, 2023; van der Merwe and Al Achkar, 2022). Responsibility claims entail an admission of companies’ current limitations—“we are responsible” marks a shift from previous practices of denial and the emphasis on tech companies’ uniqueness to fend off regulation, be it in transport or media. In this sense, responsibility claims share crucial characteristics with a commitment to “ethics”: companies respond to criticism and social problems by aligning procedures and practices with certain principles—subject to varying interpretations (Avnoon et al., 2023; Taylor and Dencik, 2020) and a lack of accountability to democratic processes (van Dijk et al., 2021). However, as this article argues, responsibility claims transcend a mere commitment to values—or even virtue that is often associated with tech ethics (Phan et al., 2021). They carry a political significance, because they imply acting on behalf of someone or some purpose. While responsibility claims acknowledge limitations or challenges, companies simultaneously position themselves as the ones capable of addressing the challenges on behalf of their users.
Responsibility as representation
Conceptually, I argue that claiming responsibility involves making a selective claim of representation, that is, “claimants present themselves to an audience as the legitimate representatives of a certain cause and/or constituency” (de Wilde, 2013: 278; Saward, 2010). Responsibility is similar in that it involves making and evaluating claims of being responsible for someone or something. As de Wilde puts it, “representation is about constructing a relationship between an ‘imagined community’ and its representatives as much as it is about efficient policy-formulation” (de Wilde, 2013: 288). Thus, representation is not exclusively linked to political mandates or formal political settings but involves constitutive and symbolic processes of claim-making from a variety of actors (Guasti and Geissel, 2019: 98). Actors try to establish their legitimate space in the public sphere (de Wilde, 2013: 280) by positively defending someone’s or something’s interests and values rather than just stating that someone is affected by them (de Wilde, 2019: 9).
Claims of representation differ in quality and degree of explicitness (de Wilde, 2020; Guasti and Geissel, 2019: 101), that is, to what extent actors explicitly state that they represent a specific community or issue. Claims are often incomplete and may also deny representation (Guasti and de Almeida, 2019; Guasti and Geissel, 2019). This pattern holds true for responsibility claims: When actor A claims responsibility for an actor B or issue C, actor A also claims to act on behalf of B or in protection of C. They appeal to and constitute a “community of responsibility” (Vetterlein, 2018: 547) or claim to represent an issue or a “normative scheme,” such as freedom (Guasti and Geissel, 2019: 101). It is important to note that this understanding of responsibility is particularly relevant for ex ante and forward-looking obligations rather than accountability for wrongdoing.
I argue that the concept is well-suited for analyzing tech companies because they frequently make relational claims on behalf of a community, such as their users, or an issue, such as privacy rights. As Culpepper and Thelen (2019: 306) show, tech companies tend to appeal directly to their consumers and claim to represent their interests, concerning, for example, flexibility or convenience.
Leveraging the concept of responsibility provides companies with crucial advantages compared to making a direct claim of representation. If, for example, they were to overtly claim to replace governments, they would expose themselves to existing governmental norms, processes and institutions. By claiming responsibility, they can simultaneously address the demand for more responsible decision-making (in the sense of accountability) while also demarcating when and how they are responsible (in the sense of forward-looking obligations and representation). They thus make a selective claim of representation—claiming to act on behalf of some communities and issues but not others—while remaining shielded from the scrutiny faced by elected representatives.
Claiming responsibility strategically
I propose that companies generally follow three strategies to discursively solidify specific notions and allocations of responsibility. First, they may actively embrace responsibility, explicitly citing their responsibility to act on behalf of someone or some purpose. Importantly, this may occur independently of any additional scrutiny resulting from a controversy or scandal that forces companies to justify their actions; embracing thus pertains to future obligations. Successfully positioning themselves as responsible actors is likely to enhance their legitimacy both with consumers and policymakers. This may preserve the status quo (stabilize their existing authority position) or change the status quo (enhance their authority position).
Second, they may strategically reorient responsibility. This could involve companies claiming responsibility for an issue or community that is different from the one another claim-maker assigns to them. They might claim responsibility for a different “normative scheme” (Guasti and Geissel, 2019: 101), shifting from claiming responsibility for freedom of speech to responsibility for criminal justice. Alternatively, they may exclude affected stakeholders from their community of responsibility. In the aftermath of the Snowden revelations, major companies called for reforms in government surveillance (Reform Government Surveillance, 2015). By suggesting that public actors and public laws were responsible for surveillance activities, while private companies had no choice but to comply with an insufficient legal framework, they strategically reoriented responsibility.
Based on the literature, claiming responsibility, whether reorienting or embracing, seems to have several benefits: Lock and Seele (2017: 6) show that companies may use responsibility instrumentally to influence political stakeholders for economic benefits. Okafor et al. (2021) show that tech firms with higher CSR spending tend to have higher revenue growth. Successful responsibility claims may translate into political influence: When companies establish themselves as insiders “there is much less need for them to resort to traditional lobbying and activism” (Busemeyer and Thelen, 2020: 454). Thus, by reorienting and embracing responsibility, companies may be able to shape evaluative criteria and standards according to their preferences. This increases their capacity to limit the impact of unfavorable regulation or shape the future political landscape.
In some instances, they may, however, third, refuse responsibility. For legal or reputational reasons, especially when faced with blame attribution, companies may reject responsibility to deny or discredit legitimacy challenges. For example, in 2016, Mark Zuckerberg thought it was a “pretty crazy idea” (cited in Solon, 2016) to suggest that fake news on Facebook had any influence on the US elections. Zuckerberg rejected the allocation of blame while simultaneously discrediting any attempts to assign future-looking obligations. If successful, this repudiation of responsibility may prompt regulators or the public to seek alternative sources in their allocation of blame. If the refusal fails, companies will likely face steep reputational losses.
Research design and method
The article delves into tech responsibility by conducting a qualitative document analysis of three central companies that relate to the privatization of public responsibility: Meta, Microsoft and NSO. These companies exhibit distinct but typical characteristics, differing in origins, business models, and focus areas. They include key global players with top-10 positions in market capitalization (Microsoft and Meta), and the smaller but significant NSO, with its position at the intersection of law enforcement, intelligence, and tech. While these firms operate across various markets, I focus on three instances of their governance of content and data, shedding light on responsibility constructions in relation to tech companies’ core activities—and the heart of their power position. First, I assess content governance on the social network Facebook. With approximately 3.05 billion monthly active users (Meta, 2023), the platform increasingly claims responsibility in its handling of hate speech, and mis- and disinformation. Second, I analyze Microsoft’s responsibility claims in relation to electronic evidence, sovereignty and due process rights. It represents one of the first major instances in which a tech company made clear responsibility claims—and one that has shaped the legal-political debate until today. The third illustrative case outlines the responsibility claims of the Israeli NSO Group in relation to its spyware Pegasus developed for the security sector. Reportedly, the tool has been used by various countries in Europe as well as, inter alia, Israel, Saudi Arabia, the United Arab Emirates, and India (Marzocchi and Pecsteen de Buytswerve, 2022), while NSO denies responsibility for the use of its product.
While the comparative perspective thus makes it possible to grasp responsibility claims in different contexts, it also limits the depth of the analysis. Therefore, the article has limitations concerning the evolution of responsibility claims over time, in different areas, and, importantly, with regard to audience responses. Further research could, for example, include a sentiment analysis of newspaper articles to analyze the acceptance of claims by the broader public. The article’s focus on developed democracies keeps the context for company activity similar but does not consider potential variations due to regime type and economic situation.
The 42 primary sources analyzed include court proceedings and submissions, company blog entries and transparency reports. I chose documents for the individual cases until I reached saturation. I loosely followed the steps of the representative claims analysis suggested by Guasti and Geissel (2019). I used the software MAXQDA to code and classify the (1) maker (who speaks) of a claim of responsibility by tech companies (2) for a specific constituency or object (on behalf of whom or what) and (3) their claimed linkage (positive claim or denial of responsibility).¹ Claims could comprise multiple sentences or only half-sentences depending on how extensively one claim was expressed. While not every claim contains all of these elements, I solely coded statements that relate to responsibility for something or someone and/or a specific reason. The article’s focus is on tech companies’ claims (or denial of) responsibility rather than their reception, but I contextualize claims to account for the audience responses.
Responsibility constructions across tech companies
Meta: the oversight board
Contextual changes: dealing with responsibility allocations
For a long time, Meta refused responsibility outright or—at best—attempted to reorient responsibility allocated by others. In 2016, Meta CEO Zuckerberg (2016) belittled allegations that content on Facebook, such as fake news, could influence elections and warned of companies becoming “arbiters of truth.” This strategy, however, proved unable to alleviate political and societal pressures in the long term. In 2018, a UN fact-finding mission described Facebook as “a useful instrument for those seeking to spread hate” (UN Human Rights Council, 2018: 340) in the mass atrocities in Myanmar. Simultaneously, the Cambridge Analytica scandal exposed the harvesting of data from millions of Facebook users for, inter alia, voter manipulation (Cadwalladr and Graham-Harrison, 2018). In consequence, Zuckerberg had to justify the company’s conduct before the United States (US) Congress. Meta admitted to failures and started to refer to responsibility, with Zuckerberg (2018b) explicitly acknowledging that “[w]e didn’t take a broad enough view of our responsibility, and that was a big mistake” (p. 1).
Responsibility claims
Several responsibility claims emerged in recent years, but the establishment of the Meta Oversight Board stands out as most noteworthy. The Board, which published its first decision in early 2021, issues “principled, independent decisions regarding content” (Oversight Board, 2022) to guide Facebook and Instagram in controversial content cases. The initial 20 board members were selected jointly with Meta, but the company has since withdrawn from the selection process, enhancing the Board’s independence. The Board’s decisions are binding on the company, and it may provide further recommendations for policies. Zuckerberg (2018a) explicitly argued that “we have a responsibility to keep people safe on our services [. . .]. We also have a broader social responsibility to help bring people closer together—against polarization and extremism.” Nick Clegg (2020), a former Deputy Prime Minister of the United Kingdom and Leader of the Liberal Democrats who is now Meta’s president of global affairs, suggested that “[w]ith our size comes a great deal of responsibility,” and Meta highlights the Board as a way to ensure accountability to “the people in our community” (Zuckerberg, 2019: 1). The Board has focused mainly on ex post responsibility—scrutinizing past and existing practices—but it also aims to “promote the rights and interests of users” (Oversight Board, 2021a: 3), suggesting it acts on behalf of a community’s interests and values, such as interconnection.
In its responsibility claims, Meta strongly relies on public norms and values, including freedom of expression, and even “international human rights norms” (Oversight Board, 2019: 2, Section 2). In the original plans for the Board, Zuckerberg (2018a) compared it to a “Supreme Court” to “ultimately make the final judgment call on what should be acceptable speech in a community.” This comparison is a powerful recognition of public standards. At the same time, by adopting this language for a quasi-independent but Facebook-funded Oversight Board, Meta defines and circumscribes the powers of such a “court” itself.
Meta’s decision to install the Board represents an attempt to reorient—or even “outsource” (Kelly, 2021: 4)—responsibility, focusing on a procedural rectification by delegating difficult content decisions. By making the Board’s decisions binding and the board members quasi-independent, the company also embraces the previously reoriented responsibility.
Responses and recognition
Overall, the Board seems to have improved Meta’s perceived legitimacy, although assessments of it are mixed. Critics view the Board as a thin veneer against regulation (Haggart, 2020), or point to its lack of access and scope (Douek, 2020). Conversely, Medzini (2021) argues the Board entrenches a shift in Facebook’s content regulation regime from “thin” toward “enhanced self-regulation,” and Helfer and Land (2022: 4) compare its work to international human rights tribunals. Interestingly, the Board itself re-allocated some responsibility to Meta, notably when assessing the ban of Donald Trump after the Capitol attacks. The Board refused Meta’s request for a ruling and “insist[ed] that Facebook review this matter to determine and justify a proportionate response” (Facebook Oversight Board, 2021). It called on Meta to establish a consistent procedure rather than reorienting responsibility to the Board. The Board turned into a relevant audience in itself. Similarly, concerning the so-called cross-check program, granting more leniency in enforcing content rules for prominent users, it criticized that “Facebook has not been fully forthcoming with the Board” (Oversight Board, 2021b). Notably, the Board’s open criticism and the fact that Meta implemented many of its recommendations suggest that the Board is actually enhancing Meta’s responsibility.
In sum, Meta transitioned from refusing to reorienting responsibility through the establishment of the Oversight Board. While Meta seems to embrace this reoriented responsibility, for example, by making the decisions binding, the Oversight Board has pointed to persisting challenges of transparency and accountability.
Microsoft: electronic evidence
Context: uncertain responsibility in electronic evidence
Microsoft has been one of the first companies to claim responsibility, for example, in its promotion of responsible behavior in cyberspace (Gorwa and Peez, 2020). One of the initial instances pertains to electronic evidence—data stored in the cloud, email accounts or messenger accounts that are relevant in criminal investigations. For cross-border cases (e.g. when the data are stored outside the jurisdiction of the investigation), law enforcement agencies have to rely on lengthy bureaucratic procedures through Mutual Legal Assistance Treaties (MLATs) to gain access to such data. Therefore, they often informally request the data from providers directly. In 2013, Microsoft challenged one of those requests by US law enforcement for data stored in Ireland, suggesting that the request violated European and Irish sovereignty (Smith, 2017) and was in breach of EU data protection legislation. In 2017, the case was referred to the US Supreme Court.
There was no clear public allocation of responsibility to Microsoft that would have required a public reaction. However, it is important to note that in 2013, when Microsoft initially resisted the warrant, privacy had become a prominent topic. Just months prior, whistle-blower Edward Snowden had revealed widespread surveillance by intelligence services. This prompted Microsoft to point to the “growing mistrust and concern about their [tech companies’] ability to protect the privacy of personal information located outside the United States” (Microsoft, 2014: 5). Thus, there were links to a broader controversy.
Responsibility claims
At the onset of the conflict in 2014, the company refrained from explicitly claiming responsibility. However, Microsoft (2014) asserted that the provisions suggested by the government “would violate international law and treaties, and reduce the privacy protection of everyone on the planet” (p. 3). This could be interpreted as a negative claim, implying that the government was acting against the interests of “everyone.” By deciding not to cooperate, Microsoft could demonstrate that it embraced its responsibilities, advocating for the interests of not only its users but a global community. Microsoft strongly grounded its reasoning in public principles and standards, such as “[p]rivacy and the proper rule of law” (Smith, 2016).
As the case gained prominence due to the Supreme Court’s involvement, Microsoft depicted it as part of a broader effort to act on behalf of its customers (Smith, 2018b), strengthening Microsoft’s representative role. Microsoft President Smith (2018a) pointed out that “Microsoft has fought hard to secure these rights and protections. Three times we filed lawsuits against the U.S. government to increase transparency, and all three successfully prompted significant new protections for our customers” (p. 1). In September 2018, Smith (2018a) argued that “Cloud providers act as a critical check to ensure that governments’ use of their investigative powers strictly adhere to the rule of law” (p. 2). This positioning establishes Microsoft as a bulwark against overly zealous governmental access, while the reference to the rule of law almost alludes to a judicial oversight function. Microsoft stressed that US agencies were acting against established principles and even beyond the law (Rosenkranz, cited in Supreme Court of the United States, 2018: 60). During the Supreme Court hearing in 2018, the company’s legal representative criticized the intrusive nature of law enforcement practices suggesting that “[t]he government wants to use the act to unilaterally reach into a foreign land [. . .] where it’s protected by foreign law” (Supreme Court of the United States, 2018: 32). This underscores the importance of foreign domestic law and state sovereignty, while implicitly suggesting that Microsoft was acting on behalf of the international community.
Responses and reorienting of responsibility
The company’s responsibility claims were successful in generating widespread support for Microsoft’s stance during the Supreme Court case, creating the impression that Microsoft acted on behalf of a broader community. Various entities, including the European Commission, Ireland, several non-governmental organizations (NGOs) and companies, and the UN Special Rapporteur for Privacy Online, submitted amicus curiae briefs to the Supreme Court, some explicitly supporting Microsoft. The US Department of Justice representative attempted to reorient Microsoft’s responsibility toward fighting crime and terrorism, arguing that “hundreds if not thousands of investigations of crimes [. . .] are being or will be hampered” (US, 2017: 12f.). However, the proceedings largely set aside Microsoft’s responsibility toward security, while frequently referring to the international community and international law. The Commission pointed out that the Court had a responsibility “to consider EU domestic law” (EC, 2018: 14). Microsoft’s responsibility toward its consumers did not play a central role in the proceedings; instead, the conflict increasingly focused on EU–US relations. Microsoft could argue that it had fulfilled its responsibility toward its customers by publicly raising concerns about data access.
This became more pronounced when, in March 2018, the United States adopted the Clarifying Lawful Overseas Use of Data (CLOUD) Act (2018). The law establishes a legal basis for data access by law enforcement agencies in the United States and enables other countries to enter into bilateral agreements allowing reciprocal sharing (Daskal and Swire, 2018). Since there was now a legal obligation to share data, Microsoft and other companies began to reorient responsibility. In a joint letter to Congress representatives, the tech firms emphasized that “[w]e appreciate your leadership championing an effective legislative solution, and we support this compromise proposal” (Apple et al., 2018). Although legally mandated to comply, Microsoft sought to construct its support for the law as an additional manifestation of responsibility, stating that “we appreciate and accept the responsibility thrust upon us” (Smith, 2018b). The CLOUD Act does not resolve the initially problematized contradictions with European data laws, and data access still applies to customers globally. However, rather than persisting in its former emphasis on responsibility for privacy rights, Microsoft reoriented its responsibility, highlighting its responsibility to act in line with the law.
NSO: phone surveillance
Dealing with allocations of responsibility
The NSO spyware Pegasus collects various data from mobile phones, such as text messages, and can activate the phone’s microphone or video without users noticing or having clicked on a link or a message (The Guardian, 2021). In October 2019, Meta initiated legal action, alleging that NSO exploited a vulnerability to target approximately 1400 users of the Meta messenger service WhatsApp with Pegasus spyware (Cathcart, 2019). According to WhatsApp, the targeted users included “attorneys, journalists, human rights activists, political dissidents, diplomats, and other senior foreign government officials” (WhatsApp, p. 42, cited in Whatsapp Inc, et al. v. NSO Group Technologies Limited, et al., 2020). NSO consistently attempted to refuse responsibility, emphasizing its limited role in any software-related misuse. In the court order, former CEO Hulio highlighted that the company’s “role is limited to NSO providing advice and technical support to assist customers in setting up—not operating—the Pegasus technology” (Hulio, cited in Whatsapp Inc, et al. v. NSO Group Technologies Limited, et al., 2020: 19). This portrays NSO as a neutral distributor of technology, absolving it of responsibility for subsequent usage. Yet, the court rejected this refusal, suggesting the relationship between NSO and its customers was unclear (Whatsapp Inc, et al. v. NSO Group Technologies Limited, et al., 2020: 19).
NSO also attempted to reorient responsibility, albeit in a moderated form. NSO sought “derivative sovereign immunity” (Whatsapp Inc, et al. v. NSO Group Technologies Limited, et al., 2020: 9) by asserting that it acts on behalf of foreign nation states engaged in law enforcement investigations. Utilizing the Foreign Sovereign Immunities Act of 1976, which typically limits the liability of foreign authorities in another jurisdiction, NSO not only affirms public standards of appropriate behavior but also positions itself as acting as a representative of a government. However, this request was denied in court (Whatsapp Inc, et al. v. NSO Group Technologies Limited, et al., 2020: 15), and NSO apparently failed to convince the US Supreme Court to review the decision.
Claims of responsibility
After the media revelations, NSO attempted to refuse responsibility by discrediting the allegations, dismissing them as a “planned and well-orchestrated media campaign” (NSO Group, 2021a). Yet, NSO also attempted to reorient responsibility by establishing a different reference community. NSO describes its mission as “saving lives, helping governments around the world prevent terror attacks, break up pedophilia, sex, and drug-trafficking rings, locate missing and kidnapped children, locate survivors trapped under collapsed buildings, and protect airspace against disruptive penetration by dangerous drones” (NSO Group, 2021a). During investigations at the European Parliament, the firm’s General Counsel and Chief Compliance Officer stressed that “probably many thousands of lives have been saved” (European Parliament, 2022). This links the company’s responsibility to a community of vulnerable people in need of protection, with NSO positively defending their interests—and their lives. Like others, NSO backed up its responsibility constructions by engaging in calls for regulation (Kirchgaessner, 2021). NSO also emphasized the notion of responsibility as transparency, publishing a “transparency and responsibility” report in 2021 (NSO Group, 2021b) and emphasizing the “immense challenges in publishing such a report in an industry that is inherently secretive” (NSO Group, 2021a: 2). Interestingly, other tech companies used the NSO controversy to emphasize their own responsibility. In an amicus curiae brief, Cisco, Google, Microsoft, and other firms made responsibility claims on behalf of the victims (Microsoft et al., 2020).
Responses and recognition
Overall, NSO encountered persistent challenges regarding its (lack of) responsibility claims. In 2021, the Biden administration effectively blacklisted the company due to its involvement in “transnational repression” (US Department of Commerce, 2021) and apparently brought an attempt to sell the company to a halt. A European Parliament committee report recommended stricter regulation and standards (PEGA, 2023: 167–181), but numerous European countries admitted to using Pegasus, and it is unclear whether they will stop.
Discussion and conclusion
This article has examined variations in how private companies conceptualize and make claims of responsibility. While tech firms previously often refused responsibility, there appears to be a growing trend of making positive linkages between communities/objects and companies’ responsibility (e.g. Katzenbach, 2021). I argue that these claims contain claims of representation for communities, such as “people,” “users,” or “customers,” and the “public,” or normative schemes, such as “privacy,” “human rights,” or “sovereignty.” By claiming to act on behalf of an issue or community, companies also assume a political role that confers legitimacy on that issue or community, while legitimizing their own power position. Understanding responsibility claims as representation claims thus helps grasp how companies not only address wrong-doing but also selectively emulate the function of public authorities.
Why is that so? As outlined in the beginning, there is a rise in both the demand for and supply of responsibility, particularly concerning major players. It seems likely that companies employ responsibility akin to “ethics washing” (Bietti, 2021: 267)—a strategy aimed at positioning themselves as responsible and thereby reducing the perceived need for public regulatory intervention. Specific events may result in pressures that trigger an overhaul of a company’s approach, such as the Snowden revelations for Microsoft (Gorwa and Peez, 2020) or the Cambridge Analytica scandal for Meta (Culpepper and Thelen, 2019: 305), particularly when trust relations between public actors and private companies break down (Carrapico and Farrand, 2021). However, the effect of such events may vary depending on the direct involvement of companies and the immediacy and gravity of consequences, such as public hearings or major fines. The imposition of additional supervisory duties, such as search engines having to weigh privacy and the right of access to information in the implementation of the right to be forgotten, or the establishment of new bodies, such as the Meta Oversight Board, may entrench a shift in the long term. Nevertheless, companies are likely to selectively draw on responsibility when and where it is most convenient and opportune. This makes their political role influential, while the oversight of democratic institutions is absent. Amid substantial downsizing of responsibility and AI ethics teams across the tech sector (Murgia and Criddle, 2023), companies may even eliminate scrutiny from within.
Moreover, the NSO example demonstrates that while a loss of reputation may prompt responsibility claims, it may not necessarily drive broader changes in a company’s rhetoric or behavior. Company maturity might be a relevant factor for the increase and quality of responsibility claims. Microsoft, being the oldest company, has received significant attention for its responsibility assertions, even gaining recognition as a “norm entrepreneur” (Gorwa and Peez, 2020). Meta, in contrast to its previous stance, increasingly focused on responsibility over time. The respective fields and audiences are also likely to be relevant, depending on whether companies heavily rely on relations with a broader public, as in the case of Microsoft and Meta, or must merely appeal to a smaller community of security officials, as exemplified by NSO. Tech companies cultivate strong ties with former public officials, such as Aura Salla, who became a top Meta lobbyist after working in central positions in the European Commission—and who is now back in Finnish politics. While revolving door hires are a widespread phenomenon, employing high-level politicians such as Clegg not only as back-door advisors but as public figures will likely strengthen their political standing and credibility—in the perception of both the public and regulators.
Implications and broader relevance
The consequences and normative desirability of both public and private responsibility depend on the context. On one hand, the prevalence of private responsibility claims shows that even powerful companies at least make an attempt to address shortcomings, even if these efforts are somewhat superficial.
On the other hand, such claims may solidify the perception that companies have representative functions. Importantly, in contrast to democratic representation, users can only vote with their feet and do not possess the right to formal participation. In this sense, responsibility is more of a compromise than a solution (Vetterlein and Hansen-Magnusson, 2020: 20). While regulators increasingly seek to regain control from big tech (Farrand and Carrapico, 2022), generating effective responsibility is ultimately a deeper structural task, necessitating a reconfiguration of relations between states, companies, and individuals (Srivastava, 2022: 235–239). Responsibility claims thus assume a dual function: companies admit that their current capacity to solve problems is limited while simultaneously implying that they know and represent the interests and basic needs of their users. By selectively claiming responsibility for the misuse of their products in some cases but not others—such as election interference in the United States versus the genocide in Myanmar—they admit a political role in some instances but not in others. This not only makes them a more prominent part of the debate but also grants them a major role in determining wrong-doing, even by others. In addition, they define the object and community of responsibility as well as the scope of their action. Thus, similar to ethics, companies may operationalize the term according to their own interests (Phan et al., 2021)—profiting from the term’s vagueness and lack of democratic institutionalization.
Other controversial issues, including microtargeting, AI in military contexts, or facial recognition, evoke similar concerns about responsibility. Van der Merwe and Al Achkar (2022) highlight disputes in Australia over shared profits between tech and media companies, while Alphabet has invoked its alleged responsibility for public access to information in conflicts with European regulators about the right to be forgotten to justify a limited approach in its enforcement. The recent controversies surrounding content governance on X (formerly Twitter) after its acquisition by Elon Musk, as well as the dismissal of OpenAI co-founder and CEO Sam Altman, reportedly due to concerns about risks stemming from the rapid commercialization of the company’s products, highlight the salience of responsibility in tech today. At the same time, the reinstatement of Altman just days later and the dissolution of ethics and responsibility boards across companies (Murgia and Criddle, 2023) show its contested quality. Although the article broadly outlines the reception of responsibility claims, further investigation is needed to understand how allegedly represented communities and other audiences recognize tech company claims. Opinion polls suggest a decline in trust in technology companies, particularly in developed countries (Edelman, 2022). Future research should thus more precisely assess the conditions under which companies successfully legitimize their power and when they even want or actively seek a political role. While responsibility can initiate debates about the broader societal implications of tech companies and their political functions, it should not be confined to instances in which they claim responsibility themselves. Examining the effects of the selective responsibility claims by tech companies can enhance our understanding of their political significance today.
Footnotes
Acknowledgements
The author thanks participants at ECPR 2021, the workshop on “Legitimacy and Trust Challenges of Digital Governance” at the European University Institute, and SASE 2022, particularly Ellie Rennie and Michael Kemmerling for their helpful comments.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
