Research on AI ethics tends to examine the subject through philosophical, legal, or technical perspectives, largely neglecting the sociocultural one. This literature also predominantly focuses on Europe and the United States. Addressing these gaps, this article explores how data scientists justify and explain the ethics of their algorithmic work. Based on a pragmatist sociological analysis and on 60 semi-structured interviews with Israeli data scientists, we ask: how do data scientists understand, interpret, and depict algorithmic ethics? And what ideologies, discourses, and worldviews shape algorithmic ethics? Our findings point to three dominant moral logics: (1) ethics as a personal endeavor; (2) ethics as hindering progress; and (3) ethics as a commodity. We show that while data science is a nascent profession, these moral logics originate in the techno-libertarian culture of its parent profession—engineering. Finally, we discuss the potential of these moral logics to mature into a more formal, agreed-upon moral regime.
Keywords: AI, AI ethics, algorithmic ethics, algorithms, culture, data science, moral regimes, pragmatist, professions
Research has highlighted the social harms that stem from big data algorithms. Such algorithms were shown to restrict personal autonomy (Rouvroy, 2013); reproduce inequality, discrimination, and racism (Benjamin, 2019; Eubanks, 2018; Noble, 2018); destabilize democracy (Tufekci, 2014); promote polarization (Woolley and Howard, 2017); cause environmental harm (Bender et al., 2021; Crawford, 2021); and more. Specific attention has been given to the potential ramifications of autonomous cars (Gal, 2017), facial recognition systems (Buolamwini and Gebru, 2018), and autonomous weapons (Lewis, 2014). That is, the development of big data algorithms, particularly machine learning (ML) algorithms and artificial intelligence (AI), is increasingly accompanied by poignant social criticism about the development and implementation of such technologies. Addressing this criticism, various researchers and organizations have begun to deal extensively with the ethics of algorithms, producing articles, books, and reports on the subject and formulating various guidelines and frameworks for the ethical development of big data algorithms and AI (Ananny, 2016; Jobin et al., 2019; Phan et al., 2021). Similar initiatives have recently matured into legislation, such as the European General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the European Digital Markets Act (DMA). However, the literature on “AI ethics” tends to examine the subject through philosophical, legal, or technological perspectives. Curiously, the sociocultural, professional, and organizational contexts in which big data algorithms are developed and imbued with (im)moral values have only recently begun to receive scholarly attention.
In this article, we seek to contribute to the burgeoning literature on the ethics of algorithms by offering a grounded and contextualized account of how AI developers understand and explain their ethics. Following Boltanski and colleagues’ pragmatic approach (Boltanski and Thévenot, 2006; Lemieux, 2014), we focus on how data scientists justify, understand, and interpret the ethics of their algorithmic work. 1
Moreover, acknowledging the ties between algorithms and culture (Seaver, 2017), recognizing the importance of the professional context in algorithmic production (Avnoon, 2021), and responding to the prevailing Euro-American centrism in research on such production (Kotliar, 2020b), we focus on a specific sociocultural context—the Israeli data science community. Based on 60 semi-structured interviews, we identify three dominant moral logics: (1) ethics as an individual endeavor; (2) ethics as a hindrance to progress; and (3) ethics as a commodity. Finally, we discuss the potential of these moral logics to mature into a more formal, agreed-upon moral regime and highlight their historical ties to data science’s parent profession—engineering.
While big data algorithms are developed in high-technology contexts and are based on advanced mathematics, it is increasingly recognized that algorithms are far from neutral technological artifacts. Instead, they are actively created by human actors who imbue them with sociocultural contents and meanings (Gillespie, 2014; Van Dijck, 2014). Similarly, the functioning of big data algorithms is broadly affected by the data they run on, as well as by their training data, with its inherent biases and flaws (Gitelman, 2013; Scheuerman et al., 2021). Thus, algorithms are neither objective artifacts nor mere byproducts of the interaction between data and mathematical modeling. Instead, they are socio-technical products, actants in a buzzing socio-algorithmic assemblage (Latour, 1992; Seaver, 2017), that reflect and echo the salient worldviews, practices, and values of the culture around them (Kotliar, 2020a).
Similarly, the ethics of algorithms is more than the result of formal guidelines and regulations, and it does not only mirror algorithms’ characteristics or the structure of their datasets. Instead, the ethics of algorithms stems from specific cultures of practice (Van Maanen and Barley, 1984)—from the norms, belief systems, and worldviews of the people who develop the algorithms and from the specific social contexts from which such algorithms emerge. Thus, we ask, how do the people who develop such algorithms justify, understand, and interpret their ethics? What ideologies, discourses, and worldviews shape algorithmic ethics? To answer these questions, we turn to the sociology of morality.
The sociology of morality examines how moral systems are constructed, understood, and adopted by societies, organizations, and individuals (Durkheim, 1961; Hitlin and Vaisey, 2013; Weber, 2003). Early sociological approaches frequently dealt with questions of morality (Weber, 2003), highlighting how social structures shape social morals (Durkheim, 1961). Nevertheless, sociological interest in questions of morality quickly waned, and only toward the end of the 20th century did sociologists renew their interest in the links between the moral and the social. Prominently, Boltanski and his associates have proposed a pragmatic approach to morality that focuses on individuals as moral subjects while emphasizing the social contexts in which they are embedded (Boltanski and Chiapello, 2005). The pragmatic approach denies the existence of a monolithic or universal moral system (Boltanski and Thévenot, 1999, 2006), representing instead a shift toward a polytheist morality (Schwarz, 2013: 157). Thus, rather than focusing on individuals’ ability to choose between “the moral” and “the immoral,” this approach proposes to focus on individuals’ ability to navigate within and between diverse moral repertoires and to choose between different “moral logics” (Boltanski and Thévenot, 1999). Hence, the pragmatic view of morality sees individuals as moral agents who actively choose their moral position out of vast repertoires of “moral logics” (Schwarz, 2013). In other words, people’s moral “toolbox” (Swidler, 1986) is culturally and socially shaped, and individuals eventually choose and justify why something may or may not be moral.
Following this view, this article sees algorithmic ethics as people’s attempts to institutionalize particular moral regimes out of culturally available repertoires of moral logics. Examining data scientists’ moral logics may deepen our understanding of the moral repertoires at their disposal, the social contexts from which they derive these logics, and the feasibility of establishing a dominant moral regime—uniform, agreed-upon algorithmic ethics.
While AI practitioners go by different names, they are most prominently known as “data scientists” (Avnoon, 2021; Ribes, 2019)—a professional title that first emerged as early as the 1970s (Naur, 1974) but was widely popularized only around 2008, from the offices of LinkedIn and Facebook (Patil, 2011). Data scientists often describe themselves as both engineers and scientists, and given their extensive technological practice, they are considered a sub-profession of engineering (Avnoon, 2021). Members of this new profession develop algorithms for big data operations and for ML or AI, using statistical and probabilistic approaches. Alongside data analysis, data scientists’ work also revolves around collecting and preprocessing data and integrating domain knowledge into algorithms and their data (e.g. from domains such as psychology, radiology, law, and more) (Fayyad et al., 1996). Thus, data scientists play a crucial role in designing the collection, processing, and analysis of data and in disseminating algorithmic results to non-experts. In other words, alongside programmers, executives, entrepreneurs, and investors, data scientists are key players in the “datafication of everyday life” (Van Dijck, 2014), and they play a significant role in the socio-algorithmic assemblage, promoting or delaying the development of big data algorithms and their implementation into society. Therefore, data scientists’ ethical positions—the way they perceive and understand the moral implications of their work—can profoundly affect the design, production, and implementation of their algorithms.
As sociologists of professions have long shown, people’s socio-professional contexts play a significant role in shaping their professional ethics as part of their socialization and acculturation (Fournier, 1999). Such ethics are usually developed through the gradual creation of professional norms, practices, and standards that revolve around the implementation of expertise into society (Freidson, 1973; Scott, 2008).
Knowledge-intensive professional communities tend to formulate ethical codes to establish trust between the public and their profession and to regulate the application of their expertise in society (Scott, 2008). Prominent examples include lawyers, medical doctors, and accountants (Abbott, 1983). However, unlike these professions, and due to various structural determinants, software engineering has so far avoided formulating a binding ethical code (Ensmenger, 2010). Because data science is an engineering profession, the development of formal ethical guidelines within it runs into similar structural barriers (Mittelstadt, 2019).
Nonetheless, due to the rise of data science as a new profession and following the growing public criticism around algorithmic harms, prominent members of the data science community have recently initiated the formulation of an ethical code (Patil, 2018). At the same time, tech giants like Microsoft, Google, and IBM have begun publishing ethical guidelines in an attempt to design the field of AI ethics from its inception (Phan et al., 2021; Taylor and Dencik, 2020). In addition, these companies’ employees have also begun protesting against some of the algorithmic tools that their employers develop—including facial recognition systems, autonomous weapons, large-scale surveillance projects, and more (Crawford, 2019; Shane and Wakabayashi, 2018). However, these ethical initiatives are currently in their infancy, and their impact on the ethics of algorithms is yet to be seen. Moreover, as we argue below, their global reach is questionable.
While the ethics of algorithms is predominantly discussed from philosophical, legal, and technological perspectives, researchers have recently begun to examine this subject from a sociocultural perspective. For example, scholars have focused on media coverage of AI, showing that such algorithms elicit much optimism (Fast and Horvitz, 2017) but also considerable anxiety and concern (Cave et al., 2019; Ghotbi et al., 2022). Others have focused on pedagogical aspects of AI and the need for “AI ethics education” (Borenstein and Howard, 2021) or “AI literacy” programs to help people critically evaluate the ramifications of big data algorithms (Atenas et al., 2022; Yates et al., 2021).
Others have focused on big tech’s implementation (and co-optation) of algorithmic ethics programs. For example, Phan and colleagues have focused on how ongoing crises around big tech corporations have led to the creation of “economies of virtue”—“a space where reputations are traded, and ethical practice is produced in line with commercial decision-making” (Phan et al., 2021: 130). They show that experts who offer ethics advice to such companies risk getting co-opted by them. Metcalf et al. (2019) have similarly focused on “ethics owners” in Silicon Valley—people who are professionally responsible for applying ethical principles in technological production. They show that “ethics owners” grapple with tech culture’s fundamental logics and norms that paradoxically impair ethics while “performing” it.
Focusing on “the production floor” of AI, scholars have recently begun to offer empirical explorations of AI developers’ attitudes and perceptions toward the ethics of algorithms. For example, Veale et al. (2018) interviewed public sector ML practitioners in Organisation for Economic Co-operation and Development (OECD) countries about their challenges in imbuing public values into their work. They highlight a fundamental disconnect between organizational and institutional realities, which is likely to undermine ethical initiatives. Ibáñez and Olmeda (2022) have similarly described the gap between practice and principles among Spanish AI managers and the tactics with which they seek to close that gap. Holstein et al. (2019) focused on AI practitioners’ challenges and needs in developing fairer ML systems, highlighting the disconnect between the challenges faced by teams in practice and the solutions proposed in the “fair ML research literature” (Holstein et al., 2019: 1). Orr and Davis (2020) have shown that Australian AI engineers tend to diffuse the ethical responsibility of their work, believing that others are responsible for determining the rules while they are merely liable for meeting technical requirements. Duke (2022) has similarly shown that AI developers might be aware of the risks posed by their technology but do not see themselves as accountable for them. More recently, Ryan et al. (2022) have highlighted the tensions between organizations’ AI ethics and the values of their employees. These works span various algorithmic production centers, and they almost unanimously point to a fundamental disconnect between algorithmic ethics initiatives and the algorithmic production floor. Namely, while discourses and services around AI ethics are beginning to proliferate, they have yet to materialize into coherent, agreed-upon ethics accepted and practically performed by AI developers themselves.
This article contributes to this line of research by focusing on data scientists’ sense-making processes around algorithmic ethics. Focusing on how data scientists understand and interpret algorithmic ethics, and on the socio-professional context of their work, offers a more fine-grained understanding of the gap between AI ethics theory and practice. Moreover, through the investigation of Israeli data scientists, this article contributes to the global study of algorithmic production beyond Euro-American spaces (Kotliar, 2020b; Ribak, 2019; Takhteyev, 2012). Thus, based on a pragmatic sociological analysis, we ask: how do Israeli data scientists understand, justify, and interpret their algorithmic ethics? And what ideologies, discourses, and worldviews shape their ethics?
Research settings: data science in Israel
Since the 1970s, and particularly since the 1990s (John, 2011), Israel has developed a highly active high-tech industry, currently boasting over 7,000 companies, 401 multinational corporations’ research and development centers, and tens of billions of dollars in venture capital investments each year (IVC-Meitar, 2022; Startup Nation Finder, 2022). In line with the global interest in data science, the last decade has seen the emergence of a vibrant community of Israeli data scientists. In fact, although Israel is relatively small (with a population of 9 million inhabitants), in 2016 it ranked 10th in the world in the absolute number of data scientists and first in terms of density (number of data scientists per million residents) (Stitch, 2016). Israel is thus a global center of algorithmic production: Israeli-produced algorithmic systems can be found worldwide, and conversely, trends, norms, and regulations formulated outside Israel routinely affect the work of local high-tech workers (Ribak, 2019).
Despite the vibrant technical discussions in the Israeli data science community, ethical discussions are only now starting to emerge, as this community tends to avoid coordinated discussion of the social implications of its technology. Furthermore, algorithmic ethics is only beginning to be discussed in engineers’ training institutions, and ethical issues are rarely debated in this community’s meetups, hackathons, and conventions (Avnoon, 2019). Moreover, unlike Silicon Valley companies, Israeli tech companies rarely employ ethicists or “ethics owners” (Metcalf et al., 2019). Given the lack of formal ethical discussion, education, or organizational standards, it is imperative to explore how Israeli data scientists understand and construct algorithmic ethics and to identify their dominant moral logics.
Data collection
The data for this study was collected as part of a broad research project on data science as a nascent profession, conducted in Israel between 2015 and 2018 by the first author. It included 60 semi-structured interviews with data scientists (n = 50), their supervisors (n = 5), and the university professors training them (n = 5), as well as participant observations in data scientists’ community events and their online groups. Sampling focused on individuals who defined themselves as data scientists on the professional social network LinkedIn. 2 The first author contacted 125 data scientists via LinkedIn, of whom 46 agreed to participate. In addition, four other data scientists were snowball-sampled based on interviewees’ recommendations. 3
The interviews followed a topic guide of open-ended questions. Participants were asked about their views of ethics and the nature of ethical codes in their profession. 4 Interviews were usually conducted at local cafes and lasted 60–90 minutes. All interviews were recorded and transcribed by the first author. The names of the interviewees and their workplaces were pseudonymized. 5
Data analysis
The data were analyzed by the three authors using thematic analysis (Braun and Clarke, 2006) and in light of the pragmatic approach, which focuses on how people interpret, justify, and criticize various normative views (Boltanski and Thévenot, 1999, 2006). Accordingly, the analysis sought to delineate data scientists’ meaning-making processes and the specific “moral logics” (Boltanski and Thévenot, 2006; Lemieux, 2014) that they use to understand, interpret, and justify the ethics of their algorithmic work.
First, we read and reread the transcripts to identify participants’ moral claims and justifications. We then discussed the emerging themes, and only those agreed upon unanimously were utilized further. Second, we reread the transcripts, tagged the appropriate text under the identified themes, and eventually identified three dominant moral logics expressed by our interviewees. Finally, after the initial clustering, we selected and translated prominent quotes representative of each moral logic and analyzed them in light of the research questions and relevant literature.
“It’s more of a personal preference”: ethics as an individual endeavor
Many of our interviewees described algorithmic ethics as a personal issue related to individual tendencies, preferences, and values. As Ziv, a data scientist at a fintech startup company said:
As a human being, I try to contribute to the world and do no harm. Personally, that’s what I try to do—I try to contribute. I make all my choices with that in mind. But in the information world, many things are just wrong. And we’re simply such a bunch of geeks that we’re not going to do anything [about] it. But eventually, yes, you can use data to do lots of bad things to lots of people in the world, lots of bad things.
Ziv acknowledges the dangers that lurk in big data analyses. According to him, the “information world” is ridden with problems, and data can do “lots of bad things” to people. Nevertheless, at the same time, he also emphasizes that ethics is something that belongs to the individual—to an autonomous, independent entity—it is he who “tries to contribute,” tries to “do no harm,” and who sees his algorithmic ethics as something that informs his personal choices (“I make all my choices with that in mind”). In other words, according to Ziv, he is the sole moral agent in this equation, and the ethics of the algorithmic tools that he develops depend on his own intentions. While Ziv hints at the possibility of collective action, he immediately rejects it, explaining that he and his coworkers are “a bunch of geeks.” Namely, the alleged collective characteristics of his peers are at odds with the dangers that algorithmic production might pose. According to this view, data scientists’ inability to act against the ramifications of big data algorithms and the capitalist forces driving them does not stem from insufficient knowledge, ignorance, or a lack of political organization but from their personality traits and their allegedly innate tendency to avoid conflict. Ziv’s reference to his colleagues’ “geekiness” is consistent with popular stereotypes of IT workers (Kendall, 2011), who are often considered introverted and withdrawn—the opposite of the extroverted, socializing, and in this case, also political character that Ziv believes is required to engage with ethical dilemmas. This image of geeks also conveniently ignores well-documented questionable behaviors in geek culture, such as the misogynistic harassment campaign known as “Gamergate” (Aghazadeh et al., 2018; Phillips, 2018).
Liron, a data scientist in a global corporation, expressed similar views when asked about ethics:
I’ve never accessed the account of anybody I know in our [system], although I can do that [. . .], I [was] never tempted to do it, and I think this is true for most people working for us. I don’t know if it’s because of the type of people we hire, or, I don’t [know] why. But, somehow, I feel like there is some kind of ethics to our profession, at least in our company, that just comes naturally. It’s [made out] of people who would never take advantage of anything, [including] the data they’re working on.
Like Ziv, Liron refers to the power at the hands of data scientists and the companies that employ them, a power that largely stems from the plethora of user data at their disposal. He claims that he intentionally avoids violating the privacy of his company’s users and that most of his colleagues do the same. Ignoring the exploitive nature of this new form of capitalism (Zuboff, 2019), Liron focuses on the access he and his colleagues have to user data and prides himself and his colleagues on actively choosing not to misuse it. Thus, like Ziv, Liron envisages his ethics as something that resides in the individual and respecting users’ privacy as a personal choice, thereby evading any sort of collective responsibility. Moreover, for Liron, this ethics “comes naturally” rather than resulting from an organizational requirement or a binding professional commitment. It is a moral logic that sees algorithmic ethics as dependent on individual agents and their actions and inactions, but nonetheless as ethics that are allegedly innate to data science—that “come naturally” to people who work in this profession.
Nevertheless, what characterizes this individual moral agency of data scientists? How do they implement their moral logics, and how do they envisage their “moral toolbox”? Most of the data scientists in our sample consider the ability to design their careers as the main, if not the only, way of following their inner conscience. For example, Amit, who works at a fintech startup company, argued as follows:
I try to be very ethical; it’s very important to me. From the day I started in data science, I told myself: ‘I’ll [work in] the advertising business? Never!’ It disgusts me. Facebook makes me sick to my stomach. That’s my take, but [. . .] people don’t talk about it much because many people only care about the money.
Like the previous interviewees, Amit describes ethics as something that concerns him personally, to the point of evoking a physical moral aversion (“makes me sick to my stomach”). Accordingly, Amit describes himself as capable of following his conscience by choosing the right job. As data-intensive technologies are now integrated into almost every industry, and with high demand for such professionals across sectors, data scientists can find jobs in a wide range of industries. Amit explains that he sees Ad-Tech companies, and Facebook specifically, as out of the question and that he actively avoids them. That is, he explains that his way of enacting his moral stance lies in his ability to choose where to work, and particularly, where not to work. Amit indeed recognizes that not all data scientists hold these high ethical standards (“many people only care about the money”), but he sees these considerations as personal, not as something that stems from organized professional ethics.
Lavi, a data scientist in a large international tech company, similarly said,
For me, personally, it would have been difficult to work for some binary options company. These are, at best, gambling companies, and at worst, they are companies that try to take people’s money. And these companies have lots of work for data scientists. I’m not [like that, but] some people just don’t give a damn. Other companies do all kinds of surveillance [. . .]. Here too, that’s not my style. I don’t think there is a code [of ethics], but . . . it’s more of a personal preference. I’m sure, for example, that the porn industry also has jobs for data scientists, but that bothers me less. They, at least, put their cards on the table and say: “there, this is what we do.”
Lavi, too, emphasizes that ethics is a matter of personal preference, noting that unlike him, others just “don’t give a damn” about ethics, which suggests that such ethics does not, in fact, characterize all data scientists. In highlighting his occupational choices, he focuses on each company’s industry rather than on how they develop their algorithms or manage their data, explaining that he would not work for companies that deal with gambling, binary options, or surveillance. Thus, he highlights the possibility of evaluating various companies, rating them on a moral scale (“surveillance [. . .] that’s not my style”; “porn [. . .] that bothers me less”), and actively choosing between them as the primary tool in his ethical toolbox. Like the moral imperative itself, the employment decision is seen as personal, and so are the moral considerations that lead up to it.
Thus, our interviewees tend to consider algorithmic ethics a matter of the heart—one that revolves around individuals’ consciences rather than around broader organizational or professional structures. Accordingly, their moral agency remains personal, and their ethical options narrow down to one action: rating jobs and choosing between them. These ratings are based on the public image or stigma (Cohen and Dromi, 2018) of the sector in which each company operates.
“Back to the stone age”: ethics as hindering progress
In line with the previous moral logic that views algorithmic ethics as a personal matter of the heart, our interviewees also tended to contrast algorithmic ethics with one fundamental characteristic of their profession—technological progress. According to this moral logic, data science’s innovation, and the progress it brings about, rely on the use of almost unlimited amounts of data. Hence, the ability to access and process data unrestrictedly is seen as essential to technological development and to progress in general. As stated by Nimrod, a global tech company employee, in an interview:
Last week I asked my team what would happen in ten years, and I said: “the entire privacy issue will just disappear.” In ten years, we won’t give [privacy] another thought. We’ll just let it go completely. I mean, in ten years, we’ll say, what? Cookies? You must be kidding. Sensors will float in our bloodstream and continuously report on our cardiac condition! We will be so exposed that it won’t bother us one bit. We’ll be transmitting so much information to our environment that it’ll feel like we’re naked, it’ll be like walking nude in the street, [. . .], and it won’t bother us for a second. We won’t even think about it. Mark my words—you’ll see that I’m right.
According to Nimrod’s moral logic, one of the key items on the algorithmic ethics agenda—the right to privacy—is about to disappear, since technological progress will inevitably lead to complete exposure, “nakedness,” as well as complete acceptance of that exposure. Nimrod accordingly expects future surveillance to be much more invasive and more tangible than today’s. Instead of web cookies—a widespread in-browser tracking device (Carmi, 2017)—sensors will float in our bloodstream and perpetually assess our physical state, reporting to whomever. The way he embodies data collection—between public nudity and subcutaneous sensors—emphasizes that what may today seem like an extreme violation of an ethical principle, a desire to “undress” people, go into their bodies, and get under their skin, would eventually come to be seen as natural, even obvious. Thus, for Nimrod, technological progress is inevitable and necessarily benevolent, even if today it may seem to cross clear ethical boundaries. Following the same moral logic, Nimrod also makes clear that it is humans who would need to stretch their boundaries in response to technology and capital’s changing demands, by changing their fundamental values (e.g. regarding public exposure). He describes that change as natural, even evolutionary—one that does not require elaborate thought processes, lengthy discussions, or political organization. Accordingly, Nimrod concludes his optimistic, techno-determinist narrative (Wyatt, 2007) with an almost threatening promise to the interviewer: “mark my words, you’ll see that I’m right.”
As John and Peters (2017) have shown, the end of privacy is commonly predicted; in fact, the privacy discourse has bemoaned privacy’s death from its inception. Similarly, in our case, Nimrod’s moral logic is infused with technological solutionism (English-Lueck, 2017) that unequivocally equates technology with progress. According to this logic, technology’s developers and users alike remain passive, if not helpless, in the face of technology’s transforming power.
Indeed, the data scientists in our sample tended to oppose the imposition of restrictions on technological development. As David, who works for a startup company, explained:
One of the reasons behind data science’s rapid growth is the lack of bureaucracy. There are no bureaucratic restrictions. You can do whatever you want. Many people are working on it, so new things get created all the time. There are no restrictions because the harm [caused by these technologies] is probably minimal. I mean, what’s the big deal? So, people may know that you went from that page to another, and they can see which pages you went through. As if that can ever hurt anybody {chuckles}. Anything can hurt you.
David expresses a libertarian stance common in his technological community: that the unprecedented growth of the local data science industry stems from a lack of regulation. According to David’s moral logic, this lack is explained by the fact that algorithmic harms are essentially minimal, and ethical oversight is therefore outright redundant. In other words, like the free market, algorithmic ethics is allegedly self-regulating, and any external regulation would only encumber its progress. This anti-bureaucratic approach is designed to fend off attempts to restrict technological development, even if these restrictions aim to protect the public interest. Like Nimrod, David views the loss of privacy as a done deal and the sensitivity around it as laughable: “anything can hurt you.”
Eran, a data scientist in a startup company, similarly said,
How ethical is it? Really? I say that there’s no room for this question because today, everyone surveils everyone. If you ponder ethical questions [like]: “Is it OK to surveil people? Is it OK to collect information about people?” then you’re, in fact, back in the stone age.
Eran also heralds the end of privacy (John and Peters, 2017) and accordingly argues that the very discussion of ethical questions is obsolete. To him, such questions only indicate technological backwardness (“you’re back in the stone age”). According to this moral logic, technology is equivalent to progress, whereas ethical contemplations inherently mean stagnation. That is, Eran not only rejects the possibility of ethical action (by legislation, regulation, or the institution of professional or organizational norms), but he dismisses the very discussion of the subject. Thus, in this case, technological determinism and solutionism disavow the ethical debate around the social implications of technology. This view echoes the sociological claim that modernity is anchored in technological development and that this development is inherently at odds with a binding moral system (Bauman, 2000). Thus, normative questions are deemed irrelevant where technological development is concerned.
“The PayPal of your private data”: ethics as a commodity
Despite the individualistic and techno-deterministic views expressed in the previous sections, the idea of organized ethics is not entirely foreign to the Israeli data science community. Instead, it slowly seeps into it through the commodification of ethics. Jonathan, for example, a data scientist with an MA in computer science, described the company he works for:
Privacy is part of our interests, part of the very reason the company [I work for] was founded. It’s really about offering sane and correct information management instead of privacy. Today everyone is after your data; everybody wants to learn about you. And without such a model, without a company that does this, we enter a grey zone. So, it’s an arms race. Everybody wants to collect your data; everybody wants to know where you are. [. . .] Our goal is to be like the PayPal of your private data. To be that one entity that you’d know is big, and that’s what it does. If they don’t secure [your data] properly, their business will collapse because that’s their business.
Jonathan describes the personal data market as a dangerous conflict zone, one in which users clash with companies and companies clash with each other over the control of personal data (“it’s an arms race”). From his perspective, the only way to protect users’ data is by hiring a company that offers “privacy services.” That is, rather than understanding privacy as a moral injunction or a human right, he sees it as a value that can only be protected when commodified. According to this moral logic, algorithmic ethics can only be acknowledged and protected within capitalist market relations. As Jonathan further explained,
The wise thing would be to create a model where companies would pay for mistreating your data. Customers wouldn’t need to pay because you’re making money off other companies—they provide the service for free, and we make money only if we guard your privacy. So, you have a situation in which everybody’s interests converge. I studied some game theory, and that’s good, [with such a solution,] the system would reach equilibrium.
According to Jonathan, when privacy is commodified and protected through capitalist market relations, the interests of all parties will converge to everyone’s satisfaction. Thus, the market is supposed to balance itself out, not only economically but also ethically—the economic market will perfectly merge with the moral market, and ethical dilemmas will be resolved through their commodification. This view echoes what Metcalf et al. (2019: 9) described as “market fundamentalism”—the idea that tech companies’ bottom line governs their ethical considerations. Nevertheless, in the case before us, market success is not at odds with ethics but allegedly enhances it. Kfir, a data scientist and tech consultant, shares his experience with the commodification of ethics:
I once tried to start a company that sells personal data. I wanted everyone to walk around with this electronic component that contains all their personal data, and every time you want to complete a form at the store or on the internet, you’ll approach, scan your RFID chip or something, decide what you want to give or not, and how much you’re selling it for. The other side will determine whether they’re buying, and a market will open up. If it’s a market, then let’s make it a market all the way.
Like Jonathan, Kfir is also keenly aware that online personal data has become a lucrative commodity. Kfir accordingly argues that the best way to protect people’s privacy and let them control their information is to commodify it and rely on consumers’ choices regarding their data—whether to sell them and for how much. According to him, the commodification of private data will empower individuals as agents and ensure minimal harm. According to Gershon (2011), with neoliberal agency, individuals perceive and manage themselves as businesses. In the case before us, individuals may assume responsibility for their data and manage it as a digital extension of their autonomous, agentic entity, but only after a private company has commodified it.
Hence, in line with the aforementioned moral logics, the data scientists we interviewed tended to deterministically focus on the commodification of human bodies and values (Meade, 1996; Zuboff, 2019). Under this neoliberal logic, ethics becomes legitimate for social organization only when assigned economic value. Accordingly, a formal moral regime in the shape of institutionalized algorithmic ethics is only possible with a price tag attached to it. 6
This research set out to explore how Israeli data scientists understand, justify, and interpret algorithmic ethics, and to delineate the ideologies, discourses, and worldviews that shape those ethics. We have shown that while Israeli data scientists enjoy a thriving professional community, they often overlook the social implications of the algorithmic tools that they develop. Accordingly, Israeli data scientists largely refrain from adopting the moral regimes offered to them by legislators, activists, and scholars—even when these have been somewhat formalized and institutionalized as ethical codes and guidelines. Instead, they turn to libertarian, technocratic, and capitalist moral logics that favor unrestricted technological progress over ethically and socially aware algorithmic development. Thus, our findings reveal Israeli data scientists’ particular assumptions and presuppositions about algorithmic ethics: the meaning of ethics, what it means to be ethical, the identity of ethical agents, and the plausibility and necessity of ethical action.
These findings highlight the incongruity of these logics with the attempt to establish a universal, agreed-upon algorithmic ethics. Specifically, the first moral logic—ethics as an individual endeavor—stands in stark opposition to establishing a collective, consensual moral regime. According to this logic, data scientists explain their avoidance of more formal moral regimes by describing their socio-professional community as inherently ethical, as one in which data scientists naturally “do the right thing” (even as they acknowledge that some do not). Hence, they insinuate that they, as individuals, have innate ethics of their own. This moral logic is bound to prevent any organized opposition to the violation of moral imperatives—whether individual or social—that may arise during the development and implementation of data-driven technology.
Moreover, this moral logic places a heavy burden on individuals, and its potential to turn into organized action remains highly limited. At the same time, data scientists’ moral agency is reduced to their attempts to evaluate and choose between potential employers. Such a practice impacts individuals’ career paths, disqualifying some companies and legitimizing others, while allowing individual data scientists moral and financial flexibility. Nevertheless, this practice also circumscribes the profession’s ability to halt potentially harmful algorithmic development from within.
The second moral logic, which contrasts ethics with technological development, emphasizes that the primary moral obligation of the engineering professions in general, and of data science in particular, is continuous technological growth despite potential social ramifications. Professional ethics is not beyond the horizons of this logic. However, the only ethics these data scientists can consider is work ethics—a deterministic, techno-optimistic view (Vydra and Klievink, 2019) that sees technological production as their primary, even exclusive, social mission. This approach sees technological production as inherently ahead of its time and as renouncing allegedly obsolete, restrictive social norms that seek to put an end to progress. Accordingly, in this specific socio-professional context, a formalized professional moral regime (e.g. a code of ethics) is seen as closely akin to innovation’s longtime nemeses—bureaucracy and regulation.
Data scientists’ third moral logic sees algorithmic ethics as viable only when it is commodifiable and subjected to the rules of the market. Here, data scientists’ moral logics do not stand in opposition to capital (Whalley, 1986); rather, they wholeheartedly adopt the entrepreneurial, venture capitalist ethos that favors financial profit over people’s wellbeing. Moreover, the commodification of ethics favors trading in values over positioning them above and beyond the market, thus facilitating the continued trade in personal data (Zuboff, 2019). In other words, this moral logic, which necessitates the commodification of ethics, merely confirms the creeping commodification of all aspects of human existence (Illouz, 2017; Meade, 1996).
While this threefold libertarian-technocratic-capitalist view of ethics revolves around contemporary algorithmic production, it in fact has a long professional history, one that can be traced back to data science’s parent profession—engineering. Sociologists of technical work have famously identified a fundamental normative conflict between the engineering spirit and the profit-motivated bureaucratic and capitalist organization. According to these scholars, engineers’ technical and rational expertise was often inconsistent with what they understood as an “irrational” aspiration for profit (Layton, 1986; Whalley, 1986). Accordingly, sociologists of engineering predicted that engineers, in their demand for autonomy and more collegial work, would transform bureaucratic organizations from within (Bell, 1976). They further argued that, alongside other professions, engineering would eventually subject capital to its own professional ethical principles (Freidson, 1973). Nevertheless, over the years, engineers’ organizational career paths, their loyalty to their employers, and their resistance to institutionalized professionalism have prevented the development of a binding ethical code for their profession (Ensmenger, 2010).
Today, as data scientists apply their expertise in multiple and highly diverse social fields, it appears that the social forces that operated on their predecessors’ professional ethics are still at play, preventing the development of a binding moral regime and favoring boundless technological development and capitalist endeavors over socially aware ethical considerations. Given the gradually increasing public criticism around algorithmic harms, data science might have been expected to establish a binding moral regime. However, as a sub-profession of engineering, data science effectively turns its back on such normative institutionalizations. The moral logics presented above can accordingly be seen as a localized “moral grammar” (Honneth, 1996) through which data scientists reject any association between potential algorithmic harms and an organized response to them. This grammar is how data scientists discursively minimize the potential of their socio-professional environment to devise a formal, agreed-upon moral regime.
Our findings echo similar findings from other global tech centers, like the US (Metcalf et al., 2019), Australia (Orr and Davis, 2020), and Spain (Ibáñez and Olmeda, 2022), and like them, they highlight a fundamental disconnect between AI ethics initiatives and the algorithmic “production floor.” Thus, the moral logics that characterize Israeli data scientists might originate from a global socio-professional culture—the engineering professions’ implicit moral regime. However, while engineers’ longtime avoidance of institutionalized ethics might point toward such an explanation, a more localized, contextualized perspective must also be considered. Namely, Israeli data scientists’ lack of an institutionalized moral regime may also be related to specifically Israeli determinants.
As Kotliar (n.d.) has shown, the close ties between the Israeli military and the Israeli high-tech scene not only allow Israeli engineers to develop new skills, new social networks, and new social norms (Swed and Butler, 2015: 125), but also shape how they understand their algorithmic work and construct their technological ethics. Moreover, some of the most lauded (and notorious) Israeli characteristics that have presumably helped spur Israel’s phenomenal success as the “Startup Nation” (Senor and Singer, 2009) can also explain Israelis’ view of AI ethics: Israelis’ unapologetic directness, questioning of authority, informality, and militarized ethos of teamwork, mission, and risk may promote entrepreneurial successes, but at the same time, they are almost inherently incompatible with the creation of agreed-upon ethics. In addition, Israelis’ general disregard for privacy (Ribak and Turow, 2003), the country’s conflict-ridden reality, the immense profitability of its cyber-weapon sector, and the homogeneity of the local high-tech scene also play a part in hindering the formation of an agreed-upon, localized ethics. Finally, Israeli techies’ longtime opposition to unionization (Fisher and Fisher, 2019) may also play a part in their disregard for ethics, and it serves as a reminder that the professional explanation is never detached from an ethnonational one. These localized characteristics question the potential of AI ethics curricula and programs (such as “data feminism” [D’Ignazio and Klein, 2020], “indigenous AI” [Abdilla et al., 2021], “human-centered AI” [Xu, 2019], and other pedagogical approaches [Borenstein and Howard, 2021; Yates et al., 2021]) to educate Israeli techies into a more ethical algorithmic production. Moreover, as much as the global profession of data science shapes Israeli data scientists’ view of ethics, local Israeli moral regimes may also have a global reach when locally produced AI technologies are disseminated worldwide (Kotliar, 2020b).
Nevertheless, like technology, ethics is a creature of time. After all, people’s moral logics merely derive from the moral repertoires available at a given socio-historical moment (Boltanski and Thévenot, 1999, 2006), and like any cultural repertoire, these might change. Moreover, the relatively recent introduction of the GDPR, the CCPA, the DMA, and other legislation, the fierce global public discussion on algorithmic harms, and other socio-techno-legal advancements may well change data scientists’ moral logics in Israel and beyond. Such factors might eventually redesign data scientists’ moral toolbox through mandatory ethical training, formal ethical accreditation, or the creation of other professional institutions that would focus on the good of society rather than the datafied goods extracted from it. Thus, while data science’s moral regime may stem from its parent profession of engineering, the emergence of data science as a nascent profession may still provide opportunities for moral restructuring and maturation. However, as we have shown above, such processes would need to consider engineers’ local and professional contexts if they are to bear fruit.
This research’s qualitative, interpretive design offered a fine-grained exploration of the socio-professional context behind algorithmic ethics. However, like that of every qualitative research design, its generalizability is inherently limited. This research also focused on data science as a nascent profession, and as such, it offered a cross-organizational view of this emerging field rather than a company-specific or industry-specific one. Future research should interrogate algorithmic ethics from a quantitative, more generalizable perspective and, alternatively, offer ethnographic explorations of the ethics of algorithms in specific tech companies, sectors, or industries. Research should also extend our view of tech ethics across geographies, particularly to tech centers beyond “the West,” and explore data scientists’ moral logics in various global settings.
We thank the reviewers for the enlightening comments and enriching dialogue. This article was written with the support of the Shapiro Fund Fellowship for postdoctoral students, the Department of Sociology and Anthropology, Tel Aviv University.
Netta Avnoon, Dan M Kotliar and Shira Rivnai-Bahir
New Media & Society, Vol. 26, Issue 10 (October 2024), pp. 5962–5982
DOI: 10.1177/14614448221145728