New Media & Society

Contextualizing the ethics of algorithms: A socio-professional approach

Published October 1, 2024

Article Information

Volume 26, Issue 10, pp. 5962–5982

DOI: 10.1177/14614448221145728

Netta Avnoon, Tel Aviv University, Israel
Dan M Kotliar, University of Haifa, Israel
Shira Rivnai-Bahir, Ben-Gurion University of the Negev, Israel; Reichman University, Israel

Corresponding author: Netta Avnoon, The Department of Sociology and Anthropology and Coller School of Management, Tel Aviv University, Tel Aviv 6997801, Israel. Email: nettaa@tauex.tau.ac.il

Netta Avnoon is also affiliated with Columbia University, NY.

Abstract

Research on AI ethics tends to examine the subject through philosophical, legal, or technical perspectives, largely neglecting the sociocultural one. This literature also predominantly focuses on Europe and the United States. Addressing these gaps, this article explores how data scientists justify and explain the ethics of their algorithmic work. Based on a pragmatist sociological analysis of 60 semi-structured interviews with Israeli data scientists, we ask: how do data scientists understand, interpret, and depict algorithmic ethics? And what ideologies, discourses, and worldviews shape algorithmic ethics? Our findings point to three dominant moral logics: (1) ethics as an individual endeavor; (2) ethics as hindering progress; and (3) ethics as a commodity. We show that while data science is a nascent profession, these moral logics originate from the techno-libertarian culture of its parent profession—engineering. Finally, we discuss the potential of these moral logics to mature into a more formal, agreed-upon moral regime.

Keywords

AI, AI ethics, algorithmic ethics, algorithms, culture, data science, moral regimes, pragmatist, professions

Research has highlighted the social harms that stem from big data algorithms. Such algorithms were shown to restrict personal autonomy (Rouvroy, 2013); reproduce inequality, discrimination, and racism (Benjamin, 2019; Eubanks, 2018; Noble, 2018); destabilize democracy (Tufekci, 2014); promote polarization (Woolley and Howard, 2017); cause environmental harm (Bender et al., 2021; Crawford, 2021); and more. Specific attention has been given to the potential ramifications of autonomous cars (Gal, 2017), facial recognition systems (Buolamwini and Gebru, 2018), and autonomous weapons (Lewis, 2014). That is, the development of big data algorithms, particularly machine learning (ML) algorithms and artificial intelligence (AI), is increasingly accompanied by poignant social criticism about the development and implementation of such technologies. Addressing this criticism, various researchers and organizations have begun to deal extensively with the ethics of algorithms, producing articles, books, and reports on the subject and formulating various guidelines and frameworks for the ethical development of big data algorithms and AI (Ananny, 2016; Jobin et al., 2019; Phan et al., 2021). Similar initiatives have recently matured into legislation, such as the European General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the European Digital Markets Act (DMA). However, the literature on “AI ethics” tends to examine the subject through philosophical, legal, or technological perspectives. Curiously, the sociocultural, professional, and organizational contexts in which big data algorithms are developed and imbued with (im)moral values have only recently begun to receive scholarly attention.

In this article, we seek to contribute to the burgeoning literature on the ethics of algorithms by offering a grounded and contextualized account of how AI developers understand and explain their ethics. Following Boltanski and colleagues’ pragmatic approach (Boltanski and Thévenot, 2006; Lemieux, 2014), we focus on how data scientists justify, understand, and interpret the ethics of their algorithmic work. 1

Moreover, acknowledging the ties between algorithms and culture (Seaver, 2017), recognizing the importance of the professional context in algorithmic production (Avnoon, 2021), and responding to the prevailing Euro-American centrism in research on such production (Kotliar, 2020b), we focus on a specific sociocultural context—the Israeli data science community. Based on 60 semi-structured interviews, we identify three dominant moral logics: (1) ethics as an individual endeavor; (2) ethics as hindering progress; and (3) ethics as a commodity. Finally, we discuss the potential of these moral logics to mature into a more formal, agreed-upon moral regime and highlight their historical ties to data science’s parent profession—engineering.

Algorithms, culture, morality

While big data algorithms are developed in high-technology contexts and are based on advanced mathematics, scholars increasingly recognize that algorithms are far from neutral technological artifacts. Instead, they are actively created by human actors who imbue them with sociocultural contents and meanings (Gillespie, 2014; Van Dijck, 2014). Similarly, the functioning of big data algorithms is broadly affected by the data they run on, as well as by their training data, with its inherent biases and flaws (Gitelman, 2013; Scheuerman et al., 2021). Thus, algorithms are neither objective artifacts nor mere byproducts of the interaction between data and mathematical modeling. Instead, they are socio-technical products—actants in a buzzing socio-algorithmic assemblage (Latour, 1992; Seaver, 2017)—that reflect and echo the salient worldviews, practices, and values of the culture around them (Kotliar, 2020a).

Similarly, the ethics of algorithms is more than the result of formal guidelines and regulations, and it does not only mirror algorithms’ characteristics or the structure of their datasets. Instead, the ethics of algorithms stems from specific cultures of practice (Van Maanen and Barley, 1984)—from the norms, belief systems, and worldviews of the people who develop the algorithms and from the specific social contexts from which such algorithms emerge. Thus, we ask, how do the people who develop such algorithms justify, understand, and interpret their ethics? What ideologies, discourses, and worldviews shape algorithmic ethics? To answer these questions, we turn to the sociology of morality.

Moral logics

The sociology of morality examines how moral systems are constructed, understood, and adopted by societies, organizations, and individuals (Durkheim, 1961; Hitlin and Vaisey, 2013; Weber, 2003). Early sociological approaches frequently dealt with questions of morality (Weber, 2003), highlighting how social structures shape social morals (Durkheim, 1961). Nevertheless, the sociological interest in questions of morality quickly waned, and only toward the end of the 20th century did sociologists begin to express renewed interest in the links between the moral and the social. Prominently, Boltanski and his associates have proposed a pragmatic approach to morality that focuses on individuals as moral subjects while emphasizing the social contexts in which they are embedded (Boltanski and Chiapello, 2005). The pragmatic approach denies the existence of a monolithic or universal moral system (Boltanski and Thévenot, 1999, 2006); instead, it represents a shift toward a polytheist morality (Schwarz, 2013: 157). Thus, rather than focusing on individuals’ ability to choose between “the moral” and “the immoral,” this approach proposes to focus on individuals’ ability to navigate within and between diverse moral repertoires and choose between different “moral logics” (Boltanski and Thévenot, 1999). Hence, the pragmatic view of morality sees individuals as moral agents who actively choose their moral position out of vast repertoires of “moral logics” (Schwarz, 2013). In other words, people’s moral “toolbox” (Swidler, 1986) is culturally and socially shaped, and individuals eventually choose and justify why something may or may not be moral.

Following this view, this article sees algorithmic ethics as people’s attempts to institutionalize particular moral regimes out of culturally available repertoires of moral logics. Examining data scientists’ moral logics may deepen our understanding of the moral repertoires at their disposal, the social contexts from which they derive these logics, and the feasibility of establishing a dominant moral regime—uniform, agreed-upon algorithmic ethics.

Data science as a nascent profession

While AI practitioners go by different names, they are most prominently known as “data scientists” (Avnoon, 2021; Ribes, 2019)—a professional title that first emerged as early as the 1970s (Naur, 1974) but was vastly popularized only around 2008 from the offices of LinkedIn and Facebook (Patil, 2011). Data scientists often describe themselves as both engineers and scientists, and given their extensive technological practice, they are considered a sub-profession of engineering (Avnoon, 2021). Members of this new profession develop algorithms for big data operations and ML or AI, using statistical-probabilistic approaches. Alongside data analysis, data scientists’ work also revolves around collecting and preprocessing data and integrating domain knowledge (e.g. from domains such as psychology, radiology, and law) into algorithms and their data (Fayyad et al., 1996). Thus, data scientists play a crucial role in designing the collection, processing, and analysis of data and in disseminating algorithmic results to non-experts. In other words, alongside programmers, executives, entrepreneurs, and investors, data scientists are key players in the “datafication of everyday life” (Van Dijck, 2014), and they play a significant role in the socio-algorithmic assemblage, promoting or delaying the development of big data algorithms and their implementation into society. Therefore, data scientists’ ethical positions—the way they perceive and understand the moral implications of their work—can profoundly affect the design, production, and implementation of their algorithms.

Data scientists, professional norms, and the absence of an institutionalized ethics

As sociologists of professions have long shown, people’s socio-professional contexts play a significant role in shaping their professional ethics as part of their socialization and acculturation (Fournier, 1999). Such ethics are usually developed through the gradual creation of professional norms, practices, and standards that revolve around the implementation of expertise into society (Freidson, 1973; Scott, 2008).

Knowledge-intensive professional communities tend to formulate ethical codes to establish trust between the public and their profession and to regulate the application of their expertise in society (Scott, 2008). Prominent examples include lawyers, medical doctors, and accountants (Abbott, 1983). However, unlike these professions, and due to various structural determinants, software engineering has so far avoided formulating a binding ethical code (Ensmenger, 2010). Since data science is an engineering profession, the development of formal ethical guidelines within it runs into similar structural barriers (Mittelstadt, 2019).

Nonetheless, due to the rise of data science as a new profession and following the growing public criticism around algorithmic harms, prominent members of the data science community have recently initiated the formulation of an ethical code (Patil, 2018). At the same time, tech giants like Microsoft, Google, and IBM have begun publishing ethical guidelines in an attempt to design the field of AI ethics from its inception (Phan et al., 2021; Taylor and Dencik, 2020). In addition, these companies’ employees have also begun protesting against some of the algorithmic tools that their employers develop—including facial recognition systems, autonomous weapons, large-scale surveillance projects, and more (Crawford, 2019; Shane and Wakabayashi, 2018). However, these ethical initiatives are currently in their infancy, and their impact on the ethics of algorithms is yet to be seen. Moreover, as we argue below, their global reach is questionable.

A sociocultural approach to algorithmic ethics

While the ethics of algorithms is predominantly discussed from philosophical, legal, and technological perspectives, researchers have recently begun to examine this subject from a sociocultural perspective. For example, scholars have focused on media coverage of AI, showing that such algorithms elicit much optimism (Fast and Horvitz, 2017) but also considerable anxiety and concern (Cave et al., 2019; Ghotbi et al., 2022). Others have focused on pedagogical aspects of AI and the need for “AI ethics education” (Borenstein and Howard, 2021) or “AI literacy” programs to help people critically evaluate the ramifications of big data algorithms (Atenas et al., 2022; Yates et al., 2021).

Others have focused on big tech’s implementation (and co-optation) of algorithmic ethics programs. For example, Phan and colleagues have focused on how ongoing crises around big tech corporations have led to the creation of “economies of virtue”—“a space where reputations are traded, and ethical practice is produced in line with commercial decision-making” (Phan et al., 2021: 130). They show that experts who offer ethics advice to such companies risk getting co-opted by them. Metcalf et al. (2019) have similarly focused on “ethics owners” in Silicon Valley—people who are professionally responsible for applying ethical principles in technological production. They show that “ethics owners” grapple with tech culture’s fundamental logics and norms that paradoxically impair ethics while “performing” it.

Focusing on “the production floor” of AI, scholars have recently begun to offer empirical explorations of AI developers’ attitudes and perceptions toward the ethics of algorithms. For example, Veale et al. (2018) interviewed public sector ML practitioners in Organisation for Economic Co-operation and Development (OECD) countries about their challenges in imbuing public values into their work. They highlight a fundamental disconnect between organizational and institutional realities, which is likely to undermine ethical initiatives. Ibáñez and Olmeda (2022) have similarly described the gap between practice and principles among Spanish AI managers and the tactics they use to close that gap. Holstein et al. (2019) focused on AI practitioners’ challenges and needs in developing fairer ML systems, highlighting the disconnect between the challenges faced by teams in practice and the solutions proposed in the “fair ML research literature” (Holstein et al., 2019: 1). Orr and Davis (2020) have shown that Australian AI engineers tend to diffuse the ethical responsibility of their work, believing that others are responsible for determining the rules while they are merely liable for meeting technical requirements. Duke (2022) has similarly shown that AI developers might be aware of the risks posed by their technology, but they do not see themselves as accountable for them. More recently, Ryan et al. (2022) have highlighted the tensions between organizations’ AI ethics and the values of their employees. These works span various algorithmic production centers, and they almost unanimously point to a fundamental disconnect between algorithmic ethics initiatives and the algorithmic production floor. Namely, while discourses and services around AI ethics are beginning to proliferate, they have yet to materialize into coherent, agreed-upon ethics accepted and practically performed by AI developers themselves.

This article contributes to this line of research by focusing on data scientists’ sense-making processes around algorithmic ethics. Focusing on how data scientists understand and interpret algorithmic ethics, and on the socio-professional context of their work, offers a more fine-grained understanding of the gap between AI ethics theory and practice. Moreover, through the investigation of Israeli data scientists, this article contributes to the global study of algorithmic production beyond Euro-American spaces (Kotliar, 2020b; Ribak, 2019; Takhteyev, 2012). Thus, based on a pragmatic sociological analysis, we ask: how do Israeli data scientists understand, justify, and interpret their algorithmic ethics? And what ideologies, discourses, and worldviews shape their ethics?

Method

Research settings: data science in Israel

Since the 1970s, and particularly since the 1990s (John, 2011), Israel has developed a highly active high-tech industry, currently boasting over 7000 companies, 401 multinational corporations’ research and development centers, and tens of billions of dollars in venture capital investments each year (IVC-Meitar, 2022; Startup Nation Finder, 2022). In line with the global interest in data science, the last decade has seen the emergence of a vibrant community of Israeli data scientists. In fact, although Israel is relatively small (with a population of 9 million inhabitants), in 2016 it ranked 10th in the world in the absolute number of data scientists and first in density (the number of data scientists per million residents) (Stitch, 2016). Israel is thus a global center of algorithmic production: Israeli-produced algorithmic systems can be found worldwide, and conversely, trends, norms, and regulations formulated outside Israel routinely affect the work of local high-tech workers (Ribak, 2019).

Despite the vibrant technical discussions in the Israeli data science community, ethical discussions are only now starting to emerge, as this community tends to avoid coordinated discussions of the social implications of its technology. Furthermore, algorithmic ethics is only beginning to be discussed in engineers’ training institutions, and ethical issues are rarely debated in this community’s meetups, hackathons, and conventions (Avnoon, 2019). Moreover, unlike Silicon Valley companies, Israeli tech companies rarely employ ethicists or “ethics owners” (Metcalf et al., 2019). Given the lack of formal ethical discussion, education, or organizational standards, it is imperative to explore how Israeli data scientists understand and construct algorithmic ethics and to identify their dominant moral logics.

Data collection

The data for this study were collected as part of a broad research project on data science as a nascent profession, conducted in Israel between 2015 and 2018 by the first author. The project included 60 semi-structured interviews with data scientists (n = 50), their supervisors (n = 5), and the university professors training them (n = 5), as well as participant observations in data scientists’ community events and their online groups. Sampling focused on individuals who defined themselves as data scientists on the professional social network LinkedIn. 2 The first author contacted 125 data scientists via LinkedIn, of whom 46 agreed to participate. In addition, four other data scientists were snowball-sampled based on interviewees’ recommendations. 3

The interviews followed a topic guide of open-ended questions. Participants were asked about their view of ethics and the nature of ethical codes in their profession. 4 Interviews were usually conducted at local cafes and lasted 60–90 minutes. All interviews were recorded and transcribed by the first author. The names of the interviewees and their workplaces were pseudonymized. 5

Data analysis

The data were analyzed by the three authors using thematic analysis (Braun and Clarke, 2006) and in light of the pragmatic approach that focuses on how people interpret, justify, and criticize various normative views (Boltanski and Thévenot, 1999, 2006). Accordingly, the analysis sought to delineate data scientists’ meaning-making processes and the specific “moral logics” (Boltanski and Thévenot, 2006; Lemieux, 2014) that they use to understand, interpret, and justify the ethics of their algorithmic work.

First, we read and reread the transcripts to identify participants’ moral claims and justifications. We then discussed the emerging themes, and only those agreed upon unanimously were utilized further. Second, we reread the transcripts, tagged the appropriate text under the identified themes, and eventually identified three dominant moral logics expressed by our interviewees. Finally, after the initial clustering, we selected and translated prominent quotes representative of each moral logic and analyzed them in light of the research questions and relevant literature.

Findings

“It’s more of a personal preference”: ethics as an individual endeavor

Many of our interviewees described algorithmic ethics as a personal issue related to individual tendencies, preferences, and values. As Ziv, a data scientist at a fintech startup company, said:

As a human being, I try to contribute to the world and do no harm. Personally, that’s what I try to do—I try to contribute. I make all my choices with that in mind. But in the information world, many things are just wrong. And we’re simply such a bunch of geeks that we’re not going to do anything [about] it. But eventually, yes, you can use data to do lots of bad things to lots of people in the world, lots of bad things.

Ziv acknowledges the dangers that lurk in big data analyses. According to him, the “information world” is ridden with problems, and data can do “lots of bad things” to people. Nevertheless, at the same time, he also emphasizes that ethics is something that belongs to the individual—to an autonomous, independent entity—it is he who “tries to contribute,” tries to “do no harm,” and who sees his algorithmic ethics as something that informs his personal choices (“I make all my choices with that in mind”). In other words, according to Ziv, he is the sole moral agent in this equation, and the ethics of the algorithmic tools that he develops depend on his own intentions. While Ziv hints at the possibility of collective action, he immediately rejects it, explaining that he and his coworkers are “a bunch of geeks.” Namely, the alleged collective characteristics of his peers are at odds with the dangers that algorithmic production might pose. According to this view, data scientists’ inability to act against the ramifications of big data algorithms and the capitalist forces driving them does not stem from insufficient knowledge, ignorance, or the lack of political organization but from their personality traits and their allegedly innate tendency to avoid conflict. Ziv’s reference to his colleagues’ “geekiness” is consistent with popular stereotypes of IT workers (Kendall, 2011), often considered introverted and withdrawn—the opposite of the extroverted, socializing, and, in this case, also political character that Ziv believes is required to engage with ethical dilemmas. This image of geeks also conveniently ignores well-documented questionable behaviors in geek culture, such as the misogynistic harassment campaign known as “Gamergate” (Aghazadeh et al., 2018; Phillips, 2018).

Liron, a data scientist in a global corporation, expressed similar views when asked about ethics:

I’ve never accessed the account of anybody I know in our [system], although I can do that [. . .], I [was] never tempted to do it, and I think this is true for most people working for us. I don’t know if it’s because of the type of people we hire, or, I don’t [know] why. But, somehow, I feel like there is some kind of ethics to our profession, at least in our company, that just comes naturally. It’s [made out] of people who would never take advantage of anything, [including] the data they’re working on.

Like Ziv, Liron refers to the power at the hands of data scientists and the companies that employ them, a power that largely stems from the plethora of user data at their disposal. He claims that he intentionally avoids violating the privacy of his company’s users and that most of his colleagues do the same. Ignoring the exploitive nature of this new form of capitalism (Zuboff, 2019), Liron focuses on the access he and his colleagues have to user data and prides himself and his colleagues on actively choosing not to misuse it. Thus, like Ziv, Liron envisages his ethics as something that resides in the individual and the choice of respecting users’ privacy as a personal choice, hence evading any sort of collective responsibility. Moreover, for Liron, this ethics “comes naturally” rather than resulting from an organizational requirement or a binding professional commitment. It is a moral logic that sees algorithmic ethics as dependent on individual agents and their actions and inactions, but nonetheless, as ethics that are allegedly innate to data science—that “come naturally” to people who work in this profession.

Nevertheless, what characterizes this individual moral agency of data scientists? How do they implement their moral logics, and how do they envisage their “moral toolbox”? Most of the data scientists in our sample consider the ability to design their careers as the main, if not only, way of following their inner conscience. For example, Amit, who works at a fintech startup company, argued as follows:

I try to be very ethical; it’s very important to me. From the day I started in data science, I told myself: “I’ll [work in] the advertising business? Never!” It disgusts me. Facebook makes me sick to my stomach. That’s my take, but [. . .] people don’t talk about it much because many people only care about the money.

Like the previous interviewees, Amit describes ethics as something that concerns him personally, to the point of evoking physical moral aversion (“makes me sick to my stomach”). Accordingly, Amit describes himself as capable of following his conscience by choosing the right job. As data-intensive technologies are now integrated into almost every industry, and with high demand for such professionals, data scientists can find jobs in a wide range of industries and sectors. Amit explains that he sees Ad-Tech companies, and Facebook specifically, as out of the question and that he actively avoids them. That is, he explains that his way of enacting his moral stance lies in his ability to choose where to work, and particularly, where not to work. Amit indeed recognizes that not all data scientists hold these high ethical standards (“many people only care about the money”), but he sees these considerations as personal, not as something that stems from organized professional ethics.

Lavi, a data scientist in a large international tech company, similarly said,

For me, personally, it would have been difficult to work for some binary options company. These are, at best, gambling companies, and at worst, they are companies that try to take people’s money. And these companies have lots of work for data scientists. I’m not [like that, but] some people just don’t give a damn. Other companies do all kinds of surveillance [. . .]. Here too, that’s not my style. I don’t think there is a code [of ethics], but . . . it’s more of a personal preference. I’m sure, for example, that the porn industry also has jobs for data scientists, but that bothers me less. They, at least, put their cards on the table and say: “there, this is what we do.”

Lavi too emphasizes that ethics is a matter of personal preference and suggests that unlike him, others just “don’t give a damn” about ethics, implying that such ethics does not, in fact, characterize all data scientists. In highlighting his occupational choices, he focuses on each company’s industry rather than on how they develop their algorithms or manage their data and explains that he would not work for companies that deal with gambling, binary options, or surveillance. Thus, he highlights the possibility of evaluating various companies, rating them on a moral scale (“surveillance [. . .] that’s not my style”; “porn [. . .] that bothers me less”), and actively choosing between them as the primary tool in his ethical toolbox. Like the moral imperative itself, the employment decision is seen as personal, and so are the moral considerations that lead up to it.

Thus, our interviewees tend to consider algorithmic ethics a matter of the heart—one that revolves around individuals’ consciences rather than around broader organizational or professional structures. Accordingly, their moral agency remains personal, and their ethical options narrow down to one action: rating jobs and choosing between them. These ratings are based on the public image or stigma (Cohen and Dromi, 2018) of the sector in which each company operates.

“Back to the stone age”: ethics as hindering progress

In line with the previous moral logic that views algorithmic ethics as a personal matter of the heart, our interviewees also tended to contrast algorithmic ethics with one fundamental characteristic of their profession—technological progress. According to this moral logic, data science’s innovation, and the progress it brings about, rely on the use of almost unlimited amounts of data. Hence, the ability to access and process data unrestrictedly is seen as essential to technological development and to progress in general. As Nimrod, a global tech company employee, stated in an interview:

Last week I asked my team what would happen in ten years, and I said: “the entire privacy issue will just disappear.” In ten years, we won’t give [privacy] another thought. We’ll just let it go completely. I mean, in ten years, we’ll say, what? Cookies? You must be kidding. Sensors will float in our bloodstream and continuously report on our cardiac condition! We will be so exposed that it won’t bother us one bit. We’ll be transmitting so much information to our environment that it’ll feel like we’re naked, it’ll be like walking nude in the street, [. . .], and it won’t bother us for a second. We won’t even think about it. Mark my words—you’ll see that I’m right.

According to Nimrod’s moral logic, one of the key items on the algorithmic ethics agenda—the right to privacy—is about to disappear since technological progress will inevitably lead to complete exposure, “nakedness,” as well as complete acceptance of that exposure. Nimrod accordingly expects future surveillance to be much more invasive and more tangible than today’s. Instead of web cookies—a widespread in-browser tracking device (Carmi, 2017)—sensors will float in our bloodstream and will perpetually assess our physical state, reporting to whomever. The way he embodies data collection—between public nudity and subcutaneous sensors—emphasizes that what may seem today like an extreme violation of an ethical principle, a desire to “undress” people, go into their bodies, and get under their skins, would eventually come to be seen as natural, even obvious. Thus, for Nimrod, technological progress is inevitable and necessarily benevolent, even if today it may seem to cross clear ethical boundaries. Following the same moral logic, Nimrod also makes clear that it is humans who would need to stretch their boundaries to meet technology and capital’s changing demands, changing their fundamental values (e.g. regarding public exposure) in the process. He describes that change as natural, even evolutionary—one that does not require elaborate thought processes, lengthy discussions, or political organization. Accordingly, Nimrod concludes his optimistic, techno-determinist narrative (Wyatt, 2007) with an almost threatening promise to the interviewer: “mark my words, you’ll see that I’m right.”

As John and Peters (2017) have shown, the end of privacy has long been predicted; in fact, the privacy discourse has bemoaned privacy’s death since its inception. Similarly, in our case, Nimrod’s moral logic is infused with technological solutionism (English-Lueck, 2017) that unequivocally equates progress with technology. According to this logic, technology’s developers and users alike remain passive, if not helpless, in the face of technology’s transforming power.

Indeed, the data scientists in our sample tended to oppose the imposition of restrictions on technological development. As David, who works for a startup company, explained:

One of the reasons behind data science’s rapid growth is the lack of bureaucracy. There are no bureaucratic restrictions. You can do whatever you want. Many people are working on it, so new things get created all the time. There are no restrictions because the harm [caused by these technologies] is probably minimal. I mean, what’s the big deal? So, people may know that you went from that page to another, and they can see which pages you went through. As if that can ever hurt anybody {chuckles}. Anything can hurt you.

David expresses a libertarian stance common in his technological community: that the unprecedented growth in the local data science industry stems from a lack of regulation. According to David’s moral logic, this lack is explained by the fact that algorithmic harms are essentially minimal, and ethical oversight is outright redundant. In other words, like the free market, algorithmic ethics is allegedly self-regulating, and any external regulation would only encumber its progress. This anti-bureaucratic approach is designed to fend off attempts to restrict technological development, even if these restrictions aim to protect the public interest. Like Nimrod, David views the loss of privacy as a done deal and the sensitivity around it as laughable—“anything can hurt you.”

Eran, a data scientist in a startup company, similarly said,

How ethical is it? Really? I say that there’s no room for this question because today, everyone surveils everyone. If you ponder ethical questions [like]: “Is it OK to surveil people? Is it OK to collect information about people?” then you’re, in fact, back in the stone age.

Eran also heralds the end of privacy (John and Peters, 2017) and accordingly argues that the very discussion of ethical questions is obsolete. To him, such questions only indicate technological backwardness (“you’re back in the stone age”). According to this moral logic, technology is equivalent to progress, whereas ethical contemplations inherently mean stagnation. That is, Eran not only rejects the possibility of ethical action (by legislation, regulation, or the institution of professional or organizational norms), but he dismisses the very discussion of the subject. Thus, in this case, technological determinism and solutionism disavow the ethical debate around the social implications of technology. This view echoes the sociological claim that modernity is anchored in technological development and that this development is inherently at odds with an obliging moral system (Bauman, 2000). Thus, normative questions are deemed irrelevant where technological development is concerned.

“The PayPal of your private data”: ethics as a commodity

Despite the individualistic and techno-deterministic views expressed in the previous sections, the idea of organized ethics is not entirely foreign to the Israeli data science community. Instead, it slowly seeps into it through the commodification of ethics. Jonathan, for example, a data scientist with an MA in computer science, described the company he works for:

Privacy is part of our interests, part of the very reason the company [I work for] was founded. It’s really about offering sane and correct information management instead of privacy. Today everyone is after your data; everybody wants to learn about you. And without such a model, without a company that does this, we enter a grey zone. So, it’s an arms race. Everybody wants to collect your data; everybody wants to know where you are. [. . .] Our goal is to be like the PayPal of your private data. To be that one entity that you’d know is big, and that’s what it does. If they don’t secure [your data] properly, their business will collapse because that’s their business.

Jonathan describes the personal data market as a dangerous conflict zone, one in which users clash with companies and companies clash with each other over the control of personal data (“it’s an arms race”). From his perspective, the only way to protect users’ data is by hiring a company that offers “privacy services.” That is, rather than understanding privacy as a moral injunction or a human right, he sees it as a value that can only be protected when commodified. According to this moral logic, algorithmic ethics can only be acknowledged and protected within capitalist market relations. As Jonathan further explained,

The wise thing would be to create a model where companies would pay for mistreating your data. Customers wouldn’t need to pay because you’re making money off other companies—they provide the service for free, and we make money only if we guard your privacy. So, you have a situation in which everybody’s interests converge. I studied some game theory, and that’s good, [with such a solution,] the system would reach equilibrium.

According to Jonathan, when privacy is commodified and protected through capitalist market relations, the interests of all parties will converge to everyone’s satisfaction. Thus, the market is supposed to balance itself out, not only economically but also ethically—the economic market will perfectly merge with the moral market, and ethical dilemmas will be resolved by their commodification. This view echoes what Metcalf et al. (2019: 9) described as “market fundamentalism”—the idea that tech companies’ bottom line governs their ethical considerations. Nevertheless, in the case before us, market success is not at odds with ethics but allegedly enhances it. Kfir, a data scientist and tech consultant, shares his experience with the commodification of ethics:

I once tried to start a company that sells personal data. I wanted everyone to walk around with this electronic component that contains all their personal data, and every time you want to complete a form at the store or on the internet, you’ll approach, scan your RFID chip or something, decide what you want to give or not, and how much you’re selling it for. The other side will determine whether they’re buying, and a market will open up. If it’s a market, then let’s make it a market all the way.

Like Jonathan, Kfir is also keenly aware that online personal data has become a lucrative commodity. Kfir accordingly argues that the best way to protect people’s privacy and let them control their information is to commodify it and rely on consumers’ choices regarding their data—whether to sell them and for how much. According to him, the commodification of private data will empower individuals as agents and ensure minimal harm. According to Gershon (2011), with neoliberal agency, individuals perceive and manage themselves as businesses. In the case before us, individuals may assume responsibility for their data and manage it as a digital extension of their autonomous, agentic entity, but only after a private company has commodified it.

Hence, in line with the aforementioned moral logics, the data scientists we interviewed tended to deterministically focus on the commodification of human bodies and values (Meade, 1996; Zuboff, 2019). Under this neoliberal logic, ethics becomes legitimate for social organization only when assigned economic value. Accordingly, a formal moral regime in the shape of institutionalized algorithmic ethics is only possible with a price tag attached to it. 6

Discussion

This research set out to explore how Israeli data scientists understand, justify, and interpret algorithmic ethics, and to delineate the ideologies, discourses, and worldviews that shape those ethics. We have shown that while Israeli data scientists enjoy a thriving professional community, they often overlook the social implications of the algorithmic tools that they develop. Accordingly, Israeli data scientists largely refrain from adopting the moral regimes offered to them by legislators, activists, and scholars—even when these have been somewhat formalized and institutionalized as ethical codes and guidelines. Instead, they turn to libertarian, technocratic, and capitalist moral logics that favor unrestricted technological progress over ethically and socially aware algorithmic development. Thus, our findings reveal Israeli data scientists’ particular assumptions and presuppositions about algorithmic ethics: the meaning of ethics, what it means to be ethical, the identity of ethical agents, and the plausibility and necessity of ethical action.

These findings highlight the incongruity of these logics with the attempt to establish a universal, agreed-upon algorithmic ethics. Specifically, the first moral logic—ethics as an individual endeavor—stands in stark opposition to establishing a collective, consensual moral regime. According to this logic, data scientists explain their avoidance of more formal moral regimes by describing their socio-professional community as inherently ethical—as one in which data scientists naturally “do the right thing” (even when they acknowledge that some of them do not). Hence, they insinuate that they, as individuals, have innate ethics of their own. This moral logic is bound to prevent any organized opposition to the violation of moral imperatives—whether individual or social—that may arise during the development and implementation of data-driven technology.

Moreover, this moral logic places a heavy burden on individuals, and its potential to turn into organized action remains highly limited. At the same time, data scientists’ moral agency is reduced to their attempts to evaluate and choose between potential employers. Such a practice impacts individuals’ career paths, disqualifying some companies and legitimizing others while allowing individual data scientists moral and financial flexibility. Nevertheless, this practice also circumscribes the profession’s ability to halt potentially harmful algorithmic development from within.

The second moral logic, which contrasts ethics with technological development, emphasizes that the primary moral obligation of the engineering professions in general, and data science in particular, is continuous technological growth despite potential social ramifications. Professional ethics is not beyond the horizons of this logic. However, the only ethics these data scientists can consider is work ethics—a deterministic, techno-optimistic view (Vydra and Klievink, 2019) that sees technological production as their primary, even exclusive, social mission. This approach sees technological production as inherently ahead of its time and as one that renounces allegedly obsolete, restrictive social norms that seek to put an end to progress. Accordingly, in this specific socio-professional context, the possibility of a formalized professional moral regime (e.g. a code of ethics) is seen as a factor that closely coincides with innovation’s longtime nemeses—bureaucracy and regulation.

Data scientists’ third moral logic sees algorithmic ethics as viable only when it is commodifiable and subjected to the rules of the market. Here, data scientists’ moral logics are not in opposition to capital (Whalley, 1986); instead, they wholeheartedly adopt the entrepreneurial, venture capitalist ethos that favors financial profit over people’s wellbeing. Moreover, the commodification of ethics similarly favors trading in values over positioning them above and beyond the market, thus facilitating the continued trading in personal data (Zuboff, 2019). In other words, this moral logic, which necessitates the commodification of ethics, merely confirms the creeping commodification of all aspects of human existence (Illouz, 2017; Meade, 1996).

While this threefold libertarian-technocratic-capitalist view of ethics revolves around contemporary algorithmic production, it in fact has a long professional history, one that can be traced back to data science’s parent profession—engineering. Sociologists of technical work have famously identified a fundamental normative conflict between the engineering spirit and the profit-motivated bureaucratic and capitalist organization. According to these scholars, engineers’ technical and rational expertise was often inconsistent with what they understood as an “irrational” aspiration for profit (Layton, 1986; Whalley, 1986). Accordingly, sociologists of engineering predicted that engineers would transform bureaucratic organizations from within in their demand for autonomy and more collegial work (Bell, 1976). They further argued that, alongside other professions, engineering would eventually subject capital to its own professional ethical principles (Freidson, 1973). Nevertheless, over the years, engineers’ organizational career paths, their loyalty to their employers, and their resistance to institutionalized professionalism have prevented the development of a binding ethical code for their profession (Ensmenger, 2010).

Today, as data scientists apply their expertise in multiple and highly diverse social fields, it appears that the social forces that operated on their predecessors’ professional ethics are still at play, preventing the development of a binding moral regime and favoring boundless technological development and capitalist endeavors over socially aware ethical considerations. Given the gradually increasing public criticism around algorithmic harms, data science could have been expected to establish a binding moral regime. However, as a sub-profession of engineering, data science effectively turns its back on such normative institutionalization. The moral logics presented above can accordingly be seen as a localized “moral grammar” (Honneth, 1996) through which data scientists reject any association between potential algorithmic harms and an organized response to them. This grammar is how data scientists discursively minimize the potential of their socio-professional environment to devise a formal, agreed-upon moral regime.

Our findings echo similar findings from other global tech centers, like the US (Metcalf et al., 2019), Australia (Orr and Davis, 2020), and Spain (Ibáñez and Olmeda, 2022), and like them, they highlight a fundamental disconnect between AI ethics initiatives and the algorithmic “production floor.” Thus, the moral logics that characterize Israeli data scientists might originate from a global socio-professional culture—the engineering professions’ implicit moral regime. However, while engineers’ longtime abstention from institutionalized ethics might point toward such an explanation, a more localized, contextualized perspective must also be considered. Namely, Israeli data scientists’ lack of an institutionalized moral regime may also be related to specific Israeli determinants.

As Kotliar (n.d.) has shown, the close ties between the Israeli military and the Israeli high-tech scene not only allow Israeli engineers to develop new skills, new social networks, and new social norms (Swed and Butler, 2015: 125); they also shape how these engineers understand their algorithmic work and construct their technological ethics. Moreover, some of the most lauded (and notorious) Israeli characteristics that have presumably helped spur Israel’s phenomenal success as the “Startup Nation” (Senor and Singer, 2009) can also explain Israelis’ view of AI ethics: Israelis’ unapologetic directness, questioning of authority, informality, and militarized ethos of teamwork, mission, and risk may promote entrepreneurial successes, but at the same time, they are almost inherently incompatible with the creation of agreed-upon ethics. In addition, Israelis’ general disregard for privacy (Ribak and Turow, 2003), the country’s conflict-ridden reality, the immense profitability of its cyber-weapon sector, and the homogeneity of the local high-tech scene also play a part in hindering the formation of an agreed-upon, localized ethics. Finally, Israeli techies’ longtime opposition to unionization (Fisher and Fisher, 2019) may also play a part in their disregard for ethics, and it serves as a reminder that the professional explanation is never detached from an ethnonational one. These localized characteristics cast doubt on the potential of AI ethics curricula and programs (such as “data feminism” [D’Ignazio and Klein, 2020], “indigenous AI” [Abdilla et al., 2021], “human-centered AI” [Xu, 2019], and other pedagogical approaches [Borenstein and Howard, 2021; Yates et al., 2021]) to educate Israeli techies toward more ethical algorithmic production. Moreover, as much as the global profession of data science shapes Israeli data scientists’ view of ethics, local Israeli moral regimes may also have a global reach when locally produced AI technologies are disseminated globally (Kotliar, 2020b).

Nevertheless, like technology, ethics is a creature of time. After all, people’s moral logics merely derive from the moral repertoires existing at a given socio-historical moment (Boltanski and Thévenot, 1999, 2006), and like any cultural repertoire, these might change. Moreover, the relatively recent introduction of the GDPR, the CCPA, the DMA, and other legislation, the fierce global public discussion on algorithmic harms, and other socio-techno-legal advancements may well change data scientists’ moral logics in Israel and beyond. Such factors might eventually redesign data scientists’ moral toolbox through mandatory ethical training, formal ethical accreditation, or the creation of other professional institutions that would focus on the good of society rather than the datafied goods that are extracted from it. Thus, while data science’s moral regime may stem from its parent profession—engineering—the emergence of data science as a nascent profession may still provide opportunities for moral restructuring and maturing. However, as we have shown above, such processes would need to consider engineers’ local and professional contexts if they are to bear fruit.

This research’s qualitative, interpretive design offered a fine-grained exploration of the socio-professional context behind algorithmic ethics. However, as with every qualitative research design, its generalizability is inherently limited. This research also focused on data science as a nascent profession, and as such, it offered a cross-organizational view of this emerging field rather than a company-specific or industry-specific one. Future research should interrogate algorithmic ethics from a quantitative, more generalizable perspective and, alternatively, offer ethnographic explorations of the ethics of algorithms in specific tech companies, sectors, or industries. Research should also extend our view of tech ethics across geographies, particularly to tech centers beyond “the west,” and explore data scientists’ moral logics in various global settings.

Acknowledgments

We thank the reviewers for their enlightening comments and enriching dialogue. This article was written with the support of the Shapiro Fund Fellowship for postdoctoral students, the Department of Sociology and Anthropology, Tel Aviv University.

Notes

1. The literature on algorithmic harms and the moral, social, and technological measures to address them uses the terms “AI ethics” or “data ethics.” While these are largely overlapping terms, the former is more prevalent among (and appropriated by) the tech industry, and it accordingly steers the discussion toward more techno-solutionist, capital-friendly approaches. At the same time, the term “data ethics” is used by a variety of scholars from different fields, and it seems to (quite literally) highlight the data (its protection, misuse, fairness, and more) rather than the sociocultural drama on the production floor. Therefore, while many of our interviewees are self-proclaimed “AI developers,” this article uses the more neutral and unmarked term “algorithmic ethics” to describe this emerging field. We see this as an umbrella term that offers more interpretive flexibility and enables us to highlight the sociocultural processes behind algorithms and their ethics.

2. At the time of the study, searching for “data science Israel” on LinkedIn yielded 1000 results. It now yields more than 23,000.

3. Regarding demography, 12% of interviewees were women (n = 6), and 88% were men (n = 49). 55% of interviewees were in their 30s, 24% were in their 40s, 15% were in their 20s, and only 6% were above 50. In addition, 98% of interviewees identified as Jews (n = 59), and only one identified as Arab. While Israeli Palestinians are gradually joining the local high-tech scene (Demalach and Zussman, 2017), this field remains overwhelmingly populated by Israeli Jews. In terms of training, 87% of the 134 degrees held by participants were in computer science, engineering, math, physics, and statistics. In addition, 8% of degrees were in finance and business, 3% in biology and chemistry, and 1% in the humanities. Half of the 50 employed data scientists worked for large, usually international tech companies (n = 25); 44% worked for startup companies in diverse fields, such as FinTech, AdTech, and transportation (n = 22); 4% were freelancers (n = 2); and 2% worked for a government research institute (n = 1).

4. For example, participants were asked: Does your profession have an ethical code? Is there somewhere for data scientists to discuss ethics? Do you discuss ethics with your colleagues? What are the ethical issues that concern you in your work? Do you think data science will develop ethical guidelines like other professions?

5. The study received IRB approval from the ethics committee at the Faculty of Social Sciences at the first author’s institution (approval no. 030/16).

6. Interestingly, this view of ethics as a commodity is usually limited to the discussion of privacy; other values such as transparency, fairness, and accountability—frequently discussed in the context of algorithmic ethics—have yet to be commodified in the eyes of Israeli data scientists and are not yet seen as a product to be bought or sold.

References

  • Abbott A (1983) Professional ethics. American Journal of Sociology 88(5): 855–885.
  • Abdilla A, Kelleher M, Shaw R, et al. (2021) Out of the black box: indigenous protocols for AI. Available at: https://static1.squarespace.com/static/5778a8e3e58c62bbf1a639ae/t/61808f1d034eda41942223a9/1635815199890/*Final+Unesco+Paper_Designed.pdf
  • Aghazadeh SA, Burns A, Chu J, et al. (2018) GamerGate: a case study in online harassment. In: Golbeck J (ed.) Online Harassment. Human–Computer Interaction Series. Cham: Springer, pp. 179–207.
  • Ananny M (2016) Toward an ethics of algorithms: convening, observation, probability, and timeliness. Science, Technology, & Human Values 41(1): 93–117.
  • Atenas J, Haveman L, Kuhn C, et al. (2022) Critical data literacy in higher education: teaching and research for data ethics and justice. In: Raffaghelli J, Sangrá A (eds) Data Cultures in Higher Education. London: Springer (in press).
  • Avnoon N (2019) The omnivorous strategy of data science: a nascent technical occupation. PhD Thesis, University of Haifa, Israel.
  • Avnoon N (2021) Data scientists' identity work: omnivorous symbolic boundaries in skills acquisition. Work, Employment and Society 35(2): 332–349.
  • Bauman Z (2000) Modernity and the Holocaust. Ithaca, NY: Cornell University Press.
  • Bell D (1976) The Coming of the Post-Industrial Society: A Venture in Social Forecasting. New York: Basic Books.
  • Bender EM, Gebru T, McMillan-Major A, et al. (2021) On the dangers of stochastic parrots: can language models be too big? In: FAccT '21, ACM conference on fairness, accountability, and transparency, Virtual Event, Canada, 3–10 March, pp. 610–623. New York: Association for Computing Machinery.
  • Benjamin R (2019) Race after Technology: Abolitionist Tools for the New Jim Code. Hoboken, NJ: John Wiley & Sons.
  • Boltanski L, Chiapello E (2005) The New Spirit of Capitalism. London: Verso Books.
  • Boltanski L, Thévenot L (1999) The sociology of critical capacity. European Journal of Social Theory 2(3): 359–377.
  • Boltanski L, Thévenot L (2006) On Justification: Economies of Worth. Princeton, NJ: Princeton University Press.
  • Borenstein J, Howard A (2021) Emerging challenges in AI and the need for AI ethics education. AI and Ethics 1(1): 61–65.
  • Braun V, Clarke V (2006) Using thematic analysis in psychology. Qualitative Research in Psychology 3(2): 77–101.
  • Buolamwini J, Gebru T (2018) Gender shades: intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research 81: 77–91.
  • Carmi E (2017) Review: cookies—more than meets the eye. Theory, Culture & Society 34(7): 277–281.
  • Cave S, Coughlan K, Dihal K (2019) 'Scary robots': examining public responses to AI. In: AIES '19: AAAI/ACM conference on AI, ethics, and society, Honolulu, HI, 27–28 January, pp. 331–337. New York: Association for Computing Machinery.
  • Cohen AC, Dromi SM (2018) Advertising morality: maintaining moral worth in a stigmatized profession. Theory & Society 47: 175–206.
  • Crawford K (2019) Halt the use of facial-recognition technology until it is regulated. Nature 572(7771): 565–566.
  • Crawford K (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press.
  • D'Ignazio C, Klein LF (2020) Data Feminism. Cambridge, MA: MIT Press.
  • Demalach E, Zussman N (2017) The effect of vocational education on short- and long-term outcomes of students: evidence from the Arab education system in Israel. Discussion Paper Series, Bank of Israel. Available at: https://ideas.repec.org/p/boi/wpaper/2017.11.html
  • Duke SA (2022) Deny, dismiss and downplay: developers' attitudes towards risk and their role in risk creation in the field of healthcare-AI. Ethics and Information Technology 24(1): 1–15.
  • Durkheim É (1961) Moral Education: A Study in the Theory and Application of the Sociology of Education. New York: Free Press.
  • English-Lueck JA (2017) Cultures@SiliconValley. Redwood City, CA: Stanford University Press.
  • Ensmenger N (2010) The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise. Cambridge, MA: MIT Press.
  • Eubanks V (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press.
  • Fast E, Horvitz E (2017) Long-term trends in the public perception of artificial intelligence. In: Proceedings of the thirty-first AAAI conference on artificial intelligence, San Francisco, CA, 4–9 February, pp. 963–969. Palo Alto, CA: AAAI Press.
  • Fayyad U, Piatetsky-Shapiro G, Smyth P (1996) From data mining to knowledge discovery in databases. AI Magazine 17(3): 37–54.
  • Fisher B, Fisher E (2019) When push comes to shove: dynamics of unionising in the Israeli high-tech sector. Work Organisation, Labour & Globalisation 13(2): 37–56.
  • Fournier V (1999) The appeal to "professionalism" as a disciplinary mechanism. The Sociological Review 47(2): 280–307.
  • Freidson E (ed.) (1973) The Professions and Their Prospects. Thousand Oaks, CA: SAGE.
  • Gal MS (2017) Algorithmic challenges to autonomous choice. SSRN Electronic Journal 25: 1–40.
  • Gershon I (2011) Neoliberal agency. Current Anthropology 52(4): 537–555.
  • Ghotbi N, Ho MT, Mantello P (2022) Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. AI & Society 37(1): 283–290.
  • Gillespie T (2014) The relevance of algorithms. In: Gillespie T, Boczkowski PJ, Foot KA (eds) Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press, pp. 167–193.
  • Gitelman L (ed.) (2013) "Raw Data" Is an Oxymoron. Cambridge, MA: MIT Press.
  • Hitlin S, Vaisey S (2013) The new sociology of morality. Annual Review of Sociology 39: 51–68.
  • Holstein K, Wortman Vaughan J, Daumé H, et al. (2019) Improving fairness in machine learning systems: what do industry practitioners need? In: Proceedings of the CHI conference on human factors in computing systems, Glasgow, Scotland, 4–9 May, pp. 1–16. New York: ACM.
  • Honneth A (1996) The Struggle for Recognition: The Moral Grammar of Social Conflicts. Cambridge, MA: MIT Press.
  • Ibáñez JC, Olmeda MV (2022) Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study. AI & Society 37: 1663–1687.
  • Illouz E (ed.) (2017) Emotions as Commodities: Capitalism, Consumption and Authenticity. London and New York: Routledge.
  • IVC-Meitar (2022) Israeli tech review. Available at: https://www.ivc-online.com/LinkClick.aspx?_atscid=7_134353_48041553_2080834_0_Teatezfwtwuhwd2chandfileticket=eWRPYkJvwBA%3Dandportalid=0andtimestamp=1641128815294 (accessed 8 June 2022).
  • Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence 1(9): 389–399.
  • John NA (2011) The diffusion of the internet to Israel: the first 10 years. Israel Affairs 17(3): 327–340.
  • John NA, Peters B (2017) Why privacy keeps dying: the trouble with talk about the end of privacy. Information, Communication & Society 20(2): 284–298.
  • Kendall L (2011) "White and nerdy": computers, race, and the nerd stereotype. The Journal of Popular Culture 44(3): 505–524.
  • Kotliar DM (2020a) The return of the social: algorithmic identity in an age of symbolic demise. New Media & Society 22(7): 1152–1167.
  • Kotliar DM (2020b) Data orientalism: on the algorithmic construction of the non-Western other. Theory & Society 49(5): 919–939.
  • Kotliar DM (n.d.) On Cultural Differences, Ethics of Difference, and Algorithms as Differentiating Machines.
  • Latour B (1992) Where are the missing masses? The sociology of a few mundane artifacts. In: Bijker WE, Law J (eds) Shaping Technology/Building Society: Studies in Sociotechnical Change. Cambridge, MA: MIT Press, pp. 225–258.
  • Layton E (1986) The Revolt of the Engineers: Social Responsibility and the American Engineering Profession. Baltimore, MD: Johns Hopkins University Press.
  • Lemieux C (2014) The moral idealism of ordinary people as a sociological challenge: reflections on the French reception of Luc Boltanski and Laurent Thévenot's On Justification. In: Susen S, Turner BS (eds) The Spirit of Luc Boltanski: Essays on the "Pragmatic Sociology of Critique." London and New York: Anthem Press, pp. 153–170.
  • Lewis J (2014) The case for regulating fully autonomous weapons. The Yale Law Journal 124(4): 1309–1325.
  • Meade EM (1996) The commodification of values. In: May L, Kohn J (eds) Hannah Arendt: Twenty Years Later. Cambridge, MA: MIT Press, pp. 107–127.
  • Metcalf J, Moss E, boyd D (2019) Owning ethics: corporate logics, Silicon Valley, and the institutionalization of ethics. Social Research: An International Quarterly 82(2): 449–476.
  • Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1(11): 501–507.
  • Naur P (1974) Concise Survey of Computer Methods. Lund: Studentlitteratur.
  • Noble SU (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
  • Orr W, Davis JL (2020) Attributions of ethical responsibility by artificial intelligence practitioners. Information, Communication & Society 23(5): 719–735.
  • Patil DJ (2011) Building Data Science Teams. Sebastopol, CA: O'Reilly Media.
  • Patil DJ (2018) A code of ethics for data science. Medium, 1 February. Available at: https://medium.com/@dpatil/a-code-of-ethics-for-data-science-cda27d1fac1
  • Phan T, Goldenfein J, Mann M, et al. (2021) Economies of virtue: the circulation of "ethics" in big tech. Science as Culture 31(1): 121–135.
  • Phillips W (2018) The oxygen of amplification. Data & Society 22: 1–128.
  • Ribak R (2019) Translating privacy: developer cultures in the global world of practice. Information, Communication & Society 22(6): 838–853.
  • Ribak R, Turow J (2003) Internet power and social context: a globalization approach to web privacy concerns. Journal of Broadcasting and Electronic Media 47(3): 328–349.
  • Ribes D (2019) STS, meet data science, once again. Science, Technology, & Human Values 44(3): 514–539.
  • Rouvroy A (2013) The end(s) of critique: data-behaviourism vs. due-process. In: Hildebrandt M, de Vries K (eds) Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology. London and New York: Routledge, pp. 143–169.
  • Ryan M, Christodoulou E, Antoniou J, et al. (2022) An AI ethics 'David and Goliath': value conflicts between large tech companies and their employees. AI & Society. DOI: 10.1007/s00146-022-01430-1.
  • Scheuerman MK, Hanna A, Denton E (2021) Do datasets have politics? Disciplinary values in computer vision dataset development. Proceedings of the ACM on Human-Computer Interaction 5: 1–37.
  • Schwarz O (2013) Dead honest judgments: emotional expression, sonic styles, and evaluating sounds of mourning in late modernity. American Journal of Cultural Sociology 1(2): 153–185.
  • Scott RW (2008) Lords of the dance: professionals as institutional agents. Organization Studies 29(2): 219–238.
  • Seaver N (2017) Algorithms as culture: some tactics for the ethnography of algorithmic systems. Big Data & Society 4(2): 1–12.
  • Senor D, Singer S (2009) Start-Up Nation: The Story of Israel's Economic Miracle. New York: Twelve, Hachette Book Group.
  • Shane S, Wakabayashi D (2018) 'The business of war': Google employees protest work for the Pentagon. The New York Times, 4 April. Available at: https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html (accessed 8 June 2022).
  • Startup Nation Finder (2022) Israeli innovation network. Available at: https://finder.startupnationcentral.org/ (accessed 8 June 2022).
  • Stitch (2016) The state of data science. Available at: https://www.stitchdata.com/resources/the-state-of-data-science/ (accessed 8 June 2022).
  • Swed O, Butler JS (2015) Military capital in the Israeli hi-tech industry. Armed Forces & Society 41(1): 123–141.
  • Swidler A (1986) Culture in action: symbols and strategies. American Sociological Review 51(2): 273–286.
  • Takhteyev Y (2012) Coding Places: Software Practice in a South American City. Cambridge, MA: MIT Press.
  • Taylor L, Dencik L (2020) Constructing commercial data ethics. Technology and Regulation 2020: 1–10.
  • Tufekci Z (2014) Engineering the public: big data, surveillance and computational politics. First Monday 19(7).
  • Van Dijck J (2014) Datafication, dataism and dataveillance: big data between scientific paradigm and ideology. Surveillance & Society 12(2): 197–208.
  • Van Maanen J, Barley SR (1984) Occupational communities: culture and control in organizations. Research in Organizational Behaviour 6: 287–365.
  • Veale M, Van Kleek M, Binns R (2018) Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In: The 2018 CHI conference on human factors in computing systems, Montreal, QC, Canada, 21–26 April, pp. 1–14. New York: Association for Computing Machinery.
  • Vydra S, Klievink B (2019) Techno-optimism and policy-pessimism in the public sector big data debate. Government Information Quarterly 36(4): 1–10.
  • Weber M (2003) The Protestant Ethic and the Spirit of Capitalism. Mineola, NY: Dover Publications.
  • Whalley P (1986) The Social Production of Technical Work: The Case of British Engineers. London: Palgrave Macmillan.
  • Woolley SC, Howard P (2017) Computational propaganda worldwide: executive summary. Working Paper 2017.11, Oxford: Project on Computational Propaganda.
  • Wyatt S (2007) Technological determinism is dead: long live technological determinism. In: Felt U, Fouché R, Miller CA, et al. (eds) The Handbook of Science and Technology Studies. Cambridge, MA: MIT Press, pp. 165–180.
  • Xu W (2019) Toward human-centered AI: a perspective from human-computer interaction. Interactions 26(4): 42–46.
  • Yates SE, Carmi E, Lockley E, et al. (2021) Me and my big data: understanding citizens' data literacies. Report, University of Liverpool, London, 14 September.
  • Zuboff S (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.