Abstract
While research has highlighted the ties between algorithms and culture, this article focuses on how cross-cultural encounters shape developers' perceptions of their algorithmic work. I ask: How do cultural transitions and intercultural encounters influence people's perceptions of their algorithms? How do key issues regarding algorithmic production get translated and reinterpreted? Based on 50 semi-structured interviews with Israelis who immigrated to Silicon Valley, I show that their interpretation of the cultural differences between Israeli and Silicon Valley cultures—and their culturally specific logic of difference—shapes how they perceive their algorithmic work and its ethics.
Introduction
This article deals with differences—cultural differences, people's perceptions of difference, and the differences that algorithms make.
I ask the following questions: What happens when algorithmic practitioners move between contexts and cultures? How do cultural transitions and intercultural encounters influence people's perceptions of their algorithms? And how do key issues regarding algorithmic production get translated (Ribak, 2019) and reinterpreted? Based on 50 semi-structured interviews with Israelis who immigrated to Silicon Valley, I show that Israelis' interpretation of the cultural differences between their culture and Silicon Valley's culture, and their culturally specific logic of difference, inform how they understand their algorithmic work and its ethics.
Literature review
Difference machines and their consequences
While algorithms are not necessarily computational (Gillespie, 2016), and while the datasets they run on can be small (boyd and Crawford, 2012), the algorithms that exert the most power over us are data-intensive algorithms, including deep learning, machine learning (ML), and so-called artificial intelligence (AI). Such algorithms automatically identify patterns, groups, or "clusters" in vast databases, making them highly effective social-sorting mechanisms (Lyon, 2003). Accordingly, such algorithms have become ubiquitous in various social fields, including court systems, welfare offices (Eubanks, 2018), policing (Brayne, 2017), border control (Madianou, 2024), and more. Data-intensive algorithms increasingly determine how users are profiled and targeted, who sees which social media post or ad, who is banned from which platform, who is hired, who receives a loan, and more—all based on the extraction of "user data." That is, "the algorithmic gaze" (Kotliar, 2020b) creates an allegedly personalized, hyper-individualized reality (Lake, 2017)—one replete with differences in how we are "seen" and treated by algorithms. Thus, this article defines data-intensive algorithms as difference machines: technologies that see, sort, and treat people differently.
Scholars have repeatedly demonstrated how algorithms perpetuate discrimination, inequality, and prejudice. Research has shown that algorithms can be racist (Noble, 2018), sexist (Wachter-Boettcher, 2017), and ethnocentric (Kotliar, 2020b). Scholars have accordingly highlighted the issue of algorithmic bias—when algorithms wrongly and harmfully differentiate between groups (Hargittai, 2020), and others have underlined the social harms that stem from algorithmic profiling (Vries, 2010), micro-targeting (Tufekci, 2014), and other data-intensive differentiating technologies. Such algorithmic harms have garnered significant attention in both academic and public spheres, as organizations, corporations, and institutions have been developing ethical principles to guide those working with AI (Jobin et al., 2019) and encourage the creation of less biased, more inclusive, more equal, and fairer algorithms (Wachter et al., 2017). Alongside these initiatives, researchers have examined how algorithmic practitioners perceive, understand, and construct the ethics of their algorithms, highlighting the gap between principles and practice on the production floor (Ali et al., 2023; Avnoon et al., 2023; Ibáñez and Olmeda, 2021; Orr and Davis, 2020).
These works stem from the assumption that algorithms are far from neutral techno-deterministic artifacts (boyd and Crawford, 2012). Instead, they are created within specific sociocultural contexts (Seaver, 2017) and are sold, bought, and used within particular sociocultural surroundings (Kotliar, 2021). Therefore, to fully grasp the impact of algorithms on society, we must explore the sociocultural contexts in which they are formed. But what happens when algorithmic practitioners move between contexts and cultures? How do cultural transitions and intercultural encounters influence people's perceptions of their algorithms? This article delves into these questions by focusing on Israeli tech workers in Silicon Valley.
Silicon Valley
Located in the southern part of San Francisco Bay, Silicon Valley has been at the forefront of high-tech advancements for decades, and it occupies a central position in contemporary debates surrounding technological innovation and its implications. Barbrook and Cameron famously traced the origins of Silicon Valley to the "Californian Ideology"—a fusion of hippies' free-spiritedness and yuppies' libertarian economic entrepreneurship that promoted a techno-optimistic and largely myopic belief that new information technologies could create a new techno-political utopia (Barbrook and Cameron, 1996). The Californian Ideology is a starkly liberal one, rhetorically championing "universalist, rational, and progressive ideals, such as democracy, tolerance, self-fulfillment, and social justice" (Barbrook and Cameron, 1996: 3), mixing market economics with the freedoms of hippie artisanship. Fred Turner has similarly shown how the San Francisco Bay Area counterculture, with its ethos of spiritual communion, merged with Cold War-era military-industrial culture to create a unique combination of conservative economic principles and revolutionary radicalism, libertarian politics, and digital utopianism (Turner, 2006). O'Mara emphasized the role of the American government in developing Silicon Valley's technological ideology (O'Mara, 2019), and Hepp and colleagues discussed its afterlife and circulation (Hepp et al., 2023). Alexandre similarly described the "culture of Silicon Valley" as one that valorizes individual initiative, risk-taking, the de-dramatization of failure, and creativity (Alexandre, 2022). According to him, this culture is also characterized by an "open mindset" and tolerance toward diverse lifestyles and ways of thinking, such as LGBTQIA+ rights, libertarianism, and transhumanism (Alexandre, 2022: 14).
Thus, Silicon Valley has long been regarded as a space where American innovation and technological advancement thrive, predominantly driven by white "tech bros" (Chang, 2018), and as a site of convergence of various American sociocultural streams (Barbrook and Cameron, 1996; O'Mara, 2019; Turner, 2006). At the same time, for many, Silicon Valley "was never a place" but a promise, a mirage, used "to conceal societal crises and to distract disaffected constituents from true reform" (Schrock, 2020). Such a unified, abstract view of Silicon Valley is prevalent in much of the public and scholarly discourse about technological production today. Nevertheless, while women, African Americans, and Native Americans are still painfully underrepresented in this Californian space (Ruffin, 2014), Silicon Valley is a highly diverse space, encompassing a variety of ethnic, cultural, and national groups. In fact, 41% of the approximately 3 million Silicon Valley inhabitants were born outside the United States, and only half of those are American citizens. Moreover, in the computer and technology sector, 69% of employees are foreign-born, and more than half of the Valley's households speak a language other than English at home (Silicon Valley Institute for Regional Studies, 2024). Indeed, Silicon Valley attracts entrepreneurs, technologists, technical workers, and service employees from various countries, including China, India, Korea, Taiwan, Vietnam, Canada, France, South Africa, the United Kingdom, Israel, and more. Thus, while the public image of Silicon Valley is that of a homogeneous, white space, and while the history, ideology, and ethos of this space are primarily seen as products of American sociocultural history, it is, in fact, an extremely diverse space, a space replete with differences.
Indeed, as early as 2000, Saxenian showed how Silicon Valley's Indian, Chinese, and Israeli communities created opportunities for their members while building entrepreneurial bridges back to their countries of origin, translating "brain drain" into "brain circulation" (Saxenian, 2007: 7, 63; Chakravartty, 2006). Moreover, the Culture@Silicon Valley Project has been providing rich ethnographic insights into the lived experiences of individuals from various communities in Silicon Valley (English-Lueck, 2017), and more recently, Fred Turner's book with photographer Mary Beth Meehan presented a striking image of the social disparities in the valley (Meehan and Turner, 2021). However, the ties between this cultural diversity and technological production in this space, and the way cultural, ethnic, and national differences inform the perceptions of algorithms and their makers, remain largely underexplored.
Israelis in Silicon Valley
Israelis’ presence in Silicon Valley can be traced back to the 1970s, with Israeli students who came to the United States for their graduate education amid a turbulent time back home (Saxenian, 2007: 105). Concurrently, the establishment of American corporate offices in Israel served as a crucial bridge between Tel Aviv and Palo Alto, leading to the relocation of thousands of employees and their families to Silicon Valley (Saxenian, 2007). As Gold explained, “the simultaneous and transnational development of infotech industries in Israel and by Israeli emigrants in Silicon Valley provided benefits to the growth and expansion of both” (Gold, 2018: 136).
Estimates of the number of Israelis in the Valley range from 50,000 to 100,000—1.7–3% of Silicon Valley's population. Most Israeli tech workers are educated male military veterans (Gold, 2018: 135), their residency status in the United States varies, and some fly back and forth between Tel Aviv and San Francisco. Israelis live across the San Francisco Bay Area, with a prominent concentration in Sunnyvale, one of Silicon Valley's suburbs. My interviewees often likened Sunnyvale to a kibbutz, as Israelis tend to live near each other and send their children to the same schools, and Hebrew is frequently heard on the streets and in cafés. Moreover, Silicon Valley boasts several places to enjoy traditional Israeli cuisine, and local supermarkets offer emblematic Israeli products. In short, Israelis in the valley have myriad ways to maintain their identities as "Israelis who live in the US" rather than as "Americans" (Gold, 2018: 134).
While many Israelis in Silicon Valley work at large corporations, others start, manage, or work for startup companies. This community is characterized not only by its high level of cohesion (Gold, 2018: 134) but also by its significant technological and economic achievements: Israelis have founded a significant number of Silicon Valley unicorns (companies valued at over $1 billion), far exceeding their share of the local population (Strebulaev, 2022).
Drori and colleagues have argued that transnational entrepreneurs “are not simply passive adherents to institutional constraints, but actively mold them to suit their own unique initiatives” (Drori et al., 2009: 1003). Värlander and colleagues have shown that professional practices are transferred across locations, particularly in global organizations (Värlander et al., 2016), and Gold suggested that regular travel among infotech migrants may contribute to the ongoing exchange of ideas between technological communities (Gold, 2018: 145). But which ideas? How do these ideas relate to the technology produced by these migrants? And how do cultural transitions and intercultural encounters influence people’s perceptions of their algorithms? This article delves into these questions by offering an empirical account of Israelis in Silicon Valley, their ideology, and how it informs their algorithmic imaginary—namely, how they see and interpret contemporary algorithmic production, and their own algorithmic products (Bucher, 2019).
Methodology
This article is based on 50 semi-structured interviews with Israelis who moved to Silicon Valley and work for data-intensive companies. The interviews were conducted between October 2019 and August 2021. Interviewees were contacted through Facebook and LinkedIn posts and by using the snowball method (Noy, 2008). Due to the COVID-19 pandemic, most interviews took place over Zoom. I additionally conducted 20 participant observations at various events—before and after the COVID-19 lockdowns and over Zoom—and held dozens of informal conversations with Israelis in the valley. The sample included 64% men (n = 32) and 36% women (n = 18). About 42% of the interviewees worked for big corporations, a similar percentage worked for startup companies, and 16% worked as consultants, investors, designers, and other independent professionals (n = 8). Of the corporate employees, 47% (n = 10) worked for GAMAM companies (Google, Amazon, Microsoft, Apple, or Meta).
Aiming to explore the ties between algorithms, culture, and cross-cultural encounters, I use what I term Algorithmic Life Story Interviews—a structured yet flexible methodology inspired by established life story approaches (Atkinson, 1998). This framework centers on the personal narratives of algorithmic practitioners, highlighting how individual trajectories influence the ways people interpret, imagine, and design algorithmic systems. It emphasizes the reciprocal entanglement of biography, professional identity, and sociotechnical creation, offering a more holistic understanding of the "human hands" (Seaver, 2013) behind computational infrastructures.
In this article, participants' algorithmic life stories included reflections on their upbringing in Israel, military service, career paths, immigration to California, and work in Silicon Valley. They also shared their views on broader questions of algorithmic production—such as AI's social impacts, AI ethics, or legal and normative barriers to algorithmic development—and how these issues relate to their own algorithmic work. Interviews typically lasted one hour, were transcribed by the author, and were logged into MaxQDA 2022 for thematic clustering and analysis (Braun and Clarke, 2006). I read and reread the interview data, identified recurrent themes and major concepts, and clustered similar segments together. Finally, I selected and translated prominent quotes representing each theme and analyzed them in light of the above-mentioned theory and research questions.
The following is divided into four parts: first, I focus on how Israelis in Silicon Valley perceive the cultural differences between their own culture and the one in Silicon Valley; I then move on to describe Israelis' culturally specific logic of difference; third, I show how this logic is reflected in how they perceive their algorithmic work and its ethics; and finally, I describe how immigration can also smooth out these differences.
Findings
Cultural differences
In August 2021, Noam Bardin, the Israeli CEO and co-founder of Waze (acquired by Google in 2013), published a public letter explaining why he was leaving Google.
Under the headline “Transparency and Directness,” he wrote:
I have always been a pretty passionate guy, especially at Waze. After the acquisition, I was invited to speak on many different Google panels and events and very quickly I began racking up my HR complaints. I used a four-letter word, my analogy was not PC, my language was not PG . . .. I value transparency and feel that people should bring themselves to work but that also means [people should have] a certain tolerance of people not saying something exactly as you would like them to or believing something you don’t. That tolerance is gone at Google and “words” > “content” is the new Silicon Valley mantra of political correctness. You can say terrible things as long as your pronouns are correct or can say super important things but use one wrong word and it’s off to HR for you.
Alongside other reasons for leaving Google, Bardin describes an incongruity between his behavior and the prevailing norms at the Mountain View giant. He explains that his "passionate," direct, and unapologetic speech is deemed inappropriate at Google and is often understood as problematic or even harassing behavior ("I began racking up my HR complaints"). He ties this view to what he sees as a "new Silicon Valley mantra of political correctness" that emphasizes "words over content." Hence, Bardin characterizes contemporary Silicon Valley culture as one that leans toward oversensitivity and duplicity—one that valorizes formal markers of identity ("You can say terrible things as long as your pronouns are correct") over essence and urgency.
My interviewees similarly highlighted what they saw as the cultural differences between their own Israeli culture and the Californian culture to which they immigrated. Like Bardin, they often described that culture as one that tends toward political correctness, self-righteousness, and hypersensitivity, contrasting it with what they saw as their own direct, daring, informal, and sociable attitude. As Sivan, a San Jose-based engineer at a GAMAM corporation, said:
People often criticize me for not saying the right thing at the right time. [They say] things like “you embarrass people with your directness.” I embarrassed my VP on multiple occasions, but I wonder whether [it is] because of my Israeliness or because of the American world of political correctness. Interviewer: How do you see this world? Well, it’s always so milquetoast! . . . It is as if, in the American world, because of all their inclusivity and political correctness, you must phrase things very delicately to avoid offending anyone. The Israeli style is much more direct, often verging on verbal violence. Israelis, as you know, can call each other “idiot” in the middle of a meeting with no hesitation. This would never happen with Americans.
Like many of my interviewees, Sivan expresses frustration regarding the differences between what she sees as her Israeli communication style and the American or Californian one. She specifically highlights the differences between her own “directness” and what she calls the “American political correctness,” criticizing the American style as being overly passive (“milquetoast”), sugarcoating, and insincere. The directness Sivan describes echoes what Tamar Katriel famously termed Dugri speech—a culture-specific Israeli way of speaking that privileges directness and truthfulness (Katriel, 1986; 2004). Nevertheless, in this case, Sivan not only distinguishes between an unrestrained, aggressive communication style and an overly cautious one, but she also ties Americans’ “milquetoast,” passive style to their emphasis on political correctness and inclusivity—namely, to an American discursive emphasis on treating people equally and fairly (Hughes, 2011). By contrast, according to her, Israelis tread much less lightly, can easily be offensive, and rarely consider how inclusive or fair their messages may seem.
Orly, a project manager at a large American corporation, similarly recounted:
I often find myself sitting with the guys [at work] in different situations where I’m dying to tell a joke or make a funny remark. And then I say [to myself]: it’s inappropriate. I just can’t. Because no matter how I’d [say it, it can never be] like in Israel where we used to sit at lunch, laugh, tell jokes, and form connections. [Here,] it will either sound racist, or I might offend someone.
Humor is often a rocky terrain for immigrants, as jokes' punch lines get lost in translation (Madziva et al., 2016). However, Orly's frustration seems to stem from what she sees as the excess sensitivity that characterizes the Californian culture and its tendency to see any statement as potentially "racist," "offensive," or even harassing. That is, Orly is not only preoccupied with the embarrassment that might stem from not getting her translated jokes across but also with the fact that her Israeli humor might cross local, culture-specific normative boundaries. By that, she, too, highlights the Californian emphasis on diversity and inclusion, particularly around issues of race, making clear that in Israel, such things are seen much more leniently.
These findings echo previous research on Israelis' communication styles (Kaneh-Shalit, 2017; Katriel, 1986; 2004), particularly research about Israelis in organizations (Fraiberg, 2017; Ravid et al., 2010; Zaidman and Brock, 2009). Notably, more than two decades ago, Shamir and Melnik (2002) argued that Israelis in Silicon Valley perceived their American colleagues as compartmentalized and bounded while highlighting their own tendency toward boundary crossing of different kinds. Others have similarly highlighted Israeli techies' alleged tendency toward improvisation, informality, assertiveness, and bluntness (Gold, 2018; Hickson and Pugh, 2014; Saxenian, 2000: 106; Senor and Singer, 2009; Zaidman and Malach-Pines, 2014). Nevertheless, as we saw above, Israelis in contemporary Silicon Valley seem to specifically contrast this style with what they see as a salient cultural trope in contemporary Californian culture—a heightened emphasis on diversity, equity, and inclusion. This dominant discourse, commonly known as DEI, is highly prevalent across American employers, universities, and large corporations (Luhr, 2023; Nader, 2018), and it somewhat echoes the Californian Ideology's tendency toward liberalism, tolerance, and social justice (Barbrook and Cameron, 1996: 3). As we will see below, Israelis' emphasis on DEI discourses goes hand in hand with their own culturally specific logic of difference.
Logic of difference
When asked about his work for a medium-sized American corporation, Ori, a San Francisco–based engineer, said:
Back when I was working there, they held company-wide meetings in which they talked about the quotas they aim to have to promise a [fairer] representation [of various social groups in the company]. In 2019, they said they aim to reach [a rate of] 20% women, 10% Hispanics, and 10% LGBTQ, both in the company as a whole, in engineering, and in management. Many employees care deeply about the kind of company they work for and aim to work for a company that “makes the world a better place” and blah blah blah. Israelis? They don’t give a fuck.
Ori explicitly refers to Silicon Valley corporations' DEI practices, namely, their declarative attempts to make their companies more diverse, inclusive, and equal (Nader, 2018). While these corporations' actions around DEI are often quite limited, particularly for women, African Americans, and Native Americans (Luhr, 2023), such discourses are highly prevalent across Silicon Valley. Ori rejects these discourses, mocking people who seek such qualities in their employers ("blah blah blah"), and bluntly asserts that Israelis are uninterested in such questions. Hence, in line with the previous section, Ori dismisses the boundary work (Lamont and Molnár, 2002) offered by DEI discourses, seeing them as oversensitive and superfluous. Ori also implicitly refers to Silicon Valley's hippie ethos (and rhetoric) with its utopian, good-seeking visions ("a company that makes the world a better place") (Hoffmann et al., 2018), insinuating that Israelis are unimpressed by such naïve ideals.
Karin, a biotech employee with a Ph.D. in computational biology, reminisced about studying at a Californian university:
Karin: I had a set of samples with half black and half white women [who experienced] premature births, which is usually much more common among black women . . .. In my article, I offered a clear comparison between the two groups, but it got taken down. Interviewer: Why? Karin: My mentor removed it, claiming it could be seen as a racist issue. And for me, as a scientist, it was tough. I tried extremely hard to persuade her, but she didn’t budge. . . . It was an analysis of what we found among Black women compared to what we found among White women with some biological explanations, and she just threw it out of the article. . . . It pissed me off because it could have been a much stronger article. And this could never have happened in Israel. Not in a million years. People here [in Silicon Valley] have this excessive sensitivity to this issue that sometimes knows no limit. We defined everything there in a clear biological way; there was no [subjective] interpretation of any kind. It was all . . . biological. My Israeli instructors would never have dismissed this.
Karin describes this incident as frustratingly encapsulating the tension between what she perceives as scientific objectivity ("It was all . . . biological") and a Californian oversensitivity toward human categories of difference. She sees her mentor's insistence on putting principles of racial equity before scientific objectivity as an outrageously subjective understanding of humans that bluntly interferes with her own objectivist one (Brubaker, 2015: 48). She accordingly argues that this incident could never have happened in Israel, thus contrasting these two cultures' logics of difference.
Lilach, an investor and CEO of a Palo Alto-based startup, said:
Israelis struggle with all the boundaries people put up here, all sorts of sensitivities around identity—say, the fact that you have to be careful not to differentiate between white people and black people, between women and men. So many Israelis [react to this and say:] “Who gives a fuck.”
Lilach offers a bird's-eye view of how Israelis see the Californian logic of difference. According to her, Israelis struggle with Californian identity politics and its ensuing boundary work (Lamont and Molnár, 2002) and are often perplexed by what they see as a wrongful American tiptoeing around human difference.
Thus, through the lens of their algorithmic life stories, we see that my interviewees described their own logic of difference as standing in stark contrast to the Californian emphasis on diversity, equity, and inclusion.
Difference machines and their ethics
As mentioned above, contemporary algorithms often revolve around the creation of difference—in how users are “seen,” “understood,” and profiled by such algorithms, in the types of ads, posts, or products different people see, in the types of nudges they get toward consumption or political participation, and more. This functionality has raised considerable alarm around algorithmic bias, profiling, targeting, and various other algorithmic-differentiating practices. In this part, I argue that Israelis’ logic of difference is reflected in how they perceive their algorithmic production.
For example, Shira, an information security engineer at a large corporation, explained:
In Israel, . . . it is ingrained in us from a very young age, from our military [service], that ensuring the continuation of our [nation] is an existential need. And so, [we are not working as information security experts] to help a corporation avoid some five-million-dollar ransomware. No. [Our motivation] is existential at its core. Think of Stuxnet, for example, the [Israeli-American computer worm that allegedly] fucked up the Iranian [nuclear] reactor, right? It is an existential necessity, so the difference between the two [cultures] is very, very, very significant . . .. When you have this existential need to protect yourself and your country, you don’t just do [your job because] you must. It becomes part of your personality. . . . See, I live in Silicon Valley. There are no missiles here. There are no enemies, no suicide bombers. But [as an Israeli], when I’m in a restaurant, I always sit facing the door! I need to see who comes in. So, it’s an existential need that amounts to much more than just doing your job. [And so,] in Israel, we do racial profiling because we have to do racial profiling! And it’s not just acceptable. It’s a fundamental need; . . .. Just look at all the questioning at Ben Gurion airport [profiling the coming and going]. Similarly, racial profiling plays a vital role in information security because you look into [online] threats and threat actors. But here, [in Silicon Valley], it is strictly forbidden to profile people. I don’t know if it’s specifically in the Bay Area with all its liberalism or in [the US] in general, but here . . . [you have to] really tip-toe around this subject. People here are very afraid to offend, afraid of racism, or of anything remotely close.
Shira ties her professional identity as an information security engineer to her Israeli upbringing, particularly to Israel’s mandatory military service. She accordingly conjures up fundamental Jewish-Israeli tropes that link the need for the continuation of the Jewish nation (and state) with discourses of securitization and militarization (Lomsky-Feder and Ben-Ari, 1999). Such tropes implicitly refer to deeply-rooted Jewish-Israeli cultural traumas (Alexander, 2004)—including the Holocaust, Israel’s wars with its neighbors, and terrorist attacks—presenting information security as an existential answer to such existential threats. By describing her preoccupation with missiles and suicide bombers in the midst of peaceful Silicon Valley, Shira emphasizes how quotidian and distressingly visceral these threatening tropes can be, but also how Silicon Valley’s seemingly neutral space can be seen and experienced through culture-specific eyes. Accordingly, while Shira’s current job revolves around defending a big platform against malicious but not-life-threatening online actors, she gives the example of Stuxnet—an aggressive cyber weapon with vast geopolitical consequences. According to her, such securitized tropes clearly differentiate between Israeli techies and non-Israeli ones, but they also have to do with how security experts like herself see their work.
Shira explains that for Israelis, profiling, including racial profiling, is essential for fighting against such threats. She curiously mentions Israel’s Ben Gurion Airport, notorious for its discriminatory treatment of Palestinians and other darker-skinned visitors (Margalioth et al., 2010), as a positive example that attests to the power of profiling. Accordingly, Shira argues that online racial profiling is essential for dealing with online threat actors like the ones she deals with, and laments that this differentiating principle is deemed unacceptable in the Bay Area culture. Like the interviewees above, Shira attributes these injunctions to Californian or American liberalism and to what she sees as typical Californian oversensitivity and overcautiousness. Liberalism is, of course, one of the pillars of the Californian Ideology (Barbrook and Cameron, 1996), and Shira underlines it as something that hinders the power of what she sees as an effective and legitimate tool—algorithmic profiling.
Gilad, who works in Trust and Safety for a GAMAM corporation, similarly said:
When you look at abuse [on our platform], you can undoubtedly say that [people] from specific countries—country A, country B, country C are the biggest [abusers]. The abuse comes from them, or at least, most of it does. Now, we get to the point where we say, ok, so write down an [algorithmic] rule that IF [the user] comes from country A, B, OR C, THEN we do X, Y, Z. But engineers here are more like introverts, they avoid friction, so there’s no chance they’ll do it. It will look bad, like we’re targeting or going after people or something. But it’s like, that’s where the malicious [behavior really] comes from! That’s not targeting!
Like Shira, Gilad laments that algorithmically profiling users based on their national identity is deemed problematic by his coworkers. His wish for a rule-based, social-sorting algorithm that would easily differentiate between users and minimize abuse gets complicated by his “introvert” coworkers and their fear of “friction.” Echoing the views expressed in the previous sections, Gilad explains that, in his view, there is nothing inherently problematic about such an algorithm. It might only seem so in the eyes of his passive Californian coworkers. That is, according to Gilad’s logic of difference, differentiating between people according to their identity is a legitimate, even necessary algorithmic practice.
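To make the logic of this disagreement concrete, the rule Gilad describes can be rendered as a minimal sketch (written here in Python, with hypothetical function names, placeholder country codes, and an invented threshold; it illustrates the kind of rule he recounts, not the corporation's actual code):

# A hypothetical sketch of the rule-based, social-sorting logic Gilad describes.
# Country codes, the threshold, and function names are placeholders,
# not the corporation's actual policy or code.
HIGH_ABUSE_COUNTRIES = {"A", "B", "C"}  # stand-ins for the countries Gilad anonymizes

def moderate_account(country: str, abuse_score: float) -> str:
    """Return a moderation decision for an account."""
    if country in HIGH_ABUSE_COUNTRIES:
        # The contested branch: sorting users by national origin alone.
        return "manual_review"
    if abuse_score > 0.8:
        # A behavior-based branch that does not rely on identity categories.
        return "manual_review"
    return "allow"

print(moderate_account("A", 0.1))   # flagged solely because of country of origin
print(moderate_account("D", 0.95))  # flagged because of observed behavior

Laid out this way, the disagreement becomes visible in a single branch: the first condition sorts accounts by an ascriptive identity category, which is precisely what Gilad's coworkers read as "targeting," while Gilad reads it as a straightforward response to where the abuse "really comes from."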
Udi, a former Israel Defense Forces officer who works in compliance for a GAMAM corporation, similarly referred to how Israelis see AI ethics:
AI ethics still largely revolves around ad integrity. . . . And in this regard, in Israelis' minds, there's a binary—there are Jews and Arabs, right? There's not an ounce of diversity. I mean, I don't even think that leftists like myself, like us, who are very enlightened and blah, blah, blah, care about diversity. Do you see what I'm saying? I think that here [in Silicon Valley], . . . it's much more of a loaded topic, and people are much more engaged with it, even smaller groups in the population. I mean, look at what's happening with Asians right now in America [a wave of anti-Asian hate crimes]; look at what happened after [the murder of] George Floyd [the 2020 Black Lives Matter demonstrations]. . . . It's like Americans say, "Excuse me? I'm Asian. I'm Sundar [Pichai], I'm the CEO of freaking Google, so non-English speakers or people who come from different backgrounds should not possibly be [algorithmically] biased against . . .." And that's what we need to solve, do you understand? This is the magnitude of ethics required to somehow enter these [algorithmic] models. And I think Israelis are less sensitive to it because it never really bothered them. Because, in Israel, racial bias is an inherent [part of culture], it's even inscribed in Israeli law—if you're Jewish, you can immigrate to Israel; if you're Arab, you can't. It is very deeply rooted.
Udi acknowledges that questions of algorithmic bias unavoidably deal with questions of identity, diversity, and inequality—questions of human difference. His blazing narrative goes back and forth—from cultural differences to algorithmic differences, from AI bias to racial bias, from discriminatory ads to discriminatory legislation. In the spirit of prevalent AI ethics guidelines, he highlights the contemporary challenge of incorporating equality and diversity into algorithms, but at the same time, he underlines Israelis' incapacity to incorporate such principles into their worldview. Udi accordingly argues that Israelis' indifference to AI ethics stems from the ethnic bias that abounds in Israeli culture, particularly from the inherent prioritization of one ethnic group (Jews) over others. Hence, according to Udi, Israelis' logic of difference, and their ensuing disregard for AI ethics, has deep cultural and even legal roots.
When asked about his view of tech ethics, Dvir, a San Francisco–based consultant, similarly said:
Americans care, or at least give lip service that they care about being ethical, while Israelis care less. Whoever grew up with a golden spoon in their mouth . . . and is hence always so careful not to offend anyone is different from someone who grew up in a society with buses exploding and with everyone personally knowing someone who got killed in action. Therefore, [Israelis would] say, “Listen, bro, let’s not make a fuss only because a few people gamble [online].” It’s not like we’re bombing their hospitals or anything, so let’s not make too big a deal out of it.
He later added:
Israelis are much rougher [than Americans]. We’ve been hardened in a much harsher environment. As a result, we are less sensitive, both in how we react to things and in how we generally behave. So, in our eyes, anything that ends with less than death is not serious harm. . . . In the US or California, they are at least trying. [But] people here grow up much more sensitive, so they can also allow themselves to be [more ethical], right? Their bar for what is considered legitimate can be much higher. They are much more aware of ethical questions both in terms of algorithms [and in other fields].
Like the previous interviewees and their algorithmic life stories, Dvir contrasts what he sees as Americans' heightened interest in tech ethics with Israelis' disregard for it. He explains this contrast by an alleged difference between Americans' carefree upbringing ("golden spoon in their mouth") and Israelis' violent one. These are almost caricature-like generalizations, but by evoking Israelis' close ties with fallen soldiers and deadly suicide bombings, Dvir turns to traumatic militarized cultural tropes to attest to Israelis' insensitivity and, hence, their disinterest in algorithmic ethics. These tropes highlight Israelis' sense of victimhood and traumatic suffering (Fassin, 2008), but at the same time, Dvir also implicitly critiques Israelis' indifference to their own aggression ("It's not like we're bombing their hospitals or anything"). Thus, like Udi and Shira, Dvir argues that Israelis' views of algorithmic ethics have deep cultural roots, but while Udi focused on Israelis' logic of difference, Dvir highlighted Israelis' cultural traumas as explanations for their insensitivity. Generally, Dvir speaks about "Israelis" from a slightly distant perspective. Nevertheless, he interchangeably uses the third person ("Israelis are. . .") and the plural first-person pronoun ("We've been hardened. . ."). By that, he, too, expresses a liberal critique of his cultural group while signaling that he still carries with him their complex, war-ridden cultural baggage.
Smoothing out the differences
While many interviewees echoed the views described above, others explained that these views are not set in stone. After all, culture is inherently mutable, and immigration potentially entails processes of assimilation, enculturation, and socialization. The Israeli community’s response to the violent clashes of 2021 offers an excellent case in point.
In May 2021, following a deadly outbreak of violence between Israelis and Palestinians that included rocket attacks on Israeli cities, Israeli airstrikes in the Gaza Strip, and unprecedented ethnic clashes between Israeli Jews and Arabs in various Israeli cities, a few Silicon Valley Israelis organized an "Israeli-Palestinian protest" against the violence. A few dozen protestors, almost all Jewish Israelis, arrived at Palo Alto's Greer Park, bringing pre-made signs and a small sound system. The sun began to set, cold winds blew from the bay, and one of the organizers invited people to speak impromptu, explaining that "when we don't talk, bad things happen." Hila, a woman in her 40s, picked up the microphone and said the following:
Only after coming to the US have I had [work] relationships with Muslims, with Arabs. I now even work with an Arab that has family in Palestine. . . . It's only when I came here that I [managed to] see these people as equal, . . . that I was able to actually feel the pain on both sides. Because in Israel, you have to be in "Woman's Watch" [Machsom Watch—a radical, left-wing women's group] to actually meet the other side. [Back home,] I was a lefty Tel-Avivian like so many [others] who sat at cafés and at the beach and spoke nice words, but I never really hung out with people from the other side. So, I think that actually, the perspective that we got from [living] here just makes it so clear. We are all human beings, and we just need to live with each other. We don't hate each other. There's nothing to hate about us.
Hila describes her logic of difference as a dominant-but-mutable cultural factor that has changed following her immigration to and work in Silicon Valley. According to her, the segregated life she had led in Israel prevented her from developing a more inclusive, and hence, more peace-seeking view of human difference. Thus, she retrospectively identifies the ethnic segregation prevalent in the Israeli economy (Yonay and Kraus, 2001), and specifically in Israeli tech (Darr, 2023), as factors that have circumscribed her ability to see and empathize with people from her outgroup. She underlines this message by stating that she is a "lefty Tel Avivian," namely, a liberal by Israeli standards, but that only after living in Silicon Valley did she manage to acknowledge "people from the other side."
While Waze's CEO, Noam Bardin, cited in the first section, described Silicon Valley's "mantra of political correctness" as one that privileges words over content, and while other interviewees echoed this message, Hila conversely described her "lefty" Tel-Avivian culture as one that revolves around speaking "nice words," insinuating that such liberal-sounding words are far from peaceful actions. Namely, for Hila, it is Israeli liberalism that falsely privileges words over content, not the Californian one. Moreover, while Hila's narrative mirrors Dvir's words on the ties between Israel's innate segregation and Israelis' logic of difference, she describes how living in a culturally diverse place like Silicon Valley (English-Lueck, 2017) gradually changed her worldview to one that more readily accepts diversity, inclusion, and equality. After all, immigration tends to highlight cultural differences, but it can also smooth them out.
Conclusion
With nearly 70% of its tech sector composed of foreign-born employees, Silicon Valley’s culture is one of immigrants. Nevertheless, the overwhelming diversity of this space is not only reflected in the lived experience of its inhabitants (English-Lueck, 2017), the local communities they build (Gold, 2018), or the technological and occupational bridges they build back to their home countries (Saxenian, 2007). Silicon Valley’s diversity also translates into diverse ethoses, taboos, and meaning systems—into various Californian ideologies.
Shedding light on one such ideology, this paper focused on Israelis in Silicon Valley. I showed that Israelis tend to highlight the differences between their culture and that of the valley, particularly contrasting what they saw as their direct, daring, informal, and sociable attitude (Shamir and Melnik, 2002) with Silicon Valley's alleged complacency, political correctness, and hypersensitivity. I argued that these perceived differences specifically revolve around the Northern Californian emphasis on diversity, equity, and inclusion (Luhr, 2023) and that my interviewees accordingly expressed a culturally specific logic of difference that clashes with this local ethos.
Moreover, describing data-intensive algorithms as difference machines, I showed that this logic of difference is reflected in how my interviewees perceive their algorithmic work and its ethics, from racial and national profiling to their disregard for AI ethics.
Nevertheless, I have also argued that as immigrants, my interviewees' logic of difference is mutable and can potentially change in light of their socialization into "Silicon Valley's culture." Indeed, as Alexandre argued, Silicon Valley's culture has a unifying effect on incoming techies, as people actively construct and sustain an "epistemological unity" around it (Alexandre, 2022: 26). However, as we have seen above, this culture is far from homogeneous. This Californian space houses various epistemologies, worldviews, and ideologies that also inform how techies see their algorithmic work. Moreover, Silicon Valley is in constant population flux, as techies often return to their home countries or move to other parts of the United States, only to be replaced by new immigrants (Saxenian, 2007). Hence, while the American-born Californian Ideology might be the dominant one in this region, it is continually diversified, challenged, and complemented by other ideologies from around the world. In other words, the contemporary Californian ideology is never strictly American, and never in the singular.
Nevertheless, is the Israeli ideology indeed so different from the Californian one? Israelis' apparent puzzlement in the face of diversity, equity, and inclusion may be far from hippie liberalism with its progressive ideals of tolerance, self-fulfillment, and social justice (Barbrook and Cameron, 1996: 3). However, the Californian Ideology was also described as a myopic "Jeffersonian democracy" that creates "dominance machines" (Barbrook and Cameron, 1996: 13). Moreover, like the Israeli technological imagination, Silicon Valley also has deep, American militaristic roots (O'Mara, 2019; Turner, 2006), illiberal roots (Lewis, 2025), and its ethos has been described as one that privileges "disruption over sustainability, sharing economies over union labor, personalized access over public health, data over meaning, and security over freedom" (Levina and Hasinoff, 2017: 491). Others have highlighted Silicon Valley's tendency toward techno-solutionism, dataism (Van Dijck, 2014), and post-racialism (Noble and Roberts, 2019). Still others have underlined Silicon Valley's disregard for algorithmic ethics (Metcalf et al., 2019) and the problematic implementation of DEI principles on the ground (Nader, 2018). This point is particularly resonant after the 2024 American presidential elections and the overwhelming embrace that prominent Silicon Valley figures, now colloquially known as "broligarchs" (Samuel, 2025), gave the new administration. This embrace famously included rescinding DEI programs (Bhuiyan and Kerr, 2025), turning off fact-checking and dis/misinformation moderation systems (Chow, 2025), calling for a more "masculine energy" in Silicon Valley organizations (Francombe, 2025), and more. Thus, ironically, the Israeli-Californian ideology might provide a more lucid, perhaps more genuine depiction of today's technological production in this space. Perhaps, as Bardin noted, the difference lies in words, not actions—in the discourses surrounding technological production, not their actual translation into algorithmic products.
This article draws on what I term Algorithmic Life Story Interviews—a structured yet flexible methodological approach that reveals how personal experiences and technological creations are deeply intertwined. Grounded in established life story methodologies (Atkinson, 1998) and the view of algorithms as culture (Seaver, 2017), it illustrates how individual paths and broader sociotechnical systems reciprocally shape one another. By interviewing software developers, data scientists, and other tech professionals, I focus on their formative experiences, career trajectories, and individual worldviews and guiding principles. Through narrative methods, participants explore how personal and national events, ethical considerations, and wider social contexts influence their view of, and work with, algorithms. This framework clarifies how personal history, professional identity, and technological design intersect—offering a more holistic understanding of the "human hands" (Seaver, 2013) behind today's computational systems. In the case before us, interviewees recounted how growing up in a war-torn, highly stratified home country, relocating to Silicon Valley, and adopting particular cultural norms inform their perspectives on human difference, their work culture, and ultimately their algorithmic imaginaries.
Focusing on the Israeli case, it is also interesting to note that directness, uninhibitedness, informality, and a militarized ethos are often mentioned as key factors behind Israel's success as the "Startup Nation" (Senor and Singer, 2009). Nevertheless, as we have seen above, while these characteristics may indeed promote entrepreneurial successes (Strebulaev, 2022), they also seem inherently incompatible with the creation of organized algorithmic ethics or with restraining controversial algorithmic-differentiating practices. Similarly, my findings suggest that the close ties between the Israeli military and the Israeli high-tech industry, and particularly Israelis' mandatory military service, not only allow Israeli engineers to develop new skills and construct new social networks (Swed and Butler, 2013: 125) but also shape how they understand their algorithmic work and algorithmic ethics. Israel's cultural traumas (Alexander, 2004) and the ensuing sense of victimization and militarization (Fassin, 2008) seem to play a similar role in Israelis' algorithmic imaginary (Bucher, 2019).
Finally, while scholars have shed light on the ties between culture and algorithmic production (Kotliar, 2020a; Seaver, 2017), this article highlights the ties between algorithms and cross-cultural encounters. Moreover, researchers have recently explored AI developers' attitudes toward algorithmic ethics, highlighting a fundamental disconnect between principles and practice (Ibáñez and Olmeda, 2021; Orr and Davis, 2020). Specifically, Avnoon et al. (2023) have argued that Israeli data scientists' disregard for such ethics originates from the techno-libertarian work culture of data science's parent profession—engineering. While engineering culture might indeed serve as a mediating factor in the socialization of these techies and the formation of their algorithmic imaginaries, this article adds a cultural facet to this discussion, showing that understanding developers' views of their algorithms and their ethics requires an examination of their deep cultural tropes, their movements across space, and the differences that they make.
Acknowledgements
I thank Fred Turner, Rivka Ribak, Asaf Darr, Ilan Talmud, Dana Zarhin, Roei Davidson, Ori Schwarz, Tair Karazi-Presler, Chen Bar-Itzhak, David Nieborg, and the other Rethinking Personalization workshop participants at TAU (organized by Alex Gekker and Tamar Ashuri) for their insightful comments on previous versions of this manuscript.
Data availability statement
This research is based on ethnographic and interview data, which, due to ethical concerns, cannot be made available.
Ethical approval
This research was approved by Stanford University’s IRB (Protocol 58643).
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
