Abstract
In this article, we make sense of financial algorithms as new objects of concern for organizational ethnography. We conceive of algorithms as ‘objects of ignorance’ that challenge the categories and methods of traditional ethnography. We investigate the organizational politics taking place within high-frequency trading – a sub-field of algorithmic trading where automated decision-making without human direction has reached a peak – and show that financial algorithms raise particular epistemic and methodological challenges for practitioners and ethnographers alike. Consequently, we develop a typology of possible interpretations of algorithms as ethnographic objects, accounting for their structural ignorance and shedding light on a continuum of the changing human-machine/trader-algorithm relation. To this end, we use the concepts of ‘quasi-object’ and ‘quasi-subject’ developed by Michel Serres, and make the point that in order to study financial algorithms ethnographically, we need to rethink the dynamic relationship they embody and acknowledge their constitutive heterogeneity.
Introduction
What kind of ethnographic experiences do algorithms accept, afford or warrant? This question seems pressing as algorithms have taken over a good number of practices such as searching, indexing and ordering information (Gillespie, 2014; Kockelman, 2013, 2017; McQuillan, 2015; Mittelstadt et al., 2016). Totaro and Ninno (2014) wrote about algorithms as key objects for interpreting modern rationality. They defined the algorithm as a recursive function that operates upon itself – a mathematical function defining our modern practices of classification and categorization. Neyland (2015) responded directly to this claim and raised a concern about the organization of algorithms, given the apparent power, agential capacity and control that algorithms command over our lives. He argues, in contrast to Totaro and Ninno (2014), that algorithms do have a relation to their environment: commenting on how contemporary algorithms interact with their users, he notes that there is ‘an algorithmic-organizational politics of inclusion and exclusion stemming from modelling preferred relations with users, designing these models into organizations’ algorithmic practices and the active response made by users’ (p. 122). In this article, we aim to both strengthen and develop this argument, with a view to shedding light on the epistemological issues raised by algorithms.
We do so by investigating the organizational politics taking place within contemporary financial markets, a social field that tends to adopt new technological developments at an early stage. For instance, in algorithmic trading and its specialized subtype of high-frequency trading (HFT), automated decision-making without human direction has reached a peak. HFT therefore serves as an exemplary case when discussing algorithms as ethnographic objects. In HFT firms, the provision, matching and execution of orders are automatically carried out by the algorithmic system (Arnoldi, 2016), and the algorithmic technology deployed by such firms relies heavily on temporal and epistemic asymmetries. HFT operates with specific trading strategies that depend on information advantage: that is, on knowing something or decoding signals before other market participants. High-frequency traders use high-tech devices to exploit information asymmetries occurring at microsecond intervals: by using sophisticated technological tools and computer algorithms to analyse multiple markets at once, they execute orders based on real-time market conditions. Proprietary trading strategies are carried out in order to move in and out of positions in fractions of a second. This development of HFT has caused heated debates in the public media, among regulators and financial institutions, about its consequences for financial stability and the future of regulation. One of the main concerns to date rests with the notion that black-boxed algorithms (Little-Gill, 2005; Pasquale, 2015; Rosen, 2009) have triggered an empirical shift from traders being in control of their financial devices to a situation in which those same devices are now more or less controlling the trader – subjecting him or her to their internal coded logic.
How can we make sense of such black-boxed algorithms, and what type of problems does the application of algorithmic objects raise for organizational ethnography? This question carries a variety of theoretical and methodological quandaries and dilemmas. First, and as is often the case within technological settings, the algorithmic context is generally deemed hidden from the human eye, as we cannot access its internal operations. In most cases, indeed, algorithms remain obscure objects – hard to access and yet among the most important ethnographic objects to study today (Introna, 2015; Medina, 2015; Mittelstadt et al., 2016). Algorithmic trading systems process information so fast and in such volumes that even the sharpest human attention is incapable of grasping them in their entirety (Leinweber, 2017). Algorithms seem to bend the will of the individual trader by reducing him or her to a ‘caring’ role focused on infrastructure maintenance – such as drilling through mountains and installing ‘dark fibre’ cables to increase the speed of algorithmic transmissions (MacKenzie et al., 2012). Thus, and in general, algorithms always produce more traces than human actors can follow (Borch, 2016), whether those actors occupy a front office position (for instance, as a trader), a monitoring position (for instance, as an information technology (IT) developer or a compliance officer) or the position of a participant-observer (for instance, as an ethnographer).
A second problem is related to the issue of opaqueness: the status of algorithmic objects remains unclear. Because of their hidden nature, algorithms are problematized by actors in very different, sometimes even contradictory ways. Approaches following classical sociology (Weber, 1978: 121) would understand algorithms as just another instrument of human agency (Reichertz, 2013). Here, algorithms are understood as objects of and for human subjects. Yet, as with other technological developments, the opposite understanding has also gained traction: one in which humans become the subjects of their technological objects. This is especially the case in recent discourses on machine learning, in which algorithms are seen as controlling social life as they see fit, defining what we are (or are not) able to access. In the past decades, such an asymmetric understanding of the relation between human subjects and algorithmic objects has been heavily criticized. Scholars in the tradition of Actor-Network Theory (ANT) and Science and Technology Studies (STS) have tried to overcome both the (inter)subjective reductionism of classical sociology on the one hand, and the notion of algorithmic control on the other. Instead, they have proposed a symmetric relation between subject and object where both, as potential actants, mutually affect, influence and shape each other – thereby abstracting this relation from the notions of subject and object (e.g. Latour, 1991, 1999). 1
In our opinion, none of these perspectives captures the constantly shifting relations (or roles) between objects and subjects that need to be accounted for when dealing with algorithmic objects ethnographically. Furthermore, and while we agree with authors in the tradition of ANT and STS that objects play an active role in the formation of agency and cognition, these approaches do not necessarily account for different types of technological objects and the relations they entail (Braun-Thürmann, 2002). It is our contention that, beyond the ‘displacement of action’ between humans and non-humans (Latour, 1996: 267), there are nevertheless differences in the types of relations a subject has to a key, to an automatic door or to a machine-learning algorithm. In addition, these relations are not necessarily symmetric. As we will show in this article, observing algorithms ethnographically entails encounters with all types of subject–object relations, including symmetric and asymmetric ones. Instead of providing a further attempt at delineating the (stationary) ontology of financial algorithms, we suggest laying out a map of possible types of subject–object relations within the field of HFT.
Building on Michel Serres’ (1982) idea of ‘quasi-objects’ and ‘quasi-subjects’, we provide a dynamic understanding of algorithmic practices, which we understand as an ‘onlife’ (Floridi, 2018) dance between the human and the algorithm, with the two becoming indistinguishable on occasion. The multiple types of relations between traders and algorithms might include (1) algorithms as tools mirroring the intention of the trader, acting as an extension of his or her will, and (2) algorithms as autonomous entities shaping the intentions and will of the trader. These two ways of making sense of algorithms are often used as pivotal thresholds in public discourse, or by regulators when crafting rules for market participants. However, we contend that these perspectives should be complicated: indeed, beyond the complexity that can be found in most technical environments, algorithms raise a specific issue.
Having extensively surveyed the field of algorithmic trading and HFT, we have learnt from our interviewees and the data we gathered (Beverungen and Lange, 2018; Borch et al., 2015; Borch and Lange, 2016; Lange, 2016; Lenglet, 2011, 2014; Lenglet and Mol, 2016; Lenglet and Riva, 2013; Seyfert, 2016, 2018) that, most of the time, neither the human trader nor the algorithmic machine is in full control: metaphorically, we can describe their relation as a ‘dance’ with one another (Pickering, 2010). A less festive term for this dance relates to the organizational politics we mentioned earlier: as we will see, the idea of a dance allows us to make sense of the exchange of roles and agencies within the relation between traders and algorithms. Thus, the focus of analysis has to shift to the zones in between the two heuristic thresholds represented by ‘human traders’ and ‘autonomous algorithms’, which are often used in discussions about algorithms. These zones involve a blurring and distribution of cognition and agency, of being active or passive, of assuming the role of a subject or of an object – where human traders and algorithms can also assume (3) the role of a ‘quasi-object’ or (4) that of a ‘quasi-subject’. In order to grasp this phenomenon, and building on our reading of Serres, we present a typology for studying the constant reconfiguration of the trader-algorithm relation. In doing so, we provide a series of remarks for developing interpretations of algorithms as ethnographic objects.
The rest of this article is structured as follows. We first discuss scholarly debates on the role of algorithms, with particular focus on the tools for making sense of human-machine/trader-algorithm relations. We then introduce Serres’ concept of the ‘quasi-object’/‘quasi-subject’, with a view to showing how such studies might be complemented. Equipped with these notions, the third section looks into the interpretation of algorithms as ethnographic objects, and provides a series of remarks aimed at furthering the methods to study such objects. Following this discussion, we provide four ideal-typical human-machine/trader-algorithm relations that can be encountered in ethnographic fieldwork. A conclusion discusses our contribution to the study of algorithms as new objects of concern for organizational ethnography, and characterizes the type of organizational politics that might be operating in such organizational structures.
Algorithms as ‘objects of ignorance’
Algorithms have been discussed extensively in recent years, to the point that there is now an abundant literature in critical algorithmic studies. 2
Early contributions on algorithms
The initial impetus for such developments can be found in early computer science and cybernetics, which studied the ontological quandaries prompted by the use of computers in society. For instance, Mirowski (2002) discussed the limits of computability for solving economics-related puzzles, with a view to organizing complexity through algorithmization. Similarly, scholars in the social studies of science and the philosophy of technology have long questioned the computing abilities of machines and the types of knowledge that are inaccessible to them (Collins, 1998; Dreyfus, 1972; Selinger et al., 2007). For instance, Collins (2000: 34) delineated the features of ‘machine-like action’ to discuss how specific human routines might be replaced by computer-based activities.
Within this body of early literature, some scholars shed light on epistemic issues arising from the development of cybernetics. Among these, Dupuy (2000: 79–80) elaborated on ‘second order cybernetics’, with a view to explaining the epistemic problems resulting from the mutual observation of actors. This problem is particularly relevant for explaining contemporary financial markets, where second-order observations involve algorithmic processes. Elaborating on Dupuy and Heinz von Foerster, Muniesa (2014: 73–74) recently pointed out that a second-order observation is abstract (or ‘trivial’), while a first-order observation is concrete (or ‘complex’). From this perspective, the introduction of algorithmic market technologies can be understood as a replacement of complex human face-to-face transactions by algorithmic transactions, ‘trivializing the procedures governing transactions’ (Muniesa, 2014: 75). This move towards meta-observation is certainly a very important feature of the automation of financial markets.
In addition to the increase of abstraction and meta-observation, a simultaneous increase of uncertainty can be noted – a crucial element for making sense of the opaqueness arising from the introduction of algorithms into financial markets, and related market events. Advancing von Foerster’s ideas, Luhmann defined second-order observations as mutual observations. Such a constellation is characterized by the introduction of uncertainty and ‘contingency’ (Luhmann, 1994: 74). While first-order observations are simply ‘givens’ to the observer, a second-order observation furnishes these observations with contingency – that is, with the possibility for being different. Luhmann (1995) complicates this model even further by pointing out that this awareness of contingency is true for both observers – both know from each other that the other might observe things differently – turning this situation into a relation of ‘double contingency’: a reflective process of mutual anticipations (p. 304).
This literature helps us elaborate on the epistemic status of algorithms: the relation between traders and algorithms, or between algorithms, might indeed be described as a situation of ‘double contingency’. Accounts drawing on algorithmic fieldwork suggest that algorithms should not be reduced to first-order observers (simple tools) but also be recognized as second-order observers (having an agency of their own). As a result, studies have shown that the introduction of algorithmic techniques in financial markets has turned the direct interaction of human beings into the ‘interaction order of algorithms’ (Knorr Cetina, 2013, quoted in MacKenzie, 2018: 1637). Hence, this situation of double contingency undergoes a further complication, increasingly turning financial markets into opaque ecologies.
The social studies of finance tradition
These debates have served as a background for scholars working within the social studies of finance (SSF) tradition. At first, those studies were part of a broader concern to study market devices (Muniesa et al., 2007). They focused on the automation of marketplaces, where it is the market mechanism itself that is translated into an automaton: for example, Muniesa (2007) focused on the adoption of the CAC algorithm on the Paris Bourse as a means to increase the quality of closing prices, with a view to avoiding price manipulation. While mentioning Luhmann’s second-order observation in passing, Muniesa (2014) addresses the problem from a different angle, rooted in the debate on performativity and adopting a pragmatist perspective. For him, price formation algorithms produce signs that signify what the market is meant to say (Muniesa, 2014: 69), and stock prices are to be considered as artefacts – they are constructed and immanent to the trading practice and exchange architecture of which they are a part.
Later on, SSF scholars began looking into the broader range of algorithms that now populate financial markets, especially the execution algorithms widely used by financial intermediaries since the mid-2000s. A series of studies looked into the materiality of financial algorithms, whether from the perspective of their spatial location and related physical phenomena (MacKenzie et al., 2012), their impact on regulation (Arnoldi, 2016; Lenglet, 2011) or, more recently, the way financial algorithms – specifically, high-speed trading algorithms – have played a role in the reconfiguration of US markets (Beunza and Millo, 2015; MacKenzie, 2015, 2017) or European markets (Lenglet and Riva, 2013). These algorithms also have an impact on how traders relate to the market and the resulting ‘calibration’ (Borch et al., 2015): by keeping in close proximity to their algorithms, traders now face the need to attune their bodies to a specific ‘rhythmicity’ (Miyazaki, 2013, 2016). In HFT especially, algorithms are used to execute orders faster than human perception and seem to interact in quite unpredictable ways, using strategies aimed at hiding their moves and exploiting other traders’ algorithms (MacKenzie et al., 2012). Here, the performativity thesis does not suffice to account for the kind of observational ignorance that now shapes the interaction order playing out between adaptive algorithms: indeed, as MacKenzie (2014) explains, ‘it would plainly be a mistake to treat trading algorithms simply as the faithful delegates of human beings. As Adrian Mackenzie (2006) notes, “[a]n algorithm selects and reinforces one ordering at the expense of others” (p. 44), but that ordering may not be the one its human programmers intended’ (p. 3).
While early SSF accounts of algorithmic finance focused on the history of market structures (Muniesa, 2007, 2011; Pardo-Guerra, 2010), the mid-2000s have provided a market environment prone to the diversification of algorithms. Nowadays, algorithms are not only aimed at producing prices by pairing buying and selling intentions, but are first and foremost built for optimizing moves within market structures. The differences are important here: HFT algorithms, for instance, evolve more rapidly than the algorithmic solutions used by market platforms. They are also more numerous and more diverse in their types, designs and performances, and are often built in-house by market intermediaries: such algorithms constitute a genuine ecology, fitting within the system afforded by market structures (MacKenzie, 2018).
On the epistemic issues raised by algorithms
These studies, while providing insights into the realm of financial algorithms, do not thoroughly question the underlying epistemological issues. MacKenzie (2016), while not addressing the specific epistemological assumptions relating to the study of algorithms, mentions the problem of depending on ‘indirect evidence that can mislead’, quoting one of his interviewees who had warned him that ‘someone could be in all honesty saying [their algorithms are] doing [something] when in fact they’re doing something else, they’re just not measuring it right’ (p. 23). In another article, he speaks about the ‘considerable challenge’ posed for ‘investigating high-frequency trading empirically’ (MacKenzie, 2017: 176). MacKenzie further emphasizes the fact that the trader often faces the same problems as the ethnographer. The trader who invents the algorithm is not fully aware of its organizational politics – that is, of how it might behave when interacting with other algorithms operating in the market. MacKenzie (2014) therefore asks, ‘how can one legitimate a domain that sometimes seems no longer observable, at least not to those without specialist data feeds and algorithmic equipment?’ (p. 6). This approach to trader-algorithm relations implies that not knowing the inside of the black box, and how algorithms interact in the market, is a problem that needs to be overcome. It also implies that the question of ignorance is not something invented by social scientists who simply lack access to, or do not understand, the inner workings of algorithmic objects: rather, ignorance is a defining feature of algorithmic practices, a challenge all practitioners are constantly reckoning with.
Few papers actually discuss these issues with recourse to epistemological debates. Coombs (2016: 283) noted the ‘obscure epistemic status of algorithms’ but did not address the issue from a theoretical perspective (which he refers to as ‘an abstract epistemological lens’). By contrast, Seyfert (2016) offered a detailed study of the ways market actors – regulators, IT specialists and traders – access and construct knowledge, having recourse to different ‘epistemic regimes’. These regimes allow for understanding conflicting representations of events, practices and ways of sensemaking in an algorithmic trading environment, where the problem of ignorance constitutes a structural feature of the operations insofar as the main activities of algorithms remain hidden and unseen, simply because of their sheer scale. In addition, epistemic problems are amplified because, in practice, algorithms are often not just black boxes but relational entities that receive their meaning and instruction in relation to other algorithms (Lange, 2016). Consequently, the epistemic problems relate not only to the trader but also to the algorithmic object itself: thus, ‘object of ignorance’ can refer to the algorithm as a black box for human beings, but also to the algorithm as a black box for itself or for other algorithms.
At another level, algorithmic ignorance can also be used as a strategic unknown. It can be investigated as a kind of relation that might be produced from ‘structures of not knowing’ (i.e. structures intended to divide, obscure and protect knowledge). Such strategic use calls attention to what can be called the ‘anti-epistemic’ (Lange, 2016: 232), that is, the ‘study of non-knowledge or the art of how knowledge is deflected, covered and obscured’ (McGoey, 2012: 3). Specifically, McGoey (2012) points to the ‘value and practical uses of ignorance in economic and social life’, making the point that ‘ignorance is knowledge’ (p. 4) that can help understand unexpressed intentions and political agendas.
The inner logic of strategic ignorance, in turn, allows us to question a central ethnographic problem: the lack of epistemic access to the ‘inside’ of the algorithmic object, and the related strategic use of algorithmic ignorance (Lange, 2016). It seems to us that many of the SSF works mentioned above, while providing fine descriptions of algorithmic actants by drawing on the classic ANT distinction between humans and non-humans, in the end fail to draw the full consequences of their discovery: namely, that the network of actants they describe contains an organizational politics in need of further investigation. This, we argue, can be done by broadening the type and range of our methods for reaching algorithms as ethnographic objects.
Algorithms as ‘quasi-objects’: bringing Serres to ethnography
It is our contention that we need to further theorize algorithms as ethnographic objects in order to develop an adequate understanding of their organizational politics. We suggest advancing a possible methodology for approaching financial algorithms by learning from the concepts of ‘quasi-object’ and ‘quasi-subject’ developed by Michel Serres (1982).
The main idea conveyed here amounts to moving from being to relation, and from a static to a dynamic understanding of the human-machine interaction under consideration. Serres calls for studying the collective that makes up this relation, which explains why he paved the way for ideas essential to ANT. He writes:

This quasi-object is not an object, but it is one nevertheless, since it is not a subject, since it is in the world; it is also a quasi-subject, since it marks or designates a subject who, without it, would not be a subject. (Serres, 1982: 225)
For him, the quasi-object ‘weaves the collective’: the relations between the different individualities that gather around it.
This has profound consequences for our conception of human actors, which to some extent remains entrenched within a classical view in which they are seen, in traditional ethnographies, as ‘being in charge’ of their tools. Many studies in the SSF have been developed on the basis of the ANT classics, thereby giving a representation of non-humans and of how they mediate action, often adding a surplus to the intentions, representations or ways of acting of their users (Beunza and Stark, 2004; MacKenzie and Millo, 2003; Zaloom, 2003). Yet they sometimes fail to de-centre the human component of the relation proper, for lack of a coherent theorization of the dynamic relation existing between the two paradigmatic poles (human/non-human) still popular among social scientists. Let us be clear: the SSF programme and its descriptive methods are essential, and criticizing their inability to defend a political agenda (Beunza, 2010) amounts to missing the issue. Rather, we point to the fact that beyond descriptions of networks of actants, there is also a need to make sense of the relation proper, of its internal politics and organization. The concepts of quasi-object/quasi-subject offer a nuanced understanding of what is at stake in this relation.
In contemporary financial markets, scholars have shown that traders cannot be seen as ‘being in charge’ of their algorithms: in many cases, the algorithmic tool escapes the trader, whether for purely technical reasons (Lenglet, 2011; Ma and McGroarty, 2017) or for more institutionalized justifications (Coeckelbergh, 2015; Lenglet and Mol, 2016). This of course does not suggest that algorithms should be represented as ‘living bacteria endowed with their own soul’ (Beunza, 2012); rather, it means acknowledging that, as devices, algorithms reconfigure the relation between users and their tools (Borch and Lange, 2016) by building a strong mediation (Latour, 1994), and questioning this point thoroughly. In turn, the trader, the programmer, or whoever is in charge of the infrastructure, seems subjected to the regime developed and deployed by the algorithm. Constructing a new type of inter-subjectivity and inter-objectivity, where roles are exchanged from time to time between humans and non-humans, high-speed algorithms and their users appear to evolve in a specific ecology where each is defined by its ability to receive the move of the other, to dance with it, accepting the reconfiguration of the ‘I’ as ‘a token exchanged’ (Serres, 1982: 227).
For Serres, a quasi-object is defined by its ability to circulate between people (‘It circulates, it passes among us’, 1982: 47): the quasi-object, he writes, is that which ‘traces or makes visible the relations that constitute the group through which it passes, like the token in a children’s game’ (Serres and Latour, 1995: 161). This token, passing from one hand to another, is similar to the ball that children play with. Here, he notes that the clumsy ones play with the ball as if it were an object, while the more skilful ones handle it as if it were playing with them: they move and change position according to how the ball moves and bounces (Serres, 1995: 47).
The ball is the origin of relations between the children taking part in this game, similar to the algorithm that is being used, valuated or gamed by traders in the market. And vice versa. Indeed, the quasi-object can be said to occupy ‘a space which [is] close to that of subject’ (Serres, 1995: 52). In the age of high-speed algorithmic trading, financial markets may be better understood as an arena where market participants play some kind of a similar ‘ball game’, or dance with each other: tweaking the parameters of their algorithms, traders are also destabilized, ‘sub-mitted’ to the calculations performed by the algorithmic machine, which now mediates and organizes, with its own coded rules, the relations between market participants. All in all, Serres helps us conceptualize an understanding of the empirical material of HFT: the algorithm not (only) as a black box, but first and foremost as an origin for developing dynamic relations within markets, in need of unpacking by the ethnographer.
But what makes financial algorithms so specific, in comparison to other types of entities traditionally studied by (organizational) ethnographers? How do algorithms jeopardize ethnographic research? After all, they might not be so different from other non-humans, which have already been studied: forests and related animist practices (Brown and Emery, 2008; Holbraad, 2009), borders (Jansen, 2013), ‘natureculture’ hybrids (Latimer and Miele, 2013) or mundane things (Giaccardi et al., 2016). And what about the objects studied by organizational scholars, such as hospitals and their digital clinical records (Bruni, 2005) or food factories (Hamilton and McCabe, 2016), to mention but a few? From the readings we mentioned earlier, and also based on our own ethnographic experience with algorithms, we propose to outline one distinctive issue that might be particularly relevant for understanding financial algorithms as ethnographic objects: the fact that beyond their opacity and secrecy, algorithms seem not only to put their users (the traders) and their observers (the ethnographers) on a level playing field but also to continuously shift the relations between them. Indeed, even for those who possess the abilities to access algorithms and the knowledge required to make sense of lines of IT code, their resulting movements are not fully understandable (Burrell, 2016): as Gillespie (2014) has put it, ‘there may be something, in the end, impenetrable about algorithms’ (p. 192).
In the next section of the article, we elaborate on this peculiar aspect of financial algorithms by making use of Serres’ concepts of quasi-object/quasi-subject to develop four possible perspectives on algorithms as ethnographic objects.
Interpreting algorithms as ethnographic objects
The interpretation of algorithms as ethnographic objects can be described as being structured by ignorance. As we have said before, the materiality of algorithms is hidden from human perception: beyond the difficulties of securing access to a field that is often described as sensitive, based on objects that are understood as proprietary information, human beings (whether users or observers) can do no more than approach the object of inquiry, without ever fully reaching it.
Interpreting algorithms as ethnographic objects does not limit itself to identifying potential interviewees in a given context. Nor can it amount to developing knowledge by learning how to conceive, code and use such algorithms: the object of our inquiry, the financial algorithm, ‘refuses’ itself not only to the ethnographer, but first and foremost to its creator or user. Indeed, not all of the traders or even the IT employees interviewed in past research were fully aware of the operations of the algorithms they had programmed themselves (e.g. MacKenzie, 2018). Most commonly, they were unable to explain why an algorithm would behave badly, triggering ‘large erroneous orders’ (Lenglet, 2011: 58). Here it is not possible to ‘only’ observe the traders, or to treat the traders’ narratives on what their algorithms can or cannot do as transparent accounts of how trading strategies were executed, or even as reliable accounts of how HFT algorithms operate (MacKenzie, 2014). At the same time, however, the observer is limited by his or her nature as a human being: here, the organizational ethnographer shares the same kind of ignorance as that experienced by algorithmic and HF traders. This shared perspective constitutes a crucial difference when qualifying algorithms as ethnographic objects – one that, we argue, indicates a shift from more traditional forms of organizational ethnography.
Recently, it has been suggested that multi-sited ethnography, a method identified by Marcus (1995), might offer a better approach to algorithmic organizations (Seyfert, 2016). In the social studies of finance, multi-sited ethnography has been performed in different forms by a number of scholars, such as MacKenzie et al. (2012) or Ortiz (2014). Applying a multi-sited ethnography in algorithmic finance means spreading out the field of inquiry, despite the uneasiness felt at doing so (Abolafia, 1998; MacKenzie, 2009: 179) – an uneasiness partly due to the secretive culture surrounding algorithmic trading (Gomolka, 2011: v). Very often, scholars explain HFT ‘by drawing indiscriminately from knowledge obtained through personal interviews with traders, exchanges, and/or data released from market authorities’ (Seyfert, 2016: 256), so as to provide a general explanation of the HFT industry.
While traces of algorithmic being, presence and activity might be collected and mapped, as previous research suggests, we contend that algorithms cannot be collected ‘in themselves’: because they are quasi-objects, they escape from our cognitive and material ‘grabbing’ abilities. The specific case of HFT shows that the opacity and relationality at play require a distinct frame of interpretation for observing and making sense of algorithms. One possibility here would be to complement a multi-sited ethnography with different modes of interpretation of algorithms: depending on the perspective adopted by the ethnographer, algorithms could be understood as a coded narrative, the effectuation of an economic model, the implementation of a political rule, or a trace of how technology should be trusted for making decisions.
Deploying Serres’ concepts of the quasi-object/quasi-subject as a methodological approach entails a distinct understanding of methods – one where the tools collecting the data are not separated from the interpretation of those data. The interpretation of the research data (the observation of traders, or of algorithms observing the market) is always already a part of the data collection. Following this line of argumentation, we contend that HF algorithms cannot be studied from the sole perspective of classical ethnography. Given the opaqueness of the field and the role played by ignorance in its structuration, multi-sited ethnography might provide better avenues for developing interpretations of these relations and their dynamic nature, especially when paired with a typology of possible interpretations of algorithms as ethnographic objects.
Before we outline such a typology, it is important to note that our focus on the dynamic relation between traders and their algorithms, and on the shifting agency between them, is first and foremost meant to discuss methodological issues. Thus, we neither speak in favour of a philosophy of the subject nor do we take sides with perspectives of objective and inter-objective relations. Empirically, all types of subject, quasi-subject, quasi-object and object relations are performed within financial markets, that is, algorithms being controlled by the trader as well as algorithms operating independently from their inventors. Our aim is to offer a method for understanding the dynamic and changing relations between actants, subjecting and objectifying one another, organizing their relations in different guises. Those relations depend on organizational structures, situational constellations and practical contexts.
A tentative typology for making sense of algorithms as ethnographic objects
Systematizing insights drawn from our readings and ethnographic experience, and thanks to the concepts afforded by Serres, we suggest distinguishing four algorithmic personifications: the algorithmic object (O), which the subject attempts to master; the quasi-object (QO) and the quasi-subject (QS), engaged in a dance with shifting degrees of influence between the two; and the autonomous algorithm, increasingly becoming an algorithmic subject (S). Methodologically speaking, we remain agnostic about the status of these relations. They might be imaginations, interpretations or ontologies, depending on context and situation. In what follows, we provide brief overviews of what each perspective entails and provides us with, for interpreting algorithms.
Algorithms as objects
In this first type of interaction, the mastering subject treats the algorithm as a tool, an object to be controlled. The relation between the trader and the algorithm is one of master and slave, of domination and resistance (or submission). This attempt to master the algorithm works particularly well if applied to algorithms that relate to non-algorithmic elements, for instance, when executing trades automatically in the financial markets or when identifying objects through surveillance technologies (Neyland, 2016; Neyland and Möllers, 2016) – cases in which the algorithm follows specific instructions. In such cases, the trader provides the algorithms with ‘if-then’ commands containing predefined instructions or sets of actions.
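The ‘if-then’ logic of such instruction-following algorithms can be caricatured in a few lines of code. The sketch below is purely illustrative and not drawn from any trading system discussed here; the function name, thresholds and order vocabulary are all hypothetical:

```python
# Purely illustrative sketch of an execution algorithm reduced to explicit
# 'if-then' rules: the object perspective, in which the trader has fully
# specified the algorithm's behaviour in advance.

def execution_rule(price: float, limit: float, quantity: int) -> str:
    """Return an order instruction from predefined if-then conditions."""
    if quantity <= 0:
        return "done"   # nothing left to execute
    if price <= limit:
        return "buy"    # the condition set by the trader is met
    return "wait"       # otherwise, do nothing
```

Every branch here was written in advance by the human trader, which is precisely what makes the algorithm, in this first perspective, a mastered object rather than a quasi-subject.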
In contemporary financial markets, the idea of mastering the algorithmic object has led to an ideology of ‘impersonal efficiency’ (Beunza et al., 2011). It implies that the algorithms are reduced to nothing but the passive tools of the trader, which increases transparency and anonymity and at the same time eliminates the social norms and emotions that used to define the activities conducted on the trading floor of physical exchanges (Muniesa, 2014: 73). From this perspective, HFT realizes Fisher Black’s (1971) fantasy of a ‘fully automated exchange’.
Most literature drawing on financial economics to study financial algorithms uses this perspective. However, and more often than is acknowledged in this literature, algorithms might lose the status of a mere tool and play a more active role. Here, the algorithm begins to lose its status as an object, turning either into a quasi-object or a quasi-subject, or even into a full-blown subject with a ‘life’ of its own.
Algorithms as quasi-objects
In between the two extremes – the algorithm as a passive tool and the algorithm as a living subject – various observational parameters switch sides. These are concerned with receptivity, agency, cognition and so on. Strictly speaking, every process of automation, as in the case of simple execution algorithms, involves a shift in the observation of agency from the human trader to the algorithmic tool. In HFT, this shift of agency is particularly obvious because activities take place in time horizons and space configurations that operate beyond the realm of human perception (Laumonier, 2015). Such shifts especially begin to emerge when algorithms not only have relations with human traders (i.e. do not simply follow instructions), but also with other algorithms. In financial markets, for instance, HFT algorithms react to other trading algorithms’ behaviour, rather than to long-term price moves. Here, unintended side effects can emerge and algorithms might develop effects of their own. Very often, there are inter-algorithmic effects that escape the traders.
A well-known example of such dynamics is the ‘Flash crash’ of 6 May 2010 (Borch, 2016). On this day, over one trillion dollars in market value evaporated within a few minutes. The Dow Jones Industrial Average plummeted by approximately 5% of its total value in a matter of minutes. This happened when a mutual fund attempted to sell a very large number of E-mini S&P 500 contracts, triggering a self-reinforcing feedback loop in which HFT algorithms attempted to sell at lower and lower prices to minimize short-term losses. This downward trend spilled over to the equities markets and continued until computer systems paused trading temporarily, after which prices rebounded almost immediately. The flash crash has been linked to an unwanted domino effect in which pre-programmed algorithms triggered other pre-programmed algorithms (Sornette and von der Becke, 2011).
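The self-reinforcing dynamic recounted here can be illustrated with a deliberately naive toy model (our own hypothetical construction, not a description of actual HFT systems): each algorithm sells just below the last observed price in order to exit before the others, and the price spirals downwards until a circuit breaker pauses the loop.

```python
# Hypothetical toy model of inter-algorithmic feedback (not real HFT code):
# each seller undercuts the last observed price to get out first, so the
# price spirals downwards until a circuit breaker halts trading.

def flash_crash_sim(start_price: float, undercut: float, breaker: float):
    """Iteratively undercut the price; stop when the circuit breaker trips."""
    price, ticks = start_price, 0
    while price > breaker:
        price -= undercut   # each algorithm sells slightly below the last price
        ticks += 1
    return price, ticks

# With a breaker 5% below the start and a 0.5 undercut per tick, the spiral
# halts after ten rounds of mutual undercutting.
final_price, ticks = flash_crash_sim(100.0, 0.5, 95.0)
```

The point of the caricature is that no single algorithm ‘intends’ the crash: the downward spiral is an emergent, inter-algorithmic effect of each one reacting to the others.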
What is important to note for the purpose of this article is the fact that the rules governing observation among agents are anything but trivial. The flash crash shows that HFT algorithms act upon (and respond to) the behaviour of other traders’ algorithms. Following Muniesa (2014), it can be argued that such second-order observation (algorithms observing other algorithms) alienates the trader from perceiving the aggregated outcome of his operation. The wider effect that the operation of the trader’s algorithms might have upon the operation of the market structure in which he trades remains totally opaque to him.
Algorithms as quasi-subjects
The shifting status of algorithms has consequences for the trader, somewhat de-centring him: he loses his status as normal subject and rational agent. Whether in the context of accessing markets (Lenglet and Mol, 2016), in the daily workings of algorithmic trading (Lenglet, 2011; Seyfert, 2018) or in HFT firms (Lange, 2016; Seyfert, 2016), interviewees have encountered many situations that would best be described as ‘abnormal’ within the dominant regulatory paradigm based on impersonal efficiency and a pure model of perfect operability.
With reference to the Flash crash mentioned above, HFT is considered by some to contribute to ‘mini’ crashes of a similar kind on a continuous basis, thereby representing an inherently destabilizing factor in financial markets (Golub et al., 2012). Furthermore, Ben-David et al. (2012) have argued that in HFT, due to the correlated nature of the assets traded and the accelerated time-scale, such mini crashes continuously occur in single stocks. In fact, when observing and talking to practitioners, one cannot fail to notice the common awareness of the ambiguity and the dangers of algorithmic operations. In one case we observed, the Head of a trading desk kept an illustration of the collapsing stock price of Knight Capital on the wall. The demise of this once famous HFT firm was related to the accidental activation of an experimental trading algorithm in August 2012, and the poster on the wall reminded everybody of the fragility and fickleness of algorithmic objects. When abnormality becomes part of the routinized unfolding of daily practices, the paradigm has to change, or at least be adapted so as to take into account features that bring such environments very close to those prone to creating ‘normal accidents’ (Perrow, 1984; Weick, 2016).
As in other technological environments, HFT provides a context where human actors are not operating as rational agents in full control of their algorithmic devices. As has been demonstrated elsewhere (Lange et al., 2016), in HFT human traders are not interested in general developments of the market, but only in those that have a specific relevance for their trading system. In this case, the algorithmic process is structured through ignorance: knowing the internal operation of the trading system matters less than its behaviour in relation to other algorithms and the eco-system.
Furthermore, and in addition to the dance taking place within the trader-algorithm relation, there is also a regulatory restructuring of such movements: recent regulations of HFT, such as the German HFT act and the Markets in Financial Instrument Directive (MiFID II), have created new demarcations between algorithmic objects and human traders. For instance, the invention of so called ‘algo flags’ (labels for orders generated by algorithms) re-creates the algorithmic object in a completely novel way (Coombs, 2016). These regulations determine algorithmic objects not through their codes (within black boxes) but from a relational perspective focused on their effects on the microstructure of financial markets.
Algorithms as subjects
Most recently, some high-frequency traders have been confronted with yet another transformation of their relation to algorithms. In fact, algorithms seem to take on the status of a subject: here, we see the idea of algorithms taking on a life of their own. It is not uncommon to encounter this idea in interviews with traders. They speak of algorithms as ‘partly alive’ (Beunza and Stark, 2004: 396), as ‘evolving’ and also as having an ‘end of life’ (Lenglet, 2011: 54). The vocabularies and narratives used, and the ways of relating to the algorithm, suggest that its unclear status generates an ambiguity. Being ‘partly alive’ also means not being quite alive, that is, differing at least partially from living organisms: not being like ‘living bacteria endowed with their own soul’ (Beunza, 2012).
This status of the algorithm as subject can be found, for instance, in machine-learning algorithms. In contrast to the high-speed HFT described above, the algorithm is not so much responsible for the fast or efficient execution of orders or quotes as for finding unique trading patterns. In order for a machine-learning algorithm to function, a human has to perform a variety of assisting and preparatory steps: reducing the complexity of data from the environment for the system, selecting and engineering features (Kearns and Nevmyvaka, 2017: 92), and making these selected data and features ‘algorithm ready’ (Gillespie, 2014: 168). Thus, while the algorithm has taken on the task of identifying trading ‘policies’, the trader has moved on to identifying possible features that might be added to the machine-learning algorithm. Although machine-learning algorithms need a lot of information, they cannot operate with ‘raw data’: they need variables they can compare with each other to identify patterns and new policies – a task which remains with the human part of the trader-algorithm relation.
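The division of labour described here – the human preparing ‘algorithm ready’ variables before the learning system can compare them – can be rendered as a minimal, hypothetical sketch (the function and feature names are our own, for illustration only):

```python
# Illustrative sketch (hypothetical data and names): the human side of the
# trader-algorithm relation turns 'raw' price ticks into comparable variables
# ('features') before any learning algorithm can look for patterns in them.

def engineer_features(raw_ticks):
    """Turn a raw price series into algorithm-ready feature vectors."""
    features = []
    for i in range(1, len(raw_ticks)):
        features.append({
            "return": raw_ticks[i] - raw_ticks[i - 1],  # price change between ticks
            "level": raw_ticks[i],                      # current price level
        })
    return features

# Only after this human preparation does the series become 'algorithm ready'.
feats = engineer_features([100.0, 100.5, 100.2])
```

Only once this preparatory work is done can a learning algorithm compare the resulting variables to detect patterns; the ‘raw’ ticks themselves are not usable, which is exactly where the human remains indispensable to the algorithmic subject.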
As such, the algorithm has a rather strong influence on the trader in terms of cognition and pattern recognition. For traders, machine learning is accompanied by new layers of non-knowledge that arise from the very fine granularity of the data, combined with a lack of understanding of how such data relate to market actions (such as profitably buying or selling shares, or optimally executing a large order). Consequently, the trader puts less emphasis on understanding the logics behind a certain strategy: he simply trusts the machine-learning algorithm to detect patterns that work. In sum, machine-learning algorithms need assistance, and the human trader provides that assistance. We are confronted with a dramatic reversal of roles, in which humans increasingly become the tool of what was formerly their tool: the human is increasingly becoming the object for what is now an algorithmic subject.
It is important to repeat that the identification of machine-learning algorithms as subjects is a heuristic threshold. We do not claim that machine-learning algorithms are ontologically autonomous, intelligent or free. Nor do we give in to the temptation to apply dialectical thinking to what seems like a Hegelian reversal of master and slave relations. Qualifications of an algorithm as a subject emerge within the subject–object relation itself – from interpretations, imaginaries and ontologies within algorithmic practices. The current framing serves methodological purposes alone, with a view to helping social scientists in the analysis of algorithms as ethnographic objects.
Concluding remarks
In this article, we have advocated a methodology for studying high-speed financial algorithms and the related algorithmic relations and practices. We have done so with reference to recent research on algorithmic cultures, and to empirical data drawn from ethnographic fieldwork in financial markets. The development of this methodology is based on two basic contradictory observations that we have ourselves frequently encountered when doing fieldwork. On one hand, algorithms appear as powerful technical objects that best embody ideas of rationality and efficacy, but on the other hand, they also remain elusive and strange black-boxed entities to the observer. On one hand, the trader assumes a dominant role and is inclined to bend the algorithm to his intentions, with a will to remain entirely in control, and with the algorithm represented as a device that executes the will of the trader. On the other hand, algorithms are seen as active participants, decisive actors that sometimes bend the will of the individual trader. The latter scenario frequently emerges in HFT, where algorithms interact more with other algorithms than with human counterparts (MacKenzie, 2016). That is even more the case with advanced machine-learning algorithms, where human beings delegate the cognitive process of discovering trading opportunities (market correlations) to the algorithmic system.
As a consequence, the relation between users (the traders) and observers (the ethnographers) is transformed, in that even for those who possess the knowledge required to code the actions of the technical object, the resulting movements are not fully understandable (Burrell, 2016; Gillespie, 2014). To put it another way, observing traders dealing with their algorithms in different settings opens towards a more nuanced understanding of the human-machine relationship. This point also refers to apparent disconnections between narratives and practices, which we have encountered during our respective fieldwork. Interviews with different actors in different companies revealed the use of very different definitions of similar phenomena, usually expressed with conviction and certainty. That is not to say that these varying understandings indicate somebody is wrong or even lying (MacKenzie, 2016). It does suggest, however, that the degree of implicit non-knowledge and uncertainty is much higher than commonly understood (Karppi and Crawford, 2016), and that it is rooted within the quasi-object/quasi-subject relationship (Roberge and Seyfert, 2016). Both the user and the observer are confronted with similar epistemic constraints that involve implicit or explicit forms of non-knowledge and uncertainty, and types of trust that are easily broken.
We have argued that traditional methods have tended to frame algorithmic practices in two extreme ways: either as human expressions, or as black boxes containing all instructions necessary to operate autonomously. Related to, but not necessarily identical with, these two methods are two types of reductionism in social research. On one hand, there is the subjective and inter-subjective reductionism of classical sociology, where agency is entirely reserved for human subjects. In this subject–object relation, the human subject dominates, subduing the algorithmic object. The other reductionism stems from approaches that have actively tried to overcome the strict subject–object and human–non-human bifurcations inherent in classical sociology. Such approaches (such as STS and ANT) have criticized the subjective and inter-subjective reductionism, replacing it with the notion of symmetry between subject and object, with a view to displacing the issue thanks to the resulting ‘flat’ landscape. In their view, actants encounter each other on equal footing, no matter whether the object is a key, an automatic door or a machine-learning algorithm. However, such a symmetric approach has proven to be too inflexible when studying algorithms as ethnographic objects. The case of HFT shows that the relation between traders and algorithms varies, affording a multiplicity of shifting roles, both symmetric and asymmetric. In addition, there are qualitative differences between a user and his key, on the one hand, and a human trader and his algorithm, on the other. While it seems true that agency and cognition are distributed between actants, the relation itself is not necessarily symmetric. Consequently, we have tried to create a more complex classification of subject–object relations and have developed an indicative typology for studying them.
We also propose that a multi-sited ethnographic approach would be most suitable for studying algorithms as ethnographic objects, in order to grasp the changing human–machine/trader–algorithm relation.
We suggest replacing the notion of a general symmetry with the notion of an ‘organizational politics of algorithms’. This organizational politics accounts for various types of subject-object relations. In this article, we have exemplified four modes of subject-object relations: the algorithm as object, quasi-object, quasi-subject and subject. The organizational politics is particularly visible in the extreme cases (the algorithm as subject or object), because these involve relations of domination and submission (or resistance and diversion), that is, domination of and submission to the algorithm. The specific dynamics in which this organizational politics unfolds – the mutual entanglement of algorithms and humans – we have called a ‘dance’, so as to account for the exchanging of roles and agencies. In our fieldwork we have indeed seen human actors and algorithmic objects entangled to various degrees, dancing with each other and changing roles in the leading of the dance. While some traders were bent on bending the algorithms to their will, others were more willing to be bound by them: each market event precipitated a series of agencies arranged in a singular setting allowing for the dance between trader and algorithm to develop, holding together before departing from each other within the time frame of the market context. Surely, other modes (or different kinds of dances) might be identified when it comes to interpreting algorithms as ethnographic objects: we would however suggest that the organizational ethnographer keep an acute eye on the relation before succumbing to the appeal of mere description.
Furthermore, the algorithmic object might even play ‘hardball’ with the trader, leaving him overwhelmed and frustrated. Finally, and perhaps most commonly, both might engage in friendly ‘teamwork’, at which point they become indistinguishable. Thus, the methodological question shifts from studying the intentions inscribed into tools and codes to the relational analysis of the nature of their entanglement. Consequently, it is possible to identify a continuum between trader and algorithmic involvement, with the two positions previously studied as two heuristic (or fictional) extremes – pure subjective or inter-subjective activities and pure algorithmic or inter-algorithmic movements. With this methodological framework, we hope to shift the attention away from the extremes (‘warm intentions’ and ‘cold codes’) to the areas in between, where the two sometimes merge and become seemingly indistinguishable.
In addition, as the algorithm is always already in relation with other algorithmic objects or other human beings, we can no longer analyse it from one perspective or with one method. Instead, the social study of algorithms requires a multi-sited ethnographic approach, destabilizing positions by furthering the perspectives afforded by the interpretation of algorithms as ethnographic objects. It is neither enough to read and decipher the code of the algorithms, nor is it enough to only ask questions to human actors using these objects. What is needed is an organizational ethnography as a type of ‘teamwork’ (Rouleau et al., 2014: 4). The teamwork takes place in the general ecology of algorithmic trading, where the ethnographer needs to acquire an appropriate amount of basic knowledge. But as we have also shown, it also means understanding the important role of ignorance. For instance, hoping to understand algorithmic practices by learning to read the code of algorithms fails to appreciate the role of ignorance in actual ‘everyday’ activities (Gillespie, 2014). As in other cases, knowledge is generated in the relation: learning how to play the ball efficiently requires playing it effectively. The same is true for the algorithm: facing an algorithmic object, learning something requires accepting its relational nature and the type of ignorance it generates. For it is impossible to know anything about a relationship beforehand: only by committing to the relationship can we acquire some (non-)knowledge about it.
A second aspect of teamwork relates to the diversity of our fieldwork, which was nonetheless conducted within the same field of HFT. As our illustrative cases and previous work demonstrate, the multi-sitedness of our methods also relates to the importance of capturing the variety of market ecologies. Algorithmic trading firms, their competitors, financial analysts, regulators, press commentators and all other interested parties each have their own, sometimes conflicting, understandings of financial markets. Each individual understanding corresponds to varying epistemic views and different practical solutions with real effects on the market. A somewhat representative picture of algorithmic trading therefore needs to account for the specificity of the multiple actors operating in the various ecologies of algorithmic trading. Consequently, ethnographies of algorithmic trading require a variety of interpretations that account not only for non-human actors but also for the particularities of each considered ecology.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
