Abstract
This article analyses the complex work of human actors and technologies that goes into producing that which appears to us as ‘transparent’. Drawing on studies of governance and surveillance, affordance theory, actor-network theory and sociological work on numbers, we analyse the role played by mediating technologies in the production of transparency and relate it to the question of how knowledge is created, recycled and modified in organizational settings. This perspective is largely absent from existing research on transparency, which construes transparency as unmediated or fails to investigate the organizing properties of specific mediating technologies. We argue that mediating technologies, conceptualized here as disclosure devices, have distinctive organizing properties that are important to scrutinize. They play a central role in attempts to shed light on objects, subjects and practices, and to help build or break up relationships within and across sites and organizations. We focus on three disclosure devices and their respective knowledge creation processes: (a) due diligence, whose emphasis is on qualitative knowledge production; (b) rankings, whose emphasis is on quantitative knowledge production; and (c) big data analysis, which underscores algorithmic knowledge production. We conceptualize the distinct features of these disclosure devices, indicate ways in which they shape organizational processes and discuss some of the ethical and political challenges they pose.
In 1913 Louis Brandeis, a US Supreme Court Justice, wrote: ‘Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman’ (Brandeis, 1913). This statement bears relevance in times of global financial crisis and cases of leaking and whistleblowing, which provide the opportunity to diagnose organizations as diseased and to speculate about the appropriate cure, such as ‘ripping open closed doors and restoring public oversight’ (Steffen, 2009).
The widespread faith in the regulatory and empowering virtues of transparency makes it pertinent to challenge its self-evidence as a cure for the problems of our times. We are not the first to do this, but our approach differs from much existing research in its focus on the complex work of human actors and technologies that goes into producing that which appears to us as ‘transparent’, that is, the result of efforts at ‘laying bare’ (Roberts, 2009: 962). Our analytical move towards scrutinizing the manufacture of transparency points to the ambiguities that always surround projects of transparency and helps us to consider their significance. For example, the project of exposing the hidden is important in many contexts, ranging from business and politics to science, but when put into practice it often ends up concealing more than it reveals, such as when an account of something in the past simplifies so much that it eliminates important contextual information. In this way, transparency can easily become part of the problem it was intended to solve, producing concerns with distortion, concealment and collusion. Similarly, we need to consider whether highly touted phenomena like ‘big data’ will make it possible to uncover the unknown while remaining sensitive to contextual dynamics (McAfee and Brynjolfsson, 2012), or rather lead to new types of opacity. More generally, the project of transparency elicits a particular stance towards what counts as truth and certainty. When transparency is ‘rolled out’ programmatically in organizations or societies, typically with a view to correcting failures or empowering employees and citizens, it is understood narrowly as a superior mode of knowledge, a cultural signifier of unmediated objective information. It thus implies a denigration of other forms of disclosure, such as gossip, rumor, scandals and conspiracy theories, which are considered morally suspect, subjective and unreliable (Birchall, 2012).
And this holds irrespective of the social functions these forms of disclosure perform, such as ensuring social cohesion, reinforcing a democratic public sphere or contesting an ideology (Birchall, 2012: 4–12; see also Comaroff and Comaroff, 2003).
Our contribution to existing literatures on transparency and organizational research more broadly is thus to look carefully at the contingencies of transparency, and at how it is produced. The first step is to take issue with those understandings that do not consider seriously the impossibility of full transparency and neutralize the active role of mediation. The second line of argument, which is central to the article, builds on and extends critical studies of transparency, which in fact do emphasize the naivety of unmediated transparency but downplay the significance of the mediating technologies involved. Seeking inspiration in Foucauldian studies of governance and surveillance, affordance theory, actor-network theory and sociological work on numbers and metrics, we relate transparency to the question of how knowledge is created, mediated, recycled and modified in different organizational settings and social domains. By implication, we focus on the active role played by mediating technologies, or what we term disclosure devices, in the production of transparency. Irrespective of their potential alignment with broader regimes of truth, disclosure devices have distinctive organizing properties that are important to scrutinize because these can serve to enlarge or restrict the scope for human interaction and connectivity. These properties play a central role in attempts to shed light on objects, subjects and practices, and their significance lies in their capacity to help build or break up relationships within and across sites and organizations.
We focus on three disclosure devices: (a) due diligence, whose emphasis is on qualitative knowledge production; (b) rankings, whose emphasis is on quantitative knowledge production; and (c) big data analysis, which underscores algorithmic knowledge production. Whereas (a) and (b) produce a retrospective kind of disclosure that has been the subject of critical investigation for quite some time (e.g. Espeland and Stevens, 2008; Hansen, 2012; Löwenheim, 2008; Maurer, 2005; Porter 2012; Power, 2004), albeit not specifically from the perspective of transparency that we offer here, (c) produces what seems to be a new kind of machine-generated ‘anticipatory transparency’ entirely concerned with making predictions based on data mining and pattern recognition in large amounts of digital traces. It is not unrealistic that (a) and (b) will become increasingly aggregated with, if not subsumed under, the anticipatory logics of the algorithmic mode in organizational practice (Hildebrandt, 2011; Rouvroy, 2011). Organizations will have growing access to masses of real-time data and specialized expertise to interpret the rapidly expanding ‘datafication’ of human actions (Mayer-Schönberger and Cukier, 2013). The key contribution of the article is thus to conceptualize the distinct features of these disclosure devices and their ensuing modes of knowledge, to indicate the ways in which they shape organizational processes and to discuss some of the ethical and political challenges they pose for our understanding of the project of transparency.
The article is organized as follows. We first discuss relevant literatures on transparency and highlight the need to look more into the contingencies of transparency, specifically its mediated character. Second, we develop the concept of disclosure devices based on literatures that emphasize the materiality of mediations and their social implications. We then investigate in more detail the modes of knowledge produced by the disclosure devices mentioned above and discuss their consequences. In our concluding section we suggest that future research on transparency address the ascendancy of the algorithmic mode in the production of transparency. We mainly use examples from contemporary organizational efforts at controlling money laundering and corruption within and around organizations. The examples serve as an empirical backdrop to our conceptual argument, but where possible we provide examples from other issue areas. Our analysis therefore does not represent a full-blown empirical investigation of the field in question; the field serves primarily as a resource for theorizing. By developing a conceptual reflection based on our studies of a specific empirical terrain, we suggest areas for further conceptual and empirical research.
The contingency of transparency
Scholars have shown how various forms of ‘seeing’ become central features of modern societies in which ‘lives and deaths [come] to be read less as a sign of cosmic metaphysical forces than as the sum of mundane biophysical processes, knowable primarily through the modest art of observation’ (Comaroff and Comaroff, 2003: 291; Scott, 1998). Particularly in the Western tradition, transparency, understood as visibility contingent upon observation (Brighenti, 2007), has come to rely on the notion of the ‘social contract’ in which ‘free individuals’ delegate some degree of their freedom to authorities in exchange for the provision of protection and security. Here authorities must provide reliable information to the public to be considered trustworthy and accountable (Drucker and Gumpert, 2007).
This conception of information has shaped research in which transparency is promoted as a solution to governance problems. Consider for example the claim that in order to be transparent, ‘ … organizations should voluntarily share information that is inclusive, auditable (verifiable), complete, relevant, accurate, neutral, comparable, clear, timely, accessible, reliable, honest, and holds the organization accountable’ (Rawlins, 2009: 79). Such transparency is premised on a model of linear communication in which the provision of information will produce an informed and engaged public that can hold people in positions of power accountable (Fenster, 2006). The technological mediation that is necessarily involved in the provision of information appears as a neutral transmission belt, obscuring the power that is involved in the selection and coding of what is made visible to us and what is not.
Hood and Heald (2006: 20) note that the pervasiveness of transparency aspirations in modern governance is confronted by doctrines of confidentiality or secrecy promulgated by business, governments and interest organizations (see also Birchall, 2011). Recent examples include business invocations of commercial confidentiality in processes of out-sourcing and privatization, and government insistence on state secrets in the so-called war on terror (Morozov, 2013). Such doctrines remind us about the fundamentally political nature of transparency: it can easily move from an unmarked taken-for-granted issue to a marked and contentious issue that articulates normative boundaries in social life. Ironically, confidentiality and secrecy doctrines are typically premised on the same model of linear communication that underpins mainstream transparency aspirations. That is, they assume that too much information provision can empower the public to contest those in power. Like transparency, doctrines of confidentiality and secrecy display the same problem of obscuring normative, technological and political contingencies.
Critical scholarship on transparency has come in different versions, with critics arguing that transparency does not deliver what it promises and/or is aligned with neoliberal projects of global capitalism and its chronic opacities. Most studies emphasize the need to analyse the contexts of information disclosers and information users, the benefits and dangers of transparency policies and the role of governments in developing and implementing such policies. For some, transparency policies can be designed to help people assess the validity of claims made by organizations and thereby contribute to more informed decision-making, organizational learning, organizational efficiency and effectiveness. This more optimistic view is characteristic of Fung and colleagues’ work (2007), which argues that policies of ‘targeted transparency’ can be effective if designed properly. Targeted transparency does not simply increase information as such, but rather the knowledge that informs the choices of citizens and consumers (Fung et al., 2007: vii). In a similar vein, studies of transparency in the context of corporate social reporting have emphasized the political task of mobilizing competitive environments, with a specific view to ‘ratcheting up’ standards through increased transparency. Importantly, this requires the production and availability of comparative data that enables ‘users to focus on the worst performers to ratchet up minimum standards and also allows some disclosers to benefit as top performers’ (Hess, 2007: 471).
Other critical studies have argued that transparency, usually promoted as a trust-enhancing measure, can spur mistrust. As Strathern observes (2000: 310), ‘people want to know how to trust one another, to make their trust visible, while (knowing that) the very desire to do so points to the absence of trust’. Rendering something visible can distort organizational performance and impose new forms of closure, self-censorship and anxiety. Also, too much light amounts to tyranny (Tsoukas, 1997). Other unintended consequences include de-coupling, whereby organizations project an image of being in control to the outside observer, whilst internal operations resist any meaningful connection to this image. This gives way to the notion of transparency as ‘theatre’ that hides more than it reveals (Power, 1997, 2004; Strathern, 2000) and weakens the effectiveness of accountability (Roberts, 2009). Accountability, similar to transparency, is contingent. It is never possible to give a full account (just think of the fragility and incompleteness of memory). And since demands or norms of accountability are not equal across the social spectrum, any individual or organization will face severe difficulties in meeting several accountabilities at the same time (Butler, 2005; Messner, 2009: 932).
Other critical scholarship has focused on the politics and material relationships that operate independently of information provision processes. Zyglidopoulos and Fleming (2011) argue that late modernity has made a plethora of information available, but also that growing reflexivity amongst consumers has forced organizations to be more visible and accountable than before. On closer inspection, the move towards transparency and accountability is only partial as the same conditions of late modernity have made it easier for organizations to hide their contested practices, such as the outsourcing of unethical activities in global commodity chains (Zyglidopoulos and Fleming, 2011: 692). Particularly in the context of globalizing neoliberalism, ideals of transparency have come to serve the vision of the market as the central agent in society, emphasizing deregulation and observable economic transactions to maintain a level playing field in competitive markets, and enhancing organizational and individual self-regulation (e.g. Garsten and Lind de Montoya, 2008; Larmour, 2006). Transparency initiatives are endorsed by business and politicians but they do not undermine the prevailing ‘business as usual’, such as the abuse of authority. They are largely voluntary and a surrogate for hard regulation, bolstering existent and opaque configurations of power (e.g. Hindess, 2005).
In all, critical research on transparency has examined the difficulties, failed promises and unintended consequences of transparency when put into practice, but it has focused less on analysing how transparency is produced. Power’s work (1997) on how budgets, audits and benchmarking create particular knowledges and linkages that enable governing at a distance, as well as related work in the field of critical accounting studies (e.g. Miller, 2001; Miller and O’Leary, 1987; Roberts, 2009; Robson, 1992), provides important insights into the intersection of mediating technologies and power. Governing at a distance rests on the provision and translation of information about subjects, objects and processes to centers of calculation and power. In other research, mediating technologies tend to be reduced to vehicles for injecting more important forces or mentalities, such as globalization or neoliberalism, into organizational settings (Rose et al., 2006: 95). Here the constraints and opportunities provided by specific mediating technologies tend to escape analytical attention. We argue below that technological mediation in the shape of disclosure devices is more loosely connected to such forces, more active and mobile, and that the devices have distinctive properties that should be analysed more carefully to understand the production and significance of transparency in particular contexts.
Disclosure devices
The move, originally identified by Foucault, from sovereign and disciplinary power based mainly on law and coercion, to governmental power or governmentality, which aims at enhancing the capacity of populations and individuals to govern themselves, has directed considerable attention to the material technologies used in government and management, such as architecture, examinations and surveys, metrics and statistics (Hacking, 2007; Miller and Rose, 1990; Power, 1997; Rose, 1999).
The classical example of how material structures have governing effects by creating a field of visibility is Jeremy Bentham’s panoptic model and its self-disciplinary system, which became famous in the exploration of surveillance in modern society (Foucault, 1977; Lyon, 2006). The image of the panopticon as a device for centralized observation, control and self-discipline in modern societies has since been nuanced (Brivot and Gendron, 2011). Contemporary surveillance is mediated through an enormous range of devices that make visible subjects, objects and processes, including templates, diagrams, reports, video cameras, remote sensing, traditional mass media, social media and statistics. Surveillance also takes place in a plethora of domains, and includes the scrutiny of consumption and entertainment patterns for security, research and commercial purposes. Add to all this the digital turn, which has made it possible for public, private and civil society actors to capture, store and aggregate huge amounts of data. The multi-directional nature of surveillance means that its target is not only the ‘deviant’, as frequently implied by the panoptic metaphor, but nearly everyone by default. That surveillance is made from multiple angles and social positions reflects the polycentric character of contemporary social formations, in which governing subjects, like representatives of governments and corporations, are also governed subjects.
If material disclosure devices are crucial in the multi-directional ‘conduct of conduct’, their distinctive organizing properties remain unclear. A critique frequently levelled against some Foucauldian studies is that while they acknowledge the role of material technologies, they nevertheless subsume these under the logics of large-scale mentalities without really exploring the specificity of the technologies themselves (Kipnis, 2008; Porter, 2012). The critique is a fundamental one, but also problematic if it implies that disclosure devices are not to some extent aligned to wider regimes of truth, that they are not coded by rationalities of various sorts, but rather considered to be fully autonomous. We believe that while devices can never be fully autonomous, it is misleading to consider the use of particular disclosure devices in organizational practice as simply ‘derived from’ or ‘reflecting’ larger ‘mentalities’. For example, research on audit practices in non-Western settings indicates that social formations need not be shaped by neoliberal mentalities of the West to adopt the same disclosure devices for the purpose of carrying out audits. Disclosure devices are connected to the politics of localized social situations and issues that are irreducible to macro mentalities (Kipnis, 2008; see also Porter, 2012). This does not detract from the potentially powerful effects of using the devices, such as the conferring of particular identities on people and objects, enabling inclusion in or exclusion from networks and communities, and changing organizational scripts, practices and resource allocation.
Another set of theoretical contributions can improve our understanding of the role of these technologies while keeping intact the central insights on materiality, knowledge creation and governance from Foucauldian-inspired studies of surveillance. Affordance and new medium theories (e.g. Deibert, 1997; Gibson, 1977; Hutchby, 2001) suggest that mediating technologies create fields of potential action that can respectively enlarge and restrict social interactions. The transformation of basic information into knowledge is seen as strongly dependent on mediating technologies, which are never neutral but always impose certain constraints on the nature and type of possible human communications, while facilitating other types. For example, printed information is fixed in a medium (paper), which severely slows the dissemination and reuse of information in social networks, whereas digital information speeds up and expands these processes, with implications for the scope of social relationships. This view can be criticized for being technologically deterministic, neglecting the role of human actors and social context in shaping the mediating technologies in the first place. Yet the critique tends to ignore the different effects attributable to the mediating technologies as such, for example via their design and functionality. People create different technologies for different purposes, but human effort can never be completely determinant, because once a specific technology is introduced, it ‘becomes part of the material landscape in which human agents and social groups interact, having many unforeseen effects’ (Deibert, 1997: 29) and ‘generative capacities’ (Rubio and Baert, 2012).
This also means that technologies originally developed and justified for one purpose, say the exchange of personal information on social media, can easily find surprising applications, like assisting the insurance industry in revealing fraudsters. The affordances inherent in a mediating technology have unforeseen disciplinary consequences, also termed ‘function creep’ (Ellerbrook, 2010). A more valid critique of affordance theory is that it does not really theorize the linkages created between humans and material objects in social practices. Here some of the approaches mentioned earlier are stronger, as they draw attention to the decentred ways in which material technologies help to produce relations of power, as well as amplify these. But as already indicated, these approaches have less to say about the distinctive organizing properties of particular technologies and focus more on the determining forces of larger mentalities.
Insights from actor-network theory (ANT) and, more broadly, Science and Technology Studies (STS) can help to address the distinctive nature of mediating technologies while demonstrating their significance as they become aligned with human activity in networks. ANT and STS constitute a diversified literature and are subject to critiques addressing various ontological, epistemological and political shortcomings, a debate we shall not take up here (e.g. Whittle and Spicer, 2008). Nonetheless, key insights about the relationship between the human and non-human have been a source of inspiration for very different bodies of literature, including studies of power and numbers in governance processes (e.g. Hansen and Porter, 2012; Miller and Rose, 1990; Rose et al., 2006), and research on the politics of knowledge (Rubio and Baert, 2012).
ANT and STS, much like affordance and medium theory, suggest a performative role for material objects. But unlike affordance and medium theory they focus particularly on the entanglement of humans, including their bodies, thoughts and ideas, with non-human objects, such as material things and devices, in socio-technical networks (Higgins and Larner, 2010; Kendall, 2004; Latour, 2005; Law, 1992; Mackenzie et al., 2007). Processes of inscription, translation and enrolment are important in tracing as well as producing agency (which is always the combination of humans and non-human objects), and thus also its performative effects. In contrast to the concept of diffusion (Latour, 1987), translation involves the modification of agency as it moves through networks of humans and non-humans. Translation plays a key role in making networks durable, enrolling and empowering some actors while excluding and disempowering others. The process is not harmonious and smooth, but replete with controversies as information travels along socio-technical networks. Once these are settled, the new objects produced are ‘black-boxed’, i.e. taken for granted and naturalized.
We can use these insights to analyse some of the distinctive features of disclosure devices, including their significance in social processes particularly concerned with the production of transparency. Generally, disclosure devices are deployed to make objects, subjects and processes visible through visual, verbal and numerical representations, but precisely because the properties of the devices differ, their effects will be different as well. The representations that result from disclosure rest not only on institutional classifications and selections made in the process of forging them (Bowker and Star, 1999; Hacking, 2007) but also on the translation of local knowledge and relationships into new contexts and domains, often reducing, and potentially even concealing, the complexity of what the representations aimed to make legible in the first place. Translation processes engage humans and technologies, resulting in the formation of socio-technical networks in which specific modes of generating transparency can become standardized and legitimized, such as ways of vetting business partners, or calculating and comparing the quality of governance across cases. Transparency is manufactured while it simultaneously orders social reality through the work of simplifying representations that travel throughout chains of translation, entangling people and material objects and reconfiguring relationships.
In the following subsections we analyse three widely used devices for manufacturing transparency: (a) due diligence, whose emphasis is on creating transparency via qualitative knowledge production; (b) rankings, which produce knowledge through abstract quantification; and (c) big data analysis, which relies on algorithmic knowledge production. These disclosure devices differ in several ways: they rely on different forms of material and methodologies, produce different representations and visibilities and involve different articulations of social relations, boundaries and possibilities for recirculation and participation. The goal here is to demonstrate the distinctive properties of the devices and the modes of knowledge and transparency production involved, and to discuss their effects. We draw mainly on examples from international anti-corruption and anti-money laundering efforts, and where possible, provide examples from other fields.
The qualitative mode: due diligence
Emblematic of the qualitative mode is the production of narratives that make visible certain actors, relationships and processes in the past. The conduct of due diligence provides an example of how this production takes place, illustrating also how humans and materials become entangled in networks as the process of disclosing and translation develops.
Organizations increasingly operate across multiple jurisdictions shaped by different norms, legal requirements and media attention. Doing business under such circumstances has pushed organizations to vet business partners carefully. Although scrutinizing business partners is a social practice as old as commerce itself, the term due diligence appeared for the first time in the US Securities Act of 1933 (Spedding, 2004: 2). In its more current use, due diligence refers to the processes whereby corporations go through the histories of their potential partners before closing a deal in relation to mergers, acquisitions, joint ventures, strategic alliances or supplier arrangements. Consider the following excerpt from a web page titled ‘due diligence horror stories’ run by a management consulting firm:

An associate at a private equity firm was responsible for opening up operations in Eastern Europe. He discovered an opportunity to invest in a recently privatized manufacturing company. The CEO at the target company was charming and seemed competent. However, after six months of effort, the executive learned that the CEO had been convicted of embezzling money from a bank he worked at and had ties to syndicated crime. Having learned his lesson, the associate now makes sure that background checks are conducted first thing in the due diligence process. (http://www.astutediligence.com/Diligence_Horror.htm#criteria)
Doing such ‘background checks’ caters to the legal demands placed on companies operating in for example the financial sector and can protect a company from lawsuits if the procedures have been followed. This legal core of due diligence has been complemented significantly over the years with other facets, as illustrated by terms like human due diligence, cultural due diligence and integrity due diligence (e.g. Harding and Rouse, 2007).
The conduct of due diligence requires significant work and mobilizes human beings and materials. Investigations in mergers and acquisition processes, for example, are carried out by one or more examiners, e.g. a law firm representing an acquiring corporation, who then vet the pre-selected examinee(s), i.e. the potential targets of the acquiring corporation. The material used in the examination includes spreadsheets and terms of reference, whose various classification systems guide the collection of data, interviews of examinees, analysis of documents and use of information from public media. Searches for the examinee’s business history in databases that promise to ‘find risk that does not want to be found’ (e.g. www.world-check.com) complement the entire exercise. While much of the work revolves around material forms and documents, building personal relationships is highly important, such as meeting clients in person, being introduced to their business partners and families, and where relevant, visiting facilities and so on. The transportation of bodies to specific sites where meetings take place, with copies of documents on the table to facilitate discussion face-to-face, all testify to the intersection of the human and material in due diligence processes, whose immediate material outcome is a completed due diligence form. This form discloses details about the client, and the client’s objective with business arrangements. In this way, due diligence reduces complexity and aggregates information into packages that come to represent objects and subjects, and which then become part of other information packages exchanged within and across organizational settings (Flyverbom, 2012). But disclosure tends to be restricted to the examining organization. It is a confidential if not private affair, although confidentialities can transpire to the public once the process has been completed.
Contrasting due diligence conducted by business with peer reviewing in academia and by international organizations provides further insights into this mode of qualitative information disclosure. In both due diligence and peer reviewing, the compilation of information is case-based, qualitative and frequently involves face-to-face encounters as part of the process. In academia, peer review refers to the anonymous process of evaluating manuscripts, but in employment situations it is also concerned with the review of a person’s identity, references and CV. Peer reviewing is ‘about a particular kind of recontextualization, reputation, and regard’ (Maurer, 2005: 488). The set-up resembles laboratory work, where documentation and experimentation, as well as the generation, maintenance or termination of relationships, play an important role (Porter, 2012). Peer reviewing of states, conducted by international organizations such as the OECD and the EU, is a more open process. Specially appointed officials from member states carry out missions to other member states to examine the degree to which the examined state has adopted the policies agreed upon in international conventions. The system entails day-long interviews conducted by the examiners with selected representatives from the country being examined. Learning, value sharing and mutual trust play an important role as examinations proceed. This can open up discussions between the examiner and examinee in which the latter can be challenged to explain and defend him or herself in public, bringing accountability into the process (Messner, 2009; Porter, 2012). The outcome of the peer review is a written report, which is intended to make visible often problematic dimensions of the examinee and contains recommendations for improvement.
Once it enters the public sphere, the report begins to live a life of its own, interacting with state officials and civil society actors representing a wide range of organizations, who translate the results in accordance with their specific interests and purposes.
The knowledge that can be gained through due diligence and peer reviewing is limited. The final reports from these processes, whether published or not, only include the information that the mode of examination affords and requires. There is no way of gaining full certainty about clients, the origin of their funds and the nature of their business, nor about the quality of candidates for positions in academia or about the progress made by governments in terms of complying with international agreements. But the point is not just that peer reviewing and due diligence provide limited disclosure. The narrative representations contain knowledge and judgements about what to see or refrain from seeing, and they package information in particular ways. When these packages start to travel and interact with those who use them, the confidence in their truth becomes decisive for the roles the packages come to play and for decision-making as such. For the examiner, the peer review or due diligence report provides the basis for recommendations and decisions based on ‘reason’. Certainty and truth can be replaced by a concern for personal or organizational reputation (Maurer, 2005). For the examinee, peer review or due diligence confers the identity of a ‘good or bad guy in class’, potentially leading to inclusion in or exclusion from a ‘club’ (e.g. the OECD) in the case of an examined country, or a network of suppliers in the value chain in the case of a vetted company. For critics outside the examination process, due diligence appears to focus on the wrong issues; indeed, it can be accused of being ‘undue’ to the extent that it serves as a smokescreen for the continuance of problematic business practices (Global Witness, 2009).
Because of its situated, diagnostic and historical nature, and its focus on persons, context and narratives, the qualitative mode of peer review and due diligence differs from the ‘depersonalized’ scrutiny enacted through the quantitative mode, which we analyse in the next subsection.
The quantitative mode: rankings
The quantitative mode focuses on the production and communication of numbers (Porter, 1995). Numbers can mark, as when they are used to identify particular objects (license plates on cars), persons (shirt numbers of a sports team) or locations (postal codes). Counting also extracts a particular quality of the objects being counted and leaves aside all their other qualities. The resulting number is far more mobile than the object. Words share this quality to a certain degree, but numbers are sparer. Numbers are also more stable and precise than words: ‘Even less polysemic words than democracy have more complex meanings than a number like 83’ (Hansen and Porter, 2012: 413). Finally, numbers can commensurate. Commensuration refers to the ‘valuation or measuring of different objects with a common metric’, implying not only that all qualitative difference is transformed into quantity under a ‘shared cognitive system’ (Espeland and Stevens, 2008: 408), but also that a hierarchy of knowledge can be established with high and low positions. Importantly, commensuration does the trick of conveying the impression that measurements involved are ‘in principle replicable and not dependent on when, where and whom the measurement is done. This is the foundation of their impersonality and objectivity’ (Power, 2004: 769).
The quantitative mode of disclosure is thus much more distant from the objects and subjects it seeks to make visible than the qualitative, relying on the compilation of disparate sets of data sources, which, through commensuration, are then transformed into rankings that allow for comparison across cases. The selection of specific sources and the calculative processes that go into the construction of rankings are largely concealed, but in reality they rest on the work of mobilizing human actors and materials. Of course, the concealment of the selection of sources and of the human work necessary is also very much part of the qualitative mode just analysed: the underlying institutional classifications that make the selection of particular sources relevant in due diligence and peer review processes are typically taken for granted, as are the work and entanglement of humans and non-humans in the examination process. However, all such black-boxing is magnified in the quantitative mode because the categories of similarity and difference and the establishment of a common metric, a sine qua non for transforming quality into quantity, not only have to be negotiated between those who design the measurements; they must also be put into aggregated forms, such as rankings. All this involves the mobilization of specialized knowledge, bureaucratic organizing and the mastering of calculative and communication technologies. In contrast to the qualitative mode, where personal encounters are important, the creators of rankings need no personal or bodily engagement with the field or issue being measured.
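The commensuration step described above can be sketched in a few lines. This is a toy illustration only: the country names, raw scores, rescaling function and equal-weighted composite are all invented for the purpose, and no actual index is constructed this way.

```python
# Toy sketch of commensuration: two incompatible qualitative assessments are
# mapped onto a common 0-100 metric, aggregated, and turned into a hierarchy.

def rescale(values):
    """Min-max normalization: project raw scores onto a shared 0-100 scale."""
    lo, hi = min(values.values()), max(values.values())
    return {k: 100 * (v - lo) / (hi - lo) for k, v in values.items()}

# Two hypothetical sources measured on incompatible scales (invented data).
survey_a = {"Alpha": 7.2, "Beta": 4.1, "Gamma": 8.9}   # expert survey, 0-10
survey_b = {"Alpha": 55, "Beta": 38, "Gamma": 71}      # business poll, 0-100

a, b = rescale(survey_a), rescale(survey_b)

# Commensuration: qualitative difference becomes a single shared number,
# which in turn yields a hierarchy of high and low positions.
composite = {k: (a[k] + b[k]) / 2 for k in survey_a}
ranking = sorted(composite, key=composite.get, reverse=True)
```

The point of the sketch is how much disappears in the process: the choice of sources, the normalization and the equal weighting are all design decisions, yet only the final ordering circulates.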
Even though rankings are seriously questioned methodologically, they appear as objective representations. Rankings are very often designed to anticipate expectations, facilitating alignment to specific plans and programs for action. In contrast to due diligence and peer review reports, the importance of the aesthetic dimensions of rankings is evidenced in the glittering publications with graphs, tables and indices disseminated by public and commercial organizations worldwide, and more recently in the attempts to make complex comparisons accessible and dynamic online, enabling multicolored comparisons over time and across space. Consider the example of Gapminder, which under the heading ‘unveiling the beauty of statistics for a fact based world view’, sets out to transform global statistics into digitalized maps that illuminate gaps in world economy and politics over time, a device which can be aligned to educational purposes (http://www.gapminder.org).
Rankings are disseminated by public media and computer systems. They circulate instantaneously and at great distance, interacting with those who use, translate and align these packages of knowledge to their own objectives. The example of the Corruption Perception Index (CPI) illustrates this process. Since the mid-1990s the NGO Transparency International (TI) has published an annual CPI, an aggregated performance index that draws on different expert and business surveys to measure the perceived levels of public-sector corruption in most countries of the world. The CPI presents a hierarchy of good and bad performers, and it was a breakthrough in the construction of corruption as a ‘global policy issue’ in the 1990s by conveying the general impression that corruption can actually be measured. As Krastev (2000: 37) observes, its ‘most important effect was the public conviction that it was possible to compare levels in certain countries and to monitor the rise of corruption in any one individual country’.
While the systematic introduction of quantitative methodologies in anti-corruption campaigns from the 1990s onwards has made the spatial distribution and frequency of the phenomenon visible and communicable through rankings like the TI CPI, the methodologies have been widely criticized as flawed, even by TI itself: the CPI does not measure the actual practice of corruption but only perceptions of it, and it compresses national complexity into a single-number assessment of a country, which is then repeated by officials and investors and thereby acquires false authority. All this leads to simplistic accounts of corruption, as well as to public debates, recommendations and decision-making based on distorted and biased information (Hansen, 2012).
The criticism levelled also suggests how the ideal of ‘uncontaminated’ measurement can provide authority to quantification, maintaining its ‘scientific stamp’. By implication, the criticism also reflects the more general notion that rankings can indeed be performative, contaminated or not, impinging on listed objects and their surroundings. Rankings are not simply techniques for representing things, but are themselves capable of ordering organizational and social action once they become latched onto institutional agendas. In fact, despite widespread criticism, the annual CPI remains one of the most widely used corruption indices for decision-making in important matters. It forms part of decision-making processes in credit rating agencies and amongst aid donors, and is recommended by the consultancy industry as one important instrument in the construction of risk management systems in corporations. The OECD suggests it as a tool for corporations to assess their risks in ‘weak governance zones’. It is an important component of the World Governance Index, an aggregated index developed and promulgated in the context of the World Bank (Porter, 2012). Ironically, one of the most significant effects of the criticism of the CPI has been the further refinement of quantitative methodologies in this particular area, not their abandonment.
Research has begun to analyse the ordering capacities of rankings and other indices and lists, such as benchmarking, ratings and blacklists, in more detail. For instance, rankings are important in economic and financial policies, where they operate as competitiveness tools for governments, serving as markers of inclusion and exclusion where decisions are made (e.g. Fougner, 2008; Larner and Le Heron, 2004; Löwenheim, 2008). Credit ratings, issued by private actors, shape market practices and institutional interventions on generally flawed if not manipulated terms, with obvious negative consequences for some while others continue doing business as usual (Sinclair, 2005). In the educational and research domains, rankings enable ‘governance by comparison’ carried out not only by national authorities and international organizations such as the OECD, but also by private actors issuing them. As in other areas such as corporate social reporting (Hess, 2007), governing by comparison (through the knowledge hierarchies made visible by rankings) is often believed to ‘ratchet up’ minimum standards. But the competition that forms part of this process can have severe consequences for organizational practices in public schools and universities, such as self-fulfilling prophecies, marketization, the hollowing out of professional pride, anxiety and resistance. More than this, the creation and public release of rankings and ratings without the consent of those being measured can produce strong responses, suggesting the performative force of these devices (Espeland and Sauder, 2007; Power et al., 2009).
Comparing numerical rankings and ratings with the more qualitative public blacklists provides further insights into the performative aspects of ranking systems. Public blacklisting can be seen as a form of non-numerical ranking that unambiguously marks a boundary between the acceptable and unacceptable. Blacklists are created by scrutinizing and classifying the activity of an entity on the basis of pre-established criteria that mark out an ethical boundary for inclusion in or exclusion from a community, en bloc (Hansen, 2012; Sharman, 2009). They call into question the fundamental legitimacy of the listed entities, cutting connections, but they also create new linkages and relations when combined with disciplinary measures. For example, public blacklists are increasingly used to combat money-laundering and corruption, with considerable significance for those organizations appearing on a list: companies lose contracts and their license to operate. In the 1990s, the World Bank introduced blacklisting of fraudulent companies. The list of ‘debarment’, which is accessible on the Bank’s website and updated on a regular basis, is compiled by Bank specialists and contains the names and addresses of the listed firms and associates (World Bank, 2011). But blacklisting is also used as a deterrence tool that connects prospective business partners to self-regulatory measures, enrolling companies into management practices and wider networks introducing anti-corruption standards. In connection with its blacklist the World Bank has developed a Voluntary Disclosure Program (VDP), which targets any company that has been or plans to get involved in Bank-related projects. The VDP provides firms with incentives to disclose their knowledge of fraudulent and corrupt practices, as well as to reduce the risk in ongoing and planned projects by getting engaged in anti-corruption work.
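The boundary-drawing logic of blacklisting can be rendered as a minimal sketch. The criteria, firm names and findings below are wholly invented; actual debarment lists, such as the World Bank’s, follow their own investigative procedures.

```python
# Toy sketch of blacklisting as boundary-drawing: pre-established criteria
# mark an entity as acceptable or unacceptable en bloc, with no gradation.

CRITERIA = {"fraud_finding", "bribery_conviction", "bid_rigging"}  # invented

def blacklisted(record):
    """An entity is excluded if any pre-established criterion applies."""
    return bool(CRITERIA & record["findings"])  # set intersection is non-empty

firms = [
    {"name": "Acme Ltd", "findings": {"late_filing"}},           # invented
    {"name": "Bravo Co", "findings": {"bribery_conviction"}},    # invented
]
excluded = [f["name"] for f in firms if blacklisted(f)]
```

Unlike a numerical ranking, there is no middle position here: the device outputs only inclusion or exclusion, which is precisely what gives it its cutting force.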
In all, rankings obscure the qualitative complexity of the actors, settings and interaction being depicted. The considerable work needed to create the rankings in the first place is also largely concealed. What comes out as ‘transparent’ is an abstract representation that cannot but make opaque the complexity that goes into social life and organization everywhere. Like qualitative modes of inquiry such as due diligence, rankings thus rely on historical data for purposes of diagnostics. As we will see in the following, these orientations are challenged by emergent, algorithmic modes of disclosure which rely also on real-time data and engage more in prognoses than diagnoses.
The algorithmic mode: big data analysis
Big data analysis is closely related to the digital turn. It is based on algorithmic data mining and data aggregation, and as such, it takes us further away from the experience-based disclosure device of due diligence, as well as beyond the quantification involved in rankings, although it clearly builds on it. Much of the current hype about big data (McAfee and Brynjolfsson, 2012) articulates high-flying ideas about the ability of big data analysis to produce truths. According to Anderson (2008), big data obliterates science, hypothesis-testing and theoretical models and explanations altogether: ‘This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.’
The notion that ‘numbers speak for themselves’ echoes our earlier discussion of transparency understood as unmediated access to reality. Here it is complemented with the belief that large data sets can generate insights that were previously impossible and ‘with an aura of truth, objectivity, and accuracy’ (Boyd and Crawford, 2012: 663). But the access to reality is mediated by algorithmically coded software and hardware devices, which afford particular kinds of knowledge and insights, but never the full picture of anything. Like other mediating technologies, big data analysis prompts utopian promises and dystopian visions, but now on a much broader scale: it makes visible the scope, depth and future patterns of important societal challenges, such as global climate change, serious illnesses, illicit financial transactions and terrorism. This helps us to design solutions. At the same time, it is considered a product and vehicle of growing corporate and state intervention in social life, raising important questions about surveillance, manipulation, privacy and freedom (Ellerbrok, 2012; Morozov, 2013).
It is the advent of advanced algorithms, and the rapid spread of ‘datafication’ resulting from activities and objects that leave ‘digital traces’ (Mayer-Schönberger and Cukier, 2013), that has made it possible to analyse very large amounts of information, to search for patterns and, not least, to develop profiles of specific actors and processes (Hildebrandt, 2011). A profile is created through the selection of relevant information. It does not deliver proof of causality or any conclusive reasoning, but mathematical correlations indicative of expected behaviour, arrived at by focusing either on patterns or on anomalies in the data. The kind of knowledge produced by profiling is different from traditional scientific knowledge, which starts out with hypotheses to be tested in search of causes or reasons (Rouvroy, 2011).
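The anomaly-focused variant of profiling can be illustrated with a minimal sketch. The transaction amounts and the z-score threshold below are invented; real profiling systems are vastly more elaborate, but the epistemic point is the same: the output is a statistical deviation from a pattern, not proof of wrongdoing.

```python
# Minimal sketch of anomaly-based profiling: flag observations that deviate
# from the statistical pattern. The flag is a correlation-style indication
# of "expected behaviour" violated, not conclusive reasoning about causes.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, x in enumerate(amounts)
            if abs(x - mu) / sigma > threshold]

# Invented transaction amounts: one outlier among routine payments.
transactions = [120, 95, 130, 110, 105, 9800, 115, 98]
suspicious = flag_anomalies(transactions)
```

No hypothesis about why the sixth transaction is large is tested here; the device simply marks it as anomalous, leaving interpretation, and its consequences, to whoever acts on the profile.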
In practice, big data analysis involves collection and aggregation of data from very different sources, including real-time digital traces from bank transactions, geo-locating devices, ‘likes’ and updates on social media, and more stable forms of information stored in databases, reports and public records. A refined profile allows companies to provide targeted services at the right time and in the right place. For governments, big data analysis and profiling augments the trend towards surveillance and pre-screening to predict and pre-empt deviant or criminal activity. For both business and governments, big data analysis increasingly dominates market and political risk analyses, which conventionally relied on due diligence or traditional quantification as sketched above. For the ordinary citizen, leaving digital traces can be used by business and governments for purposes not envisioned by the individual in question. While such ‘function creep’ is not per se a bad thing, the ‘nature of these new developments often mean that they are not subject to rigorous public scrutiny or political debate’ and can ‘conflict with basic human rights’ (Ellerbrok, 2012: 212). The current NSA affair spurred by the revelations made by former NSA contractor Edward Snowden suggests the complexities of this issue (e.g. Gellman and Soltani, 2013).
Big data analyses offer speedy ways of compiling, combining and mining multiple types of data. Rather than looking for singular evidence—the ‘smoking gun’ of an illicit transaction or a compromising record as in due diligence—big data devices such as Gotham (www.palantir.com/platforms) or compilations of data about corporate structures, such as Opencorporates (http://opencorporates.com/viz/financial/), allow investigators to search for patterns and revealing correlations, and to present these using advanced visualization technologies. Such devices play an important role in the preparation of so-called ‘Suspicious Activity Reports’ (SARs) to prosecute money laundering and illicit financial transactions, allowing investigators to easily search large data sets by using filters and algorithms that provide automated detection of trends and shifts in regional data (Gilbert, 2013). Similar techniques can also be found in anti-corruption efforts. Websites like www.ipaidabribe.com in India compile thousands of reports about bribery and corruption practices, including their types, locations and frequency, into so-called Bribe Trends visualizations of where in India, and in which public agencies, bribery is most widespread (Wickberg, 2013).
Big data analysis captures both personal data and more extensive relations and correlations (Mayer-Schönberger and Cukier, 2013). But the automated form of knowledge production means that big data analyses occur at great distance from the phenomena being scrutinized. This distance cuts many ties to personal experience and more traditional ways of making judgements. Some even consider the financial crisis to be the result of relying on big data analyses at the expense of established knowledge and experience, held, for instance, by bankers (Kallinikos, 2013). Of course, friendships and social recognition can be ‘datafied’ through Facebook ‘likes’, and financial transactions leave digital traces. The value of studying digital traces to make financial transactions legible and governable is obvious (Mayer-Schönberger and Cukier, 2013: 7–8). Nonetheless, many social activities remain outside and will probably never be made visible by digital technologies. Big data reintroduces long-standing concerns about the existence of digital divides (Norris, 2001). ‘Bigger data are not always better data’ (Boyd and Crawford, 2012: 668), and basic questions of inclusion and exclusion pertain.
Like rankings, big data analysis also makes it possible to produce representations that are publicly available and highly mobile. But whereas public rankings are fixed forms until the institutions behind them update them, big data representations can afford engagement with users. The provision of data sets and technological platforms increasingly offers individuals and organizations with minimal data-crunching skills opportunities to repurpose digital traces, add new layers of data or cross-reference data for their own purposes. However, whereas we can gain access to due diligence reports through courts, or learn enough mathematics to peek behind the production of an index—even if the origins of the numbers may be obscured—the algorithms of big data analysis are rarely accessible to anyone outside the super-crunching organization.
Conclusions
This article has explored a number of different ways in which transparency can be produced today. By analysing some of the distinctive properties of disclosure devices such as qualitative due diligence, quantitative rankings and algorithmic big data analysis, we have gained insight into how humans and materials become entangled in the manufacture of transparency, ranging from the situational and unique to the highest level of aggregated abstraction. This happens through the deployment of various methodologies and mediating technologies, some of them automated, as well as through the circulation of knowledge representations and packages in socio-technical networks, some of them more public than others. We also indicated that transparency is not restricted to the realms of anti-corruption and anti-money laundering. There is a surge in demands for disclosure in multiple spheres, including education, health and the environment, but also growing concerns about privacy and blanket forms of data aggregation.
Our emphasis on analysing mediating technologies complicates naïve conceptions of transparency as full disclosure and objective truth, but also conceptions where the contemporary hype around transparency is explained away with reference to megatrends or all-encompassing mentalities. The production of transparency goes into the very grain of the mediating methodologies that help to transform data into information, and information into knowledge. The velocity of mobile and reusable forms of data and information, which make visible patterns and relations, not only suggests that different disclosure devices and their ensuing modes of producing knowledge and insights can be combined, but also that questions about changes in epistemologies, organizational and societal design and power can usefully be addressed in future studies of transparency.
Whereas more traditional qualitative and quantitative modes of producing transparency seem to relate to well-established epistemologies and ethics, big data analysis in many ways reframes ‘key questions about the constitution of knowledge, the processes of research, how we should engage with information, and the nature and categorization of reality’ (Boyd and Crawford, 2012: 665). Furthermore, as Kallinikos (2013: 42) observes: ‘The strength of analytic perception “big data” affords complexifies life by exponentially multiplying the distinctions upon which human decisions draw, and rationalizes the context within which human life unfolds. It also subjects people to the influence of powerful actors in ways that are still beyond adequate understanding and social control or legislation.’
Algorithmic disclosure devices will become more prevalent. Future studies of the production of transparency should analyse in detail how organizations handle this situation, including the politics of knowledge involved. This would not only be a matter of exploring the rationalization and consequences resulting from organizational attempts to produce transparency in this particular mode, but also of investigating the potentially dramatic consequences for organizational life when knowledge is boiled down to data that ostensibly ‘speaks for itself’.
