Abstract
Various data platforms force the individual into constant presence and visibility. However, the ways in which datafied environments relate to experienced vulnerabilities in our everyday lives remain unclear. Through diaries produced by and interviews with participants from three groups who occupy presumably vulnerable positions and who currently live in Finland, we explore the ways in which people challenge expectations and prior assumptions related to forced visibility. Using the concept of tactics developed by de Certeau, we aim to understand how individuals make everyday surveillance culture livable through what we call tactics of invisibility. Based on our analysis, we identify three kinds of tactics in this context: keeping worlds apart, cropping oneself out of the frame, and sidestepping. We interpret tactics of invisibility as ways of shaping a space for oneself that illustrate fractures in what previous research has framed as digital resignation.
Introduction
The means by which datafied platforms have permeated our everyday lives have received considerable attention, both from the public and in the recent literature concerning critical data studies (Kennedy, 2018). Through a variety of datafied platforms throughout our everyday lives, we write ourselves into being. Simultaneously, we engage with data and data technologies within a nexus of data-mediated relationships (Lee, 2021). This situation creates tensions between being connected and exercising control over data. Consequently, we determine our relation to data, to data interfaces (through which we connect to digital contexts), to “data circulation” (i.e. the ways in which data are stored and move), and eventually to the ways in which data are abstracted and manipulated (Lee, 2021). Data and the relations in which we engage both with and through them profoundly shape our everyday living with respect to becoming a (digital) subject. As Barassi (2019) argues, the presence and visibility of individuals on datafied (social media) platforms—and the associated uses of surveillance technologies and personal data collection—are mechanisms by which datafied citizens are constructed and are part of neoliberal governance.
Various data platforms force the individual into constant presence and visibility (Treré, 2021) and normalize surveillance culture in everyday life (Hermida and Hernández-Santaolalla, 2020). By visibility, we mean the logic of contemporary digital life and its requirements to be seen. The incitement for visibility is most often discussed in relation to social media self-presentation, as part of influencers’ brand management and visibility labor (Abidin, 2016). However, the common understanding of “pics or it didn’t happen” (Draper, 2020: 1628) applies not only to social media spheres. As Draper (2020) observes in the context of the online visibility management industry, there is an implicit tension created through obligations to be seen: the requirement to strike a balance between sharing too much and not sharing enough. The danger of not sharing enough—and not being visible—is that of disappearing and “not being considered important enough” (Bucher, 2012: 1171).
Recent literature concerning critical data studies uses the term vulnerability to problematize this situation (Hermida et al., 2020). The drive for constant visibility also brings to light new types of vulnerabilities, which often multiply existing (structural) vulnerabilities. For example, in the realm of health self-surveillance and self-monitoring, the so-called datafit—a term that is applied to individuals who are data capable, data literate, and occupy a good socioeconomic position (Charitsis, 2019)—can afford to exhibit visibility on their own terms. Individuals who are not so datafit either remain out of sight or choose to embrace visibility since there are no real alternatives to doing so. Forced visibility and coerced participation (Barassi, 2019) also put certain individuals in profoundly precarious positions. The most obvious examples are platform economy workers and the data-based control that they cannot escape (Firmino et al., 2019). However, gig economy workers are not the only ones who occupy positions of forced visibility. Indeed, based on (visible) data, subject positions and frames for agency are constantly constructed, and these positions and frames seldom match individuals’ lived experience and self-understanding (Thornham, 2019). These algorithmic vulnerabilities are difficult to mitigate. Datafied environments can multiply previously existing vulnerabilities, and as Eubanks (2018) notes, they often produce a form of double vulnerability. Whereas algorithmic vulnerabilities function more at the individual level, Eubanks’ idea of double vulnerability suggests that we should focus on the processes by which structural inequalities are incorporated into everyday datafied life.
Our focus on individuals who represent groups that are commonly defined (e.g. in political discussions) as “vulnerable” is motivated by the need to understand how these presumed vulnerabilities may be experienced in everyday data relations rather than by a desire to explore vulnerability or vulnerabilities in terms of their essence. We do not presume that belonging to an ostensibly vulnerable group as defined by structural factors such as legal status, employment situation, or age necessarily dictates individual experiences. The three groups that our participants represent (unemployed individuals, undocumented persons, and older adults) share the social position of being a target of particular governance strategies and related surveillance and are thus of particular interest in the context of understanding everyday surveillance culture. Unemployed people face governmental surveillance, for example, in the form of the need to report their activity as job seekers. Undocumented people have no legal status and are afraid of being caught; therefore, they avoid the authorities, even on digital platforms. For older adults, governance and surveillance occur on a more discursive level. Their successful digital inclusion is seen as one solution to the challenges of population aging. At the same time, their digital practices can be devalued and positioned in opposition to the practices of younger generations (Gallistl and Wanka, 2022). Regarding forced visibility, the positions of the undocumented and unemployed groups are in a sense opposite: unemployed people are incentivized by the authorities to be visible, whereas undocumented individuals are incentivized to remain invisible. Older adults, in turn, are incentivized to be visible by the digital inclusion discourse, yet their digital practices may simultaneously be marginalized regardless of their digital skills or experience.
These relationships to digital platforms are interesting because they shed light on how these positions and expectations are negotiated in practice. The selected groups of participants have clearly different levels of predefined vulnerabilities, and the risks associated with visibility and surveillance are also understandably different among these groups. However, we are aware that designating a group as vulnerable can itself be seen as a form of control (see, for example, Cole, 2016: 261). That is why we focus not on socioeconomic categories but on lived experience, examining vulnerability from the perspective of the individual and the digital subject (see, for example, Kennedy, 2018).
We propose that through tactics of invisibility, individuals negotiate their paths through the nexus of everyday data relations and that these tactics indicate an aspect of the relevant vulnerabilities for which we do not yet necessarily have words. In this article, we attempt to understand what these tactics are in order to investigate whether they contain voices that oppose platformization and the precarious positions it creates.
Digital resignation and resistance
Draper and Turow (2019) introduce the concept of digital resignation to describe feelings of helplessness when individuals aim to control the flow of data on and between platforms. These feelings of helplessness, in turn, result in passivity concerning data practices, even when those practices are perceived to be annoying and/or unfair. Draper and Turow (2019) argue that digital resignation serves the interests of platform providers because it hinders the ability to envision alternatives. Solutions to surveillance culture are individualized, thus hindering collective action against it. Individual actions, although they may be satisfying for the individual, may not change the underlying power structures in question. Individuals may have the power to control the extent to which they share their data, but this control applies only within the frame defined by the service (i.e. its developers) that forces the data to be shared in the first place (Seberger et al., 2021). However, digital resignation does not necessarily imply that individuals are indifferent to privacy and the ways in which their data are used. Instead of focusing on the frustrations experienced toward datafied platforms and data circulation as such, in this article we extend our understanding of resignation by showing that what appears as resignation may conceal tactics of invisibility that emerge from the vulnerable situations occupied by individuals.
Resignation can also be viewed from the perspective of media activism. In the context of studying social movements, Treré (2018: 11) suggests that we focus our analytical gaze on the invisibilities connected to digital media activism and on the “hidden, submerged, and peripheral places where movements originate and develop in unexpected ways. It requires a special attention to the silent process of formation and unfolding of imaginaries that is crucial to movement making.” This suggestion points to the fact that the everyday activities that occur on digital platforms may also be interpreted as signals of emerging but not yet formulated acts of resistance. It is thus worth investigating whether resignation can also function to promote the agency of vulnerable groups; granted, this agency may not be collective, but it remains an approach that vulnerable groups facing datafied environments can adopt. Although it would be too straightforward to interpret resignation as an active stance toward platforms, individual acts of resignation may have some value in the context of living with platforms and their cultures of constant visibility. Indeed, as Phelan (2003) argues, visibility and the quest for recognition have been understood as a proper political agenda aimed at the pursuit of better living conditions among, for example, underrepresented societal groups. The underlying assumptions are that visibility is the only path to power and that increased visibility equals increased power. As she argues, this approach is an “ideology of the visible” (Phelan, 2003), which establishes a contrast between visibility and invisibility. The danger of this approach is that it erases or leaves unnoticed the power of the unmarked, unspoken, and unseen. Visibility and aims for visibility not only empower but may also do the opposite. Indeed, visibility may also be a trap. (Phelan refers here to Lacan but could just as easily refer to Foucault’s analysis of panopticism.)
In the context of datafied environments, it is clear what following the ideology of the visible may mean for an individual. In this context, one might consider remaining out of sight and unmarked to be an active (and at least micropolitical) strategy. It is tempting to suggest that resignation could be read as an ideology of invisibility and as a counternarrative to the “ideology” of visibility. However, this interpretation would once again overlook the “silences” (on the platforms) that emerge from individual acts of resignation and the implications of those “silences.”
Blas and Gaboury (2016) examine how the use of datafied surveillance and biometric technologies increases individuals’ visibility. The aim of these acts and technologies is to facilitate governance, but in fact, they may be a form of administrative violence. The idea underlying this approach is that by demarcating and identifying human presence, it is easier to eliminate the unknown in terms of individual agency and actions. By producing maximum visibility, the individual and the world become seen and thereby also “known.” Once the individual is known, the frames of their agency can be predicted and preempted. Through such preemption, unwanted acts can be prevented ahead of time. The same tone is used in discussing automated media in datafied environments by Andrejevic (2019), who argues that the logic of automation is founded on preemption. Whereas Andrejevic offers a dystopian vision of individuals’ capacity to live in datafied environments—and the incremental ways in which preemptive logic constructs our positions as subjects within automated media environments—Blas and Gaboury (2016) emphasize the power to resist that may be connected to invisibilities. These authors discuss the ways in which (political) visibility, as a form of control, may be resisted through invisibilities and the fact that such invisibilities leave room for a politics of refusal. While their focus is on queer politics (and masking) as acts of resistance to identity categories, the general underlying idea of their examination is that an individual may refuse to enter into formal public discourse as an identifiable subject. This idea of a known, identifiable position and individual attempts to fit into the “picture” can be related to everyday encounters that occur within datafied environments, a view that was also evident in our data, as we will see. 
Indeed, invisibility may also be a form of unequal privilege that individuals who already occupy structurally vulnerable positions cannot afford. However, Blas and Gaboury (2016) suggest that tactical invisibility may also be adopted as a way of engaging with and resisting digital media technology and surveillance. Invisibility is nonetheless not a straightforward solution; rather, it is the tensions around invisibility that need to be explored. Instead of framing invisibility (or visibility) as a means of giving up on or resisting some abstract platform power, we should focus on how individuals themselves experience the tension around being visible, how they respond and act, and most notably, what meanings they use to explain these acts.
It remains unclear who has the opportunity to adopt a resistant position toward digital platforms. The problems posed by data-driven culture have been analyzed from the perspective of discrimination and inequality by focusing on already-disadvantaged or -vulnerable groups (Eubanks, 2018; Favaretto et al., 2019; Kennedy, 2018). These studies have raised questions concerning how prejudices, inequalities, and discrimination may be incorporated into data processing, a situation that is likely to deepen existing societal inequalities (Gangadharan, 2012). For example, a study by Eubanks (2018) shows that digital tools in public services replicate old systems of power and privilege because the assumptions made by the services and platforms are not based on actual individual situations but rather on profiling based on information regarding social class, age, gender, ethnicity and so on. Such categorization resulting from individuals’ membership in a social category both deepens existing inequality and creates new forms of inequality (Gangadharan, 2012).
However, the ways in which people experience life do not always represent social categories and expectations. Not all the dimensions of vulnerability can be determined externally. Therefore, it is important to allow so-called vulnerable groups to describe and differentiate their experiences in a data-driven world. De Certeau’s writings resonate with debates in media and cultural studies concerning issues of citizen resistance and empowerment in data-driven culture and provide insight into how people engage with the formal structures of power via their everyday actions. In this context, strategies are described as ways in which powerful actors and dominant institutions define official practices and knowledge (de Certeau, 1984). People’s everyday lives not only involve reliance on these official elements but also consist of practices and forms of knowledge that subvert and undermine official norms and “ways of doing.” The relevance of de Certeau’s writings to the digital era lies in his notion of tactics, as this notion makes it possible to address issues of digital agency and resistance. He argues that although there is an imposed system that controls everyday life, individuals can “reinvent” everyday life through subtle, everyday tactics, which form a domain for the “nonpowerful” and can be used to either survive within the power structure or reduce the effects of control and power. Although these tactics do not initially have the power to challenge dominant frames, they include mundane, half-conscious practices that may, to a certain extent, aim at resistance. Tactics function as hidden, disguised, everyday practices by which people can threaten power structures without publicly challenging them and by which they can negotiate recognized systems of power—here, experienced forced visibility.
For example, considering data privacy, recommended practices include becoming aware of the “terms of service” required by social media platforms and reconfiguring the personal settings, permissions and other parameters of social media accounts. These strategies demand that individuals become more vigilant with respect to the risks implicit in engaging with digital devices and platforms. We base this study on our presumption that, while aware of these demands, people continue to approach digital media using tactics that are adapted to their own everyday lives and experience.
De Certeau’s (1984; De Certeau et al., 1980) concept of everyday tactics has been widely used to understand resistance in its mundane, everyday forms (e.g. Kahveci et al., 2020; Thongsrikate et al., 2018; Yilmaz, 2013). For example, subaltern groups may use tactics to both survive and undermine repressive domination, especially in contexts in which rebellion is excessively risky. The concept of tactics has been extended to the realm of digital platforms by several studies (De Ridder, 2015; Gangneux, 2019, 2021; Selwyn and Pangrazio, 2018; Vainikka et al., 2017; Van der Nagel, 2018; Willson, 2017; Witzenberger, 2018). These studies use the concept to describe how digital platforms, which may conflict with personal identities or social expectations, are “made habitable.” This concept acknowledges the demands of platforms but recognizes individual agency within the frameworks established by such platforms. Furthermore, these studies identify the particularity of digital media with respect to tactics: the platforms change constantly, and individual tactics may themselves be developed into platform features. We see that these two notions, acknowledging both human and technological agency and understanding the particularities of digital platforms, are also necessary for this study’s aim of understanding the interplay between forced visibilities and lived vulnerabilities.
In what follows, we explore how people challenge expectations and prior assumptions related to forced visibility. How do they cope with these perceived pressures, demands, and expectations? What kinds of tactics are employed in the search for experiences of control and governance?
Material and methods
The empirical material for this article consists of qualitative interviews with and diaries produced by members of three presumably vulnerable groups: unemployed individuals, undocumented persons, and older adults. Participants kept a diary of their daily lives, focusing on the digital dimensions, for 7–10 days. The participants were asked to note down at least once a day their encounters with digital technologies, including reflections on their experiences. These notes were prompted by suggested topics, such as “situations in which you think about your privacy or its boundaries.” These diaries were mainly written. Ten undocumented participants kept diaries by recording voice messages in their native languages, which were translated and transcribed verbatim. The length of the diaries varied from one to nine pages, averaging four pages. After completing the diary, each participant was interviewed face-to-face or via Zoom or phone. Sixteen undocumented migrant persons or asylum seekers, 30 unemployed persons and 42 older persons living in Finland at the time of the study (2020–2021) took part in data collection. The undocumented and unemployed participants were all of working age (20–63 years old), and the older adults were retired (69–85 years old). Fifty-two of the 88 participants were female. The quotes below are excerpts from the interviews.
The interviews, like the diaries, focused on everyday life in the context of datafication. The researchers discussed participants’ use of the Internet, digital services and (social) media as well as related concerns, experiences and feelings in general. Research ethics were upheld from the preparation of the questions to the analysis of the answers and the preservation of the material. Ethical commitments included principles pertaining to the confidentiality and disclosure of information. For the undocumented individuals, we were also committed to assisting them with residence permit issues.
The diary study is a qualitative method used to collect data pertaining to user behavior, activities, and experiences over time. In a diary study, data are self-reported by participants longitudinally (Alaszewski, 2006; Milligan and Bartlett, 2019). Collecting chronologically organized diary data can provide unique insights into the lifeworlds inhabited by individuals: their experiences, actions, behaviors, and emotions as well as how these factors develop across time and space. Solicited diaries enable informants to participate actively in both recording and reflecting on their own data (see, for example, Bartlett and Milligan, 2015). Allowing participants to exercise control over their data and to reveal only as much as they want is an ethical practice when working with vulnerable groups, or anyone for that matter. The use of multiple diary-keeping methods makes it possible to include participants from different backgrounds. The use of this method also increased during the COVID-19 pandemic since it is not limited to a particular place (Nind et al., 2021). Similarly, ethnographic diaries (Markham and Harris, 2021; Gwenzi et al., 2020), digital storytelling and diary writing (Jones et al., 2020), and other expressive methods have been found to suit people’s needs to engage both individually and collectively in sensemaking during the pandemic.
The data were recorded and transcribed. They were initially coded by a research assistant who organized them thematically around aspects of datafied everyday life, such as surveillance, management of social relations and management of networked spaces. Instead of focusing on specific codes, the authors together explored the material by focusing on instances that they interpreted as relevant to forced visibility and ways of coping with it. These instances spanned several codes. De Certeau’s (1984) everyday tactics were used as a conceptual tool to analyze the mundane forms of action taken in response to experienced forced visibility. After identifying these instances in the material, the authors discussed their content to form the final categories by applying the principles of constant comparison (Strauss and Corbin, 1990): modifying the categorization to fit each instance under a thematic category (Glaser, 1992; Guest et al., 2012). The final categories are presented as subsections below.
Deep insistence on staying under the radar
Before introducing the tactics of invisibility that we identified, we discuss the ways in which forced visibilities are manifest in participants’ accounts of their everyday lives. The pressure associated with visibility was recognized by all the participants. They felt the need to address this topic in some way, although they had a variety of motives for responding to the platforms’ attractions. These individuals appeared to act and exercise resistance not against platforms as such or against some well-defined actor, but rather against the gaze of others as mediated by the platform. For example, some participants mentioned their need to represent themselves as active citizens in their public appearances, and the need for visibility was related to self-promotion. Through active posting on social media, one’s professional value in, for example, the job market increased. Requirements for self-promotional acts were experienced as if they were imposed externally. One participant described this situation as follows: There it is again, the professional me who speaks . . . So, I went to this kind of LinkedIn coaching, a very personal one-on-one coaching, where a career coach very critically evaluates my LinkedIn profile and gives feedback. And he then tells me directly that “your LinkedIn profile is asleep . . . You have to stay there for those traces of activity because otherwise it will be perceived, especially from the point of view of headhunting, it will be perceived that you are not interested in your career.” (Unemployed/6)
The official requirement for visible traces of social media activity further demands that individuals signal an active and professional “self.” The pressure for visibility exerted by social media markets is evident. The expressed fear of going unnoticed thus becomes a foundation for increasing social media visibility, for example, by broadening one’s social network. Visibility increases one’s “value” in the context of the network. This situation shows how these expectations of platform visibility may well harm the position of underprivileged individuals and demonstrates how this forced visibility may be part of a new fundamental digital divide (Syvertsen, 2020). Accordingly, the right to disconnect—and to step out of sight—might be a right that is available only to privileged individuals, with a hidden (moral) narrative of self-responsibility attached to opting out of keeping up with digital society (van den Abeele and Mohr, 2021). This withdrawal is possible through self-disciplinary acts of disconnecting (and hiding). Simultaneously, however, numerous datafied platforms demand visibility as a prerequisite for becoming part of a common digital society.
For many, the invisibility that they experience—due to their (structural) vulnerabilities—is related to the way in which visibility is deeply connected to individual trustworthiness, for example, when maintaining social relations with close family and friends. One undocumented participant explained forced invisibility as follows: Yes, when I was in Morocco, I shared everything, my location, where I was, I could do it with no problem and share where I live or something like that. When I started to live here, I put everything in private. Where I live, where I’m from, if I’m married or divorced, everything. Especially my friends, they have asked me, where are you, because they see that there is nothing on my Facebook, and someone, when I say that I was married, they say oh, it’s not believable that you didn’t say it on Facebook because before I could do it. But now I’m really afraid to share my location or pictures—I: Why? R: Why, because it’s my situation here. I’m afraid that someone might catch me or, you know [laughs]. (Undocumented/3)
Many undocumented people were indeed very concerned about digital surveillance. Some feared that the authorities from their home country were searching for them, but they also feared being located here and deported. Alongside these fears, a more general fear for the safety of family members remaining in their country of origin was expressed.
This deep insistence on staying under the radar was also forced by platform design. Among older participants, this forced visibility due to platform design took a form similar to what Bucher (2017) termed “cruel connections.” Reminders to users of past events are programmed into the platform, but it is not possible to judge how humans might react to these memories emotionally. Visibility is thus sometimes forced even after the account owner has passed away: “LinkedIn shows that she’s been 20 years in some job, and I know she died some 10 years ago. This is the sad part” (Older adults/3). The platform design can be interpreted as forcing visibility through its default settings, which do not always appear relevant. This situation is illustrated by the way in which some older participants discussed cloud services: I’m retired, so I don’t have anything I should share, or . . . at least I don’t have much. And then the folks with whom I keep in touch, they’re about the same age, and most of them don’t use services that way . . . nowadays, in these machines, it’s somehow very difficult to change those defaults, it always wants—automatically to cloud, cloud, cloud . . . I don’t want [laughs] them into that cloud . . . You have a lot of memory and everything in the machines nowadays so what, in principle, why this cloud system? (Older adults/7)
Furthermore, older participants often mentioned the difficulty of hiding from advertisers, echoing the findings of Ruckenstein and Granroth (2020): datafication and surveillance are often noticed through online marketing. What differentiates these individuals from the participants in the study by Ruckenstein and Granroth (2020) is that we did not identify a similar desire for the advertisements to “know” them better and hence to be able to offer more relevant content. Instead, they valued the sense of actively choosing the things that one buys: “ . . . if I need anything, I’ll go and buy it. Simple as that. You don’t need to come to my home and sell it” (Older adults/9).
Based on our analysis, participants negotiated their visibility throughout their everyday digital lives. The theme of visibility was linked to the gaze of others, rather than to platforms as such. The anticipated gaze varied across the participants (e.g. it might come from employers, friends, authorities, advertisers, or a more general expectation to take up new practices that are not perceived as relevant), but we interpret the common denominator to be that this gaze was experienced as part of the attention economy’s expectations and its various, contextual cultures of self-presentation and curation. The discussion of visibility/invisibility focused on a shared experience of living in a datafied digital world every day, which is noteworthy since our data were taken from three different participant groups, each of which exhibited a great deal of heterogeneity. Following Phelan’s argumentation concerning the (micro)political power of invisibilities, we introduce in what follows the three tactics of invisibility that we identified across all three groups. By examining these tactics, we aim to explicate everyday datafied existence and the agencies that our participants were able to exercise. We approach the tactics as situational responses to the anticipated gazes.
Keeping worlds apart to stay safe
The circulation of personal data on datafied platforms was a concern for all the groups. Users themselves actively and self-consciously restricted their use of various (social media) apps, although their reasons for doing so varied. Some users, in addition to censoring content, decided to avoid the applications altogether. Many described their tactics as a way of keeping the worlds in question apart: “The best protection for privacy is to live your life in this real world, because that’s where the biggest risks are, in the digital world” (Unemployed/7). Undocumented persons tried to protect themselves from perceived threats by using only encrypted apps, employing pseudonyms and avatars, avoiding open social media groups and changing SIM cards as often as possible. This chosen invisibility, however, did not stop them from following conversations and groups. One participant said that when she truly wanted to say something, she called a trusted friend and asked her to post the comment in question. For others, everyday tactics occasionally entailed using only selected applications based on communication needs and perceptions of the applications’ features.
Participants also employed less extreme tactics than restricting the use of certain apps, which allowed them to continue to use digital platforms for purposes that they viewed as important instead of disconnecting from those platforms entirely. Similar phenomena related to the selection of platforms have previously been studied from the perspectives of audience management and purposeful self-presentation (see, for example, Pitcan et al., 2018).
Instead of curating their presence, our participants considered which sphere of life was suitable for which platform and used a different application for each sphere. One unemployed participant (7) explained this process as follows: So, even if it’s, in my opinion, more practical in that sense to do it on, let’s say, Messenger or WhatsApp, you still have to keep those different worlds separate. If you use Slack, then you know that, OK, from there will come this kind of stuff, and now you have to relate to it in this way. As long as it doesn’t take up too much space in your brain, I prefer to spread the different parts of my life over different apps. But as soon as they start getting tangled up, then it’s done for.
For example, undocumented persons tried to protect themselves by using multiple profiles in different languages on social media. They also discussed changing the name and profile picture (which was seldom recognizable) of their Finnish profile to avoid being “found” by hostile peers or people from their country of origin. In the following excerpt, one participant described choosing which content to post as well as the codes of appropriate behavior for each profile: . . . And then I have two Facebooks. The one I had already in Pakistan; there are no pictures of me. But there is another, the one I made here in Finland. I have separated my friends, so that in this [profile] I have my Finnish friends, and in the other the rest. Q: Are you more active on your Pakistani profile? R: No, I am not active on both. On the other, I don’t even have a profile pic, so no one knows me if they don’t know my name. I don’t share pictures. Or I do sometimes, but then I remove them. I don’t know, it is something (laughs). I can’t. There is some kind of fear there. (Undocumented/9)
Another tactic employed to keep worlds apart was the use of different devices. Some older participants considered using a mobile phone to be cumbersome and risky because of its small size—the chance of mistyping in a critical moment is greater than the risk of doing so on a “big computer.” Choosing to use a desktop computer allows such individuals to mitigate the risks associated with mobile devices, as one participant stated: “ . . . I’m so seasoned and old-fashioned that I always use that big computer when I’m visiting the bank—I think it’s safe that way” (Older adults/23).
Cropping oneself out of the picture to maintain social relations
In recent research, sharing intimate and personal issues, in line with platform-specific constraints and algorithms, is understood as the driving force underlying social media attention economies (Jerslev, 2016; Raun, 2018). Our analysis conveys a different story. Participants noted that they felt a need to keep “out of the picture” and instead shared things that they viewed as “seen by their eyes”: I am just looking at things. At the videos there. I am not posting anything, really. Only when there’s some kind of nice weather and beautiful place, I might make some short videos and just post them. (Undocumented/1)
Most participants used at least some social media platforms. The tactic of cropping oneself out of the picture emerged mainly when discussing habits of social media sharing and the visibility that this practice creates. Many participants described how they avoided sharing anything “too personal.” The category of “too personal” was often understood to include information revealing mundane, everyday situations, posts about one’s inner feelings, or images of loved ones: “Maybe I don’t share deeper feelings in any case” (Unemployed/8).
Participants’ reasons for not sharing personal content varied, but the insistence on remaining outside the frame was striking, since it was observed among most of the participants. Undocumented participants explained that they would like to sustain an active and visible social media presence but that their vulnerable positions made them choose not to share: Today, also, or not only today, sometimes I would like to put up photos and pictures because I visit beautiful places. I would like to share them. But I say to myself, better not to share your pictures because it is not good for my security. So, I have to say this to myself every time, even if I want to put something about my boyfriend on Facebook. So, I can’t even put my face in his profile, not to talk about my own. It’s not safe. I wish it wasn’t like this, but it is, because of my position. (Undocumented/4)
Consequently, many of the participants explained their inactive position as one of not sharing. Many were indifferent toward sharing content, but the habit of not sharing was sometimes explained as a means of protecting their privacy. Participants, however, understood privacy in various ways. For some, it was a set of concrete practices for following security protocols. For others, it meant not sharing personal data or anything else deemed too “personal.” No matter how privacy was framed, participants had actively chosen invisibility and hiding. They deemed this important on manifold data platforms, such as media streaming and online shopping services, but also in official and administrative services. Through invisibility, many of the participants aimed to avoid a more general gaze on the platforms. Some participants also aimed to avoid data-collection practices, out of fear and in order to resist data surveillance. This suggests that, for some, non-sharing that could be interpreted as resignation was in fact meant as subversive behavior. The outcomes of a resistant position were then described as requiring extra effort and as “making one’s life harder” but were still deemed important means of sustaining one’s agency, as the following excerpt illustrates: Because with this divide-and-conquer method I have been able to erase my own footprint in such a way that I don’t get targeted advertising, for example, anywhere anymore . . . But on the other hand, you can play this game in such a way that you mess up the system on your part. It’s like you make your life so difficult that all systems no longer work the way you want them to. (Unemployed/9)
By the divide-and-conquer method, the participant refers to their habit of refusing to share any personal information or personal data, for example, by using incognito windows and fake profiles. This excerpt illustrates how a position that might appear to be a passive retreat is experienced as that of an active actor, a pattern that is also evident elsewhere in our data.
Furthermore, a stance against social media attention economies could be identified among the participants, who felt that by not sharing they opposed those economies. Participants sometimes regarded sharing a great deal of content, publishing photos of themselves and commenting on heated discussions on social media as activities engaged in mostly by outgoing individuals who like to be seen. Participants differentiated themselves from this view of social media users, although some jokingly referred to their posts of a good catch or a beautiful view, for example, as “narcissism” that made “friends jealous” (Older adults/1). At the same time, the audience was limited to a circle of friends. Hiding was also explained as the result of a desire not to intrude on “others,” as one participant explained: I have this relative who always posts every meal there; it might be because he’s a very enthusiastic cook, and that’s a very important thing to him, those meals, but he’s not thinking at all that someone might see it who does not have food. (Older adults/25)
Many participants felt that the expectation to share their everyday lives on the platforms was highly annoying. The ideal solution for coping with this irritation would be to withdraw from (social media) platforms altogether, as one unemployed person (28) explained: “it’s irritating, so it would really be my own choice that I wouldn’t need to go there and look at all.” Total nonpresence, however, did not seem to be a viable option for our participants. Individuals still wanted to communicate their presence on these platforms. A safe way to accomplish this communication was to share shots of nature and thereby to locate the vulnerable “self” outside of the platforms’ frame. These platforms were also spaces to exercise individual agency, which did not require individuals to fit themselves into the flow of social media self-representation or the logics of preemption. Nature offered a safe context to share something that was not too revealing (or intimate). A desire for social connections was the driving force behind this activity, and in this regard, participating on social media platforms was experienced as a rewarding act. Through these “shots” of nature and the acts of sharing and contemplating, participants sustained social relations. In and around these shots, participants created caring spaces with those who were close to them. One participant explained this situation as follows: I message with my mother on WhatsApp on a nearly daily basis. Sometimes just texting, often sending photos. I’ll send cat videos or something, and my mother maybe sends something if she’s been on a walk and sees something, flowers in the summer or some rabbit droppings or something else. She might take a photo and then send it. And we’ll then discuss these or send messages. (Unemployed/12)
Sidestepping to avoid the affective load of datafied platforms
Tactics of invisibility were also connected with a more fundamental issue pertaining to participants’ self-understanding. Many participants explained that they understood themselves as being unworthy of attention or not important enough. Some indeed described themselves as “shit people” who were unworthy of anyone’s interest, which indicates how fundamentally self-understanding is linked to social status. This sense of being unworthy of attention led to an avoidance of social media sharing in general. It was also offered as a reason for not paying attention to data privacy issues. In the context of data privacy studies, privacy is considered the most valuable commodity that individuals may have on platforms (Nissenbaum, 2009). This indifference toward one’s data implies that one’s sense of one’s personal “value” on platforms is not particularly high. In this sense, participants expressed feelings similar to those associated with digital resignation; that is, they expressed indifference toward the use of their data and the possibility of its circulating on platforms. One participant explained this situation as follows: But I don’t know. I think I’m as normal as normal can be, so who’s really interested in my stuff anyway? I’m not rich or anything special. Nothing like that. (Unemployed/27)
Undocumented participants viewed their visibility on datafied platforms as being of interest, at least to local authorities and to ill-intentioned parties in their country of origin; even so, this situation resulted in invisibility tactics similar to those of the other participants. These tactics employed avoidance as an intentional means of sidestepping particular social media platforms, their affective atmospheres, and the perceived need to comment. As one participant shared regarding posts that they felt generated a negative atmosphere: “I’ve decided at least that I do not comment on those as that does not lead to anything. That’ll only be a never-ending war” (Older adults/1).
Moreover, the expected affective load caused by various platforms was a reason for adopting the role of an invisible observer. This role was a way of keeping up with various platforms and identifying their affective atmospheres without engaging with them too deeply. Regarding the emotions produced by and within these platforms, participants protected themselves from malaise and negative emotions not only by avoiding certain groups and content but also by regularly removing the applications that caused such emotions. However, the removal of these apps also caused anxiety, as without these apps to keep in touch, fear on behalf of loved ones increased. A sense of duty also emerged. For this reason, the deleted apps were restored to the phone after some time: I follow Afghan pages for what’s going on there. I follow all the time. Facebook is also for that, if even my mom and dad can get a look and maybe find me. I follow these pages, and there is really bad news every day. Despite that, I still follow the news and discussions. Sometimes I delete Facebook from my phone completely because of really bad news I can’t take. It is too much. And then, I redownload it, I just put it on . . ., and I’ll continue thinking what if my mom finds me there. (Undocumented/5)
Indeed, when discussing their invisibility, participants often noted that they felt themselves to be bystanders. They used concepts such as “lurker” and “lurking,” which refer to silent participation in digital environments. Previous studies of lurkers have shown that the silent observer position is the dominant mode of participation in social media environments (Crawford, 2011). Our participants, however, framed lurking as an active choice, exemplified, for instance, by the decision not to register on platforms at all. This decision was a means of taking shelter from the threats of digital platforms. Often, these threats were vague, and the decision not to share any data was made to protect oneself from “everything,” just in case. As one unemployed participant (9) explained, “When I don’t know, I want to protect myself from everything.”
The position of bystander can also be a form of tactical invisibility, which indicates a politics of refusal and various forms of resistance, as suggested by Blas and Gaboury (2016). In these examples, however, resistance is adopted to deal with the affective load anticipated from platforms, and such resistance can be seen as a means of ensuring one’s own well-being. Moreover, participants spoke extensively about their general fatigue with respect to platforms, their attention economies, and the (automated) ways of knowing that such platforms both generate and enhance. Automated platformization causes individuals to have a sense of not “fitting in,” which may naturally have profound effects on their self-understanding, particularly among persons who already occupy structurally vulnerable positions. One of the participants explained this situation as follows: When a form has a box to check where you can only check one, but none of them really describe me fully, I either check the wrong one and give incorrect info or then I don’t check any box at all and I’m not able to use the form . . . Let’s say you apply for something from some office, and you have to explain that you don’t quite fit into these categories, then the human official can listen and make a sensible decision concerning how to act in that situation. But when a machine just says it can’t, then there’s nothing that can be done. It’s unequal in the sense that some get through and some don’t just because you happen to check some box. But this is a deeper problem than just digital inequality. (Unemployed/7)
This situation has much in common with the logic of preemption produced by automatization through datafied platforms, as suggested by Blas and Gaboury (2016) and Andrejevic (2019). Thus, as this excerpt illustrates, sidestepping does not stem from an active decision to adopt a stance of resistance but rather from the individual possibilities that afford invisibility. As our data show, even though not everyone has this opportunity, individuals can still find ways to make their lives livable.
Conclusion
We conclude that de Certeau’s concept of everyday tactics provides a useful theoretical tool for identifying the mundane yet creative forms of action through which datafied space is reappropriated and used in people’s own ways. Following de Certeau’s definition, we identified mundane, half-conscious practices through which everyday data relations are negotiated in practice. These everyday practices meet de Certeau’s definition of tactics and are situated within the strategic context of forced visibility. Our findings also reflect de Certeau’s conceptualization in that the tactics we identified were ways to survive within the power structure, regardless of whether the participants saw them as conscious resistance or as mere ways to survive.
Our study adds to the previous research applying the concept of tactics to datafied platforms because, in addition to identifying concrete tactics, we uncovered the lived experiences of people in vulnerable positions living in datafied environments. Our findings also pinpoint the feelings that engaging in these tactics entails for our participants. We find that interpreting the tactics as encounters with Phelan’s (2003) ideology of the visible provides a new way to consider the costs to individuals of using them, as well as their potential to challenge existing power structures.
The tactics appeared in participants’ speech mostly in relation to social media, especially concerning the act of sharing content regarding the participants themselves. We argue, using Phelan’s (2003) ideology of the visible as a frame, that withdrawal from sharing entails withdrawal from the ideology of the visible. Although they do not have, in Seberger et al.’s (2021) terms, power over the platforms that force the sharing of data, individual users have the power to resist the ideology of the visible that fuels these platforms. Resisting—or acting upon—the ideology of the visible is, in fact, the only possibility for living with the platforms.
Downplaying one’s importance in relation to the attention economies of social media and to data surveillance on digital media generally could be interpreted as expressing resignation toward platforms. However, we found that resignation should not be understood only as a consequence of unsuccessful attempts to control the flow of data on and between platforms as such. The participants did not necessarily experience frustration (only) because their data are circulating uncontrollably. Instead, we interpret the frustration to stem from the ideology of the visible in a more general sense: someone else defines how one should be visible on datafied platforms.
Indeed, we interpret these acts as ways of resisting what platforms (and the different gazes operating there) seem to expect from an individual, whether acts of hiding by posting nature shots or sidestepping the use of a certain app. It is interesting that even though these acts might look like resignation and giving up, participants themselves described them as statements of their active doing. This means that acts which at the outset may seem like resignation are sometimes something more. In fact, doing nothing, for example, deciding not to comment on heated social media discussions, may require extra effort. Tactics of invisibility thus appear to be active choices, not positions into which one falls.
Furthermore, we see that the presumably vulnerable societal positions of our participants give rise to small fractures through which to resist such platforms and the gazes that datafied platforms enable. The decision to hide and to refuse to hand over one’s data on a platform is an act that challenges the idea of data-generated platforms, even though disconnection may rarely be total (Hesselberth, 2018). Among our participants, these acts of resistance were not necessarily conscious, but they can also be seen from the wider perspective of media refusal. Participants rarely spoke about consciously resisting platforms as such but instead described choices that they wanted to make in order to render datafied environments livable. Accordingly, the decision to step out of digital and social media environments, whether as a break (illustrated by terms such as “media fasting”; Syvertsen and Enli, 2020) or as a more permanent disconnection (Nguyen, 2021), may also be a voluntary act through which individuals try to overcome the burdens and harms that digital connectivity on digital and social media platforms may cause (Büchi, 2021; Büchi and Hargittai, 2022). Refusal to use digital and social media may also highlight digital inequality in a new form, as the opportunity to leave platforms is often available only to individuals who are already in a good socioeconomic position (Portwood-Stacer, 2013; Van den Abeele and Mohr, 2021; Van den Abeele and Nguyen, 2022). However, our data also indicate that even individuals in presumably vulnerable positions are stepping off the platforms. This expands our understanding of what being datafit (Charitsis, 2019) means for people in vulnerable positions. We argue that through tactics of invisibility, individuals might exploit powerful systems to disrupt them without fundamentally subverting them.
Our study thus supports the notion that digital resignation does not, in practice, necessitate giving up in the face of datafication. Our participants had the courage to sidestep and to withhold information about themselves, for example, their faces or whereabouts, on platforms that demand individuals’ visibility and data. This is a slightly more positive account of how visibility can be lived among vulnerable populations compared with, for example, female Instagram users’ experiences (Duffy and Hund, 2019). The considerable effort that participants often reported in relation to their resigned stance suggests that this position was not a form of “giving up” but was rather aimed at “standing up for” the livability of one’s datafied life. As we observe, vulnerable populations seem to make their datafied lives livable through the use of individualized tactics. Some of these are born out of the inability to afford invisibility, but some are inventive ways of bypassing expected visibilities on platforms. This means that their tactics are not as strategic or collaborative as, for example, those that Brunton and Nissenbaum (2015) describe in their work on obfuscation.
The tactics identified in this article were also connected with individuals’ experiences of their own self-worth. Participants discussed how they and their lives were unworthy of attention and described themselves as bystanders. This finding raises a more profound question concerning the price of invisibility and its consequences, particularly if invisibility is connected to diminishing self-worth. Indeed, among our participants, the role of bystander was connected to a sense of not “fitting in,” which may naturally have profound effects on individuals’ self-understanding in the future. The experience of not fitting in (with datafied culture and its attention economies) may well lead such individuals to become outcasts. This situation further highlights the question of whether silences may exist in datafied societies (Treré, 2021) and leads us to question the power of the invisible, as discussed by, for example, Phelan (2003) and Blas and Gaboury (2016). In addition, Draper (2020) points out that on various data platforms we may encounter imagery of the “publicity-worthy self.” She focuses on personal online reputation management services, which aim to help balance the need to be seen online with the need for privacy. In our data, we likewise encountered individuals negotiating their worth (to be seen). This raises concerns about how both the online reputation management industry and, more importantly, individuals themselves assess their worth on platforms.
The extent to which these tactics can diminish the effects of control and power is thus not straightforward, given their costs to the individuals using them and their limited potential to be shared. Indeed, invisibilities do not allow individuals to exercise power over datafied platforms or to spur collective action as such; previous studies of tactics on digital platforms have made the same observation. However, based on our findings, we argue that the ability to step into the shadows of platformized existence through invisibility tactics is nevertheless a way of claiming and shaping a space for oneself. These tactics remain ways of acting “otherwise,” that is, against expectations. The invisibilities identified illustrate fractures in what previous research has framed as digital resignation and in the visibility games (Cotter, 2019) that occur in social media economies. We argue that although tactics of invisibility are individualized and remain unshared, and even though the individual has little power over datafied platforms, an undertone of resistance can still be detected when people describe how they negotiate paths through the nexus of everyday data relations. The tactics of invisibility that we identified among structurally vulnerable people illustrate this undertone and give it a particular form. We interpret acts of hiding as fractures of individual agency: ways of re-inventing contextual and sometimes resistant ways of fitting in. Based on our findings, we argue that people have the motivation (sometimes to ensure their personal security) to create these fractures to resist datafication and forced visibility. Demonstrating the presence of these fractures in certain contexts may help us detect them elsewhere as well.
Footnotes
Funding
This work was supported by Multileap profiling area, University of Jyväskylä, Academy of Finland and Intimacy in data-driven culture, Academy of Finland, Strategic Research Council (project numbers 327392 and 327394).
