Abstract
Informed consent may be unobtainable in online contexts. This article examines the difficulties of obtaining informed consent online through a Facebook case study. It is proposed that there are at least two ways informed consent could be waived in research: first, if the data are public, and second, if the data are textual. Accordingly, the publicness of the Facebook News Feed is considered. Taking account of the wide availability of Facebook users’ data, and reflecting on how public those users perceive their information to be, this paper argues that some Facebook data are properly viewed as public to semi-public in nature. A second issue is whether the Facebook News Feed data collection ought to be classified as document-based or human subjects research. Since the Facebook News Feed involves social interaction that may elicit ‘ethically important moments’, this paper proposes that observing it may constitute human subjects research. While informed consent is desirable for human subjects research, it is suggested that Facebook News Feed observations are comparable to observational research in a public space, and thus waiving informed consent in this online setting could be justifiable.
Introduction
For my doctorate, I examined how an English working-class community resolved conflict. Specifically, I decided to conduct an ethnography at home, living with family on my council estate and immersing myself in local community life. As a researcher working in my home environment, I strove to make my project socially valuable (Bledsoe and Hopson, 2009), and I sought to avoid creating harm (Eynon et al., 2008). However, when the seemingly harmless opportunity arose to extend my observations from ‘offline’ environments to the ‘online’ medium of Facebook, I stumbled upon a set of ethical dilemmas. This article details the challenges I faced, focusing primarily on my use of online data without obtaining informed consent from all individuals involved.
Since I was conducting ethnographic research, I was open to discovering innovative sources of information that might help me better understand community life (Hammersley and Atkinson, 1995: 1). And so, when a research participant with 3,307 Facebook ‘friends’, roughly five percent of our town’s population, invited me to observe their Facebook News Feed for research purposes, I gratefully accepted the opportunity. At first, my research participant showed me interesting incidents that had been uploaded to their News Feed. Cautious not to over-represent the most violent or unusual events, and to appreciate more fully the context in which the comments were made, I asked to see some of the more mundane activity taking place. Having gained such access, I then produced anonymised field notes about discussions I observed on the News Feed, approximating users’ ages and genders from their profile pictures. In this way, I compiled a dataset of 4,411 anonymised posts, 834 of which involved spontaneous conflicts.
However, this method of data collection presented a problem. On the one hand, the dataset I created potentially offers a valuable source of knowledge to my home community, which is recorded as experiencing especially high levels of violent offences against the person, and low levels of socioeconomic advantage. On the other hand, while I had full informed consent from the research participant who gave me access to their Facebook News Feed, I was unable to obtain informed consent from each user I observed appearing on the News Feed, which may breach my obligation to ‘do no harm’ by undermining the right to privacy. 1 In order to ‘do no harm’, Eynon et al. (2008) propose three concepts at the core of human subjects research: confidentiality, anonymity, and informed consent (see also Lawson, 2004; Sveningsson, 2004). While I managed to uphold the first two requirements, I did not gain informed consent from everyone whose data I observed. This led me to the question: when is it justifiable for a researcher to use data without informed consent?
The parameters of informed consent
It is widely accepted in the social sciences that ethical research will aim to obtain informed consent from all participants involved (Bakardjieva and Feenberg, 2001; Barnes, 2004; Eynon et al., 2008; Gaiser and Schreiner, 2009). Bledsoe and Hopson (2009: 397) describe it as the most basic aspect of the research process. This principle, as Boellstorff et al. explain, was derived from biomedical research, and while it is problematic trying to translate this into ethnographic research, researchers are nevertheless obliged to inform participants about the study (2012: 132–133). But the degree to which there is flexibility with this approach is somewhat contested. Bruckman (1997), for instance, asserts that no matter how public some information is, a study should not go ahead unless informed consent can be obtained (cited in Sveningsson, 2004). Similarly, Duncan (1996) proclaims that gathering data without informed consent will always violate privacy (cited in Roberts et al., 2004). Nevertheless, there is wide acceptance that a flexible approach to research ethics is required (Ess and the Association of Internet Researchers (AoIR) ethics working committee, 2002). Gaiser and Schreiner (2009), for example, advise that there are no hard and fast rules about whether or not it is ethical to use data from an online website. Whiteman (2012: 9) likewise argues that it is preferable to take a contextualised approach to each online situation, instead of adhering to generalised, context-free principles.
There are at least two instances when a researcher might use data without obtaining informed consent. First, when the data are treated as textual, or documentary, research, then informed consent is potentially not required (Solberg, 2010; Wilkinson and Thelwall, 2011; Wilson et al., 2012). In fact, Wilkinson and Thelwall (2011) suggest that the very act of asking for consent to use publicly available information would require online users to become active participants in the research, and thus by asking for consent the researcher risks turning documentary research into human subjects research, which in turn requires consent. Although Wilkinson and Thelwall argue their position with respect to publicly available documents, they also point out that at times private documents may be accessed without consent, such as when research access to medical records might be considered permissible without gaining patient consent, so long as personal information is anonymised at the earliest opportunity.
Second, if the research involves observation of human subjects in a public space, then arguably informed consent can be waived (Hudson and Bruckman, 2004, 2005). There are instances in observational research involving human subjects where, although it is desirable, consent may nevertheless be unfeasible. For example, the Economic and Social Research Council (ESRC) Framework for Research Ethics recognises that informed consent is impractical or meaningless when observing crowd behaviour (ESRC, 2015). These same guidelines, however, emphasise that covert research – i.e. research conducted without the awareness of the participants involved – should only be carried out when the research ‘may provide unique forms of evidence or where overt observation might alter the phenomenon being studied’, and that such covert research ‘must not be undertaken lightly or routinely’ (ESRC, 2015: 31). Guidelines of the British Psychological Society advise that observational research without consent is ‘only acceptable in public situations where those observed would expect to be observed by strangers’ (British Psychological Society, 2014: 25). These are similar to guidelines in the US, which suggest that consent is not required for observations in public naturalistic settings (American Psychological Association, 2010). In light of the need to pay close attention to how consent relates to the phenomena under study, institutional ethical approval may ultimately be required for human subjects research where informed consent is not sought, unlike anonymised documentary research (see University of Oxford, 2017; Wilkinson and Thelwall, 2011).
Both potential exemptions to the consent requirement involve a central caveat: the text or human activity must be public. Even in offline spaces, the divide between public and private is never simple, and in online environments that dichotomy is blurred even further. Sveningsson has noted the semi-public nature of Facebook (2008: 74), and Allen’s earlier research aptly demonstrates how public and private spaces can coexist on the same online platform (1996). On account of these blurred lines, Waskul and Douglass claim that online activity is capable of being ‘publicly-private and privately-public’ all at once (1996: 131), Anderson and Kanuka draw our attention to the possibility of private conversations taking place in public spaces (2003: 58), and Eynon et al. (2008) refer to the need for privacy in public. How, then, might a researcher begin the task of determining whether online data counts as public?
Sensitive to similar concerns, Whiteman (2010, 2012) deployed a strategy adopted by other online researchers (Hookway, 2008; Rosenberg, 2010). Their approach contends that the technical levels of access to some domain do not alone determine whether or not the domain is properly public or private, and therefore whether or not informed consent should be sought (e.g. see Whiteman, 2012: 61–62). Rather, it specifies a different (or extra) constraint: levels of privacy or publicness as perceived by the users of online communities. The way I adopt this strategy takes both definitions of the public/private divide as relevant ethical factors for online research. This means that a researcher collecting data from an online space ought to consider both the technical accessibility of information uploaded by users, and how those users treat that information. This approach is appealing since it embraces established norms of observational research – those that consider subjects’ beliefs and expectations – and thereby widens the requisite considerations for researchers who use online, publicly accessible information. It follows from this approach that consent can only be waived for online, publicly (i.e. technically) available information when users treat such information as being public.
The argument presented
I explore both the technical availability of Facebook News Feed information, as well as Facebook users’ apparent perceptions about the public nature of their News Feed posts. Having done so, I suggest that Facebook News Feed data are public, both in the technical sense and as perceived by the users in my dataset. I then move on to consider whether this counts as public documentation research or observational research of human activity in a public space. Although there are fewer constraints on the use of public documents, and while an argument could be made that the Facebook News Feed is textual in nature, my experience leads me to find ‘human subjects research’ a more apt description. However, since the activity is public, I liken my observations to those in a busy public space, which may not necessitate the consent of every person observed.
This article is structured as follows. In the next section, I begin by introducing the Facebook platform. In the following section I examine whether Facebook News Feed data are public and thus fall under a potential exception to the requirement of informed consent. I then reflect on whether the data fall under another potential exception to informed consent by being document-based, rather than involving human subjects. Based on these discussions, I reflect on ways forward for online research in the concluding section.
A brief introduction to the Facebook platform
For readers unfamiliar with Facebook and the accompanying terminology, the following provides the essential details. Facebook is a popular online social networking site that enables its users to communicate through a range of media. Whereas features such as ‘messaging’ facilitate private exchanges between two or more people, ‘status updates’ enable users to communicate with potentially everyone on the internet, depending on user-set privacy settings. Facebook ‘friends’ include all the users accepted or successfully invited into a person’s social network, and numbers often range from hundreds to thousands.
The News Feed feature was launched in 2006; it provides a continuous channel of friends’ status updates, which are displayed on a user’s home page. By only displaying the updates of Facebook friends, each user’s News Feed is as unique as their set of friends. A user can customise their News Feed, so that only particular friends’ posts appear. Similarly, users can choose to make their posts accessible to the ‘public’ (everyone with internet access), to Facebook friends, to friends of friends, only to themselves, or to specific friends via a custom setting (Johnson et al., 2012; Lampe et al., 2008). Thus, while some profiles appearing on the News Feed will be set to public, others will be restricted so that only friends may view their posts. Such diversity of privacy settings makes it challenging to determine the publicness of Facebook posts. For the purposes of this case study it will be useful to consider the publicness of News Feed information restricted to friends, because this is one of the most restrictive settings a user appearing on my research participant’s Facebook News Feed will have implemented. 2
The publicness of Facebook news feed data
To present a case for waiving informed consent, I must establish whether the data collected counts as residing in the public or private sphere (Hudson and Bruckman, 2004, 2005). I will first consider how technically available the data are (Whiteman, 2010, 2012). I will then move on to consider how users within the dataset treated the information they uploaded to the News Feed feature.
The ‘technical publicness’ of information restricted to ‘friends’ on Facebook
In accordance with Facebook’s terms of service, a ‘friends only’ privacy setting on Facebook means that information posted by a user can be viewed by friends in the user’s Facebook network, by employees of the Facebook Corporation, and by third-party commercial and marketing agencies who purchase the data from Facebook (Facebook, 2015). Under Facebook’s Privacy Policy, Facebook can make use of personal information such as ‘name, email address, birthday, and gender’, and the information users ‘choose to share’, which includes all the status update information which appears in others’ News Feeds. As well as handling users’ information within the corporation, Facebook may also pass this information on to third parties, so long as a user’s ‘name and any other personally identifying information’ has been removed. In spite of concerns about the potential to de-anonymise Facebook datasets (Bonneau, Anderson and Danezis, 2009; Wondracek et al., 2010), this commodification of information is highly profitable (see Facebook, 2014; Solon, 2017). A user must consent to these terms as a prerequisite for using Facebook.
Another way News Feed information can be accessed is through a third-party application. Applications are often free for users to download through the Facebook Platform, and may come in an entertaining form, such as a game or quiz. Downloading an application, however, often requires users to allow it to access their personal and shared information. Again, users must consent for an application to have access to this information, and if a user does not wish to consent to those terms, then their only alternative option is not to use the application (Wang et al., 2011). Facebook regulations state that applications should not take more information than is required to run the feature; however, research suggests that many applications access more information than is necessary, with no way of monitoring what happens to the data thereafter (Debatin et al., 2009; Steel and Fowler, 2010).
As well as allowing applications to access personal data, Facebook also allows users to consent to applications accessing all their friends’ information. By default, at the time of my research in 2014, applications were given authorisation to take the following information from a user when one of their friends downloaded an application: biographical information; date of birth; names of Facebook friends and any recorded relationship statuses; religious and political views; websites; online status; status updates (such as those shown on friends’ News Feeds); uploaded photographs; uploaded videos; shared links; notes; hometown; current city; education and work history; and activities, interests, and likes. To change these default settings, users were required to click through three links and then uncheck 15 boxes, one for each category of information just noted. Even with these boxes unchecked, applications still had access to public information, unless users forfeited using any applications at all. Since my research, the nature of the information taken by applications has changed slightly; however, access to News Feed activity remains unaltered.
Application privacy settings are separate from the more accessible ‘privacy shortcuts’, which are only a single click away from the user homepage. The privacy shortcuts allow users to decide ‘who can see my stuff?’, so that users may, for example, let the entire ‘public’, or only friends, view their posts. Having application privacy settings separate from the privacy shortcuts may mislead users into believing that by selecting a friends shortcut, other third parties will not have access to their information. It has been suggested that the settings are intentionally designed to give the illusion of control in order to increase the potential to exploit personal data for profit, since information restricted from public view online has a higher sales value (Debatin et al., 2009; Light and McGrath, 2010). Empirical research indicates that users may not be fully aware of the application default settings in place, and users are often found to be less likely to alter the more obscure default settings (Bonneau, Anderson and Church, 2009; Bonneau, Anderson and Danezis, 2009; Liu et al., 2011). Taking all these specifics into account, it could be considered a norm for Facebook users to consent to sharing friends’ information with third parties through the act of downloading applications (Hull et al., 2010).
Therefore, I propose that despite the apparent restriction of information to friends on Facebook, its actual wider availability makes these data publicly accessible. They can be accessed by the Facebook Corporation, by advertising companies, and by third-party applications, in addition to friends. Moreover, Facebook users can widen public access to their friends’ information by sharing friends’ posts through applications they decide to download. Setting up an application through the Facebook Platform only requires the consent of users who download the application, but not the direct consent of all users whose information the third party is thereby able to collect. Under Facebook’s default settings, therefore, friends in my research participant’s network have technically consented to their data being shared with third parties.
However, since users may be unaware of how widely available their Facebook data are, despite this technical accessibility, it will be useful now to address the second criterion I aim to meet, adapted from Whiteman’s second definition of publicness and privacy (2010, 2012): how public users perceive their News Feed posts to be.
Users’ beliefs and expectations about the levels of publicness of information restricted to ‘friends’ on Facebook
This second concern relates to the specific beliefs of users within a social network, and for this reason user-perceived levels of privacy may vary from case to case, even within different Facebook networks. Eysenbach and Till (2001) recommend considering how many users can access the space under study. My participant’s Facebook network included 3,307 friends. From a sample of 100 Facebook friends of this account, the mean number of friends a person had was 690 and the median was 540, ranging from 32 to 3,147 friends. This indicates that, on average, informed users who have restricted their profile visibility to friends in this network expect their posts to be directly viewed by somewhere between 540 and 690 people. Because of the relatively large average number of online friends within this network, it seems reasonable to assume that even users who have restricted their status updates to friends are likely to treat the News Feed as a semi-public space. I will draw on data from my research participant’s Facebook News Feed to determine whether this assumption is fair.
Observations from the News Feed indicate that users treat their status updates as public and widely sharable much of the time. There is, for example, a high volume of social capital activity: many posts advertise items for sale, jobs on offer and other opportunities that arise; they also provide a space for job requests, calls for sponsorship and appeals to borrow a variety of items. In addition to this, news articles frequently appear on the News Feed under observation, along with local gossip, which is often commented on and shared by friends. While users may wish for these opportunities or requests to be viewed only within their social network of Facebook friends, creators of such posts at times explicitly appeal for users to ‘share’ information in this way, to encourage posts to go viral.
Sometimes users appear guarded about the information they upload to the News Feed. Vague posts can be found daily, such as ‘I can’t believe this is happening’, and when a user’s friends ask for more details, the creator of the post asks them to ‘PM [private message] me’ or ‘inbox me’. This demonstrates that status updates are treated as less private than other forms of correspondence on Facebook. This is also apparent in the title of the feature as a ‘News Feed’: a place where information is announced and dispersed, as opposed to a message, which indicates something more private. This has led other researchers to describe the News Feed feature as ‘broadcasting’ information, presented in a headline news format, which demonstrates the feature’s public, or at least semi-public, nature (Hoadley et al., 2010; Hull et al., 2010; Naaman et al., 2010). Indeed, Papacharissi (2009) describes Facebook as having a public ‘glasshouse’ structure.
There also appears to be an etiquette (or ‘netiquette’ (Shea, 1994)) among the observed online community not to reveal personal information through the News Feed feature. For example, users have been noted to comment on the inappropriateness of emotional posts created by other users in the network, which sometimes escalate into conflicts, and sometimes result in the emotional party apologising for their apparently unsuitable post. An example of such activity can be seen in a post I observed from a woman who wrote the following: ‘I know I shouldn’t do this on Facebook’. She then proceeded to air her grievances with an ex-partner to shame him for his behaviour, a post which received 40 ‘likes’ at the time of observation. This example demonstrates that users are aware that personal information is expected not to be disclosed through the News Feed feature, but nevertheless the public nature of the feature is occasionally exploited to gain support in such matters. Moreover, some users in the dataset posted status updates to complain about the inappropriate nature of other users’ posts, stating that there are ‘too many people airing their business on Facebook’. Another frequent occurrence appears to be for users to intervene in News Feed conflicts by suggesting that such arguments should not be taking place on the News Feed, and that users ought instead to ‘inbox’ each other or meet face-to-face.
It might be argued that in the abovementioned examples, rather than treating the News Feed as public, users are treating the News Feed as semi-public, and thus only as viewed by the people within their social network, albeit potentially large numbers of people. 3 However, there is evidence that users are conscious of third-party presence on Facebook beyond their friendship circles. A prime example of this awareness was demonstrated when violent threats or incriminating footage of crimes unfolding were uploaded to the News Feed. In response, some users warned about police roaming Facebook and advised others to remove such incriminating material. Facebook users’ awareness that police, and even commercial companies, can access their information would seem to show that members of the community expect their information to be public enough for those sorts of observations. Still, this does not mean that users will expect their information to be viewed by a researcher. Indeed, Whitty points out that even when information is in the public domain, the intended audience may not be a researcher, and therefore thought should be given as to how someone might feel about information being used in a study without their consent (Ferri, 2000; Whitty, 2004).
Despite evidence that Facebook status updates are often treated as semi-public to public, it is important to recognise that sometimes those posts will be intended as personal expressions – as more private than others (Anderson and Kanuka, 2003: 58; Eynon et al., 2008; Waskul and Douglass, 1996: 131). For instance, I have observed comments in the dataset where people express anxiety that their privacy has been intruded upon, such as a mother who warned other users that someone who was not her Facebook ‘friend’ had liked a picture she posted of her child. Another example shows the various stages of a relationship breakup displayed on Facebook. The woman in this situation pleaded with her ex-partner not to reveal their personal details on Facebook for the sake of their children, which demonstrates her uncomfortable awareness of the public nature of Facebook posts. Interestingly, the woman in this case had a completely public profile, whereas her ex-partner had partly restricted his profile to friends only. Users may, therefore, feel a violation of privacy, even when their information is openly in the public domain.
Based on this investigation, an argument could be made that Facebook information is public. First, it is proposed that the wider commercial availability of Facebook information restricted to friends means that this information is publicly accessible, which thus meets my first ‘technical’ criterion. Second, a review of how users within this network treat the information they upload to the News Feed indicates that for the most part status updates are treated as public or at least semi-public in nature. However, a contrary concern is that users who restricted their privacy settings to ‘friends only’ in response to the option ‘who can see my stuff?’, may not expect third parties, such as corporations and lurking researchers, to view their posts. Furthermore, users may be oblivious to the covert sharing of information by applications downloaded by friends. On this line of reasoning, then, users may not expect their information to be as publicly available as it is in practice. Moreover, even when information is openly public, and not restricted to friends by a user, there is evidence that sometimes users may still feel that their privacy has been violated. As such, further thought is warranted when researchers are handling even public or semi-public data online.
Documentary research or human subject research
The data I collected were in a textual format, and could therefore constitute a form of documentary research, which is less contentious to use without obtaining informed consent than is human subjects research (Ess and AoIR, 2002). Markham (2003) proposes that online data are text-based, and thus a textual approach is warranted (see also Gaiser and Schreiner, 2009). In such contexts, participants are understood as authors rather than research subjects, and hence, although obligations to protect intellectual property rights might arise, obligations to protect autonomy, privacy, and confidentiality simultaneously diminish (Bassett and O’Riordan, 2002: 236–237; Ess and AoIR, 2002; Whiteman, 2012: 97). Bassett and O’Riordan suggest that in deciding how to classify online data, researchers may decide whether spatial or textual metaphors best describe their research space – chatrooms, for instance, invoke the idea of space and human subjects research, whereas posting may imply a more published dialogic metaphor, thus indicating a textual nature (2002: 239).
Much of the News Feed activity I recorded was in the form of fleeting statements or opinions, with little or no personal identifying information. These were described as ‘posts’ which appeared on a ‘News Feed’, thus indicative of textual metaphors. However, at times these statements sparked conversation, with multiple online users engaging in dialogue, frequently referencing events taking place in offline contexts. Of interest to my study, there was a considerable number of occasions when these interactions erupted into conflict, ranging from minor disputes to full-blown violence. These interactive activities were more indicative of an online social space, resembling human-subject activity. Indeed, it was the nature of one conflict that made me question whether I could feasibly treat my dataset as document-based research.
On a seemingly ordinary Wednesday night, my participant’s News Feed was bursting with activity. There were several violent threats being made by a drug dealer to ‘bring [him] the body’ of a customer who had failed to pay for goods. Later in the night, perhaps unrelated, another user uploaded a video of a man being attacked in his home by a gang wearing balaclavas and knuckledusters. Shaken, I left the online space and eventually fell into a troubled sleep. I was woken in the morning by an SMS message from my sister: ‘There’s been a murder’. Suddenly, the implications of my Facebook research came to the forefront: did this activity involve human subjects, and if so what were my ethical obligations, and to whom? I was a prime example of Boellstorff et al.’s worry: ‘The ease with which inexperienced researchers can enter virtual worlds without having thought through ethical concerns (indeed, in some cases without even being aware of what the concerns are) makes ethics a particularly critical topic for virtual world research’ (2012: 130). Following this event, I withdrew from my online observational activity, leaving with a six-month, rather than 12-month, dataset. I have spent the years since then ruminating on the ethical concerns I unearthed.
While this worrying example is particular to the online community I observed, this type of wake-up call seems to be a recurring occurrence for online researchers. In fact, Guillemin and Gillam introduce the notion of ‘ethically important moments’ in order to describe points in the research which require researchers to reflect on ethical issues (2004: 262, cited in Whiteman, 2012: 114). Whiteman (2012: 49) explores this idea in great depth, and offers a comparable example of research carried out by Stern (2008). As with my initial approach of treating online data as textual, Stern received institutional guidance that in her research on public online youth websites she was working with texts rather than human subjects. However, this classification became murky when Stern observed a young person’s suicide message, to which she did not respond, owing to her textual treatment of the data. Stern later made the distressing discovery that the young person had committed suicide.
In my ‘ethically important moment’ shared above, I was fortunate that the suspected murder transpired to have been an accident, and I avoided being in possession of potentially incriminating data for a serious crime. But, as with Stern, it nevertheless raised serious doubts over my initial classification of such information as document-based research. Hine’s (2008) suggestion that online information tends to be construed as human subjects research is therefore perhaps a preferable approach to adopt for research in online environments, such as the Facebook News Feed, that are capable of eliciting ethically important moments.
Indeed, the value of my data lies in its ability to reveal cultural activity such as conflict resolution. Thus, it was the human activity, not merely the text, which gave my data collection its significance. Many studies have noted that, rather than being spaces in which users construct distinct online identities, online communities, and Facebook in particular, tend to mirror the offline lives of social network users (Baym, 2000; Hine, 2000: 132, 2008; Kendall, 2002). Indeed, I found many references to discussions stemming from work and social life, in which Facebook as a platform weaves observably in and out of users’ offline and online worlds. Consequently, because the essential value of my data stems from its capturing of human activity, and because of the real-time risks I could witness, it is more accurate to classify my data collection as observational human subjects research than documentary research.
By concluding that my online research qualifies as human subjects research, another research analogy becomes apt. Zimmer (2010) criticises large-scale quantitative data collection on Facebook, arguing that it is quite different from conducting observational research in a public square. Yet, in my case, the analogy is much more favourable. Similar to research in a public square, my News Feed observations involved near-random encounters with individuals who happened to be in the online space at the same time as the researcher; I was observing aggregate action, unable to observe all activity simultaneously; and, finally, the data I gathered were imprecise, limited to my ability to discern gender, age, ethnicity and other physically observable characteristics from the profile pictures displayed. Instead of copying and pasting text verbatim, I took paraphrased field notes, and I anonymised these notes at the point of data collection. Moreover, as in a public square – or even at a private event – individuals are guarded about how they behave and with whom they share information, much as Facebook users appear to guard and monitor the information displayed through their friends’ News Feeds.
Concluding reflections: Ways forward for online research
My first encounter with online research presented ethical difficulties stemming from my inability to gain full informed consent. I have sought to justify my data collection because the information is public, both in its ability to be accessed and, more significantly, in how users perceive it. Second, I assessed whether my dataset is best described as documentary or human subjects research. Due to the risks I found unfolding in the online space, I propose that ‘human subjects research’ is the best way to characterise my data. In accordance with the line of argument presented in these sections, observations on the Facebook News Feed can be viewed as comparable to observational research in a public space, which does not necessarily require informed consent for its use in research.
Social interaction increasingly occurs online, which raises a host of ethical questions for researchers seeking to enter these spaces. There will be instances, such as mine, where gaining informed consent from each internet user is unfeasible. I propose that in such situations, research should not necessarily be ruled out. Exceptions to obtaining informed consent offline can translate to online contexts: if activity online is public and users perceive it as such, then waiving informed consent might be justifiable. Like Bassett and O’Riordan (2002), I am wary that a blanket rule about informed consent online could eliminate promising research with wider public benefit. In addition to individual rights considerations, there are other research values for ethical review committees to consider, such as the overall benefit to society, the knowledge developed and the contribution to public policy and debate (Bledsoe and Hopson, 2009; Ess, 2012; Gaiser and Schreiner, 2009). If researchers avoid incorporating data from online platforms such as Facebook due to difficulties obtaining informed consent, then that leaves the information to be exploited only for financial profit, through targeted advertising and so forth (see Fuchs, 2011, for example). Socially minded researchers could have a valuable role to play in balancing the market-led aims of online data analysis. Therefore, while informed consent should always be the goal, where this is not possible, online research may still be warranted.
In situations where informed consent has not been obtained, researchers might go beyond the ethical goal of doing no harm and seek to do good (Boellstorff et al., 2012: 130; Dupont, 2008; Herring, 1996: 164). One way to do this, as Walther suggests, could be to share information with the online community about privacy online (2002: 207). Another could be to disseminate wider research findings to the research community, as Whiteman did (2012: 147). To redress not having gained full informed consent, I aim to make the research findings accessible to my research community. Moreover, by working closely with local policymakers in my hometown, I hope that the research can be of wider community benefit.
Funding
All articles in Research Ethics are published as open access. There are no submission charges and no Article Processing Charges as these are fully funded by institutions through Knowledge Unlatched, resulting in no direct charge to authors.