Abstract
This article argues that the study conducted by Facebook in conjunction with Cornell University did not have sufficient ethical oversight, and in particular neglected to obtain the necessary informed consent from the participants in the study. It establishes the importance of informed consent in internet research ethics and suggests that in Facebook’s case (and other, similar cases) a reasonable shift could be made from the ‘effective consent’ of traditional medical ethics to a ‘waiver of normative expectations’, although this would require much-needed change to the company’s standard practice. Finally, it gives some practical recommendations for how to implement such consent strategies, and for how the ethical oversight gap between university-led research and industry-led research can be bridged, potentially using the emerging Responsible Research and Innovation frameworks currently gathering momentum in Europe.
Introduction
Informed consent is a concept that is traditionally associated with medical ethics. However, over time it has come to be a vital part of research ethics in any research involving human subjects, including research in the social sciences, medical sciences and computing sciences. Universities mostly self-govern research ethics approval, with committees (such as research ethics committees [RECs] in the UK or institutional review boards [IRBs] in the USA) that assess potential research projects and their impact on the research subjects and greater society. Even simple questionnaires that appear to have little impact on their research subjects are highly regulated within the scientific research arena, with informed consent requirements including information sheets, signatures, and the ability for the research subject to withdraw from the study. Failure to go through this process is considered an academic offence or malpractice, and can be dealt with through penalties such as removal of funding, cancellation of the research or dismissal of the researcher, amongst others, depending on the severity of the offence. In the age of ‘big data’ social networking it is easy to share information online, but it is not correspondingly easy for researchers to request informed consent for that information, and it is not always obvious which parties are responsible for obtaining that consent.
It is within this environment that the problems with the Facebook emotional manipulation study (Kramer et al., 2014) begin to emerge. The study is used in this article as a case study to draw conclusions about poor practices in mixed-institutional big data research projects, and subsequently to explore alternatives. In this article I argue for the requirement of informed consent in experiments such as this one, in order to protect the basic human rights that have been the focus of informed consent development over the years. I then draw up some recommendations based on internet research ethics and a re-thought form of informed consent that has been used to develop better informed consent processes for other situations that require requests for consent en masse. Finally, I summarize these recommendations at various levels and briefly discuss them within the responsibility setting that can potentially be addressed by the currently developing area of Responsible Research and Innovation (RRI).
Informed consent, internet research ethics and Facebook
In this section, I will give a brief overview of the Facebook emotional manipulation study, identify its lack of informed consent gathering as a problem under current internet research ethics procedures, and discuss informed consent in the context of information and communication technologies, focusing on the environment in which the Facebook study is situated.
The Facebook study
The Facebook study, entitled Experimental evidence of massive-scale emotional contagion through social networks (Kramer et al., 2014), was a collaborative endeavour between Facebook and Cornell University’s Departments of Communication and Information Science. In it, Facebook researchers directly manipulated Facebook users’ news feeds to display differing amounts of positive and negative posts from the people they followed in order to determine whether their subsequent posts were affected by the positivity or negativity of the set of posts they were viewing. This effect, that more positive or negative posts read by a user could change their own emotional state positively or negatively, is the ‘emotional contagion’ referenced in the article. Facebook allowed the scientists (both internal and those from Cornell) access to the huge amount of data that was produced by manipulating what Facebook users saw according to computerized determination of positivity and negativity levels. The Facebook researchers performing the data collection did not seek explicit informed consent from any participants, and although an IRB was consulted at the university, the board deemed that the researchers were ‘not directly engaged in human research and that no review […] was required’ because the data collection was ‘conducted independently by Facebook’ prior to the involvement of its researchers (Carberry, 2014). Facebook does have an internal review process, but at the time it was not independent, and subsequent improvements to this process have apparently been made since the conclusion of the study (Meyer, 2014b; Schroepfer, 2014).
At face value, the study seems as though it would be a reasonable one to conduct, with potentially valuable outcomes for the understanding of the impact of social media on society. However, there were significant questions about the conduct of the experiment, focusing on the data manipulation and analysis methods used for identifying positive and negative posts (Grohol, 2014), the ethical conduct of the research, the burden of responsibility for this conduct, and the publication of the article by a peer-reviewed journal that requires IRB, REC or equivalent approval of such research. Because this article is concerned with informed consent, I will focus on the latter points, as a wider critique of research methods is outside the scope of this article.
‘Traditional’ informed consent
Informed consent has a long history in medical and bioethics, but only a relatively recent history specifically in technology, where it is most notoriously displayed in terms of service or end-user licence agreements. The seminal form of informed consent that is appealed to in technology situations, effective consent, was described by Faden and Beauchamp (1986) within the context of medical research ethics, but the theory has since moved beyond that field. Effective consent involves the aspects of autonomy (of the consent decision action), competence (of the person making the decision to consent), disclosure (of all of the risks, benefits, terms, conditions and any other limitations), understanding (on the part of the consenter of the former) and voluntariness (of the consenter to consent) (Faden and Beauchamp, 1986). Where traditional research ethics involving human subjects usually involves face-to-face contact between researcher and subject, allowing for a conversation to occur, technology removes that contact, which significantly dilutes the ability of consent-seekers to determine autonomy, competence and understanding, and of consenters to understand the ramifications of the disclosure – for example, in Facebook’s case, the provision of thousands of words of difficult-to-read text 1 (although a face-to-face disclosure of this same information is neither a good nor a practical solution to this problem). It is common for consent-seekers in these kinds of situations to appeal to the effective consent model, but to focus on disclosure as the main aspect, disregarding the more important requirements of informed consent (Flick, 2013). This is insufficient for real informed consent, yet technology companies persist with it because, although legally dubious at best, it has become the de facto standard.
So even though Facebook had not included ‘research’ in its terms of service (Hern, 2014) at the time of conducting the research, if it had, this would still not have been ethically sufficient for its users, even at the most basic effective consent level.
I have previously discussed the fact that this sort of informed consent is wholly unsuitable for information technology (Flick, 2013) and that the theory of waiver of normative expectations (Manson and O’Neill, 2007) may be better suited. 2 I will return to this theory in a later section where I suggest a better approach for technology-based informed consent procedures.
Internet research ethics
Internet research ethics is currently an area that is pushing boundaries and still establishing best practices. The sorts of ethical research procedures that are commonplace in traditional research have been somewhat shoehorned into the online sphere, though many aspects are not directly translatable. The debate over what constitutes public and private data, for example, is ongoing (British Psychological Society, 2013). Despite this, there are extensive guidelines for researchers available at various levels (universities [e.g. University of Brighton, 2014], professional organizations [e.g. British Psychological Society, 2013] and independent bodies [e.g. the Association of Internet Researchers, 2012]) that outline the issues and provide suggestions to researchers on how to approach their research. All of these guidelines include the requirement for informed consent procedures in human research. Setting aside for the moment the discussion about whether the Facebook study should have had official ethical oversight (through an IRB or equivalent) – I will return to this later in the article – if the researchers involved were adhering to good research ethics, and were acting as professional researchers, they should have followed these ethical guidelines and performed informed consent procedures. The journal that published the article, the Proceedings of the National Academy of Sciences of the United States of America (PNAS), also requires any research with human participants to undergo ethical review. However, in a correction to the original article, PNAS Editor-in-Chief IM Verma noted that, owing to the conclusion drawn by the Cornell University IRB, the editors felt that the research had been adequately screened, and that ‘as a private company Facebook was under no obligation to conform to the provisions of the Common Rule when it collected the data used by the authors’ (Kramer et al., 2014).
However, the correction continues, ‘it is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out’.
Such a situation is considered a ‘policy vacuum’ (Moor, 1985), in which existing policy does not adequately fit current technological challenges. In the Facebook study situation there is no explicit guidance on what to do, as noted by Harriman and Patel (2014), who looked at UK National Health Service (NHS) ethics board procedures and found little in the way of explicit guidance for Facebook-style experiments. There are general guidelines for internet-based research, such as those discussed above, but these generally do not take the form of checklists or other explicit step-by-step guidance. The Association of Internet Researchers (AoIR) argues that it is impossible to create ‘one-size-fits-all’ rules for this sort of research, but that ‘ethical decision-making is best approached through the application of practical judgement attentive to specific context’ (AoIR, 2012: 4), and that ‘researchers should consult as many people and resources as possible’ (AoIR, 2012: 5) to assist in ethical deliberation prior to research being conducted. In internet research ethics, the following principle is of primary importance, according to the AoIR: ‘Because all digital information at some point involves individual persons, consideration of principles related to research on human subjects may be necessary even if it is not immediately apparent how and where persons are involved in the research data’ (AoIR, 2012: 4). Although there are no hard and fast rules for this sort of research, in failing to carry out informed consent procedures and in restricting its ethical review to an internal process, Facebook did not follow these principles.
The questions for researchers presented by the AoIR (along with similar guidelines posed by the institutions mentioned previously), as part of the suggested ethical deliberation, focus particularly on the ethical issues that the Facebook study encounters: ethical issues surrounding the participants of the study and the object of study; data storage, management and presentation; data analysis; potential harms and risks; potential benefits; recognition of the autonomy of others (informed consent procedures); and issues to do with minors and vulnerable persons. The AoIR guidance recognizes the potential differences between community and researcher expectations of norms, asking whether the research definition of the context matches ‘the way owners, users, or members might define it’, and asking researchers to define their own position within the context and compare it with that of their participants (AoIR, 2012). This is important within the Facebook context because, had the researchers done this, the backlash against the study might not have been as severe as it was. For the issue of consent, the discovery and establishment of normative expectations is, as we shall see, particularly important.
Informed consent as a waiver of normative expectations
A brief outline of Manson and O’Neill’s (2007) theory of informed consent follows. Informed consent should be made up of a series of waivers of expected behavioural and social norms. For any procedure or situation that may violate a person’s normative expectation for behaviour, such as using a knife to cut into a person’s body, the person needs to waive that particular norm (to permit surgery, for example). If they do not waive the expected norm, then consent has not been given (as would be expected in cases such as a stabbing). In order for people to make an informed decision, they must be involved in an effective communication framework that allows for simple, easy-to-understand language, and epistemic responsibility on the part of the consent-requester for the consenter’s level of understanding in the consent transaction. In this theory, the focus shifts substantially from the assessment of the consenting party (and their autonomy) to the assessment of the quality of communication of the nature of the norms that are to be waived. This theory is less ambiguous than the currently practised theories (especially the effective consent disclosure model discussed above): rather than simply providing a set of minimum threshold requirements for assessing the autonomy of a user’s actions, it places specific and explicit responsibility for accurate communication of requests for waivers of normative expectations on the consent-obtaining party.
The problem of disclosure, such as that encountered with overly lengthy and incomprehensible terms of service such as those used by Facebook, can be addressed more directly by the waiver-based approach. Facebook can, in fact, improve their terms of service in such a way that the expected norms are included as part of the base standard, but that expectations that need to be waived are communicated effectively and consented to (either negatively or positively) by the user. The use of the theory of communication as a transaction can more easily restrict the types of language used in a waiver disclosure, because successful communication requires that it be written in a language that is intelligible, easy to follow and relevant to its audiences, ‘rather than overwhelming them with a flood of irrelevant or distracting – even if intelligible – information’ (Manson and O’Neill, 2007: 85). Also, both parties should know some background about each other – sites and companies employing such terms of service should know the target markets for their services and adjust the language accordingly. However, this is not all that a successful communication transaction requires: in addition, the communication must not be dishonest, and needs to be accurate for the purpose. A company should state up front if they are going to be, for example, using user data for purposes other than as communication between users, and, as accurately as possible, what sorts of outcomes this use may have. This theory requires successful communicators to be responsible in the truth-claims they make in order not to undermine the communication transaction and thus undermine the informed consent procedure. 
In this way, sites that wish to violate normative expectations of their users should communicate explicitly with their users about these potential violations in ways that emphasize the truth-claims being made rather than focusing on full disclosure saturation of information, as is often traditionally required. This can avoid the ‘numbness’ that is associated with getting users to read, understand and accept the traditional terms of service and similar documents across multiple sites.
This approach also, importantly, avoids allowing those who deliberately mislead their users or assume acceptance of activities, such as in the Facebook study, to get away with such behaviour, by making them just as responsible for the appropriate and accurate transaction of information as the user who signs up for the site and agrees to the terms of service by doing so. It is too easy under the current system for these parties to state that they adhered to the requirements of full disclosure, while failing to mention that, in fully disclosing, the important information was buried so deeply within the agreement as to be virtually drowned out by useless or inappropriate information. Currently the responsibility for understanding and isolating the relevant pieces of information rests entirely with the user, as, arguably, the consent-requester can avoid accountability by pointing to their disclosure of the information, which theoretically empowered the user to make an autonomous decision. The waiver-based approach firmly states that this is not acceptable, because the required standard of communication dismisses full disclosure as an appropriate way to inform the user. If full disclosure is used as a defence but there is inadequate or unreliable communication (such as that brought about by the ‘flood of irrelevant or distracting […] information’ [Manson and O’Neill, 2007: 85]), then the informed consent procedure has failed, and the consent given by the user was not informed. Such is the case in Facebook’s study – any information about what a user’s data might be used for is buried deep within the terms of service and data use policies, which are a long and complex read for most users (the terms of service run to over ten pages’ worth of text that is rarely drawn to the attention of the user outside of initial sign-up, and which can be modified at any time).
Even if the user were to read (and re-read) these documents, the wording is ambiguous; for example, in the Data Use Policy, research is only mentioned as part of internal operations, and does not necessarily describe externally publishable research (such as that of the study).
It is important to recognize that, under this approach, the responsibility for successful informed consent events shifts back to the consent-requester – there is a strong incentive for Facebook’s users to accept the terms of service, because being on Facebook is socially desirable, and a strong disincentive for Facebook to narrow the general terms of service. If Facebook can keep its terms of service general (within the legitimate normative expectations of users – these will be discussed later) and seek explicit consent through successful informed consent procedures for potential norm violations, this may make it easier for the company to retain legitimacy and favour with its users, and to cover itself legally.
In this section I have described the Facebook emotional manipulation study, identified the study’s lack of informed consent as a problem according to research ethics, described the problems with obtaining informed consent in its traditional forms in ICT situations, and offered a solution that in turn can solve future informed consent gathering problems on social media (and similar) sites. In the next section I will suggest some ways for companies such as Facebook to identify the normative expectations for which they should request explicit consent in the manner described above, and make some suggestions as to how to present the waiver for users to consent to the violation of these expectations.
Normative expectations
As with internet research ethics, there is no ‘one-size-fits-all’ solution to determining what normative expectations exist. However, if researchers engage with the community prior to carrying out the research, they can identify the concerns users may have, what users would consider inappropriate or uncomfortable, and so on. They should also consult with experts in research methods and ethics to ensure that all aspects are covered, whether through an ethics committee (such as an IRB or a REC) or independent consultants. In fact, unlike many traditional research projects, where interacting with the community could potentially sway results, big data projects such as Facebook’s can easily separate those participating in community engagement procedures from those to be involved in experimental research projects as subjects, if potential bias is a concern.
As an example of some normative expectations that Facebook and similar organizations might want to consider, I have identified some themes that run through the reactions of users to the Facebook emotional manipulation study as reported in the press and on blogs. This is not an exhaustive review, and ideally there would be direct engagement with the population to provide a fuller list of expectations that might potentially be violated. I have focused on consent-related issues specifically, though there are other issues that are highly likely to be important to users of the system. These reactions were collected via a search of Google News for terms related to the Facebook study, including the title of the article, ‘Facebook emotional manipulation study’, ‘Facebook study’ 3 etc., the results of which were then further filtered to ensure their applicability to the actual study (and not other studies). These articles were then read, and the consent-related problems described within them identified for further analysis. The responses from lay stakeholders were separated from the responses from academics, which were used as a comparison mechanism. This approach has limitations: the responses were post hoc (ideally, as described before, stakeholders would be queried about potential issues before such activities are undertaken or included within a terms of service document); they were not particularly representative of all perspectives on the study (although this does not affect the initial findings, as we are looking for potential consent-related problems, not positive aspects); and the review is not exhaustive, but simply attempts to identify some illustrative emergent aspects (violated normative expectations) that stakeholders had with the study, and that could be used within the waiver approach.
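As a rough illustration of the collection and filtering step described above, a minimal sketch follows. The article records, field names and matching logic here are hypothetical stand-ins, not the actual corpus or tooling used for the review; the sketch shows only the shape of the procedure: keep articles matching the search terms, then separate lay responses from academic commentary.

```python
# Hypothetical sketch of the review procedure: filter collected articles
# by the search terms used, then split lay from academic responses.
# All records and field names below are invented for illustration.

SEARCH_TERMS = [
    "facebook emotional manipulation study",
    "facebook study",
    "emotional contagion",
]

articles = [
    {"title": "Facebook study manipulated users' emotions", "source_type": "press"},
    {"title": "Ethics of the Facebook emotional contagion experiment", "source_type": "academic"},
    {"title": "Facebook quarterly earnings report", "source_type": "press"},
]

def mentions_study(article):
    """Keep an article only if its title matches one of the search terms."""
    title = article["title"].lower()
    return any(term in title for term in SEARCH_TERMS)

# Filter for applicability to the actual study, then separate the
# lay-stakeholder responses from the academic commentary.
relevant = [a for a in articles if mentions_study(a)]
lay = [a for a in relevant if a["source_type"] != "academic"]
academic = [a for a in relevant if a["source_type"] == "academic"]
```

In practice the consent-related problems would then be identified by reading each relevant article, with the academic responses used as a comparison mechanism, as described above.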
The main problems with consent in this particular study, as identified by ‘users or members’ and reported online by the media (Hern, 2014; Hill, 2014; Meyer, 2014b; Waldman, 2014) and by users in blogs and online polls (a particularly well curated set of responses was written by Deterding [2014a], and a poll was conducted by The Guardian [Fishwick, 2014]), can be broken down into the following issues:
(A) Users were not asked to participate in the experiment, and did not know they were participating in research;
(B) users did not know their news feeds were being manipulated by Facebook (although they were not surprised upon finding out);
(C) users were not made aware on signing up for the service that they could potentially be involved in experiments while using Facebook; and
(D) Facebook did not see fit to seek ethical oversight despite ostensibly manipulating people’s emotions.
These match some of the issues with the Facebook study identified by academics commenting on it (boyd, 2014; Hunter, 2014; Klitzman and Appelbaum, 2014). Issue B is a particularly interesting finding because there is an element of manipulation that Facebook applies to news feeds even when it is not carrying out research. Even so, according to the literature mentioned above, users were unaware that their feeds were being manipulated at all, let alone for the study.
From these, I suggest the following normative expectations that explicit consent ought to be sought for prior to violation of them:
1. Users expect their Facebook news feeds to show posts from the people, communities and organizations they follow faithfully and without manipulation. (Resulting from issue B)
2. Users expect to be explicitly asked to participate in any experimental research conducted by Facebook. (A, C)
3. Users expect Facebook to conduct human research under the auspices of some sort of independent ethics committee. (A, C, D)
It is important to note that some of these expectations extend outside research situations – especially (1) – so the issue of manipulation of news feeds could be dealt with at sign-up and whenever manipulation procedures change. It is similarly important to note that explicit consent requests to waive these expectations cannot be made only once at sign-up, but ought to be re-requested if the situation changes. For example, with (1), there could be an option for users to view their news feeds with no or minimal manipulation (as there are currently the ‘Most recent’ and ‘Top stories’ options), and those whose feeds are manipulated should be notified if the algorithms that do this change.
Practical implementation of this is not technically difficult. Modern web development techniques would allow Facebook to pin a notification to the top of the news feed or, as currently occurs when an unusual login is detected, display an alert that remains until the user has acknowledged it in some way. It is important that this particular notification is not simply a disclosure situation – a user has to be free to consent or not consent to the changes. Subsequent design decisions will need to be made carefully so that the option to not consent does not necessarily bar the user from using the site (though, depending on the consent request that was denied, it might place the user on a more bare-bones version of the site). Obviously this will need to be further investigated in specific examples as they arise, but it could also lead to better-designed experiences for users.
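A minimal sketch of such a waiver flow follows; all class, field and mode names are hypothetical illustrations, not Facebook's actual systems. It shows only that the pieces argued for above are straightforward to model: a pending waiver request pinned until the user responds, a fallback (unmanipulated) feed when consent is withheld, and invalidation of prior consent when the manipulation algorithm changes.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a waiver-based consent flow: a pending waiver
# request is pinned (like a notification) until the user responds;
# declining degrades the feed rather than barring the user; and a change
# to the manipulation algorithm invalidates previously granted consent.

CURRENT_FEED_VERSION = 1   # bumped whenever the feed algorithm changes

@dataclass
class WaiverRequest:
    norm: str       # the normative expectation to be waived
    version: int    # version of the procedure the waiver covers

@dataclass
class UserConsentState:
    pending: list = field(default_factory=list)   # pinned, unanswered requests
    granted: dict = field(default_factory=dict)   # norm -> consented version

    def notify(self, request: WaiverRequest) -> None:
        """Pin the waiver request until the user explicitly responds."""
        self.pending.append(request)

    def respond(self, request: WaiverRequest, consents: bool) -> None:
        """Record the user's free choice to waive (or not waive) the norm."""
        self.pending.remove(request)
        if consents:
            self.granted[request.norm] = request.version

    def feed_mode(self) -> str:
        """Without a current waiver, fall back to an unmanipulated feed."""
        if self.granted.get("feed_manipulation") == CURRENT_FEED_VERSION:
            return "curated"
        return "most_recent"

user = UserConsentState()
request = WaiverRequest("feed_manipulation", CURRENT_FEED_VERSION)
user.notify(request)
assert user.feed_mode() == "most_recent"   # no consent yet: unmanipulated feed

user.respond(request, consents=True)
assert user.feed_mode() == "curated"       # waiver granted for version 1

CURRENT_FEED_VERSION = 2                   # algorithm changed
assert user.feed_mode() == "most_recent"   # consent must be re-requested
```

The design point is that declining maps to a usable fallback mode rather than exclusion, and that consent is versioned against the procedure it waives, so changing the procedure re-opens the consent transaction.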
Responsible Research and Innovation
Having made these suggestions for addressing Facebook’s informed consent issues, it is important to look at the responsibility for enforcing the practice of ethical human research. The ‘Facebook loophole’ (Deterding, 2014b) that allowed the research to be associated with Cornell University and published in PNAS shows a gap between the approaches to human research of private companies and of research groups (such as universities) (boyd, 2014; Hunter, 2014).
It is important at this point to note that there has been a response from a group of bioethicists stating that there was no ‘egregious breach of ethics or law’ in the study (Meyer, 2014a) and that turning to regulation could drive such research ‘underground’: ‘If critics think that the manipulation of emotional content in this research is sufficiently concerning to merit regulation or charges of unethical behaviour, then the same concern must apply to Facebook’s standard practice — and many similar practices by companies, non-profit organizations and governments.’ As discussed earlier in this article, and as I have previously advocated for end-user licence agreements and privacy policies (Flick, 2013), it is absolutely the case that Facebook’s standard practice is also at issue, and that problematic informed consent procedures in the realm of technology need to be fixed in order to protect users and consumers of that technology. However, this case has also brought to light significant problems with human research within big data companies, exemplified by the Facebook study, and particularly problems of informed consent, as discussed earlier in this article. The fact that the bioethicists involved in drafting the Nature article could not agree on whether the manipulation of news feeds was a normative expectation to begin with shows that there are significant problems with the understanding of the community and the normative expectations involved; to decide, from an ethical perspective, that these expectations should be automatically waived without consultation is professionally irresponsible at best.
In fact, the suggestion by these bioethicists that the status quo of allowing companies to operate outside of independent ethical oversight should continue is negligent at best – it allows companies to claim that their avoidance of human research ethics has been condoned by a group of bioethicists – and at worst undermines the hard work of the field of computer ethics, which attempts to address these issues and fill Moor’s ‘policy vacuums’. Yes, rigorous science should be conducted, but science conducted within a company should not be treated any differently, from an ethics perspective, from science conducted within a university. This does not shift that science ‘underground’; it gives it that desired rigour.
This brings us to the question of how we can fill the wider gap between the requirements for university-led research and those for research conducted within a company – the ‘Facebook loophole’ (Deterding, 2014b). The emerging discourse around RRI might be able to provide pointers toward a framework for further discussion and research into how this might be achievable. RRI aims to empower research and innovation to be responsible – both in research institutions and in industry. RRI is defined by Stahl (2013) as a ‘higher-level responsibility or meta-responsibility that aims to shape, maintain, develop, coordinate and align existing and novel research in innovation-related processes, actors and responsibilities with a view to ensuring desirable and acceptable research outcomes’. There is a significant movement within the European Union to bring these principles into higher-level research policy as well as into industrial contexts, with funded projects such as Responsible-Industry (2014) looking at best practices for instituting this meta-responsibility. The idea of bringing responsibility in at a meta-level allows us to organize responsible innovation with existing frameworks – in Facebook’s case, integration with ethics review, codes of ethics, stakeholder consultations, and other best practices – but it requires industry itself to get involved and willingly accept such a meta-responsibility. To achieve this, the incentives to drive that acceptance must be found; this is one of the goals of the Responsible-Industry project.
Indeed, so far the analysis of the applicability of RRI frameworks to industry has produced familiar results. Once again, there is unlikely to be a ‘one-size-fits-all’ approach, but ‘the most promising route is to tailor [academic] frameworks for specific industry sectors and for differently sized organisations’ (Søraker and Brey, 2014: 24). Søraker and Brey point to corporate social responsibility (CSR) as an example of the integration of responsibility mechanisms within industry, which could be used as part of wider RRI frameworks. Standards and certifications could be used to further establish companies’ reputations and improve their accountability to their users. However, these ideas need further research and implementation – and companies need to buy into their worth – before we see any change in this regard. Such an effort would not be without challenges. Nevertheless, if the response from Facebook is any indication, ‘bottom-up’ grassroots reactions such as these can significantly incentivize change in processes within large industry players: Facebook has recently responded officially to the uproar around the study, stating its commitment to doing research ‘in the most responsible way’ (Schroepfer, 2014) by improving its guidelines for research and its review and training processes. For other companies, however, it may not be so easy. Small companies (such as start-ups) with few resources may find such frameworks difficult to implement, or give them a lower priority. Large companies may wait for something ‘bad’ to happen before implementing such a framework. The incentives, barriers and risks for this sort of approach need to be further examined.
Companies that deal with human data – and especially those, like Facebook, whose social media platforms are involved in people’s daily lives to such a degree – need to remember that there are people at the end of the data rainbow. This means treating them as people with different ages, genders, backgrounds, cultures, desires, wishes, dreams, disabilities, strengths, vulnerabilities and many other attributes. Facebook and other social media companies have a responsibility to treat their users with care and respect. This goes beyond the question of whether the Facebook study was just ‘no more than usual’ (Meyer, 2014a) and into the day-to-day running of the company. If companies acted ethically from the outset (and performed their research and innovation responsibly), the ‘Facebook loophole’ would not exist. Until that day, however, it would be prudent to maintain the ethical requirements and quality of human research science by rejecting research studies from any organization, whether academic or industrial, where the data collection is not performed with ethical oversight (and if not independent ethical oversight, then at least procedures that would satisfy such oversight should it exist). Ideally, all modern market research that involves humans should come under such ethical scrutiny if we wish to close the research ethics gap.
Conclusions
The Facebook study was controversial for good reasons – it violated the normative expectations of the very users it was studying. The lack of informed consent to waive these expectations directly produced the backlash that led the journal’s editor to publish a correction (Meyer, 2014a) and one author to issue a statement apologizing for ‘the way the article described the research and any anxiety it caused’ (Kramer, 2014). Facebook’s COO, Sheryl Sandberg, also apologized for the lack of communication (although she did not concede that the study was problematic in itself) (Bailey, 2014). That those involved framed these as communications issues further highlights the importance of engaging with users to determine their expectations, and of gaining their informed consent, before proceeding with such studies. If these studies, and the associated daily practices, had been brought into a broader framework that ensured engagement with stakeholders over these sorts of issues, the controversy would have been far more likely to emerge well before data collection had begun. Whether this higher-level ‘meta-responsibility’ should incorporate an independent ethics review committee ought to be the focus of future research, as should the incentives for driving uptake of such a framework; for now, it is plain that there is a gap between university-led research and industry-led research when it comes to ethics, and that it should be rectified in some way.
In this article I have argued that the study conducted by Facebook and Cornell University researchers, as described by Meyer (2014a), did not have adequate ethical oversight and did not follow the informed consent procedures required by internet research ethics, and that even if a clause permitting the use of users’ data for research had been added to Facebook’s terms of service, this would not have constituted valid informed consent. I identified a significant gap in research ethics between university-driven and industry-driven human research, a problem that needs to be rectified. I suggested a new theoretical method for introducing informed consent into social media sites that makes the company, rather than the user, responsible for the consent decision, and gave some practical suggestions for identifying the normative expectations that such companies could draw on in informed consent procedures. Finally, I pointed in the direction of RRI – a relatively new but quickly spreading movement to incorporate responsibility into the innovation process – and considered how the larger differences between university-led research and industry-led research need to be addressed.
Footnotes
Declaration of conflicting interests
The author declares that there is no conflict of interest.
Funding
This work received no specific grant from any funding agency, but the author is a member of the Responsible-Industry consortium, which is funded under the European Commission’s 7th Framework Programme, Grant Agreement #609817.
