Abstract
In spite of many and varied concerns that the processes of institutional ethical review are flawed, cumbersome, and in need of reform, these processes do provide effective protection in certain situations for individual research subjects, researchers, and the institutions from which researchers venture to conduct their fieldwork. Many in the social sciences have argued that the ethical protocols researchers must observe are designed to protect against the potential dangers of much riskier biomedical research, and that social research is, by and large, harmless. Although there is some validity to that argument, in this paper social research is assessed not in terms of its risks to the individual participant, but in terms of its risks to communities. By examining the protocols of the Belmont Report, the Institutional Review Board, and the American Sociological Association's Code of Ethics and its ethical review process, this paper discusses some of the major blind spots in the ethical review of social science research, applying the analysis in particular to the case of indigenous communities, who have historically sustained significant damage from academic researchers against which no standardized institutional review could have protected them. The paper covers the history and parameters of these three ethical review institutions, identifies shared blind spots, and discusses the consequences of these blind spots for indigenous communities, ending with some suggestions of ways to address the problems in the system.
Introduction
As norms concerning the relative importance of certain types of costs and certain types of benefits have shifted, so too has ethical practice. Changing beliefs about racial equality, the sanctity and universality of scientific knowledge, and the inviolability of the sovereignty of the individual over their own body have all played central roles in radical evolutions in what is considered ethical practice for researchers. In recent decades, we as a scholarly community have gone to significant lengths to ensure that ethical conduct is, if not wholly guaranteed, strongly encouraged, and that the consequences of unethical conduct are severe enough to act as an effective deterrent.
Despite the steps toward institutionalization of mandatory ethical conduct, however, there remain subject populations and research practices that exist in the blind spots of academic ethical oversight—some in socio-historical and political spaces so foreign to Western academic institutions that these same institutions are functionally incapable of protecting them. These structural deficiencies in academic ethical oversight, their consequences, and some suggestions regarding roads toward more comprehensive ethical practices in research are the concerns I address in this paper. In particular, I address the inability of institutional ethics to protect subjects at the community level. To be sure, the failure of researchers to provide adequate protections to communities has been addressed at some length. However, what I am suggesting here takes the critique a step further; namely, I contend that we are not simply failing to adequately protect communities, but that protecting them without direct community involvement and oversight is and will remain impossible for academic institutions of ethical review, regardless of best intentions or even extensive reforms. In order to demonstrate this point, this article draws on the structure and scope of three particular modes of ethical oversight of academic research: the “Belmont Report,” the Institutional Review Board (IRB), and the American Sociological Association’s (ASA’s) Code of Ethics.
These are far from the only mechanisms of ethical oversight in operation, but they are crucial mechanisms for sociologists operating in the US, comparable to similar mechanisms operating in other political and disciplinary jurisdictions. The choice to examine these three mechanisms is rooted primarily in the extent of their impact on research practice in the US, and their ability to demonstrate the inevitable inadequacies of institutional protections, regardless of their particular structure, extent of bureaucratic rigor, or level of disciplinary specificity. The ASA Code of Ethics was a natural choice based on my own disciplinary standpoint, but it is simply one case among other similar codes of conduct produced by academic organizations invested in human subjects research. By overlaying the structures of the Belmont Report, the IRB, and the ASA Code of Ethics against one another, we are able to see with stark clarity the gaps that remain.
Structure, power, and scope of academic ethical oversight
The Belmont Report, produced under the National Research Act of 1974, made recommendations for ethical human subjects research practice, though it applied most readily to medical, biological, and other laboratory sciences. Its three “Basic Ethical Principles” of (a) Respect for Persons, (b) Beneficence, and (c) Justice were designed to be comprehensive, but were left intentionally broad in order to facilitate local interpretation. These principles included assumptions of individual autonomy, the responsibility of the researcher to respect and protect that autonomy, and equity in both the shouldering of risk and the benefits of research, as well as more abstract and subjective judgments, including the weighing of subject risk against the benefits to the subjects, their families, social networks, and the society at large. Interestingly, the latter guideline is addressed not only to the researcher but also to society itself: the wider population, the report claims, has an obligation to consider not only the well-being of individual research subjects but also the importance of the research and its potential to benefit the broader society. It is notable that providing structural or institutional protections for communities is demonstrably more difficult than providing such protections for individual research participants, and the Report contains little language related to community protection. The report ultimately did relatively little to change the culture of biomedical and behavioral research, which would continue to assess ethicality based on the principle of the “greater good” (Emanuel and Grady, 2007).
Despite the relatively minimal requirements for membership on a given review committee, which may comprise as few as five members (and fewer for an approval vote), most social scientists are familiar with the at times frustratingly minute detail one must devote to producing an IRB application that passes all the necessary checks involved in institutional review. In fact, one of the primary complaints about the IRB process from social scientists is that the review process is too stringent for the vast majority of social science projects, in which the risks are generally (according to the complainants) minimal at most (AAUP Committee A, 2006; Ashcraft and Krause, 2007; Borenstein, 2008; Carpenter, 2006; Feeley, 2007; Kim et al., 2009; Rasmussen, 2009; Stark, 2007). In the course of seeking IRB approval for a proposed research study, the applicant must provide documentation of the purpose of the study, of informed consent (or reasons why such documentation is unnecessary or would be a significant hindrance), and of the risks and benefits for the subjects and/or other communities. There must also be assurances of privacy, safety, and confidentiality for the subjects, and of any additional measures to be taken for the protection of especially vulnerable populations. The expectation of the IRB contract—indeed, the explicit guarantee—is that the researcher will conduct the research in ways that conform to the guidelines agreed to in the contract, and only deviate from those prescribed rules after filing an amendment with the IRB and receiving their approval. The practical circumstances of research practice, however, provide little assurance that these agreed-to guidelines will actually be followed.
In the event that the IRB is notified of misconduct (or discovers it during random or targeted audits, which many IRB offices conduct), there are a number of possible consequences of variable severity, but public disciplinary action and a report of misconduct can have a strong adverse effect on a researcher’s career, making it more difficult to find work or obtain tenure, less likely that their work will be published or cited, and so on (Stern et al., 2014). However, the assumptions that subjects will be properly informed of the means of reporting, or that reporting will actually occur in situations of misconduct, are specious at best. It is also crucial to stress this further concern: IRBs are specifically instructed
In 1970, the ASA published its first Code of Ethics, establishing a set of general ethical guidelines for sociological work. It outlines not only principles for the protection of human subjects, but also a variety of other principles pertaining to professional conduct. The Preamble specifically mentions that the individual researcher may
Communities in the ethical blind spot
Each of the three means of institutional oversight analyzed here possesses its own strengths and weaknesses, and to some extent the benefits of one may make up for the deficiencies of another. However, even when these systems are thought of as a series of overlapping fail-safe devices for social research, there remain cracks through which entire communities may fall, and therein lies the central point of this article: these overlapping systems not only do not currently provide adequate protections for communities (an issue noted extensively by existing scholarship [Castellano, 2004; Gbadegesin and Wendler, 2006; Weijer and Emanuel, 2000; Weijer et al., 1999], although mostly in reference to biomedical rather than social research), but
The ASA Code of Ethics, the IRB, and the Belmont Report all followed on the heels of the Civil Rights Movement, in the wake of discoveries of ethical misconduct by scholars in the US, and with the memories of the Nuremberg Trials and the subsequent “Nuremberg Code” statement on ethical treatment relatively fresh in the minds of scientists, doctors, and policymakers. Each of these institutional means of oversight reflected the ethical perspectives of the time, as well as the particular professional interests of the researchers and policymakers themselves, focusing largely on the treatment of subjects in biomedical research while also creating a system of rules within which research could be conducted with the assurance that comes with official institutional sanction.
The Belmont Report and the IRB share an additional quality that is central to the discussion at hand: they are intently focused on the well-being of the individual (Emanuel and Weijer, 2005)—an institutionalization of ideological individualism which had been used in a variety of ways throughout Western modernity as a means of discrimination, oppression, and dispossession (e.g., Bose, 2003; Fevre, 2016; Walls, 2015).
It is fitting that these two structures of ethical oversight should be concerned primarily with the individual. Both the Belmont Report and the IRB regulations emerged from a piece of federal legislation produced at a time when the rights of the individual were very much at the forefront of national political life (Mills, 2014). Under the Belmont Report, some measure of consideration is given to society at large under the principles of Beneficence and Justice, though notably there is a tacit distinction between “society at large” and “communities.” The former term suggests a large, amorphous group of individuals united under a particular social structure; the latter includes particular demographic groups united by experience, status, and common culture, ethnicity, history, class, and so on. The requirements for IRB approval make a similar distinction.
The ASA Code of Ethics represents a somewhat different and more encouraging case, although it, too, is not without its problems. In a chapter in
Where protection of communities is concerned, again the ASA Code of Ethics shows a significant improvement on the principles and practices of both the Belmont Report and the IRB process. The IRB evaluation process is particularly striking in its focus on the protection of the individual
As with the operationalization of the Belmont Report through the regulatory document controlling the IRB, the ASA Code of Ethics has its own practical review process, administered by the ASA’s Committee on Professional Ethics (COPE). The function of review by COPE is fundamentally reactive rather than preventative. The presence of a formal review process and the possibility of potentially severe professional sanctions may act as a deterrent against misconduct, but this raises the same questions as the deterrent effect of IRB sanctions, given that investigation of misconduct only happens if misconduct is reported, and even members of the ASA are not required to provide research participants with contact information for COPE. If an accusation is made, it is more likely to come from a peer than from a research subject, which affords the researcher committing the misconduct significant opportunities to twist or conceal it prior to peer review, and thus avoid consequences.
There is another concern, which COPE shares with the IRB, regarding the application of ethical review to the protection of communities: namely, it represents a form of scholarly paternalism over the subjects and the communities to which they belong. For many research subjects this might not be a concern—it is reassuring to know that there are multiple formal processes in place to protect them from exploitation or harm by researchers, and consequences should such misconduct take place. However, this process also makes certain assumptions about who holds the power to determine not only the ethics of a given situation, but also what protections vulnerable populations are entitled to, how heavily community and individual concerns about the research should weigh in assessing its risks, the severity and impact of misconduct, and to whom the researcher should be accountable. These questions are particularly salient when considered against the struggles for sovereignty and self-determination that indigenous communities have waged throughout colonial history, and the systematic dispossession of indigenous knowledge and lifeways by agents of Western science.
To be clear, the structural issues in these three modes of institutional ethical oversight that prevent them from adequately identifying and addressing ethical misconduct, particularly where communities are concerned, are not, I would argue, solvable internally through reform or restructuring of the processes or institutions. This is not to say that reforms to the processes involved would not be meaningful and potentially beneficial, but they would be unable to address the core problem at hand; namely, that the development, implementation, and enforcement of universal principles and rules for social research will, by nature, be inadequate to address particular instances of misconduct or risk, especially at the community level. The example of the protection of indigenous research subjects, the communities to which they belong, and the intellectual and cultural sovereignty of indigenous peoples provides an instructive case of the structural deficiencies of current means of ethical oversight in social research. The case of indigenous social research effectively illustrates not only the holes in top-down institutional oversight, but also the inevitability of these deficiencies and some of the approaches that may be necessary in order to provide the forms of protection that the Belmont Report, professional codes of ethics, and the IRB cannot.
Protection of indigenous communities
The vulnerability of indigenous individuals and communities is defined in no small part by the consequences and ongoing experiences of colonial domination and eradication. In the US, many indigenous communities face social, cultural, and political problems stemming from overwhelming poverty (Essenburg, 2014), high rates of drug and alcohol abuse (Gone, 2013), the ongoing epidemic of suicides (Grande, 2015), persistent physical and mental health problems (Gone, 2013), and physical and sexual abuse (particularly against indigenous women) (Comas-Diaz and Greene, 2013). For individual indigenous people, these problems are immediate and tangible parts of daily life. However, according to many indigenous theorists and other scholars of historical trauma (Alfred, 2009; Atkinson, 2002; Gone, 2013; Malley-Morrison and Hines, 2004; Million, 2013), these everyday, immediate problems came into being and grew to disproportionate size as a direct result of broader, long-term structures and processes of colonial control, dispossession, and violence. In keeping with the biomedical theme of much of the ethical literature on human subjects, the current methods of ethical review and protection would be—at best—a treatment for the exacerbation of symptoms by researchers interacting with indigenous communities, and would do nothing to address the underlying problem.
The challenges facing indigenous communities are distinct, wide-ranging, and innately long-term. The Belmont Report instructs researchers to approach subjects with respect, beneficence, and justice, but the operationalization of these principles through federal regulations and the IRB does not allow for consideration of indirect and/or long-term risks, which becomes an immediate problem when the primary risks posed by social research are themselves both innately long-range and foundational to the rest of the day-to-day challenges of indigenous lives. Considering that the interests of indigenous communities and settler colonial states are, by definition, at odds with one another, research which benefits “society,” as it is termed by the Belmont Report, could be actively harmful to indigenous communities—a structural-historical issue which can only be prevented if one is able to (a) consider the needs of particular communities, rather than individuals or “society at large,” and (b) think about the consequences of research beyond the scope of what happens immediately and directly following its conclusion.
The IRB has a further structural failing. Considering the potentially small size of review committees and the infinitely vast array of subjects covered by social researchers, it is unreasonable to assume that any team of reviewers, however knowledgeable and well-credentialed they may be, will have comprehensive knowledge of the varied concerns and vulnerabilities of all potential subject populations. And although committees do have the option to call on outside experts, their decision to do so hinges on their awareness that there is a problem on which they are not qualified to comment—a fairly obvious institutional paradox, and a real stumbling block when it comes to communities whose vulnerabilities are not well-known, such as indigenous peoples.
In the latter point lies the crux of the problem when it comes to ethical concerns in indigenous research: it is difficult to find mere awareness of past and present indigenous struggles, let alone expertise in these subjects, in the dominant institutions of knowledge production. Moreover, the diversity of indigenous vulnerabilities is such that, even if an IRB committee were fortunate enough to have a reviewer familiar with the challenges of a particular people or even region, there is no way to ensure that this expertise would be applicable to the vulnerabilities of other peoples and regions. During the review process, the IRB has a responsibility to attend to the concerns of the researchers regarding the ethical risks of their research, as well as to seek outside expertise if the circumstances call for it. This requires, however, that the review committee be able to recognize the need for further scrutiny or expertise in the first place, a recognition that is far from guaranteed. It is functionally impossible for the IRB to adequately protect indigenous communities, particularly in terms of issues of intellectual sovereignty or socio-political self-determination.
The problems of the reactive and paternalistic nature of the review process by the ASA’s COPE, guided by the organizational Code of Ethics, are directly applicable to the circumstances of research with indigenous communities. In the absence of adequate IRB review, we might instead wish to rely on the research subjects to report instances of misconduct to either the IRB or to the COPE, and if they were to do so, ideally the issue would be resolved fully and appropriately. However, there are a number of factors that could easily prevent a satisfactory resolution, including (a) lack of ethical documentation in the IRB contract due to lack of knowledge among reviewers, (b) the abstract or long-term nature of the grievance rendering it beyond the jurisdiction of the IRB, (c) lack of awareness of the review protocols among the subject population, (d) distrust of academic review institutions such as the IRB or COPE among the subjects, (e) ignorance of indigenous histories or issues among the COPE, even if the subjects know to contact them in the event of a problem, or (f) disagreements among participants over the appropriateness of placing the ethical decision-making in the hands of the IRB or ASA (as colonial, paternalistic institutions).
The problems of community protection as applied to indigenous communities have not gone unnoticed within the scholarly discourse on research ethics—indeed, outside of (and even, to some extent, within) the field of bioethics, discussions of community protections have been applied most extensively to the case of indigenous peoples (Ball and Janyst, 2008; Brugge and Missaghian, 2006; Castellano, 2004; Denzin et al., 2008; Dunbar and Scrimgeour, 2006; Glass and Kaufert, 2007; Tauri, 2014; Tuhiwai Smith, 2012). This scholarship has covered the topic of indigenous research protections at some length, and it is not my intention to reinvent the wheel. However, in light of the foregoing analysis of the deficiencies of the Belmont Report, the IRB, and the ASA Code of Ethics, it becomes clear that the kind of protections for which scholars of ethics in indigenous research have been calling will, for better or worse, never come from academic institutional sources, and can only come from within the communities themselves.
How do we deal with the holes in the process?
Documents such as the Belmont Report and the ASA Code of Ethics serve reasonably well to protect the individual and to encourage sensitivity to the consequences of social research, but the commitment to community-level considerations is simply impossible to operationalize through standardized ethical review. Structural reforms to the IRB have been an important step, but most of the fundamental problems with the review process will not be addressed by reform. It will never be possible to ensure that all IRB offices have access to information on the particular vulnerabilities of all possible subject populations, or that they take into account all possible long-range risks; there is a reason that the prohibition of such considerations is written into the regulations. If approval were to be withheld from every study for which there would be risk of harm that might manifest at
The solution is not going to be found in extensions of academic or governmental review, nor in any form of standardization, save the requirement to look beyond the present instruments of ethical oversight. In order to take into account considerations of community, of local historical, cultural, and political context, of long-range consequences particular to the study at hand, and of the right of communities to determine for themselves when and to what extent they have been harmed, and the appropriate means of resolution, as researchers we need to look to those same communities to which our participants belong. The process of ethical review must be more than either prescriptive or reactive; it must be an ongoing dialogue between the researcher and the communities whose lives and futures the research relies upon and seeks to affect. The exact manifestation of community consultation will necessarily differ in research projects with different communities, as mechanisms for determining who should speak for a given community will likewise differ from one to the next. In the case of some indigenous communities, including those with whom my own research has taken place (Cragoe, 2017, in press), the question of who carries the authority to speak on the community’s behalf is a fraught one, as there is often internal conflict between state-recognized institutions of indigenous governance on the one hand, and voices of traditionalism, political dissent, or other intra- and inter-tribal factions on the other (Deloria and Lytle, 2013; Foley et al., 2013; Wiethaus, 2007), to say nothing of personal differences in belief concerning who speaks for the whole.
To be sure, the importance of community partnerships in research design, implementation, and analysis has not gone unnoticed by the scholarly or regulatory communities, with institutions including the World Health Organization (Hankins, 2016) and the UK Department of Health (Department of Health, 2005) writing community consultation into their guidelines for best practices in research (although these acknowledgments are often explicitly applied to biomedical research, owing to the ongoing assumption that the physical, individual risks associated with this type of research are where research ethicists should most closely focus their attention). These guidelines nevertheless continue to place the onus of responsibility for community protection primarily on principal investigators, who are typically not themselves members of the communities in which they conduct research. The language of the ethical instruction in the UK Department of Health guidelines is demonstrative of the problem: “[The Principal Investigator is responsible for ensuring that] potential participants and other service users and carers are involved in the design and management of the study
In terms of scholarly work toward the development of best practices for community involvement in the planning and management of human subjects research, there has similarly been growing awareness of these issues since the late 20th century. In fact, researchers studying a dazzling variety of social and health problems have discussed and implemented strategies for incorporating community participation (Cochran et al., 2008; Israel, 2014; Minkler, 2005; Sharp and Foster, 2002; Viswanathan et al., 2004, to name a few), with scholars of Participatory Action Research (PAR) operating at the applied edge of this change in human research methodology (Chevalier and Buckles, 2013; Dawson and Sinwell, 2012; McIntyre, 2008; Smith, 2008). Much of the literature on good practice in community involvement provides useful guides both to the various ways in which communities can be involved in the research process and to the common issues that researchers face in collaborative work, including problems with trust, ownership, representation, and locating expertise (Israel, 2014; Minkler, 2005; Smith, 2008). Not all research needs to conform to the structure of PAR, but PAR does offer certain lessons in ethical methodology that, I would argue, can usefully be applied to the problems discussed here. Appropriately, PAR is a very common methodological approach among researchers working with indigenous communities (e.g., Davis and Keemer, 2002; Holkup et al., 2004; Jetter et al., 2015; Johnston-Goodstar, 2013). As noted previously, control over their own knowledges and information is a concern of central importance for many indigenous communities, and partnerships that facilitate internal community generation, analysis, and dissemination of information through PAR have been a valuable tool.
These partnerships often involve professional researchers sharing methods with indigenous community partners, and although direct collaboration is not always necessary, establishing a standard for community consultation would go a long way toward dealing with the ignorance of local context in academic review.
It is notable, however, that despite the proliferation of methodological and empirical work on the value of community partnerships, these sources nevertheless neglect to adequately demonstrate the necessity of the strategies they propose in the context of the hegemonic institutions of ethical oversight. (One exception to this omission can be found in the 1999 commentary publication in
These kinds of solutions would surely create a greater burden for the researcher, and there would no doubt be many who would see these additional measures as unnecessary, given the common perception that social science research is, by and large, of very little risk to the participants. However, this is a burden that I would argue we must bear in order to provide greater protection for communities, and to institute checks against the kind of epistemologically exploitative research that has characterized so much of social research’s history.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
