Abstract
Research ethics, once a platform for declaring intent, discussing moral issues and providing advice and guidance to researchers, has developed over time into an extra-legal regulatory system, complete with steering documents (ethics guidelines), overseeing bodies (research ethics committees) and formal procedures (informed consent). The process of institutionalizing distrust is usually motivated by reference to past atrocities committed in the name of research and the need to secure the trustworthiness of the research system. This article examines some limitations of this approach. First, past atrocities cannot provide the necessary justification unless institutionalized distrust is a necessary or efficient means to prevent future ones – and there are several reasons to doubt this. Second, the efficacy of ethics review in safeguarding morally acceptable research depends on the moral competence and integrity of individual researchers – the very qualities that institutionalized distrust calls into question. Third, ethics guidelines cannot, as is sometimes assumed, educate or guide researchers in moral behaviour unless they already possess considerable capacity for moral judgment. Fourth, institutionalized distrust is a potential threat to the moral competence and integrity of researchers by encouraging a blinkered view of ethical issues, inducing moral heteronomy through incentives, and alienating them from research ethics. We conclude that the moral problem posed by inappropriate short-term behaviour on the part of researchers is dwarfed by the potential long-term consequences if their moral competence is allowed to deteriorate. Measures must therefore be taken to ensure that researchers are equipped to take individual responsibility and are not obstructed from doing so.
Introduction
Research ethics, unlike the natural sciences, produces normative output – in essence, statements on what ought to be done. While still an academic discipline, it has quite naturally come to double as the framework for extra-legal regulatory systems, much as jurisprudence is the foundation of legal regulation. It is tempting to assume that to be effective in guiding action, ethics must be formalized in the same manner, through steering documents, overseeing bodies and formal procedures.
Today, the number of ethical guidelines and professional ethical codes intended to guide research is increasing at a tremendous pace (Eriksson et al., 2008). We also expect more of them: The Declaration of Helsinki, for instance, has gone from modestly declaring itself ‘only a guide’ (World Medical Association, 1964) to forcefully asserting that ‘No national or international ethical, legal or regulatory requirement should reduce or eliminate any of the protections for research subjects set forth in this Declaration’ (World Medical Association, 2008). General principles have partly given way to enumerations of concrete rules, for instance with regard to what pieces of information should be disclosed to research participants. In some contexts, ethics review has increasingly become a matter of scrutinizing informed consent forms (Coleman and Bouesseau, 2008; Edwards et al., 2011; Hoeyer, 2005).
In this article we argue that ethics review and guidelines are insufficient to ensure morally responsible research. In some circumstances, regulatory research ethics can be more of a hindrance than a help. We begin by describing the paradigm of institutionalized distrust that currently informs it. Next, we argue that past atrocities cannot be drawn upon to back claims that research must be more strictly regulated unless what is proposed is a necessary or efficient means to prevent future ones. We thereafter consider the main limitations of ethics review and guidelines. With regard to ethics review, requirements of consistency invite rigidity; lack of reliable indicators of a project’s moral soundness may lead to idiosyncratic decisions; and the fact that committees depend on the moral agency of investigators is often overlooked. Strict adherence to guidelines is also no guarantee that moral responsibilities have been discharged. In fact, if guidelines are used as standards against which performance is measured, responsible conduct will occasionally be punished and blind rule-following praised.
In the penultimate section, we identify some particular risks with the current system. First, ethics review that focuses strongly on some ethical aspects of research risks diverting attention from other morally significant issues. Second, guidelines with a low level of abstraction – that is, those orienting towards rules rather than principles – encourage a checklist-like approach to ethics that makes individual moral deliberation appear redundant, eventually leading to heteronomy of action. Third, when rules contradict one another (as they often do), they fail to provide guidance to researchers, and may even alienate them. The irresponsible conduct that follows tends to precipitate tighter regulation, thus perpetuating a vicious circle. Consequently, though substandard behaviour in the short term is indeed worrying, the moral competence of researchers in the long term should be cause for even greater concern.
Institutionalized distrust
Social scientists have described the drive toward tighter regulation and systems of oversight as an expression of the ambivalence and insecurity that pervades post-modern society (Miller and Boulton, 2007). People, it is argued, can no longer rely on social norms to govern the actions of others; to dare to cooperate they must look for other guarantees. Where developing a personal relationship with the other is not feasible, one must then either find a trusted person to vouch for the other, or fall back on formal structures such as laws, rules and contracts – backed, of course, by appropriate mechanisms of sanction.
To the degree that this picture accurately describes the societies we live in, biomedical research is in trouble. If trust depends on social norms, the researcher will – to most people at least – count as an unknown other who should not be trusted. In some contexts, health care personnel with whom potential research subjects are more familiar can act as ‘proxies’ or guarantors (Johnsson et al., 2012), but this is not always a viable option. It could be argued that if researchers are either insufficiently trusted or insufficiently trustworthy, we ought to at least make their actions more predictable so that public support for biomedical research may continue. This normative position forms the essence of the paradigm known as institutionalized distrust (Hall, 2005; Sztompka, 1998). This article focuses on two of its mechanisms: oversight and formal rules. By giving an overseeing body – in our case, research ethics committees (RECs) – the task of distrusting researchers, the public will not have to; they can go on cooperating, confident that the necessary control systems are in place. However, to ensure effective oversight and to maintain the legitimacy of the overseeing body, we also need clear rules or performance standards against which deviations can be spotted. Guidelines, once intended to provide guidance, are today designed with this regulatory need in mind.
Institutionalized distrust resembles distrust between people in that it implies taking precautions, doing check-ups, and developing contingency plans in order to minimize risk, but it rests on instrumental rather than empirical standards of justification. Whereas distrust between people is warranted by evidence of untrustworthiness, institutionalized distrust is rational insofar as it is likely to make the research enterprise more trusted and – perhaps – more trustworthy. This must be borne in mind whenever past experiences are used to back future policies.
The problem to the solution
If the Nuremberg Code is the foundation of bioethics, the Nazi atrocities that preceded it serve as the cautionary tale – but what moral does that tale tell? It is commonly claimed that it teaches us the necessity of informed consent (Goldworth, 1999). Because we already know that informed consent is important, we may fail to notice how tenuous this claim is. Granted, involuntary participation is impossible insofar as the ideal of informed consent is in fact realized, but it does not follow that merely requiring that informed consent be obtained would have been effective. A legal requirement of voluntariness was already in place in 1931, but it made little difference to the victims (Hoeyer, 2008). Arguably, no amount of research regulation will protect minorities in a totalitarian state, let alone one embracing Nazi ideology.
Now consider a more recent large-scale transgression of human rights – the Tuskegee syphilis study. The subjects – exclusively African-Americans – were led to believe that they were receiving treatment. This was a lie: although the risks of untreated syphilis were repeatedly demonstrated throughout the study and penicillin was readily available, they received none. Through carefully placed letters to other physicians in the vicinity, the investigators even prevented the subjects from being treated elsewhere. Tragically, the Department of Health, Education and Welfare concluded in its Final Report that where the investigators had failed was in obtaining informed consent from their research subjects (Brandt, 1978). Ignored or overlooked was the fact that even before the age of informed consent, what transpired would have counted not merely as morally problematic but as obviously racist and evil.
Another lesson ostensibly taught by these examples is that researchers are unreliable unless watched. However, we must not forget that the Nazi atrocities, though devised by individuals, were perfectly in line with contemporary public policy. Would an REC, had there been one, have condemned these experiments, or applauded them? As for the Tuskegee case, there was oversight. A committee at the Communicable Disease Center (now the Centers for Disease Control and Prevention) decided in 1969 that the study was to be continued – casting some doubt on the ‘mad scientist’ account. Only when details of the study were leaked in 1972 was the project forced to a halt (Brandt, 1978). In other words, it took a whistle-blower – an individual – to end what the authorities let pass.
By virtue of their sheer barbarity, the Nazi and Tuskegee cases remain persuasive even when badly told, but this is also why they miss the mark with regard to research regulation and oversight. The simple fact that some people are capable of murder does not make it reasonable to view every passer-by as a potential murderer. Similarly, atrocities committed in the name of research provide us with no good reason to distrust researchers across the board. What they do show is what happens when abuse and exploitation are condoned or even encouraged by society. As with other major crimes, state-sanctioned or not, the solution is hardly to be found in better monitoring.
A better-chosen example to illustrate the need for research regulation would be one that points to genuine and justified uncertainty regarding researchers’ behaviour. It has been observed, for instance, that researchers occasionally impose more than reasonable risks on research subjects (Savulescu, 2002). The question is: Should this count as a reason to monitor them even more closely, or to question the efficacy of such measures in cultivating trustworthiness?
Limitations of ethics review
Independent review by RECs has been argued to serve a key role in maintaining public trust in biomedical research (Hansson, 2005). Its success in this regard may depend on how it is presented. It has been noted in other contexts that abundant use of corrective measures breeds further distrust, presumably by implying that there is much to correct (Koski, 2007). For similar reasons, other authors have argued that institutionalized distrust should remain ‘in the shadows, as a distant protective framework for spontaneous trustful actions’ (Sztompka, 1998). What ethics review does for the trustworthiness of research is a different, and for our purposes more important, issue. Ideally, it will help prevent badly designed or otherwise morally problematic research from being carried out. The following are some important limitations to consider.
Rigidity
The legitimacy of RECs as extra-legal regulatory bodies hinges on their ability to reach rationally justifiable verdicts. This implies, first, a degree of consistency over time and, second, that inconsistencies that do arise can be reasonably attributed to moral progress. Guidelines rarely provide answers clear-cut enough to stave off the threat of indeterminacy. For this reason, RECs have been found to rely more on local precedents than on theoretical frameworks (Stark, 2012: 165). Through their ‘institutional memory’, RECs are able to embody norms and carry them over to future generations of researchers, but institutional memory can also become a burden that impedes progress. Demands of consistency make it impossible to improve one’s standards without calling past decisions into question. RECs also become less likely to critique societal norms, which undermines their position as moral authorities (if not as regulatory bodies). For instance, in a society infused with racist ideology, one could hardly trust an REC to reject a Tuskegee-like project. More generally, we cannot trust RECs to react to wrongs that common morality does not conceive of as such, or to abandon principles that no longer protect important values.
Idiosyncrasy
A main task of RECs is to weigh benefits and risks of proposed projects. The metaphor of weighing lends a flavour of objectivity to the procedure, as if it actually involved a set of scales. In reality, reaching consensus is very much an organic process. No matter how competent its members, an REC is not always ideally positioned to evaluate the scientific merits of research projects, especially when they deviate from the paradigm (Fistein and Quilligan, 2011). It is tempting, therefore, to distinguish between ‘ethical’ and ‘technical’ issues, where the former but not the latter would be the responsibility of RECs (McGuinness, 2008), but as badly designed research is by definition unethical, this position is difficult to justify.
Worse, arguments advanced during REC meetings may not always draw on observations that are rationally related to what they are supposed to assess. In an American study of IRBs (institutional review boards), references to embodied, first-hand knowledge – sometimes even personal life experiences – often turned out to be more persuasive than scientific facts, perhaps because they were harder to challenge directly (Stark, 2012: 37). With the independence from research institutions that has become the norm in many countries, RECs usually lack personal knowledge of the applicants and so are unable to keep an extra eye on potential troublemakers (Kerrison and Pollock, 2005). Although this was arguably never their responsibility, the fact remains that at least some RECs regard judging the character of the researcher as a crucial task. Some come to resort to surrogate measures such as the applicant’s spelling abilities (Stark, 2012: 15–18). As noted by Klitzman and Appelbaum (2012), it is reasonable to suspect that the diversity in how RECs judge projects – which poses a great problem for researchers – reflects such idiosyncrasies rather than, as is often claimed, local community values.
Dependency
A final limitation of RECs consists in the fact that their trustworthiness depends on that of researchers. This is so for several reasons. First, researchers are not merely the objects of evaluation; in particular, when new areas of research are broached, their suggestions are sometimes elevated to local precedents (Stark, 2012: 49–50). Second, RECs commonly draw at least some of their members from the research community. Third, as RECs are usually not required to ensure that the research protocol is actually followed – which would in any case be prohibitively time-consuming – they will not be able to prevent harmful research unless researchers can be trusted to do what they have proposed to do and nothing else. Fourth, even the most diligent of RECs will sometimes fail to identify risks associated with a proposed project. When both the researcher and the REC fall short in this respect, people might be harmed (Savulescu, 2002). In addition, the time and effort that some RECs put into ‘wordsmithing’ informed consent documents (Klitzman and Appelbaum, 2012) may leave them little time for such double-checking. The responsibility ultimately resides with the researchers.
It has been observed in other contexts that in hierarchies of overseers and subjects, distrust tends to propagate upwards (O’Neill, 2002: 130–133). The present case seems to be no different: voices are already asking how RECs themselves are to be monitored (Coleman and Bouesseau, 2008). If one assumes the moral integrity of researchers to be compromised, such anxiety is understandable. Nevertheless, in the face of the problems we have pointed out, second-order monitoring would be largely unhelpful.
Are more guidelines needed?
Just as ethics review formalizes ethical deliberation, guidelines formalize its principles. They are crucial to, but do not imply, institutionalized distrust. On the contrary, there are at least three conceivable normative positions on what they are supposed to achieve. The first two, it turns out, are untenable, whereas the third requires us to rethink how guidelines are to be written.
Steering
The first normative position is based on a perceived need for accountability, and thus for steering documents. To preclude corruption, it conceives of a division of labour between legislators, arbitrators (RECs) and subjects (researchers). Just as an engineer fine-tunes the workings of intricate machinery, the rule-maker works with constraints and springs, trying to devise rules that cover any contingency and incentives persuasive enough to ensure compliance. To the degree that the rules require interpretation, RECs have the final say, but the optimal document will be one containing nothing but propositions whose truth value different evaluators will consistently agree on, regardless of their domain knowledge – in other words, a checklist. Guidelines have moved some way toward this ideal. Several items in recent revisions of the Declaration of Helsinki – for instance, those listing the required contents of research protocols and informed consent forms – lend themselves to box-ticking (World Medical Association, 2008).
As tools for minimizing harms resulting from human forgetfulness, checklists have proved immensely useful where mistakes may cause disasters. Successful examples are seen in aviation and some areas of health care (Hales and Pronovost, 2006; Haynes et al., 2009). On the downside, checklists may cause ‘checklist fatigue’ and be perceived by doctors as ‘a limitation to their clinical judgment and autonomous decision making’ (Hales and Pronovost, 2006). At least some professionals, we believe, will be genuinely concerned about complex decisions being oversimplified rather than simply disgruntled over their loss of authority. Similarly, use of ethics checklists during hospital ward rounds (Sokol, 2009) may ‘reinforce the image of ethics as the application of ready-made concepts and rules’ (Eriksson, 2010), which is not how it ought to be carried out – or so many ethicists would argue.
Using checklists not only as reminders but to judge performance presents an even more fundamental problem. Any departure from standard procedure – regardless of whether it was in fact the best course of action – will count as an error unless those who judge see fit to grant an exception (and are authorized to do so). In other words, we risk punishing responsible conduct and praising blind rule-following. This problem is not unique to checklists; it pertains to any formal standard against which performance is assessed or judged. Provided that rule-following is not the only value at stake, any rule will occasionally be inapplicable or need to be applied differently than anticipated. In such cases, individual professionals – in our case, researchers – will be morally obligated to break rather than follow protocol. Of course, because they will also bear the consequences, we can expect many to become compliant rather than moral.
Education
The second normative position, unlike the first, presumes that researchers are motivated to act morally. However, it also presumes that they lack the requisite skills, and conceives of guidelines as the remedy. In practice, researchers familiar with guidelines may well be in the minority (Eastwood et al., 1996). They may not be all that different from health care professionals, who are often unfamiliar with codes, have negative attitudes to the growing volume of codes, believe that they have little practical value, seldom use them, and much prefer to rely on previous experience and peers’ opinions when making moral judgements (Höglund et al., 2010). One might be inclined to dismiss such attitudes as misplaced scepticism, in itself indicating a need for education, but because the inference makes sense only if we think of guidelines as the ‘gold standards’ of ethical conduct, this would be question-begging.
We should instead ask: Assuming that there is indeed a ‘moral deficit’, will guidelines be helpful in remedying it? Regrettably, they will not. With hundreds of guidelines applicable to a single research project, going by the book is already nigh impossible, and even if researchers were to read them all, guidelines would offer no panacea. They cannot just be ‘followed’; deciding which rule should be applied to a particular situation requires judgment – presumably, moral judgment (Eriksson et al., 2007). One must then ask what kind of judgment they are intended to support in the first place.
Lastly, if guidelines could actually educate, we should expect more widely recognized and more consistently structured documents – national legislation, for instance – to be at least as crucial to moral conduct. Few of us, however, have more than a passing familiarity with the letter of the law, yet most of us lead mainly lawful and morally responsible lives. Guidelines, just like laws, seem better suited to expressing the current state of morality than to teaching it.
Inspiration
This leads us to the third possibility: that guidelines are to advise or inspire researchers, or serve as ‘rallying points’ – as was the intention of the original Declaration of Helsinki. In practice, prevalent contradictions and ambiguities both within and between documents, as well as their sheer volume, prove a major hindrance to many researchers. Efforts to make guidelines more specific and thus more easily applicable have only aggravated this problem. Principles and values can be weighed against each other, but how does one weigh a concrete rule, such as one specifying a piece of information to be provided in an informed consent form, against other ethical concerns? Here, at least, guidelines fail to give proper guidance (Eriksson et al., 2008). There is also the problem of readability: all too often, guidelines are couched in increasingly technical language that makes them more or less opaque to all but legal experts.
To truly inspire, guidelines need a much higher level of abstraction than is the case today. On the other hand, they might then lose legitimacy among researchers who have come to expect clear-cut directives. Practical problems aside, it is worth noting that a system resting on documents with a high level of abstraction implies optimism regarding the capacities of individual researchers, and thus is fundamentally different from one that embraces institutionalized distrust.
What we risk
More numerous and more detailed guidelines, more oversight and more severe punishment of deviants may be less effective than one would think. Still, one might argue that less effective is better than nothing. Such measures may at least, the argument goes, convey the gravity of the matter and make researchers aware of moral issues that they would otherwise have overlooked or ignored. Unfortunately, however, they also entail risks against which such potential benefits must be weighed. What these risks have in common is that they pertain to researchers’ moral competence, and thus to the ability of future generations to handle unexpected moral problems, such as those that arise during the course of a project. As the Tuskegee case has taught us, this threat is not to be taken lightly.
Blinkering
Among the topics discussed in contemporary bioethics, informed consent has received the most attention by far (Hoeyer, 2008). Although there is significant disagreement on what we can hope to achieve through informed consent (Dixon-Woods et al., 2007; Ducournau and Strand, 2009; Hoeyer, 2003; Manson and O’Neill, 2007), there seems to be some agreement that not all ethical concerns are covered by it. For instance, whereas people might be able to protect their individual interests by refusing to participate in research, doing so does not help them voice any concerns they might have about the societal effects of a particular project (O’Doherty et al., 2011). This is not just a marginal issue. At least in Sweden, what matters most to people may not be that they are informed of all details of a study, but that its results are readily applicable, that its benefits are justly distributed, and that commercial interests do not determine the research outlook (Hoeyer et al., 2005). These matters are both largely opaque to research participants and unlikely to influence REC decisions.
Nevertheless, informed consent seems to all but dominate the review process. According to one study, informed consent was the most frequent cause for discussion between researchers and RECs (Edwards et al., 2011). Some RECs spend much time on the wording of informed consent documents because such issues seem particularly susceptible to objective resolution (Coleman and Bouesseau, 2008) or because they find that there is little else about a project that they can control (Hoeyer, 2005). In qualitative research, the requirements imposed by ethics review have been claimed to distract researchers from more pressing moral problems (Bosk and de Vries, 2004). In short, bureaucratic procedures entail a risk that important but less ‘manageable’ moral matters are left unaddressed.
Heteronomy
One of the many contributions of the 18th-century philosopher Immanuel Kant was his idea that to act morally is to act out of the moral duties prescribed by practical reason. This process, commonly referred to as self-legislation, is guided by formal principles that preclude any arbitrariness. Kant did not claim that we ought to do without laws or regulations – only that they can never provide sufficient moral reasons for acting. Whenever we act for any reason other than out of duty, we do not, says Kant, act morally. This pertains even to actions that are lawful, bring about good consequences, and do not violate any moral duties: unless the maxim of action is chosen because it is one’s duty, one does not act morally, but only legally.
Two hundred-odd years later, Kant’s idea of self-legislation – a kind of moral authorship – remains convincing. In contrast, the division of labour between legislators, arbitrators and subjects that we see in research ethics is a pragmatic move, less about doing ethics than about restricting the range of problems that can be discussed on each level. Some researchers are happy with this because it allows them to concentrate on their research while remaining confident that ethical matters are taken care of by others (Wainwright et al., 2006). On the other hand, to judge from ethics review applications, many researchers fail to recognize moral problems in their projects because they view them solely through a legalistic lens (Hoff, 2003). Standardized procedures and ready-made checklists may be to blame, as they provide researchers with neither reason nor opportunity to practise their moral skills.
Of course, barring legal imperatives, morality could still lose out to naïveté or complacency. Which of complacency and legalism is the worse vice remains an open question; it comes down, we suspect, to long-term consequences. Naïveté can be cured simply by pointing out whatever moral problems have gone unnoticed; researchers suffering from legalism, in contrast, can be expected to continue to ignore them, comfortable with the fact that formal requirements have been met. This makes them particularly ill-prepared to handle unexpected moral problems.
Alienation
With an increasing number of documents to follow and no clear guidance as to how they relate to each other, researchers will increasingly find themselves subjected to contradictory requirements (Eriksson et al., 2008). Unless they learn to ignore some of them, they will fail to resolve moral problems. For instance, many biobank researchers think that re-consent must be sought when samples are to be used for new purposes, but many of them also claim that doing so would be practically impossible (Edwards et al., 2011). In a single system of norms, this conflict would be resolved by concluding either that previously obtained samples ought not to be reused or that re-consent cannot be a universal requirement. That the contradiction remains suggests that many researchers struggle with inconsistent sets of norms.
Further, there have been disconcerting reports of researchers experiencing more harmful than beneficial effects of ethics review, mostly related to excessive delay of projects (Edwards et al., 2011). Others see review as a merely symbolic activity (Fistein and Quilligan, 2011). Ethnographic researchers, in particular, have complained that their research is often misunderstood and rejected by RECs for nonsensical reasons, whereas the real moral dilemmas encountered in the field cannot possibly be predicted, let alone fitted into an application (Bosk and de Vries, 2004). Some researchers have begun to delegate the task of filling out the review application (Kerrison and Pollock, 2005). This is in line with experiences from health care, where regulatory approaches to ensuring moral conduct tend to foster ‘don’t get caught’ attitudes (Mills and Spencer, 2001). As these examples show, it is quite possible to acknowledge and even adhere to ethical demands while simultaneously alienating oneself from them. As ethics and morality thrive on involved argument and debate, this is a development that neither researchers nor academic research ethics can afford.
Trustworthiness through individual responsibility
Institutionalized distrust and its implementation through concrete and well-defined rules, systems of oversight, and clear incentive structures may bring benefits: increased short-term compliance; reassurance to the public; and protection against governmental infringement of the autonomy of research. Its limitations notwithstanding, worthwhile alternatives may seem to be lacking. As we know that research quality suffers when researchers break rules, must we not take steps to ensure better compliance? To be sure, if ethical conduct implied rule-following, anything less than perfect compliance would be unacceptable. As we have argued in this article, however, responsible conduct often runs obliquely to compliance with rules, and even where they intersect, institutionalized distrust may backfire, undermining rather than supporting morality. We do not hereby claim that any kind of regulation is counterproductive; after all, most of us do not habitually break laws. For most of us, however, abiding by the law is unproblematic because we have already acquired certain moral standards at an early age. Many minor offences – speeding, for instance – are committed not because people are unfamiliar with the law, but because certain laws lack legitimacy in the eyes of the public. We can expect much the same in research ethics: unless a norm is sufficiently internalized, enforcing it will be less effective than it could be.
How are we to ensure that the appropriate norms are internalized? An interesting point has been made by Martinson et al. (2005) about the possible causes of scientific misconduct. The authors conclude that the very abundance of misconduct counts against the predominant view of it as the province of occasional bad apples. They suggest, instead, that explanations be sought in the pressure that comes from fierce competition and burdensome regulations. Although this may not be the whole story, it seems to be in line with concerns expressed by other authors that financial rewards and the promise of personal advancement may compromise research integrity (Koski, 2007) and that the culture of secrecy that so often prevails in scientific institutions may increase the likelihood of both inadvertent errors and fraud (Wicherts, 2011). Together, these findings point to an abundance of incentives and a shortage of norms as a particularly unfortunate combination. Tighter regulation may be a bad choice of remedy precisely because it adds yet another layer of incentives without – we suspect – making researchers more likely to internalize the norms in question. If our assumption is correct, regulation will succeed in directing action appropriately only insofar as the rules are designed just right. Moreover, in the long run, as researchers do not get to practise their moral skills, equating ethics with rule-following risks undermining moral agency. Therefore, once reasonably effective measures – sound cultural and social norms, legislation that prohibits abuse, and independent review – are in place to counter worst-case scenarios, cures to more prevalent maladies must be sought elsewhere.
Play to the strengths of RECs
Many RECs have, through training and tradition, acquired a great deal of ethical and scientific competence. Although we can hardly do without them, there may be much to gain from rethinking their role in a way that plays more to their strengths. First and foremost, RECs should be required to justify their decisions rationally. At least in Sweden, this is not standard practice with regard to approved projects. Such a practice would not only be crucial to quality assurance; it would also offer an opportunity to educate researchers who take ethics seriously but lack experience, and it would reinforce the legitimacy of the review process. Face-to-face meetings – though time-consuming – are preferable because they also allow REC members to become familiar with the applicants and their capacities for ethical decision-making (Hedgecoe, 2012). This would aid the risk–benefit analysis, potentially reducing the number of idiosyncratic decisions. Such meetings also encourage a more dynamic and nuanced ethical discourse, effectively countering rigidity. Ideally, after approval, the REC should remain available to researchers as an advisory body with whom they may discuss ethical concerns that arise during the course of the project. In such a system, the dependency of RECs on the moral agency of researchers need no longer be considered a deficiency.
Use guidelines judiciously
We have argued that using guidelines as regulatory tools is a move away from the discursive nature of ethics, and so risks inhibiting rather than supporting the moral agency of researchers. If ethical guidelines are actually to inspire researchers to make better decisions, they must have a sufficiently high level of abstraction to give room for deliberation. They must never be allowed to degenerate into checklists. It is even doubtful whether guidelines can afford to list specific requirements at all, because doing so inevitably changes the way the document is conceived of and applied. The Declaration of Helsinki is just one of many examples where the authors may have taken a wrong turn towards legalism. At worst, moral deliberation is reduced to box-ticking. Of course, specific rules – or even laws – may be inevitable where there is a considerable risk of harm, but as we have pointed out, the risks we run by under-regulating research must always be weighed against the potential damage done by over-regulating it. Such damage should be avoidable if we, instead of seeing guidelines as standards against which research conduct is tried and measured, regard them as statements in the ongoing debate on proper research conduct.
The sheer volume of ethical guidelines is a problem in itself. In general, we believe that sticking to a few documents of general scope, whose legitimacy is widely accepted, is much preferable to developing specific guidelines, even though the former may leave some issues underdetermined. When specific guidelines cannot be avoided, their relationships with other documents that pertain to the same field must be stated explicitly rather than ignored. Researchers should not be left in the dark as to how conflicts between different documents are intended to be resolved.
Nurture individual moral competence
We have argued that neglecting the moral competence of researchers paves the way for disaster. Social scientists doing fieldwork have long known (Anspach and Mizrachi, 2006) – and biomedical researchers should recognize as well – that researchers must be prepared to handle unexpected ethical problems that they alone are in a position to address. To this end, developing deliberative skills is arguably more important than learning the ins and outs of ethics guidelines (Eriksson et al., 2008). Ethical reflection must be a process that continues naturally throughout any research project (Halavais, 2011). Efforts on the part of researchers to cultivate their skills should be coupled with greater trust from RECs (Miller and Boulton, 2007).
Peer review and openness in research institutions
Given that cultural, economic or organizational factors may be crucial to researchers’ prospects of acting morally, it is imperative to nurture openness within research institutions. One possibility is to complement ethics review with professional self-regulation through peer review (Murphy and Dingwall, 2007). Such a system need not be formalized: systematically encouraging researchers to have a trusted peer comment on study design and double-check their data might suffice. The benefits of such an approach are most readily apparent with regard to ensuring proper scientific conduct, but it is reasonable to expect openness to stimulate ethical discourse – not just on the proper handling of data but on a wide variety of issues.
Researchers must also take care not to restrict their interaction with the outside world to publications in scientific journals. By communicating with the public through lay media and with colleagues through, for instance, hospital- or institution-based lectures and seminars, researchers can find much-needed opportunities to practise voicing ethical concerns about their research as well as justifying it to others.
Conclusion
Moral conduct – in research or otherwise – implies moral discretion and competence. We have argued in this article that research ethics cannot be a matter of bioethicists drawing up documents and procedures which are then applied by the professionals. Ethics must, if it is to remain a practice of its own rather than developing into a branch of jurisprudence, be practised through discourse. For this reason we need ethics review to be an arena for researchers to discuss their research, receive advice, and practise their ethics skills, and guidelines to be generally applicable, value-based and inspirational rather than specific, rule-based and regulative.
Whatever doubts we may have about the moral competence of researchers, in the long run it will be crucial to morally acceptable research. Although institutionalized distrust may still have its place in the regulation of biomedical research, much is to be gained by reworking ethics review and ethical guidelines to meet another end – supporting researchers in taking individual responsibility for their research.
Declaration of conflicting interest
The authors state that there is no conflict of interest.
Funding
The work of LJ has been funded by the IMI-funded project BTCure (Grant Agreement No. 115142-1) and the BioBanking and Molecular Resource Infrastructure of Sweden, BBMRI.se.
