Abstract
Institutional Review Boards (IRBs) and their federal overseers protect human subjects, but this vital work is often dysfunctional despite their conscientious efforts. A cardinal, but unrecognized, explanation is that IRBs are performing a specific function – the management of risk – using a flawed theoretical and practical approach. At the time of the IRB system’s creation, risk management theory emphasized the suppression of risk. Since then, scholars of governance, studying the experience of business and government, have learned that we must distinguish pure from opportunity risks. Pure risks should be suppressed. Some opportunity risks, in contrast, must be accepted if the institution is to meet its goals. Contemporary theory shows how institutions may make these decisions wisely. It also shows how a sound organizational understanding of risk, a proper locus of responsibility, and appropriate institutional oversight all contribute to effective risk management. We can apply this general theory, developed in other contexts, to the problems of the IRB system. Doing so provides a unifying explanation for IRBs’ disparate dysfunctions by spotlighting five related deficiencies in IRB theory and structure. These deficiencies are (i) inability to focus on greater risks, (ii) loss of balanced theory, (iii) inaccessibility to guidance from senior leadership, (iv) unbalanced federal oversight, and (v) inflexibility. These flaws are deeply rooted in the system, and superficial reform cannot resolve them. Congress should overhaul the system to meet contemporary standards of risk management; this would benefit subjects, scientists, and the public that needs the fruits of research.
Research ethics review embodies our determination that the rights and welfare of the individual research subject shall never be forgotten in the quest for knowledge and profit. At a time when a single drug can generate billions of dollars in revenue, protecting subjects is as important as ever.
Our concern for subject welfare draws on horrific experience. Hitler’s doctors, working alongside the crematoria at Auschwitz and Dachau, showed the depravity of research driven solely by a utilitarian imperative. A generation later, the British medical consultant Maurice Pappworth and the American anesthesiologist Henry K Beecher demonstrated that scientists in a liberal democracy could also disregard subject rights in their pursuit of scientific knowledge and societal benefit (Beecher, 1966; Pappworth, 1967). The inexcusable experiments of this period included research in which vulnerable subjects were denied treatment for their syphilis or injected with live cancer cells (Gray, 1998; Jones, 1981; Katz et al., 1972). We learned, beyond a shadow of doubt, that scientists need oversight.
James Shannon, director of the National Institutes of Health (NIH), was the principal architect of the original United States oversight system. In the early 1960s, Shannon became alarmed by NIH-funded research that ignored subjects’ rights and welfare, thereby putting subjects, and the NIH itself, at risk. He launched a series of discussions within the Public Health Service (PHS) that led to the promulgation of guidelines creating the IRB system, thus reducing risk for both subject and institution (Frankel, 1972; Stark, 2012). Other countries adopted similar methods for the same purpose.
The IRB system comprises IRBs and their federal overseers, the Office for Human Research Protections (OHRP) and the Food and Drug Administration (FDA). OHRP regulates research with human subjects funded by the agencies of the PHS, including the substantial program supported by the NIH, as well as 17 other federal departments and agencies that follow the same regulations. The FDA’s authority includes drug and medical device development.
Shannon created the IRB system to ensure that even the most respected investigator could use humans as experimental subjects only after a disinterested group had carefully considered their rights and welfare, including their psychological, legal, and economic well-being. Yet this system, despite the good intentions of its founders and the best efforts of IRB members and federal regulators alike, has become increasingly dysfunctional. This article presents a new theory to explain the system’s difficulties and show a new path to reform.
Dysfunction and proposed reforms
The system’s difficulties have been extensively documented. Dale Cowan, an experienced IRB chair, wrote in 1974 of his concern about ‘the sheer weight of the bureaucracy which has been proposed to regulate human experimentation. Many investigators and review committees are already of the opinion that the paperwork involved in preparing and approving acceptable protocols is extensive and taxing’ (Cowan, 1974). More recently, ethicists Norman Fost and Robert Levine (2007) summarized their concerns in an article titled ‘The dysregulation of human subjects research’. I Glenn Cohen and Holly Lynch of Harvard’s Petrie-Flom Center argue that the system ‘has a number of major deficiencies’ (Cohen and Lynch, 2014a: 1). Vanderbilt sociologist Laura Stark writes bluntly of ‘a system that both investigators and federal regulators agree is broken’ (Stark, 2014: 174).
Many authorities have suggested remedies for this dysfunction. Levine, in his canonical 1981 book, proposed making optional most IRB review of low-risk research involving ‘reasonably autonomous adults’, adding, ‘I am aware that I am proposing a radical change in both the letter and the spirit of the law’ (Levine, 1981: 242, 243). In 2015, federal officials, after combing through published criticisms of the system and soliciting feedback from interested parties, proposed eight ‘significant changes’ in the regulations (Department of Homeland Security and 15 other departments and agencies, 2015; Office of the Secretary, HHS, and the Food and Drug Administration, HHS, 2011). At a symposium on an earlier draft of these proposals, experts put forward a variety of other reforms (Cohen and Lynch, 2014b). Some were minor; others, in the spirit of Levine, were radical. Greg Koski, a former director of OHRP, argued that the present system is based on a ‘failing protectionist paradigm’. The ‘necessary and long overdue’ remedy is a ‘complete redesign of the approach, a disruptive transformation’ (Koski, 2014: 346). Columbia psychiatrist Robert Klitzman (2015), who believes the system serves an important purpose, identifies many specific problems in IRB functioning and proposes extensive reforms at both national and local levels. Legal scholar Carl Schneider (2015: 200, 201) believes the system cannot be saved, and favors abolition rather than reform.
These authorities show striking dissimilarity in the problems they identify and the solutions they propose, suggesting that the system needs a wide array of changes to solve many distinct problems. But that is not the only possibility. I believe that many of the system’s problems flow from one cause: the IRB system is flawed as a system of risk management. This single underlying problem explains many of the system’s dysfunctions and, further, suggests a straightforward path to reform.
Valid and invalid goals for risk management
In some quarters, risk management is viewed as an unworthy or even improper attention to self-protection in the service of self-interest. It is in this vein that University of British Columbia ethicists Michael McDonald, Susan Cox, and Anne Townsend (2014: 114) worry that ethics review may be used ‘more for risk management by research institutions and sponsors rather than for genuine protection of research participants’.
There is, unfortunately, some truth in this accusation. Anthropologist Caroline Bledsoe and colleagues (2007) point out that when an IRB requires signed consent forms linking subjects to topics, it may expose them to social, legal, and even physical risk. This requirement may result from an IRB feeling that it must comply with the strictest reading of the regulations even when doing so increases subject risk; examples include a consent form indicating that a woman is in a study of domestic abuse or that an African teenager may be serving as a rebel soldier.
An IRB that privileges institutional protection over subject welfare practices an unworthy form of risk management that is a miscarriage of the review process. This is evidence of the system’s dysfunction, not an argument that IRBs should not be doing risk management at all. After all, the system’s central purpose is to manage the risk that subjects encounter in research. But what relationship, if any, exists between protecting the subject and protecting the institution?
Kate Connolly and Adela Reid (2007) of Concordia University in Quebec view these as independent activities, writing that IRB review is ‘guided by participant protection and risk management concerns’. In a sense this is true; Levine (1986: 327) likewise comments that it is possible, and ethically permissible, to protect the individual and protect the institution as well. Both observations are accurate, but both miss the central point, for in vital respects the interests of subject and institution are as one. In a properly structured ethics review system, any committee decision that protects subjects also protects the institution. Further, an IRB that protects subjects has met any obligation it has to protect the institution.
IRBs were created specifically to protect subjects, and it is a perversion of their purpose to stray from that goal. This article assumes that the IRB system intends to pursue the right goal and asks why the path is so arduous. The answer is that the system fails at the difficult task of managing subject risk appropriately. The process of risk management is complex, and an organization can manage risk well only if its structure, ideology, and supervision harmoniously support it.
Risk management is no longer simply a matter of identifying and eliminating hazards. It is impossible to avoid all risks and remain in operation, and it is in deciding which risks to accept that IRBs falter. Risk management theory, drawn from the insights of scholars and the experience of business and government, can teach us much about why the IRB system struggles and how it should be reformed. The first problem is the system’s loss of goal orientation; the second lies in its structure, theory, and supervision.
Risk management in the IRB literature
Risk management is a concern in every area of health care (Carroll, 2011), but its relevance to the protection of human subjects has not been carefully explored. To be sure, some authors use the term ‘risk management’ in discussing IRB operations, but they either confuse its purpose, as Connolly and Reid do, or use the concept as it was 50 years ago, not as it is today (Buono and Kolb, 2010; Icenogle, 2003; IRB Advisor, 2010; McDonald et al., 2014; Modi, 2005; Zimmet, 2011a).
In older or casual usage, ‘risk management’ means reducing or eliminating risks. This meaning of the term can still be applied to risks that can be eliminated without hindering the institution’s operations; we call these pure risks. Other risks, sometimes called opportunity risks, must be managed, not eliminated, if the organization is to achieve its goals. Levine has noted, correctly, that science cannot be held to a zero-risk standard (Levine, 1981: 234). New treatments for disease, for instance, can be tested in animals but must ultimately be evaluated in humans, with hazards that can be reduced but not eliminated. We must accept, and manage, the opportunity risk of research with humans if we are to permit science to lessen society’s burden of suffering and premature death. The same process must be used to balance the opportunity risks of scholarship in the social sciences and other fields.
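To make this distinction concrete, the following sketch encodes the decision rule the paragraph describes: suppress pure risks, but accept and manage opportunity risks in light of institutional goals. It is a minimal illustration only; the type names and the example classification are hypothetical and are not drawn from the sources cited here.

```python
# A minimal sketch, assuming a two-way taxonomy of risk as described above.
# The names and the example classification are hypothetical.
from enum import Enum

class RiskKind(Enum):
    PURE = "pure"                # no upside; elimination costs the mission nothing
    OPPORTUNITY = "opportunity"  # inseparable from a goal the institution values

def disposition(kind: RiskKind) -> str:
    """Return the management posture contemporary theory prescribes."""
    if kind is RiskKind.PURE:
        return "suppress: drive toward zero"
    return "manage: accept, mitigate, and monitor in light of institutional goals"

# A first-in-human drug trial carries opportunity risk: it cannot be driven
# to zero without abandoning the goal of testing new treatments in people.
print(disposition(RiskKind.OPPORTUNITY))
```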
In the theory of institutional governance, ‘risk management’ is now a term of art that refers to processes designed to balance risk and opportunity under the guidance of senior leadership. These integrated processes ensure that an individual or committee that is responsible for monitoring a particular risk does so with an awareness of the organization’s larger purpose and is subject to oversight by the leadership.
An integrated approach benefits any institution, including nonprofit organizations and government agencies, as all organizations struggle with risk. Universities, hospitals, and medical schools are no exception; most have formal risk management plans that strive to consider all risks, and all institutional goals, in an integrated manner, with ultimate responsibility assumed by the leadership.
Evolution of academic risk management theory
The theory of risk management and its widespread application are relatively new. Fifty years ago, scholars made no theoretical distinction between pure and opportunity risk. Further, risk management was of interest primarily to banking and investment firms and was seen in purely financial terms (Dionne, 2013).
Businesses suffered when nobody was responsible for unanticipated hazards and failures. Senior executives therefore instructed mid-level managers to focus on specific types of risk. This approach, now called keeping risk in a silo, often backfired because of the dysfunctional incentives it created: a manager who is responsible for a specific risk can expect to be blamed when disaster strikes but given little credit when things go right (Grose, 1987). Such managers treat opportunity risks and pure risks alike and strive to reduce both to zero, making it harder for the institution to achieve its goals (Beasley and Frigo, 2010).
Enterprise risk management, the dominant contemporary theory, was created in response to these failures. Scholars in the field agree that risks should not be viewed in isolation from the institution’s goals; they counsel a holistic approach, integrating information about risks and goals from every part of the organization (Beasley and Frigo, 2010; Doherty, 2000; Grose, 1987). Sophisticated executives understand that they cannot abdicate their responsibility to set risk parameters, conduct ongoing oversight, and make changes when needed. The lessons of risk management are no longer restricted to the world of finance: when governments and businesses manage risk and opportunity well, our lives are enriched; when they fail, smoke jumpers die, naval bases explode, and astronauts perish (Maclean, 1992; Vaughan, 1996; Waring, 2013: 185–190).
Early IRBs as effective risk managers
Contemporary risk management theory is informed by past failures, but there have always been organizations that managed risk effectively without the benefit of theory. Early research review committees provide good examples.
In 1953, the NIH opened a hospital, the Clinical Center, dedicated to medical research. The Clinical Research Committee, whose members were drawn from the Board of Trustees, vetted research felt to pose ‘unusual hazard’, and forwarded its recommendations to the director of the NIH (Frankel, 1972). This committee was created to manage but not eliminate risk; Stark has shown that, conscious of the NIH’s goal of transforming medicine and improving the public health, it actively sought ways to make even risky research possible (Stark, 2012). This is a classic example of the responsible management of opportunity risk with the active involvement of senior leadership. This committee’s ideology and structure would have met today’s standards of risk management, and it was successful; research proceeded without undue delay and there were no scandals involving subject harm. In the 1960s, when Shannon and his PHS colleagues issued guidelines that required local review of research (Stark, 2012; Stewart, 1966), they naturally used this committee as a model. The core of this original model, which required careful consideration of the rights and welfare of subjects, was preserved in later years as the guidelines were replaced by regulations, and the regulations themselves were expanded.
Early IRBs promptly snuffed out the unethical research that had led to their creation. PHS data showed that ‘problem projects’ that presented ‘possible hazards to subjects’ dropped from 7.4 per cent in 1966 to 1.7 per cent in 1968 (reported in Curran, 1969). Six years later, Cowan (1974) described ethics review at Case Western, where the first IRB chair was the dean of the medical school and its members were the chairs of the departments. Although Cowan worried that the future might bring ‘greater restrictions’, overall he felt that committee review worked reasonably well. This board balanced subject welfare and the public’s need for better medical care, as the Clinical Research Committee had done at the NIH.
Five factors that impair IRB risk management
Although today’s IRBs formally follow the model of the Clinical Research Committee, there are differences in five vital areas. Each makes effective risk management more difficult.
Loss of selectivity of review
In the 1950s and 1960s, the Clinical Research Committee reviewed only protocols that presented unusual hazard (Stark, 2012: 106–107). This selectivity was appropriate, for responsible managers focus on serious problems, give less attention to lesser threats, and ignore trivial risks.
The risk management literature suggests many ways to stratify risks. The simplest is to identify the top ten risks facing the organization; more complex approaches include visual representations such as a risk map, which reflects both the probability and severity of each risk (Fraser, 2010: 173–174; Grose, 1987: 18–21). However, the 1966 guidelines that established the IRB system forbade selectivity of this kind (Stewart, 1966): every protocol, even those with little or no risk, had to be reviewed and approved.
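The stratification tools mentioned above can be pictured with a short sketch. This is a minimal illustration of a risk map, assuming simple 1–5 scales for probability and severity; the example risks and scores are hypothetical and are not taken from Fraser (2010) or Grose (1987).

```python
# Illustrative risk map: score = probability x severity, used to rank risks
# so that review effort flows to the most serious ones. All values are
# hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int  # 1 (rare) to 5 (near certain)
    severity: int     # 1 (trivial) to 5 (catastrophic)

    @property
    def score(self) -> int:
        return self.probability * self.severity

risks = [
    Risk("first-in-human drug trial", probability=3, severity=5),
    Risk("curriculum survey of students", probability=2, severity=1),
    Risk("analysis of discarded, unlinked urine", probability=1, severity=2),
]

# Rank from most to least serious; a real risk map would plot these on a
# probability-severity grid rather than a single ranked list.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: score {r.score}")
```

Under such a scheme a committee would give full review to the top of the list and little or none to the bottom, precisely the selectivity the 1966 guidelines foreclosed.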
In 1981, new regulations recognized the desirability of permitting low-risk research to be exempt from IRB oversight or to undergo expedited review (45 CFR 46.101(b), 45 CFR 46.110). These exclusions, however, are limited to very-low-risk research, such as ‘research on the effectiveness of or the comparison among instructional techniques, curricula, or classroom management methods’ (45 CFR 46.101(b)). No biomedical research, no matter how low-risk, is exempted. Further, IRBs are free to conduct full review of any research even if it could be excluded, so in practice even studies that pose no realistic risk, like the examples that follow, may be subject to rigorous – and wasteful – review. This is one reason why IRBs today are spending ‘entirely too much time doing work that does not need to be done’ (Levine, 2006) and ‘wasting their energies on non-risky research’ (Gunsalus et al., 2007: 12). Consider the following example.
Kidney stones
Fredric Coe conducts research that is not particularly risky but is nonetheless scrupulously supervised. Coe is a nephrologist who analyzes leftover urine from his patients to study the proteins that (sometimes) prevent kidney stone formation. His work requires no active patient participation and he does not keep track of which patient left which sample (Coe, 2007).
Coe opened his practice at the University of Chicago in 1966, the year in which the PHS created the IRB system. For decades, his IRB let Coe work undisturbed, but in the late 1990s it began to impose increasingly burdensome demands. He must now prepare annual bibliographies and summaries of the recent literature, provide a detailed accounting of the urine samples his lab uses, and obtain consent from every patient whose leftover urine he might analyze.
Coe is baffled. ‘Surely it is jest to mention risk in the context of our protocol. Surely it is nearly insane to require any procedure at all to perform research using what is destined for the nearest toilet, what is unidentified and without value to those who produced it’ (Coe, 2007).
Coe’s protocol is not, however, entirely free of risk. It is possible, for instance, to collect cells from an anonymous urine sample, analyze the DNA of those cells, and post the genome, unique to the urine donor, online. A scientist with the right tools could identify the individual and post his or her name, leading to a loss of anonymity. Because the genome could reveal risks of future disease, this could lead, in turn, to discrimination in insurance or employment or to psychological, social, or economic harm.
All this is possible; recent federal discussions contemplate treating biospecimens as ‘intrinsically identifiable’ because of the DNA they contain (Evans, 2013). On the other hand, Coe does not save the cells in these urine samples; he analyzes their proteins and then discards them. Is the theoretical risk of DNA analysis, re-identification, and individual harm, and any other harms that might occur as a result of Coe’s research, serious enough to justify the time and effort that regulation requires of Coe and of the IRB itself?
Loss of balanced theory
The theory of IRB function should recognize that the IRB’s task is to balance two important goals: subject welfare and societal benefit (Brendel and Miller, 2008; Rhodes, 2014; Whitney, 2015). Effective IRBs bear both in mind; failure to properly balance these goals will suppress worthwhile research or open the door to dangerous studies.
Early IRB theory predominantly analyzed egregious experimentation that flouted subjects’ right of self-determination or exposed them to risk of injury or death. Early ethicists sought to learn why scientists had become morally blind and how to protect future subjects. These scholars paid only passing heed to social welfare; when researchers exposed unwitting subjects to a risk of death, any benefit to society was irrelevant. This early theoretical work was critical in laying the moral foundations for responsible research.
Once those theoretical foundations were laid, later ethicists drew our attention to less consequential risks. Some are of practical importance. Some, like the risk of re-identification of Coe’s urine samples, are hypothetical. And some can only be called nebulous, like the dignitary harms that the National Bioethics Advisory Commission (2001: 72) identifies ‘when individuals are not treated as persons with their own values, preferences, and commitments, but rather as mere means, not deserving of respect’. This principle is appealing, but it provides the IRB with no practical guidance and no way to balance protecting subjects from this harm with improving the health and welfare of society.
Some contemporary scholars do promote balance. Psychiatrist David Brendel and ethicist Franklin Miller call on IRBs ‘to negotiate competing pulls toward scientific discovery and the protection of human subjects’ (Brendel and Miller, 2008). Ethicist Rosamond Rhodes urges IRBs not to ‘focus narrowly on protecting research participants from any risks, regardless of how unlikely, fleeting, or trivial the anticipated harm’. Instead, boards should adopt a ‘balanced approach’ that incorporates both risk and societal benefit (Rhodes, 2014: 37–38). Yet societal benefit is discounted as morally unimportant in many discussions of the theory of research ethics. One IRB manual says, ‘The regulatory mandate is clear: human subject protection, first, foremost, and last’ (Shamoo and Khin-Maung-Gyi, 2002: 58). It is harder for IRBs to weigh two important social goals when one is dismissed as insignificant.
Empathy in geriatrics
This loss of balance can be seen in educators Tomkowiak and Gunderson’s report of an attempted curricular reform. The faculty at an unnamed medical school sought to improve medical students’ attitudes toward the elderly. The program paired students with volunteer geriatric mentors – people who had the typical problems of aging but functioned well enough to meet with a student over a 3-year period. The goal was to help students see ‘life changes through the eyes of their mentors’, who would be ‘living, breathing textbooks’. The teachers, following sound educational practice, surveyed the students to judge the value of the program (Tomkowiak and Gunderson, 2004).
Was the geriatric mentor program, including the student surveys, research that should be supervised by the IRB? Medical school officials initially indicated that it was not, perhaps because research in curricular reform, according to the regulations, is exempt from review (45 CFR 46.101(b)(1)(ii)). Two years later, however, the IRB abruptly ruled that the program was research that should be reviewed and that the faculty was guilty of scientific misconduct.
In describing the IRB’s actions, Tomkowiak and Gunderson cite the theoretical concern that ‘medical students are in what is often recognized as “coercive” circumstances for such research, especially when the research in question is being conducted by faculty who will evaluate them’. They mention another ethical hazard: ‘the unbalanced power relationship between the student and faculty member, and the relative feelings of powerlessness that might inhibit a student’s ability to decline participation’ (Tomkowiak and Gunderson, 2004).
Coercion and power differentials can be important, but the wooden application of ethical concerns like these can show a loss of balance and an absence of contextual awareness. One hopes that the medical school is engaged in a good-faith effort in which faculty help students become doctors who are not only expert but also humane. Responsible students want to participate in surveys that will improve the curriculum for incoming classes, and medical school faculty who find ways to instill respect for older patients should be applauded, not censured. The IRB’s concern with hypothetical risks ignores the reality of medical education and the social goal of the research.
Loss of an enterprise-wide perspective
Current practice puts surveillance of subject risk in the silo of the IRB, but risk would not be seen in isolation if IRB members were institutional leaders. In the system’s early years, both NIH and Case Western used committees that were directly responsible to, or integrated with, the top leadership. These structurally appropriate locations ensured that the senior leadership’s view of the enterprise as a whole was brought to research review.
Seasoned leaders understand research in the larger context of their institution’s strengths, weaknesses, and goals. Many leaders have had distinguished scientific careers and have first-hand experience with success and failure in research with human subjects, but their expertise is not limited to science. They have been given greater administrative responsibilities because they helped their organization respond to budget shortfalls, sexual harassment, researcher misconduct, cyber-attacks, despotic department chairs, regulatory fiascos, and, particularly in the United States, litigation. They have seen their hospital or medical school praised and savaged on Fox News or Slate. They have practical wisdom.
These leaders once served on IRBs; they decamped as meetings filled with trivia. Few department chairs serve on IRBs today, and senior faculty generally avoid IRB service (Fost and Levine, 2007). But even a board composed of more junior faculty would be less likely to act contrary to the interests of the organization and the health of the public if it were not autonomous. After all, many institutional committees do not include senior executives, but they do report to the leadership and thus benefit from its guidance. Medical school tenure, budget, and curriculum committees, for instance, make recommendations, but their decisions may usually be reversed by a higher official. If they operate in a silo, the silo is open at the top. This procedure was followed in the NIH’s proto-IRB, for the Clinical Research Committee could be overruled by Shannon when he was director of the NIH (Stark, 2012: 106–107).
When IRBs err, in either direction, the senior leadership should still be able to step in. This is the natural remedy for a committee that has lost a balanced perspective. The 1966 guidelines that established the IRB system did not limit supervision of IRBs by the institutional leadership. But IRBs were given unappealable power in the 1971 revision of the guidelines, which bars senior officials from reversing unfavorable decisions by the committee (US Department of Health, Education, and Welfare, 1971).
The question of whether the leadership or the IRB should have final authority can be viewed from either a theoretical or a pragmatic perspective. In theory, the IRB might need protection from senior officials who are blinded by money and power. Thus one author cautions against ‘pressure from principal investigators and administrators to cut corners’ (Zimmet, 2011b: 444); another emphasizes that the IRB must be ‘able to act as an independent and objective body without answering to multiple masters who may have different agendas’ (Prentice et al., 2006: 31–32). However, IRBs can never be entirely independent or objective; IRB members themselves have political opinions, pursue personal agendas, and are afraid of legal liability (Ceci et al., 1985; Nelson, 2006; Stark, 2012; Van den Hoonaard, 2011). And, through a combination of the factors discussed here, they often act as if the welfare of the public were unimportant. This injures us all.
Risk management theory abhors subordinate autonomy. A mid-level manager or committee may be responsible for the identification and assessment of specific risks, but a senior official, like the president, acting under the supervision of the board of directors, is responsible for integrated risk management. The higher official has a broader understanding of the institution’s goals (Committee of Sponsoring Organizations, 2004; Doherty, 2000; Grose, 1987); for a medical school or hospital this includes both reducing subject risk and improving the treatment of cancer and heart disease.
Theory aside, IRB autonomy would make pragmatic sense if institutional leaders had a history of unwisely overturning IRB decisions, but this did not happen in the NIH’s proto-IRB, nor did it happen at Case Western when the leadership and the IRB were united. The autonomous IRB model we have followed since 1971 has proven dysfunctional in ways that could be remedied by involving the leadership, for instance through regular reporting, an appeals process, or other methods.
In a sound risk management system, scientists and IRBs would be recognized as operating in silos; both may have a limited view of the role of research in the institution, both may be biased with regard to the benefit to society and the risk to subjects, and both should be subject to oversight by the organization’s leadership. But instead of acknowledging the error of the IRB managing risk in a silo, the regulations weld the silo’s roof shut.
Child health
Jay Shen and his group developed educational programs to improve the nutrition and physical activity of inner-city Illinois fourth graders – an urgent goal at a time when childhood diabetes is an accelerating epidemic (Dabelea, 2009). The IRB required Shen to use consent forms couched in the language of a clinical trial. These forms advised parents that they could withdraw their children from the study at any time, and described the risks of the research. We are not told what risks might be involved in fourth graders being more active and eating better; we do know that, confronted with this ominous language, only 21 per cent of the parents gave permission for their children to participate (Shen et al., 2006).
Perhaps this IRB was too accustomed to reviewing clinical trials to modify its approach to this study; perhaps the IRB did not include someone knowledgeable about exercise, nutrition, or diabetes. No committee is perfect, and when an IRB errs, the investigator should be able to ask for help from the leadership. Top officials at every urban medical school are aware of the crushing burden of obesity and diabetes for inner-city children. On appeal, an institutional official could instruct the IRB to stop requiring an inappropriate consent process.
Unbalanced federal supervision
Shen’s IRB may not have been uninformed; it may have been afraid. During 1998–2001, federal officials, accusing IRBs of deficient operations, temporarily shuttered all federally-funded research at a dozen institutions. The alleged problem was never that an IRB impeded important research; rather, it was always that the IRB failed to ensure that subjects were fully protected (Brainard, 2000). OHRP should balance the same goals as IRBs themselves: protecting the subjects of research from harm, and promoting the research that helps all of us live longer and better. It does not.
This cautious posture reflects OHRP’s political and cultural reality. The National Research Act of 1974, which gave ethics review the force of federal law, emphasized subject protection; so did Congressional hearings, the General Accounting Office, and blue-ribbon panels that examined the problem (General Accounting Office, 1996; National Bioethics Advisory Commission, 2001; National Commission, 1978; President’s Commission, 1981; Subcommittee on Human Resources, 1998). Under this pressure, OHRP understandably focuses on subject safety and has no stated commitment to public health or welfare. To my knowledge OHRP has never chastised an IRB that slowed or damaged research.
As a result of this one-sided federal oversight, IRBs today feel little pressure to take societal benefit seriously. Instead, they seek to eliminate every possible risk to subjects and, emphatically, to themselves. CK Gunsalus, of the University of Illinois at Urbana-Champaign Law School, notes that IRBs now ‘bend over backward to make sure all “t’s” are crossed, but this inevitably leads to overzealous demands that impede research and discredit the IRB’ (Gunsalus et al., 2007). Koski has decried ‘a crisis of confidence and a climate of fear, often resulting in inappropriately cautious interpretations and practices that have unnecessarily impeded research without enhancing protections for the participants. Such “reactive hyperprotectionism” does not usefully serve the research community, the participants or the public’ (Koski, 2003).
Hospital infections
OHRP itself has fallen prey to hyperprotectionism. In 2004, Johns Hopkins scientist Peter Pronovost demonstrated that the risk of fatal infections falls dramatically when hospital personnel use a checklist during certain procedures (Berenholtz et al., 2004). In 2007, OHRP halted an attempt to extend this research; the agency’s rationale was that the checklist might cause friction in the health care team and degrade patient care, although it admitted that the chance was remote (Faden et al., 2013; Gawande, 2007; Kuehn, 2008). OHRP continues to hew to this extremely cautious, and unbalanced, approach (Drazen et al., 2013; Lantos, 2013; Wilfond et al., 2013).
Rigidity
There are severe shortcomings in the ideology that IRBs follow, the operations they conduct, and the supervision they receive. There is, in addition, a fundamental flaw – the system’s rigidity.
Any risk management system should respond to the evolving character and circumstances of the organization. The NIH enjoyed complete freedom of action when it created the Clinical Research Committee. Once the committee was established, the NIH leadership could expand it, abolish it, change its membership, or modify its procedures as experience suggested. NIH’s practice did change, for instance, as Shannon and other leaders debated how consent should be obtained and documented (Stark, 2012).
The IRB system’s creators did not intend to create a rigid system, yet the 1966 guidelines did not permit the selectivity of review that the Clinical Research Committee enjoyed. The 1998–2001 shutdowns drastically reduced IRB flexibility (Halpern, 2008). Today, every organization, of every size, engaging in research of any kind, must use the same kind of committee, consider the same issues, require the same information in consent forms, and operate on the same schedule regardless of method or subject. Today’s system puts scientists like Coe, Tomkowiak, Shen, and Pronovost through elaborate processes to protect subjects from trivial or hypothetical risks. This wastes the time of the busy people, usually volunteers, who serve on the IRB and the talent of scientists who would otherwise lessen our burden of kidney stones, uncaring doctors, childhood obesity, and fatal infections.
Solutions
The IRB/OHRP system was created at a time when modern concepts of risk management were unknown, so Shannon and his colleagues could not have known that it contained latent flaws. Early IRBs managed risk well because the flaws inherent in the system were not yet manifest. Those flaws, exacerbated by confusion about the system’s goals and unbalanced federal oversight, have now led to severe dysfunction.
Superficial reform cannot make the IRB system fit for purpose. Fortunately, the same theories of risk management that show why IRBs are dysfunctional show how Congress should reform the system.
Congress should replace the current model, which calls for a single method of ethics review, with one that permits methods adapted to each institution’s circumstances. Because each organization is best positioned to manage the risk within its walls, each medical school, hospital, university, nonprofit organization, and government agency that conducts research should develop a system appropriate for its own circumstances. Levine’s proposal for radical reform was to change the rules that apply to every institution; an approach based on modern risk management theory would expect each institution to develop its own rules.
This reform would spell the end of the IRB system as we know it, but not necessarily of individual IRBs. Some organizations, their freedom of action restored, might continue to use committees just like today’s IRBs; they could also delegate authority to other committees or individuals. Institutions would be free to partner with funders, specialty societies, patient representatives, and collaborative research groups to find optimal risk management approaches for specific kinds of research.
Should Congress require federal oversight of this new system? The decision to regulate any endeavor, including airline competition and scholarly and scientific research, involves difficult tradeoffs (Bardach and Kagan, 1982; Breyer, 1982). Theory provides no easy answer about federal supervision of a new system of human subjects protection. We do know that there is little sign of benefit from OHRP oversight, and ample evidence that it has increased IRB dysfunction (Bledsoe et al., 2007; Hyman, 2007; Schneider, 2015). In a new system, federal governance should be either abolished or rebuilt from the ground up.
A new system will still face the challenging task of deciding what risks subjects may be permitted to accept and how they should be informed of those risks. A new system will still grapple with the eternal dilemma of how best to balance subject protection and societal benefit. A new system will need to address serious ethical questions about studies of new cancer agents, research involving children, and many other kinds of investigation. But a new system based on sound principles of risk management will be better able to make these tough choices.
In the 1960s and 1970s, scholars, scientists, and officials of good faith agreed that subjects must be protected; that need is as urgent today as it was then. What has changed is that we now know that methods that seemed ideal 40 years ago have not led to the desired results.
Shannon and the other federal officials who created the IRB system believed they were exporting a model successfully pioneered at the NIH; they had no way of knowing that risk management principles developed decades later would show that their system was designed to fail. We can honor their spirit by building a better system – one designed to balance subject protection with our shared need for lifesaving research.
Acknowledgements
The author is grateful to Patricia Naughton, Shawna Peterson, Jennifer Pratt Mead, Hamisu Salihu, Laurence McCullough, and Eunice Thomas for their careful reading and thoughtful suggestions. An anonymous reviewer at Research Ethics pointed out that Coe’s urine samples could be re-identified through DNA analysis, and made several other valuable comments and suggestions.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
Regina O’Donnell, by funding the William W O’Donnell and Regina O’Donnell Chair in Family Medicine, provided invaluable support. The Center for Clinical Research and Evidence Based Medicine at the University of Texas Health Science Center at Houston, led by Jon Tyson, provided counsel and funding.
