Abstract

If science is about the search for truth, then dishonesty has to be the gravest sin, and a sin to which biomedical science is particularly vulnerable because observations – even when right – are not always replicable in the way they are in the physical sciences. Many of these issues were addressed in a brief report published by the Royal College of Physicians in 1991, 1 but it was the publication of the subsequent book on ‘fraud and misconduct in biomedical research’ in 1993 that first brought the issue to the attention of a much wider audience. A new, much revised fourth edition of this book appeared last year, and a study of the way the book has evolved over time reveals much about the piecemeal and often rather unsatisfactory way the challenge has been addressed in the UK during the last 15 years. 2
Earlier editions of the book contained vivid descriptions of past cases of gross research misconduct. These are not repeated in the latest edition, presumably because the editors think that most people now accept that there is a problem. However, if the extent to which this is accepted can be judged by the effectiveness of the remedies now in place, that has to be a misjudgement. If people realized how common at least minor degrees of misconduct are, and how much damage they can do, more would surely have been done about it by now. And, while serious dishonesty may be rare, its flawed investigation can discredit the regulatory process itself. One university's flawed investigation of genuine misconduct 3, 4 recently did that body's reputation almost as much damage as the General Medical Council's mishandling of an allegation that turned out to be false. 5 And the temptations that trigger misconduct must have become more pressing in the last 20 years as more people have been attracted to postgraduate research; as it has become clear how much career advancement can depend on having a portfolio of published research; and as universities have found themselves competing for limited funds influenced by the findings of a necessarily flawed Research Assessment Exercise.
One unresolved issue is whether what now gets called ‘research governance’ should place the main focus on prevention or detection. Several contributors to the latest edition of the book on fraud and misconduct in biomedical research 2 have focused on prevention, but many will agree with Povl Riis that this approach has its limitations because most misconduct is committed by people ‘who know very well about the consequences of such behaviour, but think that they are too smart to get detected’. The book's editors call for students to be taught more about ‘research and publication ethics’, but it is doubtful whether they will ever be made more honest by sitting through a lecture telling them that ‘honesty is the best policy’. Like most of medical ethics, it is a concept best grasped by seeing it (and the lack of it) in action.
In the 10 years since a ground‐breaking meeting in Edinburgh 6 stressed the need for a regulatory body with the financial resources – a body with experience, and the statutory clout needed to investigate such problems properly – little has been done. A UK Panel for Health and Biomedical Research Integrity was eventually launched in April 2006, but even its supporters would admit that it has not achieved very much, and the Lancet, in a recent editorial, characterized the whole thing as ‘a missed opportunity’. 7 Indeed, despite the protestations of the panel's chair, Professor Sir Ian Kennedy, 8 the editorial concluded that ‘rather than prolonging this ineffective enterprise, the UK needs to go back to the drawing board and show that it takes research integrity seriously’.
Prevention is certainly the approach that has been adopted by the UK Government, 9 so it is odd that, although the way the pharmaceutical industry has started to address some of its internal problems (in part thanks to the leadership shown by Frank Wells) is well described in the latest edition of this book, NHS ‘Research Governance’ gets almost no mention. The role of research ethics committees is covered, but we know they are not usually in a position to judge the scientific quality of any research proposal put before them, and they are not equipped to monitor any study once they have concluded that the approach is, at least as put to them, ethical. It is largely for this reason that the UK government moved to make NHS Trusts responsible for such monitoring, following what it falsely claimed had been found nine years ago during an enquiry commissioned by the NHS Executive into allegations of ethically unacceptable clinical research in Stoke on Trent. 10
One problem with this approach is that it can be burdensome, complex and time consuming, and there is a growing belief that it is now doing more harm than good. 11, 12, 13 Current guidelines stipulate that all research sponsored by an NHS Trust must be monitored by the Research and Development Department at least once every 6 months, and monitoring may need to take place more frequently. 9 Similar obligations rest with external sponsors. This can be very burdensome for multicentre studies, and near to impossible with international trials. When this is added to the complexity of the steps that have to be taken before research can even start, it is small wonder that even an unbiased but well-informed outsider has concluded, from his vantage point as the Chair of the National Institute for Health and Clinical Excellence, that this strategy is not only causing the cost of all research to rise rapidly but also ‘places a massive bureaucratic challenge on potential sponsors and investigators’. 14 Whether any of this is proportionate when the study is a simple piece of observational research is doubtful, but the present Government guideline does say ‘all’ research.
There are those who would say that this could be money well spent – and the book's current co‐editor Michael Farthing is one of them 15 – but that is only going to be true if it really does prevent or detect fraud and misconduct, and there is no evidence, as yet, that it does. The other problem is that, as the Chief Medical Officer Sir Liam Donaldson realized when he was the Regional Medical Officer for the Northern Regional Health Authority 15 years ago, individual Trusts have to cope with cases of serious medical misconduct so infrequently that they very seldom acquire enough experience to do it well. 16 If this is true for any form of misconduct, it must be even more true for research misconduct – a relatively rare form of serious misconduct.
So where should we be looking if we accept, with Povl Riis, that prevention only has a limited role to play? This is where we can all be grateful to Drummond Rennie, the long‐serving editor of the Journal of the American Medical Association, and Kristina Gunsalus, an Adjunct Professor of Law in the University of Illinois, for what they have now written about the key role they played in America's current approach to these issues. 17 They take a helpfully historical approach, and this certainly makes it much easier for readers to understand, and to accept, the conclusions that were eventually reached by the Ryan Commission, as summarized in the document finally issued by the Clinton administration on 6 December 2000. 18 Much of what they have now said was said more briefly 11 years ago, 19 but the fuller treatment of these issues, alongside the helpful overview provided by Nicholas Steneck, the Director of the Research Ethics and Integrity Program in Michigan, and a consultant to America's federal Office of Research Integrity (ORI), certainly makes the new edition of the book by Wells and Farthing worth every penny of the £45 it currently costs.
Rennie and Gunsalus start by making a convincing case for a clear, watertight definition of serious research misconduct, and make it clear that the ORI is only charged with looking into allegations of ‘fabrication, falsification or plagiarism in proposing, performing or reviewing research or in reporting research results’ that have not already been reviewed by some other comparable authority in an equally rigorous manner. They also make it clear that, to be secure from subsequent legal challenge, ‘misconduct’ had to be the wilful transgression of some clear rule. The Commission accepted that, although this required the adjudicating panel to make some judgement about intent, this was no more than the obligation that juries face in most criminal trials. Those undertaking any investigation needed subpoena powers, but investigation and adjudication should always be kept separate, and decisions based, as in civil proceedings, on the ‘preponderance of the evidence’.
The basic tenet had to be that it is ‘unfair to make a researcher liable for conduct that is not already clearly defined as being wrong’. It is on this ground that Rennie and Gunsalus convincingly criticize the definitions currently being used by most other countries. 17 They mock, in particular, as ‘doomed from the start’, the definition adopted during the Joint Consensus Conference on Misconduct in Medical Research in Edinburgh in October 1999 6 which was simply ‘any behaviour by a researcher, intentional or not, that falls short of good ethical and scientific standards’.
They argue that no system is ever going to work reliably without access to a small team of investigators who have done such work often enough to be able to do it well. This is a particularly serious deficiency in the UK at the moment, only met in part by the various small groups who have done such work, from time to time, for some pharmaceutical companies. No system is ever going to win the trust of the public if those investigating any allegation are not manifestly free from any potential conflict of interest (as vividly shown by Sheffield University's failure to deal with a concern raised by one of their own staff to the satisfaction of anyone other than their own administrators). 3, 4 And when this case, which involved the allegation that a pharmaceutical company had been trying to get a member of the University to publish flawed research, was passed to the UK's Medicines and Healthcare Products Regulatory Agency in 2006, the Agency is said to have given the allegation ‘low priority’, and to have said that it did not have a procedure for dealing with such things. 20 The infamous, and not dissimilar, high profile Olivieri scandal at the Hospital for Sick Children in Canada was clearly not just a ‘one‐off’ problem that others can afford to ignore. 21
The description of the system now in place in America serves to highlight, therefore, just how unsatisfactory the current situation is in the UK, even though the ORI definition does not try to cover all forms of research misconduct. Rennie and Gunsalus are very open about this. As they point out, ‘science is a risky enterprise, often requiring much trial and error’, and nobody would ever risk doing such work if every error was treated as evidence of misconduct. 17 There are, nevertheless, many forms of misconduct that are not covered by the ORI's definition. The negligent (or deliberate) failure to publish research findings can delay the recognition that some particular form of treatment is completely ineffective or even harmful. 22, 23 This does not get a mention in the ORI definition, and only a single one‐line mention in the present book. Indeed, there are many things that can be unethical about the way patients are treated, animals cared for, financial and other conflicts of interest handled, data manipulated, reports written, publication manipulated and authorship allocated, that go unaddressed in this definition. Rennie and Gunsalus stress that this omission was pragmatic, and was not intended to suggest that such behaviour cannot amount to serious misconduct.
Unfortunately that is the impression left by society's current failure to find any other effective way of expressing its disapproval of these moral lapses. Most commentators to date have also taken the same line as the ORI, and suggested that the gravity of any failure should be judged simply by the extent to which it appears intentional. 24 Others, however, would argue that the gravity of any lapse should also take into account the amount of harm done, or at risk of being done. Society disapproves of reckless driving, and the penalty rises if it is done in a built‐up area where there are many pedestrians at risk, and rises higher still if someone gets killed. On this view, intent merely makes the moral failing worse.
The problem is that, if we ignore careless or reckless work, we are, in effect, condoning it. And society's failure to devise an effective strategy for showing its disapproval unless any lapse has been judged intentional has serious consequences, because bad, careless and negligent research can certainly harm and kill. Indeed it almost certainly kills much more often than wilful misconduct. The book by Wells and Farthing contains two excellent discussions of the problems that ‘whistle‐blowers’ still face if they voice their concerns, but no contributor looks at the extent to which excessive regulation can simply inhibit research rather than improve it. Perhaps we should look at what happens in other fields of human endeavour: we should use many more carrots and rather fewer sticks. It may be difficult to devise a fair way to ‘name and shame’ persistent offenders, but we really could do more to celebrate good practice.
More seriously the regulatory ‘authorities’ are fast losing the moral respect of the research community because, over issues like informed consent, a concern for form seems to have trumped concern for substance. Research shows that patients are baffled by the present process of ‘signed consent’. 25 Most patients thought that ‘the primary function of consent forms was to protect hospitals’ and that the forms ‘allowed doctors to assume control’, but this does not seem to have troubled those who now think themselves responsible for policing medical ethics. Such mismanaged governance is as morally indefensible to many as the misconduct it is supposed to be addressing. And because respect once lost takes time to recover this is deeply troubling, as policing only works well when there is reciprocal respect.
Perhaps we need to get back to the approach that I first encountered when I became a Boy Scout. I was told that ‘a Scout's honour is to be trusted’, and that a ‘Court of Honour’ of my fellows would judge whether I had at least tried to live up to their expectations. The present approach clearly assumes that nobody in the Health Service, and no biomedical research worker, is ever to be trusted. Given that we all do better when we know that a lot is expected of us, as Baroness O'Neill outlined so cogently and concisely in her Reith Lectures in 2002, 26 the UK Government might do well to reverse the whole of its current approach to ‘Research Governance’, and work on the assumption that people want to be honest. Most will certainly strive harder to be honest if this is expected of them. The focus on prevention should give way to a focus on reliably identifying failure, although this approach will only work if there is a central ‘Office of Research Integrity’ with legal teeth (even if the study has not been supported by public money), run by people who know how to conduct a quick and competent investigation once allegations reach them, and then get what is found assessed in a transparently fair and timely way.
Such a strategy could not work worse than the present one, and would almost certainly be better. The tangible and intangible costs might also be less dauntingly high. It is shocking to see how little has been done to study these very different approaches to fraud in science scientifically. Even if it does not turn out to be significantly better, is it not better to believe that most people want to do the right thing, and reflect this in the way this troublesome problem is approached?
The problem is that it looks as though dealing with this sort of misconduct properly is going to require legislation. Unfortunately there is little evidence that our politicians are interested in this at present, and much recent regulatory legislation has not really been appropriate or proportionate. The Americans have recognized that research misconduct is still misconduct whether it is committed by a doctor, a nurse, a social scientist or a laboratory‐based research worker. Employers, and other regulatory bodies, may well know what to do for the best once an allegation of ethical misconduct has been established, but experience shows that they seldom find a team with the skill and expertise required to establish, in a fair and timely way, the facts on which any such decision needs to be based. Real or perceived conflicts of interest have also fatally compromised many such investigations in the recent past.
Footnotes
DECLARATIONS
Acknowledgements
None
