Abstract

In a recent paper, Allnutt et al. (2013) urge clinicians to base violence risk assessment on ‘empirical knowledge’ in order to ‘inform appropriate management interventions to reduce the identified risk’. However, the paper does not address the concerns of many general psychiatrists about how the various forms of risk assessment that have crept into mainstream psychiatry affect clinical decision-making. These concerns include the lack of evidence that performing a risk assessment reduces the overall level of harm, the effect of high rates of both false positive and false negative findings, the misallocation of resources from low-risk to high-risk patients and the ethical problem of the lack of informed consent.
The paper claims that the research regarding the use of risk assessment in clinical practice is ‘promising’. In fact, there is almost no evidence that the routine application of risk assessment has ever prevented any form of harm, and the surprisingly few studies of the outcomes of clinical decisions based on various types of risk assessment show no effect from risk assessment itself. In an era of so-called evidence-based medicine, the lack of evidence for such an expensive clinical activity, one that can in turn result in the misallocation of mental health service resources, would appear to be a major problem for the risk assessment industry. There is an obvious need for research examining the outcomes of interventions based on risk assessment before we go any further.
The paper states that the ‘ultimate goal is to manage specific populations of patients with an increased propensity for any violence’. However, it does not offer a satisfactory answer to the unacceptably high rates of both false positive and false negative predictions, even for the more common forms of minor harm. The price of a false positive finding is unfair restriction of freedom, especially for anyone caught up in the forensic spiderweb; the price of a false negative is the loss of care (Ryan et al., 2010). The large number of false negatives in both violence and suicide risk assessment means that the diversion of resources due to incorrect predictions would probably result in greater net harm than the harm prevented by identifying people for increased supervision and care.
The paper mainly discusses violence risk assessment. However, the prevention of suicide is arguably of greater concern to general mental health services, given the far higher rates of suicide compared to lethal violence. Several of the known risk factors for violence and suicide go in opposite directions (Large and Nielssen, 2013; Large et al., 2013), so that a risk assessment looking for one form of harm is likely to overlook the likelihood of the other.
Another issue not covered in the paper is the ethical problem arising from performing risk assessments on patients without their full knowledge or consent. Risk assessment is usually carried out to the patient’s disadvantage and without their informed participation, despite some evidence that patients are at least as good as clinicians at assessing the likelihood of a bad outcome (Roaldset and Bjørkly, 2010). Consent forms and statements confirming that the patient understood the purpose of an interview rarely indicate that the patient also understood that the assessment might include predictions about their future behaviour. One of the main reasons that risk assessment has been so unsuccessful is that it is largely based on incorrect assumptions about the nature of human behaviour, which is more closely linked to unpredictable future circumstances and fluctuations in mental state than to fixed traits that are always present. This also goes some way towards explaining why risk assessment is so poor at forecasting the timing of adverse events.
In making the comparison with managing risk in the treatment of non-psychiatric disorders, the authors make the common mistake of confusing the probability of an adverse event with the risk, which is properly defined as the probability of the adverse event multiplied by the loss incurred (Large and Nielssen, 2011). On this definition, a rare event with catastrophic consequences may carry a greater risk than a frequent event with trivial consequences. For other conditions, the patient is told the probability of various outcomes on the assumption that they are fit to make their own decisions about the risks of various treatments, or of non-treatment, according to how the outcome might affect them and whether they can afford the treatment. In mental health settings, even patients who are competent to make decisions about their physical safety and their potential for aggression are often left out of the process of predicting their own risk and devising their own management plans.
The authors acknowledge the problem of attempting to predict rare serious adverse events and the ‘limited investigation’ of the application of risk assessment in clinical settings. They also acknowledge that their conclusions are based on their personal views rather than a systematic review of the topic. What they do not acknowledge are the large vested interests of the risk assessment industry: if the royalties from the various copyrighted risk assessment tools and the fees charged for expert opinions on future dangerousness were taken into account, these would probably far exceed any incentive ever offered by a pharmaceutical company. This also represents an undeclared conflict of interest in many of the articles about risk assessment.
See Review by Allnutt et al., 2013, 47(8): 728–736.
Footnotes
Funding
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Declaration of interest
The author reports no conflicts of interest. The author alone is responsible for the content and writing of the paper.
