Abstract

Doctors in the UK are assessed at the end of each training year by a mechanism called the annual review of competence progression (ARCP). The stakes are high: failure to progress may mean having to declare receipt of a failure outcome, forfeiting a pay rise or having training delayed. During the 2020–22 COVID-19 pandemic, which caused a training crisis, new ‘no blame’ outcomes for failing to progress (outcomes 10.1 and 10.2) were applied for some trainees. This invites the question: why have ‘blame’ outcomes at all?
Importantly, the current blame-centric ARCP process contains a loophole open to potential abuse: it is possible to inflict career damage on a clinician under the guise of a progress review. It is also restrictive – an inflexible framework which struggles to adapt. This ‘Podium’ opinion article asks the question: why not make all ARCP outcomes no blame outcomes?
Problems
The ARCP at its best involves multiple senior clinicians (who know the trainee well) with the best interests of their trainee featuring in their assessments. It should recognise achievements, identify areas for improvement and help trainees who need more time to train or more support. In many places, it does function like this. However, the ARCP can also be used as a mechanism to bully trainees and even exclude them from training. The Dr Chris Day whistle-blower case is perhaps the best-known instance of an alleged misuse of the ARCP. That case is making its way through the UK courts, and one of the scenarios it raises is one many trainees fear: an influential person uses the ARCP to introduce unfounded doubts over a doctor’s temperament, suitability, probity or competency. The onus then falls on the trainee to prove that these opinions are false. The ARCP documentation provides a lasting record of the doctor being assessed, and it has the appearance of great rigour. Therefore, if the outcome of an ARCP smears a junior doctor, it is an easy stain to create and a difficult one to remove.
Recently, some curricula have been changing, including the addition of a similar assessment (which allows for greater subjectivity) called the Multiple Consultant Report (MCR). Many trainees already perceive the ARCP as a subjective and problematic mechanism for assessing performance.1 It has even been described as ‘the same as a rugby tie’ – intimating that if you get along with the people in the club you get your paperwork signed off to progress.1 Beyond trainee perceptions, another recent study reported that supervisors evaluate their trainees based on their ‘global sense of trainee capability’, a process which is ‘highly influenced by the nature of the relationship between them’.2 It is therefore concerning that, if anything, moves are afoot which potentially introduce more subjective assessments. The ability of individual assessors to provide skewed, heavily weighted input should be considered a loophole which needs to be closed urgently in any future changes.
One argument for assessments which are more subjective (‘do they have what it takes’) is that the alternative (quantitative assessments) can end up as box-ticking exercises. In that respect the ARCP process is also problematic, because it relies on the trainee uploading electronic assessment forms called workplace-based assessments (WBAs) throughout the year. If a trainee fails to obtain the required WBAs, they can receive a failure outcome. This creates pressure to produce high numbers of lightweight WBAs (trainees can complete one for reading out the WHO surgical checklist, for example). Furthermore, this is a restrictive system with very little flexibility. This has been especially evident since 2020, when the COVID-19 pandemic demanded a level of flexibility that strict, time-limited pathways struggle to deliver.3 Overall, pressure to pass year-on-year serves neither the subjective (do they fit in) nor the objective (how many WBAs have they done) aspects of the ARCP.
For patient safety, some might be reassured that ARCPs ‘filter out’ unsafe trainee doctors – but is this the case? Additional MCR assessments were introduced partly because current systems were not sufficiently fulfilling this role (the General Medical Council was concerned about fitness-to-practise complaints).4 Current systems might even enable so-called ‘failure to fail’, meaning trainees are not failed when they should be.5 This might at first seem in tension with my central ‘no blame’ proposition – unless, that is, ARCP biases exclude the wrong doctors while others pass through the system despite concerns. For trainees who should be ‘failed’ (perhaps through no fault of any person or system), the question is: will an adversarial ARCP process facilitate this? Perceived blame in ARCPs might not promote constructive, safety-focused conversations.
Solutions
In 2020–21 there were attempts to change the ARCP–WBA system following disruption to training. For instance, the so-called ‘no blame’ outcomes (10.1 and 10.2) were applied for some trainees, reflecting that a trainee may not acquire their competencies when it is not their ‘fault’. The addition of no blame outcomes is a progressive step. However, the ARCP still contains outcomes where the responsibility for failing falls squarely on the trainee. They are not openly called ‘blame outcomes’, but the addition of ‘no blame outcomes’ has shown this is essentially how they are perceived.
Whilst no blame outcomes acknowledged that a trainee might fail to gain competencies due to COVID-related disruption, is there any difference between that ‘extrinsic’ reason for failure and any number of others (personal issues, trainee–trainer mismatch, lack of training opportunities and so on)? Even if a trainee disengages (which may be thought an intrinsic reason for failure), it is lazy training indeed which points the finger rather than asking the more introspective question of whether the training programme itself might have fallen short.
Arguably, if a trainee passes the rigorous selection processes to gain a place in a training programme, they have started out with the ability and attributes to succeed. To disagree with this would be to propose that national selection fails in its sole purpose. A famous trainer once said, ‘there are no bad students, only bad teachers’ – is it time to remove the idea of blame from the ARCP process completely?
Final remarks
It should not be contentious to state that UK trainees have had a challenging time recently; the introduction of no blame outcomes will have been a relief to many. However, the ARCP in and of itself remains problematic. It is arguably unrealistic to replace the ARCP with more robust independent assessments (national exam-style assessment centres, for example). So, is there an ‘easy win’ by which we can realistically alleviate the problems of the ARCP system? The answer is yes: apply universal no blame outcomes and remove the adversarial component of the ARCP. We might even return to the notion that training should take as long as it takes. Progress has been made with the application of no blame outcomes 10.1 and 10.2. Let’s apply a no blame culture to the entire ARCP process.
