Abstract
OBJECTIVE
Medical specialist trainees report dissatisfaction with both the usefulness and timing of feedback provided following summative examinations. This study aimed to explore ophthalmology trainee and supervisor experiences of feedback following final summative examination (the Royal Australian and New Zealand College of Ophthalmologists Advanced Clinical Examination (RACE)).
METHODS
Semi-structured interviews were undertaken with ophthalmology trainees who had recently sat RACE (2017-2021) (n = 19) and supervising ophthalmologists who support trainees to prepare for RACE (n = 10). Interview data were thematically analyzed.
RESULTS
Two themes were identified. Inadequate feedback captured trainee experiences of receiving feedback on examination performance that was insufficient and unhelpful for identifying gaps in learning, explaining the reasons for failure and supporting preparation for resitting. Inability to contextualize feedback encompassed trainee and supervisor concerns regarding the inability to review examination manuscripts after sitting the examination, the absence of marking criteria, rubrics and model answers for understanding the passing standard, and the lack of opportunity to discuss performance with examiners.
CONCLUSIONS
Detailed, individualized task-level and process-level feedback on examination performance is needed for all trainees. Opportunities to view examination manuscripts, marking criteria and model answers, as well as speak with examiners, would improve transparency of the assessment process, enhance feedback and improve trainee success.
Introduction
Over 4 decades ago, Ende 1 described the paucity of feedback in medical education. While the provision of feedback within medical education has improved, a 2018 review reported that in 29% of studies trainees perceived feedback to be of low quality or insufficient. 2 The persistence of multiple barriers to feedback may explain this finding. These barriers include educator factors such as unapproachable teachers, discomfort giving feedback, low self-efficacy, low credibility and a lack of competence in the skills being assessed.3–6 Learner barriers include lack of engagement in the feedback process, lack of humility and limited emotional receptivity to feedback.7–9 Relationship barriers to feedback have also been described. These include adversarial relationships and the absence of respectful, credible educational alliances.5,6,8–10 The content of feedback can also create barriers to its acceptance, arising from uncertainty about the approach and quantity of feedback. 4 Furthermore, inconsistent, judgmental, negatively framed, non-specific, and non-goal-oriented feedback is generally poorly accepted.4,6,8,10 Lastly, contextual factors such as lack of time and receiving feedback in inappropriate places can act as barriers to delivery and acceptance.6,8,10
The provision of timely, high-quality examination feedback forms part of the current best practice in postgraduate medical assessment standards.11,12 According to Burgess et al, 13 effective feedback in clinical settings is planned, explicit, descriptive, focused on behavior, specific, concise, verified by the recipient and honest. However, specialist medical trainees in Australia,14,15 and internationally (usually referred to as residents or registrars),12,16 often do not receive adequate examination feedback. The most recent annual nationwide survey of Australian specialist medical trainees found 34% perceived they received useful feedback about examination performance and 40% felt that the feedback provided was timely. 17 Feedback following specialist medical examinations has been identified as an area needing improvement, particularly for high-stakes summative assessment. 12
Postgraduate medical examination failure rates are high across Australian medical training colleges 14 and globally. 18 Evidence suggests that pass rates decline with repeated sittings,19,20 which usually indicates that trainees have not achieved competence and should seek further training. However, when competent trainees fail, the role of feedback in passing a subsequent examination is a topic worthy of exploration. Access to quality feedback to understand areas requiring improvement is integral to success when resitting. Feedback on examination performance may also be beneficial for those trainees who pass, to enhance belief in their capability and to identify areas for continued professional development.
Although the importance of examination feedback has been recognized, there is a dearth of research regarding experiences of feedback after summative postgraduate medical examinations. 21 Consequently, this paper aims to explore the feedback experiences of Australian and New Zealand ophthalmology trainees and supervisors relating to final summative examination, the Royal Australian and New Zealand College of Ophthalmologists (RANZCO) Advanced Clinical Examination (RACE). The findings will be useful for improving the examination feedback process, for both RACE and other summative specialty medical exams, and to optimize medical specialist trainee learning and performance in future assessment sittings.
Methods
The reporting of this study conforms to the Consolidated Criteria for Reporting Qualitative research (COREQ): a 32-item checklist for interviews and focus groups (Supplemental file 1). 22
Ophthalmology training in Australia and New Zealand
Medical specialist training in ophthalmology in Australia and New Zealand is a 5-year vocational training program (VTP) overseen by RANZCO. The VTP is centered around work-based learning within clinical contexts, which is coordinated across a network of 8 vocational training networks. 23 The final summative examination of the VTP is known as RACE, a high-stakes assessment hurdle that trainees must pass to progress to their final year of training. To be eligible to sit RACE, trainees are required to complete 3 years of the VTP, have passed previous summative and formative assessments and be signed off by their Director of Training (DOT) in their training network. 24
The RACE comprises a written and a clinical component, both of which must be passed to progress. The written component consists of 2 sections conducted over 2 consecutive days. Part A has 9 short essay questions (SEQs) and Part B has 30 very short answer questions (VSAQs). There are 3 types of SEQs: (1) clinical scenarios where the diagnosis is fairly clear and the candidate needs to describe the clinical care required; (2) clinical scenarios where the diagnosis is unclear and the candidate needs to provide a diagnosis based on the information provided; and (3) disease discussions, such as a discussion of trachoma. The VSAQs test specific knowledge on a topic. The clinical component of RACE consists of a face-to-face objective structured clinical examination (OSCE). It assesses performance on each of the curriculum standards using real cases.
A main impetus for this research was concern within RANZCO that a large proportion of ostensibly competent trainees were failing the written component of RACE. This anecdotal evidence was substantiated in the larger study, of which this research is a sub-component. Analysis of pass rates in the larger study found, despite some annual variation, approximately 42% of trainees failed the written examination. 25 This is despite all trainees who attempted the examination being signed off as ready to sit by their DOT. The larger study made a number of recommendations to improve the examination to address the high failure rate among competent trainees.
At the time of the research, feedback following RACE comprised cohort feedback, referred to as the Examiners’ report, which was available to all candidates. The current format of the Examiners’ reports details the curriculum standard that each question relates to, the purpose of the question, requirements for a pass grade (eg, describing clinical findings, causes of conditions, differential diagnoses and treatment approaches), common reasons for unsatisfactory grades and the Examiners’ impression of cohort answers to each question. Examiners’ reports prior to 2022 did not provide detail on the requirements for a pass grade. Candidates received a pass/fail notification by email. Those who did not pass the exam received individual general feedback listing the total number of pass and fail marks in the written and/or clinical examination. 24 The general feedback listed the requirements for a satisfactory grade on each question, how the cohort performed as a whole on the question (eg, “Generally this was a well-answered question covering basic physiology.”) and the Examiner's impression of the cohort's answers (“There was poor differential diagnosis and etiology.”). This feedback did not include an explanation of why the individual candidate failed specific questions.
Study design
This study is the qualitative component of a larger explanatory-sequential mixed methods study exploring the experiences of ophthalmology trainees sitting RACE and supervisors who support their preparation. 25 In-depth interviews were conducted to elicit rich data on how interviewees experienced and understood post-RACE feedback.
Ethics
Ethics approval was provided by the University of Tasmania Health and Medical Human Research Ethics Committee (Project ID: 25018) and the RANZCO Human Research and Ethics Committee (Reference, 129.21).
Participant recruitment
Potential participants were identified from contact lists maintained by RANZCO for formal communication purposes. Using a purposeful sampling method, email invitations were sent to the following groups to recruit participants for in-depth, semi-structured interviews: (a) “Trainees,” ophthalmology trainees who had recently sat RACE (2017-2021) (n = 166); and (b) “Supervisors,” Fellows providing supervision to trainees preparing to sit RACE (n = 127). The email was sent from a generic RANZCO administration email address. A study information sheet, introducing the research team, detailing the purpose of the research and describing the research procedures, was attached to the invitation email. Persons interested in participating emailed the university researcher (B.J.), who maintained their confidentiality at all times.
Consent
All respondents to the invitation email provided verbal informed consent to participate prior to interview. Verbal consent was approved by the ethics committee in consideration of the characteristics of the study population and the minimal risk of harm from participation.
Data collection
Interviews were conducted by the lead author (B.J.) via Zoom or telephone at a time convenient for the interview participant. Interviews ranged from 27 to 75 min in length (average time 42 min). Interviews were semi-structured in nature, with an opening question used to elicit rich responses: “Could we start by you telling me a little about your experience of the RACE exams?” Relevant prompts were then used to elicit further key information if not already discussed (see Supplemental file 2). The interview guide was developed by the research team and reviewed by an Expert Panel (consisting of members of the Board of Examiners and staff involved in the administration of RACE) prior to being finalized and submitted for ethical approval. All interviews were audio recorded and transcribed verbatim. Transcripts were returned to participants to check for accuracy and completeness and amendments made where necessary.
Data analysis
The researchers involved in data collection (B.J.) and analysis (B.J., M.K.) approached the study from the standpoint of outsiders. They do not have a background in ophthalmology or specialist medical education. Their backgrounds are in allied health and rural health research. The researchers monitored their own knowledge, attitudes and assumptions regarding ophthalmology trainees, high-stakes examinations and RANZCO as an institution, reflecting on how their personal biases may influence the research process and findings. The researchers engaged in reflexive writing throughout the research process, documenting interview and data analysis notes to monitor their perspectives and insights. These notes were discussed with a third author (P.A.) (who has a background in epidemiology and rural health research) during the thematic analysis. All the team members involved in data collection and analysis are female and have a PhD.
Interview data were thematically analyzed, incorporating elements of reflexive thematic analysis. 26 Following review of transcripts, 2 authors (B.J., M.K.) independently deductively coded data related to experiences of feedback using NVivo version 12.0. Codes were hierarchically organized into broader subthemes and eventually themes as the coding progressed. Both researchers then met to discuss their coding and to confirm the main themes, subthemes and the relationships between themes. Themes and subthemes were refined through iterative discussion with the broader research team. Verbatim quotations were used to exemplify findings where appropriate.
Results
A total of 29 participants (12 females and 17 males) were interviewed, 19 trainees and 10 supervisors. Participants were drawn from all ophthalmology training networks across Australia and New Zealand operational at the time of the study. Among the interviewees, 8 trainees had experienced recent failure on one or more sittings of RACE and all supervisors described supporting trainees who had experienced failure on at least one RACE attempt.
Thematic analysis identified 2 key themes and 5 subthemes related to experiences of summative feedback following RACE.
Inadequate feedback
The first key theme, inadequate feedback, encompassed the experience of receiving feedback on examination performance that was insufficient and unhelpful: it did not allow trainees to identify gaps in learning or to understand an often-novel experience of failure, nor did it help supervisors support preparation for resitting. This theme included the subthemes: brief, generic comments; and feedback is only for failure.
Brief, generic comments
Trainees who had experienced examination failure acknowledged receiving individual feedback in addition to the Examiners' report. However, this feedback comprised brief, generic comments related to common reasons for failure across the entire sitting cohort. This collective feedback was considered unsuitable for such a high-stakes assessment.

If you fail, you get terrible feedback. You can’t do that for high school students, let alone for an examination like this. (Trainee #4)
Those who failed RACE described this as a novel experience given their previously successful academic careers. They were therefore highly motivated to understand why they had failed and how they could improve their performance on subsequent examination attempts. However, trainees and supervisors described the brief, generic comments, especially in relation to the written examination, as largely unhelpful in providing guidance on individual examination performance.

Formal exam feedback is not useful as candidates remain without a clear idea of how to address the problem. (Supervisor #2)

I do feel that specific information from the exam committee would be useful rather than just reading the generic cohort feedback. (Trainee #9)

It really doesn't help you pass if you don't know what you didn't do well. If you could get very detailed feedback specifically on what it is that you did wrong, you’ve got more chance of fixing that. (Trainee #17)
Supervisors reported being invested in trainee outcomes on RACE and were therefore willing to help unpack examination failure. However, the absence of detailed, individualized feedback impacted supervisors’ efforts to identify strategies to help trainees prepare for subsequent examination attempts.

Feedback has been completely inadequate … feedback has to be clear and detailed so the candidate knows what they did wrong because if they don't, they'll just do the same thing again and again and again. (Supervisor #5)
Given the brief, generic comments received, interviewees who had experienced multiple RACE failures reported being motivated to attend a meeting with the Training and Progression Committee (TPC) to discuss their examination performance. However, trainees who attended a meeting with the TPC again described generic commentary that was largely considered unhelpful. Feedback tended to encompass general discussion of study approaches and the importance of self-care, which trainees felt disregarded their prior academic achievements. With the potential for exclusion from the VTP following their next examination attempt, trainees had anticipated more specific guidance on how to improve.

[The committee] just wanted to tell me stuff that anyone that has studied for exams would have known, try to get into a study group, try to get into good study habits, just stuff that was like telling a year ten kid how to study and I was just like, ‘well, that was a useful hour of my time’. (Trainee #8)

There are mechanisms that happen when people aren’t up to scratch and if they fail the exams there's meetings … I don’t know that it does a lot of good because I think it just highlights a problem and people know if they’ve failed, I don’t know if meeting with them changes what they know. (Supervisor #10)
Feedback is only for failure
While some trainees described feeling confident after having sat RACE, others were genuinely unsure about their performance. Trainees were therefore anticipating detailed feedback, including which questions they had passed and failed, regardless of their overall achievement. However, trainees who passed RACE described being disappointed after receiving a generic pass result without further comment.

I passed the first time so all I found out was pass/fail … I mean I’m kind of curious to know which ones I failed and which ones I passed. (Trainee #14)
Although they had passed, some interviewees were concerned about examination questions on which they may not have met the passing standard. These interviewees perceived that feedback on any such questions was important to their future safety as a surgeon, by identifying gaps in knowledge and opportunities for improvement. Therefore, they believed that detailed, constructive feedback on failed questions should be provided to all candidates, not only to trainees who did not pass overall.

I would give feedback, even to people who passed. You could pass the exam and have failed both your glaucoma stations and not know that you are dangerously bad at glaucoma. (Trainee #4)
Inability to contextualize feedback
Inability to contextualize feedback was the second key theme, which centered around concerns that it was difficult to interpret and reflect upon the feedback provided in relation to examination performance. Subthemes were as follows: no access to examination manuscripts; not knowing the passing standard; and no access to examiners.
No access to examination manuscripts
Both trainees and supervisors expressed concern that written examination manuscripts were not retained by RANZCO and that clinical examinations were not audio or video recorded. Trainees therefore had no opportunity to review their examination performance in hindsight against the feedback provided. Trainees made the point that the stressful and hurried nature of the examinations, as well as the time delay between sitting and receiving feedback, meant they could not remember exactly what they had written or said in response to questions. This made it challenging to relate the brief, generic feedback received to their own examination answers. Furthermore, supervisors could not review examination answers and provide further individual feedback on why trainees may have failed and how they could improve.

It's such a high-speed exercise that you type like crazy … and I certainly can't remember what I wrote three months later, having studied for another part of the exam. There's no way coming out of either of those exams that I could guess which questions I'd passed and which ones I'd failed. … It just makes it really, really hard to go back and try and learn for the next time around when you don't know what you're aiming for and not to be ever given an exam transcript to see what I did right. (Trainee #15)
Interviewees therefore expressed the belief that providing copies of marked manuscripts was an important step in improving the ability to contextualize feedback and build performance capacity in subsequent RACE sittings. Trainees acknowledged that it may not be practical to release examinations to them directly, so they suggested that DOTs might have access and could act as an intermediary to unpack examination answers that did not meet the passing grade. Trainees also suggested that the college could consider video recording the clinical examinations, which would allow trainees to reflect on performance issues.

If you're interested about educating candidates and providing standard for that, then why not give the papers and answers to the [Director of Training] and let them go through it and then provide your feedback and say ‘this is why you got that wrong’, rather than ‘I don't know why you got it wrong either’. (Trainee #19)

If they video record the clinical sessions, then they can sit down and watch where they went wrong, or see what they're like on the other side, because we have our own idea of how we did, but actually, our memory might be totally different. (Trainee #12)
Not knowing the passing standard
Interviewees described how examination feedback did not provide information on the passing standard for questions. Trainees and supervisors highlighted that they were not privy to the marking criteria or rubrics used to allocate marks, especially within the written examination, to determine if their answers met a satisfactory standard.

Another thing people have said [is problematic] is the transparency of the thing, having clear marking guidelines and having maybe a bit more transparency over exactly what they are looking for and what the marking criteria are. (Trainee #1)

Some people fail, even though they might have been able to convey that what they were doing was right and safe … they didn't get it in the right order, or they didn't have enough key words. (Supervisor #6)
Interviewees also shared that model answers were not provided as part of the feedback from examiners. Trainees were aware that these had previously formed part of feedback to examination candidates and felt that they were necessary to understand the passing standard.

Ten years ago, they used to give up model answers … which I think would be useful … basically, a bit more detailed feedback rather than a couple of lines of what they're looking for. (Trainee #5)
The absence of marking rubrics and model answers ultimately undermined the ability of trainees and their supervisors to reflect on the feedback provided and the adequacy of answers. Trainees who had failed by a small margin felt that this was important in psychologically processing their failure, especially in identifying whether their answers had achieved some marks, and therefore how to improve their approach to answering questions on subsequent sittings.

You can kind of guess what a pass/fail answer would be and whether you included some of those based on the feedback, but you wouldn't be able to know whether you're a clear pass or a borderline. (Trainee #18)
Trainees recognized there was wide variation in contemporary knowledge and experience of RACE among supervisors and other experienced ophthalmologists, which influenced advice given on examination performance. In some cases, trainees described that supervisors did little but affirm that their answers were appropriate and that they should have met a passing grade. In others, trainees recognized differing opinions among supervisors as to how questions should have been answered. This left trainees confused as to how to revise their answers on subsequent sittings. Supervisors affirmed that clearer guidance from the examiners about model answers and passing standards was needed to better support trainees.

It's very subjective depending on who you ask. You could ask someone and they would say, ‘that's totally fine.’ Then the other person would say, ‘no, that's wrong’. And then sometimes your question is, ‘who's right?’ (Trainee #11)

If I had one thing that I could do to change the validity of the RACE exam, it would be to say that you need clear, transparent, open standards for the written examination. (Supervisor #5)

I think what the official answer would be from my reading and research and other experience, this is what they would be expecting. But the question is, ‘who's they’ and ‘what are they expecting’ … and so that comes back to the issue around how black and white it should be, or is it about what's the minimum standard in order to pass? (Supervisor #7)
No access to examiners
Given the limited feedback provided and, at times, conflicting advice from supervisors, trainees described being motivated to seek further information regarding their examination performance. Trainees indicated that this would ideally be from the examiners themselves who set and marked the exams. However, trainees described having signed a contract prior to sitting the exam that specified they would not attempt to contact examiners to discuss their examination results.

They make you sign a document before you sit the exam saying you won't appeal, saying you won't question, saying you won't approach the examiners. (Trainee #12)
Some trainees expressed dissatisfaction with this arrangement and argued that given all trainees are adult learners, combined with the high-stakes nature of RACE, this warranted the opportunity to engage with examiners. Trainees were aware that discussion with examiners was permitted historically and felt that this should resume as a learning opportunity and to support achievement on subsequent RACE attempts.

I’ve heard from those who went through the era where they had feedback was that [the examiners] got sick of people arguing why their answer should be right … so [the examiners] decided that they are not going to release your paper and their decision is final… I don't think that's the heart and soul of what a teaching institution should be about. (Trainee #19)
Discussion
The findings of this research are broadly applicable to international medical specialist training colleges, although some colleges may have comprehensive evidence-based feedback systems already in place. In this study, trainees and supervisors were critical of what they considered to be inadequate feedback provided following RACE, with trainees perceiving that they could not improve their knowledge and skills in specific curriculum areas unless they received tailored and specific guidance.
It is important to note that the perspectives of supervisors and trainees in this study were largely cohesive, which contrasts with the findings of Sender et al, 27 Perera et al, 28 and Yarris et al. 29 This stems from RANZCO supervisors being allied with the experiences of the learner, rather than the teacher, as they are separated from RACE procedures and marking.
The findings of this research predominantly align with Hattie and Timperley's 30 description of task-level feedback, consisting of the provision of structured feedback proformas, model answers and marking rubrics. Process-level feedback could be included within these components by providing trainees with additional information about strategies to detect errors in their approach to answering examination questions and how to apply corrective mechanisms. As ophthalmology trainees are high achievers whose academic track-records demonstrate excellent self-management in learning, self-regulation feedback is not generally appropriate. An exception to this may be in the case of trainees who fail. For this group, meetings with examiners may be appropriate, and discussions may provide substrate for considering self-regulation level feedback.
The desire and need for personalized feedback following summative examinations are not unique to RACE.14,15 To address the common issue of brief and generic feedback following high-stakes summative specialist medical examinations, the Academy of Medical Royal Colleges in the United Kingdom recommends that feedback includes an explanatory breakdown of results on each domain of an examination, rather than the examination as a whole. 12 One approach to improve the quantity and quality of feedback is to introduce structured examination feedback proformas.31,32 Structured feedback proformas ensure consistency and comprehensiveness of feedback, although it is important to retain free-text comment sections to allow examiners to elaborate on and individualize their comments to candidates. 33
While RANZCO provides individual feedback to RACE candidates who fail, at the time of our research, trainees who passed RACE received notification of their pass result and access to the Examiners’ report only. The Academy of Medical Royal Colleges advises that all specialist examining boards in the United Kingdom implement a feedback policy for high-stakes summative examinations 12 and that similar feedback should be provided to candidates who pass and those who fail. There is no current equivalent policy recommendation in Australia, as evidenced by a recent commentary authored by General Practice Registrars who voiced concerns about the lack of feedback following high-stakes summative Royal Australian College of General Practitioners examinations. 34 In our research, trainees and supervisors considered the lack of feedback to all candidates, regardless of examination result, as a missed opportunity for learning. Furthermore, universal feedback was considered a useful mechanism to ensure safe practice in the future, by allowing trainees to pursue self-directed learning on topics from written questions or clinical examination stations that they knew they had failed.
Compounding the issue of scant feedback, trainees and supervisors were concerned that it was challenging to contextualize feedback due to an inability to access examination transcripts. It was suggested that returning marked written examinations, and providing video recordings of clinical examinations, would help candidates and their supervisors better understand the reasons for failure and to identify learning required for success at the next attempt. Documented feedback is essential as research has found that verbal feedback is poorly or inaccurately recalled 1 month after examination among internal medicine trainees. 35 While video feedback has advantages of comprehensiveness, addressing communication issues, elaborating on clinical reasoning and noting professionalism issues, 36 it is time-consuming for examiners and has the risk of facilitating unreasonable litigation. For these reasons, video feedback consisting of model answers to clinical examination stations may be preferable.
Trainees and supervisors also expressed frustration at the absence of marking rubrics and model answers. This created distinct challenges in determining how questions should have been answered, especially for supervisors who varied in contemporary RACE knowledge and experience. Other research, in a study among medical students, has described marking rubrics as the most valuable aspect of feedback. 37 Marking rubrics are routinely provided to undergraduate and postgraduate medical students, so all specialty trainees are likely to be experienced in using marking rubrics to guide preparation and performance and to expect them for summative examinations. Indeed, general practice trainees in Australia have called for access to examination papers and marking rubrics as necessary components of standardized examination feedback. 34 Given the personal, social and professional consequences of failing high-stakes summative examinations, 25 colleges could consider disseminating marking rubrics prior to examinations to promote targeted preparation, transparency and fairness. Past examinations with model answers would also help candidates better prepare by clarifying both the scope and depth of information to provide when answering questions.
Trainees who had failed RACE expressed concerns that examiners were inaccessible and that there were no avenues of appeal. Although it had historically been permitted, at the time of this study there was no pathway for RACE candidates to access the Examination Board for further feedback after receiving their results. Some trainees suggested that access to examiners should be facilitated to discuss the reasons for failure and to seek advice on addressing performance deficits. Meetings between trainees and examiners may provide an important opportunity to identify errors and re-strategize their approach to answering RACE questions. Such meetings may also provide a vehicle for self-regulation feedback by delivering information that prompts self-appraisal decisions about readiness to resit. Colleges may consider revising examination policies to formalize trainee access to Examination Boards, with policy documents specifying how to request access, the expected timeframe for meetings after examination results are released, the structure of the further feedback provided and the documentation given to trainees. However, care should be taken in drafting such policies, as it is not appropriate to provide an avenue for trainees to challenge their results. If access to examiners is granted by policy, trainees would benefit from being accompanied by their DOT so that all supervisors involved in their learning can be made aware of specific learning or performance deficits to be addressed prior to resitting. An alternative approach could be for DOTs to act as intermediaries, meeting with the Examination Board on the trainee's behalf and then with the trainee to provide specific feedback on examination performance.
Some situations present a challenge for the provision and utilization of feedback after summative examinations. Harrison et al 38 observed a powerful culture within medicine dominated by a fear of failure. This culture emphasizes avoidance of examination failure to such an extent that there is a disconnect between examination performance and learning to enhance future clinical practice. This was not observed among our participants, with feedback universally welcomed in the case of RACE failure. However, it is acknowledged that providing detailed and individualized feedback is time-consuming and labor-intensive, which presents a particular challenge for specialist colleges with large cohorts of trainees. There are also concerns that providing marked transcripts of examination papers to candidates restricts the reuse of questions in subsequent examinations. 39 To address this, specialist colleges require large question banks, which may not be achievable in the short term.
Psychological and performance considerations can also present barriers to the utilization of feedback. Trainees have been described as unreceptive to summative examination feedback, with those who pass being the least engaged in the feedback process. 40 To maximize the learning and self-efficacy impact of feedback, it needs to be positively framed (defined by van de Ridder et al 41 as referring “to the packaging of the message” independent of content) and constructed in a way that more effectively engages those students who need the most help. 40
There are limitations to this study. Importantly, it is specific to the examination processes of the Australian and New Zealand ophthalmology training program, so its findings may not be generalizable to other countries. Despite this limitation, the findings have value in describing the challenges that trainees experience within some training programs. In addition, the interview data were derived from motivated respondents who self-selected to participate, and examination results may have been a primary motivator of participation. However, the proportion of participating trainees who had failed RACE on at least one attempt was equivalent to the average proportion of trainees who failed RACE in recent years. For some trainees there was a lag between sitting RACE and participating in interviews, which may have introduced recall bias. Given the small number of participants, the experiences described cannot claim to be representative of all trainees and supervisors, and increased participation may have provided further insight into experiences of feedback not identified in this research. Ensuring that all motivated participants were included, and that participants were representative of all training networks across Australia and New Zealand at the time, helped to address this potential limitation.
Conclusion
Ophthalmology trainees in Australia and New Zealand receive limited feedback following their final summative examination, and this feedback can be challenging to contextualize and apply for targeted improvement when resitting. The findings emphasize the need for improved task-level and process-level feedback. Detailed, individualized feedback on examination performance is needed for all trainees, but especially for those who failed and their supervisors, to identify gaps in knowledge and prepare for subsequent examination attempts. Opportunities to view examination manuscripts, marking criteria, rubrics and model answers, as well as to seek feedback from examiners, would improve feedback following RACE. These strategies could also apply to other specialist colleges, serving to reinforce standards of performance and improve the quality and safety of medical specialists trained across Australia and internationally.
Supplemental Material
Supplemental material, sj-docx-1-mde-10.1177_23821205241286288 for “Well I Failed, but I Have No Idea Why”…: Experiences of Feedback After High-Stakes Summative Specialist Medical Examination in Ophthalmology by Belinda Jessup, Penny Allen, Melissa Kirschbaum, Santosh Khanal, Victoria Baker-Smith, Barnabas Graham and Tony Barnett in Journal of Medical Education and Curricular Development
Supplemental material, sj-pdf-2-mde-10.1177_23821205241286288 for “Well I Failed, but I Have No Idea Why”…: Experiences of Feedback After High-Stakes Summative Specialist Medical Examination in Ophthalmology by Belinda Jessup, Penny Allen, Melissa Kirschbaum, Santosh Khanal, Victoria Baker-Smith, Barnabas Graham and Tony Barnett in Journal of Medical Education and Curricular Development
Acknowledgments
The authors are grateful to the trainees and fellows who kindly gave their time and shared their experiences and insights, which enabled us to conduct this research.
Author Contributions
B.J. contributed to study design, was responsible for data collection and data analysis, and contributed to manuscript writing and editing. P.A. contributed to study design, data analysis, and manuscript writing and editing. M.K. was responsible for data analysis and contributed to manuscript writing and editing. S.K. conceptualized the study and contributed to study design and manuscript editing. V.B.-S. contributed to study design and manuscript editing. B.G. contributed to study design and manuscript editing. T.B. conceptualized the study and contributed to study design, data interpretation and manuscript editing.
DECLARATION OF CONFLICTING INTERESTS
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: The University of Tasmania-based investigators have no real, perceived or potential conflicts of interest to declare. S.K., V.B.-S. and B.G. are employees of RANZCO and manage the RANZCO Training Program and RACE Examinations. These researchers were not involved in the recruitment of participants, data collection (interviews), data analysis, or writing the original draft of the research findings.
FUNDING
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by RANZCO.
Ethics and consent
Ethics approval was provided by the University of Tasmania Health and Medical Human Research Ethics Committee (Project ID: 25018) and the RANZCO Human Research and Ethics Committee (Reference: 129.21). All respondents provided verbal informed consent to participate prior to interview.
