Abstract
Background
In 2013, the Accreditation Council for Graduate Medical Education (ACGME) launched the Next Accreditation System, which required explicit documentation of trainee competence in six domains. To capture narrative comments, the University of North Carolina Family Medicine Residency Program developed a mobile application for documenting real-time observations.
Objective
The objective of this work was to assess whether the Reporter, Interpreter, Manager, Expert (RIME) framework could be applied to narrative comments to convey a degree of competency.
Methods
From August to December 2020, seven individuals analyzed the narrative comments of four family medicine residents. The narrative comments were collected from July to December 2019. Each individual applied the RIME framework to the comments, and the team then met to discuss the results. Comments on which at least 5 of the 7 individuals agreed were not discussed further. All other comments were discussed until consensus was achieved.
Results
A total of 102 unique comments were assessed. Of those, 25 (25.5%) met the threshold for assessor agreement after independent review. Group discussion of discrepancies led to consensus on the appropriate classification for 92 comments (90.2%). General comments on performance were difficult to fit into the RIME framework.
Conclusions
Application of the RIME framework to narrative comments may add insight into trainee progress. Further faculty development is needed to ensure comments have discrete elements needed to apply the RIME framework and contribute to overall evaluation of competence.
Keywords
Introduction
Competence is defined as multi-dimensional and dynamic, changing with time and linked to experience and setting.1 The Accreditation Council for Graduate Medical Education (ACGME) defined six domains of competence expected of every resident.2,3 Programmes individually developed methods to gather assessments of trainees’ progress to guide promotion decisions.
Medical education evaluations often rely on rating scales defining trainee performance.4,5 Numerical ratings create a ranking system that can be used to benchmark trainee progress. This reductionist approach has come under scrutiny due to rating inflation6 as well as poor correlations with narrative comments.7,8
Where competency-based performance is concerned, evaluations dependent on numerical systems fall short in capturing and evaluating progress in complex tasks and roles.9 Questions have arisen about the validity and reliability of numeric ratings and scoring systems2 and whether the qualities and capabilities essential for good performance post-graduation are assessable using only grades.10 Narrative-based evaluations of clinical performance provide context to the numerical ratings.11
With accreditation systems increasingly requiring programmes to document progress,3 reliable systems for evaluation are needed more than ever. Entrustable professional activities (EPAs) have been advocated as a more advanced way of evaluating competence,12 but “entrustment” still elicits confusion among clinician educators.13 The Reporter-Interpreter-Manager-Educator (RIME) framework14,15 is a developmental model for assessing trainees in clinical settings.14 RIME suggests trainees progress through four stages, each requiring more complex application of the skills attained at the previous level. This model offers descriptive nomenclature readily understood and accepted by trainees and preceptors.16 Ryan and colleagues reported on the reliability of the RIME framework when used as a numeric rating with medical students.17 Narrative comments modelling this framework can provide a richness of detail about progression that numerical scales cannot.18
In response to the ACGME and to better document narrative-based descriptions of learners, the University of North Carolina Family Medicine Residency Program developed the Mobile Medical Milestones application (M3App©),19 allowing faculty to document real-time direct observations and provide formative feedback to residents (Figure 1). We sought to apply a process developed by Hanson et al7 to evaluate narrative comments from the M3App© using the RIME framework to determine developmental progress from the feedback. Specific questions explored in this study were:
How accurately can independent reviewers assign RIME categories to narrative comments? What challenges emerge when applying the RIME model to narrative comments?

Figure 1. M3App© feedback process. When completing feedback on a resident, faculty have the opportunity to enter a comment (A). They are then asked to identify a broad competency (B), followed by a detailed competency (C). Multiple competencies can therefore be selected for a single narrative comment. The figure is an adaptation of the M3App©.
Methods
Narrative comments for four family medicine residents (two PGY-1 and two PGY-2) were chosen for inclusion in this exploratory study. The narrative comments from July to December 2019 were obtained and de-identified to blind the researchers to the identities of the residents and the evaluators (Figure 2). Because the M3App© allows preceptors to choose which ACGME Milestones a comment relates to, duplicate comments appeared in the download for each resident; each duplicate was coded only once. This study was reviewed and approved by the university institutional review board.

Figure 2. Example output of resident performance from the M3App©. Faculty and resident names were removed to ensure anonymity.
From August to December 2020, we analyzed narrative comments from the M3App©. Our team consisted of a medical education researcher, a family medicine physician, an internal medicine intern, and four senior medical students.
Prior to the narrative analysis, background material about the RIME framework14 was discussed to ensure all members of the team understood each classification. Examples of comments were presented so the members of the team had a shared mental model.
All narrative comments were independently coded deductively using the RIME framework. If a comment appeared to fit more than one category, multiple RIME categories were selected. If a comment was unclear or simply a compliment, it was categorized as not applicable.
The research team met to discuss the individual coding results. Narrative comments on which at least 5 of the 7 coders agreed were not discussed further. All other comments were discussed until consensus was achieved.
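As an illustration only (not part of the study's actual workflow), the 5-of-7 agreement rule described above can be sketched as a small screening function: a comment is flagged for group discussion unless its most common code reaches the threshold. The code names and the `meets_threshold` helper are hypothetical.

```python
from collections import Counter

def meets_threshold(codes, threshold=5):
    """Return True if at least `threshold` raters assigned the same code.

    `codes` is the list of RIME codes (e.g. "R", "I", "M", "E", "NA")
    assigned to one comment by the independent raters.
    """
    if not codes:
        return False
    # Count how often the single most frequent code was chosen.
    _, top_count = Counter(codes).most_common(1)[0]
    return top_count >= threshold

# Hypothetical examples with 7 raters per comment:
meets_threshold(["R", "R", "R", "R", "R", "I", "NA"])   # 5 agree -> no discussion needed
meets_threshold(["R", "R", "I", "I", "M", "E", "NA"])   # max agreement is 2 -> discuss
```

Comments failing this screen would then go to the consensus discussion described above.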
Results
For the four residents, 221 narratives were obtained. After removing duplicates, 102 unique narrative comments remained. For the first research question, rater agreement was analyzed. Only 25 (25.5%) comments met our threshold for assessor agreement. Inter-rater reliability for the independent review yielded a Cronbach's alpha of .427. After discussion, consensus among assessors was achieved for 92 (90.2%) of the comments (Table 1).
Narrative comment rater agreement.
For the second research question, reviewers debriefed about the process and the challenges faced when coding comments. Comments that were vague, using adjectives such as “great” or “excellent” to describe an action without providing more specific feedback, were difficult to assess and fit into the RIME framework. Examples included “Great presentation with fellows” and “from MICU, one attending called to praise his excellent care.” These items were considered compliments and rated as not applicable.
Comments on technical skills often described the particular skill in a matter-of-fact manner. For example, “…RESIDENT performed 3 excisional biopsies today. He demonstrated good technique and appropriate caution. We worked on refining his technique for buried sutures…” This made it impossible to classify the procedural skill using RIME.
Discussion
Narrative comments on resident performance facilitate assessment of competence. This study, however, demonstrated that it is difficult to assign RIME categories by independently reading narrative feedback, primarily because many narratives lacked specificity. Pangaro and ten Cate indicated that comments need to be clear to communicate progress, a quality many of our narratives lacked.20
Based on our study, there remains a need for faculty development related to narrative comments.7,11 The RIME framework offers a vocabulary readily understood by clinician educators.20 Training faculty to write narratives with the RIME framework in mind would also help evaluators; in doing so, faculty could offer suggestions for how a trainee might progress to the next level. During our consensus process, it also became evident that contextual features, such as the clinical setting, add clarity to the narrative.
The authors intend to repeat this process after faculty development on writing specific, actionable feedback that includes more contextual information. Establishing a shared mental model of trainee expectations to improve feedback supports the application of a framework like RIME that reflects the work and skill of a physician. Additionally, a more in-depth analysis linking RIME classifications to the competency ratings will be conducted to determine whether the narrative comments are congruent.
Conclusion
Narrative comments reveal strengths and weaknesses of trainees, information that is difficult to obtain from a single summative score. Applying a framework such as RIME to narrative comments can offer insight into trainee progress toward independent practice, allowing for meaningful feedback for trainees. As a next step, faculty development regarding how comments are written would help ensure that the RIME framework can be applied and competence further determined.
Footnotes
Acknowledgements
The authors would like to thank Dr Janice Hanson from Washington University School of Medicine, St. Louis, Missouri, for her consultation about the process of assigning the RIME framework to narrative comments. UNC School of Medicine is a member of the American Medical Association (AMA) Accelerating Change in Medical Education consortium, and support for this project was provided by the Reimagining Residency Initiative.
Author Contributions
Each of the authors contributed to the conception of this project, participated in data analysis, and was integral to writing the manuscript.
Statements and Declarations
The authors have no conflicts of interest with this work.
Ethical Approval
Not applicable, because this article does not contain any studies with human or animal subjects.
Informed Consent
Not applicable, because this article does not contain any studies with human or animal subjects.
Trial Registration
Not applicable, because this article does not contain any clinical trials.
