Abstract
Traditionally, undergraduate medical education (UME) grading has been based on a tiered system. Tier-based grading can cause anxiety as medical students are compared to their peers. Students then become overly driven by the pursuit of creating favorable impressions on supervisors as well as by high grades. Additionally, the emphasis on normative parameters appears to misalign with the goal of UME, which is not to sort learners into different residency programs but to train future doctors to meet the needs of society. This commentary is a call to action to shift from a normative-based grading paradigm in UME to one in which learners are assessed on their ability to attain specific competencies. It is important that UME transition to competency-based assessment, as the graduate medical education (GME) realm has already adopted this framework.
Traditionally, undergraduate medical education (UME) grading has been based on a tiered system, one in which a predetermined percentage of students receives a particular grade (eg, the top 40% receive a grade of Honors, the next 40% receive High Pass, and the remaining 20% receive Pass). However, this grading scheme leads students to perceive the learning environment as stressful. Accreditation bodies have acknowledged the impact that the learning environment has on learner well-being and performance, and the Liaison Committee on Medical Education requires medical schools to regularly monitor the learning environment. 1 Tier-based grading can also cause anxiety as medical students are compared to the peers with whom they rotate during a clerkship or within a defined academic period. Students then become overly driven by the pursuit of creating favorable impressions on supervisors as well as by high grades. 2 Prior to the elimination of numerical scoring for the United States Medical Licensing Examination (USMLE) Step 1, USMLE Step 1 scores were routinely used as a screening parameter in the residency interview selection process. The use of USMLE Step 1 scores as a criterion for interview selection proved problematic, as scores that differed by up to 20 points were eventually recognized as not being significantly different, given that the standard error of measurement and the standard error of difference were 6 and 9 points, respectively. 3 This detracted from evaluating residency candidates holistically. Additionally, the emphasis on USMLE Step 1 scores misaligned with the goal of UME, which is not to sort learners into different residency programs but to train future doctors to meet the needs of society. Normative parameters are useful in assessing one's knowledge base but not in evaluating clinical performance. 4 Currently, learners and institutions are informed only of whether learners passed or failed Step 1.
This commentary is a call to action to shift from a normative-based grading paradigm in UME to one in which learners are assessed on their ability to attain specific competencies. It is critical that UME transition to competency-based assessment, as the graduate medical education (GME) realm has already adopted this framework.
GME leaders recognized that 4 steps needed to be implemented for success: (1) establishment of competencies, (2) determination of criteria for meeting expected skill levels within each competency, (3) development of assessment tools (such as 360-degree evaluations and direct observations) to evaluate the learner and assign a skill level, and (4) creation of a means to evaluate the effect of a GME competency-based assessment model on long-term learner performance and patient outcomes. 5
In 1999, the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties (ABMS) created the Outcomes Project, which established the requisite competencies, encompassing knowledge, skills, and attitudes, that would prepare a physician for independent practice. 6 In 2013, the ACGME then launched the Milestones Project, in which each specialty developed performance levels for each competency. This initiative allowed for transparency of expectations, improvement in self-directed assessment, and more constructive feedback for professional development. In addition, the Milestones Project allowed for monitoring of learner progression over the course of residency education. However, there remains a dearth of standardized assessments with which to assign milestones. Moreover, there are insufficient data to determine the widespread impact of its utilization, as there remains a need to create consistency among Clinical Competency Committee practices across residency programs nationwide. 7
In light of this shift in GME, initiatives were constructed to move UME toward competency-based assessment as well. In line with this endeavor, in 2013, the Association of American Medical Colleges (AAMC) constructed 58 competencies in 8 domains (patient care, knowledge for practice, practice-based learning and improvement, interpersonal and communication skills, professionalism, systems-based practice, interprofessional collaboration, and personal and professional development). 8 From these competencies, the AAMC created a list of 13 core entrustable professional activities (EPAs) and 2 levels of performance for each: “pre-entrustable” and “entrustable,” corresponding to the novice learner and the learner capable of performing the EPA independently, respectively. 9 Unfortunately, there continue to be barriers to embracing a competency-based assessment system in UME.
One barrier is the ongoing reliance on normative-based assessments, which is steeped in tradition, as elementary, middle, and high schools continue to use them. Although tier-based grading may be associated with unhealthy competition among learners, learners also fear that they will not be able to set themselves apart from their peers without numerical grades. They also fear that a competency-based scheme may impact their ability to be selected by the residency program of their choice. In addition, faculty members fear systemic consequences involving the residency match and the effect on the reputation of their institutions. 10
Another barrier is the lack of widely available assessments to gauge learner competency level. Most medical schools rely on proxy assessments, such as performance on multiple-choice examinations and oral and written presentations. They also rely on global rating forms, which are prone to significant subjectivity and may be influenced by unconscious biases. 11 As in GME, 360-degree evaluations and direct observations could be used in UME.
A third barrier involves misconceptions about the purpose of competency-based assessment. Competency-based assessment is a formative tool, not a summative one. However, in one study that sought to operationalize a workplace-based assessment to measure performance of core EPAs in the pediatrics clerkship, students appeared to self-select completion of particular EPAs that they thought would influence their summative evaluation. 12 They were less inclined to seek out feedback on EPAs on which they likely needed to improve. After all, summative evaluations comprise the majority of the Medical Student Performance Evaluation and play a significant role in matching into a residency program.
A fourth barrier is creating a harmonious UME competency committee given varying opinions, leading to multiple tensions. These include (1) determining whether members should be a combination of faculty and students who are knowledgeable about the curriculum or faculty members who are not but who could potentially be more objective, (2) which assessments to use, (3) whether the focus should be on careful review of struggling learners or on systematic review of all learners, and (4) which learners should undergo committee review: preclerkship, clerkship, or both. 13
Conclusion
In conclusion, our future physicians will be engaging in competency-based assessment upon entering GME. It is the responsibility of those in UME to prepare them. We need to ensure that our learners are meeting the competencies required to become strong physicians. Although undergraduate learners have been accustomed to normative-based grading schemes, normative parameters do not fully reflect clinical performance: they are useful in demonstrating fund of knowledge but do not assess skills and attitudes. Enacting a competency-based model will require the creation of assessments that include direct observation, as well as faculty development in providing feedback focused on competency attainment. This change in assessment model will require ongoing monitoring to ensure positive outcomes for learners and patients, but our undergraduate learners are ready to embark on this new model of grading.
Footnotes
FUNDING
The author(s) received no financial support for the research, authorship, and/or publication of this article.
DECLARATION OF CONFLICTING INTERESTS
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
