Abstract
Objectives:
This research examined the effects of combining multiple competency-based methods on clinical and didactic scores in an adult cardiac Commission on Accreditation of Allied Health Education Programs (CAAHEP) accredited diagnostic medical sonography program.
Materials and Methods:
A quasi-experimental nonequivalent group research design was used to evaluate students enrolled in an adult cardiac CAAHEP accredited echocardiography curriculum. Their cumulative final examination and clinical competency evaluation scores from multiple cohorts (pre- and post-intervention) were used for statistical analyses.
Results:
There were no significant differences in didactic or clinical scores following the implementation of multiple competency-based assessments; however, the analysis unveiled reasons that coincide with the literature, such as nonvalid and subjective clinical assessment.
Conclusion:
These results suggest further evaluation of the credentialing process to ensure clinical competency.
Diagnostic medical sonography (DMS) is a unique field compared to those of other allied health professionals, such as radiologic technologists, radiographers, nurses, and physician assistants, as only four states require sonographers to be licensed; all other states do not require credentialing to practice.1,2 Licensure is a regulatory requirement to practice in a particular state, whereas certification is voluntary and meant to denote recognition of clinical excellence.3 In essence, sonography lacks standardization compared to other allied health fields, in which licensure is mandatory to practice.
Being a credentialed health care provider distinguishes an individual as an expert who possesses fundamental knowledge representing the highest standard in his or her field.3,4 Technically, an individual can bypass attending a sonography program or acquiring a credential and still practice sonography. However, credentialing is recommended: an individual likely will not become employed without being credentialed, and labs that employ noncredentialed individuals will not receive reimbursement from particular health insurance companies.5–8 Sonography credentialing bolsters the field and patient care.4 This work focused on the American Registry of Diagnostic Medical Sonography (ARDMS)9 as the credentialing body of sonography, which before 2020 had nine different mechanisms to become credentialed but currently has only five pathways. The ARDMS’s9 most regulated pathway is prerequisite 2, which requires students to graduate from a Commission on Accreditation of Allied Health Education Programs (CAAHEP) accredited DMS program.
The combination of multiple prerequisite pathways and an increase in sonography programs was a response to a shortage of sonographers a decade ago.10,11 This anticipated need for sonography programs was focused on producing competent entry-level sonographers and funneling graduates toward certification. However, sonographer certification consists mostly of a computerized test in a traditional multiple-choice format. A scarcity of literature exists comparing the relationship between credentialing and clinical skills.12,13 One study 14 assumes that credentialing ensures clinical competency, yet these examinations do not directly measure clinical skills or include an in-person scanning assessment.12,13,15 The prerequisites for a credentialing examination are designed to confirm clinical competence and “may support the long-standing public and professional assumption that sonographer credentialing ensures clinical competency.”12(p1) Unfortunately, the prerequisites can be subjective and contain nonvalid assessments at clinical sites.16
Few evidence-based programs, grounded in educational theory, support students’ needs within sonography programs,17–19 and an even smaller number of articles discuss CAAHEP DMS curricula. Mukhalalati and Taylor 20 discussed learning theories, such as humanism, behaviorism, cognitivism, constructivism, and experiential, social, and motivational learning, relating them to health care training, with the exception of sonography. More importantly, there is a lack of appropriate leadership within sonography for true competency and standardization. For the last two decades, published articles on the need for sonographers to be registered, to achieve competency, and to obtain standardized education have posed objectives, but few have come to fruition.15,16,21,22
The ARDMS’s credentials are designed to enhance worldwide sonographic practice and to encourage and improve patient safety.23 Nevertheless, sonographer credentialing is not universally mandatory, and the screening of clinical prerequisites is likely subjective.9,16,24 Although some lab accreditation bodies have changed their requirements, mandating that all sonographers become credentialed,7,8,22,25 this is irrelevant for labs not seeking accreditation. These inconsistencies reinforce how vital it is to further demonstrate the difference between clinical and traditional didactic assessments to ensure competency in the sonography field. It may be necessary to either change the credentialing process or, at minimum, mandate credentialing for sonographers. Nayer 26 states that an objective structured clinical examination (OSCE) is a credible method to assess clinical competence but may not, on its own, be reliable enough for licensure. However, combining multiple-choice examinations and OSCEs could achieve reliable levels.26 Other studies have shown that competency-based assessments, such as OSCEs, assess different skills than traditional assessments do.27–29 These studies could have prompted credentialing organizations to incorporate a competency component instead of relying on clinical site preceptors, some of whom may not have adequate educational training and knowledge to assess clinical students.30 One study suggests that inconsistent use of learning theories among health care educators leads to varied outcomes.20
The following research questions were posed as the basis for this educational research endeavor:
1. Is there a relationship between the implementation of multiple competency-based techniques in a CAAHEP accredited adult echocardiographic program and students’ ability to successfully complete summative assessments?
2. Is there a difference between the implementation of multiple competency-based techniques in a CAAHEP accredited adult echocardiographic program and students’ clinical competency evaluation scores from clinical preceptors?
3. What are the relationships between traditional multiple-choice format cumulative examinations and competency scores, such as clinical evaluation scores and OSCEs, within a CAAHEP accredited adult echocardiographic program?
The primary objective of this study was to incorporate competency-based assessments (e.g., OSCEs, simulation, and formative and summative assessments) into a CAAHEP accredited adult echocardiographic program and determine their effect on a traditional multiple-choice assessment, such as a cumulative final examination. The assessments sought to determine whether students’ abilities improved within one or all three of CAAHEP’s learning domains (cognitive, psychomotor, and affective). The goal of the study was to assess whether the combination of clinical-like skills translated to cognitive skills in the cardiac specialty. Studies have determined that clinical competence evaluates different components than didactic assessment does. However, those studies assessed either a separate specialty (e.g., vascular sonography) or a different health care profession (e.g., resident physicians).16,30–33 Sonographers need fundamental knowledge and must translate it into clinical competence, which can be learned and applied through adult learning theories. The latter is exceptionally relevant to sonographers’ scope of practice and patient care.
Materials and Methods
The quasi-experimental nonequivalent group research design was used to evaluate students enrolled in a CAAHEP accredited adult echocardiographic curriculum. Their cumulative final examination and clinical competency evaluation scores from multiple cohorts (pre- and post-intervention) were used for statistical analyses. The control group and experimental group consisted of 43 and 39 students, respectively. The dependent variables for Research Questions 1 and 3 consisted of continuous data and were the sonography courses’ cumulative final examination scores. An independent t-test compared the experimental versus the control groups’ means on the final examination. The dependent variable of Research Question 2 was the clinical evaluation score, which was also continuous data. An independent t-test compared pre- and post-intervention means of students’ clinical evaluations. The study’s independent variables were the combined multiple competency-based assessments (cardiac OSCE, cardiac simulation assignment, and formative assessments). The cardiac OSCE, cardiac simulation, and formative assessments’ data types were continuous, nominal, and nominal, respectively. A priori, statistical significance was set at a P value of < .05.
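As a minimal illustration of this primary analysis, the sketch below runs an independent two-sample t-test in Python with SciPy; the study itself used SPSS, and the score arrays here are hypothetical placeholders rather than study data.

```python
# Minimal sketch of the study's primary analysis: an independent
# two-sample t-test comparing cumulative final examination means.
# The study used SPSS; SciPy is substituted here, and the score
# arrays are hypothetical placeholders, not the study's data.
from scipy import stats

control_scores = [78.0, 81.5, 84.0, 79.5, 82.0, 80.0]       # pre-intervention (hypothetical)
experimental_scores = [83.0, 85.5, 80.0, 86.0, 84.5, 82.5]  # post-intervention (hypothetical)

t_stat, p_value = stats.ttest_ind(control_scores, experimental_scores)

# A priori alpha of .05: P >= .05 means no statistically significant difference.
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
print("significant at P < .05" if p_value < 0.05 else "not significant at P < .05")
```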
The OSCE replaced the pre-intervention group’s comprehensive checklist, for which students had only to acquire images of a normal protocol. Those students were not timed and were able to view which images were needed. The OSCE, in contrast, was timed, and the protocol had to be memorized. The pre-intervention group had no exposure to simulation and completed four semi-formative assessments with minimal feedback, compared to the intervention group’s 16 formative assessments with extensive feedback. The dependent variables (i.e., final examinations and clinical evaluation scores) were analyzed between groups to see whether the combined independent variables had any effect on them. The primary dependent variable was the summative final examination, composed of multiple-choice and hot spot questions. An OSCE rubric created by the researcher was used to determine whether a student passed their competency. The incorporation of multiple formative scan lab assignments and simulation projects was designed to help students prepare for their OSCE.
Cardiac OSCE Methodology
Sonography students in the intervention cohort participated in the cardiac OSCE. A Philips Affiniti X70 and a Samsung HS60 ultrasound machine were used. Students were accustomed to these machines, having used them throughout their training in the DMS program during their scheduled scan time and to complete formative assignments. Students had unlimited access to the machines throughout program enrollment. Students were notified 3 weeks in advance of which machine they were to use and their scheduled day to scan. Although standardized patients are typically used for OSCEs, they are costly.16,34 Baker et al 16 suggest using sonography students as OSCE models to mitigate cost, which was done here due to a limited DMS program budget. However, students were not notified of their scanning partner until they arrived to take their OSCE. The OSCE was a pass-fail examination: a passing score of 75% using the OSCE’s objectives and guidelines rubric was required.
Three experienced sonographers (6, 25, and 13 years of experience) individually proctored and assessed the students. On OSCE completion, all sonographic images were sent to the DMS program’s PACS. This allowed all three experienced sonographers to collaboratively determine whether a student passed the OSCE (i.e., assessed whether the submitted images were diagnostic and that of an entry-level cardiac sonographer using the rubric). Baker et al 16 used the National Education Curriculum (NEC) and Society of Diagnostic Medical Sonography’s (SDMS) Scope of Practice guidelines 35 to create their objectives and criteria, which have been updated since 2011. It is important to mention that unlike Baker et al, 16 who evaluated carotid and lower extremity venous examination protocol, this study evaluated a transthoracic echocardiography protocol. The OSCE’s objectives and guidelines rubric was created using the Joint Review Committee on Education in Diagnostic Medical Sonography’s (JRC-DMS) 36 NEC’s current Common and Cardiac Specialty Curricula and SDMS’s Scope of Practice and Clinical Standards for the Diagnostic Medical Sonographer 35 while incorporating which learning domain(s) correlate to each objective.
At the beginning of the echocardiography course, students were given all OSCE requirements: the OSCE objectives, required echocardiographic clips, images, measurements, and multiple examples of an Intersocietal Accreditation Commission (IAC) accredited lab’s protocol. However, students were not permitted to use these materials during the OSCE. Students had to acquire the majority of standard diagnostic quality images within a standard transthoracic protocol, geared to a specific random pathology, within 45 minutes. For the sake of time, the suprasternal notch and subcostal views were considered extra credit (i.e., 1 point) if appropriately acquired within the allotted time. However, if the suprasternal and/or subcostal view(s) would be necessary for a cardiologist to confirm a diagnosis in a real-world situation, they were required.
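As a hedged illustration of how such a pass-fail rubric could be scored, the sketch below assumes simple point totals with the stated 75% cutoff and 1-point extra-credit views; the point values are assumptions for illustration, not the program’s actual rubric.

```python
# Illustrative OSCE scoring sketch based on the description above:
# a percentage score from rubric points, a 75% pass-fail cutoff, and
# optional extra-credit views (suprasternal notch, subcostal) worth
# 1 point each. The point values are assumptions, not the program's
# actual rubric.
def score_osce(points_earned: float, points_possible: float,
               extra_credit_views: int = 0) -> tuple[float, bool]:
    """Return (percentage score, pass/fail) with extra credit applied."""
    pct = (points_earned + extra_credit_views) / points_possible * 100
    return pct, pct >= 75.0

pct, passed = score_osce(points_earned=30, points_possible=40, extra_credit_views=1)
print(f"{pct:.1f}% -> {'pass' if passed else 'fail'}")  # 77.5% -> pass
```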
At the onset of the OSCE, each student had to randomly choose one of the American Society of Echocardiography’s (ASE) nine appropriate use pathologies (irrespective of the appropriateness).37 Students then had to provide a sonographic assessment. For example, if a student selected pulmonary hypertension, the student had to image, measure, and assess the inferior vena cava via the subcostal window to calculate right atrial pressure (RAP) to quantify the pulmonary artery systolic pressure (PASP). In addition, students had to denote all standard required images used to assess an indication or pathology with an “x” at the bottom of the screen (e.g., marking an “x” on all continuous-wave spectral images of the tricuspid valve needed to quantify PASP). The cardiac OSCE incorporated the psychomotor, cognitive, and affective learning domains. The incorporation of formative assessments was a behaviorist and experiential learning approach to meet CAAHEP’s psychomotor minimum expectations via the OSCE’s objectives. The randomized pathologies and image acquisition to support or rule out a diagnosis incorporated a cognitivist, constructivist approach to meet the cognitive domain of CAAHEP. Finally, each student’s behavior toward their “patient” was observed to assess the affective domain (e.g., introducing themselves, explaining the examination, obtaining a clinical history, and ensuring the patient’s comfort throughout the examination).
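The pulmonary hypertension example above rests on a standard calculation: the simplified Bernoulli equation, PASP = 4 × (peak TR velocity)² + RAP. The sketch below illustrates it, assuming the common ASE convention of assigning RAP as 3, 8, or 15 mm Hg based on IVC size and collapsibility; the input values are illustrative only.

```python
# Hedged sketch of the standard estimate behind the pulmonary
# hypertension example: simplified Bernoulli, PASP = 4 * Vmax^2 + RAP,
# with RAP assigned as 3, 8, or 15 mm Hg from the subcostal IVC view
# per common ASE convention. Inputs are illustrative, not study data.

def estimate_rap(ivc_diameter_cm: float, collapses_over_50pct: bool) -> int:
    """Estimate right atrial pressure (mm Hg) from IVC size and collapsibility."""
    if ivc_diameter_cm <= 2.1 and collapses_over_50pct:
        return 3    # normal IVC
    if ivc_diameter_cm > 2.1 and not collapses_over_50pct:
        return 15   # dilated, poorly collapsing IVC
    return 8        # indeterminate

def estimate_pasp(tr_vmax_m_per_s: float, rap_mm_hg: int) -> float:
    """Simplified Bernoulli equation: PASP = 4 * Vmax^2 + RAP (mm Hg)."""
    return 4 * tr_vmax_m_per_s ** 2 + rap_mm_hg

# Example: TR Vmax 3.0 m/s with a normal IVC -> 4 * 9 + 3 = 39 mm Hg.
rap = estimate_rap(ivc_diameter_cm=1.8, collapses_over_50pct=True)
print(f"PASP ≈ {estimate_pasp(3.0, rap):.0f} mm Hg")
```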
Simulation Methodology
The program’s cardiac simulator was a high-fidelity HeartWorks® transthoracic echocardiography simulator. It allowed students to visualize 3D cardiac anatomy alongside an ultrasound image. Students were able to virtually dissect the 3D heart with the mock transducer, reinforcing how their transducer manipulation would appear echocardiographically. HeartWorks encompasses 15 simulated pathologies that students could learn. The instructor demonstrated the system to students in class and posted links to online instructional videos provided by the manufacturer. Students were assigned an individual and group assignment using the simulator but were allowed to schedule a time to practice throughout the cardiac curriculum.
The individual assignment served to introduce students to echocardiography: each student had to choose a cardiac structure and acquire images in the echocardiographic planes where the structure is normally visualized. The individual assignment was designed to enhance students’ psychomotor skills, helping prepare them for their OSCE and clinical competency via real-time scanning while reinforcing echocardiographic anatomy knowledge. The group assignment was conceived to help students prepare for the cognitive domain. Groups of three were assigned, and each group was given one of HeartWorks’s 15 pathologies. Students were instructed not to tell the other groups what their assigned pathology was. Students were required to acquire all primary and secondary finding images (whether the findings were present or not) and the measurements needed to diagnose the pathology. Each group then created a cardiac case study in PowerPoint and presented it to the class so that classmates could guess the pathology. The group pathology work was devised to help students understand how to assess a pathology; students, however, were not permitted to scan their simulation pathology for the OSCE.
Formative Assignments
Weekly scanning assignments were administered to reinforce the primary echocardiographic principles, images, measurements, and protocol learned in class. The first few weeks’ assignments incorporated basic echocardiographic view acquisition, M-mode, color Doppler, and spectral analysis principles. The first weeks’ assignments were the same for both the control and experimental groups, except for the feedback provided; the control group’s assignments served only as a guide. Afterward, each week, students needed to submit all images that could be acquired within a particular view, in the order of a protocol, via the DMS program’s PACS. Students also needed to fill out a worksheet explaining what each image and measurement applicable to the view assesses. Individual feedback was provided via the PACS prior to the following week’s scan lab. Before the students engaged in the current week’s scan lab, the instructor gave a live demonstration highlighting common mistakes made on the previous week’s assignment, in addition to which images they needed to focus on for the next assignment.
Participants and Research Setting
A nonprobability convenience sample was used for this study. Participants (n = 82) from four student cohorts, ranging from 19 to 67 years of age, had completed their adult echocardiography course and clinical cardiac internship. Participants’ final examination grades, OSCE scores, and clinical preceptor evaluations from a CAAHEP accredited adult echocardiographic program were acquired and de-identified upon obtaining Institutional Review Board approval from both the University of the Cumberlands and the SUNY Downstate Health Sciences University. Final examination grades and OSCE scores were obtained from the program’s learning management system, individual alumni, and current student records. Two cohorts composed the control group (n = 43), which did not have exposure to multiple competency-based methods. The experimental group (n = 39) comprised the remaining cohorts and was exposed to multiple competency-based methods. All tests were analyzed using SPSS.
Results
The first research question explored the relationship between the implementation of multiple competency-based techniques in a CAAHEP accredited adult echocardiographic program and students’ ability to successfully complete summative assessments. An independent two-sample t-test was run comparing students exposed to multiple competency-based techniques and those who were not. The results comparing cumulative final examination scores of those exposed to multiple competency-based methods (M = 83.32) and those not exposed to multiple competency-based methods (M = 80.81) revealed no statistically significant difference, t(80) = −1.39, P > .05.
The second research question was designed to determine the difference between the implementation of multiple competency-based techniques in a CAAHEP accredited adult echocardiographic program and its students’ clinical competency evaluation scores from clinical preceptors. The clinical evaluation summary score is an average of four different categories with multiple measures for each. The categories are the psychomotor, cognitive, and affective learning domains, in addition to professional behavior.
The results of an independent t-test between the clinical evaluation summary scores of those exposed to multiple competency-based methods (M = 95.71) and those not exposed to multiple competency-based methods (M = 95.16) revealed no statistically significant difference, t(80) = −0.52, P > .05. Additional independent t-tests were run to determine whether there was a statistically significant difference in any of the four individual categories.
The second t-test was run for the psychomotor domain. Psychomotor assessments that aligned with the OSCE guidelines and rubric were the following: (1) the student aligns the plane of view on the longitudinal and transverse structures, and (2) the student uses the transducer appropriately to display anatomy. The results of an independent t-test comparing the psychomotor domain’s clinical evaluation score between those exposed to multiple competency-based methods (M = 96.1) and those not exposed to multiple competency-based methods (M = 95.16) revealed no statistically significant difference, t(80) = −0.80, P > .05.
The third t-test was run for the cognitive domain. The clinical evaluation’s cognitive domain portion encompassed the most assessments that aligned with the OSCE guidelines and rubric. Cognitive assessments included the following: (1) the student recognizes vital clinical information and historical facts that may impact the diagnostic examination; (2) the student reviews data from current and previous examinations to summarize findings; (3) the student adjusts instrument controls; (4) the student demonstrates an understanding of Doppler ultrasound principles; (5) the student demonstrates knowledge of anatomy and pathophysiology; and (6) the student identifies and documents abnormal sonographic patterns of disease processes. The results of the independent t-test comparing the cognitive domain’s clinical evaluation score of those exposed to multiple competency-based methods (M = 94.44) and those not exposed to multiple competency-based methods (M = 93.63) revealed no statistically significant difference, t(80) = −0.69, P > .05.
The fourth t-test was run for the affective domain. Affective domain assessments that aligned with the OSCE guidelines and rubric were the following: (1) the student identifies the patient and interacts appropriately; (2) the student communicates effectively with the clinical staff; (3) the student’s actions in patient care are appropriate; (4) the student completes assigned tasks in a timely fashion; and (5) the student exercises professional conduct in the clinical setting. The results of the independent t-test comparing the affective domain’s clinical evaluation score of those exposed to multiple competency-based methods (M = 96.65) and those not exposed to multiple competency-based methods (M = 96.19) revealed no statistically significant difference, t(80) = −0.38, P > .05.
The fifth t-test was run for professionalism. The professional component that aligned with the OSCE guidelines and rubric is that the student assumes responsibility and uses good judgment. The results of an independent t-test comparing the professionalism component’s clinical evaluation score of those exposed to multiple competency-based methods (M = 96.28) and those not exposed to multiple competency-based methods (M = 95.9) revealed no statistically significant difference, t(80) = −0.31, P > .05.
The third research question was set up to ascertain the relationships between traditional multiple-choice format cumulative examinations and competency scores, such as clinical evaluation scores and OSCEs, within a CAAHEP accredited adult echocardiographic program. Three linear regression tests were run. The first measured the relationship between students’ final examination and their overall clinical evaluation scores from clinical preceptors. The second measured the relationship between students’ final examination scores and their cognitive domain score from their clinical evaluation score. Finally, the third measured the relationship between students’ final examination scores and their OSCE scores from faculty.
According to the first linear regression test, there is no statistically significant relationship between multiple-choice final examination scores and clinical evaluation scores from clinical preceptors, r(80) = .17, P > .05. The second linear regression test demonstrated no statistically significant relationship between multiple-choice final examination scores and clinical cognitive evaluation scores from clinical preceptors, r(80) = .21, P > .05. The third linear regression test showed no statistically significant relationship between multiple-choice final examination scores and OSCE scores from faculty, r(37) = .25, P > .05.
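For illustration, the sketch below mirrors these correlation analyses with SciPy’s linregress, which reports the Pearson r and its two-sided P value; as before, the study ran its analyses in SPSS, and the paired scores here are hypothetical placeholders.

```python
# Sketch mirroring the correlation analyses: SciPy's linregress gives
# the Pearson r and its two-sided P value for paired scores. The study
# ran these in SPSS; the paired scores below are hypothetical.
from scipy import stats

final_exam_scores = [82.0, 79.5, 88.0, 84.5, 76.0, 90.0, 81.0]  # hypothetical
osce_scores = [78.0, 85.0, 80.5, 88.0, 74.0, 83.5, 79.0]        # hypothetical

result = stats.linregress(final_exam_scores, osce_scores)
print(f"r = {result.rvalue:.2f}, P = {result.pvalue:.3f}")

# Correlations are reported as r(df) with df = n - 2, which matches
# the study's r(80) for n = 82 students and r(37) for n = 39.
print(f"df = {len(final_exam_scores) - 2}")
```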
Discussion
The first research question’s results were interesting because the final examination was designed to prepare students for their credentialing examinations. The professional sonography organizations (e.g., ASE, SDMS, Society for Vascular Ultrasound) imply that credentialing examinations ensure clinical competency.12,14,38 However, how sonographer competence is determined and maintained remains obscure.39 There is also a dearth of literature analyzing the relationship between credentialing and clinical competency;12 the results from the first research question help address that paucity. Studies state that credentialing examinations do not entirely assess competency, which the data from the first research question support.27,28,40,41 The findings oppose the implication of sonography’s professional organizations that a credentialing examination ensures clinical competency. In this particular program, a didactic examination is the defining measure of a sonographer’s clinical competency despite no relationship between clinical skills and didactic examinations. The findings from these students would suggest that the traditional didactic approach was not ideal for ensuring competency and that other methods needed to be explored. However, it is crucial to remember that specific prerequisites must be met for one to sit for a sonography credentialing examination. CAAHEP requires that competency be acquired clinically, where clinical preceptors attest to students’ ability to perform sonography.42,43 The other prerequisites require clinical experience, which is often verified via a clinical verification form. These require that someone with the specialty credential attest to the applicant’s core clinical skills, demonstrated on an actual patient.2 The second research question provides further evidence regarding these students’ clinical competency.
When considering the second research question, it was essential to identify any relationship between the implementation of multiple competency-based techniques in a CAAHEP accredited adult echocardiographic program and the students’ clinical competency evaluation scores from clinical preceptors. The clinical evaluation scores encompassed four different categories and an overall summary that combined all four. Three of the categories are the learning domains required by CAAHEP (cognitive, psychomotor, affective) and align with Bloom’s Taxonomy43–45; these learning domains are also part of the OSCE rubric. The results were unexpected, considering that the multiple competency-based assessments were designed to prepare students to perform better clinically. It was hypothesized that there would be a difference in the overall clinical evaluation score, in which case multiple regression analysis might have been performed to see which of the four categories contributed most to the difference. However, since there was no statistical difference in the overall summary, individual t-tests were run for each domain instead. In this sample of students, it was surprising that none of the domains demonstrated a statistically significant difference. The expectation was that the psychomotor and cognitive domains would show significance, considering that the OSCE, the simulation project, and the formative assessments emphasized those learning domains.
The reasoning for the results and their implications can vary, but ultimately these students’ results support the existing literature. McIntyre-Hite 46 found that disagreements among competency-based “experts” lay within competency-based assessment practices, validity, and types of learning. Baker et al 16 state that clinical assessments are subjective and not valid. There are health care workers who may evaluate students and not have the proper educational training and development; therefore, their student assessments may be unreliable. 30 Michael et al 40 discovered that less experienced sonographers awarded higher clinical competency scores.
Regarding education, CAAHEP accredited schools do not require sonographers who assess students clinically to have a minimum amount of experience beyond being credentialed in that specialty.47 Unfortunately, there is also little research comparing sonographer experience to the level of clinical competency. One article that reported better accuracy among experienced sonographers did not include sonographers who performed fewer than 200 patient cases per year.12 For example, some students may go to more rigorous clinical sites (i.e., harsher graders), while weak students may go to clinical sites that consistently inflate grades, making it very difficult to compare student success. The grading discrepancy can also mean that clinical preceptors are neither accurate nor trained to fulfill this part of the credentialing process. This also calls into question their ability to attest to an applicant’s skills, thus potentially creating the need to standardize all components of the credentialing process. Furthermore, the ARDMS only requires that an applicant’s clinical competency assessor be credentialed; there is no experience requirement. Perhaps the credentialing process favors cognitive knowledge, measured with traditional didactic assessments (e.g., multiple-choice examinations), due to the variability and difficulty of creating a structured, reliable clinical assessment.
Moreover, one could question why CAAHEP greatly emphasizes clinical competency when assessment methods are so variable. In theory, credentialed sonographers should be competent. However, if some clinical sonographers provide no valid clinical assessment because their knowledge of how to grade applicants varies, the value of the credential could be undermined. An avenue for further research may be to compare credentialed and noncredentialed sonographers of varied experience. Regarding students being exposed to additional competency-based methods with no correlation to their clinical score, it could be that the competency-based methods had no effect on their score, or that there is an issue with clinical site assessment as required by CAAHEP and the JRC-DMS. If the latter, CAAHEP and credentialing organizations could consider having trained personnel assess students and applicants, respectively. However, this would increase both organizations’ financial costs, which may extend to stakeholders in the field.
The final research question sought to identify the relationships between traditional multiple-choice format cumulative examinations and competency scores. This was done by reviewing clinical evaluation scores from clinical preceptors and OSCE scores within a CAAHEP accredited adult echocardiographic program. The findings are similar to those of the first research question in that both evaluated didactic assessment versus clinical competency assessment, albeit through different lenses: the first research question compared the competency-based assessments’ effect on a multiple-choice examination, whereas this objective compared the outcome score of a clinical skill versus a didactic skill. It was interesting that the cognitive domain portion of the clinical evaluation scores did not correlate with the final examination scores, as multiple-choice examinations typically assess this learning domain. Another noteworthy observation is the higher clinical evaluation mean scores, 95.4% for the overall clinical score and 94% for the cognitive domain portion, compared to the mean final examination score of 82%.
Another interesting point was that although the final examination is a didactic score (i.e., assesses the cognitive domain) and the OSCE is a scanning skill assessment (assesses all learning domains), the two produced comparable scores relative to the clinical evaluation scores from clinical sites. These findings raise concerns similar to those of the first two research questions. The first research question raised the question of why there was no statistically significant difference between multiple competency-based assessments and final examination scores; the second, in contrast, raised the question of why there was no statistically significant difference between multiple competency-based assessments and clinical evaluation scores. The second research question’s results align with the literature; however, one must ensure all factors are considered. Two reasons could account for the results: poor examination development or invalid assessment by health care workers leading to higher scores. To account for poor examination development, one could assess the correlation of Registered Diagnostic Cardiac Sonographer (RDCS) credentialing pass rates to final examination scores. Unfortunately, it was not possible to analyze this cohort’s RDCS pass rate because COVID-19 affected the registry-taking process. Fortunately, the program has had an excellent pass rate for the RDCS credentialing examination that is consistently higher than the national average.
Additional Application of Research
This review of a particular echocardiographic program’s outcomes supports Baker et al’s 16 notion that clinical assessment is subjective; it therefore calls into question the rigor of the current prerequisite pathways to sit for a credentialing examination. Moreover, and unfortunately, sonographer credentialing is not universally mandated.9,22 The intent of sonography’s credentialing organizations and CAAHEP to ensure that competent sonographers are produced to enhance patient care may not be achieved without, at minimum, mandating credentialing. Without a minimum standard to be credentialed, the results and additional assessments mentioned in this study are futile. Laboratory accrediting bodies, such as the American College of Radiology (ACR), the IAC, and the American Institute of Ultrasound in Medicine (AIUM), are designed to promote and ensure that a facility implements the highest level of image quality and safety, using satisfactory equipment and credentialed personnel and requiring peer-review processes to uphold quality assurance.7–9,25,48–51
The IAC accredits echocardiography labs and only recently, in 2017, revised its accreditation process by mandating that sonographers be credentialed (with a couple of provisions), while the AIUM does not require credentialed sonographers. The two IAC provisions for becoming accredited without credentialed sonographers are for new graduates of a cardiac DMS program who are employed by an accredited facility and for sonographers “cross-training in echocardiography or working to fulfill clinical experience prerequisites for a credentialing examination.”49(p9) The latter raises the issue of valid clinical assessment, as presented in the results of Research Questions 2 and 3. Moreover, labs are not required to be accredited, which raises the question of labs’ incentive to become accredited. Notably, the Centers for Medicare & Medicaid Services (CMS) and other insurance companies, such as Oxford, are requiring lab accreditation to reimburse for particular ultrasound examinations.7,8,22,25,52 Therefore, as Sorrentino 22 states, the trend for DMS reimbursement is moving toward more health care insurance companies requiring credentialing or accreditation. Though it is excellent that some factor (i.e., reimbursement) is driving sonographer credentialing, it would be optimal if the credential itself guaranteed competency. The leadership of ultrasound credentialing organizations should, at minimum, work together with sonography accrediting organizations (both professional and educational) to ensure that credentialing satisfies its purpose of optimal patient care secondary to ensured clinical competency. This study can hopefully serve to rekindle the leadership initiatives of over 20 years ago to bolster the field of sonography by requiring that sonographers become credentialed. At a minimum, the study serves as a guideline for creating an optimal CAAHEP accredited echocardiography program that ensures graduates are clinically competent by adding a more structured faculty assessment to preceptors’ clinical assessment.
Study Limitations
Due to the study design, these students’ results are not generalizable, owing to several threats to validity. First, this study was conducted using a convenience sample of students from the host echocardiographic program. The study’s sample was also limited to just one CAAHEP accredited adult echocardiographic program, which restricted the number of cohorts and students analyzed. Given the small cohorts measured, the risk of a type II error is possible. In addition, students graduating from a CAAHEP accredited program represent only one of the five pathways to obtain an ARDMS credential. Furthermore, the host program taught all sonography specialties, which could skew the type of graduates taking the cardiac credentialing examination, resulting in a unique sample with minimal statistical power.
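To make the type II error concern concrete, a power calculation along the following lines could be run; the assumed medium effect size (Cohen’s d = 0.5) is a conventional benchmark, not an estimate from the study’s data.

```python
# Hedged power sketch for the type II error concern: with groups of
# 43 and 39 and alpha = .05, an independent t-test's power to detect
# a medium effect (Cohen's d = 0.5, an assumed benchmark rather than
# an estimate from the study) falls well short of the usual 0.80.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().solve_power(effect_size=0.5, nobs1=43,
                                    ratio=39 / 43, alpha=0.05)
print(f"power ≈ {power:.2f}")  # roughly 0.6 under these assumptions
```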
Another unique quality of this educational research was the use of the OSCE. In the host educational program, the OSCE used fellow students, as suggested by Baker et al, 16 instead of standardized patients, partly due to a limited budget. Although students did not know whom they would be scanning until the day of the examination, there may have been a bias, as the students involved in the research had been completing peer scanning examinations throughout the curriculum. Another important limitation is that personal bias may exist among evaluators despite multiple evaluators assessing each student’s images.53 In addition, personal bias may be present from clinical preceptors’ lack of training, knowledge, or education to ensure a completely valid and reliable assessment.16,30 Although this study suggests and supports the preceding statements that clinical preceptor evaluation is subjective and possibly inaccurate, the results are not generalizable because each preceptor evaluated only one student. Having each student assessed by multiple clinical preceptors would be preferable. However, because CAAHEP does not explicitly tell accredited schools when to send students to clinical rotations, there is no standardization to accurately assess this; some schools teach their entire didactic curriculum before students step foot into a clinical site.
Implications for Future Study
The study’s findings unveiled areas of improvement for future studies as well as new research topics. A vital component for future studies is the need for a larger sample size in two areas: students and clinical preceptors. Ultimately, future research should attempt to demonstrate the need for further advocacy to standardize the clinical component of credentialing and of CAAHEP JRC-DMS education. The development of standardized clinical assessments could be the first step toward ensuring true competency rather than relying on credentialing alone. However, one must recognize that perceptions are only a fraction of the broader scale of competency and may not align with outcomes. One may also want to investigate whether well-designed assignments translate to teacher effectiveness in a clinical component. Further research on clinical assessment and competency can benefit the sonography field.
Conclusion
Diagnostic medical sonography is a unique health care profession with many pathways by which one can apply for a national board examination and become a credentialed sonographer. Graduating from a CAAHEP accredited school is one of the pathways to sit for the credentialing examination. The ARDMS credentialing examination is composed of a didactic component with multiple-choice and hot spot questions that primarily assess the cognitive domain. The literature comparing the relationship between credentialing and clinical skills is scarce. Clinical competence is extremely relevant to a sonographer’s scope of practice and patient care. The clinical prerequisite to the credentialing examination is meant to ensure that applicants can adequately perform in the psychomotor and affective domains, yet both may be highly subjective. CAAHEP accredits sonography programs to ensure that entry-level sonographers are produced who are competent in the cognitive, psychomotor, and affective learning domains. However, there are few evidence-based programs grounded in educational theory to support educational needs in health care.
Over the last two decades, articles calling for sonographers to be credentialed, for appropriate competency assessment, and for standardized education have been published, with few of these aims coming to fruition. Other studies have determined that clinical competence assesses different components than didactic assessment, but they examined either a different sonography specialty or a different health care profession. Therefore, this study provided findings to begin addressing the need for both competency assessment and didactic assessment by incorporating and analyzing multiple competency-based assessments within a CAAHEP accredited adult echocardiographic program.
Footnotes
Ethics Approval
Ethical approval was not sought for the present study because this study falls under Common Rule Exemption Category 1 [45 CFR 46.104 (d)(1)].
Informed Consent
Informed Consent was not sought for the present study because this study falls under Common Rule Exemption Category 1 [45 CFR 46.104 (d)(1)].
Animal Welfare
Guidelines for humane animal treatment did not apply to the present study because this study does not entail animals.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Trial Registration
Not applicable.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
