Abstract
Purpose
Faculty development is vital for sustaining and advancing medical education. While accreditation standards require US medical schools to offer faculty development, existing frameworks lack the specificity to guide program planning, implementation, and evaluation across all key domains—and none have been developed through a systematic, consensus-based process with national medical education leaders. To address this gap, the authors launched a pilot initiative to derive a consensus-based framework, entitled the Medical Education Faculty Development Program Framework (FDPF), for undergraduate medical education.
Methods
From 2021 to 2024, faculty development experts from 3 medical schools conducted a sequential, mixed-methods study in 7 phases to develop the FDPF. Phases included literature and expert conceptual reviews to generate an initial draft, followed by 4 iterative rounds of focus groups and semi-structured interviews with faculty development leaders from national medical education organizations to refine the framework. A short form with 27 quality indicators was then piloted at 2 US medical education conferences in 2023 with a combined sample of 55 faculty development participants.
Results
Focus group and conference participants endorsed the FDPF as potentially valuable for onboarding new faculty development professionals and for self-assessment of institutional programs or accreditation preparation. Across both pilot administrations of the short form, participants reported the highest agreement with quality indicators related to tracking faculty development participation and offering sessions on core teaching topics and learning science. Agreement was lower for indicators related to program alignment with teaching standards, access to faculty performance metrics, preceptor training programs, and support for instructional design.
Conclusions
Iterative expert feedback through focus groups and leader interviews contributed to 2 distinct final products: (1) a concise short form, “27 Quality Indicators of Faculty Development Programs,” designed for rapid self-assessment, and (2) a comprehensive FDPF checklist designed for systematic program benchmarking, design, and enhancement. Together, these tools offer faculty development professionals a systematic, consensus-based resource for driving continuous improvement and adaptability in medical education.
Introduction
Faculty development is key to a medical school's ability to adjust to transformative changes in the health professions. As one example, technologies such as generative artificial intelligence platforms have significant impacts on curriculum design, assessment strategies, and teaching methods. 1 The increasing complexity of healthcare heightens the importance of self-directed learning for adapting to new challenges. 2 Yet the innovations and changes impacting medical education demand not only that medical students develop new attitudes, skills, and knowledge, but that faculty do so as well. 3
The pace of change in medical education has accelerated dramatically, driven by emerging technologies, evolving clinical practices, and an unprecedented expansion of medical knowledge. 4 For example, the estimated doubling time of medical knowledge shortened from 50 years in 1950 to a projected 73 days by 2020, 5 underscoring the need for continuous adaptation among educators as well as learners. Faculty development is therefore essential for helping educators acquire the skills, perspectives, and pedagogical strategies required to navigate and respond to these rapid shifts in the educational landscape.
Although all US medical schools are required by accreditation bodies such as the Liaison Committee on Medical Education (LCME) 6 and the Commission on Osteopathic College Accreditation (COCA) 7 to provide faculty development, these standards are not adequately supported by robust evidence. 8 As a result, institutions must design their own approaches to faculty development without a comprehensive framework to structure program planning, resource allocation, or evaluation. 9 This lack of detailed guidance contributes to wide variation in the scope, priorities, and quality of faculty development programs, pointing to the need for a shared framework. 10
Medical schools develop and conduct faculty development programs independently. While multiple institutions may rely on the same third-party development resource, such as IAMSE's Foundations of Health Professions Education Course, 11 the ways in which the materials from such programs are implemented vary considerably across settings. As a result, faculty development programs differ widely in structure, content, and quality. Yet these programs are essential for supporting faculty success across basic, clinical, and health systems sciences, 12,13 and their impacts are far-reaching. Faculty developers influence faculty, and these efforts in turn impact students. 14 The scope of impact these students will have on patients underscores the importance of investing in faculty development. 13 Despite this influence, most medical school faculty receive little formal preparation as educators prior to assuming teaching roles. 15 Establishing clear standards and guidance for faculty development can therefore have significant consequences for the quality of medical education and, ultimately, for healthcare practice.
While the literature offers valuable theoretical and practical contributions to faculty development, there remains no comprehensive framework that enables institutions to tailor programming in a flexible, systematic way to their unique needs and constraints.13,16,17 Several existing models address specific components of faculty development. Harden and Crosby describe 12 key roles of a teacher. 18 Kohan et al 12 expand upon this model with additional educator roles. Glassick et al 19 outline a framework focused on evaluating scholarly work. Ahmed et al 20 propose a tool for longitudinal evaluation of faculty development initiatives. Schultz et al 21 describe a competency-based process for assessing faculty performance; and Canadian researchers have developed a Fundamental Teaching Activities Framework to guide individual educator development. 22 Taken together, prior models and frameworks provide an important foundation, yet none offers a comprehensive, institution-level approach that integrates program structure, resources, activities, and evaluation to guide the design, implementation, and improvement of faculty development programs. This gap underscores the need for a more integrated framework that can accommodate the diverse structures, resources, and priorities of different medical schools.
To address this need, the authors sought perspectives from the community of faculty development experts to develop a consensus-based, institution-level model that could guide program design, implementation, and evaluation. 23
Our goal was to clarify the roles, processes, and systems required for high-quality faculty development and to create a practical framework that institutions could adapt to their unique contexts. The study was guided by the following research questions:
RQ1. What are the consensus quality indicators of undergraduate medical education (UME) faculty development programs, as described by expert faculty developers and other stakeholders?
RQ2. What do faculty development experts suggest about developing the framework as a scorable inventory with 130 indicators?
RQ3. What are potential uses for a faculty development quality-measures framework?
Methods
The research team consisted of experienced faculty development leaders from 3 US medical schools, whose professional backgrounds shaped both the initial conceptualization of the Faculty Development Program Framework and the interpretation of participant feedback. As long-standing faculty developers, the researchers brought assumptions about the importance of structured programming, institutional culture, and faculty development systems. The team engaged in ongoing discussions during data collection and analysis to examine how these professional perspectives influenced the coding decisions, interpretation of themes, and iterative refinement of the framework.
Following Institutional Review Board (IRB) approval (protocol #BHS1708), the framework was developed in 7 phases (Figure 1). Step 1 was drafting a literature-informed inventory with 155 quality indicators and 5 sections, entitled the “Faculty Development Program Quality Measures Inventory” (FDPQMI). This represented Draft 1 of what would later become the Faculty Development Program Framework (FDPF). The questionnaire was newly developed for this study and had not undergone prior formal validation; in subsequent phases, it was pilot-tested with faculty development participants at 2 national medical education conferences.

Figure 1. The Faculty Development Program Framework (FDPF) development process.
Step 2. Next, Draft 1 of the FDPQMI was shared in PDF format with expert faculty development peers at the Southern Group on Educational Affairs (SGEA). These discussions revealed the importance of faculty developers in advancing change within their institutional ecosystems and highlighted the wide variation in full-time equivalent staff and resources devoted to faculty development across institutions. Based on this feedback, the framework was distilled into a short form (Short Form Edition #1), presented as a poster entitled “30 Quality Indicators of Faculty Development Programs” at the 2022 IAMSE conference. During this period, the research team produced Draft 2 of the FDPQMI.
Step 3. Next, researchers conducted 3 rounds of expert focus groups via video conference to refine the Draft 2 inventory sections, language, and scope. These participants (
Step 4. In 2023, Pilot I of the Short Form (Edition 2) with 33 quality indicators was administered to attendees of the Generalists in Medical Education Conference (
Step 5. In 2024, researchers conducted Pilot II of the Short Form (Edition 3), with 27 quality indicators. Researcher LM presented the project at a training workshop at the AACOM conference, followed by administration of the short-form survey (
Step 6. The sixth phase included interviewing 2 leaders in medical education organizations to obtain guidance on how to develop the project further. These interviews were conducted via video conference using a semi-structured discussion about the draft framework, and informed consent procedures were conducted in the same manner as the focus groups. One interviewee had experience developing the Clinician Educator Milestones project, and another was interested in developing a faculty development course based on the quality indicators.
Step 7. In the final phase, the team refined the longer framework into a checklist (Draft 3) with 130 measures through research team review.
Candidate Items Rated to Derive 27 Final Quality Indicators.
Results
Results are presented based on insights gained from (1) focus groups, (2) interviews, and (3) surveys, and address the 3 research questions.
RQ1. What are the consensus quality indicators of undergraduate medical education (UME) faculty development programs, as described by expert faculty developers and other stakeholders?
Consensus quality indicators of UME faculty development programs are codified in our Faculty Development Program Framework (Phase 7) (see Supplemental Appendix I). This framework provides a comprehensive structure that stakeholders can tailor to their unique needs at varying levels of aggregation. One hundred thirty (130) program descriptors are organized into 27 quality indicators. These 27 indicators constitute the concise short-form inventory, while the full framework includes 130 detailed checklist criteria (measures) categorized into 5 major domains (see Figure 2):
Resources: Personnel, budget, academic technology support, and other assets needed for faculty development.
Culture: How faculty developers influence the college of medicine's (COM) learning culture.
Activities: The training activities provided for faculty development by internal and external trainers.
Outcomes: The standards, resources, and data required to establish faculty development outcomes.
Program Evaluation: The processes and steps for evaluating the faculty development program.

Figure 2. Framework structure.
During Phases 2 and 3 of the project, focus group participants provided detailed guidance on areas needing clarification, leading to the final draft of the FDPF (Supplemental Table 3). Participants emphasized the necessity of adding an introductory section that clearly states the instrument's purpose, scope, and intended audience. This section was subsequently added to the framework. Participants also asked that the final edition of the framework include an explicit definition of
Focus group participants also offered concise recommendations to strengthen key sections of the framework. For the Resources section, they suggested shortening the content, relocating it to the end of the instrument, and updating items to reflect the growing use of educational technology tools.
Focus group feedback fell mainly into 5 a priori themes: (1) Overall Usefulness, (2) Navigating the Sections of the Instrument, (3) Content of the Instrument, (4) The Scale and Utility of a Scorable Instrument, and (5) Possible Uses of the Framework.
In the Culture section, participants recommended adding rewards or incentives for faculty participation in faculty development. For example, one participant said, “I didn’t see that anywhere … recognizing faculty who enrolled or completed a program … you want to celebrate them because then … they become promoters for the others.”
Participants also requested a revision to Item 9D to clarify the role of faculty developers on key committees, and they noted areas of redundancy that required streamlining. Feedback on the Activities section highlighted the need for a more coherent structure; in response, the research team consolidated overlapping elements and organized the faculty development offerings into 20 content areas.
RQ2. What do faculty development experts suggest about developing the framework as a scorable inventory with 130 indicators?
During Focus Group rounds 1 to 3, participants discussed whether the FDPF should be presented as a scorable survey or as a broader reference framework. Several faculty developers emphasized that the FDPF is best positioned as a formative quality-improvement resource, rather than a summative evaluation tool or a survey instrument. As one participant asked, “Do you think that a survey that was 30 items long might be more valuable? And then the framework can be as long as it needs to be because it’ll be in a handbook or something?” Approximately one-third of the focus group participants felt the longer framework instrument had merit in a scorable version. Others cautioned that scoring could inadvertently introduce judgment or pressure. One participant expressed concern that using the FDPF as a scored inventory could make institutions feel “judged or penalized” for not meeting every criterion, noting that many items may not be relevant to all schools. They emphasized that the FDPF should function as a nonjudgmental inventory, not a mechanism for rating or ranking programs.
To help inform the potential utility of the framework as a shorter, nonjudgmental, formative scorable instrument, the research team piloted 2 shortened editions of the inventory as rating instruments at medical education conferences. During Phase 4, Pilot I was administered during the 2023 Generalists in Medical Education Conference in Seattle (
For Pilot I (
When asked to describe their primary role, Pilot I respondents (single selection) included 54.5% Faculty Developer, 9.1% Faculty Member, 18.2% Leadership, and 18.2% Other. The Pilot II version allowed multiple selections, with 72.7% selecting Faculty Developer, 54.5% selecting Leadership, and 51.5% selecting Faculty Member. Most respondents (90.9%) identified their undergraduate medical education institution as allopathic (MD).
During Phase 5, Pilot II (
Participants reported the lowest levels of agreement with the following quality indicators: (a) our institution provides meaningful data to the faculty development team to inform them about teaching (
One participant reflected: “There is minimal research out there on the efficacy of faculty development in medical education. When I first started, something like this would have been very beneficial in planning, growing, and developing a new program—it would have helped me see what other schools were doing.”
During Phase 4, at the 2023 Generalists in Medical Education Conference, a workshop was conducted in which participants provided feedback on a shortened inventory entitled “30 Quality Indicators of Faculty Development Programs” (provided in Table 1). Their insights included the following: (1) faculty developers often fulfill multiple roles and are frequently siloed in disparate departments throughout large institutions; (2) faculty development personnel, funding structures, and institutional priorities vary across universities, medical schools, and departments, all of which influence how programs are designed and implemented. Participants also provided input on demographics for the inventory and suggested categorizing faculty development activities into 20 content areas. Themes that surfaced from this workshop are provided in Supplemental Table 4.
Discussion
Through this project, we engaged in extensive dialogue about program quality indicators with the medical education faculty development community. Consistent with prior studies, these conversations revealed the complexity and diversity of faculty development programs and underscored recurring needs for clear implementation strategies, formal communication systems, and program evaluation. 25 These insights served as a general needs assessment for medical education faculty development directors, provided rich context, and informed the answers to our research questions.
Participants identified several potential uses for the FDPF:
As a guide for faculty developers, especially those who are new or inexperienced.
As an institutional self-study instrument for continuous quality improvement.
As a tool for preparing for an accreditation site visit. The FDPF domains were mapped to relevant accreditation standards (COCA and LCME); a crosswalk is available in Supplemental Table 5.
As a data collection tool for gathering supporting evidence to advocate for faculty development resources.
As a potential instrument for national benchmarking when used in a scorable format.
As a framework for increasing faculty developer agency.
By providing a structured, institution-level framework, the FDPF enables faculty developers to identify gaps, align programs with institutional priorities, and advocate more effectively for resources and organizational change.
We acknowledge that many of the current checklist items emphasize inputs and processes rather than outcomes. This emphasis was intentional for this early pilot version, given the need to establish consensus around foundational program structures before evaluating impact. Nonetheless, several outcome-oriented indicators are included in the FDPF, such as teaching standards, performance metrics, and program evaluation at Kirkpatrick Levels 3-4. Future iterations of the framework will expand this dimension further, with additional emphasis on indicators of impact at the learner, faculty, and institutional levels. Further validation through broader national sampling and psychometric analysis will be necessary to establish the framework's reliability and generalizability. In this way, the FDPF can evolve to balance both the foundational conditions for high-quality faculty development and the demonstrable outcomes that stakeholders increasingly seek.
The survey instruments used in this study were not based on validated instruments. While the study engaged FD leaders through focus groups, interviews, and pilot surveys, these methods were not designed to yield definitive consensus or validation. Instead, they provided rich expert feedback that shaped successive iterations of the inventory. The FDPF should therefore be understood as a developmental framework at an early stage, with future work required to establish consensus, reliability, and validation. In particular, researchers plan to conduct future studies employing factor analysis to explore the framework's construct validity.
Conclusion
After 2 earlier iterations, the final edition of the FDPF emerged as a robust, structured framework designed to guide and organize faculty development efforts. Its purpose is to support program creation, refinement, and quality improvement, not to function as a comparative ranking tool or to imply that institutions must meet every criterion. Rather than requiring universal adoption of all 130 items, the FDPF is intentionally flexible, allowing institutions to identify and use the indicators most relevant to their context, needs, and culture. Referring to the FDPF as an “instrument” would therefore be misleading, as it does not prescribe compliance with each criterion, nor is it intended to serve as a definitive, summative evaluation measure. Instead, it provides a comprehensive checklist or menu of evidence-informed elements from which institutions can selectively draw to strengthen their faculty development programs.
Beyond serving as a framework for program evaluation and planning, focus group and interview participants identified several practical ways in which institutions might apply the FDPF. These included designing new faculty development programs, clarifying institutional priorities, informing budget planning and resource allocation, guiding development of information systems for tracking faculty development activities, supporting internal evaluation or research, and structuring longitudinal faculty development initiatives. These examples illustrate the immediate functional utility of the framework across a range of institutional contexts.
Supplemental Material
sj-docx-1-mde-10.1177_23821205261439617 - Supplemental material for The Development of a Faculty Development Program Framework for Medical Education
Supplemental material, sj-docx-1-mde-10.1177_23821205261439617 for The Development of a Faculty Development Program Framework for Medical Education by Lise McCoy, Sebastian R. Diaz and S. Dennis Baker in Journal of Medical Education and Curricular Development
Supplemental Material
sj-docx-2-mde-10.1177_23821205261439617, sj-docx-3-mde-10.1177_23821205261439617, and sj-doc-4-mde-10.1177_23821205261439617 - Additional supplemental material for The Development of a Faculty Development Program Framework for Medical Education.
Acknowledgments
The authors wish to thank Amy Hall, EdD, for her early work on designing the first draft of the framework. ChatGPT 5.2 (OpenAI, San Francisco, CA) and Microsoft Copilot.ai were used to assist with editing for clarity and grammar. The authors reviewed and approved all content and take full responsibility for the final manuscript.
Ethical Approval
NYITCOM Institutional Review Board (IRB) approval (protocol #BHS1708), 12.13.21.
Author Contributions
Author LM co-developed the framework, organized the study, conducted focus groups, performed qualitative analysis, and drafted the manuscript. Author SD co-developed the framework, peer-reviewed the qualitative analyses, revised the 30 indicators into an agreement scale, conducted the statistical analysis, and contributed substantially to the manuscript. Author SB co-developed the framework, peer-reviewed the qualitative analyses, and contributed substantially to the clinical education components.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the American Association of Colleges of Osteopathic Medicine (AACOM Grant # G23-03).
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Previous Presentations
30 Quality Indicators of Medical Education Faculty Development Programs. Poster. IAMSE Conference, June 13, 2023. Lise McCoy EdD, Dennis Baker PhD, Sebastian Diaz PhD, and Amy Hall EdD.
Quality Indicators of UME Faculty Development Programs. Workshop. The Generalists in Medical Education, 11.4.23. Lise McCoy.
A Systems Approach to Dynamic Faculty Development Programs in Undergraduate Medical Education. Workshop. AACOM Educating Leaders, 4.17.24. Lise McCoy, Sebastian Diaz, and Dennis Baker.
Data
Data are from focus groups, interviews, and surveys conducted by researchers. The datasets generated and analyzed during this study consist of transcripts and narrative comments that contain identifiable perspectives of faculty development professionals. In accordance with IRB protocols and to protect participant confidentiality, these data cannot be shared.
Supplemental Material
Supplemental material for this article is available online.
References
