Abstract
Objectives:
We aim to examine the h(5) index of U.S. otolaryngology programs to help assess current academic productivity.
Methods:
A total of 116 otolaryngology departments with residency programs were included. Our primary outcome was the h(5) index, calculated cumulatively across the faculty MDs, DOs, and PhDs within each department. Audiologists and clinical adjunct faculty were excluded. The index was calculated over a 5-year period (2015–2019) using Elsevier’s database SCOPUS. Faculty affiliation within SCOPUS was confirmed by cross-referencing department websites. The h(5) indices were then correlated with other publication metrics, including total publications by department and publications in major otolaryngology journals.
Results:
The h(5) index was strongly and positively correlated with other metrics of academic productivity, including total publications and publications in the top 10 otolaryngology journals. Greater variability in the data was noted as the h(5) index increased. Similar trends were observed when the h(5) was compared with the number of residents accepted per year. Rankings of departments by Doximity and US News and World Report were positively correlated with h(5), though these correlations were weaker than the others.
Conclusions:
The h(5) index is a valuable tool for objectively assessing the academic productivity of otolaryngology residency departments, and it is a better indicator of academic productivity than national rankings.
Introduction
Academic productivity is considered a benchmark measure for those considering a future career in research or academia.1–4 The academic productivity of a department, assessed collectively, provides a better picture of a program’s overall scholarly activity than individual faculty metrics alone. Currently, however, there are relatively few objective measures to evaluate academic productivity within the field of otolaryngology.5–7
Departments tend to determine academic productivity through numerous bibliometric measures, such as total number of publications or citation count.8 As stand-alone measures, these parameters fail to differentiate the varying impact of articles and do not speak to the true productivity of an author, as they can be easily self-inflated.9–12 The h-index, on the other hand, measures both the productivity and citation impact of a scholar’s publications. As described by Hirsch,13 who introduced the metric in 2005, the h-index is the number of an author’s articles, h, that have at least h citations each. This concept can be applied collectively to publications originating from a department.14 It remains one of the most objective measures to evaluate both the scientific impact and contribution of an author and/or large-scale department.5,10,15
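The definition can be made concrete with a short sketch. The function below is illustrative only (it is not part of the study’s methods); it computes an h-index from a list of citation counts, whether for a single author or pooled across a department’s publications:

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    # Sort citation counts in descending order, then keep the last 1-based
    # rank at which the citation count is still >= that rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A department-level h(5) would pool the citation counts of all qualifying
# faculty publications from the 5-year window before applying the same rule.
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```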
The aim of this study is to measure the h-index calculated over a 5-year period, termed h(5), of U.S. otolaryngology programs to objectively assess recent academic productivity.
Methods
This study was primarily a data analysis of academic otolaryngology departments with residency programs. U.S. otolaryngology departments, including those in Puerto Rico, were analyzed. The list of U.S. residency programs and the number of first-year residency positions were obtained from the Fellowship and Residency Electronic Interactive Database (FREIDA).
Our primary bibliometric outcome was the h-index calculated over a 5-year period (2015–2019), hence termed h(5). This was obtained using Elsevier’s database SCOPUS. The h(5) was calculated cumulatively for faculty within each institution’s department. Otolaryngology faculty included MD, MBBS, DO, and PhD degrees listed on individual department websites as of 2021. Audiologists, nurse practitioners, speech pathologists, physician associates, and clinical adjunct faculty were excluded from the analysis. If a faculty member was not listed on the individual department website, they were generally excluded from the analysis.
If authors of similar names appeared in the SCOPUS search, department affiliations (e.g., institution or city) on SCOPUS were evaluated and cross-referenced with department websites and the journals in which the faculty member published. If neither criterion aligned, faculty members were excluded in an effort to ensure that the correct otolaryngology faculty member had been included.
The SCOPUS database was used to obtain other publication metrics, including total publications by department and total publications in the top 10 journals (determined by Journal Citation Reports, Otorhinolaryngology category), all within the same time frame (2015–2019). The SCOPUS database sorted and filtered publications to avoid including repeat publications. Doximity’s Research Output ranking for otolaryngology was used to assess each residency program’s research output score. US News and World Report (USNWR) directly provided its 2015–2019 data on “Best Hospitals for Ear, Nose, and Throat”; the average over that span was used in the analysis. Rankings from Doximity and USNWR were reversed so that academic productivity would correlate positively with the highest-rated programs.
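The ranking reversal is simple arithmetic. A hypothetical sketch follows; the helper name and the choice of the worst observed rank as the scale are assumptions, since the article does not specify the exact transformation used:

```python
def reverse_ranks(ranks):
    """Map rank 1 (best-rated program) to the highest score, so that larger
    values correspond to better-rated programs in correlation analyses."""
    worst = max(ranks)  # assumed scale: the lowest-ranked program observed
    return [worst + 1 - r for r in ranks]

print(reverse_ranks([1, 2, 3, 4]))  # [4, 3, 2, 1]
```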
This study was reviewed by the University of Pittsburgh Institutional Review Board and received exemption as it did not involve human subjects.
Statistical analysis
Spearman’s rho was calculated to assess the correlation between variables.16 One outlier, Mass Eye and Ear/Harvard, was excluded from all graphs to better illustrate trends, as it has multiple associated academic programs, including Brigham and Children’s Hospitals. However, it remained included in the overall correlation calculations.
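For reference, a minimal pure-Python sketch of Spearman’s rho, assuming no tied values (tied values would require average ranks, and a real analysis would typically use a statistics package such as scipy.stats.spearmanr):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    Assumes no ties within x or within y."""
    n = len(x)

    def ranks(values):
        # Rank 1 = smallest value; r[i] is the rank of values[i].
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for position, i in enumerate(order, start=1):
            r[i] = position
        return r

    rx, ry = ranks(x), ranks(y)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# Perfectly monotone paired data yields rho = 1.0
print(spearman_rho([3, 1, 4, 2], [30, 10, 40, 20]))  # 1.0
```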
Results
A total of 123 programs were initially gathered from FREIDA. Of these, seven institutions lacked faculty listings and were excluded from the analysis, leaving 116 otolaryngology departments with residency programs. The h(5) indices of the programs were calculated and are listed in Table 1. Correlations across all institutions were calculated. The h(5) had strong, positive correlations with other metrics of academic productivity and with the number of residents. When compared against the Doximity Research Output and USNWR rankings, positive but weaker correlations were observed (Table 2).
Table 1. h(5) indices.
Table 2. Overall correlations (p ≤ 0.001).
The h(5) was significantly correlated with the total number of publications and with total publications in the top 10 journals (Figures 1 and 2). Greater variability in the data was noted at larger h(5) indices. Additionally, a larger h(5) was positively correlated with the number of residents accepted per year, with similar variability (Figure 3). The correlations of h(5) with the Doximity Research Output and USNWR rankings were positive but weaker; here, variability remained consistent as h(5) increased (Figures 4 and 5).

Figure 1. Total publications versus h(5).
Figure 2. Publications in top 10 journals versus h(5).
Figure 3. Number of residents per year versus h(5).
Figure 4. Doximity Research Output versus h(5).
Figure 5. USNWR rank versus h(5).
Discussion
Currently, individuals and departments assess academic productivity using a variety of bibliometric measures and sources, including citation count, total number of publications, Doximity, American Medical Association resources, and USNWR. However, according to a recent study, 56% of respondents believed that Doximity may not be accurate, indicating that these resources come with limitations.17
Doximity is one of the most widely used and accessible sites for healthcare professionals, as it allows individuals to search programs based on reputation, research output, size, and percent who subspecialize.18 The research output score for each program is based on its recent alumni: it is calculated from the collective h-index of publications authored by alumni who graduated within the past 15 years.19 A significant shortcoming of this approach, however, is that it does not reflect the current academic productivity of a department: alumni do not necessarily remain at their training institutions, many practice in nonacademic settings, and a 15-year window is too long to accurately capture present-day program productivity.
Additionally, USNWR rates hospitals based on their clinical care performance. It identifies medical centers in various specialties that are best suited to patients whose illnesses pose unusual challenges due to underlying conditions, procedure difficulty, advanced age, or other medical concerns that increase risk.20 Although these clinical factors are important, they do not reflect research productivity, as exemplified by the weaker correlation between h(5) and USNWR rankings.
In comparison, the h-index carries significant advantages. Not only does it provide insight into an individual author, but it can also be used to assess entire journals or departments.15 Furthermore, unlike other measures of productivity, the h-index is a reproducible measure that remains robust to outliers and is not skewed by a single popular article.21,22 In this study, the h(5) index was correlated with both the total number of departmental publications and publications in top 10 journals. This suggests that the h(5) may be more advantageous than single-number metrics: by combining both output and impact, it provides an objective, robust alternative to existing metrics that capture only one or the other.23
However, this metric also has limitations. For instance, the h-index does not differentiate between types of publications, such as a review versus an original research article.24 Additionally, it does not consider differences in authorship contribution: first and senior authors are weighted equally with all other authors on a publication. Although this assumption may not hold in all cases, no widely used objective measure of academic productivity currently accounts for an author’s individual contribution to a publication.25 Additionally, departments with a higher percentage of research faculty (PhD degrees) may have elevated h(5) values due to publications in journals with greater impact factors. Similarly, h(5) does not account for department size; as seen with our outlier, programs with larger numbers of faculty members may have elevated h(5) values. Finally, it is important to stress that the h(5) calculations in this article depended on accurate representation of faculty and residents on individual department websites.
The h-index and academic productivity are only a few of the many indicators of a department’s academic standing. Other variables that could be considered include non-peer-reviewed publications such as textbooks, extramural research funding (NIH, PCORI, SBIR, etc.), leadership roles in professional societies, courses offered at the institution, presentations at national meetings, and patents. However, this information is not readily available. The academic community would benefit from establishing key metrics that departments could self-report annually to provide greater transparency into scholarly activity.
Overall, the h-index uniquely offers an objective metric to evaluate academic productivity, and its advantages outweigh any potential disadvantages. By allowing evaluation of a department’s academic productivity as a whole, h(5) provides insight into institutions’ current productivity. Although it correlates well with other measures of productivity, it does not suffer from the subjectivity and bias of reputation rankings or influence of clinical factors. It can be updated annually and may thus provide a more current representation than factors with longer time courses.
This study is not without limitations. Due to the disparate research activities of many individuals and possible submissions to broad journals rather than otolaryngology-specific journals, the “publications in top 10 journals” metric may not fully capture this cumulative work; however, otolaryngology journals frequently include basic science work. Furthermore, faculty members may have moved between departments during the 2015–2019 window, and with the methods provided it would be challenging to verify faculty timelines within their departments. In the event of overlap, however, a faculty member’s work would contribute to the overall academic productivity of both the new and the previous department. Lastly, a power analysis for sample size calculation was not performed for this study.
Conclusion
The h(5) offers an objective measure of academic productivity. This metric can be used to provide a current perspective of scholarly activity at academic otolaryngology departments and is easily updated using available data.
Footnotes
Acknowledgements
We would like to acknowledge CTSI at the University of Pittsburgh for their assistance with statistical work.
Previous presentation
This article was presented at the American College of Surgeons’ Virtual Clinical Congress, October 23–27, 2021.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethics approval
Ethical approval for this study was waived by the University of Pittsburgh Institutional Review Board, which granted an exemption as the study did not involve human subjects (IRB #2106003).
