Abstract

From a historical perspective, the journal impact factor (JIF) was first proposed by Eugene Garfield to “eliminate the uncritical citation of fraudulent, incomplete or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers.”1 Citations subsequently found other uses, such as rating the importance of journals, articles, or authors. We agree that alternative citation metrics are available that are journal-, article-, or author-specific.2 Nevertheless, we chose to focus on the JIF in our study because of its universality and comprehensibility, and because none of the newer citation metrics (including altmetrics such as the number of views or downloads) is unequivocally superior to the JIF.3 It is also noteworthy that altmetrics are easier to manipulate than the JIF.4
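For concreteness, the JIF for a journal in year Y, as computed by Clarivate, is the ratio of citations received in year Y to the journal's content from the two preceding years, to the number of citable items the journal published in those years (the symbols below are ours, chosen for illustration):

```latex
\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
```

Here, C_Y(y) denotes citations received in year Y to items the journal published in year y, and N_y denotes the number of citable items (research articles and reviews) published in year y.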
The JIF has limitations. The quality of scientific research suffers when incentives for professional recognition are linked to the need to inflate citation metrics. Clarivate observed in 2021 that up to 13% of citations continue to be self-citations, raising the specter of citation farms and coercive citation.5 The JIF thus creates a self-perpetuating cycle: it encourages the selection of journals for manuscript submission based solely on citation metrics.
In the context of the above, we reemphasize our point that Indian researchers and Indian journals are caught in a vicious cycle of under-referencing of previous research and under-publication of quality research; as we also observed, this has changed little over the decades.6,7 We continue to assert that citing relevant, recent Indian literature in articles written by Indian researchers would help build and maintain thematic continuity in the field, even when the research question or methodology is novel. This continuity of scientific themes and citation, where appropriate, would benefit the producers and consumers of Indian research, ranging from students and clinicians to administrators and policymakers.
Normalized and contextual metrics, including the Source Normalized Impact per Paper (SNIP), which weight citations according to the subject field and the expected number of citations, are useful alternatives to the JIF. However, as noted by Marilyn Strathern, “When a measure becomes a target, it ceases to be a good measure.”8 All research requires evaluation for quality by quantifiable metrics. Our response to performing poorly on a citation metric should not be confined merely to questioning the validity of the metric; it should extend to improving our knowledge and utilization of contemporary regional research. Changing the metric that we look at may not be a sufficient remedy.
