Abstract
This paper responds to the issues raised by Cardinal, Schary, and Kim (2014) regarding a recent study published in Comprehensive Psychology (Knudson, 2013a). The issues raised by Cardinal and coworkers are important and related to the misuse of bibliometrics like the impact factor, but they are also consistent with the data and interpretation in the Knudson (2013a) article. Both articles correctly point out problems with the misuse of bibliometric variables in evaluating journals and the adverse consequences this has for research in Kinesiology and other fields. More research documenting the limitations and appropriate use of bibliometrics in evaluating Kinesiology-related journals, integrated with surveys of scholars defining the field of Kinesiology and its journals, is an important solution to the problems of Kinesiology's identity and impact factor obsession.
Cardinal, et al. (2014) recently outlined several issues related to a study on the impact and prestige of Kinesiology journals (Knudson, 2013a). I appreciate their commentary bringing more attention to this topic because I share their concern about the misuse of bibliometrics like the impact factor and its potential adverse effects on journals and scholarship, as well as on the visibility of Kinesiology scholarship in academia, which has been the impetus for this recent line of my research. Cardinal, et al. (2014) were critical of the methods reported in establishing the sample of journals related to Kinesiology and of how the results of the Knudson (2013a) study might be misused.
This paper responds to their article in the order of the issues they bring forward. Some of the methodological concerns noted by Cardinal and colleagues are unavoidable in any study of journals in an ill-defined, multi-disciplinary field like Kinesiology, given the variety of databases, indexes, subject headings, and bibliometrics available today. This paper also argues that it is inaccurate to criticize the Knudson (2013a) report for potential misuse of its results, given that the article clearly noted the history of misuse of bibliometrics and cited the large body of literature arguing against it. In reality, reasoned and critical application of the Knudson (2013a) results and future research are needed to decrease the obsession with the impact factor and its negative consequences for science.
The Ill-Defined Field of Kinesiology and Kinesiology Journals
While it is fairly well known in North America that the term “Kinesiology” refers to the academic study of human movement and physical activity, there is less consensus on how Kinesiology is implemented within university academic programs, in scholarly research, and in peer-reviewed journals. Knudson (2013a) and Cardinal, et al. (2014) both correctly note that the field of Kinesiology is not uniquely indexed as an academic subject area in major citation databases, so journals related to human movement are listed under numerous other subject headings/categories. The field of Physical Education/Kinesiology in North America has a long history of debates and disagreements about the primary objectives of the discipline, profession, and research (e.g., Newell, 1990; Baker, Pan, & Hardman, 1996; Twietmeyer, 2012). This lack of a unified focus and identity in academe and citation databases, as well as the expanding specializations within Kinesiology initiated by Henry (1964), has resulted in a rather amorphous, general understanding of what Kinesiology faculty and librarians might consider a Kinesiology journal.
Unfortunately, Cardinal, et al. (2014) mistakenly interpreted two brief references to “core journals,” used in the introduction of the Knudson (2013a) report to describe previous research, as the purpose of the study. This was certainly not the explicitly stated purpose of the Knudson (2013a) study; in fact, the discussion of the study cautioned against the data being used as “evidence of a ranking of core journals” in Kinesiology. Some fields, often inter-disciplinary ones, actively strive to establish a core or hierarchy of journal prestige for faculty and program accreditation (Kushkowski & Schrader, 2013). Several of my recent studies on bibliometric properties of Kinesiology journals recommend that future research utilize surveys of Kinesiology faculty (Miranda & Mongeau, 1991; Silverman, Kulinna, & Phillips, 2014) to help better define what Kinesiology scholars would consider important journals in the field (Knudson, 2013b; 2014; in press). The lack of a uniform understanding of Kinesiology in academe and by the major citation databases is, however, an important credibility issue for the field. If Kinesiology scholars cannot come to agreement on a core mission and the journals that define a body of knowledge, how can they expect others outside Kinesiology to understand the field and our value to society? Cardinal, et al. (2014) do cite another of my papers on this point (Knudson, 2013b), so I certainly agree with them that Kinesiology would be better understood in the world if the field as a whole presented a unified mission and a set of journals constituting a unique body of knowledge.
This unified vision is not currently established and recognized in academe, so studies like Knudson (2013a) must select journals based on a general understanding of serials that publish research primarily related to human movement and physical activity. It is unclear what specific quantitative criteria of the kind requested by Cardinal and coworkers could be developed to uniquely classify journals into a canon of core Kinesiology journals. Even if such criteria existed, what percentage of pages or articles focused on human movement would define a journal as Kinesiology? What about studies of human injury issues that use animal models? In short, it would not be possible for reviewers and Kinesiology faculty to universally agree upon the inclusion criteria for a core Kinesiology journal.
Knudson (2013a) used as general criteria the journals previously examined in similar studies of Kinesiology journals and the human movement-related subject categories from three major databases: Journal Citation Reports, SCImago, and Eigenfactor. This sample (n = 82) was larger than the 73-journal “working list” studied by Cardinal and Thomas (2005), which used an equally loosely defined sample. This criticism of Knudson (2013a) is certainly ironic, because Cardinal himself used similar general criteria and considered a smaller sample of journals from only the “sport sciences” category in one database, with four additional journals selected by unspecified criteria. The Knudson (2013a) sample represented the largest number of Kinesiology journals examined in studies on this topic published to date. It also used the phrase “Kinesiology-related” consistently in the text to provide the most comprehensive study of the prestige and impact structure of journals most scholars would logically consider to be from Kinesiology, and not primarily from another field. All the previous studies on citation metrics of Kinesiology journals have general inclusion criteria, note this limitation, and list all the journals studied so the research can be replicated.
These limitations in the understanding of Kinesiology and in the indexing of the field's journals by publishers and databases are the primary causes of the missing-data concern noted by Cardinal, et al. (2014), not design issues in the Knudson (2013a) study. The lack of indexing of one publisher's journal by a competing publisher limits any sample of Kinesiology journals appearing in all three databases to a sub-set of what many faculty would consider all the important Kinesiology journals. Journals with missing citation data are to be expected in a world where bibliometric databases are numerous, compete with other publishers' databases, and index only a small portion of the research and journals in any particular field (Minozzi, Pistotti, & Forini, 2000). This and many other well-known limitations of citation databases (Brown, 2014) are why no single citation metric should be used to evaluate journals. Cardinal, et al. (2014), therefore, present a somewhat contradictory argument in questioning the exclusion of other journals while expecting those additional journals to be listed in all the databases necessary to calculate the composite impact and prestige indexes from several bibliometric variables.
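To make the missing-data constraint concrete: a composite index drawing on all three databases can only be computed for journals indexed in every one of them. The following is a minimal Python sketch with hypothetical journal names and made-up metric values (the actual study standardized each metric before combining; a simple mean is used here only to show how a single missing entry excludes a journal):

```python
# Illustrative only: hypothetical journals and made-up metric values.
# A journal missing from any one database (None) cannot receive a
# composite score built from all three sources.
metrics = {
    #               (JCR IF, SCImago SJR, Eigenfactor AI)
    "Journal A": (3.2, 1.10, 1.40),
    "Journal B": (1.5, 0.60, None),   # not indexed by one database
    "Journal C": (0.8, 0.35, 0.22),
}

def composite(vals):
    """Crude composite: the mean of the available metrics."""
    return sum(vals) / len(vals)

scored = {name: round(composite(vals), 2)
          for name, vals in metrics.items()
          if None not in vals}          # journals with missing data drop out
print(scored)  # Journal B is excluded despite being a relevant journal
```

Any journal absent from even one database drops out of the composite scoring, which is why demanding both broader journal inclusion and complete data in all three databases are competing requirements.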
Mea Culpa to the Journal of the Philosophy of Sport
Cardinal, et al. (2014) were correct in pointing out an error in Knudson (2013a, Table 1) related to the Journal of the Philosophy of Sport. I apologize for this error, which was likely related to a transcription error in journal abbreviations and the unique listing of journals by databases. Words and symbols in journal titles (e.g., the, &) are handled differently by different databases and their search functions. This journal does indeed have complete data from all three databases, so the standardized impact and prestige scores for the Journal of the Philosophy of Sport in Table 1 should have been −1.24 and −0.80, respectively. The message of Knudson (2013a) would be to not consider these standard scores as evidence of poor quality of this journal. The journal is likely a primary source and considered prestigious in the Kinesiology sub-discipline of philosophy of sport and physical activity. The Knudson (2013a) data for this journal would indicate that neither impact nor prestige citation metrics would identify the Journal of the Philosophy of Sport as an “elite” or “luxury journal” given the current climate of impact factor obsession (Alberts, 2013; Sekercioglu, 2013; Casadevall & Fang, 2014). The primary origin of this bias against many Kinesiology journals like the Journal of the Philosophy of Sport is likely differences in citation behavior and citation rates across sub-disciplines (Knudson, 2014, in press) rather than differences in scholarly rigor or quality.
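For readers unfamiliar with standardized scores, the negative values above simply locate a journal relative to the sample mean, not on an absolute quality scale. A minimal Python sketch, using entirely hypothetical impact-factor values rather than any data from the study, illustrates the computation:

```python
# Illustrative only: hypothetical impact-factor values for a small
# sample of journals; none of these numbers come from the study.
impact_factors = {
    "Journal A": 4.1,
    "Journal B": 2.8,
    "Journal C": 1.9,
    "Journal D": 0.9,
    "Journal E": 0.4,
}

values = list(impact_factors.values())
mean = sum(values) / len(values)
# Sample standard deviation (n - 1 in the denominator).
sd = (sum((v - mean) ** 2 for v in values) / (len(values) - 1)) ** 0.5

# Standardized (z) scores: distance from the sample mean in SD units.
z_scores = {name: round((v - mean) / sd, 2)
            for name, v in impact_factors.items()}
print(z_scores)
```

A negative z-score only means that a journal sits below this particular sample's mean; it says nothing about rigor, and in a sub-discipline with low overall citation rates the whole distribution shifts downward.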
Selecting Journals as Publication Outlets
Cardinal, et al. (2014) also incorrectly imply that Knudson (2013a) “concluded” that the impact and prestige scores in this study should be used to select journals for publication in Kinesiology. The conclusions in the last paragraph and abstract of Knudson (2013a) were limited to noting that common bibliometrics of Kinesiology journals support the two-factor (impact and prestige) structure observed in other fields and that there appear to be Kinesiology journals serving more of one role than the other. Cardinal, et al. may be referring to an application comment in the paragraph warning about misuse of bibliometric variables, which says that a “combination” of these variables may be helpful for some scholars to consider in helping “select kinesiology-related journals that align with the purpose of their research.” Despite the lost time and citations that could result from an unsuccessful attempt to publish in an elite or luxury journal (Sekercioglu, 2013), some young scholars might still choose this strategy, given that they may be evaluated by senior faculty and administrators who favor publications in journals with high impact factors. An exercise biochemist with senior colleagues obsessed with impact factors might be wise to submit first to high impact factor journals, also because in chemistry the impact factor may positively bias subsequent citations to the research (Didegah & Thelwall, 2013).
Cardinal, et al. (2014) go on to acknowledge that this concern about journal selection by authors is inconsistent with my previous work; however, I hasten to point out that this concern is also inconsistent with a careful reading of the Knudson (2013a) report. I too share the concern that some might choose to devalue applied journals or establish a hierarchy of Kinesiology journals through the misuse of bibliometrics (Alford, 2012; Cardinal, 2013).
Numerous scholars and editors have pointed out the biases and limitations of bibliometrics like the impact factor in evaluating journals (e.g., MacRoberts & MacRoberts, 1989; Opthof, 1997; Seglen, 1997; Phelan, 1999; Kurmis, 2003; Cameron, 2005; Garfield, 2006; Adler & Harzing, 2009; Vanclay, 2009; Brumback, 2012). It is also well known that the highly skewed nature of citations to articles in journals makes a journal's impact factor virtually unrelated to the citations of individual articles in that journal (Seglen, 1997; Starbuck, 2005; Garfield, 2006). Despite all this evidence, I contend that research documenting these limitations (Knudson & Chow, 2008; Knudson, 2013a, 2014) in Kinesiology, along with scholar surveys, is still needed because the self-interest of powerful scholars and journals seems to perpetuate the misuse of bibliometric variables (Casadevall & Fang, 2014). Scholars interested in strategies to limit the abuse of bibliometrics should consider the recommendations for individual action proposed by the San Francisco Declaration on Research Assessment (DORA) [http://am.ascb.org/dora/] or Casadevall and Fang (2014). Alberts (2013) argues that leaders in science need to accept responsibility for making substantive evaluations of individual scientific contributions and not rely on journal editors and the impact factors of journals. Alberts (2013) and others (Adler & Harzing, 2009; Brumback, 2012; Brembs, Button, & Munafo, 2013) have nicely summarized how the misuse of bibliometric variables harms science and the individual scholars evaluated using these metrics rather than the merit of their work.
English Language Bias
Cardinal, et al. (2014) also note that the Knudson (2013a) study was limited by the inclusion of only English-language journals and by the greater prominence of “Kinesiology” in North America. The English-language bias is another well-known limitation of most citation databases (Brown, 2014), and it was explicitly addressed in the limitations of the Knudson study. The acknowledgment of a North American bias was admittedly more implicit, but both of these are weaknesses of the available citation databases, not of the design of the Knudson (2013a) study.
Kinesiology Publication Outlets
Whether or not the best scientific journal outlets are biased toward the English language, the field of Kinesiology would benefit from a unified understanding of its unique academic niche and of the journals that establish that body of knowledge. The identity of the field of Kinesiology, while generally understood within the field throughout the world, is often misunderstood by the public and called something else in other countries. Sport Science, Exercise Science, Physical Activity, and Human Movement Studies are just some of the English-language synonyms for Kinesiology department names (Baker, et al., 1996). A survey of scholar perceptions of important journals in Kinesiology and its synonyms, including international scholars and journals published in other languages, would be helpful in potentially establishing a unified academic identity for the field. This lack of identity led me to use the phrase “Kinesiology-related” journals in some of my recent research on this topic (Knudson, 2013a, 2013b, 2014). Expanding knowledge of the bibliometrics of Kinesiology-related journals, together with scholars' identification and ratings of those journals, is important. Following the lead of academic librarians who use numerous indicators of journal quality (Nisonger, 1998) would improve our understanding of the journals that establish the body of knowledge in Kinesiology.
Authors of Kinesiology-related research seeking to publish their work in a peer-reviewed journal have numerous options. Traditionally, authors have been encouraged to select journals with missions and readership aligned with the topic of the study, high visibility through indexing, and perceived prestige. Authors also often consider pragmatic issues such as review time and page charges when selecting journals. Given the improved electronic indexing of published papers, authors might also select journals based on cost and access to potential readers. There is preliminary evidence that access to papers as full text on the Internet (Murali, Murali, Auethavekiat, Erwin, Mandrekar, Manek, & Ghosh, 2004) influences readership and citations. Articles in open access journals also receive citations similar to those in traditional, subscription journals (Bjork & Solomon, 2012; Solomon, Laasko, & Bjork, 2013), so there are even more journals to which authors can submit their work.
With so many specialized sub-disciplinary and multi-disciplinary journals to choose from, some have sought to establish a hierarchy of journal influence in a field, often using the impact factor. These hierarchies, or attempts to formalize a core set of journals, are fraught with problems given that journal influence is multi-faceted, with numerous differences that make any single bibliometric variable inadequate and inaccurate (Seglen, 1997; Kurmis, 2003; Cameron, 2005; Garfield, 2006; Coleman, 2007; Adler & Harzing, 2009; Vanclay, 2009; Brumback, 2012). Knudson (2013a) provided the first evidence that the two unique factors (prestige and impact) observed among dozens of bibliometric variables in other fields (Bollen, Van de Sompel, Hagberg, & Chute, 2009; Leydesdorff, 2009; Zhou, Lu, & Li, 2012) were also identifiable in four bibliometrics from Kinesiology journals. This is a first step in helping define the influence of journals related to Kinesiology, but more needs to be done.
The factors that determine journal influence should be better identified, since there is poor agreement on these factors and terminology. Martin (1996) proposed that basic research be evaluated by three factors: the quality of the research, its potential influence, and its impact or actual influence on the field. West and Rich (2012) hypothesized scholarly journal influence had three factors: rigor, prestige, and impact. Traditionally, scientific prestige tends to be concentrated in a small minority of journals in most fields (Gonzalez-Pereira, Guerrero-Bote, & Moya-Anegon, 2010).
Unfortunately, many consider the impact factor a surrogate measure of journal prestige, when it provides primarily information about impact or usage (MacRoberts & MacRoberts, 1989; Bollen, et al., 2009; Leydesdorff, 2009; Zhou, et al., 2012). With increasing electronic access to individual papers, a shrinking percentage of highly cited papers is coming from these top or elite journals (Lozano, Lariviere, & Gingras, 2012; Lariviere, Lozano, & Gingras, 2014). Clearly, more nuanced use of several citation metrics within specific fields like Kinesiology, integrated with other data like scholar ratings, will provide important information for Kinesiology faculty selecting journals to which to submit their work. Perhaps extended research on journal influence in Kinesiology might identify a core set of Kinesiology-related journals, with auxiliary sets of other closely related disciplinary and professional journals pertinent to Kinesiology. This would be valuable to the identity of Kinesiology and would avoid the negative consequences of the specificity of a hierarchy or pecking order of Kinesiology journals.
Cardinal, et al. (2014) correctly note that research is sometimes published in journals that do not appear to align with the academic field that seems most logical based on its content. This is especially true of cross-disciplinary fields (Black, 2012) like Kinesiology. Sometimes independent and multi-disciplinary journals are excellent publication outlets for research and debate. I appreciate the flexibility of some journals, because many journals in Kinesiology have restrictive publication policies that do not consider interdisciplinary or controversial research, or even debate important to the field. It is possible, for example, that several of my own studies on this topic were not allowed to be reviewed by some Kinesiology journal editors because journal influence was a topic that did not fit into one of their sub-disciplinary silos, or because of an internal strategy to publish articles with a high probability of future citations supporting the journal's impact factor.
Summary
Overall, I agree with many of the concerns expressed by Cardinal, et al. (2014) about the dangers that the misuse of bibliometrics poses for Kinesiology journals and research. These concerns are consistent with the citations, data, and interpretation in the Knudson (2013a) article. More research documenting the limitations and appropriate use of bibliometrics in evaluating Kinesiology journals, integrated with surveys of scholars defining the field and its journals, is an important solution to the problems associated with an obsession with the impact factor. With more work like Knudson (2013a) and surveys similar to Silverman, Kulinna, and Phillips (2014) in the future, perhaps more university administrators and faculty will be familiar with the limitations of bibliometrics, as well as the academic niche filled by Kinesiology, and will seek to publish in Kinesiology journals when their work is related to human movement/physical activity.
