Abstract
This guide for faculty presents the data sources, online tools, and citation metrics that are most widely used to evaluate the quality, scholarly impact, or reputation of business and economics journals. The guide, which is distinctive in its subject emphasis and its intended audience, may be useful to scholars, departments, and committees that seek to identify appropriate publication outlets, to evaluate or demonstrate the impact of particular research contributions, or to distinguish between legitimate and predatory journals. The topics and resources covered include the multiple dimensions of journal quality, journal rankings based on citation impact (e.g., Scopus, Web of Science, Google Scholar Metrics, Cabell’s Journalytics, RePEc), journal ratings based on expert opinion (e.g., Australian Business Deans Council, Chartered Association of Business Schools, Harzing’s Journal Quality List, Excellence in Research for Australia), other means of assessing journal quality (e.g., Directory of Open Access Journals, Beall’s List, Cabell’s Predatory Reports), and resources for gauging the citation impact of particular articles.
Introduction
Academic libraries are increasingly called upon to develop systems and services that directly enhance the capabilities of their parent institutions (Franklin, 2012; Herold, 2014; Miller, 2014; Walters, 2016a). In particular, library personnel now contribute to all phases of the scholarly communication process, from building and maintaining data archives to managing institutional participation in Open Access initiatives (Auckland, 2012; Sewell and Kingsley, 2017; Si et al., 2019).
At Manhattan College, the librarians have developed special expertise in the evaluation of scholarly journals and publishers. Individual faculty, departments, research groups, and committees—in particular, the tenure/promotion committee—have sought our help when identifying appropriate publication outlets, assessing or maximizing the impact of their research, presenting their contributions in a favorable light, and determining whether particular publications are legitimate or predatory. For several years, we have offered information and advice about articles, journals, and publishers, mainly within the framework of our reference and consultation services. More recently, we have intensified our efforts to provide faculty with the resources, tools, and knowledge they need to conduct their own evaluations.
More than a dozen practical guides to journal quality assessment have appeared in the past decade (Belter, 2018; Brantley et al., 2017; Brown, 2014; Corrall et al., 2013; Cox et al., 2019; Kennan et al., 2014; Leiss and Gregory, 2016; Suiter and Moulaison, 2015; Tattersall, 2017; Thuna and King, 2017; Vinyard and Colvin, 2018; Walters, 2016b, 2017a, 2017b; White, 2016). However, none focus specifically on the needs of business faculty, and all are intended for librarians, researchers in fields such as information science, or faculty with a strong interest in bibliometric issues. In contrast, this guide provides a general overview for faculty and graduate students in business, economics, and related fields. It may be especially useful as a source of content for presentations, handouts, slideshows, and web pages.
The guide covers general principles such as the multidimensional nature of journal quality (Walters, 2022), the advantages and disadvantages of particular information sources, and the interpretation of the most common indicators of scholarly impact. The version used at Manhattan College includes additional information that is specific to the College, such as where to find particular databases and how to gain access to resources held by other local universities.
Context
Six dimensions of journal quality
1. Editors’ and publishers’ intentions, legitimate or predatory.
2. Adherence to established norms of peer review (e.g., anonymous review, use of expert reviewers, absence of bias in reviewer selection, adequate time for review, reasonable acceptance rate, and reviews that are intended to improve the paper through revision).
3. Adherence to norms of scholarly publishing other than peer review (e.g., transparency, economic sustainability, provisions for long-term preservation of content, reasonable fees, professionalism in presentation and web design, and a web interface that facilitates discovery and access).
4. Scholarly quality, as assessed by expert evaluators.
5. Impact on subsequent scholarship (e.g., number of times cited, outlets in which the journal is cited, rate at which citations accrue, multidisciplinary impact, and extent to which theories/perspectives/methods introduced in the journal are incorporated into later work).
6. Impact on teaching and practice (e.g., citations in textbooks, citations in students’ papers, inclusion of articles in course syllabi and reading lists, and influence on professional norms and standards).
Three key points
1. Dimensions 4, 5, and 6 really require an assessment of the individual articles within each journal.
2. The quality of a journal is not a good proxy for the quality of any particular article in the journal. There is great variation in quality within each journal, especially with regard to dimensions 4 and 5.
3. Assessments of journal quality are important, nonetheless, since assessments of individual articles are not always feasible and can be heavily influenced by individual and institutional biases.
Journal rankings based on citation impact
Six key points
1. Citation rates vary greatly among fields or even among subfields, so a low rate within one discipline may be a high rate within another. Even though some citation metrics claim to adjust for these inter-field differences, none of them do it well.
2. Because of inter-field differences, and because citation metrics are not always intuitively meaningful, it is usual to report the rank or percentile score of a journal rather than the raw score—e.g., “ranked 388th of 794 journals (43rd percentile) in the Scopus
3. Because Scopus includes more journals than Journal Citation Reports (JCR), the relative rank of any journal will be higher (better) in Scopus than in JCR. For instance,
4. Many journals appear in multiple subject categories, and a particular journal will normally have a different rank and percentile score within each category.
5. If you work in a specialized or heterodox research area, you may want to highlight your journal’s position within a category other than those used by the citation databases—e.g., “ranked third among the top five economic history journals identified by [author], based on Scopus CiteScore.”
6.
Scopus
1. The Scopus citation database includes more than 40,000 journals—roughly speaking, the top two-thirds of the journals in each subject area—including nearly 3000 business and economics journals.
2. Journal rankings based on Scopus data are available from three different web sites: Scopus Sources, CWTS Journal Indicators, and Scimago Journal & Country Rank. Each is freely accessible online.
3. Although the three web sites all use the same underlying data, they differ in their citation metrics, their subject categories, and their update frequency. Scopus Sources is the most commonly used of the three.
Scopus Sources
Within Scopus Sources (Elsevier, 2022b), the main size-independent citation metric is CiteScore. The 2020 CiteScore is the number of citations received in 2017–2020 to articles published in 2017–2020 divided by the number of articles published in 2017–2020.
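The CiteScore arithmetic can be illustrated with a toy calculation. The counts below are hypothetical, not drawn from any real journal:

```python
# CiteScore 2020 = citations received in 2017-2020 to items published
# in 2017-2020, divided by the number of items published in 2017-2020.

def citescore(citations_in_window: int, articles_in_window: int) -> float:
    """Four-year citation total divided by four-year publication count."""
    return citations_in_window / articles_in_window

# Suppose a journal published 400 articles in 2017-2020, and those
# articles received 1,000 citations during the same four years.
print(citescore(1000, 400))  # 2.5
```

Because the citation window and the publication window are the same four years, CiteScore rewards journals whose articles are cited soon after publication.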
To see all the relevant subject categories, do five separate searches, for
Although the full Scopus database, with bibliographic entries for individual articles, is available only by subscription, the free version of Scopus Sources is identical to the journal listing available within the subscription database.
CWTS Journal Indicators
Within CWTS Journal Indicators (Centre for Science and Technology Studies, Leiden University, 2022), the most straightforward size-independent citation metric is Impact per Publication (IPP). The 2020 IPP value is the number of citations received in 2020 to articles published in 2017–2020 divided by the number of articles published in 2017–2020.
The most relevant subject categories are
Scimago Journal & Country Rank
Within Scimago (SCImago Lab, 2022b), the most straightforward size-independent citation metrics are the three
The most relevant subject categories are
Web of Science
1. The Web of Science Core Collection includes Science Citation Index, Social Sciences Citation Index, and Arts & Humanities Citation Index, but the three indexes are often regarded as a single database.
2. The Core Collection includes about 21,000 journals—roughly speaking, the top one-third of the journals in each subject area. The arts and humanities are not covered as extensively as the natural sciences and the social sciences, however.
3. Journal rankings based on Web of Science data are presented within Journal Citation Reports (JCR) and at the Eigenfactor.org site. Although JCR requires a subscription, Eigenfactor.org is freely accessible online.
4. Although JCR and Eigenfactor.org use the same underlying data, they differ in their citation metrics and their update frequency.
Journal Citation Reports
Within JCR, the main size-independent citation metric is Impact Factor (IF), also called Journal Impact Factor (JIF). The 2020 IF is the number of citations received in 2020 to articles published in 2018 and 2019 divided by the number of articles published in 2018 and 2019. The Five-Year Impact Factor (5IF) is similar, but it is based on citations received in 2020 to articles published in 2015–2019.
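The same ratio underlies both metrics; only the publication window changes. A minimal sketch, using hypothetical counts:

```python
# 2020 IF:  citations received in 2020 to articles published in 2018-2019,
#           divided by the number of articles published in 2018-2019.
# 2020 5IF: the same ratio, but with a 2015-2019 publication window.

def impact_factor(cites_to_window: int, articles_in_window: int) -> float:
    """Citations in the census year divided by articles in the window."""
    return cites_to_window / articles_in_window

# Hypothetical journal: 150 articles in 2018-2019 drew 300 citations
# in 2020; 380 articles in 2015-2019 drew 700 citations in 2020.
print(impact_factor(300, 150))            # 2.0  (2020 IF)
print(round(impact_factor(700, 380), 2))  # 1.84 (2020 5IF)
```

Note that IF uses a one-year citation window, whereas CiteScore uses a four-year window; the two metrics are therefore not directly comparable.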
The broad
Eigenfactor.org
Within Eigenfactor.org (2022), the main size-independent citation metric is Article Influence Score (AI). AI is similar to 5IF but attempts to control for inter-field differences in citation rates. Eigenfactor.org uses the same subject categories as JCR.
Google Scholar Metrics
Google Scholar is a good source of data on individual articles, but it presents journal-level citation data only for the top journals in each field or subfield. Moreover, both of its journal metrics, h5-index and h5-median, are size-dependent; neither is appropriate for evaluating the contributions of particular individuals or departments.
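Both metrics can be reproduced from a list of per-article citation counts, applying the standard definitions: the h5-index is the largest number h such that h articles published in the past five years have at least h citations each, and the h5-median is the median citation count of those h articles. The data below are hypothetical:

```python
import statistics

def h5_index(citations: list[int]) -> int:
    """Largest h such that h articles have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def h5_median(citations: list[int]) -> float:
    """Median citation count of the articles in the h5 core."""
    counts = sorted(citations, reverse=True)
    return statistics.median(counts[: h5_index(counts)])

# Hypothetical citation counts for a journal's last five years of articles.
cites = [25, 19, 12, 9, 7, 4, 2, 1]
print(h5_index(cites))   # 5  (five articles with at least 5 citations each)
print(h5_median(cites))  # 12 (median of 25, 19, 12, 9, 7)
```

The example also shows why these metrics are size-dependent: a journal that publishes more articles has more chances to accumulate highly cited papers, so a larger h5-index may reflect volume rather than average quality.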
Cabell’s Journalytics
Cabell’s Journalytics, available by subscription, ranks 11,000 journals in 18 fields, including business. The Journalytics subject categories are narrowly defined, so some important journals are omitted; exclusion from Cabell’s Journalytics should therefore not be regarded negatively. The selection criteria are available online (Cabell’s International, 2022).
In Journalytics, the two main indicators of journal quality are Cabell’s Classification Index (CCI) and Difficulty of Acceptance (DA). CCI is the average number of citations per article over a three-year period, relative to the other journals in the same subject category. (Cabell’s states that CCI is expressed as a z-score, but they actually appear to use percentile ranks.) Journals included in multiple categories will have more than one CCI. Journals with insufficient citation data are designated either
DA indicates the percentage of articles that were written by authors at the top research institutions. It is intended to avoid the problems associated with acceptance rates, which may be calculated in several different ways and do not account for major variations in the extent of self-selection among authors.
RePEc Aggregate Rankings for Journals
Along with Social Science Research Network (SSRN) and arXiv, RePEc is one of three major online archives of scholarly papers (preprints, working papers, and manuscript versions of journal articles) in economics and related fields. The RePEc site (Federal Reserve Bank of St Louis, 2022) ranks journals and other publication outlets based solely on the documents submitted to RePEc by authors or their institutions. Several citation metrics are presented, including
RePEc includes data for all the sources represented in the archive—nearly 6000 journals and other serials (e.g., working paper series). Because the rankings include all the sources in which one or more of the submitted papers appeared, most of the listed sources have very low impact. For that reason, a journal at the 50th percentile in RePEc will have a much lower percentile rank in Scopus or Web of Science.
Journal ratings based on expert opinion
Some well-known journal ratings are based not on citation data, but on surveys that ask respondents about their opinions, choices, or hypothetical behaviors. The central construct is sometimes “quality,” but it can also be more specific: reputation, perceived impact, importance for research, importance for teaching, or value for promotion and tenure. Ratings based on expert opinion are common not only in business but also in the arts and humanities, since (a) many articles in the humanities appeal to relatively small audiences, so high-quality scholarship may not be highly cited; and (b) the major citation databases provide relatively poor coverage of the arts and humanities.
Only the most comprehensive ratings are listed here. Rankings of journals in specialized subfields of business and economics often appear as journal articles or conference papers, but most of them include no more than a few dozen journals.
Australian Business Deans Council
The most comprehensive expert ratings are those of the Australian Business Deans Council (ABDC) (2019b). In the most recent round of evaluations, international panels of subject experts rated nearly 2700 journals in 16 subfields of business:

1501 Accounting
1502 Finance, including actuarial studies
1503 Business and management
1504 Commercial services
0104 Statistics
1505 Marketing
1506 Tourism
1507 Transportation and freight services
180105 Commercial and contract law
180125 Taxation law
1599 Other fields of business
0806 Information systems
1401 Economic theory
1402 Applied economics
1403 Econometrics
1499 Other economics
About 7% of the journals are rated A* (highest), 24% A, 32% B, and 37% C. The ratings, last updated in 2019, are freely accessible online, along with a comprehensive guide to the ABDC methods and data (Australian Business Deans Council, 2019a).
Chartered Association of Business Schools
The ratings of the Chartered Association of Business Schools (CABS or ABS) (2021a) are also highly regarded. In the most recent review cycle, an international panel of 53 experts rated more than 1700 journals in 22 subfields of business, assigning ratings of 4 (highest; 8% of the journals), 3 (18%), 2 (32%), or 1 (41%). Although the evaluators were provided with a range of citation data, those data were not formally incorporated into the ratings. The evaluators consulted with professional societies about particular journals, and they were also given opportunities to discuss the journals rather than rating them independently.
The CABS ratings were last updated in 2021. They are freely accessible online, but registration is required. The interpretive guide includes descriptions of the five rating categories (Chartered Association of Business Schools, 2021b).
Harzing’s Journal Quality List
Anne-Wil Harzing of Middlesex University presents journal rankings from nine different organizations, along with selected CiteScore data, in her Journal Quality List (Harzing, 2021). The List includes more than 950 journals in 16 subfields of business, broadly defined. It does not include all the journals rated by ABDC and CABS, however, since it excludes the lowest-rated journals as well as those rated by just a few of the nine organizations. Harzing’s List is useful for comparing the ratings assigned by different groups of experts, and it is distinctive due to its inclusion of ratings from non-Anglophone countries. The List is updated every few months.
Excellence in Research for Australia
Excellence in Research for Australia (ERA) provides the most comprehensive set of multidisciplinary journal ratings based on expert opinion. The ratings, assigned by international panels of subject experts, were last updated in 2010 but may still be useful due to the large number of journals included (nearly 21,000). About 5% of the journals are rated A* (highest), 14% A, 28% B, and 51% C; 1% are not rated. The 2010 ERA journal ratings are available online (Open Australia Foundation, 2014), along with the field of research (FoR) codes and additional information about the project (Australian Bureau of Statistics, 2008; Australian Research Council, 2015).
Other means of assessing journal quality
Journals excluded from the main citation databases and journal rankings are not necessarily of lower quality. They may be recently established, of modest citation impact but important for teaching or practice, or published by small organizations that are not fully integrated into the scholarly communication system. They may also have other characteristics, such as unpredictable or delayed publication schedules, that make them ineligible for inclusion in the major citation databases.
Open Access (OA) journals are especially likely to be excluded from some rankings. OA journals, which make all their content freely accessible online, are funded through donations, endowments, and article processing charges (APCs) rather than subscription fees. The APCs are generally paid by the authors themselves, their external funding agencies, or (rarely) their universities.
Most scholarly publishers have at least a few OA journals, and some publishers are exclusively OA. OA journals can be found at all levels of the impact/reputation hierarchy, and some—such as the
Evaluation criteria
1. Reputation of the publisher or the sponsoring organization; comments found online.
2. Publisher’s for-profit/nonprofit status and financial characteristics (Companies House, 2022; Internal Revenue Service, 2022).
3. Other journals from the same publisher. Predatory publishers sometimes have just one or two journals in each of many unrelated fields.
4. Citation impact of particular articles in the journal, as shown in Google Scholar. Use the Advanced Search interface.
5. Index coverage: inclusion of bibliographic records (not necessarily full text) in databases such as ABI/INFORM, EconLit, and EBSCO Business Source. Search the databases or see the title lists that are available online (e.g., American Economic Association, 2022; EBSCO Information Services, 2022; ProQuest, 2022).
7. Affiliations and reputations of editors and editorial board members; presence or absence of contact information for key personnel.
8. Presence or absence of other information that is normally found on journal web sites.
9. Extent to which the instructions to authors and the peer review process adhere to generally accepted standards.
10. Web site appearance and style. Keep in mind, however, that legitimate publishers in some countries may not adhere to conventional U.S. standards of presentation.
11. False or misleading information about index coverage or citation metrics.
12. For OA journals, inclusion in the Directory of Open Access Journals, Beall’s List, and Cabell’s Predatory Reports.
Directory of Open Access Journals
DOAJ, the Directory of Open Access Journals (Directory of Open Access Journals, 2022a, 2022b), includes OA journals that meet stated criteria for legitimacy. Absence from DOAJ should not be regarded as a negative factor, however, since not all good OA journals are included.
Beall’s List
Beall’s List (Beall, 2015, 2021) includes the publishers and journals that have met the “predatory” criteria set forth by Jeffrey Beall, a librarian at the University of Colorado at Denver. The main list identifies predatory
The Beall’s List criteria focus exclusively on two of the six dimensions of journal quality (dimensions 2 and 3), and there is no mechanism for removing journals from the List. A publisher’s presence on Beall’s List may therefore reflect the characteristics of its web site more than a decade ago. (Only one publisher, MDPI, has ever successfully challenged its placement on the List.) Beall’s List should not be accepted by itself as evidence of predatory status. Nonetheless, most authors avoid publishers that have been publicly labeled as predatory, regardless of their actual quality.
Cabell’s Predatory Reports
Cabell’s Predatory Reports is much like Beall’s List, and it uses similar criteria to identify predatory journals (Cabell’s International, 2019). It does have two advantages over Beall’s List, however. First, since Cabell’s requires a subscription, the publisher may have greater resources to devote to the evaluation process. Second, there is a systematic appeals process by which journals may challenge their placement on the list.
The criteria for exclusion from Cabell’s Predatory Reports are different from the criteria for inclusion in Cabell’s Journalytics, so some business journals will appear on neither list.
Journal characteristics that are less useful as indicators of quality
For OA journals, the amount of the APC payment is not a good indicator of a journal’s quality. Several studies have shown that predatory journals actually charge lower fees than legitimate OA journals.
Acceptance rates can be misleading as indicators of selectivity, since (a) they do not account for the characteristics of the authors who submit to the journal (self-selection) and (b) there is no standard method of calculating acceptance rates (e.g., whether a revised paper counts as a new submission). The calculation methods used by different publishers or journals can produce dramatically different results.
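A toy calculation, using hypothetical submission counts, shows how the choice of method changes the reported rate:

```python
# Hypothetical one-year figures for a journal. Whether a revised paper
# counts as a new submission is one of the choices that varies across
# publishers and can shift the reported acceptance rate.
new_submissions = 400
revised_resubmissions = 100  # revisions of papers submitted earlier
acceptances = 80

# Method A: each revised paper counts as a separate submission.
rate_a = acceptances / (new_submissions + revised_resubmissions)

# Method B: only first-time submissions count in the denominator.
rate_b = acceptances / new_submissions

print(f"{rate_a:.0%}")  # 16%
print(f"{rate_b:.0%}")  # 20%
```

The same journal can thus advertise noticeably different selectivity depending on a bookkeeping convention, before self-selection among authors is even considered.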
Citation impact of particular articles
As noted earlier, citation impact varies substantially among the articles within each journal. Citation counts for individual articles are therefore often useful, at least for the articles published more than one or two years ago—those that have had time in which to be cited.
Article-level citation data are available from Scopus, Web of Science, and Google Scholar. However, many legitimate journals are excluded from Scopus and Web of Science, both of which count just the citations that have appeared in the journals they cover. Google Scholar provides more complete citation counts, since (a) it covers all the scholarly publications that the search mechanism can find online, including some books, preprints, working papers, etc.; and (b) it counts all the citations that have appeared in any of those publications.
Although the citation counts reported by the three sources are different in magnitude, they are strongly correlated with each other. Google Scholar does report relatively high counts for teaching- and practice-oriented papers, however, due to its broader coverage of those sources.
Journal finders
The most appropriate publication outlets for a particular paper can be identified through publishers’ web sites, bibliographic databases, and resources such as the Directory of Open Access Journals. Moreover, the larger commercial publishers have developed journal finders to help authors identify relevant journals based on the titles and abstracts of their papers.
Many of the journal finder web sites provide metrics such as CiteScore, Impact Factor, acceptance rate, average time from submission to first decision, and average time from acceptance to publication. Most are not designed for journal-name searches, but you may be able to get information for a particular journal by entering the title and abstract of a paper published in the journal.
Journal finders of the major commercial publishers
1. Elsevier (2022a) JournalFinder
2. Springer Nature (2020) Journal Suggester
3. Taylor & Francis (2022) Journal Suggester (beta version)
4. Wiley (2022) Journal Finder (beta version)
Other journal finders
The Edanz (2022) Journal Selector appears to include just the major commercial publishers. The Enago Open Access Journal Finder (Enago/Crimson Interactive, 2022) requires an e-mail address, although a false address will work.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
