Qualities, biases and use of journal rankings to measure research quality
Black Cat Blog: Issues in academia, research, & publishing
Lydia Barza, PhD
Journal rankings are highly contested in academia. Many departments use them to determine faculty promotion and tenure, under the assumption that publishing in “top-tier”, “A-grade” journals is a measure of research quality. The logic follows that if you are successful in publishing in these journals, then your work is of superior quality. A journal’s reputation is based on a number of factors and a high ranking comes with the assumption that the journal only accepts research of the highest merit.
A fair system for assessing journal quality would, in itself, be useful to academics and consumers alike. With the rise of predatory journals and conferences, some kind of quality check for journals is certainly warranted.
Let’s briefly look at what the research says about how journal rankings are determined and what major biases are documented regarding this system. Others have looked at this issue from a purely mathematical perspective, either criticizing or affirming the ranking algorithms. My perspective centers on a review of relevant research on the logic behind the methods used and their impact on scholars and their work.
Spoiler alert… Articles that contribute to knowledge can be found in BOTH high- and low-prestige journals.
Qualities of “A” Journals
Almost all journal ranking systems are citation based. Citations or references to an author’s work are considered a major factor in determining the impact of that work. When a researcher’s work is “trending”, it is cited frequently in other academics’ work. It’s like being retweeted, liked, and shared - for nerds.
Journal prestige is the greatest factor influencing citation rates (Singh, Haddad & Chow, 2007). Top-tier journals are more frequently cited, rendering it a self-fulfilling prophecy. In fact, the more a paper is cited, the more likely it is to be cited again (Macdonald & Kam, 2007). This type of “persuasive” citation may account for up to 40% of citations and is based primarily on the reputation of the journal or scholar rather than on the work’s merit (Bornmann & Daniel, 2008). It has also been suggested that some editors may favor articles that they predict are more likely to be cited (Adler & Harzing, 2009).
A few highly cited papers can significantly affect a journal’s ranking. For example, about 75% of papers in the journals Nature and Science receive fewer citations than those journals’ impact factors. Because the impact factor is an average of citations to recently published articles, a journal’s ranking may indicate the overall quality of the journal (in a broad sense) but says nothing about the quality of individual papers.
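The arithmetic behind this skew is easy to see. The standard two-year impact factor is a simple mean: citations received in a given year to articles the journal published in the previous two years, divided by the number of those articles. A minimal sketch, using invented citation counts, shows how a couple of heavily cited papers pull that mean above what most papers in the journal actually achieve:

```python
# Two-year impact factor: citations this year to articles published in the
# previous two years, divided by the number of those articles.
# The citation counts below are hypothetical, for illustration only.
citations = [120, 85, 6, 4, 3, 2, 2, 1, 1, 0]  # one count per article

impact_factor = sum(citations) / len(citations)
below_average = sum(1 for c in citations if c < impact_factor)

print(f"impact factor: {impact_factor:.1f}")  # 22.4
print(f"{below_average} of {len(citations)} articles are cited below it")  # 8 of 10
```

Here eight of the ten hypothetical articles sit below the journal’s impact factor, mirroring the roughly 75% figure reported for Nature and Science: the mean is driven by a few outliers, not by the typical paper.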
Despite the fact that articles in top-tier journals are cited more frequently, several studies have shown that many high quality papers are published in medium and even low quality journals (Oswald, 2007; Singh, Haddad & Chow, 2007; Starbuck, 2005). In fact, one review concluded that “experiments reported in high-ranking journals are no more methodologically sound than those published in other journals” (Brembs, 2018). In another study of over 1,000 manuscripts submitted to three elite medical journals, 14 of the most highly cited articles had been rejected by those journals (Siler, Lee & Bero, 2015). Further, 12 of the 14 were rejected by editors and never sent for peer review. This shows that editors and reviewers of top journals are not always adept at spotting the diamonds in the rough and that other factors are at play.
“because even articles in the lowest quintile of journals may actually belong among the best 20% written, it makes no sense to dismiss these articles as valueless merely based on where they appeared” (Starbuck, 2005, p. 195)
A journal is considered top-tier if it has a high rejection rate. This rate is like a badge of honor for some journals, which like to tout the fact that only, say, 10% of submissions ever make it through their peer review process. Indeed, no reputable journal would accept every paper that is submitted. However, “the more authors are encouraged to submit their papers to quality journals, the higher will be the rejection rates of these journals” (Macdonald & Kam, 2007). So, the push for academics to publish in highly ranked journals inflates the number of submissions to those journals relative to lower-ranked ones. A large volume of submissions is problematic for publishers. For example, “the higher the rejection rate of a journal, the less likely that submissions will be refereed at all”, most being either fast-tracked by editors who invite prestigious authors to submit their work or simply reviewed by the editors and not sent on for peer review (Macdonald & Kam, 2007).
Many low-tier journals publish articles that appeal to a narrower audience. In addition, some top-tier journals reject articles that go against the grain in the field or fall outside the current rhetoric. Some prestigious journals also require that papers be fundamentally interesting. Some good research is simply not that interesting to most people and may, again, appeal only to a specialized group. This, however, does not make it any less valuable or speak to the quality of the work. Some of my own work is specific to a population in the Middle East. As a result, I have gotten push-back from editors and reviewers about its generalizability and applicability to a wider international audience. My intention is to have a “high impact” on the local community whether or not it appears in any high-impact journal.
In short, it has been suggested that limiting submissions to a narrow list of journals perverts the research process. It gives too much power to a few journals that have their own limitations in aim and scope, narrows one’s research audience, and constrains the creativity and diversity of the work.
Sources for Journal Rankings
“Not everything that can be counted counts, and not everything that counts can be counted.”
—Attributed to Albert Einstein
What determines whether a journal is top-tier adheres to some circuitous logic. For instance, top-tier journals publish authors with high reputations, but authors acquire high reputations by publishing in top ranked journals (Dewett & DeNisi, 2004). It is also the case that authors from top-tier schools tend to publish in top-tier journals. It’s possible that we have a chicken-and-egg scenario.
Not sure who started this whole thing, but when colleges review their own publication goals, they usually start by looking at journal lists from top-tier or comparable institutions. Compiled lists of quality journals, usually published by departments, are then copied by other departments (Macdonald & Kam, 2007). This, of course, does not reflect a careful and systematic review of the quality of these dissemination sources. Rather, it is generally an outdated rehash.
At a Disadvantage: Interdisciplinary Scholars, Qualitative Researchers, & Non-English Speakers
Research also reveals particular biases of top-tier journals against interdisciplinary work (Pfirman & Martin, 2010) and qualitative research. “A” journals often do not include multidisciplinary work (Adler & Harzing, 2009). In addition, they do not represent all disciplines and tend to be more generalist, putting specialized work at a disadvantage (Rafols et al., 2012). This means interdisciplinary researchers often have to develop a publication strategy in order to navigate the more reputable journals that favor single-discipline perspectives (Lyall & Meagher, 2012). Dicing up a project or shifting its focus to satisfy reviewers can then water down the original purpose and vision. One study pointed to the contrary, finding that doctoral students with interdisciplinary dissertations published more, although this depended on the nature of their academic hire (Millar, 2013).
It comes as no surprise to most of us that bias against qualitative research is still alive and well (Copes, Brown & Tewksbury, 2011). Common issues for such researchers include detailing methods while fitting their description into the standard format required by top journals, maintaining a balance between making a novel contribution to theory and linking too closely to existing theory, and being judged by inappropriate (quantitative) standards (Pratt, 2008).
These biases lead to the conclusion that “pressure for prestige may distort the natural processes of research and publication” (Leung, 2007).
Fair or not, English is the language of science for now, and any weakness in expressing work in English puts researchers at a great disadvantage. In one study, living in a country with English as the official language was the single greatest factor distinguishing scholars who published in higher-tier journals from those who published in lower-tier journals (Paiva et al., 2017). Other factors included living in a country with a higher GDP, supervising more than 5 graduate students, and mentoring junior researchers. American authors are more frequently cited, although the number of non-American authors has increased over the past couple of decades (Charkhchi et al., 2018). Forty percent of French researchers in one study stated that their limited English skills were a significant barrier to publishing (Duracinsky et al., 2017).
How journal rankings are (mis)used
China’s infamous cash-for-publication policy holds that scholars who publish in highly ranked journals receive bonus pay. It has been reported that these incentives can exceed $100,000 per article. Keep in mind that the average salary of a professor there is under $10,000. These incentives have been blamed for a recent wave of retractions by scientific journals of papers by Chinese authors, due to breaches in academic integrity.
In the West, there may not be an immediate cash incentive for publishing in a top-tier journal, but one’s reputation as a scholar is often determined by it. I remember once being approached by a professor in another department whom I had known casually for several years. Her eyes wide, she said, “I heard you just published in an ‘A’ journal.” I remember feeling a bit shocked at the attention she now was giving me, since she’d barely acknowledged me in the past. I found this praise empty. What would have flattered me most was if she had actually read my article and commented on it.
Most commonly, journal rankings are used to determine the quality of individual authors’ work for the purpose of promotion and tenure. Some universities even tell faculty that any work not published in a highly ranked journal will not count toward promotion at all. So the rankings end up being used to rank individual scholars, which was never their original intent.
“By itself, the value of a particular journal citation metric is largely meaningless.” (Bradshaw & Brook, 2016)
So…here’s my conclusion
Top-tier journals have developed a reputation for publishing research that is usually of good quality and of some significance. Yes, there are retractions in “A” journals, but no system is perfect.
However, these journals do favor some scholars, institutions, methods, disciplines, and populations over others.
With these caveats in mind, a scholar’s publication record may be judged fairly only by looking at individual work rather than making sweeping assumptions simply based on journal rankings.
References
Adler, N. J., & Harzing, A. W. (2009). When knowledge wins: Transcending the sense and nonsense of academic rankings. Academy of Management Learning & Education, 8(1), 72-95.
Bornmann, L., & Daniel, H. D. (2008). What do citation counts measure? A review of studies on citing behavior. Journal of Documentation, 64(1), 45-80.
Bradshaw, C. J. A., & Brook, B. W. (2016). How to Rank Journals. PLoS ONE, 11(3), e0149852. http://doi.org/10.1371/journal.pone.0149852
Brembs, B. (2018). Prestigious science journals struggle to reach even average reliability. Frontiers in Human Neuroscience, 12, 37.
Charkhchi, P., Mirbolouk, M., Jalilian, R., & Yousem, D. M. (2018). Who's Contributing Most to American Neuroscience Journals: American or Foreign Authors?. American Journal of Neuroradiology, 39(6), 1001-1007.
Copes, H., Brown, A., & Tewksbury, R. (2011). A content analysis of ethnographic research published in top criminology and criminal justice journals from 2000 to 2009. Journal of Criminal Justice Education, 22(3), 341-359.
Dewett, T., & DeNisi, A. (2004). Exploring scholarly reputation: It's more than just productivity. Scientometrics, 60(2), 249-272.
Duracinsky, M., Lalanne, C., Rous, L., Dara, A. F., Baudoin, L., Pellet, C., ... & Chassany, O. (2017). Barriers to publishing in biomedical journals perceived by a sample of French researchers: results of the DIAzePAM study. BMC Medical Research Methodology, 17(1), 96.
Leung, K. (2007). The glory and tyranny of citation impact: An East Asian perspective. Academy of Management Journal, 50(3), 510-513.
Lyall, C., & Meagher, L. R. (2012). A masterclass in interdisciplinarity: Research into practice in training the next generation of interdisciplinary researchers. Futures, 44(6), 608-617.
Macdonald, S., & Kam, J. (2007). Ring a ring o’roses: Quality journals and gamesmanship in management studies. Journal of Management Studies, 44(4), 640-655.
Millar, M. M. (2013). Interdisciplinary research and the early career: The effect of interdisciplinary dissertation research on career placement and publication productivity of doctoral graduates in the sciences. Research Policy, 42(5), 1152-1164.
Oswald, A. J. (2007). An examination of the reliability of prestigious scholarly journals: evidence and implications for decision‐makers. Economica, 74(293), 21-31.
Paiva, C. E., Araujo, R. L., Paiva, B. S. R., de Pádua Souza, C., Cárcano, F. M., Costa, M. M., ... & Lima, J. P. N. (2017). What are the personal and professional characteristics that distinguish the researchers who publish in high-and low-impact journals? A multi-national web-based survey. ecancermedicalscience, 11.
Pfirman, S., & Martin, P. (2010). Facilitating interdisciplinary scholars. In R. Frodeman, J. Thompson Klein & C. Mitcham (Eds.), The Oxford Handbook of Interdisciplinarity (pp. 387). Oxford; New York: Oxford University Press.
Pratt, M. G. (2008). Fitting oval pegs into round holes: Tensions in evaluating and publishing qualitative research in top-tier North American journals. Organizational Research Methods, 11(3), 481-509.
Rafols, I., Leydesdorff, L., O’Hare, A., Nightingale, P., & Stirling, A. (2012). How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management. Research Policy, 41(7), 1262-1282.
Singh, G., Haddad, K. M., & Chow, C. W. (2007). Are articles in “top” management journals necessarily of higher quality?. Journal of Management Inquiry, 16(4), 319-331.
Siler, K., Lee, K., & Bero, L. (2015). Measuring the effectiveness of scientific gatekeeping. Proceedings of the National Academy of Sciences, 112(2), 360-365.
Starbuck, W. H. (2005). How much better are the most-prestigious journals? The statistics of academic publication. Organization Science, 16(2), 180-200.
Related blog posts
Chavarro, D. & Rafols, I. (2017, October 30). Research assessments based on journal rankings systematically marginalize knowledge from certain regions and subjects. [Blog post]. Retrieved from http://blogs.lse.ac.uk/impactofsocialsciences/2017/10/30/research-assessments-based-on-journal-rankings-systematically-marginalise-knowledge-from-certain-regions-and-subjects/
Gillis, A. (2017, January 12). Beware! Academics are getting reeled in by scam journals. [Blog post]. Retrieved from https://www.universityaffairs.ca/features/feature-article/beware-academics-getting-reeled-scam-journals/
Harzing, A. (2018, March 20). Where to submit your paper: Compare journals by impact. [Blog post]. Retrieved from https://harzing.com/blog/2018/03/where-to-submit-your-paper-compare-journals-by-impact
Waltman, L. (2016, July 11). The importance of taking a clear position in the impact factor debate. [Blog post]. Retrieved from https://www.cwts.nl/blog?article=n-q2w2c4
Waltman, L. & Traag, V. (2017, March 8). Use of the journal impact factor for assessing individual articles need not be wrong. [Blog post]. Retrieved from https://www.cwts.nl/blog?article=n-q2z254
APA Citation of entire blog: Black Cat Editing blog (https://www.blackcatediting.com/blackcatblog)
APA Citation of this blog post: Barza, L. (2018, September 24) What makes a top-tier journal top tier? Qualities, biases and use of journal rankings to measure research quality. [Blog post]. Retrieved from …
Comments?
What do you think about the journal ranking system?
Do you have any experiences to share regarding the biases identified?
What alternatives exist or do you propose to the current journal ranking system?