Impact Factor: The Metric You Love to Hate
by Justin B. Moore, PhD, MS, FACSM
The Scholarship of Public Health addresses topics relevant to scientific publishing, dissemination of evidence and best practices, and the education of current and future professionals. This column offers some perspective on the Journal Impact Factor and its place in scholarly publishing.
Thomson Reuters has released the 2016 Journal Citation Reports®, which contain Impact Factors for all of the 11,000-plus journals that they index. As usual, the release has caused widespread consternation. As the sky is most certainly not falling, despite indications otherwise, I felt obligated to chime in and let you know a little secret: it’s not the Impact Factor; it’s us.
The Impact Factor is a simple metric with numerous flaws that have been documented in great detail. Despite these flaws, journal editors, publishers, and some authors wait with bated breath for the values to be released each year. Why, you might ask? First, an increase in Impact Factor is cause for a celebratory email from the Editor in Chief to the readership, former authors, editors, and editorial board, as a clear indication of the importance of the journal. Second, a decrease in Impact Factor is cause for a cautionary email (or blog post) from the Editor in Chief lamenting the flaws of the Impact Factor and its irrelevance to the journal’s quality. Some journal editors simply note the new value, update their website, and get on with life. The editors of the Journal of Public Health Management & Practice (JPHMP) are in the latter category [Note: the Impact Factor for the JPHMP fell slightly to 1.258 in 2016, which is presented without commentary].
So why are we, the people, and not Thomson Reuters, to blame? It’s analogous to the US News and World Report Annual Best Colleges Rankings. The Rankings, which are likely an honest attempt to inform consumers’ decisions on where to attend college, are a case study in flawed metrics. Overly complex and reliant on subjective assessments confounded by pedigree (ie, those doing the rankings are humans who attended the colleges they rank), the Rankings are about as useful as asking ten college graduates, “Where do you think I should go to school?” The Impact Factor has the opposite issue (ie, simplicity), plus a dose of flawed objective data (ie, a single source of citation data). What the two have in common is that those ranked or rated flaunt their ‘scores’ on their websites, in their promotional materials, and in press releases. As such, we give these metrics credibility simply by reporting our scores and rankings to the rest of the world. Last time I checked, you have to have a subscription to access the Thomson Reuters Journal Citation Reports®, so they’re not exactly branding journals with a “scarlet score.”
So, what to do? First, we could stop putting Impact Factors on our journal websites. While I don’t think that most editors have this level of control over their publishers, doing so would undoubtedly put a journal at a disadvantage, since many universities put undue emphasis on Impact Factor as a marker of publication quality for promotion and tenure (an amazingly silly practice). As such, tenure-track faculty would have to go collect this information on their own, wasting precious time. Second, we could ask tenure committees to consider Impact Factor as only one among many alternative metrics (at the journal and article level), hypothetically reducing its influence. However, as someone who has suggested this to numerous chairs and deans, I can assure you this will be an uphill battle. Finally, we can all just run our own race and not care about such markers of “quality.” Editors can publish the best articles that are submitted, authors can submit their articles to the journals they feel are the best fit for the content, and readers can read the articles of most relevance to their research and practice. Perhaps in this model, we can free ourselves from the branding and judgment of others and simply rely on our own judgment and intuition. Wouldn’t that be nice?
Or we can just focus on h-indexes…
Justin B. Moore, PhD, MS, FACSM, is the Associate Editor of the Journal of Public Health Management and Practice and an Associate Professor in the Department of Implementation Science of the Wake Forest School of Medicine at the Wake Forest Baptist Medical Center in Winston-Salem, NC, USA. Follow him on Twitter and Instagram. [Full Bio]
Read previous posts by this author:
- Finding Time for Scholarly Writing, Part II
- Finding Time for Scholarly Writing, Part I
- Who Is a Scientist, Anyway?
- Letting Journal Editors Do (Some of) Your Work for You
- Selecting the “Best” Journal as an Outlet for Your Work
- How Can Public Health Students Make Themselves Competitive for Employment?
- Writing an Abstract for Publication
- When Is Public Health Coming to Students of Public Health?