Selecting the “Best” Journal as an Outlet for Your Work

by Justin B. Moore, PhD, MS, FACSM

The Scholarship of Public Health addresses topics relevant to scientific publishing, dissemination of evidence and best practices, and the education of current and future professionals. This post presents some considerations and best practices for selecting the “best” journal as an outlet for your work.

At some point in the writing process, the lead author of a manuscript must decide where to submit the final product. There are many factors to consider: some good, some bad, and some ugly. I will take them in reverse order.

The Ugly

One reason that people consider submitting their work to a particular journal, but shouldn’t, is a perceived higher likelihood of acceptance. Choosing a journal because others have had a lot of success publishing there, or because you look at a published article in the journal and tell yourself, “Our paper is better than that one,” can lead you down a path to low-quality or predatory journals. While it’s true that rejection rates are extremely high (80+ percent) at many established journals with large readerships, water tends to find its own level. Rather than thinking in terms of competing to be among the 5-20% of manuscripts accepted for publication, a better strategy is to consider acceptance rates in the context of how similar your manuscript is to others published in the journal. For example, if your manuscript reports the results of an analysis of a cross-sectional dataset in a specific subpopulation, check whether the journal you’re considering has published cross-sectional analyses in similar populations. Some journals prefer to publish only longitudinal, observational studies, and this will become evident if you read the instructions for authors and/or review the articles the journal has published over the previous year.

The Bad

Editors and authors have been trying to assess the impact of journals, articles, and authors for decades, and I don’t expect this to change in my lifetime. Indices such as the Impact Factor, h-index, and Altmetric attempt to quantify impact in terms of citations (the former two) or dissemination through electronic and traditional channels (Altmetric). Many other indices involve similar metrics. These indices are not bad per se, as they all provide useful information, but over-reliance on them, or attaching too much importance to them, can be harmful. Because these numbers exist, people employ them when choosing journals, assessing the importance of work, and judging the productivity of authors. Much like rankings such as the US News and World Report Best Colleges list, these indices attempt to quantify things that defy quantification, yielding metrics that can be almost meaningless in the wrong context. In short, metrics can be useful for identifying journals that people read, articles that people cite, or topics of interest to the popular press or the community at large, but they can’t tell you what a “good” journal is.

The Good

This is going to seem overly simplistic, but the best outlet for your work is one that is read by the audience you hope your article will find. To identify it, ask yourself (or your co-authors), “Who do I hope will read my work?” Depending on the answer, you might consider different journals as outlets. Many journals define their audience in their information for authors or readers. Others have formal relationships with, or are published by, professional organizations. These characteristics can be useful when selecting a journal. For example, if you’re analyzing secondary, national data on a condition or behavior and believe your estimates are more accurate than those produced by previous methods, you might consider an outlet like JAMA or NEJM, as they have a large, diverse audience and often publish similar papers. If you are reporting the impact of a national health policy with relevance to a broad audience of public health practitioners and policy makers, you might consider the AJPH. If you’re reporting the results of a cluster-randomized community screening trial that might inform the provision of preventive services, you might consider the AJPM. If you’re reporting the results of an innovative approach for evaluating Health in All Policies initiatives, or a review of state public health actions to support farmers markets, findings that would be relevant to state or local health workers, you might consider the JPHMP. In short, there are a number of excellent outlets for your work. It’s just a matter of deciding who can best use or learn from your work and understanding where they might go for research they can use. If all else fails, ask yourself where you would go to learn about the topic of your manuscript; sometimes the simple answer is the correct one.

Justin B. Moore, PhD, MS, FACSM, is the Associate Editor of the Journal of Public Health Management and Practice and an Associate Professor in the Department of Implementation Science of the Wake Forest School of Medicine at the Wake Forest Baptist Medical Center in Winston-Salem, NC, USA. Follow him on Twitter and Instagram.