Writing the Results Section of Your Manuscript for the Journal of Public Health Management and Practice

by Justin B. Moore, PhD, MS, FACSM

The Scholarship of Public Health addresses topics relevant to scientific publishing, dissemination of evidence and best practices, and the education of current and future professionals. This column presents some considerations and best practices for writing the results section of a manuscript.


Ostensibly, the results section of an article should be the easiest to write. The analysis section of the manuscript tells the reader what the authors did with the data, and the results section simply presents the output of those analyses in a format that is, at least hypothetically, easy to understand and interpret. Unfortunately, things that are hypothetically one way don’t always play out that way in practice. As such, many results sections are poorly set up by the analysis section, dense, redundant with tables or figures, and/or confusing. This can be tragic regardless of the journal the work is published in, but a poorly constructed results section can be especially detrimental in an applied journal. When writing a results section, one must also take into account the readership of the journal if the work is to have maximum impact. For example, the results section of a well-written article in an epidemiology journal might look quite different from one in a practice-oriented journal. The Journal of Public Health Management and Practice (JPHMP) has a large audience of practitioners and policy makers, many of whom do not have advanced training in statistics. As such, an appropriate analytical plan and results section should be written in a manner that encourages interpretation. To accomplish this for JPHMP, authors should consider a few points:

  • Choose analyses that are as parsimonious as possible. While a complicated model that controls for multiple covariates may be technically correct, there is a difference between being correct and being useful. Furthermore, if a model must control for multiple covariates, consider strategies that aid in the interpretation of coefficients, such as centering variables on their mean when zero has no meaning (eg, age).
  • Relatedly, avoid presenting coefficients without interpreting them, especially in complex regression models where interpretation is difficult or hampered by transformations. For example, if a variable is log-transformed due to a non-normal distribution, provide an example in the original scale to help the reader understand the relationship between the variables. Similarly, avoid presenting standardized coefficients when the unstandardized coefficients are more meaningful; standardized coefficients are useful for understanding relative contributions, but not for absolute expected changes.
  • When possible, present point estimates with confidence intervals; never present P values alone. Also, at JPHMP, we use the American Medical Association Manual of Style, so P values should be expressed to 2 digits to the right of the decimal point unless the P value is less than .01, in which case it should be expressed to 3 digits to the right of the decimal point.
  • Never attempt to describe results that fail to achieve significance at the a priori threshold for statistical significance, such as suggesting that the results “approached significance” or displayed a “trend towards significance.” Similarly, statistics are never “highly significant.”
  • Think carefully about the manner in which data are presented in tables. Tables should stand alone, as many readers prefer to glean your results from your tables rather than the text.
  • Data visualization in figures should be a similarly thoughtful process. Many journals, including JPHMP, allow for readers to download tables, figures, and images separately from the article for inclusion in presentations. As such, more visually informative and attractive figures will be more likely disseminated.
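To make the statistical points above concrete, here is a minimal sketch in Python. The helper names (format_p, center, interpret_log_coefficient) and all the example numbers are hypothetical illustrations, not from this column or the AMA manual itself; they simply show AMA-style P value formatting, mean-centering a covariate, and back-transforming a coefficient from a log-transformed outcome.

```python
import math

def format_p(p: float) -> str:
    """Format a P value in the AMA style described above: 2 digits to the
    right of the decimal, 3 digits if P < .01, and "P < .001" for very
    small values."""
    if p < 0.001:
        return "P < .001"
    if p < 0.01:
        return "P = " + f"{p:.3f}".replace("0.", ".")
    return "P = " + f"{p:.2f}".replace("0.", ".")

def center(values):
    """Mean-center a variable so the model intercept refers to the sample
    mean rather than zero (useful when zero has no meaning, eg, age)."""
    mean = sum(values) / len(values)
    return [v - mean for v in values]

def interpret_log_coefficient(beta: float) -> float:
    """Back-transform a coefficient from a log-transformed outcome:
    exp(beta) is the multiplicative change in the original scale per
    one-unit increase in the predictor."""
    return math.exp(beta)

# Hypothetical example: report the point estimate with its 95% CI,
# not a bare P value.
ages = [25, 30, 35, 40, 50]
centered_ages = center(ages)  # intercept now refers to the mean age
print(f"OR = 1.45 (95% CI, 1.10-1.91); {format_p(0.008)}")
print(f"A log-scale beta of 0.18 implies a "
      f"{interpret_log_coefficient(0.18):.2f}-fold change per unit")
```

The formatting helper is the piece most likely to be reused: it encodes the 2-digit/3-digit rule in one place so that P values are reported consistently across tables and text.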

Weigh in: What topics would you like to know more about? Leave a comment below.


Justin B. Moore, PhD, MS is the Associate Editor of the Journal of Public Health Management and Practice and an Associate Professor in the Department of Family & Community Medicine of the Wake Forest School of Medicine at the Wake Forest Baptist Medical Center in Winston-Salem, NC, USA. Follow him on Twitter and Instagram. [Full Bio]


3 comments

  • I have to disagree with one comment. “Never attempt to describe results that fail to achieve significance at the a priori threshold for statistical significance, such as suggesting that the results ‘approached significance’ or displayed a ‘trend towards significance.’” There is a belief in some circles that a p-value of 0.04 tells you something that is radically different from a p-value of 0.06. Both represent findings that should be reported, but treated with caution. I would much rather put my faith in a p-value of 0.06 that has a solid mechanistic explanation than one of 0.04 that appears to defy any scientific rationale. Similarly, I would trust a p-value of 0.06 that was associated with other closely related outcome measures that did achieve statistical significance more than a p-value of 0.04 that was surrounded by other similar outcome measures that failed to achieve statistical significance.

    Context is critical in the interpretation of p-values. Unfortunately, most scientists do not allow the context of a finding to enter into the discussion of borderline p-values (either the 0.06 p-value or the 0.04 p-value) out of a fear of violating some sacrosanct edict of research conduct.

    It seems that every scientist has a p-value receptor in their brain. It stimulates the pleasure center when it encounters a p-value less than 0.05 and it stimulates the pain center when it encounters a p-value greater than 0.05. But no matter what the value, the p-value receptor also shuts down all other areas of the scientist’s brain once it encounters a p-value of any size. And arguing that p-values on the “wrong” side of an arbitrary Type I error rate of 0.05 should be discussed only as a negative result encourages this sort of unthinking approach to p-values.

    At a minimum, look at the confidence interval for a “negative” finding. If it includes the null value but also includes values that are considered clinically important, then you should describe the result as being one that warrants further study with a larger sample size.

    Other than this one complaint, I think it is a very good article.

    • Thank you for reading, and I greatly appreciate you taking time to comment. I actually agree with you, which is why I don’t like folks editorializing about P values. If you are going to set an a priori P value, then one should interpret your statistic as meeting or not meeting this threshold (ie, not “almost meeting”). That said, you’re totally correct; the .05 (or .01 or .001) threshold is pretty arbitrary. What is most important is that folks present all the information (eg, point estimates, CI, etc) and let the reader interpret that information in the context of their own work.

      Thanks again for joining the conversation!

      Justin

  • Your article is excellent. What I don’t understand is why public health advocates are not vocal about the dangers of not competently and efficiently helping PR? Where are PH advocates getting on the air and highlighting these very dangerous situations for citizens? Mayor Bloomberg, who has a huge platform, really is the only person screaming about public health and its importance.

Leave a Reply