US News and World Report Releases New Public Health Program Rankings – But the Only Data Are Still Peer Popularity

Even though we are a field grounded in data – and even though there are plausible ways of measuring program quality through tuition, job outcomes, debt, faculty-student ratios, and the like – US News and World Report continues to field these (closely watched) Rankings as a mere popularity contest.

What’s in a Ranking? Not as much as one might like, it turns out.

US News and World Report surprised some of us a few months ago by sending out another request to rate one’s peer schools and programs of public health, even though the last Rankings had just been released in 2021. They have now released the 2022 Rankings. There has long been a little-discussed truth about how the US News and World Report Rankings system works – the Rankings are based only on how your peers / friends / enemies rate your program. Per the US NWR methodology statement: “The U.S. News Best Health Schools Rankings are based solely on the results of peer assessment surveys sent to deans, other administrators or faculty at accredited degree programs or schools in each discipline.” (emphasis mine)

Even though we are a field grounded in data – and even though there are plausible ways of measuring program quality through tuition, job outcomes, debt, faculty-student ratios, and the like – US NWR continues to field these (closely watched) Rankings as a mere popularity contest. If I sound bitter about this, I definitely am (and only a little bit because my institution saw an inauspicious move from tied at #10 in 2021 to #11 in 2022). At a time when public health is in the limelight, when our field is likely on its way back down from the 2021 peak interest in graduate education, and when we have almost 400 institutions in the US now awarding graduate degrees in public health, it seems prudent to ask whether we can, and should, do better in how we represent ourselves to potential applicants to our programs.

There are a couple of important points regarding the US NWR Rankings that bear consideration. First, they are not at all data-driven for a field that should be (see bitter reflections above). Second, these Rankings surely favor larger and more established institutions (Figure 1). Our “top 10” schools accounted for 22% of our 2020 graduates (the most recent year for which graduation data are available from NCES), the 11th- to 30th-ranked schools accounted for 17%, the 31st- to 50th-ranked for 13%, and the remaining 343 schools accounted for the rest (48%). These programs are also among the oldest in the country. But what should make a top 10 school? Faculty productivity, sponsored awards, alumni debt-to-earnings? Some of these would likely track size or program age, but not all would, especially tuition and graduate earnings.

Another point is that there is substantial instability in the US NWR Rankings, related not just to (conceptual) measurement error but to (unreported) variation in estimates, which may mean that schools within 3-5 Rankings of one another have functionally equivalent ratings from their peers (Figures 2 and 3). There are also now 190 ranked institutions – a ratings task so cognitively burdensome that at some point (past rank 50? 100? 150?) it is not clear what these Rankings even mean. There is also the risk that reputational Rankings can reinforce systematic disadvantage and institutional racism and classism. It seems relevant that of the 12 HBCUs that report they confer graduate public health degrees, six are unranked, one is in the top 100, and the remainder are ranked 100-150 by US NWR (and, so, by PH deans and their offices). The unranked HBCU programs are smaller (fewer than 10 graduate degrees conferred each in 2020), but the others are midsized (15-40 graduates); a system based on peer rating alone for so many institutions may create substantial reputational disadvantages for smaller or regional programs, including, importantly, HBCUs.

A several-point change in the Rankings might correspond to a change in mean rating (on a 1-5 scale) of 0.2 or less. Given the integer-based rating system, and notwithstanding the philosophical problems with the “popularity contest” nature of the Rankings, mean ratings of 4.0 vs 3.8 vs 3.7 within a year might not be particularly discerning. Moreover, year to year, depending on who participates and how harshly or generously peers rate, you might keep the same score but change Rankings (eg, Yale was rated 3.8 by its peers in 2021 [#14] and in 2022 [#11]; USF was rated 3.3 in 2021 [#20] and in 2022 [#16]). Being in or out of the top 10, 15, or 20 may have substantial reputational effects for recruitment and grant-getting. It also seems relevant that these Rankings have been viewed as so important that some have been willing to commit fraud to improve them.
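The same-score, different-rank dynamic is easy to see in a toy example. The sketch below uses entirely invented school names and ratings (not actual US NWR survey data or their tie-breaking method) to show how a program’s mean peer rating can stay fixed while its rank moves, simply because other programs’ ratings shift around it:

```python
# Hypothetical mean peer ratings on a 1-5 scale; all names and numbers
# are invented for illustration, not actual US NWR data.
year1 = {"School A": 3.8, "School B": 3.9, "School C": 4.0, "School D": 3.7}
year2 = {"School A": 3.8, "School B": 3.7, "School C": 4.0, "School D": 3.6}

def rank(scores):
    """Rank schools by mean rating, highest first (1 = best).

    Ties keep insertion order here -- a simplification of however a
    published ranking would actually break them.
    """
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {school: i + 1 for i, school in enumerate(ordered)}

r1, r2 = rank(year1), rank(year2)

# School A's mean rating is 3.8 in both years, yet its rank improves
# because School B's rating fell.
print(r1["School A"], r2["School A"])  # prints: 3 2
```

This mirrors the Yale example above: an identical 3.8 in both years, but a three-spot jump driven entirely by movement elsewhere in the field.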

Can we do better?

Not all schools and programs want to be ranked. Reasonably so – it is inherently a game of winners and losers. But, given the strong desire for straightforward, accessible “quality” metrics, if we had to do Rankings or ratings, how could we do better by potential students, faculty and staff, and alumni?

Transparency, and standardized, data-driven reporting

As part of accreditation, schools and programs of public health must collect a substantial amount of data. We also report an enormous amount of data to the federal government and collect still more ourselves for institutional research / quality improvement / performance management. Perhaps it is time to embrace an open data movement and transparently show information to students, alumni, and the public more broadly. A member organization like ASPPH could expand what it shows at present to include quality measures for participating programs.

Employment outcome data for recent graduates are already collected, both for ASPPH and (in some schools) for NACE. Let us put those outcomes out into the ether, by program and degree. We can all also transparently list our programs’ full costs (not merely per credit or the like), faculty-to-student ratios, and average debt. As for measures of research quality, there are now semi-independent sources we could leverage, like Scopus, Web of Science, or others; schools and programs of public health could also contribute data to projects like IRIS, which provides evidence on the productivity of member faculty and their economic impact on their communities. The key here, in my view, is that a critical mass of like-minded schools would need to be willing to be transparent with these currently sensitively held data for this to work. We could also, collectively, push US News and World Report to do the work of making data-driven Rankings, which arguably they already do a bit better for other fields.

We should do better, and we could. Until then, no doubt these easily consumed Rankings will reign, despite being bereft of data in a field that is supposed to elevate data.

Author Profile

JP Leider
Dr. JP Leider is the Director of the Center for Public Health Systems at the University of Minnesota and Associate Faculty at the Johns Hopkins Bloomberg School of Public Health. He has active projects and collaborations with foundations, national public health organizations, public health researchers and academics, and public health practitioners. His current projects focus on public health systems, the public health workforce, and public health finance. He holds a PhD in Health Policy and Management from the Johns Hopkins Bloomberg School of Public Health.