Extra Extra, Read All About It: University Rankings Would Be Helpful Except that They Are Currently Trash
I get the sense that if I were on Twitter, I would have the opportunity to engage in online discourse and get real fired up with the news of the day. Probably good, then, that my esteemed Center for Public Health Systems at UMN SPH has a Twitter and I don’t, yet. But it’s nice that I had friends text me the hot-off-the-press news that Yale and Harvard announced that they were withdrawing from the US News & World Report Law School Rankings (despite Yale being #1). Some have made the argument that maybe others (like hospitals) should do the same.
Let’s not forget that in public health, we don’t even have the benefit of our US News and World Report rankings using the types of outcomes data that Harvard and Yale are (reasonably) questioning. As I pointed out in March of this year, we are basically working with an increasingly large and unwieldy popularity contest, and that is, as US News likes to say, solely what determines our ranking (emphasis mine).
It seems to me that Harvard and Yale have made well-reasoned arguments for a while to the powers-that-be that the rankings are creating perverse incentives to recruit only certain kinds of students, and to encourage students to pursue only certain kinds of jobs. Rankings should not create incentives like these. And in public health we don’t even GET to argue about whether that level of detailed data are being used fairly. We are, I guess, not important enough for that level of data collection. We are instead, basically, being asked… do you like me?
And those answers drive applications in a huge way, year after year. There’s no question scholarships drive matriculations, but rankings drive applications. In a field of 300+ institutions offering undergraduate programs and 300+ offering graduate programs, it makes sense students would look to rankings, in one form or another, for help. How else are students to make their way through such a crowded marketplace? Perhaps we owe them something better than what there is at present. And that’s not just a selfless proposition, either.
While we as a field may have seen record applications and applicant counts in 2020 and 2021, there is no question in my mind that we are on our way back down from the peak, and likely back down past our pre-COVID high of 2018. Consider that the US still has a strong labor market and low unemployment, and a demographic crunch I have written of previously. Until those conditions change, they spell challenging times for post-secondary programs, including ours. Retrenchment and austerity should be on everybody’s minds – and this means the rankings will matter more than ever. Including how bad they are. Parenthetically, the new rankings for online MPH programs from Fortune are… a little better in some ways… but probably won’t hold up during academic downturns; there’s just not enough ‘there’ there. And some schools and their online programs are just, well, missing. Odd. Better luck next time!
Here’s what I wrote last March. I feel like it aged … decently.
Can we do better?
Not all schools and programs want to be ranked. Reasonably so – it is inherently a game of winners and losers. But, given the strong desire for straightforward, accessible ‘quality’ metrics, if we had to do Rankings or ratings, how could we do better by potential students, faculty and staff, and alumni?
Transparency, and standardized, data-driven reporting
As part of accreditation, schools and programs of public health must collect a substantial amount of data. We also report an enormous amount of data to the federal government and collect still more ourselves for institutional research / quality improvement / performance management. Perhaps it is time to embrace an open data movement and transparently show information to students, alumni, and the public more broadly. A member organization like ASPPH could expand what it shows at present to include quality measures for participating programs.
Employment outcome data for recent graduates are already collected, both for ASPPH and (in some schools) for NACE. Let us put those outcomes out into the ether, by program and degree. We can all also transparently list our programs’ full costs (not merely per credit or the like), faculty-to-student ratio, and average debt. As for measures of research quality, there are now semi-independent sources we could leverage, like Scopus, Web of Science, or others; schools and programs of public health could also contribute data to projects like IRIS, which provides evidence around the productivity of member faculty and their economic impact on their communities. The key here, in my view, is that a critical mass of likeminded schools would need to be willing to be transparent with these currently closely held data for this to work. We could also, collectively, push US News and World Report to do the work of making data-driven Rankings, which arguably they do a bit better for other fields.
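To make the idea of a transparent, data-driven rating concrete, here is a minimal sketch of what one could look like: published metrics, published weights, and a reproducible composite score anyone can recompute. The program names, metric values, and weights below are entirely hypothetical illustrations, not real data or a proposed methodology.

```python
def normalize(values, higher_is_better=True):
    """Min-max normalize a list of metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

def composite_scores(programs, weights):
    """Combine normalized metrics into one reproducible score per program."""
    # Normalize each metric across all programs, respecting its direction
    # (e.g., higher employment rate is good; higher debt is bad).
    metrics = {m: normalize([p[m] for p in programs], better)
               for m, better in weights["direction"].items()}
    scores = {}
    for i, p in enumerate(programs):
        scores[p["name"]] = sum(weights["w"][m] * metrics[m][i]
                                for m in weights["w"])
    return scores

# Hypothetical published data for two fictional programs.
programs = [
    {"name": "Program A", "employment_rate": 0.92,
     "avg_debt": 45000, "total_cost": 60000},
    {"name": "Program B", "employment_rate": 0.85,
     "avg_debt": 30000, "total_cost": 40000},
]
# Hypothetical published weights; the point is that they are public.
weights = {
    "w": {"employment_rate": 0.6, "avg_debt": 0.2, "total_cost": 0.2},
    "direction": {"employment_rate": True, "avg_debt": False,
                  "total_cost": False},
}

print(composite_scores(programs, weights))
```

The design choice worth noting: because the metrics, weights, and formula are all published, any student, dean, or journalist can rerun the calculation and contest the weights openly, which is exactly what a reputational survey does not allow.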
We should do better, and we could. Until then, no doubt these easily consumed Rankings will reign, despite being bereft of data in a field that is supposed to elevate data.
JP Leider, PhD, is the Director of the Center for Public Health Systems at the University of Minnesota School of Public Health, and a member of the JPHMP Editorial Board. He is available at leider (at) umn (dot) edu.