Making Sense of Changes to the U.S. News Rankings Methodology

Standard disclaimer: I have been the data editor for Washington Monthly’s rankings since 2012. All thoughts here are solely my own.

College rankings season officially concluded today with the release of the newest year of rankings from U.S. News and World Report. I wrote last year about things that I was watching for in the rankings industry, particularly regarding colleges no longer voluntarily providing data to U.S. News. The largest ranker announced a while back that this year’s rankings would not be based on data provided by colleges, and that is mostly true. (More on this below.)

When I see a set of college rankings, I don’t even look at the position of individual colleges. (To be perfectly honest, I don’t pay attention to this when I put together the Washington Monthly rankings every year.) I look at the methodology to see what their priorities are and what has changed since last year. U.S. News usually puts together a really helpful list of metrics and weights, and this year is no exception. Here are my thoughts on changes to their methodology and how colleges might respond.

Everyone is focusing more on social mobility. Here, I will start by giving a shout-out to the new Wall Street Journal rankings, which were reconstituted this year after moving away from a partnership with Times Higher Education. Fully seventy percent of these rankings are tied to metrics of social mobility, with a massive survey of students and alumni (20%) and diversity metrics (10%) making up the remainder. Check them out if you haven’t already. I also like Money magazine’s rankings, which are focused on social mobility.
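A ranking built from weighted components like these is ultimately just a weighted sum of normalized metrics. Here is a minimal sketch using the 70/20/10 split mentioned above; the per-college metric values are invented for illustration and are not actual WSJ or U.S. News data.

```python
# Hypothetical sketch of a weighted composite ranking score.
# The 70/20/10 weights mirror the split described above; the metric
# values for the example college are made up.

weights = {
    "social_mobility": 0.70,
    "student_alumni_survey": 0.20,
    "diversity": 0.10,
}

# Each component metric, already normalized to a 0-100 scale (invented values).
college_metrics = {
    "social_mobility": 82.0,
    "student_alumni_survey": 64.0,
    "diversity": 71.0,
}

# Weights should sum to 100% of the score.
assert abs(sum(weights.values()) - 1.0) < 1e-9

composite = sum(weights[m] * college_metrics[m] for m in weights)
print(round(composite, 1))  # 82*0.7 + 64*0.2 + 71*0.1 = 77.3
```

The practical upshot is that a college's position can shift substantially when a ranker reweights components, even if none of the underlying data change.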

U.S. News is slowly creeping in the direction that other rankers have taken over the last decade by including a new metric of the share of graduates earning more than $32,000 per year (from the College Scorecard). They also added graduation rates for first-generation students using College Scorecard data, though this covers only students who received federal financial aid. This is a metric worth watching, especially as completion flags get better in the Scorecard data. (They may already be good enough.)

Colleges that did not provide data were evaluated slightly differently. After a well-publicized scandal involving Columbia University, U.S. News announced it was moving away from the Common Data Set, a voluntary data system also involving Peterson's and the College Board. It mostly did so, but still relied on the Common Data Set for the share of full-time faculty, faculty salaries, and student-to-faculty ratios. For colleges that did not provide those data, U.S. News substituted IPEDS data instead. To give an example of the difference, here is what the methodology said about the percentage of full-time faculty:

“Schools that declined to report faculty data to U.S. News were assessed on fall 2021 data reported to the IPEDS Human Resources survey. Besides being a year older, schools reporting to IPEDS are instructed to report on a broader group of faculty, including those in roles that typically have less interaction with undergraduates, such as part-time staff working in university hospitals.”

I don’t know if colleges are advantaged or disadvantaged by reporting Common Data Set data, but I would bet that institutional research offices around the country are running their analyses right now to see which method gives them a strategic advantage.

The reputation survey continues to struggle. One of the most criticized portions of the U.S. News rankings is the annual survey sent to college administrators asking them to judge the academic quality of other institutions. There is a long history of college leaders providing dubious ratings or trying to game the metrics by rating other institutions poorly. The response rate has declined from 68% in 1989 to 48% in 2009 to 30.8% this year. Notably, response rates were much lower at liberal arts colleges (28.6%) than at national universities (44.1%).

Another interesting nugget from the methodology is the following:

“Whether a school submitted a peer assessment survey or statistical survey had no impact on the average peer score it received from other schools. However, new this year, nonresponders to the statistical survey who submitted peer surveys had their ratings of other schools excluded from the computations.”

To translate that into plain English, if a college does not provide data through the Common Data Set, the surveys their administrators complete get thrown out. That seems like an effort to tighten the screws a bit on CDS participation.

New research metrics! It looks like there is a new partnership with the publishing giant Elsevier to provide data on citation counts and the impact of publications for national universities only. It's just four percent of the overall score, but I see this more as a preview of coming attractions for graduate program rankings than anything else. U.S. News is really vulnerable to a boycott among graduate programs in most fields, so introducing external data sources is a way to shore up that part of their portfolio.

What now? My biggest question is whether institutions will keep cooperating by providing Common Data Set data (since apparently U.S. News would still really like to have it) and completing reputation surveys. The CDS data flesh out the college profile pages that U.S. News maintains, and losing them would be an inconvenience. But dropping the reputation survey, which is worth 20% of the total score, would force major changes. I have been surprised that efforts to stop cooperating with U.S. News have not centered on the reputation survey, but maybe that is coming in the future.

Otherwise, I expect to continue to see growth in the number of groups putting out rankings each year as the quantity and quality of federal data sources continue to improve. Just pay close attention to the methodology before promoting rankings!

Author: Robert

I am a professor at the University of Tennessee, Knoxville who studies higher education finance, accountability policies and practices, and student financial aid. All opinions expressed here are my own.