Options for Replacing Standardized Test Scores for Researchers and Rankers

It’s the second Monday in September, so it’s time for the annual college rankings season to conclude with U.S. News & World Report’s entry. The top institutions in the rankings change little from year to year, but colleges pay lots of attention to statistically insignificant movements. Plenty has been written on those points, and plenty of digital ink has also been spilled on U.S. News’s decision to keep standardized test scores in its rankings this year.

In this blog post, I want to look a few years farther down the line. Colleges were already starting to adopt test-optional policies prior to March 2020, but the pandemic accelerated that trend. Now a sizable share of four-year colleges have taken a hiatus from requiring ACT or SAT scores, and many may not go back. This means that people who have used test scores in their work, whether as academic researchers or college rankings methodologists, will have to think about how to proceed.

The best metrics to replace test scores depend in part on the goals of the work. Most academic researchers use test scores as a control variable in regression models, either as a proxy for selectivity or as a way to capture the incoming academic preparation of students. High school GPA is an appealing measure, but it is not available in the Integrated Postsecondary Education Data System (IPEDS) and also varies considerably across high schools. Admit rates and yield rates are available in IPEDS and capture some aspects of selectivity and students’ preferences for particular colleges. But admit rates can be gamed by soliciting applications from as many students as possible who have no real interest in the college, just so they can be rejected, and yield rates vary considerably with the number of colleges students apply to.
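To make the regression point concrete, here is a minimal sketch of swapping admit and yield rates in for test scores as selectivity controls in an institution-level model. It assumes Python with pandas and statsmodels, and the variable names (grad_rate, admit_rate, yield_rate) and the tiny synthetic dataset are illustrative stand-ins rather than actual IPEDS field names.

```python
# A minimal sketch: admit and yield rates as selectivity controls in place
# of test scores. Variable names and data are illustrative assumptions,
# not actual IPEDS fields.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical institution-level data
df = pd.DataFrame({
    "grad_rate":  [0.55, 0.82, 0.48, 0.91, 0.63, 0.71],
    "admit_rate": [0.85, 0.30, 0.92, 0.12, 0.70, 0.55],
    "yield_rate": [0.18, 0.45, 0.15, 0.60, 0.25, 0.30],
})

# Graduation rate regressed on the two selectivity proxies
model = smf.ols("grad_rate ~ admit_rate + yield_rate", data=df).fit()
print(model.summary())
```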

Other potential metrics are likely not nuanced enough to capture smaller variations across colleges. Barron’s Profiles of American Colleges has a helpful admissions competitiveness rating (and as a plus, that thick book held up my laptop for hundreds of hours of Zoom calls during the pandemic). But there are not that many categories, and they change relatively little over time. Carnegie classifications focus more on the research side of things (a key goal for some colleges), but again they are not as nuanced and are only updated every few years.

If the goal is to get at institutional prestige, then U.S. News’s reputational survey could be a useful resource. The challenge there is that colleges have a history of either not bothering to fill out the survey or trying to strategically game the results by rating themselves far higher than their competitors. But if a researcher wants to get at prestige and is willing to compile a dataset of peer assessment scores over time, it’s not a bad idea to consider.

Finally, controlling for socioeconomic and racial/ethnic diversity is also an option, given the correlations between test scores and these factors. I was more skeptical of these correlations until moving to New Jersey and seeing all of the standardized test tutors and independent college counselors in one of the wealthiest parts of the country.
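Before leaning on that substitution, it is worth checking how strongly the demographic measures actually track test scores in the sample at hand. Below is a minimal sketch of that check; the column names (sat_avg, pell_share, pct_urm) are hypothetical stand-ins, not actual IPEDS variable names.

```python
# A minimal sketch: how well do socioeconomic and racial/ethnic measures
# track average test scores? Column names are hypothetical stand-ins.
import pandas as pd

df = pd.DataFrame({
    "sat_avg":    [1050, 1400, 980, 1480, 1150, 1220],
    "pell_share": [0.40, 0.18, 0.48, 0.12, 0.33, 0.27],
    "pct_urm":    [0.35, 0.20, 0.42, 0.15, 0.30, 0.25],
})

# Strong pairwise correlations suggest these measures can absorb much of
# what test scores were capturing as controls.
print(df.corr())
```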

As the longtime data editor for the Washington Monthly rankings, I need to start thinking about changes for the 2022 rankings. The 2021 rankings continued to use test scores as a control for predicting student outcomes, and I already used admit rates and demographic data from IPEDS as controls. Any suggestions for publicly available data to replace test scores in the regressions would be greatly appreciated.
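For anyone curious what that setup might look like without test scores, here is a minimal sketch of a performance-adjusted outcome measure: predict graduation rates from the remaining controls and treat the residual as over- or under-performance. This is my own illustrative reconstruction under the assumptions noted in the comments, not the actual Washington Monthly code, and the variable names are hypothetical.

```python
# A minimal sketch of a performance-adjusted outcome measure without test
# scores: predict graduation rates from the remaining controls and treat
# the residual as over- or under-performance. An illustrative
# reconstruction, not the actual Washington Monthly methodology or code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "grad_rate":  [0.55, 0.82, 0.48, 0.91, 0.63, 0.71],
    "admit_rate": [0.85, 0.30, 0.92, 0.12, 0.70, 0.55],
    "pell_share": [0.40, 0.18, 0.48, 0.12, 0.33, 0.27],
})

# Predict graduation rates from the controls that remain after dropping scores
fit = smf.ols("grad_rate ~ admit_rate + pell_share", data=df).fit()

# Positive residuals: colleges graduating more students than the controls predict
df["performance"] = fit.resid
print(df.sort_values("performance", ascending=False))
```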

Author: Robert

I am a professor at the University of Tennessee, Knoxville who studies higher education finance, accountability policies and practices, and student financial aid. All opinions expressed here are my own.