School rankings and other forms of measurement are intended to provide useful information to prospective students and their families. But as the schools themselves have learned, competing for a higher spot in the rankings can become an arms race, pulling resources away from the mission in a bid to hold or improve a score.
The U.S. News & World Report college rankings opened the floodgates
When U.S. News & World Report launched its college rankings in 1983, it was only the first of many; rankings soon became a staple of the college search process, and then of the graduate school search. Seemingly everybody piled on: Forbes, the Economist, the Wall Street Journal, the Financial Times and more dove into the rankings business, each hoping to attract the attention of students and their families. Moreover, even though we know that schools don’t change all that much year to year, the rankings jump around wildly, as Stanford’s president pointed out some time ago.
And it worked. The rankings attracted attention, put the surveys on the map and were regularly consulted by everyone in the higher education business.
Before the rankings era, schools were evaluated (if at all) on internal metrics such as reputational surveys of deans or faculty research productivity. With its MBA ranking, Businessweek changed all that by arguing that external constituents – students, recruiters and alumni – should weigh in on how well schools performed. As the magazine itself proclaims, the “best judges of MBA programs are graduating students, recent alumni, and companies that recruit MBAs. We ask those stakeholders questions about everything from jobs to salaries, classroom learning to alumni networks. Their answers form the heart of this ranking. Because these stakeholders can have differing and overlapping needs and interests, we rank schools based on five indexes that capture fundamental elements of business school education: Compensation, Learning, Networking, Entrepreneurship, and Diversity (for US schools only).”
Schools did everything they could to preserve a favored place in the rankings. All of a sudden, school administrators got a lot more interested in issues that were not part of their historic mission. Funds went into nicer student facilities and better, more comfortable places for visiting recruiters to sit. As teachers, we started to see a lot more emphasis on teaching ratings. Alumni were courted for favorable ratings. Factors such as small class sizes, GMAT scores, entry-level salaries and compensation all went into the mix. Some schools even made ranking placement part of administrators’ compensation conversations!
Despite much grumbling about nearly every aspect of the rankings, schools seemed powerless to ignore them. In 2004, Wharton tried to drop out of the system altogether by refusing to give the publication access to its student data (Harvard came along for the ride). It didn’t work – the publications worked around them and ranked them anyway. And for smaller or less well-known schools, being absent from the ranking list could mean drops in applications and less support from alumni.
The dark side of rankings – unintended consequences
Those in favor of the rankings include anybody from a highly rated school (just kidding).
There is a solid case that students and their families should have access to information about schools that administrations might not be keen to release. However, it has become clear that all this ranking fervor has produced any number of negative unintended consequences.
Rankings have inadvertently become a system of governance for university spending
As Dan Ariely has pointed out, the mere fact that something is being measured means those subject to the measurement will pay attention and try to make the metric more favorable. Mitchell Stevens, a sociology professor at Stanford University, observed: “They’re kind of a peculiar form of governance. They’re not states, they’re not official regulators, they don’t have the backing of a government agency. But they effectively serve as the governance of higher education in this country because schools essentially use them to make sense of who they are relative to each other. And families use them basically as a guide to the higher education marketplace.”
Rankings reinforce inequality
As a Politico article from 2017 concluded, universities that were “once ladders of social mobility” increasingly reinforce existing wealth. As the article put it, “America’s universities are getting two report cards this year. The first, from the Equality of Opportunity Project, brought the shocking revelation that many top universities, including Princeton and Yale, admit more students from the top 1 percent of earners than the bottom 60 percent combined. The second, from U.S. News and World Report, is due on Tuesday — with Princeton and Yale among the contenders for the top spot in the annual rankings.

“The two are related: A POLITICO review shows that the criteria used in the U.S. News rankings — a measure so closely followed in the academic world that some colleges have built them into strategic plans — create incentives for schools to favor wealthier students over less wealthy applicants.”
As the same article points out, when Georgia State made enormous progress in helping its diverse student population graduate, it fell 30 spots in the rankings – despite doing a great job of helping its students establish the basis for a better life.
Including compensation data as a big part of the ranking favors schools in expensive areas
It seems reasonable to include compensation data in the information provided to prospective applicants – especially for very expensive programs such as a top-rated MBA. But because reported salaries are rarely adjusted for cost of living, a graduate earning less in a cheaper region can look worse on paper than one earning more in an expensive city. As many have pointed out, this penalizes schools whose graduates go to work in less expensive areas – and schools whose students choose less lucrative paths.
Incentives to cheat abound
Given how much people in universities care about the rankings, the temptation to game the system can be irresistible. One of Columbia University’s own math professors drew attention to the fact that the data the school supplied to the rankings was deeply at odds with reality on campus. At Claremont McKenna, an official resigned after admitting to reporting faked test scores. In a particularly egregious case, the dean of Temple University’s business school was convicted of wire fraud and other crimes in connection with inflated figures.
The beginning of the end of rankings?
It made news this week, therefore, that major medical schools were dropping out of the ranking competition. Harvard Medical School’s dean, George Q. Daley, was the first to announce that his school would no longer participate in the rankings by providing information to the outlets. He cited the example of Dean John Manning of Harvard Law School, who had made a similar decision for his school.
In a statement released to the University, he noted that “As unintended consequences, rankings create perverse incentives for institutions to report misleading or inaccurate data, set policies to boost rankings rather than nobler objectives, or divert financial aid from students with financial need to high-scoring students with means in order to maximize ranking criteria. Ultimately, the suitability of any particular medical school for any given student is too complex, nuanced, and individualized to be served by a rigid ranked list, no matter the methodology.” He also pointed out that information allowing students to compare schools would be available in unweighted form from the Association of American Medical Colleges.
A rush of other medical schools followed suit.
So, rankings optional?
As is becoming clear, while the rankings fascinate people, draw readers, and have forced universities to be more responsive to external constituencies, they have substantial unintended consequences that shouldn’t be overlooked. So, too, with any system of measurement.
And in what might just be the early warning sign we’ve been looking for, some outlets have dropped the rankings business altogether. The Economist did so in 2022 amid “withering criticism.”
Colin Diver, former president of Reed College, says he “sure hopes” the current pressures will result in the rankings being de-emphasized. Should this happen, he says, “educators, freed from the U.S. News straitjacket, will be liberated to pursue their distinctive educational missions: to set their own priorities; to focus more intently on what students learn, which now receives no weight in the ranker’s calculus; to take chances on more promising applicants from less privileged backgrounds; and to prepare graduates for a broader range of fulfilling careers. They will, in short, return to higher education’s historical function as an engine of social mobility and service for the public good.”