by Robert Herold
The Spokesman-Review recently reported on the GU Law School's ranking according to U.S. News and World Report. The article created the false impression that these rankings are somehow scientific and beyond reproach. The Spokesman noted that the GU Law School, spiffy new building and all, had dropped into the "Fourth Tier" of ABA-accredited law schools. The blame was traced to a recent drop in the percentage of graduates passing the bar, along with a declining teacher-to-student ratio.
First published in 1983, the U.S. News rankings do matter -- to prospective students, to alumni and to the public at large. Most important, they matter to the institutions themselves. All schools seek to market a favorable image created through good rankings. Likewise, they seek to explain away, or better yet, just ignore, ratings that aren't so good. Gonzaga University has actually always scored very high in this survey. Among 33 Western institutions that fall into the master's-granting category in the 2003 U.S. News study, Gonzaga ranks fourth.
More to the point, the GU Law School ranking raises the question: Why does anyone care? Former Stanford University president Gerhard Casper, in a letter written in 1996 to U.S. News Editor James Fallows, asked just that question. He then explained why it was that he didn't care, but worried that many people did: "I am extremely skeptical that the quality of a university -- any more than the quality of a magazine -- can be measured statistically. However, even if it can, the producers of the U.S. News rankings remain far from discovering the method."
Casper then cited as evidence the relatively low overall rankings of two of America's premier public universities: the University of California at Berkeley and the University of Michigan. Along with the University of Virginia, these two schools always rank at the top of public research universities. But compared to the private institutions, they end up down in the twenties. This past year, for example, Berkeley ranked 20th, Michigan 25th. Casper would rank both in the top half dozen universities in the country. This observation led him to criticize the capriciousness inherent in the rankings. Harvard ranked first in the faculty resources category in 1995, but sank to 11th in 1996, even though faculty resources had not deteriorated. Or consider the important student-to-faculty ratio. Casper pointed out that some schools showed suspiciously significant improvement. Could it be that universities, like corporations, were "cooking the books" to make themselves look better?
Consider Cal Tech. It scores low in the whimsical "Value Added" category because its graduation rate doesn't match the expected graduation rate based on incoming SAT scores. Casper asked the obvious question: Could it be that Cal Tech, a most rigorous school, was being penalized for adding too much value?
And speaking of dropping to the dismal depths, two years ago the University of Oregon, of all schools, dropped from the second tier into the third, even though all the relevant indicators were almost identical. Then, in 2003 the Ducks climbed back into the second tier, also for no apparent reason. As for what gives, who knows?
Admissions officers, to no surprise, have figured out ways to play the formula. For example, evidence of "selectivity" gets you points. Two indicators are used to demonstrate selectivity at the undergraduate level, which translates into the presumption of high standards: 1) SAT scores of applicants; and 2) percentage of applicants admitted. Schools want those high SAT scores and like to report a low percentage of applicants admitted. To get these numbers, what you do is simply drop the requirement that applicants submit their SAT scores at all, which many schools have done. Predictably, students who have marginal SATs no longer send them in. Presto, SAT scores rise and the institution appears to be more exclusive. Of course more students will now apply, too. High school GPA and the accompanying record become the only measure of aptitude, a much more subjective measure that results in a higher percentage of students being rejected. Presto! More evidence of high standards. In the end, however, the school has not changed at all. If anything, it has lowered its standards, but U.S. News will reward it.
But we Americans do love rankings. I'm no exception. As soon as I read the Spokesman story, I ran right down (like school administrators across the nation, no doubt) and bought an issue of "America's Best Graduate Schools." To my surprise, on inspection I found a number of most interesting facts, none of which were reflected in the final rankings or in the Spokesman-Review's coverage. Turns out the GU Law School scores quite high in two important categories: 1) "Peer Assessment"; and 2) "Assessment score by lawyers and judges." In the latter category, GU not only scores higher than all fourth-tier schools, but it also scores higher than all third-tier schools (there is no second tier, rather a second quartile, if you can figure that out). But most amazing, it turns out that GU scores higher even than 39 of the 100 top-ranked law schools in the country, including schools such as Syracuse, Maryland and the University of Kentucky (you know, those other big-time basketball powers).
The real story is that Gonzaga finds itself challenged by its own recent successes. Enrollment is up, as are SAT and LSAT scores. New dorms are being constructed. Academic buildings are being expanded. Faculty lines are being reassessed. But it has all happened so quickly. While the school responds to its many new challenges, we ought not make much of these rankings -- neither high nor low. As two researchers put it in a 1998 analysis of the U.S. News report, "the use of ranks exaggerates small differences in quality among schools."
Robert Herold is a retired Eastern Washington University professor who is currently an adjunct professor of political science at Gonzaga University.
Publication date: 05/08/03