What’s your number?

A critical look at university ranking systems

In the 2012 QS World University Rankings published last Monday, McGill placed 18th in the world and ahead of all other Canadian institutions. The top three places were filled by the Massachusetts Institute of Technology, University of Cambridge, and Harvard University, respectively.

These all look like remarkable achievements, but what do they actually mean? Do the rankings really matter?

The QS rankings are based on five performance indicators. The academic peer review, which asks active academics around the world to nominate up to thirty institutions in their field (excluding their own), accounts for forty per cent of a school’s score. The number of citations per faculty member and the faculty-student ratio each make up another twenty per cent. Finally, the percentage of international students and staff, plus a recruiter review, each carry a ten per cent weight.
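These weights amount to a simple weighted sum of indicator scores. As an illustration only, here is that arithmetic with made-up indicator scores for an imaginary university (the numbers are not real QS data):

```python
# QS-style weights as described above (they sum to 1.0).
weights = {
    "academic_peer_review": 0.40,
    "citations_per_faculty": 0.20,
    "faculty_student_ratio": 0.20,
    "international_mix": 0.10,
    "recruiter_review": 0.10,
}

# Hypothetical indicator scores (0-100) for an imaginary university.
scores = {
    "academic_peer_review": 90,
    "citations_per_faculty": 80,
    "faculty_student_ratio": 70,
    "international_mix": 60,
    "recruiter_review": 85,
}

# Overall score is the weight of each indicator times its score, summed.
overall = sum(weights[k] * scores[k] for k in weights)
print(overall)  # 0.4*90 + 0.2*80 + 0.2*70 + 0.1*60 + 0.1*85 = 80.5
```

Because the peer review alone carries a 0.40 weight, a swing in that one indicator moves the overall score far more than any other component, which is the variability concern raised below.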

While this methodology seems reasonable, a closer look reveals several problems. The largest concern involves the academic peer review. Because it accounts for nearly half of an institution’s score, this single element can create great variability in the rankings from year to year. In addition, it is unlikely that academics have accurate views of how teaching actually works at universities worldwide. In an interview with the Sydney Morning Herald, Steven Schwartz, vice-chancellor of Macquarie University in Australia, commented, “It’s a bit like evaluating cars by asking pedestrians to rank them, whether they’ve actually ever been in them or driven them or even know how to drive.” The recruiter review has drawn similar criticism for introducing variability and unreliability into the results.

Another issue concerns the use of citations per faculty member. Citation counts are dominated by the natural sciences, and most cited work is published in English. This puts at a disadvantage universities that do not teach primarily in English or that focus on their arts and humanities departments.

Many also claim that the QS method depends too much on an institution’s wealth, which can indeed boost indicators by improving factors like research output and student satisfaction. But money is not everything: a big endowment cannot buy other aspects of a university’s quality of education, such as the faculty’s dedication to teaching or student diversity.

Despite these controversies, university rankings still hold strong appeal for students. Studies have shown that a rise in ranking correlates with stronger undergraduate admissions the following year: higher yields, higher entering grades, and gains on other standard measures.

As such, the results of ranking systems likely make universities eager to improve their funding and, as a consequence, their international reputations. McGill may have even more incentive, as its better-funded neighbour, the University of Toronto, is close behind in 19th place (Quebec universities are underfunded by an estimated $750 million per year compared to other Canadian universities). Nonetheless, money should not be the only means universities use to improve their quality.

Ranking systems should not be the only means students use to determine where they will be most happy. Perhaps the media should draw less attention to these unbalanced results so that universities and students can focus on what they themselves really value.