Today’s post is aimed at anyone who is currently trying to decide which college or university to attend starting next September. Many factors go into the final decision of where to go, and no doubt a lot of people will give much consideration to differences in the costs of attending different schools. Tuition varies a great deal, and there is a general positive relationship between the cost of attending a particular institution for an undergraduate degree and the ‘prestige’ of that institution, as perceived by the public. Critical to justifying the higher costs of attending some colleges is the widespread belief that highly prestigious institutions deliver a higher-quality education than what is available at the ‘less-knowns’. That belief is an illusion. It rests on the mistaken expectation that, when it comes to higher education, you get what you pay for. In some ways you do, but not in the ways that matter to the majority of education consumers — the students and their parents.
This morning, I came across an illuminating report of a study done by researchers at Wabash College, which shows that there is almost no relationship between the amount of money a college spends on education and the quality of the education it provides. The study was reported at the recent annual meeting of the Association of American Colleges and Universities, and you can find the details here.
Reading about this study inspired me to re-post something today that was originally published here several months ago, on June 25, 2012: Choosing a University for an Undergraduate Degree? Ignore the Rankings Lists. In that commentary, I explain why the prestige of a university is not significantly related to the quality of undergraduate education it delivers, or to the quality of the student experience. This is a straight cut-and-paste, so if you read that post recently, you can move on to somewhere else from here without missing anything new.
Choosing a University for an Undergraduate Degree? Ignore the Rankings Lists
Most students have options when it comes to choosing which college or university to attend for their undergraduate education. Some of the important factors to consider when making the choice include the availability of the desired program of study, the location of the institution, and the cost of attending. Different people may weigh each of these factors differently, but nearly everyone considers one or more of them when deliberating over the options.
Many people will also consider another factor — one that is much less tangible than program availability, location, or cost. I am referring here to the general reputation that an institution has among the lay population. Such reputations are often unqualified and vague. Rather than being based on genuine comparisons of the value of education or training available at different schools, they tend to be based on “what one hears” about a particular school, or how often it is mentioned in the news, or in other contexts (TV shows, movies, magazine articles, etc.). The general repute of an institution will, nonetheless, sway the decisions of many students (and parents) about which university or college to attend for a postsecondary education.
While there is no doubt that unaccredited diploma mills are a poor choice for any serious student who wants a worthwhile education, general reputations vary greatly among the thousands of accredited colleges and universities in North America. Most of us have acquired implicit respect for certain institutions merely from hearing them referred to often and in mostly positive contexts. Many of us will also make unsubstantiated generalizations about an institution based on its overall reputation. In most instances, an unwarranted overgeneralization about a college or university does not lead to any significant problems. On the other hand, it may become a problem for people who are trying to decide where to invest in a postsecondary education, especially if it leads them to make compromises in terms of the more valid considerations of program availability, location, or cost. Unfounded beliefs about the relative quality of undergraduate teaching at different schools can lead to flawed choices. Excellent opportunities may exist at schools with less recognizable names.
Okay, so subjective impressions that are based on indirect evidence may be flawed and shortsighted. But, what about those college or university-rankings lists that various organizations publish from time to time? In general, such rankings tend to be based on objective criteria, so it seems reasonable to have some faith in their validity. But, does that make them useful?
Although they can be mildly interesting to some, university rankings are virtually useless to the average person. At least, they are not useful for the purposes for which many people will actually use them. As I will explain, such rankings should be ignored when deciding where to go for an undergraduate degree in most disciplines within the sciences, social sciences, applied sciences, arts, or fine arts.
Importantly, I am referring here only to rankings that deal exclusively with universities, without including undergraduate colleges. When it comes to comprehensive rankings-lists for all U.S. colleges and universities that offer undergraduate programs, the best I have seen is published by Forbes. Unlike university-rankings lists, the undergraduate college rankings tend to be more useful for discriminating between the “good” places to go for a bachelor’s degree and the “not so good.”
I had the inspiration for today’s blog post while recently perusing the Times Higher Education World University Rankings for 2011-12. The Times Higher Education list is one of the better-known and most comprehensive rankings of global universities. Some of the other well-known university rankings lists include the US News National University Rankings, the QS World University Rankings, and the Academic Ranking of World Universities, to name just a few. Each list has a particular geographical scope, usually limited to a particular country or continent. A few are global, including the THE list. A respected and widely consulted ranking of Canadian universities is published each year by Maclean’s.
I rarely check out these types of ranking lists, even though one might expect I would be more than a bit interested in them, given that I am a university professor, I am active in research and teaching, and I spend a lot of time giving students advice about how to achieve their higher-education goals. I also have three children who are likely to be heading off to college or university over the next few years. Yes, I would seem to have a number of good reasons for wanting to know which schools are the best.
But, the truth is, I would never consider using a university-rankings list as an aid to student advising. They simply are not useful for that purpose. To understand why, one must appreciate the variety of important activities that are conducted at any global university, beyond the teaching of bachelor’s students. In my experience, most members of the public who have never been employed in a university setting grossly overestimate how much these institutions focus their resources on undergraduate teaching. Experienced academics, on the other hand, understand that it is the research mission that is most highly valued and nurtured by university administrators, and by the governments that provide public funding. Most university professors dedicate more blood, sweat, and tears to their research, and perhaps also to training of Ph.D. students, than they do to teaching undergraduate students. Universities tend to hire new faculty members on the basis of their research profiles, and give somewhat less consideration to teaching ability. In other words, professors are generally hired to do research, and expected to teach, whether they are good at teaching or not. Of course, this is also true at some of the “highest ranked” universities. Most university professors have never received any formal training on how to teach effectively. I often tell students that this explains why so many of us are lousy teachers!
Back to the rankings lists… Let’s be straight on what I’m saying, here. My position is that the relative ranking of universities on these lists should not be used to decide which schools are likely to provide a better undergraduate education. The simplest reason why university rankings should not be used to decide which school to attend is because those rankings are based on many dimensions or aspects of a university that have very little, or nothing at all, to do with content or delivery of undergraduate programs. Below, I’ll say more about the types of factors that go into the compilation of a university-ranking list, using the methodology behind the THE list, as a general example.
For now, let me make the point that only around 5% of a university’s score on the THE ranking is based on factors that are directly relevant to undergraduate education. The other 95% of a university’s score is based on factors that have little or no relevance to determining the quality or delivery of undergraduate education available to its students. Consider the central missions of any global university — research, teaching, knowledge transfer, and international activity.
The following is an overview of how THE ranking scores are determined.
There are 13 performance indicators, grouped into 5 areas:
Teaching — the learning environment (worth 30 per cent of the overall ranking score)
Research — volume, income and reputation (worth 30 per cent)
Citations — research influence (worth 30 per cent)
Industry income — innovation (worth 2.5 per cent)
International outlook — staff, students and research (worth 7.5 per cent).
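As a quick back-of-the-envelope check, the five area weights above do sum to 100%. The short Python sketch below simply tallies the figures quoted from the THE methodology; the dictionary labels are mine, and nothing beyond the quoted weights is assumed:

```python
# Tally the THE 2011-12 area weights quoted in the list above.
# The weights are the only "data" here; labels are just for readability.
area_weights = {
    "Teaching": 30.0,
    "Research": 30.0,
    "Citations": 30.0,
    "Industry income": 2.5,
    "International outlook": 7.5,
}

total = sum(area_weights.values())
print(total)  # 100.0
```

Note that Teaching, Research, and Citations each carry twelve times the weight of Industry income — and the Citations and Research areas together already account for 60% of the score before any teaching indicator is counted.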
Factors related to teaching account for less than one-third of a university’s overall score. THE also provides alternative rankings based on the specific performance areas. So, what if we just look at the rankings based only on the Teaching indicators? Well, let’s look at how they determine this particular 30% of the overall score — you will see that only a tiny fraction of it comes from undergraduate teaching considerations.
Half of the Teaching score is based on the results of a survey. Quoting from the methodological description on the THE website:
“Thomson Reuters carried out its Academic Reputation Survey – a worldwide poll of experienced scholars – in spring 2011. It examined the perceived prestige of institutions in both research and teaching… The results of the survey with regard to teaching make up 15% of the overall rankings score.”
There are two points I want to make about this measurement. First, notice that it’s based on the “perceived prestige” of a university in teaching and research. I would venture to say that it’s not too difficult for an experienced scholar to judge the prestige of a university based on the quality and quantity of research conducted, because there are many visible indicators of research funding, activity, and output. Unless someone was once a student at a particular university, however, it is unlikely that he or she will have a clear view of the quality of undergraduate teaching that goes on at most universities, other than the ones with which they are currently associated. Admittedly, some scholars happen to do research or administrative work that, in one way or another, gives them a close enough vantage point on a few universities that they may be able to provide valid assessments of the general quality of undergraduate teaching. But individuals with real insight into the quality of undergraduate teaching at different universities are exceedingly rare.
The second limitation I want to point out about the “perceived prestige” measurement is that when pondering their views on the quality of teaching that exists in universities at which they have never been a student or instructor themselves, most experienced academics will consider what they know about the “products” of doctoral-level training, or even the postdoctoral training environment. These products, of course, are the people receiving a Ph.D., many of whom go on to have significant impact in various areas of research, engineering, or some other type of creative production. In other words, perceptions about teaching quality are based on perceptions of postgraduate training, not undergraduate teaching.
Other factors that contribute to the Teaching category include:
1) The ratio of Ph.D. to bachelor’s degrees awarded by each university, worth 2.25% of the overall ranking score.
2) The number of Ph.D.s awarded relative to the number of faculty members (i.e., academic staff) at the university, worth 6% of the overall score.
3) “Institutional income scaled against academic staff numbers, … adjusted for purchasing-power parity so that all nations compete on a level playing field, …” This is worth 2.25% of a university’s overall score.
Factors 1 and 2 are more relevant to the training of graduate students. Most students join the workforce after college, so only a small proportion of undergraduates would have anything at stake in the quality of graduate training available where they choose to earn their bachelor’s degree. Although some undergraduates may appreciate being in an environment that includes graduate students, most do not care. Factor 3 is about money; having more of it may contribute in some ways to having superior undergraduate teaching resources, but those are seldom the spending priorities for a university these days. In other words, none of the factors considered so far are valid indicators of the quality of undergraduate teaching.
If you’re keeping track, you may have noticed that we still need to account for approximately 5% of the overall THE ranking score. Finally, we’re getting to something that’s actually relevant to predicting the quality of undergraduate teaching — or at least, the quality of the undergraduate learning experience:
“Our teaching and learning category also employs a staff-to-student ratio as a simple proxy for teaching quality – suggesting that where there is a low ratio of students to staff, the former will get the personal attention they require from the institution’s faculty,…”
As the folks at THE are quick to point out, “… this measure serves as only a crude proxy – after all, you cannot judge the quality of the food in a restaurant by the number of waiters employed to serve it…” Accordingly, it accounts for just 4.5 per cent of the overall ranking scores.
Despite its crudeness, the staff-to-student ratio is, in my opinion, the only factor contributing to the overall ranking score that is clearly relevant to determining the quality of undergraduate education for the majority of university students.
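Pulling the Teaching sub-weights together makes the arithmetic plain. The sketch below uses only the percentages quoted from the THE methodology above (the labels are mine); it confirms that the sub-indicators sum to the Teaching area’s 30%, and that the one undergraduate-relevant indicator — the staff-to-student ratio — is the “around 5%” figure cited earlier:

```python
# Teaching sub-indicator weights, expressed as percentages of the
# OVERALL ranking score, as quoted from the THE 2011-12 methodology.
teaching_subweights = {
    "reputation survey (teaching)": 15.0,   # half of the Teaching area
    "PhD-to-bachelor's degree ratio": 2.25,
    "PhDs awarded per academic staff": 6.0,
    "institutional income per staff": 2.25,
    "staff-to-student ratio": 4.5,          # the only undergrad-relevant one
}

total_teaching = sum(teaching_subweights.values())
undergrad_relevant = teaching_subweights["staff-to-student ratio"]

print(total_teaching)      # 30.0 -- matches the Teaching area weight
print(undergrad_relevant)  # 4.5  -- the "around 5%" cited earlier
```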
I hope my analysis of the methods behind the Times Higher Education global university rankings makes the point that universities exist for the sake of much more than just teaching undergraduate students. University professors are hired to do research, and expected to teach — not the other way around. Things are somewhat different at most liberal arts colleges, however, so it’s important to keep in mind that I’m talking about universities here. Of course, different organizations use different formulae to compile their university-rankings lists, so there is some variation in how relevant the rankings are to the concerns of undergraduate students. But it’s important to remember that if the rankings compare universities, and not just undergraduate colleges, much more weight will be given to various aspects of the research mission, including doctoral-level training.
The problem I’m getting at is the way these rankings lists end up being used by many regular people to make important decisions that should not be made on the basis of such rankings. Don’t get me wrong — it’s not that I think the rankings lists are without value, nor am I about to criticize the methods used to compile them. They are relevant for regular folks, for certain reasons. But none of those reasons have much of anything to do with the undergraduate training mission of the typical global university. These rankings lists can contribute to the impressions that typical consumers have about the “quality” of particular universities. This is fair enough. After all, there is a lot of good research and objective analysis behind some of the rankings lists. The Times Higher Education World University Rankings are a fine example of that.