
Choosing a University for a Bachelor’s degree in Psychology? Here Are More Reasons to Ignore the Global University Rankings Lists.

My previous commentary argued that university-rankings lists should be ignored when deciding where to attend for a bachelor’s degree in most fields of study. The reasons basically boiled down to this: University rankings are based primarily on research activities and other factors that are unrelated to teaching undergraduate students. Yes, the “greatest universities” have lots of great scholars, who do great research, make great discoveries, and other great stuff. But, that doesn’t mean these “greats” are passing on anything special to average students they meet in the classroom.

In fact, faculty members who do tremendous amounts of research often do very little undergraduate teaching. For example, one of my former colleagues (now retired) is internationally renowned, and her large grants and other research accomplishments over a few decades did much to advance the prestige of our university. But, during the 16 years that our careers overlapped, I don’t think she taught a single undergraduate class. She did give seminar classes to graduate students, but she did no undergraduate teaching that I noticed. This is the way she wanted it, and she was able to swing things that way because she was an outstanding researcher. I am sure she did her fair share of undergraduate teaching earlier in her career, but later, when she was widely recognized as a research superstar, she was able to “opt out” of teaching bachelor’s students. The majority of my colleagues would similarly choose not to teach undergraduate classes if that option were available. Most of us find our research activities and our training of new researchers (i.e., Ph.D. students) far more enjoyable than lecturing to undergraduate students. Most of us, but not all of us, feel that way. And I’m not just talking about my own university, here. It’s like that everywhere, especially in the most research-intensive disciplines.

Many people, including college and university students, do not realize it, but the majority of university professors have little desire to deliver lectures to undergraduate classes. As I mentioned in my previous post, most universities hire professors to do research, and expect them to teach — it’s not the other way around. (This does not apply so much to liberal arts colleges.) The point I’m getting at is simply that, in general, the more a particular professor contributes to research or to the training of graduate students, the less he or she will teach undergraduate students. And remember, the reputation or standing of a university is based mostly on research-related factors, and the only teaching and training that contributes significantly to a university’s reputation is doctoral-level training. This is just another example of the disconnect between the position a university holds on a university-rankings list and the quality of undergraduate teaching and training that it delivers.

Consumers should understand these things when trying to decide where to go for a bachelor’s degree. One should ignore the general reputation or prestige of the university, because it is irrelevant to finding the best place for your undergraduate studies. The most relevant factors are geographical location, costs, and the availability of the desired program of study. So, how does one choose among multiple schools that happen to be in the same city, all of which offer the relevant undergraduate program, and with very similar tuition and other associated costs? Are all the options going to be equally attractive? No, it’s likely there are some significant differences in what a student would experience at the different schools. But, in order to discover what those things are before deciding where to enroll, it is necessary to make a personal visit to the schools in question, and ask specific questions of the right people.

Now, I will give a specific example of how focusing on the general reputation of one university relative to another can lead thousands of students to make a less-than-optimal decision when choosing a university. Full disclosure — I am a faculty member in the Psychology department at Concordia University in Montréal, and I am going to be comparing some features of our undergraduate psychology programs to those offered at McGill University.

Of course, McGill is recognized around the world as a “top tier” university. Some refer to it as the Harvard of Canada. McGill is almost 200 years old. Concordia University was founded in the mid-1970s. I would venture to say that most people outside of Canada have never heard of Concordia University.

Each year, about 500-700 new students begin a bachelor’s program in Psychology at either McGill or Concordia. Both of these universities have relatively large Psychology departments, with a few dozen faculty members, and undergraduate enrollments of over a thousand students. There are two other large universities in Montréal, but they are French-language institutions, so students who want to attend an English university and study psychology have to choose between McGill and Concordia.

Many students with excellent entry grades know they will be accepted at McGill, and they don’t even apply to Concordia. Many other students who have good grades will apply to and be accepted at both schools, but most of them will decide to attend McGill. I suspect that if you ask these students why they chose McGill when they could have gone to Concordia instead, most will say something about wanting to go to the “best university.” That rationale is lame, and it clearly reflects the difference in general reputation of these two universities among the public.

The undergraduate Psychology programs at Concordia do just as well as McGill’s at preparing students to join the general workforce after obtaining their bachelor’s degree. Concordia may even be doing a better job at this; it’s hard to assess, because students at either school will take the same kinds of courses, taught by equally qualified and experienced professors. Nothing explicitly taught to Psychology majors at McGill or Concordia is unique to either program. Moreover, the undergraduate Psychology curricula at Concordia and McGill are basically the same as at any other major university in the U.S. or Canada (or Australia, New Zealand, or the U.K.). If you plan to join the workforce after earning a bachelor’s degree, then either Concordia or McGill is equally capable of preparing you for that eventuality. In fact, the knowledge acquired as a bachelor’s student in Psychology will be generally the same at any accredited university.

But, there are other things to consider, of course. Among the most relevant are factors that influence satisfaction with the student-experience at the particular universities in question. For example, class size and teacher-to-student ratios tend to be important determinants of student satisfaction.

Most people prefer classes with 25–50 other students rather than classes with 100–200 classmates. There are a few reasons why students tend to prefer smaller classes, but I’m not going to go into all of that here. I think most people appreciate that smaller is better when it comes to class size. So, in terms of this factor, undergraduate Psychology at Concordia gets the nod over the same program of study at McGill. Either school is equally capable of preparing a student who plans to join the workforce after the bachelor’s degree, but the general experience at Concordia will be more enjoyable for most students. To me, that seems like an important consideration when choosing where to go to university. This factor can even impact the quality of learning that occurs, because students who are enjoying their classes are more likely to attend them.

Of course, a significant proportion, though still a minority, of those students who earn a bachelor’s degree in Psychology will decide they want to be psychologists, and they will therefore need to go to graduate school to earn a doctoral degree. For these students who plan to go on to the Ph.D., it’s relevant to consider certain additional features of the two Psychology departments being compared — features that influence how well the undergraduate programs prepare their students to get into graduate school and to succeed once there.

As discussed in several other places on this blog, the most important thing that Psychology students need to do in order to get into a good Ph.D. program is acquire a lot of research experience. Accordingly, the extent to which students have opportunities to participate in their professors’ research should be a major factor when choosing between two potential schools for a bachelor’s in Psychology. Here again, Concordia gets the nod over McGill. The Psychology department at Concordia has a culture of involving undergraduates in research, beyond the standard option of being able to do an Honors thesis in the final year of the bachelor’s degree. Nearly every faculty member in the Psychology department at Concordia has a few volunteer research assistants working in their labs at any given time, and almost all of these volunteers are Psychology students who are trying to position themselves to be able to get into a master’s or Ph.D. program after the bachelor’s. And the strategy is highly successful — Concordia graduates have a very high success rate when it comes to getting into graduate school. I can’t say anything certain about the prospects for the typical McGill graduate in Psychology, because I don’t have access to the necessary information, but I am quite confident that they do not, in general, have the same success at getting into graduate school as do Concordia students. Of course, many McGill students will also succeed in getting into graduate school, but many will fail simply because they did not get the same opportunities to gain research experience and set up effective letters of recommendation as the Concordia students.

Overall then, for most students looking to study Psychology (in English) at a university in Montreal, Concordia will be a more satisfying choice than McGill. Not for all, but for most. This may be one of the reasons why I often meet Psychology students at Concordia who began their undergraduate degree at McGill but switched to Concordia after talking to friends who were already at Concordia. I have never heard of an undergraduate student switching from Psychology at Concordia to McGill. Although it’s possible that it happens, I have not heard of a single case in 18 years. One thing is for sure, I hear from a lot of Psychology students at Concordia who are glad to be there instead of at McGill.

Okay, I didn’t really set out here to promote Concordia University and its Psychology department. And I certainly don’t want to bash McGill as an institution in any way. My purpose here has been to show that the things that really matter in determining one’s satisfaction with a university education are not the same factors that contribute to the public perception or reputation of a particular institution. The global ranking of a university says nothing about what happens at the level of particular programs in specific disciplines. Almost any university will have some areas of strength as well as some areas of mediocrity. These variations within an institution are totally obscured by rankings lists.

Choosing a University for an Undergraduate Degree? Ignore the Rankings Lists

Most students have options when it comes to choosing which college or university to attend for their undergraduate education. Some of the important factors to consider when making choices include the availability of the desired program of study, the location of the institution, and the costs of attending. Different people may weigh each of these factors differently, but nearly everyone considers one or more of them when deliberating over the options.

Many people will also consider another factor — one that is much less tangible than program availability, location, or cost. I am referring here to the general reputation that an institution has among the lay population. Such reputations are often unqualified and vague. Rather than being based on genuine comparisons of the value of education or training available at different schools, they tend to be based on “what one hears” about a particular school, or how often it is mentioned in the news, or in other contexts (TV shows, movies, magazine articles, etc.). The general repute of an institution will, nonetheless, sway the decisions of many students (and parents) about which university or college to attend for a postsecondary education.

While there is no doubt that unaccredited diploma mills are a poor choice for any serious student who wants a worthwhile education, general reputations vary greatly among the thousands of accredited colleges and universities in North America. Most of us have acquired implicit respect for certain institutions merely from hearing them referred to often and in mostly positive contexts. Many of us will also make unsubstantiated generalizations about an institution based on its overall reputation. In most instances, an unwarranted overgeneralization about a college or university does not lead to any significant problems. On the other hand, it may become a problem for people who are trying to decide where to invest in a postsecondary education, especially if it leads them to make compromises in terms of the more valid considerations of program availability, location, or cost. Unfounded beliefs about the relative quality of undergraduate teaching at different schools can lead to flawed choices. Excellent opportunities may exist at schools with less recognizable names.

Okay, so subjective impressions that are based on indirect evidence may be flawed and shortsighted. But, what about those college or university-rankings lists that various organizations publish from time to time? In general, such rankings tend to be based on objective criteria, so it seems reasonable to have some faith in their validity. But, does that make them useful?

Although they can be mildly interesting to some, university rankings are virtually useless to the average person. At least, they are not useful for those purposes for which many people will actually use them. As I will explain, such rankings should be ignored when deciding where to go for an undergraduate degree in most disciplines within the sciences, social sciences, applied sciences, arts, or fine arts.

Importantly, I am referring here only to rankings that deal exclusively with universities, without including undergraduate colleges. When it comes to comprehensive rankings-lists for all U.S. colleges and universities that offer undergraduate programs, the best I have seen is published by Forbes. Unlike university-rankings lists, the undergraduate college rankings tend to be more useful for discriminating between the “good” places to go for a bachelor’s degree and the “not so good.”

I had the inspiration for today’s blog post while recently perusing the Times Higher Education World University Rankings for 2011-12. The Times Higher Education list is one of the better known and most comprehensive rankings of global universities. Some of the other well-known university-rankings lists include the US News National University Rankings, the QS World University Rankings, and the Academic Ranking of World Universities, to name just a few. Each list has a particular geographical scope, which is usually limited to a particular country or continent. A few are global, including the THE list. A respected and widely consulted ranking of Canadian universities is published each year by Maclean’s.

I rarely check out these types of ranking lists, even though one might expect I would be more than a bit interested in them, given that I am a university professor, I am active in research and teaching, and I spend a lot of time giving students advice about how to achieve their higher-education goals. I also have three children who are likely to be heading off to college or university over the next few years. Yes, I would seem to have a number of good reasons for wanting to know which schools are the best.

But, the truth is, I would never consider using a university-rankings list as an aid to student advising. They simply are not useful for that purpose. To understand why, one must appreciate the variety of important activities that are conducted at any global university, beyond the teaching of bachelor’s students. In my experience, most members of the public who have never been employed in a university setting grossly overestimate how much these institutions focus their resources on undergraduate teaching. Experienced academics, on the other hand, understand that it is the research mission that is most highly valued and nurtured by university administrators, and by the governments that provide public funding. Most university professors dedicate more blood, sweat, and tears to their research, and perhaps also to training of Ph.D. students, than they do to teaching undergraduate students. Universities tend to hire new faculty members on the basis of their research profiles, and give somewhat less consideration to teaching ability. In other words, professors are generally hired to do research, and expected to teach, whether they are good at teaching or not. Of course, this is also true at some of the “highest ranked” universities. Most university professors have never received any formal training on how to teach effectively. I often tell students that this explains why so many of us are lousy teachers!

Back to the rankings lists… Let’s be straight on what I’m saying, here. My position is that the relative ranking of universities on these lists should not be used to decide which schools are likely to provide a better undergraduate education. The simplest reason why university rankings should not be used to decide which school to attend is because those rankings are based on many dimensions or aspects of a university that have very little, or nothing at all, to do with content or delivery of undergraduate programs. Below, I’ll say more about the types of factors that go into the compilation of a university-ranking list, using the methodology behind the THE list, as a general example.

For now, let me make the point that only around 5% of a university’s score on the THE ranking is based on factors that are directly relevant to undergraduate education. The other 95% of a university’s score is based on factors that have little or no relevance to determining the quality or delivery of undergraduate education available to its students. Consider the central missions of any global university — research, teaching, knowledge transfer, and international activity.

The following is an overview of how THE ranking scores are determined.

There are 13 performance indicators, grouped into 5 areas:

Teaching — the learning environment (worth 30 per cent of the overall ranking score)

Research — volume, income and reputation (worth 30 per cent)

Citations — research influence (worth 30 per cent)

Industry income — innovation (worth 2.5 per cent)

International outlook — staff, students and research (worth 7.5 per cent).

Factors related to teaching account for less than one-third of a university’s overall score. THE also provides alternative rankings based on the specific performance areas. So, what if we just look at the rankings based only on the Teaching indicators? Well, let’s look at how they determine this particular 30% of the overall score — you will see that only a tiny fraction of it comes from undergraduate teaching considerations.

Half of the Teaching score is based on the results of a survey. Quoting from the methodological description on the THE website:

            “Thomson Reuters carried out its Academic Reputation Survey – a worldwide poll of experienced scholars – in spring 2011. It examined the perceived prestige of institutions in both research and teaching… The results of the survey with regard to teaching make up 15% of the overall rankings score.”

There are two points I want to make about this measurement: First, notice that it’s based on the “perceived prestige” of a university in teaching and research. I would venture to say that it’s not too difficult for an experienced scholar to judge the prestige of a university based on quality and quantity of research conducted, because there are many visible indicators of research funding, activity, and output. Unless someone was once a student at a particular university, however, it is unlikely that he or she will have a clear view of the quality of undergraduate teaching that goes on at most universities, other than the ones with which they are currently associated. Admittedly, there are some scholars who happen to do research or administrative work, which, in one way or another, gives them a close enough vantage point to a few universities that they may be able to provide valid assessments in terms of the general quality of undergraduate teaching. But, individuals with real insight into the quality of undergraduate teaching at different universities are exceedingly rare.

The second limitation I want to point out about the “perceived prestige” measurement is that when pondering their views on the quality of teaching that exists in universities at which they have never been a student or instructor themselves, most experienced academics will consider what they know about the “products” of doctoral-level training, or even the postdoctoral training environment. These products, of course, are the people receiving a Ph.D., many of whom go on to have significant impact in various areas of research, engineering, or some other type of creative production. In other words, perceptions about teaching quality are based on perceptions of postgraduate training, not undergraduate teaching.

Other factors that contribute to the Teaching category include: 1) the ratio of Ph.D. to bachelor’s degrees awarded by each university, which is worth 2.25% of the overall ranking score; 2) the number of Ph.D.s awarded relative to the number of faculty members (i.e., academic staff) at the university, worth 6% of the overall score; and 3) “institutional income scaled against academic staff numbers, … adjusted for purchasing-power parity so that all nations compete on a level playing field, …” This is worth 2.25% of a university’s overall score.

Factors 1 and 2 are more relevant to training of graduate students. Most students join the workforce after college, so only a small proportion of undergraduates would have anything at stake in the quality of graduate training available where they choose to earn their bachelor’s degree. Although some undergraduates may appreciate being in an environment that includes graduate students, most do not care. Factor 3 is about money; having more of it may contribute in some ways to having superior undergraduate teaching resources, but those are seldom the spending priorities for a university, these days. In other words, none of the factors that have been considered so far are valid indicators of the quality of undergraduate teaching.
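With several percentages in play across the last few paragraphs, it may help to tally the arithmetic. This is just a sketch using the figures quoted above from the THE 2011-12 methodology; the dictionary labels are my own shorthand, not official THE terminology:

```python
# Tally of the THE 2011-12 weights quoted above (percent of overall score).
overall = {
    "Teaching": 30.0,
    "Research": 30.0,
    "Citations": 30.0,
    "Industry income": 2.5,
    "International outlook": 7.5,
}
assert sum(overall.values()) == 100.0  # the five areas cover the whole score

# Sub-components of the 30% Teaching category discussed so far:
teaching_parts = {
    "reputation survey (teaching)": 15.0,
    "PhD-to-bachelor's degree ratio": 2.25,
    "PhDs awarded per staff member": 6.0,
    "institutional income per staff member": 2.25,
}
remainder = overall["Teaching"] - sum(teaching_parts.values())
print(remainder)  # prints 4.5 (the share of the overall score not yet accounted for)
```

That leftover 4.5% is the source of the “approximately 5%” figure I mentioned earlier.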

If you’re keeping track, you may have noticed that we still need to account for approximately 5% of the overall THE ranking score. Finally, we’re getting to something that’s actually relevant to predicting the quality of undergraduate teaching — or at least, the quality of the undergraduate learning experience:

            “Our teaching and learning category also employs a staff-to-student ratio as a simple proxy for teaching quality – suggesting that where there is a low ratio of students to staff, the former will get the personal attention they require from the institution’s faculty,…”

As the folks at THE are quick to point out, “… this measure serves as only a crude proxy – after all, you cannot judge the quality of the food in a restaurant by the number of waiters employed to serve it…” Accordingly, it accounts for just 4.5 per cent of the overall ranking scores.

Despite its crudeness, the staff-to-student ratio is, in my opinion, the only factor contributing to the overall ranking score that is clearly relevant to determining the quality of undergraduate education for the majority of university students.

I hope my analysis of the methods behind the Times Higher Education global university rankings makes the point that universities exist for the sake of much more than just teaching undergraduate students. University professors are hired to do research, and expected to teach — not the other way around. Things are somewhat different at most liberal arts colleges, however, so it’s important to keep in mind that I’m talking about universities, here. Of course, different organizations use different formulae to compile their university-rankings lists, so there is some variation in terms of how relevant the rankings are to the concerns of undergraduate students. But, it’s important to remember that if the rankings are comparing universities and not just undergraduate colleges, much more weight will be given to various aspects of the research mission, including doctoral-level training.

The problem I’m getting at is the way these rankings lists end up being used by many regular people to make important decisions that should not be made on the basis of such rankings. Don’t get me wrong — it’s not that I don’t think the rankings lists are useful, nor am I about to criticize the methods that are used to compile them. They are relevant for regular folks, for certain reasons. But, none of those reasons have much of anything to do with the undergraduate training mission of the typical global university. These rankings lists can contribute to the impressions that typical consumers have about the “quality” of particular universities. This is fair enough. After all, there is a lot of good research and objective analysis behind some of rankings lists. The Times Higher Education World University ranking is a fine example of that.