Paying the Price for Higher Education: Don’t Do It!

Today’s post is aimed at helping anyone who is currently trying to decide which college or university to attend next September. Many factors go into the final decision of where to go, and no doubt a lot of people will give much consideration to differences in the costs of attending different schools. Tuition varies a great deal, and there is a general positive relationship between the cost of attending a particular institution for an undergraduate degree and the ‘prestige’ of that institution, as perceived by the public. Of course, critical to justifying the higher costs of attending some colleges, there is also a widespread belief that highly prestigious institutions deliver a higher-quality education than the ‘less-knowns’. This latter belief is an illusion. It is based on the mistaken expectation that you get what you pay for when it comes to higher education. In some ways you do, but not in the ways that are relevant to the majority of education consumers — the students and their parents.

This morning, I came across an illuminating report of a study done by researchers at Wabash College, which shows that there is almost no relationship between the amount of money a college spends on education and the quality of the education it provides. The study was reported at the recent annual meeting of the Association of American Colleges and Universities, and you can find the details here.

Reading about this study inspired me to re-post something today that was originally published here several months ago, on June 25, 2012: Choosing a University for an Undergraduate Degree? Ignore the Rankings Lists. In this commentary, I explain why the prestige of a university is not significantly related to the quality of undergraduate education it delivers, or to the quality of the student experience. This is a total cut-and-paste, so if you read that post recently, you can move on to somewhere else from here without missing anything new.


Choosing a University for an Undergraduate Degree? Ignore the Rankings Lists

Most students have options when it comes to choosing which college or university to attend for their undergraduate education. Some of the important factors to consider when making choices include the availability of the desired program of study, the location of the institution, and the costs of attending. Different people may weigh each of these factors differently, but nearly everyone considers one or more of them when deliberating over the options.

Many people will also consider another factor — one that is much less tangible than program availability, location, or cost. I am referring here to the general reputation that an institution has among the lay population. Such reputations are often unqualified and vague. Rather than being based on genuine comparisons of the value of education or training available at different schools, they tend to be based on “what one hears” about a particular school, or how often it is mentioned in the news, or in other contexts (TV shows, movies, magazine articles, etc.). The general repute of an institution will, nonetheless, sway the decisions of many students (and parents) about which university or college to attend for a postsecondary education.

While there is no doubt that unaccredited diploma mills are a poor choice for any serious student who wants a worthwhile education, general reputations vary greatly among the thousands of accredited colleges and universities in North America. Most of us have acquired implicit respect for certain institutions merely from hearing them referred to often and in mostly positive contexts. Many of us will also make unsubstantiated generalizations about an institution based on its overall reputation. In most instances, an unwarranted overgeneralization about a college or university does not lead to any significant problems. On the other hand, it may become a problem for people who are trying to decide where to invest in a postsecondary education, especially if it leads them to make compromises in terms of the more valid considerations of program availability, location, or cost. Unfounded beliefs about the relative quality of undergraduate teaching at different schools can lead to flawed choices. Excellent opportunities may exist at schools with less recognizable names.

Okay, so subjective impressions that are based on indirect evidence may be flawed and shortsighted. But, what about those college or university-rankings lists that various organizations publish from time to time? In general, such rankings tend to be based on objective criteria, so it seems reasonable to have some faith in their validity. But, does that make them useful?

Although they can be mildly interesting to some, university rankings are virtually useless to the average person. At least, they are not useful for those purposes for which many people will actually use them. As I will explain, such rankings should be ignored when deciding where to go for an undergraduate degree in most disciplines within the sciences, social sciences, applied sciences, arts, or fine arts.

Importantly, I am referring here only to rankings that deal exclusively with universities, without including undergraduate colleges. When it comes to comprehensive rankings-lists for all U.S. colleges and universities that offer undergraduate programs, the best I have seen is published by Forbes. Unlike university-rankings lists, the undergraduate college rankings tend to be more useful for discriminating between the “good” places to go for a bachelor’s degree and the “not so good.”

I had the inspiration for today’s blog post while recently perusing the Times Higher Education World University Rankings for 2011-12. The Times Higher Education list is one of the better-known and most comprehensive rankings of global universities. Some of the other well-known university rankings lists include the US News National University Rankings, the QS World University Rankings, and the Academic Ranking of World Universities, to name just a few. Each list has a particular geographical scope, which is usually limited to a particular country or continent. A few are global, including the THE list. A respected and widely consulted ranking of Canadian universities is published each year by Maclean’s.

I rarely check out these types of ranking lists, even though one might expect I would be more than a bit interested in them, given that I am a university professor, I am active in research and teaching, and I spend a lot of time giving students advice about how to achieve their higher-education goals. I also have three children who are likely to be heading off to college or university over the next few years. Yes, I would seem to have a number of good reasons for wanting to know which schools are the best.

But, the truth is, I would never consider using a university-rankings list as an aid to student advising. They simply are not useful for that purpose. To understand why, one must appreciate the variety of important activities that are conducted at any global university, beyond the teaching of bachelor’s students. In my experience, most members of the public who have never been employed in a university setting grossly overestimate how much these institutions focus their resources on undergraduate teaching. Experienced academics, on the other hand, understand that it is the research mission that is most highly valued and nurtured by university administrators, and by the governments that provide public funding. Most university professors dedicate more blood, sweat, and tears to their research, and perhaps also to training of Ph.D. students, than they do to teaching undergraduate students. Universities tend to hire new faculty members on the basis of their research profiles, and give somewhat less consideration to teaching ability. In other words, professors are generally hired to do research, and expected to teach, whether they are good at teaching or not. Of course, this is also true at some of the “highest ranked” universities. Most university professors have never received any formal training on how to teach effectively. I often tell students that this explains why so many of us are lousy teachers!

Back to the rankings lists… Let’s be straight on what I’m saying, here. My position is that the relative ranking of universities on these lists should not be used to decide which schools are likely to provide a better undergraduate education. The simplest reason why university rankings should not be used to decide which school to attend is because those rankings are based on many dimensions or aspects of a university that have very little, or nothing at all, to do with content or delivery of undergraduate programs. Below, I’ll say more about the types of factors that go into the compilation of a university-ranking list, using the methodology behind the THE list, as a general example.

For now, let me make the point that only around 5% of a university’s score on the THE ranking is based on factors that are directly relevant to undergraduate education. The other 95% of a university’s score is based on factors that have little or no relevance to determining the quality or delivery of undergraduate education available to its students. Consider the central missions of any global university — research, teaching, knowledge transfer, and international activity.

The following is an overview of how THE ranking scores are determined.

There are 13 performance indicators, grouped into 5 areas:

Teaching — the learning environment (worth 30 per cent of the overall ranking score)

Research — volume, income and reputation (worth 30 per cent)

Citations — research influence (worth 30 per cent)

Industry income — innovation (worth 2.5 per cent)

International outlook — staff, students and research (worth 7.5 per cent).

Factors related to teaching account for less than one-third of a university’s overall score. THE also provides alternative rankings based on the specific performance areas. So, what if we just look at the rankings based only on the Teaching indicators? Well, let’s look at how they determine this particular 30% of the overall score — you will see that only a tiny fraction of it comes from undergraduate teaching considerations.

Half of the Teaching score is based on the results of a survey. Quoting from the methodological description on the THE website:

            “Thomson Reuters carried out its Academic Reputation Survey – a worldwide poll of experienced scholars – in spring 2011. It examined the perceived prestige of institutions in both research and teaching… The results of the survey with regard to teaching make up 15% of the overall rankings score.”

There are two points I want to make about this measurement: First, notice that it’s based on the “perceived prestige” of a university in teaching and research. I would venture to say that it’s not too difficult for an experienced scholar to judge the prestige of a university based on quality and quantity of research conducted, because there are many visible indicators of research funding, activity, and output. Unless someone was once a student at a particular university, however, it is unlikely that he or she will have a clear view of the quality of undergraduate teaching that goes on at most universities, other than the ones with which they are currently associated. Admittedly, there are some scholars who happen to do research or administrative work, which, in one way or another, gives them a close enough vantage point to a few universities that they may be able to provide valid assessments in terms of the general quality of undergraduate teaching. But, individuals with real insight into the quality of undergraduate teaching at different universities are exceedingly rare.

The second limitation I want to point out about the “perceived prestige” measurement is that when pondering their views on the quality of teaching that exists in universities at which they have never been a student or instructor themselves, most experienced academics will consider what they know about the “products” of doctoral-level training, or even the postdoctoral training environment. These products, of course, are the people receiving a Ph.D., many of whom go on to have significant impact in various areas of research, engineering, or some other type of creative production. In other words, perceptions about teaching quality are based on perceptions of postgraduate training, not undergraduate teaching.

Other factors that contribute to the Teaching category include: 1) the ratio of Ph.D. to bachelor’s degrees awarded by each university, which is worth 2.25% of the overall ranking score; 2) the number of Ph.D.s awarded relative to the number of faculty members (i.e., academic staff) at the university, worth 6% of the overall score; and 3) “institutional income scaled against academic staff numbers … adjusted for purchasing-power parity so that all nations compete on a level playing field,” which is worth 2.25% of a university’s overall score.

Factors 1 and 2 are more relevant to the training of graduate students. Most students join the workforce after college, so only a small proportion of undergraduates would have anything at stake in the quality of graduate training available where they choose to earn their bachelor’s degree. Although some undergraduates may appreciate being in an environment that includes graduate students, most do not care. Factor 3 is about money; having more of it may contribute in some ways to having superior undergraduate teaching resources, but those are seldom the spending priorities for a university these days. In other words, none of the factors that have been considered so far are valid indicators of the quality of undergraduate teaching.

If you’re keeping track, you may have noticed that we still need to account for approximately 5% of the overall THE ranking score. Finally, we’re getting to something that’s actually relevant to predicting the quality of undergraduate teaching — or at least, the quality of the undergraduate learning experience:

            “Our teaching and learning category also employs a staff-to-student ratio as a simple proxy for teaching quality – suggesting that where there is a low ratio of students to staff, the former will get the personal attention they require from the institution’s faculty,…”

As the folks at THE are quick to point out, “… this measure serves as only a crude proxy – after all, you cannot judge the quality of the food in a restaurant by the number of waiters employed to serve it…” Accordingly, it accounts for just 4.5 per cent of the overall ranking scores.
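For readers who like to check the arithmetic, the weights quoted above do tally up the way I have described. Here is a minimal sketch of the bookkeeping (the indicator labels are my own shorthand for THE’s 2011-12 descriptions, not official names):

```python
# Category weights from the THE 2011-12 methodology (per cent of overall score).
category_weights = {
    "Teaching": 30.0,
    "Research": 30.0,
    "Citations": 30.0,
    "Industry income": 2.5,
    "International outlook": 7.5,
}

# How the 30% Teaching category itself breaks down (labels are my shorthand).
teaching_components = {
    "Academic Reputation Survey (teaching)": 15.0,
    "PhD-to-bachelor's degree ratio": 2.25,
    "PhDs awarded per academic staff": 6.0,
    "Institutional income per staff": 2.25,
    "Staff-to-student ratio": 4.5,
}

# The five categories cover the whole score, and the Teaching
# sub-indicators account for exactly the 30% Teaching weight.
assert sum(category_weights.values()) == 100.0
assert sum(teaching_components.values()) == category_weights["Teaching"]

# Only the staff-to-student ratio bears directly on undergraduates.
undergrad_relevant = teaching_components["Staff-to-student ratio"]
print(f"Directly undergraduate-relevant share: {undergrad_relevant}%")  # → 4.5%
```

So, of the 100 points available, only 4.5 come from anything an undergraduate would experience directly, which is why I round it to “around 5%” in the text.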

Despite its crudeness, the staff-to-student ratio is, in my opinion, the only factor contributing to the overall ranking score that is clearly relevant to determining the quality of undergraduate education for the majority of university students.

I hope my analysis of the methods behind the Times Higher Education global university rankings makes the point that universities exist for the sake of much more than just teaching undergraduate students. University professors are hired to do research, and expected to teach — not the other way around. Things are somewhat different at most liberal arts colleges, however, so it’s important to keep in mind that I’m talking about universities, here. Of course, different organizations use different formulae to compile their university-rankings lists, so there is some variation in terms of how relevant the rankings are to the concerns of undergraduate students. But it’s important to remember that if the rankings are comparing universities and not just undergraduate colleges, much more weight will be given to various aspects of the research mission, including doctoral-level training.

The problem I’m getting at is the way these rankings lists end up being used by many regular people to make important decisions that should not be made on the basis of such rankings. Don’t get me wrong — it’s not that I don’t think the rankings lists are useful, nor am I about to criticize the methods that are used to compile them. They are relevant for regular folks, for certain reasons. But none of those reasons have much of anything to do with the undergraduate training mission of the typical global university. These rankings lists can contribute to the impressions that typical consumers have about the “quality” of particular universities. This is fair enough. After all, there is a lot of good research and objective analysis behind some of the rankings lists. The Times Higher Education World University ranking is a fine example of that.



  1. Dr. Mumby:

    Just came across your blog and glad I did, as I found this article (and many others) excellent. You are the first one to mention the lack of correlation between school ranking and academic excellence. Is this really true? I’ve applied to the graduate program of the University of Bridgeport in Connecticut for the Jan 2016 semester and they have low student-to-teacher ratios and a low student population (< 5000 students) but are academically ranked #148 or worse, depending on the ranking you are looking at. Would I be academically disadvantaged by this ranking? Will it hurt my chances of employment compared to a degree from a highly ranked school?


  2. I don’t doubt your argument that the quality of undergraduate education is unrelated to school prestige. But what about the value of school prestige all by itself? Should the undergraduate hope that the hiring managers and admissions committees of his or her near future be so enlightened as to not overvalue the name of the school on the diploma?

    Even peer-reviewed journals don’t seem immune to the sway of a nice label:

    Or maybe I’m just a bitter State University alumnus who got broken on the rocky shores of an insanely competitive graduate admissions process 😛


  3. Hello Dr. Mumby,

    I just read your posting, and I find it amazingly useful and enlightening, just like all your other posts. Thank you for sharing this with us; it totally destroys the popular myth that “high-ranking schools” give the best undergraduate education.

    I do have a question, however: you said that university ranking is not a very good indicator of the quality of undergraduate education, but what about graduate education? Staff-to-student ratios and the percentage of Ph.D.s awarded relative to bachelor’s degrees aside, how well does it predict the quality of education in graduate school?

    Does the ranking of a university predict the quality of training at the graduate level?


    1. Thank you for the kind words, and for the interesting question!
      I would say, Yes, the rankings are somewhat more relevant for discriminating between universities in terms of the training they can offer graduate students. But to the extent that there is a relationship between an institution’s ranking and graduate training, it really has more to do with access to a greater range of training opportunities, usually because there are more and better research facilities, and more well-funded researchers, at the higher-ranked universities. The individual professors and researchers at high-ranked universities, however, are not necessarily better at training their graduate students than the professors and researchers at a lower-ranked university. When it comes to the quality of training one gets at the graduate level, much depends on whether he or she has the insight to recognize that graduate school involves a lot of self-teaching and self-training. Other than what the student brings to the table, the most important determinants of graduate-training quality will depend on what the student’s graduate supervisor can offer in the way of scholarly and professional mentoring. Some of the best graduate supervisors are at lower-ranked schools, and some of the worst are at the top-ranked schools. So, while some of the factors that go into determining a university’s position on most global ranking lists may say something about the institution’s overall track record at producing successful Ph.D.s, or about the total amount of funding held by its researchers (and therefore, available to support graduate training), I don’t think an individual should use the rankings to help decide where to apply for graduate school. The data that help determine a university’s ranking on most lists are all very general, and apply to the institution as a whole, rather than to individual departments, programs, or professors.
So, it is best to ignore those ranking lists when trying to choose a graduate school, and focus instead on the availability of the program you desire, on the potential graduate supervisor and what he or she has to offer in terms of specific research opportunities, and on the track records of his or her former grad students.

      – Dave


      1. I am really pleased to have found this discussion. It certainly rings true with my personal experiences. Some of the introductory classes at my large, well-ranked university had 10 times the number of students in them as the comparable classes my friends were taking at a nearby state school of mediocre rank. Their professors knew them by name and were enthusiastic about their subjects, whereas mine were often noticeably frustrated and exhausted by having to teach 300+ students at a time. In some respects, and certainly in some courses, I believe my friends got a much better education than I did.

        This information is also timely and helpful for me, as I am starting to write a blog about how to effectively fund one’s graduate education. I plan on sharing your article, and stressing the fact that a more expensive school is not necessarily a better school.


  4. Hello, I just recently came across this site and find it far more informative than most others concerning graduate study. I am about a year away from completing my undergraduate degree (Major: History, Minor: Psychology). I have no lab research experience thus far, but I am going to follow your advice and contact some different professors/researchers to see if I can find an opening. My questions for you are:
    1.) If I decide first to just get a Masters in psychology will I then only have 2-3 years left for a doctorate? or do most doctorate programs require a full 4-5 years regardless of previous degrees?

    In other words, I’m pretty certain I want to get a Ph.D. or Psy.D. in the long run, but I’m not sure if I can get into a program with my current resume, especially one with scholarship support.

    2.) Is it rare that universities will waive the higher out-of-state tuition costs?

    thank you,


    1. Thank you for your kind comments, Dusty.
      Best of luck with your search for a position in someone’s lab.
      To answer your first question, it is normally 2-3 years in addition to a Master’s. It is not unusual for it to take a bit longer, but most programs today are trying hard to get students through within a normal period of residency. When a student requires longer than that normal period (3 years for a PhD), it is often because of slow progress in his or her research, or delays in getting the dissertation written up. Of course, a lot of people end up taking longer because some unfortunate personal circumstances knock them off the rails for a while.
      About the likelihood of getting an out-of-state fee waiver: it tends to be based on the level of financial need. If a candidate is able to demonstrate that the fee differential will be an obstacle to enrolling in their program, then the out-of-state waiver is more likely.
      – Dave


  5. I work for a ‘top 100’ university, one so ranked by both THES and the Shanghai Jiao Tong Academic Ranking of World Universities, and I completely agree with Dave’s cut-and-paste. Here’s a little anecdote to demonstrate the difference, from an undergraduate’s perspective. I used to work for La Trobe University in Melbourne, Australia, one which does [just] appear on the Jiao Tong Top 500 list. I got to know a young postdoc there, who had done his undergraduate degree and doctorate in physics. As he put it, as the son of first-generation Serbian migrants who had not finished high school, his parents were proud that he had gone to university at all. One or two of his former schoolmates had gone to ‘better’ nearby universities, like Melbourne and Monash, but he himself admitted that his high school performance was marginal, with an average of 51% in most subjects, yet somehow he was able to get in. Because he had done science at school, he continued with that option.

    In his first year, he found that the academic staff went out of their way to make their subjects interesting. At La Trobe, the staff knew they were starting with weak students, and made it their mission to help them achieve their potential. If students were having problems, help was available, extra classes were laid on, and study support teams helped the weaker students. At the end of the first year, his results were averaging 60%, higher in physics. At the end of the 3-year undergraduate program, he was offered entry to the science honours course in physics, and ultimately received first-class honours. He received a scholarship for his PhD and completed it. He subsequently secured a tenured position at a well-ranked institution, but never lost his commitment to excellence in teaching. His two friends from school didn’t do so well; their experiences at the ‘better’ universities were that undergraduates were left to sink or swim, and nobody ever noticed or cared about how they were getting on.


    1. Thank you for sharing this. It is a great example of how the relative ranking of a university fails to predict the relative success of its graduates. The main determinants are the personal qualities a student possesses (determination, work ethic, and things like that), and the level of support and facilitation they encounter in the learning environment. – Dave

