Tuesday, May 04, 2010

Ranking departments topologically rather than totally.

The US News rankings came out a while back (Jon Katz had two posts on this). As usual, this will prompt a round of either back-slapping or back-stabbing, depending on whether your department ranking went up or down (ours didn't change at all, which could also be a bad thing).

What I'd like to propose is a completely different way of doing rankings.

It's generally accepted that the place where rankings make the most difference is in graduate admissions, and there's a secondary effect in faculty hiring (since faculty want to get good students to work with). The general belief is that students will tiebreak between universities based on ranking, in the absence of more contextual information.

But it's also insanely silly to obsess about the relative rankings of (say) the top 5 schools, or to exult in your movement from 53 to 47 in the rankings. What I believe is generally true is that there are rough strata (antichains in a partial order, if you will) in which departments are generally of equivalent rank. Spending time and energy trying to optimize within such a stratum is a useless waste (which doesn't mean that people don't LOVE to do it, because any activity is positive activity, right ? ... right ? ....)

What we do keep track of, and is interesting, is which universities our admits reject us for, or accept us in place of. If I'm not Stanford or MIT, but students are rejecting me only to go there, then I'm not happy, but I feel minor relief that at least they're not rejecting me for the University of Obscurity in Scarceville, Podunkistan.

But of course we know what this is ! It's a topological order ! So I propose the following tiering scheme:
A department is at tier k if "all" departments it is rejected for are at tier k-1 or less. 
Note 1: We have to define "all" carefully - there's always someone who's (say) following a boyfriend or girlfriend, or really wants to live in some town, etc etc. My preferred definition of "all" would be "at least 80%" or some large figure like that.

Note 2: If in fact people did select universities based on the "current" ranking scheme, this order would reflect that. Of course, I don't believe this will happen.

Note 3: This might even allow for more fine-grained analysis based on subject area. Depending on the areas of the admitted students, one could create stratified orders by area.

Note 4: No, I have no clue how to get this data, but many departments informally maintain this information (I know we try to get this info when we can), and it's not like the current approach is dripping with rigor anyway.

Note 5: If you're an administrator, you'll hate this when you're trying to move to a higher level, and you'll love it when you actually make the move. The lack of granularity might annoy some people, though.
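As a toy illustration, the tiering rule can be computed by iterative layering: at each round, assign tier t to any department for which at least 80% of the schools it loses admits to already sit at a lower tier. A minimal sketch (all department names and rejection data below are invented):

```python
def tier_departments(lost_to, threshold=0.8):
    """Assign each department the smallest tier t such that at least
    `threshold` of the schools it is rejected for sit at tiers < t.
    lost_to[d] lists, with multiplicity, the schools whose offers
    admitted students took over d's."""
    tiers = {}
    remaining = set(lost_to)
    t = 0
    while remaining:
        placed = set()
        for d in remaining:
            losses = lost_to[d]
            below = sum(1 for e in losses if tiers.get(e, t) < t)
            if not losses or below / len(losses) >= threshold:
                placed.add(d)
        if not placed:
            # residual cycles stronger than the threshold:
            # lump whatever is left into one tier and stop
            for d in remaining:
                tiers[d] = t
            break
        for d in placed:
            tiers[d] = t
        remaining -= placed
        t += 1
    return tiers

# Invented data: "Utah" loses one admit to "Obscurity" (say, a
# boyfriend/girlfriend case), and the 80% rule discounts it.
lost_to = {
    "MIT": [],
    "Stanford": ["MIT"],
    "Berkeley": ["MIT", "Stanford"],
    "Utah": ["Stanford", "Berkeley", "Berkeley", "Berkeley", "Obscurity"],
    "Obscurity": ["Utah", "Berkeley", "Stanford"],
}

tiers = tier_departments(lost_to)
# tiers: {"MIT": 0, "Stanford": 1, "Berkeley": 2, "Utah": 3, "Obscurity": 4}
```

Note that with a strict "all" rule, Utah and Obscurity reject each other and form a cycle with no consistent tiers; the 80% threshold is exactly what breaks it.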


  1. At CMU this information was posted publicly on the department admission office notice board and on an internal web page (i.e., a list of all the students we accepted and which schools they ended up going to).

    As you would expect, pretty much everyone who rejected us ended up going to MIT, Berkeley, and Stanford (in that order, I think).

    But for those we accepted, there was no public information about which other schools they considered. Informally, I think for about 50% we were the only "top" school, and the other 50% had other "top" offers.

  2. Irrespective of your elaborate discussions on dept. rankings, it is unbecoming of a respected academic to use bigoted terms like Podunkistan when you are trying to paint a picture of obscure inferior places.

  3. I have been thinking about the problem of ranking units, whether they are countries, departments, universities or what not, especially being a post-doc in Singapore, which is a ranking-obsessed country.

    Anyway, my conclusion is pretty much the same as yours: ranking among the top-10 departments is a useless endeavor. The differences are too small to make any real difference, except in rank order.

    Another problem that I feel is not adequately addressed by rankings is that they are in effect a dimensionality reduction from a D-dimensional space to a 1-dimensional one. We could easily get down to log(D) dimensions with the JL lemma, but I believe we lose a lot of relevant info by going to 1-d.

    So my suggestion is to collect statistics and cluster all departments into k clusters. Then assign tier labels to each cluster and you are done.
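A minimal sketch of the clustering pipeline suggested in the last comment, using plain Lloyd's k-means (the department statistics, the two features, and the choice of k are all invented for illustration):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means on tuples of floats (no dependencies)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        # assign each point to its nearest center
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            groups[j].append(p)
        # recompute centers as coordinate-wise means
        for i, g in enumerate(groups):
            if g:  # keep the old center if a cluster empties out
                centers[i] = tuple(sum(xs) / len(g) for xs in zip(*g))
    return groups

# Invented statistics: (papers per faculty, placement rate) per department.
stats = {
    "A": (9.0, 0.90), "B": (8.5, 0.85),   # one stratum...
    "C": (2.0, 0.30), "D": (1.5, 0.25),   # ...and a clearly separated one
}
groups = kmeans(list(stats.values()), k=2)
# Each cluster then gets a tier label, e.g. ordered by mean placement rate.
```

On well-separated strata like these, Lloyd's algorithm recovers the two groups regardless of the random initialization; choosing k (the number of tiers) is the judgment call the comment leaves open.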

