Tuesday, March 20, 2007

On ranking journals

Via Cosma Shalizi comes a new way of ranking journals developed by Jevin West, Ben Althouse, Martin Rosvall, Ted Bergstrom, and Carl Bergstrom. It works on the PageRank principle: take a random paper from a journal and do a random walk on the citation graph, using the stationary distribution to rank journals. The result is eigenfactor.org, a website that catalogues many journals and other publications, and provides both a ranking by their "pagerank" and a "value for money" measure that might be useful when deciding which journals to purchase for a library.
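In outline, the computation is just PageRank run on the journal citation graph. Here's a minimal sketch in Python with a hypothetical toy citation matrix; the real Eigenfactor calculation adds refinements (such as handling self-citations and weighting by article counts), so this only illustrates the basic random-walk idea:

```python
import numpy as np

# Toy citation matrix: C[i, j] = number of citations from articles in
# journal j to articles in journal i (hypothetical numbers).
journals = ["A", "B", "C"]
C = np.array([
    [0, 5, 1],
    [3, 0, 2],
    [1, 4, 0],
], dtype=float)

# Column-normalize so each column is the citation distribution of a journal.
P = C / C.sum(axis=0)

# Add a small "teleport" probability so the walk is irreducible
# (the same trick PageRank uses).
alpha = 0.85
n = len(journals)
G = alpha * P + (1 - alpha) / n

# Power iteration: converges to the stationary distribution of the walk.
v = np.full(n, 1.0 / n)
for _ in range(100):
    v = G @ v
v /= v.sum()

for name, score in sorted(zip(journals, v), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The stationary probabilities are the "pagerank" scores: a journal ranks highly if it is cited often by journals that themselves rank highly, which is what makes the measure less sensitive to raw citation counts than a plain impact factor.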

Impact measurement has become a popular parlor game, both for journals and, via measures like the 'h-index', for individual researchers. There are all kinds of problems with these measurements in general, but Eigenfactor does eliminate some of the usual problems with comparing impact across multiple communities that differ in citation practices, community size, and so on.

Eigenfactor has top-10 lists for different areas (science, social science, etc.). Here's my informal list of top-ranked computer science algorithms (and friends) journals, ranked by article impact factor (all scores are percentiles over the universe of 6000+ journals considered):
  • SIAM Review: 98.35
  • J. ACM: 97.91
  • IEEE Trans. Inf. Theory: 95.05
  • Machine Learning: 93.92
  • SICOMP: 93.04
  • JCSS: 90.93
  • Journal of Algorithms: 90.31*
  • DCG: 85.29
  • CGTA: 81.96
  • Algorithmica: 79.13

* The ACM Transactions on Algorithms, which absorbed most of the editorial board of J. Alg, is too new to show up. This score probably reflects the historical relevance of J. Alg as much as its current state.

7 comments:

1. Just shows that this measure is useless. There is no reasonable ranking under which JALG (or TALG) is a better journal than DCG (or even remotely close to it in quality). In my very humble opinion. In fact, in some cases DCG publishes papers that are stronger than JACM papers (Hales's work and some papers from SoCG special issues, to name some concrete examples).

    Repeat after me: "all rankings are stupid".

  2. ok.

    "all rankings are stupid".

there, feel better?

having said that, you do realize that having some cases where DCG papers are better than JACM papers doesn't mean that DCG is overall better than JACM?

The folks at eigenfactor rightly emphasize that these rankings can only (if at all) be used to make judgements about which journals to keep; they're not really a measure of quality (a much more nebulous notion).

  3. Much better. Thank you.

    I agree with your comment, naturally...

  4. I guess there is another problem with ranking journals. Suppose we have a new ranking algorithm: we run it and it says that journal A is better than B, although we think B is better than A. So we declare the algorithm doesn't work. If the outcome is what we expect, then we say that it's a good ranking.

All in all, all these rankings do is repeat what we already think.

5. "we run it and it says that journal A is better than B, although we think B is better than A. So we declare the algorithm doesn't work. If the outcome is what we expect, then we say that it's a good ranking."

Only if that is true for all pairs A and B. However, if the new ranking generally matches our perception but moves a single journal A up or down, then people are usually willing to reconsider their opinion of A.

