The h-index and its cousins are proposed measures of individual research impact, based on citation counts; more traditional measures include raw citation counts and the number of publications in high-quality journals and conferences. Derek Lowe at In the Pipeline has been blogging about the Impact Factor, the number du jour for measuring the "quality" of a journal: the number of citations in a given year to articles a journal published in the preceding two years, divided by the number of articles it published in those two years.
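To make the two definitions concrete, here is a minimal Python sketch of both computations. The function names and the numbers in the examples are made up for illustration; they aren't tied to any real bibliometric database.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_this_year, papers_prev_two_years):
    """Citations received this year to articles published in the
    previous two years, divided by the number of articles published
    in those two years."""
    return citations_this_year / papers_prev_two_years

# A researcher with five papers cited [10, 8, 5, 2, 1] times has h-index 3:
# three papers have at least 3 citations each, but not four with 4.
print(h_index([10, 8, 5, 2, 1]))   # -> 3

# A journal whose 80 recent articles drew 200 citations this year
# has an impact factor of 2.5.
print(impact_factor(200, 80))      # -> 2.5
```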
Apparently, it is now a popular game for journals to try to ramp up their IF (for example, a journal that publishes many review articles will generate very high citation counts). This has caused much angst among researchers, because like any "objective" system, the IF can be gamed to the point of meaninglessness.
It is probably no surprise that we search for "measurable" indicators of quality, whether they be paper ratings, conference acceptance rates, individual impact, journal quality, or what-have-you. On the other hand, I doubt anyone actually takes such ratings seriously (clearly we pay attention to the numbers, but always with the disclaimer that "true quality can't be measured in numbers"). It must be peculiarly frustrating to people in the "hard" sciences (and I will take the liberty of including theoryCS in this group) that we attempt to find objective truths, and yet our attempts to evaluate the quality of our work are so fuzzy and... (this is hard for me to say) subjective.
I wonder if our brethren (sistren? siblingen?) in the "not-so-hard" sciences are better able to come to terms with this.