Monday, April 17, 2006

Bias in paper reviewing

Nowadays, I spend my time looking at the most emailed articles on the NYT (or more uncharitably, "rating the competition"). An interesting Op-Ed from Sunday talks about bias in decision-making, and some of what the author says provides an interesting perspective on how we review papers.

Unlike in many other areas of computer science, theory papers are not reviewed double-blind; reviewers in general know the identity of the authors of a paper. I will say upfront that I don't think there is a real problem with this approach. It's not that I think that we are saintlier than reviewers in other disciplines; it's just that a combination of the nature of the subject and the value system of the area makes objective evaluations a little easier. However,
A Princeton University research team asked people to estimate how susceptible they and "the average person" were to a long list of judgmental biases; the majority of people claimed to be less biased than the majority of people. A 2001 study of medical residents found that 84 percent thought that their colleagues were influenced by gifts from pharmaceutical companies, but only 16 percent thought that they were similarly influenced.
We'd like to think that we can "factor out" the influence of author names when reviewing, but
Dozens of studies have shown that when people try to overcome their judgmental biases — for example, when they are given information and told not to let it influence their judgment — they simply can't comply, even when money is at stake.
What's also interesting is how we make decisions with limited information,
...researchers asked subjects to evaluate a student's intelligence by examining information about him one piece at a time. The information was quite damning, and subjects were told they could stop examining it as soon as they'd reached a firm conclusion. Results showed that when subjects liked the student they were evaluating, they turned over one card after another, searching for the one piece of information that might allow them to say something nice about him. But when they disliked the student, they turned over a few cards, shrugged and called it a day.
In other words, if you dislike a paper, you look for evidence to reject it, and if you like it, you look for evidence to champion it (rather than gathering the evidence first and making a judgment later).

And yet, all the people who cry 'Bias!' when their papers are rejected under single-blind reviewing don't necessarily have a point:
And yet, if decision-makers are more biased than they realize, they are less biased than the rest of us suspect. Research shows that while people underestimate the influence of self-interest on their own judgments and decisions, they overestimate its influence on others.
What does all of this mean? I am more biased than I think, but less biased than you think. It's good to keep that in mind (at least the first part) when reviewing papers. It's basic psychology, after all.
