Tuesday, January 09, 2018

Double blind review at theory conferences: More thoughts.

I've had a number of discussions with people both before and after the report that Rasmus and I wrote on the double-blind experiment at ALENEX. And I think it's helpful to lay out some of my thoughts on both the purpose of double blind review as I understand it, and the logistical challenges of implementing it.

What is the purpose of double blind review? 

The goal is to mitigate the effects of the unconscious, implicit biases that we all possess and that influence our decision making in imperceptible ways. It's not a perfect solution to the problem. But there is now a large body of evidence suggesting that

  • All people are susceptible to implicit biases, whether regarding institutional status, individual status, or demographic stereotyping. What's worse, we are incredibly bad at assessing or detecting our own biases. At this point, a claim that a community is not susceptible to bias is the one that needs evidence. 
  • Double blind review can mitigate this effect. Probably the most striking example of this is the case of orchestra auditions, where requiring performers to play behind a screen dramatically increased the number of women in orchestras. 
What is NOT the purpose of double blind review? 

Double blind review is not a way to prevent anyone from ever figuring out the author identity. So objections to blinding based on scenarios where author identity is partially or wholly revealed are not relevant. Remember, the goal is to mitigate the initial biases that come from first impressions. 

What makes DB review hard to implement at theory venues? 

Theory conferences do two things differently from other communities. We
  • require that PC members do NOT submit papers
  • allow PC members to initiate queries for external subreviewers. 
These two issues are connected. 
  1. If you don't allow PC members to submit papers, you need a small PC (otherwise you shut too much of the community out of submitting). 
  2. If you have a small PC, each PC member is responsible for many papers. 
  3. If each PC member is responsible for many papers, they need to outsource the effort to be able to get the work done. 
As we mentioned earlier, it's not possible to have PC members initiate review requests if they don't know who might be in conflict with a paper whose authors are invisible. So what do we do? 

There's actually a reasonably straightforward answer to this. 


  • We construct the PC as usual, with the usual restrictions.
  • We construct a list of “reviewers”: for example, “anyone with a SODA/STOC/FOCS paper in the last 5 years”, or something like that. Ideally we will solicit nominations from the PC for this purpose.
  • We invite this list of people to be reviewers for SODA, and do this BEFORE paper submission.
  • Authors will declare conflicts with reviewers and domains (and reviewers can also declare conflicts with domains and authors). 
  • At bidding time, the reviewers will be invited to bid on (blinded) papers, and the system will automatically assign people (see the sketch below for what such an assignment might look like). 
  • PC members will also be in charge of papers as before, and it’s their job to manage the “reviewers” or even supply their own reviews as needed. 
Any remaining requests for truly external subreviewing will be handled by the PC chairs. I expect the number of such requests will be a lot smaller than the volume of subreviews we see today.
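For concreteness, here is a minimal sketch of what the automatic, bid-driven assignment step might look like. All of the names and parameters below (the bid format, the load caps) are hypothetical, chosen only to illustrate the workflow; real conference systems do something more sophisticated, such as optimizing a global matching, but the inputs (blinded bids plus declared conflicts) are the same.

```python
# A minimal sketch of bid-driven reviewer assignment. The data shapes
# (bids, conflicts) and the parameters below are hypothetical, chosen
# only to illustrate the workflow described above.

from collections import defaultdict

MAX_LOAD = 5           # cap on papers per reviewer (assumed value)
REVIEWS_PER_PAPER = 3  # reviewers needed per paper (assumed value)

def assign(papers, bids, conflicts):
    """papers: list of paper ids.
    bids: paper id -> list of reviewers, strongest bid first.
    conflicts: reviewer -> set of paper ids they are conflicted with.
    Returns paper id -> list of assigned reviewers."""
    load = defaultdict(int)        # papers assigned to each reviewer so far
    assignment = defaultdict(list)
    for paper in papers:
        for reviewer in bids.get(paper, []):
            if len(assignment[paper]) == REVIEWS_PER_PAPER:
                break              # paper fully staffed
            if paper in conflicts.get(reviewer, set()):
                continue           # honor declared CoIs
            if load[reviewer] >= MAX_LOAD:
                continue           # respect the per-reviewer load cap
            assignment[paper].append(reviewer)
            load[reviewer] += 1
    return dict(assignment)
```

The point of the sketch is that nothing in this loop needs to know who wrote a paper: bids and declared conflicts are enough to drive the assignment.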

Of course all of this is pretty standard at venues that implement double blind review. 

But what if a sub-area is so small that all the potential reviewers are conflicted? 

Well, if that's the case, then it's a problem we face right now. And DB review doesn't really affect it. 

What about if a paper is on the arXiv? 

We ask authors and reviewers to adhere to double blind review policies in good faith. Reviewers are not expected to go hunting for the author names, and authors are expected to not draw attention to information that could lead to a reveal. Like with any system, we trust people to do the right thing, and that generally works. 

But labeling CoIs for so many people is overwhelming.

It does take a little time, but less time than one might expect. In practice, many CoIs are handled by institutional domain matching, and most of the rest by having authors explicitly list collaborators and matching those names against the reviewer list. Most reviewing systems allow for this to be automated. 
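To make this concrete, here is a minimal sketch of the two automated checks just described: domain matching and collaborator-list lookup. The record shapes (dicts with 'name' and 'email' fields, a map from people to declared collaborators) are assumptions for illustration, not any real reviewing system's API.

```python
# A minimal sketch of the two automated CoI checks described above.
# The record shapes ('name'/'email' dicts, a collaborator map) are
# assumptions for illustration, not any real reviewing system's API.

def email_domain(email):
    """Extract the institutional domain from an email address."""
    return email.rsplit("@", 1)[-1].lower()

def has_conflict(author, reviewer, collaborators):
    """author, reviewer: dicts with 'name' and 'email' keys.
    collaborators: name -> set of names that person has declared
    as recent collaborators."""
    # Check 1: same institutional email domain.
    if email_domain(author["email"]) == email_domain(reviewer["email"]):
        return True
    # Check 2: either party lists the other as a collaborator.
    if reviewer["name"] in collaborators.get(author["name"], set()):
        return True
    if author["name"] in collaborators.get(reviewer["name"], set()):
        return True
    return False
```

A real system would also normalize names, handle multiple affiliations, and so on, but the core checks really are this simple, which is why the overhead is smaller than people fear.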

But how am I supposed to know if the proof is correct if I don't know who the authors are? 

Most theory conferences are now comfortable with asking for full proofs. And if the authors don't provide full proofs, and I need to know the authors to determine if the result is believable, isn't that the very definition of bias? 

And finally, from the business meeting....

Cliff Stein did an excellent job running the discussion on this topic, and I want to thank him for facilitating what could have been, but wasn't, a very fraught discussion. He's treading carefully, but moving forward, and that's great. I was also quite happy to see that in the straw poll, significantly more people were willing to try double blind review than were opposed to it. There were still far more abstentions, so I think the community is still thinking through what this might mean.

