Thursday, September 09, 2004

More SODA...

The review process was interesting for me: the discussions and reviews expose one to a diversity of opinions and perspectives that one doesn't always get in one-on-one discussions, and there is a lot to learn.

Probably even more importantly, it gives me a better sense of what kinds of papers get into SODA. With the competition being what it is (this year's acceptance rate is, I believe, a record low), the goal changes from finding papers to accept to finding papers to reject, which means that papers have to be a lot more polished, a lot more relevant, and a lot more interesting (at least to some subset of reviewers) than before. A paper that is merely decent overall faces a tough battle, because of the sheer size of the submitted set.

The main problem of scale we face (and have faced for a few years now) is reviewer load: at 65+ papers per person, we do have an issue. The question that really comes up is: is it that (roughly) the same set of people are writing more papers, or are there just more people in the SODA community? The answer is important, because it governs how we address the issue of reviewer load.

If more people are entering the community overall, then it makes sense to continue increasing committee sizes, expanding conference durations and/or the number of papers accepted (by shortening presentation times), and other such strategies. If it is that people are writing more papers, it is less clear what one can do, other than starting new conferences/journals...

Returning to the issue of reviewer load, I recently heard two different radical strategies that attempt to deal with different aspects of the problem.

* To handle the case of recycled papers that go from conference to conference in the hope of finding reviewers who haven't seen them:
- Create a repository where all submitted papers are registered. When a paper is submitted to a conference, a link is set up, and when the reviews come back, they are filed with the paper. If the paper is submitted again, the reviews are there to see.

Main cons: public flagellation of papers is never a good idea, and authors could always create new entries to defeat this scheme.

* To handle the issue of review quality and the perception of unfairness:
- Once papers have been accepted, make the reviews for accepted papers public (anonymously).


  1. It would be nice to know from some data analysis whether we have more people submitting papers or more papers being submitted. I think both trends are at work. I see more papers being submitted from Europe and Asia than before. Also, the number of people with multiple papers is increasing, even among students. I believe people are writing more papers than before because the competition is increasing: it is common these days for students to graduate with 10-15 conference papers. One side effect is that people don't want to suggest even minor improvements to others; they write a paper instead.

    Is this good or bad for the community and the people? I don't know, but it is becoming difficult to keep track of developments even in closely related areas. Good survey papers are really needed. CRC handbooks also seem to do some of this indirectly, although I wish the authors would put their articles on the web: who is going to pay hefty sums for these gargantuan books?


  2. Speaking about repositories, Psycoloquy is a brilliantly executed repository in Psychology. Authors post brief summaries of their papers, links to the papers, and often lively debates about the matter ensue.

    For example:

  3. Looking at the last SODA proceedings, there are not that many authors with multiple submissions, so I assume the problem is that the number of people submitting is increasing. However, theory is still a far cry (27% acceptance at this SODA) from conferences like Mobicom or SIGCOMM (<10% acceptance). Furthermore, looking at the last SODA proceedings, I don't get the feeling that all the papers are good. So maybe even 135 accepted papers is too many. I think we should start worrying about this issue when the SODA acceptance rate falls below (say) 15% (at that point, it would probably be more selective than STOC/FOCS). For the time being, enlarging the committees is a simple solution. There are enough clueless untenured faculty who would be happy to be on the SODA committee.

  4. indeed you are.

    I have been speculating about some data collection recently. As always, getting the data is much harder than actually getting the answers. What we would like is the set of submitters to (say) the past 5 SODAs, in order to determine all kinds of good stuff. However, data is only available for last year and this year (PC chairs for prior years might have personal copies of submission stats, but that remains to be determined). There is also the issue of appropriately finding duplicates: am I s. venkat, suresh venkat, suresh venkatasubramanian... and of anonymizing the data (which is fairly easy to do with md5 or something like that).
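    As a minimal sketch of that anonymization step (the `anonymize` helper here is hypothetical, using md5 as suggested above): canonicalizing a name before hashing collapses trivial variants like spacing and capitalization, but genuinely different spellings still hash apart, which is exactly the duplicate-finding problem just mentioned.

```python
import hashlib

def anonymize(name: str) -> str:
    """Hash a canonicalized author name so submission counts can be
    compared across years without revealing identities."""
    # Lowercase, drop periods, and collapse whitespace before hashing.
    canonical = " ".join(name.lower().replace(".", "").split())
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()

# Trivial variants collapse to the same anonymous token...
print(anonymize("Suresh Venkat") == anonymize("suresh  venkat"))  # True
# ...but different spellings of the same person do not:
print(anonymize("s. venkat") == anonymize("suresh venkat"))       # False
```

    So the hashing itself is easy; the hard part remains deciding, before hashing, which strings refer to the same submitter.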

  5. I have noticed that most of the accepted papers at SODA/STOC/FOCS seem to be from one of the Ivy League schools or from one of the established labs. Of course, one can argue that it is natural that top schools are the only ones undertaking research in theory/algorithms. But in my personal opinion, I have seen papers from big names or big organizations get accepted very often, in spite of their mediocre content. Is it absolutely true that the review process is blind? I don't believe it to be so.

    Also, why in the world do we see the same set of names on program committees, with little or no permutation? If you happen to look at, say, X's webpage, you can see that they have been on pretty much every theory conference's program committee. I believe this creates an atmosphere for internal politics in reviewing papers. I know a lot of people who deserve to be on program committees. Talking to them, I even heard that they have pretty much given up on SODA/STOC/FOCS and are instead submitting their papers to lesser-known theory conferences.

  6. The first point in the post above can be established easily with some data gathering. I have no idea whether the claim is true or not, but either way it is easy to test.

    As for the second point (theory PCs being an in-house affair), there is a little more merit to that, although I don't know how much of it is overt and how much is just lack of care in committee construction. I think roughly 1/3 of this year's SODA committee comprises people who have never served on a SODA/STOC/FOCS committee before (myself included), which is a plausible enough number.

    One point that is worth considering: theory committees are very small as a fraction of the number of submitted papers, and in areas like databases, the committees are much larger.

