Two easy ways to improve your paper but lessen your chances of acceptance at a conference: add more results and simplify your proofs. Adding a result can only increase the usefulness of a paper, but program committees see many results in a paper and conclude that none of them could be very strong. A few years ago, one of our students had a paper rejected at STOC; he removed one of his two main theorems and won the best student paper award at FOCS.

Given the same theorem, the community benefits from a simple proof over a complicated proof. Program committees look for hard results so if they see a very simple proof, it can count against you.

I agree mostly with his take on how PCs view papers: simple proofs can indeed be looked down upon. The interesting question, though, is this: since PC members are the *same people* who, when not on PCs, have this problem, what is it about PC membership that causes their judgement to skew?

The usual argument is load: PC members at algorithms conferences typically review far more submissions than PC members at other conferences, mainly for two interrelated reasons:

1. Our PCs are small

2. PC members are not permitted to submit papers to the conference

(Note that (2) more or less forces (1).)

Or could one argue that it is right and appropriate for PC members to prune papers in this fashion? And that it is the authors' responsibility to make the best case for their submission in a system that will always be imperfect? One might think that this reasoning would *encourage*, rather than discourage, simple proofs, because these are easier to understand and lead to a better exposition in a conference setting.

It seems to me that one reason an elegant proof might be looked down upon in comparison to a more technical, grungy proof is that a reviewer who is not intimately familiar with the area of the paper might not appreciate the value of the elegant result, or be aware of how hard it was to achieve such an understanding of the problem. This doesn't sound like a problem that can be fixed easily, unless every paper can be reviewed by an expert in that specific area, which seems difficult to manage.

I would like to venture the slightly controversial claim that theory (STOC/FOCS/SODA/etc) committees are not as rigorous in providing feedback and comprehensive reviews as many other conferences. There are many good reasons why this is the case, and I don't think one can fault reviewers who do the best they can under severe load, but the fact remains, and it would be nice to see more discussion of this in business meetings or even in informal forums in the community.

Although this is somewhat removed from the original point about reviews themselves, I feel that feedback itself is a method for ensuring accountability and openness. A reviewer who has to write a detailed explanation of what they like/don't like in a paper will automatically do a more thorough job of reviewing it. Again, this is not a matter of harassing reviewers: structural changes will have to be made in how theory committees are set up to make this practical.

I don't think that simplicity in itself is viewed negatively by program committees. In fact, if someone has a simple proof/algorithm/result for a well-studied open problem, it is definitely a plus. The flip side is that if the problem studied is not known to members of the PC or to the wider community of reviewers, it becomes difficult to judge. One has to spend time thinking about the problem first to appreciate the difficulty of coming up with a simple proof. Given the short review cycle of conferences, very few people are going to think about the problem by themselves.

Chandra

Exactly. Consider the following two situations:

1. You have a trivial proof of a claim that took a year to discover.

2. You have a trivial proof of a claim that took 5 minutes to discover.

The only way to distinguish between 1 and 2 is if the reviewer is able to prove the claim themselves. For a paper not in one's area, this is not easy.

However, although our normative standards might prefer 1 to 2, it is not clear why this should be the case.

Suresh, I don't think we are on the same page, as you imply by "exactly". It is not the duration of time it took to obtain a simple proof that matters. What matters is whether the problem in question has been studied before by enough people and whether there is an implied interest in the problem. For a new or not-well-known problem, a simple solution might be viewed with caution, which is not necessarily a bad thing.

OK, fair enough. Clearly, if a new problem has what appears to be a simple proof, one has the right to question the value of the result IF there are no other mitigating factors (the problem itself is interesting, has applications, etc.).

Of course, this doesn't apply to the problem referenced in Lance's original post, since that paper got a best student paper award at FOCS.

Could you be more specific about the "many other conferences" that give better feedback than theory? I just finished my reviews for OSDI, and while I believe I did a good job, I know there are plenty of issues I may have missed or just not covered in enough detail. Maybe I haven't been around long enough yet, or maybe I haven't seen enough of both OSDI and theory conferences to notice the differences. If there's a community that's doing much better, though, I'd like to hear about it.

-David Molnar

I think not even the criterion in the comment above is always fair: even if the proof can be rediscovered by the reviewer in a short time, it might have taken a year of research to discover the right, true statement to prove, and only given that is it easy to find a proof.

IMHO, reviewers should primarily judge the results themselves: whether they are interesting, new, unexpected, useful, etc. The proofs should just be viewed as a means to an end, certifying the truth of the proven statement. From this point of view, which unfortunately is not taken by large parts of the theory community, the simpler a proof is, the better (given, obviously, that the result is of value by itself).

I can't say I agree with what Jan argues: a proof is not just a means to an end; it often provides new ideas to add to one's toolkit. We need go no further than Erdős's "book proofs" to appreciate this idea...

Of course, I won't deny that proofs have value beyond being a certificate of the truth of the proven statement. What I wanted to say is only that for the purpose of reviewing a paper for a conference, these extra values should be considered secondary, and the primary emphasis should be on the results themselves, not the proofs.
