I will often write "We store the $n^2$ pairs of objects in a data structure" when what I really mean is "We store the $\binom{n}{2}$ pairs ...". Of course there's no "real" difference in asymptotic land. But I've often found that students reading a paper will puzzle over this kind of asymptotic handwaving, wondering why someone wrote $O(n^2)$ when the quantity is really the number of pairs, and whether they're missing some key insight. It's even worse when people throw in gratuitous and specific constants just to make an analysis work ("why do we really need to run for $18.5n$ steps???").
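(Spelling out the arithmetic behind that example, just for the record:)

$$\binom{n}{2} \;=\; \frac{n(n-1)}{2} \;=\; \frac{n^2}{2} - \frac{n}{2} \;=\; \Theta(n^2),$$

so both expressions land in the same asymptotic class, but only $\binom{n}{2}$ tells the reader that the structure holds exactly one entry per unordered pair, rather than roughly twice that many.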
While I completely understand the reasons people do this (ranging from last-minute deadline edits to a misguided attempt at clarity), I think that taking these shortcuts often makes a paper harder to read rather than easier. There are places where the asymptotics are all that matter ("Run this randomized strategy $O(\log n)$ times and apply a Chernoff bound"), but even there it doesn't hurt to write $c \log n$ instead (or a specific constant, if you need a specific polynomial bound). And when there actually is a precise expression that conveys meaning (like the binomial coefficient in the example above), it's far better to use it to signal what's going on.
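As a sketch of the kind of calculation I mean (with a constant chosen for convenience, not optimized): suppose each independent run of the strategy succeeds with probability at least $2/3$ and we take a majority vote over $t = c \ln n$ runs. If $X$ counts the successful runs, then $\mathbb{E}[X] \ge 2t/3$, and a standard Chernoff bound gives

$$\Pr[\text{majority wrong}] \;\le\; \Pr\!\left[X \le \tfrac{3}{4}\,\mathbb{E}[X]\right] \;\le\; e^{-\frac{1}{2}\left(\frac{1}{4}\right)^{2}\cdot\frac{2t}{3}} \;=\; e^{-t/48} \;=\; n^{-c/48},$$

so a reader who wants failure probability $n^{-k}$ can see immediately that $c = 48k$ suffices, instead of having to reverse-engineer which constant was needed.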
I think the precept we teach students in algorithm analysis (don't bring in $O()$ notation until the end) is one we'd do well to heed more ourselves.
Caveat: I'm not advocating this in ALL instances. I'm advocating a greater sensitivity to when and where the use of $O()$ notation is more or less appropriate.
I think in some cases, people are afraid to use a specific constant when that constant could readily be improved with more careful work.
My proposed solution, at least for online papers: choose a specific text color, say green, in which to write certain constants. Green numbers will be understood to mean "the statement is true with this value, which I haven't attempted to optimize." (See the LaTeX sketch below.)
... though I guess I'm not talking about quite the same situation as you ($n$ choose $2$ vs. $n^2$). In such cases, when I don't want to write the exact expression, I have considered writing "the $\approx n^2$ pairs of objects".
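For anyone who wants to try the color idea, a minimal LaTeX sketch (the macro name \unopt is just a placeholder, not anything standard):

\usepackage{xcolor}
% a "green" constant: the statement is true with this value,
% which nobody has attempted to optimize
\newcommand{\unopt}[1]{\textcolor{green!50!black}{#1}}
% usage: ... run for $\unopt{18.5}\,n$ steps ...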
Pet peeve: when someone writes "Run this for $O(\log n)$ steps" but means $\Theta(\log n)$ steps. Big difference. We really need a different notation, though, as shorthand for "Let $c$ be a sufficiently large constant, independent of $X$, $Y$, and $Z$. Then run this for $c \log n$ steps."
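(To spell out why the direction of the bound matters — just a toy restatement, since $O(\cdot)$ only caps the count from above:)

$$t = 1 \;\text{ already satisfies }\; t = O(\log n), \qquad \text{while an amplification argument needs } t \ge c\log n \text{ for a sufficiently large } c.$$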
Anon: your idea could be handled by declaring up front that $c_i$, for any $i$, will be used to denote constants that do not depend on $X$, $Y$, or $Z$. Then you can use them with impunity.
Andy D: saying "approx n^2 pairs" is truly lazy :)
http://cs.uchicago.edu/
Interesting news for TCS.
Simons is an alum of MIT and Berkeley, so one of them will probably end up with the center.
I wonder if he could just have donated the money rather than host this competition. How much influence did the proposals really have on the choice of finalists and on the final decision?
The really careful thing to do would be to number your constants sequentially, as Suresh suggests, and write down all the inequalities they need to satisfy in an appendix. And of course make sure the system is feasible. :)
On the other hand, I like how Terry Tao uses nonstandard analysis to achieve the same effect in some of his posts. It seems like most of the messy bookkeeping stays under the hood that way, but the rigor is there. For example, this proof is fairly readable: http://terrytao.wordpress.com/2011/08/21/hilberts-seventh-problem-and-powers-of-2-and-3/. And it seems like quite a few constants would have been lying around if he hadn't defined "a polylogarithmic quantity" and "a quasipolynomial quantity".
But I think I'm too lazy even for this.
For some reason, the new version at cs.uchicago.edu does not mention that Berkeley and MIT/Harvard are the other finalists.
You can find that info in the Google cache.
Original announcement (taken from the Google cache):
(Aside: why can't this process be transparent and fair? Because private money is allowed to be untransparent and unfair? I posted this info (twice) to the Lance-Gasarch blog yesterday, but it was censored.)
Chicago Plan Chosen Among Final Three in Simons Foundation Search for Institute for the Theory of Computing
The Simons Foundation informed us on September 28 that the University of Chicago/TTIC proposal for an Institute for the Theory of Computing has been selected to be one of the final three proposals that they will consider. Last December, after an initial "letter of intent" phase, seven major institutions and consortia across the country were invited to submit full proposals to host the new Institute, which will benefit from a $6M/year grant from the Simons Foundation. The other two finalists are Berkeley and the MIT/Harvard consortium. Enthusiastically supported by President Robert Zimmer and PSD Dean Robert Fefferman, the Chicago plan involves major institutional commitments. The plan has been spearheaded by CS faculty members Laszlo Babai, Alexander Razborov, Stuart Kurtz, and our former faculty member Lance Fortnow (Northwestern University).
The Institute's mission will be to further the core intellectual agenda of the Theory of Computing, the mathematical study of the fundamental nature, power, and limitations of efficient computation, as well as the manifold interactions of this field with other disciplines, ranging from mathematics and statistics to the sciences and engineering.
We expect to host a site visit before the end of November. The final decision is expected to be made in December.
I'm not advocating for either approach, but when I teach undergraduate algorithms classes I try to get the students used to the fact that we can ignore the constants. For example, when computing $1+2+\cdots+n$, instead of using the arithmetic-progression formula and then saying it is $\Theta(n^2)$, what I do is show that the sum has $n$ terms that are $\le n$ and $n/2$ terms that are $\ge n/2$, which gives the desired bound. The advantage of this kind of reasoning is that it often lets students solve problems most of them couldn't solve otherwise. For example, they can show that $\log(n!)$ is $\Theta(n \log n)$ without knowing Stirling's formula. As another example, I can show that a random element of an array is a "$1/4$-approximate median" with probability $1/2$ and use that to analyze quicksort and randomized selection.
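(Written out, the bounds being described — ignoring floors and ceilings, and nothing beyond what's in the comment:)

$$\frac{n^{2}}{4} \;=\; \frac{n}{2}\cdot\frac{n}{2} \;\le\; 1+2+\cdots+n \;\le\; n\cdot n \;=\; n^{2}, \qquad \text{so } 1+2+\cdots+n = \Theta(n^{2}),$$

and, since the largest $n/2$ factors of $n!$ are each at least $n/2$ while all $n$ factors are at most $n$,

$$\frac{n}{2}\log\frac{n}{2} \;\le\; \log(n!) \;\le\; n\log n, \qquad \text{so } \log(n!) = \Theta(n\log n).$$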
ReplyDeletehttp://research.microsoft.com/en-us/um/newengland/events/itcs2012/accepted.htm
As a community, when did we decide to let PC members submit to ITCS? Or if we did not decide, then who made this decision?