Thursday, August 31, 2006

Undergrads and arXiv ?

Via Daniel Lemire, this prediction:
Today’s undergrads have never thought about the world any differently – they’ve never functioned without IM and Wikipedia and arXiv, and they’re going to demand different kinds of review for different kinds of papers.
<snarky researcher on>
If you are an undergrad who's never functioned without the arXiv, please please pretty please apply for a summer job at AT&T, and request me as your mentor.

I shall now wait for a deluge of applications to clog my mailbox.

<snarky researcher off>



Wednesday, August 30, 2006

Planar graphs and Steinitz's theorem

One way of representing a 3-connected planar graph is as the 1-skeleton (i.e., the vertices and edges) of a convex polytope. A neat way of visualizing this is by shining a light above one face of the polytope and looking at the shadow formed (see David Eppstein's beautiful picture). Steinitz's theorem says that this is in fact an exact characterization: the graphs of 3-dimensional convex polytopes are exactly the 3-connected planar graphs.
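(An aside for the computationally inclined: the combinatorial condition in the theorem is easy to test. Below is a minimal Python sketch using networkx; the example graphs are my own, chosen purely for illustration.)

import networkx as nx

def is_polytopal(G):
    # Steinitz: G is the graph of a convex 3-polytope iff G is planar and 3-connected
    planar, _ = nx.check_planarity(G)
    return planar and nx.node_connectivity(G) >= 3

print(is_polytopal(nx.hypercube_graph(3)))              # True: the 3-cube
print(is_polytopal(nx.complete_bipartite_graph(3, 3)))  # False: 3-connected but not planar
print(is_polytopal(nx.cycle_graph(6)))                  # False: planar but only 2-connected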

There are different ways this convex polytope can be constructed, and it can be made to take various forms. Oded Schramm, in a paper titled 'How to Cage an Egg' that should go on my list of cool paper titles, shows that given any convex polytope and any smooth strictly convex body in R^3, there is a combinatorially equivalent convex polytope that can be embedded such that each edge touches the surface of the convex body (in particular, we could use a sphere). There are many other results of this flavor; Ziegler's book on polytopes has more, as does Jeff Erickson's page on constructing convex polytopes.

But the question that puzzles me is this. Steinitz's original proof yields a polytope with rational coordinates (and thus, after scaling, integer coordinates). However, the values involved are doubly exponential in the number of vertices. Subsequently, Onn and Sturmfels showed that a "mere" single exponential suffices (specifically, their bound is of the form n^(169n^3)). It is open whether a polynomial (quadratic?) bound suffices.

Enter László Lovász, and the Colin de Verdière number of a graph. It would take too long to explain this number here (van der Holst, Lovász and Schrijver have a survey); roughly, it is defined via the multiplicity of the second-smallest eigenvalue of an associated matrix, and it has the interesting property of characterizing outerplanarity, planarity, and linkless embeddability at different threshold values. Lovász and Schrijver showed that a Colin de Verdière matrix (a matrix achieving the number) can be used to embed a planar graph on the sphere (using geodesics as edges) so that each face is the intersection of the sphere with a convex polyhedral cone.

Lovász, in later work, then showed that this matrix can be used to generate a Steinitz representation of a planar graph as well. Which finally brings me to the question I wanted to ask:

Is there anything known about the size complexity of the Steinitz representation that the Lovász construction generates ? My references for the size of the Steinitz representation are quite old (the Ziegler book and the Onn/Sturmfels paper) and maybe there is a known polynomial upper bound ? If not, could the Lovász construction be a good candidate ?

It's worth noting that for higher-dimensional polytopes, it is known that a doubly exponential representation is the best one can hope for. As often happens, the 3D case is special in this regard.

Update: David Eppstein points out that Chrobak, Goodrich and Tamassia showed a bound of O(n log n) bits per vertex (instead of n^3) in SoCG 1996 (they apparently weren't aware of the Onn/Sturmfels result, though).

Update II: Isn't the internet great ! Another commenter points out a monograph by Jürgen Richter-Gebert on the realization of polytopes. Here's where things get interesting. The monograph (dated 1996) gives two Steinitz realization bounds. One is a general bound that uses 18n^2 bits per vertex (better than Onn/Sturmfels), and the other, which applies to simplicial polytopes (specifically, realizations of the 3-connected planar graphs that Chrobak et al. consider), uses a linear number of bits per vertex (something like n·log 43) ! So this work improves on the Chrobak et al. result while being roughly concurrent (both groups appear to have been unaware of each other's work).

Of course all of this predates the Lovász construction, so the possibility of further improvement still exists.


Saturday, August 26, 2006

Mathematics and Wikipedia

From yet more ink spilled over l'affaire Perelman, a description of mathematics apropos for these modern times:
Mathematics is supposed to be a Wikipedia-like undertaking, with thousands of self-effacing scriveners quietly laboring over a great self-correcting text.




A language for computational topology

sigfpe makes a case for Haskell, using simple Haskell code to compute Betti numbers.
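Not to take anything away from the Haskell, but the underlying linear algebra fits in a few lines of Python too: over the rationals, the k-th Betti number is dim C_k minus the ranks of the two adjacent boundary maps. The helper names and the toy complex below are mine, and the floating-point rank computation is only meant for small examples.

from itertools import combinations
import numpy as np

def boundary_matrix(k_simplices, faces):
    # matrix of the boundary map sending each k-simplex to its (k-1)-dimensional faces
    row = {f: i for i, f in enumerate(faces)}
    D = np.zeros((len(faces), len(k_simplices)))
    for j, s in enumerate(k_simplices):
        for i, f in enumerate(combinations(s, len(s) - 1)):
            D[row[f], j] = (-1) ** i          # alternating signs
    return D

def betti_numbers(cx):
    # cx[k] = list of the k-simplices, each a sorted tuple of vertex labels
    top = max(cx)
    rank = [0] * (top + 2)                    # rank of boundary_k; boundary_0 is zero
    for k in range(1, top + 1):
        rank[k] = np.linalg.matrix_rank(boundary_matrix(cx[k], cx[k - 1]))
    # b_k = dim C_k - rank(boundary_k) - rank(boundary_{k+1})
    return [int(len(cx[k]) - rank[k] - rank[k + 1]) for k in range(top + 1)]

# hollow triangle, i.e. a circle: one connected component, one loop
circle = {0: [(0,), (1,), (2,)], 1: [(0, 1), (0, 2), (1, 2)]}
print(betti_numbers(circle))                  # [1, 1]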


Friday, August 25, 2006

TWA: Travelling while Asian

So let me get this straight: I suspect you also can't have brown skin, but naah: that would just make me paranoid...



Tuesday, August 22, 2006

The "I-Team" vs the A-Team.

From Terence Tao's Fields Medal announcement:
He did this work in collaboration with four other mathematicians, James Colliander, Markus Keel, Gigliola Staffilani, and Hideo Takaoka. Together they have become known as the "I-team", where "I" denotes many different things, including "interaction"... The word "interaction" also refers to interactions among the team members, and indeed collaboration is a hallmark of Tao's work.
Consider this, from the A-Team:
I like mathematical progressions, but we're really picky about whom we work for.
Given one of the things Tao won the Fields Medal for, this is strangely appropriate.



Replacing impossibility by infeasibility

One of the great conceptual breakthroughs in cryptography was the idea that an impossibility - this code cannot be cracked - can be replaced by infeasibility - this code is computationally hard to crack - with great success. Indeed, the notion that all you need to do is fool a computationally bounded adversary is at the core of derandomization.

It's an idea that transcends theoryCS - that you can model entities by computationally bounded objects, and can then prove interesting things about them that you couldn't have proved if the entity were all-powerful. And it's realistic ! As long as you're willing to believe in the effective variant of the Church-Turing thesis (all effective computations are in BPP; we'll pretend quantum computing doesn't exist for now), it accurately models any computational entity you can choose to build.

Which brings us to social choice and voting theory. As you may well be aware, Arrow's theorem is a rather depressing impossibility result for designing a "fair" election: with three or more candidates, either you give up some natural fairness property (no dictatorship !), or an adversary can find a way to beat the system. But given all of the above, a more plausible question one might ask is: can a computationally bounded adversary crack an election ? (Disclaimer: I have no evidence that Diebold is a computationally bounded adversary; it certainly has no resource constraints)

What got my attention were two new papers on the arXiv, one by Faliszewski, Hemaspaandra, and Hemaspaandra, and the other by Hemaspaandra, Hemaspaandra, and Rothe.

The first paper asks the question that Katherine Harris must surely have wondered about: "How Hard Is Bribery in Elections?". The short answer is "It depends", and the long answer is that depending on the type of voting scheme and on who's doing the rigging (a briber or the voters themselves), the problem ranges from being in P to being NP-complete.
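To give a flavor of the easy end of that spectrum, here's a toy Python sketch of a greedy idea in the simplest setting I can think of: plurality voting, unweighted voters, unit bribe prices, and ties counting as wins. (These modeling choices are mine, for illustration; the paper works with a range of more refined models.) To elect p as cheaply as possible, keep bribing a supporter of whichever opponent currently leads.

import heapq

def plurality_bribes_needed(scores, p):
    # scores: candidate -> number of plurality votes; returns the minimum number of
    # voters to switch to p so that p ties or beats every other candidate
    my_score = scores[p]
    heap = [-s for c, s in scores.items() if c != p]   # max-heap of opponents' scores
    heapq.heapify(heap)
    bribes = 0
    while heap and -heap[0] > my_score:
        top = -heapq.heappop(heap)        # bribe one supporter of the current front-runner
        heapq.heappush(heap, -(top - 1))
        my_score += 1
        bribes += 1
    return bribes

# made-up election: two bribes suffice to bring 'p' level with the leaders
print(plurality_bribes_needed({'p': 3, 'a': 6, 'b': 5}, 'p'))   # 2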

The second paper studies the problem of when the organizer of an election seeks to rig it:
Electoral control refers to attempts by an election's organizer (``the chair'') to influence the outcome by adding/deleting/partitioning voters or candidates.
In some circles, this is also known as the venue selection process for SODA. What the authors show is that by combining elections in a certain way, the overall election can be made resistant (i.e., NP-hard) to control by the organizer.



Saturday, August 19, 2006

Originality vs understanding, and the nature of creativity.

One of the comments in the SIGGRAPH forum I talked about earlier was about the (mistaken) emphasis on original but trivial contributions over "incremental" but more profound ones. The statement is of course stacked (who would argue in favor of "trivial" contributions !), but it brings up a point worth noting.

You can be creative by being original, and indeed this is the standard way one thinks about creativity. But emphasizing originality alone misses a deeper truth about why we do research, which is to acquire wisdom and understanding. It takes a certain, different kind of creativity to deeply understand a concept, often by establishing connections between known areas or by making subtle observations about the relationships between entities. Indeed, in computational geometry we have a name for one such kind of creativity: "Chan's method" :)

Originality and understanding are part of the larger process of acquiring knowledge and wisdom about a domain. A new area of knowledge profits from originality because problems start blossoming forth, and ideas come fast and furious. Over time, deep understanding leads to the building of connections among known concepts, and this further strengthens the area. Mature areas like mathematics find great value in the establishing of connections; the whole of category theory can be viewed as one gigantic connection generator.

Computer science is a young discipline; like all young disciplines, new ideas are valued, and they come so fast that there is often little time to reflect and make connections. My personal belief (your mileage may vary) is that at least in the realm of algorithms and geometry we've reached a kind of consolidation limit point, where more and more often we are running up against problems that call for brand new ways of thinking, and where "more of the same" kinds of results seem less interesting. In such a setting, results that prove less, but connect more, seem more valuable to me.




Wednesday, August 16, 2006

SIGGRAPH, hiring, and peer review.

There's been an interesting flareup in the graphics community prompted by the posting of a note by Michael Ashikhmin on his web page. Ashikhmin is a known graphics researcher (with SIGGRAPH papers to his credit) who recently (June this year) proclaimed his desire to leave the graphics community because of
my deep disgust for the state of affairs within computer graphics research community and my inability to fit well within existing system
His grievances are many, and appear to be a direct consequence of the hegemonic nature of SIGGRAPH, by far the most prestigious conference in graphics. Specifically, he argues that all the usual problems with conference reviewing (extreme subjectivity, poor review quality, clique formation) are exacerbated by the overwhelming influence SIGGRAPH papers have on one's career (hiring, promotion, tenure, and so on).

None of the objections are particularly novel; indeed, I have yet to go to a theory conference where people have NOT complained about the reviewers. However, what takes this beyond your ordinary embittered-researcher rant is that many prominent researchers in graphics appear to publicly agree with him.

Michael says in his letter that senior graphics researchers recommended that he host a forum devoted to discussing this issue; once he set the forum up, many well-known and respected graphics researchers (John Hart, Marc Levoy, and Jonathan Shewchuk, among others) commented publicly on the matter. In fact, Marc Levoy (who's the papers chair for SIGGRAPH 2007) went further and organized a town hall meeting at SIGGRAPH 2006 to discuss paper reviewing procedures (I don't know what transpired there).

There are many comments on the forum from anonymous commenters who claim to be published authors at SIGGRAPH. As far as I can tell, not one person disagrees with the primary claims that Michael makes, although Marc does attempt to mount a defense of the paper review process, while still acknowledging the main problems, and outlining strategies that he will employ in 2007 to fix some of them.

Many good suggestions were made. One of the primary ones was to add a new SIGGRAPH-like conference so that one venue didn't have to take all the load (STOC and FOCS were cited favorably here). Prohibiting committee members from submitting was another idea (again, the theory community was cited), although this was frowned upon by Marc Levoy, who complained that he wouldn't be able to find people for a committee (he did aver that he wouldn't be submitting anything).

This is probably the first time I've seen this kind of discussion take place (partly) on the record without dismissing the complaint as sour grapes. The question of course is whether anything will come of it in the long term. It's worth mentioning that even in our conservative (by nature, not by politics) research world, change can happen, and can often happen rapidly. Witness the collective revolt by academicians of all stripes against Elsevier, and closer to home, consider the split in ICRA (one of the main robotics conferences) to form RSS (Robotics: Science and Systems).



Tuesday, August 15, 2006

Poincaré's conjecture

Dennis Overbye has a great article in the NYT today about the resolution(?) of Poincaré's conjecture. He does an excellent job describing both the conjecture itself and the main line of attack that Perelman follows, while narrating the history of the proof both accurately and without unnecessary romanticization of the mathematicians involved.

For a nontechnical survey that goes into more detail about the roles of the different players, see Allyn Jackson's article in the latest Notices of the AMS. A more technical account is presented in a manuscript by Shing-Tung Yau.



Thursday, August 10, 2006

Guns don't kill people, people kill people.

I was watching CNN, and a talking head (who happened to be a former Somebody in Israeli aviation) was making a point that I thought was quite reasonable. He said that it's too hard to focus on the method of making explosives, since explosives sufficient to destroy a plane in mid-air can be made from a variety of chemicals. It was more important, in his view, to focus on the people executing the missions; it was much harder to hide oneself from an acute observer than it was to hide chemicals.

Israeli immigration is notorious for doing exactly this; I've never been to Israel, but colleagues tell me of being interrogated to within an inch of their lives about the details of theorems in the papers they were presenting. After 9/11, I remember similar arguments being made in the US, but they quickly got bogged down in issues of racial profiling, and we ended up deciding that randomly pulling aside gentle Caucasian grandmothers from Iowa was a reasonable thing to do.

So what's the computer science angle here ? If we think of computer security, I'd argue that many of the discussions revolve around blocking techniques: this attack on that cryptosystem, that weakness in this OS, and so on. It seems like the security precautions being put into place on the airlines now are just like that: a weakness is revealed, a (usually overblown) patch is put in place, and so on.

This approach appears completely unsuited for protecting against "real" security hacks though ! The much-derided 'humint' approach appears to be far more effective (and far less overtly intrusive to the population at large).



Precision, recall, and bottles of water.

The latest plot to blow up airliners has sparked a predictable overreaction from the much-beloved TSA, which once again demonstrates that it lacks a basic understanding of precision and recall.

Let's recap, shall we ? Suppose you're trying to find all elements of a subset S in a universe U, and return as an answer the set H.

Precision measures the quality of your answer; what fraction of the elements of H are indeed in S ? The higher the precision, the lower the false-positive rate.

Recall measures the efficacy of the procedure; what fraction of the elements of S did you actually find ? The higher the recall, the lower the false-negative rate.
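Concretely, with made-up numbers (the function and the example are purely illustrative):

def precision_recall(H, S):
    hits = len(H & S)                          # returned elements that really are in S
    precision = hits / len(H) if H else 1.0    # fraction of the answer that is correct
    recall = hits / len(S) if S else 1.0       # fraction of the targets that were found
    return precision, recall

# flag 100 items, of which 5 are among the 8 real targets
S = set(range(8))
H = set(range(5)) | set(range(100, 195))
print(precision_recall(H, S))                  # (0.05, 0.625)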

As any statistician will tell you, false positives and false negatives trade off against each other: push one down, and the other tends to go up. Search engines need high precision (they also need good ranking, but that's a different story).

The TSA is clearly going for high recall. Banning bottles of water will surely eliminate any future plans for liquid explosives that use water, but it also eliminates the many (how shall I say) *innocent* uses of water.

p.s. I don't mind the short-term enforcement of stricter guidelines while law enforcement attempts to figure out what kinds of explosives were being designed. I just have no faith that the new draconian regulations will be relaxed any time soon, considering how long it took to allow us to carry nail clippers on board flights again.



Thursday, August 03, 2006

Timestamping using the arXiv...

When is it appropriate to post an article to the arXiv ? You could opt to post when
  • A journal/conference accepts the paper. You could also just post it on your website at that point, although the arXiv does allow for better dissemination.
  • You submit to a journal/conference. I'm told that this is often the model in areas of physics. Seems reasonable enough, although at least in CS, the fear of being "scooped" might prevent you from doing so, not to mention the often-byzantine "double-blind" policies of some conferences.
  • You've dried yourself off from the shower you were taking when inspiration struck. Ok, so maybe I'm exaggerating slightly, but...
Can the arXiv be a valid reference for time-stamping ? In other words, can you use a document posted on the arXiv as proof of 'prior art' ? Consider the following scenario: Santa and Banta* are both working furiously (and independently) on a brand new result in synthetic polymorphic differential logic. Santa submits his paper to the ACM Symp. on SPDL, while Banta misses the deadline. Banta, smartly enough, posts his manuscript on the arXiv, while Santa's paper is deemed not algorithmic enough and is rejected summarily from SPDL.

When IEEE SPDL comes along, who claims precedence ? Certainly not Santa, since he has no document to be cited. But can Banta claim precedence merely by posting on the arXiv ? There has been no peer review, after all, and his proof could be full of holes.

It would be easy enough to declare that Banta has no claim to precedence, since there is no peer-reviewed, citable work available. But there are several problems with this:
  • It negates the value of the arXiv ! After all, if I cannot claim any kind of precedence, but someone else can pick over my result and improve it, what incentive do I have for posting anything ? One answer could be that my result is cast-iron and can't be improved, but that happens far less often than one would like, and cannot be a useful answer in all situations.
  • It ignores common practice within the community ! People will often talk about new results they have, and if the result is important enough, word will spread, and ownership will be (informally) established.
  • People cite technical reports all the time ! One of the founding papers of streaming algorithms was (and has remained) a DEC technical report.
Personally, I'd still like to use the "peer-reviewed publication" as the best model for timestamping. But this also explains why the arXiv appears more popular among physicists and mathematicians, who publish primarily in journals. After all, the risk of a paper being rejected by a journal can be driven far lower than the corresponding risk at a conference, and so posting to the arXiv is a relatively risk-free venture. I also suspect that people hold off on more controversial work, sending it to the arXiv only once it has already been accepted at a peer-reviewed venue.



* As my Indian readers will know, Santa and Banta often star in jokes making fun of the lack of intelligence of a particular ethnic community in India. At least in this anecdote, they are both smart researchers; consider it my contribution to ethnic amity. :)



Musings on the arXiv.

Things that make you go 'hmmmm', or why arXiv can't replace peer review:
The polynomial solution of graph isomorphism problem is obtained by consideration symmetry properties of regular $k$-partitions that, on one hand, generalize automorphic $k$-partitions (=systems of $k$-orbits of permutation groups), and, on other hand, schemes of relations (strongly regular 2-partitions or regular 3-partitions), that are a subject of the algebraic combinatorics.
Although I appreciate the value of the arXiv, there are some things about it that I still don't get.
Take the paper excerpted above. It claims to solve graph isomorphism in polynomial time, a result that, if true, would be quite momentous. The abstract already warns me that the technical material would be complex, and (call me parochial, shallow, what-have-you) the complete absence of grammatical structure does raise some tiny red flags.

So where does this leave me ? I'd like to know whether there is any merit to this paper. In the absence of peer review, I'd have to wade through it myself, or hope that someone with more technical expertise in the material does it for me, and announces their verdict in a forum where I can hear about it.


