Tuesday, May 30, 2006

Shortest Paths On Convex Polytopes

So here I am, trying to get some sausage tasting in before I leave. I'm trying to hunt down a paper by Schreiber and Sharir on computing shortest paths exactly in three dimensions on a convex polytope.

The story of this problem is interesting. If the domain is arbitrary (three dimensions with arbitrary obstacles: think of a missile flying through a city), the problem is NP-hard (which is good if you're a pacifist). If you then restrict yourself to shortest paths on a convex polytope (now you're a snake on a really nice hill, trying to find some food), then the problem becomes easier.

Eons ago (well ok, back in 1986) Sharir and Schorr gave the first polynomial time algorithm for this problem. Their solution (running in n^3 log n time) exploited a very neat fact about shortest paths on a convex surface: the so-called "unfolding" property.
Suppose you draw a shortest path between two points on a convex surface, and then cut out the faces containing the path. If you unroll them onto a table, the path you drew becomes a straight line. Voila !
This was subsequently improved. Mount used a "continuous Dijkstra" method that expanded wavefronts from the start point, shaving the running time down to n^2 log n. This approach was generalized to non-convex polyhedra by Mitchell, Mount and Papadimitriou. In 1990, Chen and Han removed the log factor in the algorithm, using a different approach. That's where things stood for a long time (I'm ignoring other aspects of the story) till 1999, when Sanjiv Kapoor had a paper in STOC that claimed an n log^2 n running time for the problem.
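As a toy illustration of the unfolding property (not any of the algorithms above), here's a sketch for two adjacent faces of a unit cube. Assuming the shortest path crosses the shared top-front edge, unfolding the top face about that edge flattens both faces into one plane, and the geodesic distance is just a straight-line distance there. The function name and parameterization are mine, purely for illustration:

```python
import math

def geodesic_across_edge(a, b):
    """Shortest path on a unit cube between a point on the front face
    and a point on the top face, assuming the path crosses the shared
    top-front edge.

    a = (x, z): point on the front face, with x, z in [0, 1]
    b = (x, y): point on the top face, with x, y in [0, 1]

    Unfolding the top face about the shared edge maps b to the 2D
    point (b_x, 1 + b_y) in the plane of the front face; the geodesic
    is then the straight segment between the unfolded points.
    """
    ax, az = a
    bx, by = b
    return math.hypot(bx - ax, (1 + by) - az)

# Center of the front face to the center of the top face:
# the unfolded points are (0.5, 0.5) and (0.5, 1.5), so the length is 1.
print(geodesic_across_edge((0.5, 0.5), (0.5, 0.5)))  # -> 1.0
```

The hard part of the real algorithms is exactly what this sketch assumes away: deciding which sequence of edges the shortest path crosses.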

Here's where things get strange: apparently no one (except the author, I imagine) believes that the paper is correct. There are numerous details left unspecified in the conference version (Joe O'Rourke has made an attempt to explain what's going on), and to date there has been no journal version of this paper.

Which brings us to the paper I started off with. Schreiber and Sharir have a paper in this year's SoCG that presents an optimal algorithm for the shortest path problem on convex polytopes. Their algorithm runs in n log n time. The long version of the paper is now 131 pages and counting (!), and contains in an appendix an attempt at refuting aspects of Kapoor's algorithm.

Verifying the details of this algorithm will take some time, but at least there is a fully detailed manuscript available. Perhaps this will bring to an end the mystery surrounding the true complexity of the problem.

Aside: So I do a Google search for "An efficient algorithm for shortest paths on a convex polytope in three dimensions", and the top links I find are for "Approximating shortest paths on a convex polytope in three dimensions". I guess approximations are more important after all.

Update (3/26/07): I recently received the following email from Sanjiv Kapoor and am reproducing it with his permission:
The single outstanding comment "apparently no one (except the author, I imagine) believes that the paper is correct" on your blog, is worrisome in its expansiveness -- (note that a blog is a public document)--

A more detailed version (at least with enough details from my viewpoint) was available and was never requested by anyone -- so I don't know if anyone even tried to seriously understand the method, apart from Joe O'Rourke and recently SS. The only error, which I myself found, has not been reported by anyone else as yet. The recent paper with Inkulu and Maheshwari gives an "improved" planar SP algorithm, which of course didn't make it to SoCG, but has crossed the 40-page (single-spaced) mark (if that is a measure of detailed correctness). In my original version I actively tried to omit all details which would require a few minutes' thought--
He also encloses a mail from Yevgeny Schreiber that acknowledges his comments and modifies one of the examples in which Schreiber and Sharir claim a problem with his algorithm. Schreiber goes on to mention technical details in the 1999 paper that remain unresolved, so the matter remains unsettled.

On Zero Knowledge in Games...

I'm back now from the ESA PC meeting, not to return for at least five more years. ESA has a 'diversity' policy: no one can repeat memberships on a committee within five years (This doesn't apply to PC chairs though). Rumor has it that this is a reaction to a system where new blood wasn't coming into the conference committees at all.

I don't know if institutionalized diversity is necessarily the answer in the long term: maybe it's a good short term measure to get some new blood in. You do need some amount of continuity on committees while trying to encourage fresh participation. SODA/STOC/FOCS appears to do this reasonably well.

While catching up with my blogs (what are they putting in the water in Atlanta, anyway ?), I came across this post from Antimeta. It considers (for example) the situation when reading a letter of reference:
we were trying to figure out whether silences can carry implicatures, or more ordinarily, whether you can say something without words. And of course the answer is yes:

Q: "What do you like about John?"
A: [silence]

The problem is about the communication of meaning, and the interplay with intention. Antimeta goes on to say:
A is expected to make a contribution to the conversation mentioning some positive fact about John. A's silence violates the maxim of quantity (she hasn't said as much as is expected), so Q can infer that some other conversational principle (one requiring politeness) must conflict with anything that A would be in a position to say. Therefore, Q comes to believe that there is nothing (or at least nothing relevant) about John that A likes.

But then I realized that we should think about this (and perhaps the original recommendation letter example) a bit more carefully. It seems that the story given above could work in at least two different ways. In one case, A is struggling for an answer, and the silence just comes about because she can't think of anything she likes about John. In the second case, A knows there is nothing she likes about John and remains actively silent. I think the second case is an example of an implicature carried by a silence, but the first is not.

The explanation of the distinction comes from Grice's earlier classic paper, "Meaning" in which he suggests that speaker A means y by utterance x iff "A intended the utterance of x to produce some effect [y] in an audience by means of recognition of this intention." He comes to this recursive intention account of meaning by way of a bunch of examples, which I think parallel the situation here. If I don't intend you to believe (or consider) something by means of my action, then I didn't mean it, even if it's true. Thus, my silence can reveal my dislike for John, just as an accidentally dropped photograph can reveal where I was the other day, but neither means it. But even just performing the action intentionally isn't enough - Grice suggests that showing someone a photograph doesn't constitute a meaning of what is depicted, because my intention plays only an unnecessary role in the observer's coming to believe the truth of what is depicted.

All I can say is: Phew ! Pursuing a Ph.D. program in logic and philosophy is no joke.

Seriously though, the post also talks about a paper by Edward Epsen that attempts to apply zero knowledge ideas to game theory. Specifically, the author considers a two-player setting where one player either does (is "informed") or doesn't (is "uninformed") know which of k possible games is being played. The second player has a prior belief, with probability q, that player 1 is informed. It turns out that there's a zero knowledge protocol that drives q to 1: that is, a protocol by which player 1 can convince player 2 that he is informed about the choice of game being played, without revealing which game it is.

I'm simplifying: these are games of incomplete information, where players may not even know the payoffs from their actions. According to the author, these were first used by Aumann and others to handle Cold War conflicts (one example given is: "suppose the US convinced the Soviet Union to reduce its ballistic missile stockpile by 100. What percentage of the Soviet stockpile is this?"). Obviously it would be hard to determine the answer to the second question, and thus hard to determine the payoff of the first action.
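The belief dynamics are easy to sketch. What follows is just Bayesian updating under a hypothetical pass/fail challenge protocol of my own devising, not Epsen's actual construction, and it says nothing about the zero-knowledge property itself: suppose that in each round an informed player passes a challenge with certainty, while an uninformed player can only pass by guessing among the k games.

```python
def update_belief(q, k):
    """Posterior probability that player 1 is informed, after player 1
    passes one challenge. An informed player passes with probability 1;
    an uninformed one passes with probability 1/k (a blind guess among
    the k possible games).
    """
    return q / (q + (1 - q) / k)

q, k = 0.5, 4  # prior belief 0.5, four possible games
for round_num in range(1, 6):
    q = update_belief(q, k)
    print(f"after round {round_num}: q = {q:.4f}")
# q climbs toward 1: 0.8000, 0.9412, 0.9846, 0.9961, 0.9990
```

Each passed round multiplies the odds of being informed by k, so q approaches 1 geometrically; the subtlety in the actual paper is arranging the challenges so that they leak nothing about which game is being played.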


Sunday, May 28, 2006

On scores for papers: A postscript

The SIGACT reviewing system asks reviewers to assign scores when reviewing papers. It goes something like this (lightly edited):
  • 9-10: An enthusiastic yes. I will fight strongly for this paper.
  • 8-8.99: A strong vote for acceptance. A solid contribution. This paper should be in the top third of the papers in the conference.
  • 7-7.99: A vote for acceptance. Not a stellar result, but clearly worth accepting.
  • 6-6.99: A weak vote for acceptance. A reasonable contribution to an interesting problem - or maybe the contribution is good but the authors don't seem to understand what it is and/or express it well - or maybe it's a good paper, but the subject area is marginal for the conference.
  • 5.0-5.99: Ambivalent. I might support accepting this paper, but I don't advocate accepting it. Probably publishable as a journal paper, but a bit too specialized or too incremental or perhaps it has nice ideas but is too preliminary, or too poorly written.
  • 4.0-4.99: A competent paper, but not of sufficient interest/depth.
  • 2.5-3.99: A solid vote for rejection. It is very unlikely that I could be convinced to support this paper.
  • 1-2.49: A strong vote for rejection. A poor paper, unsuitable for any journal.
  • 0.01-0.99: Absolute reject. Completely trivial and/or non-novel and/or incorrect and/or completely out of scope.
More often than not, these scores are not revealed to the authors. And that's a good thing. The scores themselves are mostly placeholders for more intangible sentiments. Since ultimate selections are to a large degree relative, absolute numbers don't mean so much. Also, as anyone who's ever reviewed papers knows, high scores don't guarantee acceptance, and conversely.

I think it's a good idea that scores are kept merely as internal markers, and only provide light hints as to the final fate of a paper. I've reviewed papers where the individual review is given choices like 'strong accept, weak accept, weak reject, strong reject', and the categorical nature (as well as the implied decision) of these verdicts makes it harder, IMO, for a paper to survive an early dissing by a reviewer. Moreover, if your paper is rejected with a set of reviews containing three weak accepts, it can be really puzzling to figure out what happened.

More importantly, not revealing scores prevents people from falling into the confusion that paper reviewing is a deterministic, objective process; it most certainly isn't. In my experience, paper discussions can be extremely nuanced and sophisticated, far more so than a mere "objective" score can convey.


ESA PC Meeting: Day II

ETH Zurich was one of the first "foreign" universities I ever heard of. Pascal was the first "real" language I learnt, and I knew that the inventor of Pascal, Niklaus Wirth, was at ETH Zurich (he's retired now).

The ETH Computer Science building is very snazzy. From what I was told, it used to be inhabited by chemists, and there are still lab spaces with racks for beakers and spots for Bunsen burners (ooo, Bunsen burners, brings back memories of my dismal lab skills). There is at least one rather charming 'old-school' classroom that I saw, with wooden benches and fold-out desks. The look was somewhat marred by a very modern looking projector, and a computer screen on the front desk.

The department itself has a good geometry presence, with Emo Welzl, Bernd Gärtner, and Michael Hoffmann. It compares well (for CG) with places like Utrecht and the Free University of Berlin.

We managed to finish up earlier than scheduled today. The last few papers to be accepted always take the most time: it's like how the complexity of a system increases dramatically at a phase transition ! Overall, I think the main lesson I took back from looking at the papers is that experimental work takes a great deal of skill and effort. You have to tease out the interesting phenomena and design the right experiments to illustrate your points well. Word on the street is that far more good papers were submitted than there were slots available. It's an example of a trickle down effect that actually works !! As STOC, FOCS, and SODA have progressively become saturated, the quality of conferences like ESA is steadily increasing.

Almost everything in Zurich is closed on Sundays. There is a nice hike one can do in the Uetliberg region (a hilly area that faces the lake). It was a good 1.5-hour hike, with distance markers indicated not in miles but in planetary fashion: the start point was the Sun, the end point was Pluto, and points in between represented the scaled distances of the planets from the Sun. Geeky, but interesting.

It's time to go home...


Saturday, May 27, 2006

ESA PC meeting, Day 1

This is a hectic time for me: I just got back from a workshop, and am now in Zurich for the ESA PC meeting, after which I go vortex hunting for a few days. Of course, my not-quite-10-month-old chose this exact weekend to get off his b*** and start slithering around on the floor.

Zurich is a nice town, and I emphasize the word 'town'. People remarked on how quiet the town appeared to be even on Friday and Saturday nights. We finally found the area where all the action was, but it was still quite small. The wonders of decentralization ! Apparently, Switzerland is so decentralized that you acquire citizenship by municipality or canton vote, rather than by any kind of nationalized process. My neighbours in Philadelphia barely know who I am :)

I had a good reason to like Switzerland even before I came here. It is the only place where an American green card is sufficient to enter the country: no annoying visa forms to fill out (Hindi-Swiss bhai bhai !!). My second reason to like Zurich in particular is that this is the first European city where I didn't feel like a homeless bum wearing the clothes that I did. Much of Western Europe is far too well-dressed for my slovenly tastes.

Where was I ? Oh yes, the PC meeting...

Of the major theory conferences that I am familiar with, STOC, FOCS, and SoCG all have physical PC meetings, and ESA occasionally has one as well (they didn't have one last year). SODA, most notably among the major conferences, does NOT have a PC meeting: given the size of SODA PCs, this is not surprising.

Having a physical PC meeting creates an interesting dynamic. It has often been observed that people are much ruder on the internet than in real life; the relative anonymity of the Net appears to eliminate the social norms that pressure us into good behaviour. A physical PC meeting does tend to moderate sharp edges in our tendencies to do battle. As long as the PC chair can prune obvious rejects ahead of time in electronic discussions, like Thomas Erlebach did for us, we can have a fairly efficient meeting, and have reasonable discussions on all papers.

People put a lot of faith in scoring: it is interesting that high scores are not always a sure sign of acceptance. When people get together to discuss papers, far more nuances emerge, and overall I think this is a very good thing. Of course, all of this can happen in an electronic meeting as well; it's just that the process of deliberation feels very different. It's also quite a pleasure to meet face-to-face some of the people you might have had discussions with.

One day over; one more to go. Time to find some good Swiss chocolate (and no, I don't mean Lindt).


Friday, May 26, 2006

Hales' detailed proof of Kepler's conjecture, in Disc. Comp. Geom.

Readers of this blog will be well aware of the interesting events surrounding Thomas Hales' proof of Kepler's conjecture:
...that close packing (either cubic or hexagonal close packing, both of which have maximum density π/(3√2) ≈ 74.048%) is the densest possible sphere packing
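The density figure is easy to check numerically:

```python
import math

# Maximum density of cubic/hexagonal close packing: pi / (3 * sqrt(2))
density = math.pi / (3 * math.sqrt(2))
print(f"{density:.5%}")  # -> 74.04805%
```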
In short, his proof involved large amounts of computer verification, and after three years of intensive effort, the Annals of Mathematics determined that it could not verify with certainty that the proof was correct (not because there were problems, but because the proof had many "low-level components" that lacked a more general intuition, and were thus hard to verify, especially given the degree of computer search involved).

A strange arrangement was then made: the Annals of Mathematics published a briefer version of the proof, and made the code/data for the proof available unreviewed on its website. Discrete and Computational Geometry then undertook to publish detailed versions of the six preprints that Hales and his student S. P. Ferguson wrote in 1998.

That issue is now out, with a foreword by Gabor Fejes-Toth (who ran the committee that first attempted to verify the proof for Ann. Math) and Jeff Lagarias (a much-missed AT&T alum who spent a significant amount of time and effort restructuring and clarifying the proofs from the original preprints).


Wednesday, May 17, 2006

On the algorithmization of science

The latest issue of ACM's tree-killer has a Viewpoint by Thomas Easton on how algorithmic thinking pervades all areas of science (and soft science), and all I can say is, 'Hallelujah !':

Mathematics and algorithms are so essential to computational thinking that computer science will retain its historical emphases on these topics. Other fields that, like biology, have sought to mathematize their content will find that, because mathematics has incorporated algorithmic thinking, they might get closer to the queen by emphasizing algorithms over equations. Resistance is likely to be strong in the humanities, but even there algorithmic thinking is becoming essential. For example, it has found a niche through the way computers are used to verify that classic texts and artwork are properly attributed to their creators, to detect plagiarism, and even to aid the creative process.

More and more educators will have to grapple with the need for courses that inculcate algorithmic, or process-oriented, thinking. This might mean students will take more computer science courses. If this happens, computer science educators may need to redesign those courses to emphasize algorithmic thinking in ways that satisfy the needs of students in other fields, including those with only a distant relationship with mathematics.

(via Daniel Lemire)


The more things change...

It's a common story (and one experienced by many):

Young student starts taking an interest in research.
He studies hard, clears his apprenticeship, and starts publishing.
He has some success, and gains some recognition.
He starts to gain confidence, and branches out into his own subarea.
Committees start rejecting his papers; frustration ensues.
He tries other publications, and more rejections follow.
But his work is being recognized by those who appreciate him: funding follows slowly.
He creates a new conference, for misunderstood creators like himself.
It works for a while, but he gets disenchanted, and tries to go back to the more prestigious conferences.
More rejections follow, funding remains limited, but his fame grows and grows.
Finally, he's so well known that his area gets its own name, and retrospectives are organized in his honor.

And who's the unfortunate researcher who exemplifies this trajectory ? It's Claude Monet !


Thursday, May 11, 2006

Behavioral computing and airline loading...

Can distributed and local computing really be more efficient than centralized planning ? In economics of course, we know the answer, but in computing ?

Consider the problem: how to sequence passenger loading on an airplane so as to minimize completion time. In a Wired article by Dave Demerjian (pointer via Virginia Postrel), researchers proposed a "reverse pyramid" method (rear windows and middles before front aisles, roughly speaking) for loading passengers. The article also mentions "WILMA", or "Window-Middle-Aisle" order, and rotating zones (last 5 rows, first 5 rows, and so on), all of which are in use on different airlines.

Southwest, however, uses a different method, one that many of you are probably familiar with. Passengers are partitioned into three zones; they board by zone and sit wherever they want. According to the article,
...while Southwest's open seating might seem like an invitation for chaos, it actually illustrates a tendency among passengers to self-organize when left to their own devices. "Passengers who are free to sit anywhere usually do a good job staying out of each other's way," he explains. "Without having studied it in detail, I would imagine that an open boarding model is faster than assigned seating."
What we have here, ladies and gentlemen, is an example of "Behavioral Computing". Michael Kearns at UPenn is one of the people investigating this space, and as he puts it,
Perhaps the computer science view of this fascinating line of thought can be best summarized as follows: Using relatively local information, distributed human organizations can compute good approximations to the all-pairs shortest paths problem. What other sorts of distributed optimization problems can human networks solve?
Now this is not computing in the algorithmic sense: no n^2 time algorithms or NP-hardness results. But it reflects a growing interest in the "wisdom of crowds", social networks, and emergence phenomena (heck, even Charlie Eppes is working on 'cognitive emergence'). Kearns and his students Nick Montfort and Siddharth Suri have some initial behavioral studies on graph coloring problems: there isn't a paper yet as far as I can tell, but I've heard about the results informally.
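To get a (simplistic) flavor of what local computation on a graph coloring problem looks like, here's a sketch in which each node greedily picks the smallest color not used by any of its neighbors, a purely local rule. With maximum degree d, one pass already yields a proper coloring using at most d+1 colors; the behavioral question is how well humans do with far less structure:

```python
def greedy_local_coloring(adj):
    """Color a graph by a local rule: each node, in turn, picks the
    smallest color not already used by any of its neighbors. With
    maximum degree d, this uses at most d + 1 colors and one pass
    produces a proper coloring.
    """
    colors = {}
    for node in adj:
        taken = {colors[n] for n in adj[node] if n in colors}
        colors[node] = next(c for c in range(len(adj)) if c not in taken)
    return colors

# A 5-cycle: an odd cycle, so it needs 3 colors.
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = greedy_local_coloring(ring)
# No edge connects two nodes of the same color.
assert all(coloring[u] != coloring[v] for u in ring for v in ring[u])
print(coloring)
```

Of course, this sketch is sequential and obedient; the behavioral-computing setting replaces the fixed rule with self-interested humans acting concurrently, which is exactly what makes the experiments interesting.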

For those who might be wondering, this is not warmed-over distributed computing. Distributed computing does indeed involve local computations, but the entities are slaves: devices that run specific protocols that lead to the desired outcome (leader election is a classic problem of this kind). There is a notion of malicious machines (devices that may have been compromised), and Byzantine agreement is a famous example of the kinds of problems one considers.

But the fundamental difference is that in this notion of behavioral computing, the local agents are not running pre-specified programs. They are often acting in their own interest, or working under some incentive model, ("I want to find the best seat quickly"), and may execute different procedures towards this goal. The key idea is to set up "a system that controls network structure, information conditions, incentives, and a variety of other variables of interest".

Thus, behavioral computing is more like distributed game theory, and fits nicely into the rapidly growing area of computational game theory.


On rejection and reviews..

Lance, Luca and Oded have all weighed in on the matter of rejection letters and the proper frame of mind one views them with. Oded makes the important point about such things being about the allocation of scarce resources, rather than the assigning of value to a paper (or a candidate).

This alternate "frame" is crucial to understanding some of the more bizarre quirks in our conference review process. Theory conferences are (in)famous for short to non-existent comments to authors; it is not uncommon in certain other areas to get pages and pages of comments back. However, you quickly learn that pages and pages of comments are usually as useless as a blank page in trying to determine why your opus didn't make it, while other pieces of %##$^&$ did.

Even the usual process of assigning scores is not useful, since the score indicates some absolute figure of merit, whereas many papers live or die based solely on the particular mix of submissions. In some conferences, it's even worse: you are required to give a paper a rating of the form "strong/weak accept/reject or neutral", essentially passing judgement on a paper in vacuo, as it were.

Once we accept (or realize) that conferences (especially nowadays) are about the allocation of scarce resources (speaking slots, or even 10 pages in the proceedings if you still insist on paper proceedings), then it becomes clear that trying to assign absolute scores and verdicts to papers without being able to compare with others is meaningless.

As a corollary, it means that any review process where reviewers don't get to see a large fraction of the papers is flawed. It also means that fetishizing "feedback to authors" is misleading. As Oded points out, this only gives an illusion of a value judgement where none can really be had.

It also has more radical implications for the way we design conference committees (number of people, whether PC submission is allowed, who gets to review what). But that's a tale for another post...

Wednesday, May 10, 2006

Numb3rs reference: Neil Sloane's Online Encyclopedia of Integer Sequences

This week's episode of Numb3rs had a reference to the Online Encyclopedia of Integer Sequences, maintained by Neil Sloane (an AT&T colleague). Charlie was attempting to backtrace where a network hacker was coming from, and discovered that his own code had been compromised: a secret code had been inserted that spelt out a number sequence. Charlie was explaining this in class:
Student in Charlie's class: 23, 5, 18 is W E R in Sloane's Encyclopedia of Integer Sequences. Maybe it's a simple alphabet cipher?
(Thanks to Graham for the dialogue)


Friday, May 05, 2006


Ever since Aaron Archer (aka A. Dogg) joined AT&T Labs, I've had half an ear out for funky CS raps. Enter MC Plus+:
MC Plus+ is the founder and indisputably the #1 greatest computer science gangsta rapper ever. He is to CS gangsta rap what a blue screen is to Windows, what Vaseline, maple syrup and sour cream are to a good time. He's putting CS on the map, producing raps for students from computer science departments around the world. With another 4 years still at Purdue, trying to get that PhD, he has promised to keep producing CS hip-hop for all the grad students in the struggle.
MC Plus+'s latest offering: Alice and Bob, a paean to cryptography...

(HT: BB)

