Monday, June 30, 2008

An open letter to journals not mirrored by ACM and DBLP

Dear journal,
If it isn't too much of a problem, could you possibly attach a BibTeX link to each article that you list ? For the sake of my cruelly abused fingers ? Could you ? PLEASE ?

NSF Update, continued: limits on submissions

I had heard a rumor about this a few months ago, and now it's confirmed: there will be a CISE-wide limit on the number of proposals submitted by a PI (or co-PI/senior personnel). Here's the relevant text (from the IIS call, but it's part of the coordinated solicitation):

In any contiguous August through December period, an individual may participate as PI, Co-PI or Senior Personnel in no more than two proposals submitted in response to the coordinated solicitation (where coordinated solicitation is defined to include the Information and Intelligent Systems (IIS): Core Programs, the Computer and Network Systems (CNS): Core Programs and the Computing and Communication Foundations (CCF): Core Programs solicitations). For example, between August 2009 and December 2009, an individual may participate as PI, co-PI or Senior Personnel in one proposal submitted to a core program in CCF and in a second proposal submitted to a core program in CNS, or an individual may participate as PI, co-PI or Senior Personnel in two proposals submitted to an IIS core program, etc.
The key difference is this: in the past, each separate solicitation (whether TF, IIS, or whatever) had its own limit on the number of proposals; now the cap is summed over the entire coordinated solicitation. I expect there to be major rumbling about this.

One saving grace is that this only applies to the coordinated solicitation: other cross-discipline programs have their own separate caps.

Saturday, June 28, 2008

The social web makes me dizzy.

  1. I have a web page
  2. I have a blog
  3. I have started using twitter.
  4. As it so happens, I have a facebook account
  5. and I often share interesting items via Google Reader
So far so good. But here's where the fun starts.
  1. The shared items from GR appear on my blog
  2. Shared items, and blog posts, appear on my web page via lifestream
  3. facebook updates appear in GR via an RSS feed
  4. blog posts, twitter updates, facebook updates, and GR updates appear on friendfeed
  5. I can read friendfeed via the web, or via twhirl, (and also twitter)
  6. I can post comments to friendfeed posts via twhirl
  7. Someone can post a comment to a blog post seen on friendfeed and the update appears on twhirl. I can reply to this comment and it appears on their friendfeed
It seems that the whole point of the social web is to keep us updating so much that we end up not doing anything worth announcing !

NSF Update

(ed: please ignore if you don't get (or plan to get) funding from the NSF)

A missive from Jeannette Wing sent out today tells of consolidation in the standard solicitations for CCF/CNS/IIS. To quote:
Each of CISE's three divisions will have one solicitation that covers its core activities normally covered in the past by many solicitations. The Coordinated Solicitation will be the simultaneous release of these three solicitations, describing all core research programs within each division.

...

The Cross-Directorate Solicitation will describe cross-cutting research programs, those with interests that cut across the entire CISE Directorate and are managed entirely within CISE. (Excluded are cross-cutting programs joint between CISE and other entities; see Further Notes.) For FY09-FY10, they will be:

- Data-Intensive Computing
- Network Science and Engineering
- Trustworthy Computing
Main bit of note: the "small" solicitations (i.e. for amounts less than $500K) will be due in December. This in all likelihood includes the new Algorithmic Foundations solicitation. This is earlier than the Feb-March deadlines we had the last two years.

Tuesday, June 24, 2008

The F-1 process

I've been out of grad school long enough that I should know better, but PhD Comics still rings true in a way few comics can (xkcd is another). Of course, now I will find myself identifying more with the bearded old professor than the students, but still...

Today's strip sums up the foreign student visa experience perfectly. What they could also have added was a panel where Tajel explains for the umpteenth bazillionth time to a clueless local, "Yes, we need a visa to go to X".

Monday, June 23, 2008

FOCS 08 list ?

The FOCS 2008 page has a link for accepted papers, but it's empty for now. We know at least 4 of the papers that got in, but does anyone know of a list ?

Update: that was quick! here it is.

Tuesday, June 17, 2008

SoCG 2010: The countdown begins

Ok, so it's a fairly long countdown :).

As mentioned earlier, Valerio Pascucci and I will be co-organizing SoCG 2010 somewhere in the Greater Salt Lake Area, including but not limited to the Great Salt Lake itself, the largest man-made excavation on earth, the best powder on earth, and the home of the (not-quite) fastest growing religion on earth.

I have never done this before, and have been consulting previous organizers for advice, things to do, things not to do, and so on. So I thought it would be interesting to blog about the conference organizing process in real-time, so to speak, with the not-so-humble hope that it might help future organizers as well. Don't expect anything juicy, or controversial ! For that you'll have to catch me in an off-the-record moment. As usual, I'll have a tag devoted solely to posts about the conference if you want to track only these posts.

I should also note that although I might carelessly lapse into the first person singular when talking about the organizing, 'I' should be read as short-hand for 'Valerio and myself', especially if something good happens :). Any bumbling should be assumed to be credited to me alone.

Moving on...

Conferences are like weddings, I am realizing, in that the key things to nail down are the venue, venue, and venue (followed by food, and entertainment). Based on a straw poll of people at the conference, it seemed that hosting SoCG on campus and hosting it at Snowbird had roughly equal support (a downtown hotel was less popular). So the first order of business is to request bids from both Snowbird and the campus conference services. Once we get these, we can start taking a closer look at the numbers that show up.

Scheduling point: the 2010 World Cup will start around the same time as the conference, as pointed out by Marc van Kreveld. Given the size of our European contingent, this is something we definitely need to worry about. On the other hand, maybe we should have country-themed sessions, with national colors and all :)

Saturday, June 14, 2008

A 'Nixon going to China' moment

Via Mathematics under the Microscope, the IMU-commissioned report on the validity of citation statistics as a means of measuring quality. It seems appropriate that this sentence was written (and methinks, could only be written) by mathematicians:
"Numbers are not inherently superior to sound judgments"

Thursday, June 12, 2008

SoCG 2008: Invited Talks

A Discrete Laplace-Beltrami Operator

Alexander Bobenko gave the first invited talk at the conference. As with Mathieu Desbrun's talk from a few years ago, this talk was on the area of 'discrete differential geometry', particularly focusing on a discrete Laplace-Beltrami operator, and how to compute Delaunay triangulations on polyhedral surfaces.

The general principle behind efforts in discrete differential geometry is quite apropos for a CS audience. Rather than doing a 'numerical discretization' of differential geometry to enable computations, the idea is to discretize the entire theory, so that the continuous theory emerges as the limit of the discrete theory.

This goes against the flow of much of what we often see in discrete computer science, where the idea is to relax to an underlying continuous domain in order to learn things about discrete structures. It also makes a lot of sense, because if successful, it gives a 'native' version of differential geometry that's naturally robust without having to worry about the level of discretization etc.

The slides are quite readable on their own, and I'd recommend that anyone who's interested take a look. In summary the idea is this. Given a function on a manifold, you can write down an energy operator (called the Dirichlet energy) such that the gradient of this energy is the Laplace-Beltrami operator on the manifold.

If you do this for piecewise linear functions defined on a simplicial surface in 3-space, then you get an expression for the Laplacian: however, this expression is not intrinsic (it depends on the embedding of the surface in 3-space). In the world of differential geometry, being intrinsic is the whole point, so this is not a happy state of affairs.
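
For concreteness (this is the standard cotangent formula from the discrete differential geometry literature, quoted from memory rather than from the slides, and the constants vary by convention): for a piecewise linear function $f$ on a triangulated surface, the Dirichlet energy and its gradient take the form
$$E(f) = \frac{1}{4} \sum_{\text{edges } ij} (\cot \alpha_{ij} + \cot \beta_{ij}) (f_i - f_j)^2, \qquad (\Delta f)_i = \frac{1}{2} \sum_{j} (\cot \alpha_{ij} + \cot \beta_{ij}) (f_j - f_i),$$
where $\alpha_{ij}$ and $\beta_{ij}$ are the angles opposite the edge $ij$ in the two triangles sharing it. These weights depend on which triangulation of the surface you happen to be using, and removing that dependence is exactly what the intrinsic Delaunay construction described next buys you.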

It turns out that if you triangulate the surface using an 'intrinsic Delaunay triangulation' (more on that in a second), and then define the piecewise linear interpolation of a function on the vertices in the usual manner, then the resulting Dirichlet energy is minimized (over the possible triangulations), and the Laplacian you get *is* intrinsic. An intrinsic Delaunay triangulation is constructed much like a regular Delaunay triangulation, with the caveat that you might have to use geodesics on the surface as "edges" of the triangles, and some triangles might end up being degenerate (so it's not a "normal" triangulation).

But in any case, doing this allows you to work with a well-defined Laplace operator on a discrete structure, and the second part of the talk showed how you can construct discrete minimal surfaces using such an operator.

Geometry is Everywhere (part XLVII)

Ken Clarkson gave the second invited talk, which was a romp through the world of metric spaces and the various notions of dimensions used to describe them. Ken doesn't yet have talk slides online, but somewhere in the union of this, this and this is much of the talk material. The main premise was to weave together different notions of dimension for a metric space, and how they can be used to do geometry in the abstract. If I'm not giving more details, it's because the talk covered so much ground that it's hard to know where even to start. It was wildly entertaining though, and there was a general clamor afterwards for Ken to write a book on this topic (which I hope he does).
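
I won't try to reconstruct the talk, but to give a flavor of the definitions involved, here's one standard notion that is very likely part of that mix: a metric space $(X, d)$ has doubling dimension at most $k$ if every ball of radius $2r$ can be covered by at most $2^k$ balls of radius $r$. For Euclidean space $\mathbb{R}^m$ this quantity is $\Theta(m)$, which is what justifies calling it a dimension, and a recurring theme is that algorithms and embedding results can be made to degrade gracefully with this quantity rather than with the ambient dimension.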

And the "47" ? A cryptic shoutout to Pomona College alumni: if you're one, you probably understand the reference.

Tuesday, June 10, 2008

SODA 2009 deadlines: Jun 26/Jul 3

Via CC: SODA 2009 server is up, and the deadline is Jul 3. New things this time:
  • Pre-submission of abstracts (like SoCG) on June 26
  • Papers must have "complete proofs": any details that can't fit can go to the appendix.
I foresee LONG appendices this time :)

Monday, June 09, 2008

SoCG 2008: Business meeting

Bullets from the business meeting:
  • 140 attendees: higher than the abysmally low 111 from last year, but not as good as the 180 reached when we had SoCG in Brooklyn (with the boat ride and the open bar. aahhh...)
  • 42/130 papers accepted. Papers whose abstracts were not submitted by the earlier abstract deadline were rejected immediately.
  • John Hershberger is the 2009 SoCG PC Chair. Wants papers on the themes of coffee, microbrews, and rain. Good thing I just finished my paper on "Maximizing coffee production while minimizing fermentation during the rainy season". This paper will be submitted from an ISP somewhere in Europe with 3 random co-authors added on, as per Monique's instructions for maximizing paper acceptance.
  • SoCG 2010 will be in... TA DA.... Salt Lake City !! Yes, yours truly and Valerio Pascucci (newly arriving in Utah) will be organizing the 2010 sausage festival in the land of fry sauce and wonderbread. Mark your calendars NOW !!
And with that, we broke for dinner.

SoCG 2008: Day 1

A good first day for the conference: we were inside mostly, so the murderous heat didn't affect us in any real way. The conference is hosted in the business school, which is really nice and plush, like all business schools are. Of course, you have to get used to seeing scrolling stock tickers.

The most entertaining talk by far was Jeff Erickson's talk on the homotopic Frechet distance. This is a variant of the Frechet distance where you have obstacles in the space the leash is moving in, and the leash has to move continuously around obstacles (in other words, two adjacent leash positions should be homotopic). They have a polynomial time algorithm in which the main nontrivial step is showing that the set of homotopy classes of paths that can yield optimal paths is bounded.

The Frechet distance is called a man-dog distance because you can think of it as the shortest leash length needed when a man walks on one curve and a dog walks on the other. During the talk, Jeff used Marty and Erik Demaine to demonstrate the algorithm, carefully omitting to specify who was man and who was dog. For obstacles, he used two graduate students on the paper: the deeper significance of "grad student = obstacle" I will leave for the reader to ponder.
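
For reference (this is the standard textbook definition, not anything specific to the talk), the classical Frechet distance between curves $P, Q : [0,1] \to \mathbb{R}^2$ is
$$d_F(P, Q) = \inf_{\alpha, \beta} \ \max_{t \in [0,1]} \| P(\alpha(t)) - Q(\beta(t)) \|,$$
where $\alpha$ and $\beta$ range over continuous nondecreasing surjections of $[0,1]$ onto itself: the man and the dog may pause or speed up, but neither may backtrack. The homotopic variant keeps this definition and adds the requirement that consecutive leash positions be homotopic in the domain punctured by the obstacles.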

In the category of "Talk that clearly was something interesting, if only I was smart enough to understand it", was Manor Mendel's talk on p-convexity. The deeper message was very intriguing: namely, that obstructions to embedding trees with small distortion in normed spaces take the form of a kind of strong convexity of the norm.

Many other talks were quite interesting: Timothy Chan's dynamic coresets talk, Tasos Sidiropoulos's talk on "circular embeddings", a way of generalizing the well-known Treemap visualization tool so that the regions are no longer necessarily long and skinny, and Ken Clarkson's talk on Johnson-Lindenstrauss projections for manifold distances.

It was a good day.

p.s. Twittering was an interesting experience. I'm not sure what to make of it. It provides more instant gratification to the (few) people following along, and it didn't distract me too much during the talks (I can't say what it did to my neighbours, but I didn't get any dirty looks, so...)

It's definitely different from blogging, which is more long form (yes, it's 2008: a blog post is now officially "long form"). I'll do some more of it tomorrow, and see how it goes.

SoCG 2008

I'm here at SoCG 2008 in College Park, Maryland. It's a sweltering 100 F and likely to get worse tomorrow: I think I'll attend lots of talks :)

I might also try a little experiment. During the conference, I'll post quick updates on twitter, while leaving this blog for longer-length posts. If you don't know what twitter is, it's like a microblogging environment: every post ("tweet") has to be 140 characters or less.

If you have a twitter account, you can follow me. If not, you can track the RSS feed, but most feed readers don't update these feeds that often, so you're likely to get bursts of posts.

p.s. twitter tends to croak under heavy load, and the Apple developers' conference is on Monday, so hopefully the tweets will get out.

Sunday, June 08, 2008

A seminar on randomization

I'll be running a graduate seminar on randomization next semester. The goal is to cover, in fourteen 80-minute student presentations, a range of topics all centered around our ability to toss coins.
Some of the constraints:
  • Student background is good, but not extensive (i.e. they are familiar with Chernoff bounds, but not much else; not much complexity background, and Kleinberg-Tardos-level familiarity with algorithms)
  • Breadth is important: it's important to me that students become familiar with a wide range of topics rather than drilling deeply into any one of them (you could run an entire semester-long course on concentration of measure alone). This goes back to the previous point: at this stage, I'd like students to get as wide an exposure as feasible, in the hope that it will make deeper material more accessible to them later on.
  • Source material should be accessible and presentable (by students) in 80-minute chunks.
Here's a list of topics: I'd love to hear what people think, and get suggestions on changes/additions/deletions. Pointers to source material will also be helpful.
  • A randomized algorithm either has an uncertain running time, or a likelihood of error. How do we analyze such algorithms ? This will lead directly to an investigation of tail bounds on distributions, martingales, and the method of bounded differences (see the small simulation sketch after this list). We will also look at the famous minimax principle for proving lower bounds on the complexity of randomized algorithms.
  • We'll take a tour of the many places where randomization leads to more efficient (and more elegant) algorithms (hashing, approximate counting, randomized incremental constructions, randomized rounding)
  • We'll get acquainted with BPP (the randomized analogue of P) and other randomized complexity classes, and see how randomness and hardness are connected. We'll also look at interactive proofs.
  • Where does randomness come from ? If we have a deterministic procedure to generate random bits, how is it random anymore ? We'll talk about pseudorandomness (the art of making more from less).
  • Since it's tricky to make random bits, we'd like to minimize the effort involved. How can we 'derandomize' algorithms ?
  • (if time permits) Quantum computing would appear to be "probability with complex numbers". What makes it similar to (and different from) probabilistic computation ?
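To make the first topic above concrete, here's a small sketch (a quick-and-dirty Python simulation, not seminar material; the exponent is the standard Hoeffding form for fair coins) comparing the empirical tail of n coin flips with the bound $\Pr[X \ge (1/2 + \epsilon) n] \le e^{-2 \epsilon^2 n}$:

    import math
    import random

    def empirical_tail(n, eps, trials=20000):
        # fraction of trials in which n fair coin flips give at least (1/2 + eps) * n heads
        threshold = (0.5 + eps) * n
        hits = sum(1 for _ in range(trials)
                   if sum(random.random() < 0.5 for _ in range(n)) >= threshold)
        return hits / trials

    n, eps = 200, 0.1
    print("empirical tail:", empirical_tail(n, eps))
    print("Chernoff/Hoeffding bound:", math.exp(-2 * eps * eps * n))

Even at these modest sizes the empirical tail sits comfortably below the bound, which is the kind of calibration I'd want the presentations to convey.
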
Saturday, June 07, 2008

They're coming out of the woodwork !!

Well, maybe not, but it's getting harder and harder to keep track of all the TCS blogs out there. I just discovered that James Lee has a "trial blog" (a tlog ?) with some very interesting posts. Doing the P vs NC posts has made me jealous of all the wordpress.com folks and their nifty LaTeX embedded in the posts: I think I'll switch over at some point myself.

Friday, June 06, 2008

A perspective on solitude

From Smilla's Sense Of Snow:
Cantor illustrated the concept of infinity for his students by telling them that there was once a man who had a hotel with an infinite number of rooms, and the hotel was fully occupied. Then one more guest arrived. So the owner moved the guest in room number 1 into room number 2; the guest in room number 2 into number 3; the guest in 3 into room 4, and so on. In that way room number 1 became vacant for the new guest.

What delights me about this story is that everyone involved, the guests and the owner, accept it as perfectly natural to carry out an infinite number of operations so that one guest can have peace and quiet in a room of his own. That is a great tribute to solitude.
And here's a more prosaic, but entertaining, take on Sam Cantori and his hotel.

Update: as commenter hfv points out, I (and Peter Høeg) got the attribution wrong. It's really Hilbert's story.

Thursday, June 05, 2008

P vs NC V: Linear PRAMs II

This is where we left off last time:
If a set of inputs parametrized by d parameters is accepted in the linear PRAM model with p processors in t time, then there is a way of partitioning $\mathbb{R}^d$ using hyperplanes, so that each cell of the resulting arrangement can be labelled 0 or 1, and an input z is accepted iff it lies in a cell labelled with a 1.
The above lemma describes the geometric structure induced by an efficient computation on a linear PRAM. Now we'll see how a problem of high parametric complexity breaks this structure.
We will do this by demonstrating that such a problem has too many accepting and non-accepting inputs close to each other to allow for such a cell labelling with so few hyperplanes.

1. Setup

We start with some parametrized problem (with parameter $\lambda$) with cardinality n and bitsize $\beta(n)$. Remember that the bitsize refers to the maximum complexity of the coefficients of expressions involving $\lambda$. Let the parametric complexity of this problem (as we defined earlier) be $\phi(n, \beta(n))$. For technical reasons, we will require that the problem under consideration be homogeneous (scaling values by a constant multiplies OPT by the same value); this does not change anything, since this is true for problems like min-cost flow and max-flow.

Each numeric parameter in the input can be expressed as some $u\lambda + v$, and without loss of generality we will assume that $u$ and $v$ are integers. We will actually need all parameters to be integral, so thinking of $\lambda$ as the rational $x/z$, we can rewrite all such parameters in the form $ux + vz$ (i.e., we scale the whole instance by $z$), yielding a new instance with the same combinatorial structure (again, homogeneity makes things easy).

We now have a two-parameter system $(x, z)$ that describes an input to the problem. A third parameter $y$ will describe the threshold for this (optimization) problem. Specifically, the goal will be to decide whether the value obtained is at least $y$. Let $a$ be some "large enough constant", and consider all inputs where the threshold $y$ has bitsize at most $a\beta(n)$ and each parameter generated from $x$ and $z$ has bitsize at most $a\beta(n)$.

This is a three-parameter system (d=3). We can therefore translate the structural result at the top of the post into a concrete statement regarding all problems parametrized in this manner.
Let $c$ be some constant. Set the desired time bound to $t = \sqrt{\log \phi(n, \beta(n))}/c$, and let the number of processors be at most $2^{t}$. Then a linear PRAM algorithm that works on this parametrized input within the specified resource bounds induces a partition of $\mathbb{R}^3$ by at most $2^{t^5}$ planes, where each face can be labelled as specified above.

There's a more general version of this theorem that makes the tradeoff between time and processors more explicit: I'll skip that for now. Also note that the "5" in the exponent is somewhat arbitrary: it's just chosen to make things work unambiguously.

2. An affine transformation

We'll now encounter an ingenious geometric representation of the set of feasible inputs for the parametric problem. Let $B(\lambda)$ be the optimal function value as a function of the parameter $\lambda$, and let $G$ be its function graph. An input $I$ to the problem is parametrized by the three parameters $(x, y, z)$, and is feasible when the optimum of the scaled instance is at least the threshold $y$, which can be restated (remembering that $\lambda = x/z$) as the condition $y/z \le B(x/z)$ (dividing through by $z$ and using homogeneity; we will also assume wlog that $z$ is positive).

At this point, a geometer will look at expressions like $x/z$ and $y/z$, and immediately see a projective plane. And that's what happens ! We can think of the space of possible parameter values $(x, y, z)$ as inhabiting a projective space, with $z$ playing the role of the z-direction (in which case the action, as it were, happens on the z=1 plane).
The parameters $(x, y, z)$ define a ray in 3D that intersects the z=1 plane in the point $(x/z, y/z, 1)$ (you should look at Figure 4.1 in the paper to help visualize what's going on).

What then happens to the function graph G ? We can think of G as lying in the affine plane z=1 (parametrized by $(x, y)$), and fan(G) as the set of all points in $\mathbb{R}^3$ that project onto $G$ (via the projective transform). We can now translate the feasibility condition into the geometric condition that the point $(x, y, z)$ lies "below" fan(G) along the $y$-axis.
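
Spelled out in symbols (with the caveat that the particular letters are my rendering of the construction, so read this as a sketch): writing $I_{x,z}$ for the scaled instance and $y$ for the threshold,
$$\mathrm{OPT}(I_{x,z}) \ge y \iff z \cdot B(x/z) \ge y \iff y/z \le B(x/z),$$
that is, the point $(x/z, y/z, 1)$ lies on or below $G$ in the plane z=1, which is exactly the condition that $(x, y, z)$ lies below fan(G) along the $y$-axis.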

It helps to think of $B$ as an ordinary function in the affine plane, in which case the space of feasible values is the set of all points that project to points in the plane below its graph. In other words, all integer points in $\mathbb{R}^3$ are labelled 1 if they lie below fan(G) along the $y$-axis.

In the next installment, we'll show that any arrangement of few hyperplanes in this space will not be able to capture all the 0 and 1 points correctly.

Wednesday, June 04, 2008

Name of a kind of weight function on a lattice ?

Suppose we have a lattice with elements min, max, and the usual join and meet operations. I define a function w() that maps elements of the lattice to reals, such that
  • w(min) = 0
  • w(max) = 1
  • If x > y, then w(x) > w(y)
  • Let z = join(x,y). Then w(x) + w(y) >= w(z)
Note that the last condition is slightly weaker than the usual submodularity condition
w(x) + w(y) >= w(z) + w(u) (where u = meet(x,y)).
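
As a sanity check on the definition (an illustration only, not an answer to the question): on the Boolean lattice of subsets of an n-element set, the normalized rank w(S) = |S|/n satisfies all four conditions, as the brute-force Python snippet below verifies. It actually satisfies the stronger submodular inequality as well (with equality, in fact), so a genuinely separating example would have to violate submodularity while keeping the join condition.

    from itertools import combinations

    # Boolean lattice on a small ground set: join = union, meet = intersection,
    # min = the empty set, max = the full ground set.
    n = 4
    ground = frozenset(range(n))
    lattice = [frozenset(c) for r in range(n + 1) for c in combinations(ground, r)]

    def w(s):
        # candidate weight: normalized cardinality
        return len(s) / n

    assert w(frozenset()) == 0 and w(ground) == 1
    for x in lattice:
        for y in lattice:
            if y < x:                       # strict containment, i.e. y is strictly below x
                assert w(x) > w(y)
            assert w(x) + w(y) >= w(x | y)  # subadditivity over the join
    print("all four conditions hold on the subset lattice with", len(lattice), "elements")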

My question is: is there a well known name for such a weight function ? I've been trying to search various sources, and the closest I came to this was the notion of a 'rank' function for a lattice, which doesn't quite capture this sort-of-metric property.

Monday, June 02, 2008

The pitfalls of being watched...

Noah Snyder over at Secret Blogging Seminar points out one of the unreported pitfalls of blogging: your colleagues know what you're doing ! More importantly, they know that you're not doing the work you're supposed to be doing with them !

This is particularly bad when someone's hounding me for a review, or a revision, or something else...
