## Monday, June 28, 2010

### TheoryCS discussion site: A call for help.

Many of you are familiar with Mathoverflow.net (MO), the discussion site where (mostly) professional mathematicians come to discuss research problems, get questions answered by experts in the area, and have very enlightening discussions on a host of topics in mathematics.

The conversation is at a very high level. The moderators (and the community at large) do an excellent job of pruning out the posters fishing for homework answers, the vague questions with answers that any google search could return, and off topic questions that are better served by a more amateur-centric forum. What remains are questions that are at least at the advanced graduate student level, and that frequently spur open-ended discussion among the advanced researchers in the area.

It's a dream site: it has much less clutter than sci.math (or comp.theory for that matter), and because of modern tagging and filtering technology, is a lot easier to navigate than the old Usenet.

While theoryCS questions are welcome at MO, they sometimes get neglected for a lack of critical mass in the audience. Many in the theoryCS community participate in MO, but it would really be ideal to have our own discussion site that's structured along the same lines with the same intent:
If you're a student, this could be an invaluable resource to learn about different aspects of TCS, and get to know researchers in the area. If you're an active researcher, this is a great place to post queries about topics that you might not be familiar with but need in your own work.

We're in the process of creating such a site, courtesy of Anand Kulkarni (aka polybot). Stack Exchange is the software platform (and the organization) that maintains discussion sites like this, and it has a process to create, discuss and vet proposed sites to ensure that they have enough critical mass to sustain themselves.

The theoretical computer science site is here. The way it works is as follows:
1. We start with a 'define' phase with prototypical questions that are right and not-right for the site. This has been completed, with a sufficient number of questions that people have voted for.
2. We move on to the 'commit' phase, where we are now. Here, we need commitments from people (with their actual names attached) that they will participate in the site once it goes into beta testing. We have a number of commitments so far, but we need many more in order to move to phase 3, which is
3. Beta testing, where the site goes active and we see if we can sustain it with questions and discussions for a while. If so,
4. The site gets created.
This is a great opportunity to create a site that can truly be our own, and that would be a go-to location for all aspects of theoretical computer science, whether it be algorithms, complexity theory, quantum computing, game theory, programming language theory, logic, or whatever. Tagging and filtering means you don't have to get swamped in a sea of posts on topics you don't care about, and if you've used MO, you know how easy it is to have the site only display topics that are relevant to you.

What should you do ? Go to this site, and commit to participating in the beta. Committing costs nothing - it's literally one click after you authenticate. Then all you have to do is spread the word so we get enough people to move to the beta phase.

If you're skeptical about this (or maybe have wasted too much time on comp.theory in years gone by knocking down P vs NP cranks), do go to MO and see what a theoryCS site could become.

And if you're a theoryCS blogger with your own following, please spread the word ! You all know who you are :)

P.S. There's some confusion about exactly how many people are needed to get to the next phase. The answer is that it's not based on numbers, but on the reputation of the people committing (as measured by their activity on related sites - but not MO, sadly). Most folks in our community are unlikely to have a large reputation on the related (mostly programming) websites, so we'll need a good number of people (probably in the low 100s) to get there. Hence this appeal. (Note that if you commit and then use the 'share' link to get someone else to sign on, your "reputation" increases, and that improves the overall progress of the site in a cascading effect.)

### Metrics via pullbacks

One of the things I end up having to do a lot of is ponder distance metrics. Not the nicely behaved norm-induced ones, but more bizarre entities: metrics on shapes, metrics on lattices, metrics on distributions, and the like.

There are many constructions for building metric structures on spaces for which it's not clear how to do so. One of the neatest methods is via pullbacks, exploiting the algebraic and continuous duals for vector spaces.

The basic idea is as follows: You want to build a metric on some (usually ill-formed) vector space V. Fortunately for you, the space V* of linear functionals over V is better behaved. Even better, you can define a norm on V*. This allows you to do a little magic.

Define a function ||·|| on V by ||v|| = sup f(v), taken over all f in V* with ||f|| <= 1. This is of course the "dual" norm, and it can be shown that it indeed satisfies the properties of a norm. Once you have a norm, you get a metric for free: d(x,y) = ||x - y||. Voila !

These types of constructions are particularly useful when dealing with distributions (the Schwartz kind) and their geometric generalizations, the currents (which are a measure-theoretic way of defining surfaces). Distributions can be nasty - you can only interact with them through their action on test functions (the space of smooth functions with compact support). But this construction allows you to put nice metric structures on them.

Some examples of metrics arising in this manner:

• The l_1 distance between probability measures (or the total variation distance)
• The earthmover distance between probability measures (this is quite nifty)
• The current distance (between measures, or between currents).
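To make the dual-norm construction concrete, here's a minimal Python sketch for the first example. On a finite probability space, the sup over functionals with sup-norm at most 1 is attained at a sign vector, so the dual norm can be computed by brute force and checked against the direct l_1 formula; the factor of 1/2 is just the usual total variation convention.

```python
from itertools import product

def tv_direct(p, q):
    """Total variation distance: half the l_1 norm of p - q."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def tv_dual(p, q):
    """The same distance via the pullback/dual-norm construction:
    sup of f(p - q) over functionals f with ||f||_inf <= 1.
    On a finite space the sup is attained at a sign vector."""
    diff = [pi - qi for pi, qi in zip(p, q)]
    return 0.5 * max(sum(fi * di for fi, di in zip(f, diff))
                     for f in product([-1.0, 1.0], repeat=len(diff)))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
assert abs(tv_direct(p, q) - tv_dual(p, q)) < 1e-12
```

The earthmover distance arises the same way, except that the sup ranges over 1-Lipschitz functions rather than bounded ones (this is the Kantorovich-Rubinstein duality).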

## Sunday, June 27, 2010

### And for some Sunday entertainment

(dare I say XKCD-style) Flowcharts for the life of the tenured and untenured professor. A collaborative School of Computing effort among my colleagues John Regehr, Matthew Might and myself (who says professors can't collaborate inside a department!).

Incidentally, we also make up the vast majority of our department's blogging presence.

## Friday, June 25, 2010

Jon Katz and Matt Welsh have both written recently about the problems of dealing with crap papers, mainly the pain of having to review them. In an unrelated event, I got into a discussion with some colleagues about the problems of "crap research", and ended up formulating a theory: viz.,
bad research is like spam email
in the sense that

1. There seems to be no way to stop it, and many economic incentives to continue it
2. You can't stop it by diktat
3. The only way to deal with it appears to be effective filtering. Like spam, bad research has less of an effect when it gains no attention.
There are other similarities:
1. We can block spam by filtering certain domains. We also tend to ignore certain kinds of conferences.
2. We can block spam by blocking certain email addresses. We also might ignore certain researchers, or at least downweight their work after a series of bad experiences.
3. More explicit spam blocking policies create a false-negative problem. False negatives are also a big problem in research.
But this analogy also suggests that we shouldn't be designing strategies to eliminate bad research. We should be designing better methods to focus attention on good research, via better filtering and highlighting mechanisms (social, authoritative or otherwise).

Personally, I think bad research is less of a problem than lots of cargo-cult research, where it looks a lot like good research is being done, but when you look closely, you realize that nothing of value has been added to the universe. Sadly, a lot of funded research is also like this.

PS: False negatives are a highly underrated problem in research. I think we vastly overestimate our ability to judge what kinds of research will have lasting value and what potential impact a paper can have. So it's important to err on the side of being more generous to papers, rather than less.

## Wednesday, June 23, 2010

### The Future of STOC

Lance has now created a blog, 'The future of STOC', where he's been posting responses from people who were asked to comment on the original proposal for modifying STOC. The responses are almost unanimously negative and make for interesting reading.

My response is linked there as a PDF: I thought I'd place the main text here, just to solicit some comments. After writing this, I've done some digging for data (which will hopefully appear in a post shortly) that brings up an interesting angle to the 'accept more papers' discussion.
My response: This is a terrible way of solving the problem, because...
There are better solutions!

It is puzzling to me that we're jumping straight to an alien (to us) conference model, when there are proven hybrid conference models that exist within the larger (conference-driven) computer science community. ICALP (theory), SIGMOD, VLDB and ICDE (the top three database conferences), ICML and NIPS (the top machine learning conferences), KDD and SDM (the main data mining conferences), SOSP (the main OS conference), and (to a degree) SIGCOMM and INFOCOM (networking) all have the following model:
• A main "research track", with peer-reviewed papers. Optionally, an "industrial" track for more applied work.
• A set of workshops, for which topics are solicited via a call for proposals.
• A set of tutorials, for which again topics are solicited via 5-page abstracts and reviewed.
• Panel discussions and demos (optionally)
The conference itself is a 5-day event, with research and industrial tracks, panels, and tutorials, all blended together (workshops either bracket the conference, or are interleaved). I've been to many of the conferences listed above, and I can assert that I've met many people not originally part of the community who attend because of these satellite events.

Other variants include limiting the number of actual talks, while still including all accepted papers in the proceedings  (the remainder of the papers are presented as posters). This opens up more time during the day for satellite events as well.

Note: I've begun noticing more tutorial events at STOC (STOC 2010 has a full day of these). This definitely is progress, although I believe that workshops draw more attendance. I also think it's important to solicit these from the community, rather than having the PC decide topics. Doing so both increases participation and increases the sense that the community is part of the whole conference.

Math meetings are fractured

I've never attended an AMS meeting myself, but from all accounts (and I have heard MANY), they are highly fractured. The meetings draw attendees from the spectrum of mathematical areas, but they all essentially group together in tiny miniconferences - I've heard numerous stories of multiple rooms in which the only people listening to a talk are the five other speakers and a few graduate students. This is not the way I want to see our flagship theory conference evolve.

People won't come

For better or for worse, we're used to the 'publish-attend-present' model of CS conferences, rather than the 'meet-greet-discuss' model of math/science conferences. I suspect that many people will not come to a STOC in which there are no proceedings and no real publication attached to the presentation (at least none that can go on a CV). There's nothing wrong with the model per se, but it's not something our community is comfortable with, and since other theory conferences won't go this route, I don't see how we'll ever get comfortable with it.

Bottom Line

I applaud the idea of shaking things up in regard to the format for STOC. I just feel very strongly that we should learn from other models that exist within the computer science community, rather than making a radical shift towards a model that's part of a very different framework for how the publishing/dissemination process works. I am unconvinced that the proposed model would solve the problem of attendance, and it has a good chance of making STOC entirely irrelevant.

### Social theory...

as in, doing theory socially. Three points of interest:
1. Luca Trevisan is soliciting ideas for building a schedule/recommendation engine for FOCS using collaborative filtering. He's on a short time frame, so you need to know what you're doing, but I dare say there's an excellent ALENEX submission waiting for anyone who has the time to work on this.
2. Anand Kulkarni is proposing a theory overflow site, much like Math Overflow (which many of you inhabit already). I've been relatively happy with MO, and they're quite friendly towards algorithms folks (although sometimes a little confused about the difference between theoryCS and programming). But I do often tire of wading through pages and pages of unrelated questions to get to interesting ones.

I don't know if there's enough global support for theory overflow, but I do know that MO has been a fantastic resource for research-level mathematics, and with enough participation, theory overflow could get there too. If you don't know what I'm talking about, go to mathoverflow.net. If you think that it's a waste of time, I'll mention that among the ACTIVE participants there are Terry Tao, Timothy Gowers, Richard Stanley and Bill Johnson (as in Johnson and Lindenstrauss).
3. Mark Reid has built a discussion site for ICML 2010 (ICML has been doing this for a few years now). Each paper at the conference gets a page, and anyone can post comments on the page. Authors can opt to get email whenever someone posts a comment, and can in this way interact with discussants. I wonder if something like this might soon become a de facto part of all conferences.

## Tuesday, June 22, 2010

### Case Study in Large Theory Conferences: ICALP

Luca Aceto had posted a comment on my last post about STOC, describing his experiences running ICALP as a large theory conference. I thought it was fascinating, and requested that he repost a longer version on his blog. That post is here, and I strongly encourage anyone who's been involved in this discussion to go and read it... now....

I wanted to highlight a few snippets from it that I feel reinforce a number of points that I've been arguing.
> ICALP is a three-track conference, and has been so since 2005, though typically only two tracks are running in parallel at each point in time. At ICALP 2008, in addition we had 12 satellite events, including the DYNAMO training school for doctoral students
Note that ICALP had an attendance range of about 500 - where we'd like STOC to be. It fits in with the pattern I was describing: more satellite events appears to correlate with larger attendance.

As an aside, ICALP had Peter Winkler do a masterclass on math puzzles. Frankly, if we could just hire Peter Winkler and Persi Diaconis to do lectures at every STOC, our numbers would go into the stratosphere ;)
> The workshops were held the day before ICALP or during the week-end following it. They were selected by the ICALP organizers amongst a fairly large number of proposals that we received in response to a call for workshops, based on their perceived scientific quality and on their potential interest to the ICALP community.
I've said this before, but I do think that if we go the route of having workshops/tutorials, the best way to do it is how other conferences do it: have a workshops chair solicit proposals, and decide from amongst them. The workshop organizers then take care of the work of getting speakers. It will ease the burden a lot.
> I firmly believe that the task of organizing a conference like ICALP should be shared amongst several people. This certainly worked for us and helped us work more cheerfully, and overcome personal problems, mishaps and periods of crisis and panic that arose during the year before the conference took place
Very true. Again, most conferences that have lots of activities have a large organizing group, with proceedings chairs, arrangements chairs, workshop chairs, tutorial chairs, and so on. Apart from the fact that people's CVs get nicely bloated with service activities, more participation at the top level can actually help with overall attendance, as well as alleviating many of Michael's concerns (although technically he was more concerned with colocation, which is an idea I like, but does take more logistical coordination).

## Monday, June 21, 2010

### On acceptance rates and flagship conferences

There's been a lot of back and forth on ways of increasing attendance at STOC, and in our wonderful theory way, all of this has happened in a universe unencumbered by the presentation of actual data.

I thought I'd dig up statistics on what exactly goes on at a number of major conferences in different areas in computer science. My idea was to take some of the major areas in the field, identify their flagship conference (or one of two as the case may be), and compile statistics on acceptance rates, attendance, and general conference activities.

The areas I considered (with main conference in parentheses) were
• databases (SIGMOD)
• machine learning (ICML)
• operating systems (SOSP)
• networking (SIGCOMM)
• architecture (ISCA)
• graphics (SIGGRAPH)
and the results I got were interesting (all data that I compiled can be found in this spreadsheet: feel free to add other areas, or update numbers, as you see fit). Where I could, I tried to get a sense of attendance/acceptance rates from either asking people or looking at published numbers for recent years: the ACM DL has acceptance rates for many of the above. Information on conference activities was taken from the most recent year I could get data for (usually 2010 or 2009). The main points:
1. All of the listed conferences had attendance in the 500-600 range (except ISCA with average attendance of 400, and SIGGRAPH with 2000+ in the research side). So they are definitely conferences with attendance that STOC would like to mimic.
2. Acceptance rates varied, but most were below 20% (ICML being the exception at 25%). STOC is at 28% or so.
3. Number of papers accepted varied widely (23 for SOSP, 150 for ICML). I find this particularly interesting: it would seem that attendance correlates more with the perception of being 'flagship' than the actual number of papers accepted.
4. Most conferences had long lists of colocated workshops. The smallest number was SIGCOMM last year with 5, and others had many more. STOC had none.
5. With the exception of ISCA last year, all the conferences had significant poster sessions, either consisting of all papers accepted, or as a separate track with many posters. STOC had none.
6. The conferences all had other activities: demos, industrial tracks, works in progress or other such things (ISCA being the exception). STOC had none.
7. Durations varied between 4 and 6 days (including the initial day). Most had 5. STOC is 4.
To me, there are two things that stand out from this.
1. The number of papers accepted does not appear to make a difference to the attendance. SOSP happens once every two years, and accepts 23-25 papers, and gets 500 attendees !! ICML gets a similar number of attendees with 150 papers accepted each year.
2. There are a TON of activities at these conferences. Indeed, I think ICALP and ESA match them in terms of level of activity, but certainly not STOC. I've been a proponent of satellite events around a conference to increase attendance, and the STOC/EC/CCC colocation does seem to have helped. I'm also intrigued by the idea of colocating SoCG with STOC.
You may draw your own conclusions...

P.S. For the legions of readers who will start protesting that these communities are much larger than the theory community, I will merely point out that almost no one in this discussion thinks that the theory community is 300 strong: the question is more about getting the rather large theory community to show up in force for STOC.

Update: Michael Mitzenmacher has a post up listing specific logistical issues that come up with expanding the set of activities at a conference. He points out that if we decide to go to multiple satellite events (whether as separate events or whatever), we'll have to adjust to a much greater degree of organizational commitment up front, as well as no small amount of 'attitude adjustment'. For anyone who's ever attended a business meeting, this is a scary thought :)

## Saturday, June 19, 2010

### The Shape of Shape Analysis Research, Part III

Shape matching research in computational geometry is fundamentally distance-based. In other words, we start with a distance function, and then design algorithms to compute it, or minimize it under transformations, or approximate it, and so on.

There's an important problem with this point of view. While computing the distance between two shapes is an important tool in shape analysis, it's not the only problem. Other equally important problems include:
• Finding a shape similar to a query shape
• Matching pieces of shapes together
• Organizing shapes into groups (i.e., clustering)
And so the problem with the distance-based viewpoint is that all you get at the end is an abstract metric space. You can compute d(x,y) in an appropriate amount of time (maybe), but you lack all the additional structure needed to solve these other problems efficiently. With our modern knowledge of metric embeddings, it's always possible to ask if these distances can be embedded in a more tractable space, but it turns out that for measures of interest (Hausdorff, Frechet, earthmover), this cannot be done without incurring huge errors.

The idea of shape spaces turns this process around. Rather than starting with the distance, and trying to find a space to embed it in, shape-space based methods start with a mapping that takes a shape to a single point in a (usually curved) space, and use an induced metric (usually some kind of geodesic) as the distance.

By at least one unsourced account, this view of shape dates back to Riemann, but the modern formulation of this approach started with David Kendall, in the 70s. His idea was extremely elegant.

Consider a collection of closed simply connected regions of the plane (the shapes), each shape described by k points on its boundary. Each of these points can be described by the two coordinates (x,y), which we will write as the complex number x+iy. By a shifting transformation,  we can ensure that the centroid of each shape lies at the origin. This loses one (complex) degree of freedom, yielding a k-1 dimensional complex vector.

Next, consider what it means to rotate the shape around the origin. In the complex plane, this corresponds to multiplying by the complex number z = exp(i theta) (and scaling, similarly, is multiplication by a positive real). Quotienting out all such complex scalar multiples (that is, passing to projective space) means that we can identify a shape with a single point in (k-2)-dimensional complex projective space.
The distance between two shapes is now defined as the geodesic distance between two points in this space.

There are a few important points to note here:
1. Each shape of k points is mapped to a single point in a k-2 dimensional space.
2. All shapes are assumed to have the same number of points, which correspond across shapes.
3. The space is constructed by quotienting the original representation (the (k-1)-dimensional complex vector) by rotations and scalings.
This last point is particularly crucial: the invariance under transformations is folded directly into the representation, rather than being something to "solve" via minimization.

The general program outlined by Kendall (map shapes to points on a manifold quotiented by a suitable set of transformations) has led to many other constructions, among the more notable being Bookstein's shape space and the Michor-Mumford representation for planar closed curves invariant under diffeomorphisms (which bears a strong resemblance to a summed variant of the Frechet distance). These methods have (for reasons unknown to me) taken up residence primarily in the computer vision community.

A Critique.

There is much to like about the shape space approach to shape analysis. Fundamentally, by embedding shapes in a space with structure, it gives us both a distance measure and a geometry to play with, and this is invaluable. However, there are serious limitations to the ideas developed thus far.
• Computation: It's all very well to come up with a mathematically elegant formulation of a distance as a geodesic, but it's a lot harder to actually compute these distances. In practice, researchers often resort to heuristics with no guarantees beyond local convergence. To me, this is like building a beautiful mansion in a pit of mud: you can't get in or out without a lot of dirt and pain.
• Scalability: the mathematical complexity also makes it harder to do scalable computations on shapes.
• Global vs local features: I'll have more to say about this later, but these approaches (generally speaking) construct a global signature for a shape, which limits one's ability to do partial matching.
• Correspondences: The Kendall method at least requires explicit correspondences between points in each shape. Finding correspondences is one of the most annoying parts of shape analysis (and affects most methods for comparing shapes).
Next: We examine the problem of hearing shape, or how the Laplacian starts to figure in.

## Thursday, June 17, 2010

### Rebooting how we publish in CS.

Dan Wallach has a thought-provoking proposal on how to reboot the CS publication process from the ground up. Read the entire proposal here.

Here's an edited version of a response I sent to him (short version: I like it !)

I think the time is ripe for this: it seems that people are getting more and more used to using the arxiv/iacr/eccc for tech reports and DBLP as a de facto list of papers, and even regularly subscribing to arxiv RSS feeds to see what's new. Bibliography-management systems like Mendeley/CiteULike would also really benefit from this.

While (like others) I'm concerned about facilitating ranking schemes too much (I personally think the h-index is an abomination, but that's a different discussion), I think that even if the only outcome of this was to have a centralized single repository for CS publications, that in itself would be a major benefit.

I'm less sure about attention/reputation mechanisms though. It's clear that one of the challenges for researchers today is the 'eyeballs problem': how to get attention to your work amidst the sea of publications. While one might argue that Google and PageRank have done a good job of this, I think that over time it's become more and more top heavy, with a few locations acquiring sticky reputation and sucking in attention, and while this might be OK for general news, it's not so for research, where more often than not, good ideas can come from less "well known" sources.

I don't think CSPub causes any additional problems in this regard - but it would seem like much more thought is needed to design *transparent* ranking schemes. While google can do what they want with their ranking scheme, and keep it as a trade secret, a public service such as CSPub should try to keep ranking methods as transparent as possible. (hack-proof ranking methods ? I know there's research on this !)

### It's over !!

Yes, by far the most stress-filled SoCG I've attended is now over. I'm hoping we didn't traumatize the attendees too badly.

I apologize for the lack of posts during the conference - I just didn't have enough time to compose intelligent thoughts about the talks that I actually did manage to attend, and even Bernard's metamorphosis into Steve Jobs (at least in talk style) went unremarked upon :).

There were lots of great talks though, and hopefully as I get more time, I'll be able to mention some of them.

## Monday, June 14, 2010

### The Shape of Shape Analysis Research: Part II

Shape analysis in the geometry community follows a fairly standard pattern. It goes something like this:
1. Fix class of shapes (points or curves, usually)
2. Define distance between two shapes
3. Minimize distance under transformations (rotations, translations, sometimes scaling)
4. Approximate distance if necessary
5. Study distance for special classes of shapes.
There are many distances that have been studied in this manner for point sets, including the bottleneck matching distance, the Hausdorff distance, the RMS matching distance and the earthmover distance. For curves, the list is much shorter. The Frechet distance is pretty much the only game in town, with a brief cameo by its first cousin, the dynamic time warping distance.
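For concreteness, here's the discrete variant of the Frechet distance (the continuous version requires the free-space machinery mentioned below), computed by the standard O(mn) dynamic program. This is a sketch, not a tuned implementation:

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polygonal curves P and Q,
    via the standard O(len(P) * len(Q)) dynamic program."""
    def d(i, j):
        return math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        # the 'dog' and the 'owner' each either advance or wait
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
assert abs(discrete_frechet(P, Q) - 1.0) < 1e-12
```

Swapping the outer max-of-min for a sum over matched pairs gives (essentially) dynamic time warping, which is why the two are such close cousins.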

This process has brought forth a number of interesting ideas and tools - among them
• free space decompositions and equivalence classes of regions with respect to combinatorial structure in the solution (so you can enumerate solution types)
• how to approximate spaces of transformations via carefully chosen grids in order to get provable approximations for distance estimation
• connections between geometric matching and string matching via different kinds of hashing tricks.
I'd go as far as to argue that these tools are more important than the measures themselves, because of their applicability.
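As a toy illustration of the second tool, here's a Python sketch that approximately minimizes the Hausdorff distance under rotations by sampling a uniform grid of angles. The real results choose the grid resolution carefully (as a function of the diameter and the error parameter) to get provable bounds; this sketch just fixes one resolution.

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between finite point sets."""
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

def rotate(P, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in P]

def min_hausdorff_under_rotation(A, B, steps=360):
    """Approximately minimize the Hausdorff distance over rotations by
    trying a uniform grid of angles: a small rotation moves each point
    of a bounded-diameter set only a small distance, so a fine enough
    grid yields an additive approximation."""
    return min(hausdorff(rotate(A, 2 * math.pi * k / steps), B)
               for k in range(steps))

A = [(1, 0), (0, 1), (-1, 0), (0, -1)]
B = rotate(A, math.pi / 7)      # a rotated copy of A
assert min_hausdorff_under_rotation(A, B) < 0.05
```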

But while shape matching is a rich area of research within CompGeom, it's less clear what influence this research has had in the larger shape analysis environment. Some of the obstacles (some self-inflicted, some not) are:

Definitions involving 'max' terms.
Combinatorially, it's easier to define distance measures where the distance is governed by a max over some quantity (usually a max distance over pairs of points). It's easier to define the space of possible solutions, and make the requisite combinatorial arguments. But such measures are highly non-robust, since any one 'outlier' can cause the distance measure to report a large distance.

This problem is usually fixed by introducing 'outlier-sensitive' variants of the measure under consideration, which leaves some combinatorial structure intact (at a price), or by replacing 'max' measures by 'sum' measures, which can often be inelegant, and usually destroys most of the algorithmic tools developed for the original case.
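A quick numerical illustration in Python: a single stray point blows up the max-based Hausdorff distance by orders of magnitude, while a 'sum'-style stand-in (an average nearest-neighbor distance, used here purely for illustration) barely moves.

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance: a max-of-min construction."""
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

def avg_nn(A, B):
    """A 'sum'-style alternative: average nearest-neighbor distance."""
    def directed(X, Y):
        return sum(min(math.dist(x, y) for y in Y) for x in X) / len(X)
    return max(directed(A, B), directed(B, A))

A = [(i, 0.0) for i in range(10)]
B = [(i, 0.1) for i in range(10)]
assert abs(hausdorff(A, B) - 0.1) < 1e-12

B_outlier = B + [(100.0, 100.0)]   # one stray point
# the max-based distance jumps by three orders of magnitude...
assert hausdorff(A, B_outlier) > 100
# ...while the sum-based variant moves far less
assert avg_nn(A, B_outlier) < 15
```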

Reliance on the 'expensive exact, cheap approximate' framework.
This might require some explanation. Most of the above shape matching measures can be computed for two fixed shapes relatively easily. But if you have to compute them under transformation groups, things get hairy really quickly. Running times in the realm of n^7 or n^8 for point sets in three dimensions are not unusual.

What's worse though is the kind of impracticality involved in these algorithms (yes I know, n^7 doesn't need further beating, but still...). They usually involve finding intersections of surfaces defined by medium-to-high-degree polynomials, and then walking through the arrangements thus defined.

There's no way on God's green earth that anyone in their right mind will implement these algorithms, so we usually apply the standard bait-and-switch: "if you love these expensive exact methods, you're REALLY going to like our fast and tasty approximations !!". There have been some really creative tools designed to solve shape matching problems approximately, but what they often do is hide high complexity in terms involving the error epsilon, while looking well behaved in n.

It's difficult to overstate the depth and beauty that approximation methods bring to the field of computational geometry as a whole, and you should read Sariel's soon-to-be-book on this topic to learn more. But in the narrow realm of shape analysis, this two-step 'expensive exact, sort-of-cheap approximation' strategy has the following problems:
• Designing exact algorithms with humongous running times only confuses the people you might want to have use your algorithms. They'll just turn around and use some other measure. Coming along afterwards and saying, "but wait, we can APPROXIMATE IT!" doesn't help, because no one's invested enough in the measure to even care at this point.
• Hiding expensive dependencies in the error term is problematic. Theoretically, it's a sound form of analysis - isolating the dependency from the dominant 'input size' term. But practically speaking, a bound that's cubic in 1/epsilon is not a whole lot better than a bound that's (say) quadratic in n, for reasonable values of epsilon. Which is to say, they're both terrible. You can of course protest that in practice things work a lot better (and they often do!), but again, you've lost your audience, who wasn't really invested in the distance measure in the first place!
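
To put rough numbers on the second point (the values here are purely illustrative, not taken from any particular paper): a 1/epsilon^3 term at a not-unreasonable error tolerance lands in the same ballpark as an n^2 term at a modest input size.

```python
# Illustrative arithmetic: a 1/eps^3 error term vs. an n^2 input-size term.
eps = 0.01                     # a plausible approximation error
n = 1000                       # a modest input size

eps_term = (1 / eps) ** 3      # cubic in 1/epsilon
n_term = n ** 2                # quadratic in n

# Both land on the order of a million -- "both terrible".
print(eps_term, n_term)
```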
Missing the forest for the trees.
This is an odd kind of objection to level at theoretical research. But here's the problem as I see it. If your focus is the primitive 'compute distance between two shapes', then you use that as the platform and layer on things like 'minimize under transformations; find near neighbors', and so on.

The problem is that this approach focuses on the 'tree' (the distance measure) while in my mind missing the 'forest': namely, the larger set of problems of shape analysis, of which computing the distance measure is but one. I'll develop this idea as I talk about other approaches to shape analysis - the point I want to make is that you have to view the particular distance measure between shapes in the context of a large set of possible applications. A measure that looks nice and clean and well founded on its own may be less well suited for the rigors of supporting a class of analysis problems than a different measure that's maybe less elegant, but more flexible when seen in context.

A coda.
I was discussing some of these issues with someone, and I think a clarification is in order.

I'm not saying that people should or shouldn't work on studying whatever distance measures they want. There are all kinds of interesting geometric puzzles that have cropped up while studying shape matching (a subquadratic algorithm for the Fréchet distance?).

But when we look at the CompGeom shape matching literature through the lens of "is this the right way to design the theoretical foundations of the larger shape analysis research program?", then the above objections become more relevant.

In the next installment, I'll switch over to the line of shape analysis research that originated with David Kendall and others, and present a similar critique of that body of work.

## Tuesday, June 08, 2010

### Active learning modules for grad algorithms ?

Active learning (the pedagogy, not the area of machine learning) is all the rage in undergraduate education. My understanding of it (limited, btw) is that it involves much more active engagement with the students, and much less lecturing. This meshes nicely with new trends in teaching: with so much information available on the web, the traditional 'stand at the blackboard and scribble for an hour' model seems a little out of date.

My question here is: has anyone tried active engagement modules for topics in graduate algorithms? (That means topics like randomization, network flows, and a fast review of basic algorithmic primitives, with an emphasis on proof techniques.) I've experimented with group activities for NP-hardness reductions (team students up in groups and have them pick problems out of a hat to prove NP-hard), with mixed results.

My class size is in the 40-50 range, with a mix of beginning grads and advanced undergrads, divided up into people taking it as a requirement, people taking it out of curiosity, and those taking it to help ace their google/m\$ interviews (no, I'm not making this up - I did a poll).

## Monday, June 07, 2010

### Why double blind review occasionally annoys me.

1. Submit a paper to a conference that expects blind submissions.
2. Resist the urge to place the paper on the arXiv, because of said blind submission policy, and the misguided belief that placing the paper online would violate the spirit of said policy.
3. Watch as a stream of papers on the conference topic magically appears on the arXiv.

## Friday, June 04, 2010

### bibtex style question

I have a BibTeX style hacking question, and am hoping the community at large can help me out.

Here's the problem: I have a set of names of authors. I wish to make two BibTeX style files, such that
• In style file 1 (S1) any paper including someone in this set of authors will be rendered normally
• In style file 2 (S2) any paper including someone in this set of authors will be rendered with the author name underlined.
My current hack was to do the following. I created two versions of a 'names.bib' file. In both files, the names are stored as @strings, and in the second version, the strings are underlined, i.e., using \underline{name}.

In my master bib file, the names are entered merely as the string value, so if I stored a name as
@string{me = "Suresh Venkat"}  or @string{me = "\underline{Suresh Venkat}"}
I merely enter the author name as
author = {..other names... # me # .. other names}
While this works, the problem is that BibTeX doesn't know (obviously) that the string 'me' needs to be formatted as a name, and so I get ugliness like
"Author, A., Author, B., Suresh Venkat and Author, C. "
in the final bbl instead of
"Author, A., Author, B., Venkat, S. and Author, C. "
Now, I didn't expect my solution to work, but I don't know what will. Any ideas?
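
For concreteness, here's a minimal sketch of the setup described above, plus one possible direction; every key, name, and title below is made up for illustration, and the special-character idea is an untested suggestion, not a verified fix.

```bibtex
% names.bib, plain version (names and keys are illustrative):
@string{me = "Venkat, Suresh"}

% names.bib, underlined version. One untested direction: wrapping each name
% part in a depth-one brace group starting with a control sequence makes
% BibTeX treat it as a "special character", so the Last, First structure is
% still parsed as a name. Caveat: a special character counts as a single
% "letter", so how abbreviation comes out needs checking against the .bst.
@string{me = "{\underline{Venkat}}, {\underline{Suresh}}"}

% master .bib file -- the # concatenation has to happen outside the braces:
@article{somekey,
  author  = {Author, Alice and } # me # { and Author, Carol},
  title   = {An Illustrative Paper},
  journal = {Some Journal},
  year    = {2010}
}
```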

## Wednesday, June 02, 2010

### A (minor) conundrum when citing related work

Suppose you're writing a paper in which the three key prior results are A, B, and C. Let's say that C is the most recent of the three, and discusses A and B. But C completely misrepresents the work of A and B, to the extent that it starts to undermine the very premise of C!

Now you have to discuss these prior papers: what do you do ? The conservative approach is to ignore the issue, and merely discuss A, B, and C correctly. If it starts sounding like C doesn't make any sense in the light of the correct rendering of A and B, then that's too bad.

But suppose it really bothers you that C got away with this? Is it appropriate to mention C's misinterpretation (as politely as possible), or is it not worth it? Would your answer be different if the paper were for a conference or for a journal? Would the identity of the authors of C matter? Should you just suck it up and take the high road?

## Tuesday, June 01, 2010

### Avner Magen

As has been announced by Lance and Mihai, Avner Magen just died in a climbing accident in Alaska (there's a memorial blog set up in his name).

I didn't know Avner personally, but I've "met" him through a few of his papers. Among the many things he did were some very nice early results in the theory of metric embeddings. With Nati Linial and Michael Saks, he showed how to embed trees into Euclidean metrics with low (O(log log n)) distortion. And in a later result, he showed how to do JL-style embeddings that preserve not only distances, but also higher-order volumes (improvements here).

This last result has been of particular interest in some of the work I've been doing of late - we've been interested in arc-length preserving embeddings that relate to volume preservation, and I've also had a student looking at some near neighbor problems for higher dimensional objects.

It's very sad to read such recent works and know that the person who wrote them is no more. My condolences to his family and friends.