Wednesday, May 30, 2012

Workshop on coding theory, complexity and sparsity

Atri Rudra asks me to post an announcement for the second incarnation of the workshop on coding theory, complexity and sparsity. The first event went off quite well - I was sure that Dick Lipton had a post on it but I couldn't find it (Update: here it is, thanks to Atri). At any rate, do attend and get your students to attend: they have a great roster of speakers and travel support for students.
Efficient and effective transmission, storage, and retrieval of information on a large scale are among the core technical problems of the modern digital revolution. The massive volume of data necessitates mathematical and algorithmic methods for efficiently describing, summarizing, and synthesizing it, and, increasingly critically, for deciding when and how to discard data before storing or transmitting it. Such methods have been developed in two areas: coding theory, and sparse approximation (SA) (with its variants, compressive sensing (CS) and streaming algorithms).
Coding theory and computational complexity are both well-established fields that enjoy fruitful interactions with one another. By contrast, while significant progress has been made on the SA/CS problem, much of that progress concerns the feasibility of the problems, including a number of algorithmic innovations that leverage coding-theory techniques; a systematic computational-complexity treatment of these problems is sorely lacking.
The workshop organizers aim to develop a general computational theory of SA and CS (as well as related areas such as group testing) and its relationship to coding theory. This goal can be achieved only by bringing together researchers from a variety of areas. We will have several tutorial lectures that will be directed to graduate students and postdocs.
These will be hour-long lectures designed to give students an introduction to coding theory, complexity theory/pseudo-randomness, and compressive sensing/streaming algorithms.
We will have a poster session during the workshop and everyone is welcome to bring a poster but graduate students and postdocs are especially encouraged to give a poster presentation.
Confirmed speakers:
  • Eric Allender, Rutgers
  • Mark Braverman, Princeton
  • Mahdi Cheraghchi, Carnegie Mellon University
  • Anna Gal, The University of Texas at Austin
  • Piotr Indyk, MIT
  • Swastik Kopparty, Rutgers
  • Dick Lipton, Georgia Tech
  • Andrew McGregor, University of Massachusetts, Amherst
  • Raghu Meka, IAS
  • Eric Price, MIT
  • Ronitt Rubinfeld, MIT
  • Shubhangi Saraf, IAS
  • Chris Umans, Caltech
  • David Woodruff, IBM
We have some funding for graduate students and postdocs. For registration and other details, please look at the workshop webpage: https://sites.google.com/site/umccsworkshop2012/

Thursday, May 17, 2012

8F Computational Geometry

What is North Carolina known for?
  • Basketball?
  • Barbecue?
  • Great Universities?

    Well, this June it will be known for geometry, as SoCG will be held this year very near Durham, NC. Unfortunately, it will be at UNC and not Duke. (I kid, I know Jack will do a great job organizing.)

    If those things are not enough, Suresh and I (Jeff) are organizing an 8F-Computational Geometry workshop at SoCG. In case that's not clear, 8F stands for AITF, and AITF stands for Algorithms in the Field.

    The idea is both to showcase how computational geometry has had an impact on "the field" and to highlight some areas that we hope are ripe for future impact from the CG community. There are two days of talks: Tuesday, June 19, on ongoing impact, and Wednesday, June 20, on potential (further) impact. We already have a great line-up of speakers; on the first day we have talks by
  • Valerio Pascucci on Visualization,
  • Lars Arge on GIS,
  • David Mount on Data Retrieval, and
  • Nina Amenta on Meshing and Surface Reconstruction.
    On the second day we have talks by
  • Dan Halperin on Robotics and Automation,
  • Jie Gao on Mobile Networks,
  • Michael Mahoney on High Dimensional Data Analysis, and
  • Tom Fletcher on Medical Imaging.

    We encourage you all to attend, and to see what the Durham area has to offer. There are actually four full days of activities at SoCG this year. The early registration deadline is next week (May 23), and it's only $100 for students!


    We should mention that this 8F-Computational Geometry workshop is part of a larger push (through sponsorship) by NSF. There was a more general 8F Workshop last year, and there are more in the works.
    Wednesday, May 16, 2012

    "In the long run..."

    "In the long run, we're all dead" is a famous quote attributed to John Maynard Keynes. The context was his arguments with economists of the time: he was trying to argue for government intervention in the markets to control inflation, rather than just letting it play out.

    It's also an apt response to the occasional claim that asymptotics will eventually win out, especially with large data. Asymptotics will eventually win out, as long as everything else stays fixed.

    But that's the precise problem. Everything else doesn't stay fixed. Well before your $C n \log n$ algorithm beats the $c n^2$ algorithm, we run out of memory, or local cache, or something else, and the computational model changes on us.
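    As a back-of-the-envelope illustration (with made-up constants; this isn't a claim about any specific pair of algorithms), here's a tiny Python calculation of where a $10^4 \cdot n \log n$ algorithm finally overtakes a $1 \cdot n^2$ one:

        # toy crossover calculation with hypothetical constants C = 10^4 and c = 1
        from math import log

        C, c = 10_000, 1
        n = 2
        while C * n * log(n) >= c * n * n:   # the n log n algorithm is still losing
            n *= 2
        print(n)   # 131072: the first power of two at which n log n wins

    By then, the input may no longer fit where you assumed it would.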

    We come up with external memory models, and are informed that in fact even a factor of $\log N$ is too much. We design streaming models and MapReduce and so on and so forth, and realize that all this communication is burning a hole in our atmosphere.

    Lesson to be learned (and re-learned): asymptotic analysis is a powerful tool, but it's only as good as the model of computation you're REALLY working with.

    Wednesday, May 09, 2012

    Multiplicative Weight Updates as Zombie Binary Search

    The multiplicative weight update method (MWU hereafter) is a neat algorithm design technique with applications in machine learning, geometry, and optimization, among others. However, it's viewed (and discussed) as an advanced technique, with very technical examples requiring lots of background (see the first example in the Arora-Hazan-Kale survey).

    Having pondered the use of MWU for a while now, I think it should be taught as a standard algorithms tool that follows naturally from divide and conquer and prune-and-search. In what follows, I'll sketch out how this might work.

    A quick caveat: the MWU, like many powerful tools, has multiple interpretations, and the survey hints at many of them (for example, MWU = derandomization of the Chernoff bound). I don't mean to imply that there's only one way to interpret the technique, only that the approach I'll describe is the most accessible one when learning the method for the first time.



    The divide and conquer unit in an algorithms class might cover sorting and FFTs, driven by the standard D&C recurrence relation. Prune and search is introduced as a variation, with median finding and binary search being the prototypical examples.
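    (For reference, the recurrence in question is $T(n) = 2T(n/2) + O(n)$, which solves to $T(n) = O(n \log n)$. Prune-and-search recurses into only one of the pieces, giving $T(n) = T(cn) + O(n) = O(n)$ for any constant $c < 1$.)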

    Prune-and-search works like magic, if you're seeing it for the first time. Do some work, throw away the rest, and your $n \log n$ running time goes down to linear. (As an Indian of a certain age, this always reminds me of "thoda khao, thoda phenko" (eat a bit, throw a bit) from Jaane Bhi Do Yaaro).

    But what makes it tick is ensuring that a constant fraction of the input gets thrown away at each step. The most direct way to do this is deterministically: on a linear time budget, find a point that splits the input into two roughly balanced parts.

    But that's expensive in practice. Indeed, algorithms that use median finding as a subroutine try to avoid using a deterministic procedure because it can be slow. When you get to more advanced methods like parametric search, it gets even worse.

    The neat trick to apply here is to randomize ! We know that we don't have to find an exact split point - merely one that approximately balances the two sides of the recursion. For median finding, we can pick three points at random and take their median. With constant probability, this median creates a split no worse than 1/3-2/3, and away we go !
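    To make this concrete, here's a minimal Python sketch of the idea (my own illustration, not taken from any particular source): selection by prune-and-search, using the median of three random samples as the pivot. The name randomized_select and the list-based partitioning are just for readability.

        import random

        def randomized_select(arr, k):
            """Return the k-th smallest element (0-indexed) of arr."""
            if len(arr) == 1:
                return arr[0]
            # cheap approximate splitter: the median of three random samples
            pivot = sorted(random.choices(arr, k=3))[1]
            lo = [x for x in arr if x < pivot]
            eq = [x for x in arr if x == pivot]
            hi = [x for x in arr if x > pivot]
            # prune: recurse only into the piece that contains rank k
            if k < len(lo):
                return randomized_select(lo, k)
            if k < len(lo) + len(eq):
                return pivot
            return randomized_select(hi, k - len(lo) - len(eq))

        data = [random.randrange(10**6) for _ in range(10**5)]
        assert randomized_select(data, len(data) // 2) == sorted(data)[len(data) // 2]

    Each call does linear work, and a reasonably balanced split shrinks the subproblem by a constant fraction, so the expected total work is linear.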

    This idea surfaces again and again, especially in the many randomized algorithms in computational geometry. As a design strategy, it's quite effective - design an algorithm that works if you can do balanced splits, and then find the split by choosing randomly. Invoke the Chernoff God and some satellite deities, and you're done.

    Which brings us to the MWU.

    Randomized splitting says that we're willing to lie a little about the split process, and things still work out. But whether you do randomized splitting or deterministic, the end result is still that you throw away some reasonable fraction of the input, NEVER TO SEE IT AGAIN.

    Suppose you can't even do that ?

    Think of noisy binary search (or 20 questions with a liar). Now even your decision about what to prune is error-prone: you might be eliminating things that you need to consider later, so it's not clear that you can make any kind of search progress. But let's be reasonable and limit the adversary (or the noise) in some manner. Let's say that you'll only misclassify a small fraction of the input in each iteration (where small means "strictly less than half"). Let's also assume that I can tell you how important each point is, and that your "small fraction" is measured with respect to those importances.

    So how does MWU now work ?  I tell you a distribution over the input, and you give me back a rule that's reasonably good at (say) classifying points. I increase the importance of things you made mistakes on (so you try not to do it again), and repeat.

    Eventually, the weight of anything you keep making mistakes on grows rapidly. But the total weight of the input can't grow too quickly, because in each round you only boost a small (weighted) fraction of it. Intuitively, the first quantity catches up with the second, at which point the algorithm is done. You can show that if the update schedule is chosen correctly, the process terminates in a logarithmic number of rounds.
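    To see the mechanics in one place, here's a small Python sketch of the noisy-binary-search version (my own toy illustration, not the formulation from the survey): find a hidden number using comparisons that lie with some fixed probability, by keeping a weight per candidate, querying the weighted median, and multiplicatively shrinking the weight of every candidate that contradicts the answer. Down-weighting the inconsistent candidates is just the mirror image of boosting the mistakes; the names mwu_noisy_search and noisy_compare and the parameters rounds, beta, and lie_prob are all illustrative choices.

        import random

        def noisy_compare(target, pivot, lie_prob):
            """Answer 'is target <= pivot?', but lie with probability lie_prob."""
            truth = target <= pivot
            return truth if random.random() > lie_prob else not truth

        def mwu_noisy_search(n, target, rounds=300, beta=0.5, lie_prob=0.25):
            """Locate target in range(n) using only noisy comparisons."""
            weights = [1.0] * n
            for _ in range(rounds):
                total = sum(weights)
                # query the weighted median of the current weights
                acc, pivot = 0.0, 0
                for i, w in enumerate(weights):
                    acc += w
                    if acc >= total / 2:
                        pivot = i
                        break
                answer = noisy_compare(target, pivot, lie_prob)
                # candidates inconsistent with the (possibly wrong) answer are
                # down-weighted, never eliminated; they can come back in later rounds
                for i in range(n):
                    if (i <= pivot) != answer:
                        weights[i] *= beta
            return max(range(n), key=lambda i: weights[i])

        print(mwu_noisy_search(1024, target=517))   # prints 517 with high probability

    The hidden target loses weight only on the rounds where the oracle lies, while any other candidate loses weight whenever a truthful answer separates it from the target, so after enough rounds the target carries the largest weight.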

    Essentially, MWU functions as a zombie binary search. You want to kill elements, but they keep coming back. Thankfully, each time they come back they're slightly weaker, so you have a better chance of hitting them next time. Eventually, head is severed from neck, and your zombies are dead (and your points are found).

    My only wish at this point is that whenever you think of MWU you think of zombies. Now that's impact :)
