At least a constant fraction of the above statements are true.
And if you are still unconvinced, here's a picture of Columbia University, where the workshops and tutorials will take place:
Ruminations on computational geometry, algorithms, theoretical computer science and life
Efficient and effective transmission, storage, and retrieval of information on a large scale are among the core technical problems of the modern digital revolution. The massive volume of data necessitates mathematical and algorithmic methods for efficiently describing, summarizing, synthesizing, and, increasingly critically, deciding when and how to discard data before storing or transmitting it. Such methods have been developed in two areas: coding theory, and sparse approximation (SA), with its variants compressive sensing (CS) and streaming algorithms.
Coding theory and computational complexity are both well-established fields that enjoy fruitful interactions with one another. By contrast, while significant progress has been made on the SA/CS problem, much of it concentrates on the feasibility of the problems, including a number of algorithmic innovations that leverage coding-theory techniques; a systematic computational-complexity treatment of these problems is sorely lacking. The workshop organizers aim to develop a general computational theory of SA and CS (as well as related areas such as group testing) and its relationship to coding theory. This goal can be achieved only by bringing together researchers from a variety of areas.
The Coding, Complexity and Sparsity workshop (SPARC 13) will be held in Ann Arbor, MI on August 5-7, 2013.
The workshop will include hour-long tutorial lectures designed to give students an introduction to coding theory, complexity theory/pseudorandomness, and compressive sensing/streaming algorithms. There will also be a poster session during the workshop; everyone is welcome to bring a poster, and graduate students and postdocs are especially encouraged to present one.
This is the third incarnation of the workshop; the previous two were also held in Ann Arbor, in August 2011 and August 2012.
Confirmed speakers:
* Jin-Yi Cai (University of Wisconsin, Madison)
* Shafi Goldwasser (MIT)
* Piotr Indyk (MIT)
* Swastik Kopparty (Rutgers University)
* Dick Lipton (Georgia Tech)
* Andrew McGregor (University of Massachusetts, Amherst)
* Raghu Meka (IAS)
* Jelani Nelson (Harvard)
* Eric Price (MIT)
* Christopher Ré (University of Wisconsin, Madison)
* Shubhangi Saraf (Rutgers University)
* Suresh Venkatasubramanian (University of Utah)
* David Woodruff (IBM)
* Mary Wootters (Michigan)
* Shuheng Zhou (Michigan)
We have some funding for graduate students and postdocs, with preference given to those presenting posters. For registration and other details, please see the workshop webpage: http://eecs.umich.edu/eecs/SPARC2013/
The workshop organizers aim to develop a general computational theory of SA and CS (as well as related areas such as group testing) and its relationship to coding theory, a goal that can be achieved only by bringing together researchers from a variety of areas. There will be several hour-long tutorial lectures directed at graduate students and postdocs, introducing coding theory, complexity theory/pseudorandomness, and compressive sensing/streaming algorithms, as well as a poster session; everyone is welcome to bring a poster, and graduate students and postdocs are especially encouraged to present one.
We have some funding for graduate students and postdocs. For registration and other details, please see the workshop webpage: https://sites.google.com/site/umccsworkshop2012/

Confirmed speakers:
- Eric Allender (Rutgers)
- Mark Braverman (Princeton)
- Mahdi Cheraghchi (Carnegie Mellon University)
- Anna Gal (The University of Texas at Austin)
- Piotr Indyk (MIT)
- Swastik Kopparty (Rutgers)
- Dick Lipton (Georgia Tech)
- Andrew McGregor (University of Massachusetts, Amherst)
- Raghu Meka (IAS)
- Eric Price (MIT)
- Ronitt Rubinfeld (MIT)
- Shubhangi Saraf (IAS)
- Chris Umans (Caltech)
- David Woodruff (IBM)
Is the field now moving too fast for models to even make sense? A case in point: Google's MapReduce infrastructure has taken the world by storm, and you can even run MapReduce "in the cloud" on Amazon's servers. Howard, Sergei and Sid Suri had a very interesting paper on MapReduce at SODA last year, and there's a ton of academic work on algorithm design (in theory and practice) in this model. But if new reports are to be taken seriously, Google itself is moving on from MapReduce to other distributed data management systems.
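For readers who haven't met the model: here is a toy, single-machine sketch of MapReduce's three rounds (map, shuffle, reduce), using the canonical word-count example. The function names are illustrative, not Google's actual API; the point is only the shape of the computation that the algorithmic work above formalizes.

```python
from collections import defaultdict

def map_phase(records, mapper):
    # The mapper emits (key, value) pairs independently for each input record.
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def shuffle_phase(pairs):
    # The framework groups all emitted values by key between rounds.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    # The reducer combines all values sharing a key into one result.
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count: map each document to (word, 1) pairs, then sum per word.
docs = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(
    shuffle_phase(map_phase(docs, lambda d: [(w, 1) for w in d.split()])),
    lambda key, values: sum(values),
)
# counts -> {"the": 2, "quick": 1, "brown": 1, "fox": 1, "lazy": 1, "dog": 1}
```

In the real system each phase runs in parallel across machines, and the interesting modeling questions are exactly about what each machine may hold and how many rounds are needed.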
What is the minimum number of distinct distances achievable by a set of n points in the plane? I hope to have a longer guest post up soon on this, so I won't say much right now. What's neat is that this result is the implementation of a larger program/line of attack laid out by Gyorgy Elekes, who sadly died in 2008 before seeing it come to fruition. Micha Sharir has a beautiful writeup of the history of this approach (it predates the new results, though). Bill Gasarch has a nice page collecting the major developments in this area, including links to the Kakeya problem.
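To make the question concrete, here is a small brute-force counter (my own illustrative helper, not from any of the linked writeups). Working with squared distances keeps the arithmetic exact over integer points.

```python
from itertools import combinations

def distinct_distances(points):
    """Count distinct pairwise distances among 2D points.

    Comparing squared distances avoids floating-point error and gives
    the same count, since x -> x**2 is injective on nonnegative reals.
    """
    return len({(ax - bx) ** 2 + (ay - by) ** 2
                for (ax, ay), (bx, by) in combinations(points, 2)})

# Four corners of a unit square: only the side and the diagonal.
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
# distinct_distances(square) -> 2

# The sqrt(n) x sqrt(n) integer grid is the classic low-count construction:
grid = [(x, y) for x in range(3) for y in range(3)]  # n = 9 points
# distinct_distances(grid) -> 5, versus 36 pairs in total
```

Erdos observed that the integer grid achieves on the order of n/sqrt(log n) distinct distances, far below the roughly n^2/2 of a generic point set, and conjectured this to be essentially optimal; the new result referenced above gives a nearly matching lower bound.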