At least a constant fraction of the above statements are true.
And if you are still unconvinced, here's a picture of Columbia University, where the workshops and tutorials will take place:
[Image: Columbia University]
Kids: "Let's see the next episode now!"
Me: We'll have to wait a week for it.
Kids: "What do you mean, we have to wait a week ?"
I want to suggest here that, while reverse engineering might be a useful strategy for figuring out how an existing technology works, it is less useful for telling us how it came to work that way. Because reverse engineering starts from a finished technical object, it misses the accidents that happened along the way — the abandoned paths, the unusual stories behind features that made it to release, moments of interpretation, arbitrary choice, and failure. Decisions that seemed rather uncertain and subjective as they were being made come to appear necessary in retrospect. Engineering looks a lot different in reverse.

But along the way, he makes an insightful observation about the very idea of structuralism as applied to algorithm design: namely, the idea that by breaking down the parts of the algorithm and understanding the pieces and how they fit together, we can "understand" the algorithm at some higher level.
When you break an object down into its parts and put it back together again, you have not simply copied it — you’ve made something new. A movie’s set of microtags, no matter how fine-grained, is not the same thing as the movie. It is, as Barthes writes, a “directed, interested simulacrum” of the movie, a re-creation made with particular goals in mind. If you had different goals — different ideas about what the significant parts of movies were, different imagined use-cases — you might decompose differently. There is more than one way to tear apart content. (emphasis added)

In other words, the value of a reductionist analysis of a piece of work is not just in understanding the parts and how they fit together, but in understanding the choices that led to that particular decomposition.
A paper tells a story, and you have to shape your results - their ordering, presentation, and even what you keep and what you leave out - so that the story is consistent and clear.
The SIAM Symposium on Discrete Algorithms (SODA14) will be held January 5-7, 2014 at the Hilton Portland & Executive Tower in Portland, Oregon, USA. Algorithm Engineering and Experiments (ALENEX14) and Analytic Algorithmics and Combinatorics (ANALCO14) will be co-located with the conference on January 5 and 6, respectively.
The deadline for students to apply for travel support is approaching! The NSF, Google, IBM Research, and Microsoft Corporation have provided generous travel support for students interested in attending SODA14 or its associated meetings, ALENEX14 and ANALCO14. Application materials and a letter of recommendation from your advisor are due by November 15. Please visit http://www.siam.org/meetings/da14/tsupport.php for detailed funding information and application procedures.
Pre-registration for all three meetings is also available at http://www.siam.org/meetings/da14/reginfo.php.
Additional conference information is available at the following websites:
SODA14: http://www.siam.org/meetings/da14/
ANALCO14: http://www.siam.org/meetings/analco14/
ALENEX14: http://www.siam.org/meetings/alenex14/
(For more on the tailed-off line at the end, see the Wikipedia entry.)

Chorus:
Theorists gave names to all the classes
in the beginning, in the beginning
Theorists gave names to all the classes
in the beginning long time ago
Verse 1:
They saw a woman slim and tall
forging steadily to her goal
she went by fast but that's no crime
uh I think I'll call her PTIME
Verse 2:
They saw a kid who soldiered on
he got no rest from dusk to dawn
he kept sweating in the same place
uh I think I'll call him PSPACE
Verse 3:
They saw a blind man comb his shack
to find his walkstick in the black
but once he found it he could see
uh I think I'll call him NP
Verse 4:
They saw a boy who walked and walked all night
never veered to the left or right
was kind of slow but not that bad
uh I think I'll call him PPAD
Verse 5:
There was a beast and on its back
it carried Heisenberg and Dirac
it was as weird as a beast can be
uh...
In Theory

Verse 1
Think of a carpenter of sorts
he has the flu he has the blues
He burns his oil every night
all night frustrated and confused
Bridge
And worst of all he can't complain
In theory his job is plain (twice)
Chorus
Build something sturdy and beautiful and useful
He's always short in one goal
He juggles true with elegant and useful
and he keeps dropping one ball
Verse 2
His buddies like him alright
except they love to criticize
They go and take his stuff apart
with clever words with clever eyes
Verse 3
Some nights a girl comes to his shop
to sip his tea learn something raw
his foggy mind his tangled soul
want her to stay but let her go
Geometry is the glue between statistics and computer science.
-- Michael Jordan

That's why everyone gets stuck in it.
-- Graham Cormode
The expected risk is upper bounded by $(C/n)$ times the geometric complexity of $S, \mathbf{x}^*$, where $n$ is the number of samples.
Efficient and effective transmission, storage, and retrieval of information on a large scale are among the core technical problems of the modern digital revolution. The massive volume of data necessitates mathematical and algorithmic methods for efficiently describing, summarizing, and synthesizing data, and, increasingly critically, for deciding when and how to discard data before storing or transmitting it. Such methods have been developed in two areas: coding theory, and sparse approximation (SA) and its variants, compressive sensing (CS) and streaming algorithms.
Coding theory and computational complexity are both well-established fields that enjoy fruitful interactions with one another. By contrast, while significant progress has been made on the SA/CS problem, much of that progress concentrates on the feasibility of the problems, including a number of algorithmic innovations that leverage coding theory techniques; a systematic computational complexity treatment of these problems is sorely lacking. The workshop organizers aim to develop a general computational theory of SA and CS (as well as related areas such as group testing) and of its relationship to coding theory. This goal can be achieved only by bringing together researchers from a variety of areas.
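To make the SA/CS problem concrete, here is a toy sketch of the basic setup: recover a sparse vector from a small number of random linear measurements. The dimensions and the use of scikit-learn's greedy OMP solver are my own choices for illustration, not anything from the workshop announcement.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# A k-sparse signal of length n, observed through m << n random
# Gaussian measurements: the basic compressive sensing setup.
n, m, k = 200, 60, 5
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # measurement matrix
y = A @ x                                 # the m measurements

# Greedy sparse recovery (orthogonal matching pursuit), one of the
# algorithmic approaches whose computational complexity the workshop
# aims to treat systematically.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_

print("support recovered exactly:",
      set(np.flatnonzero(x_hat)) == set(support))
print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```

The computational questions the workshop targets begin exactly here: how few measurements, and how much time, does recovery fundamentally require?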
The Coding, Complexity and Sparsity workshop (SPARC 13) will be held in Ann Arbor, MI, on August 5-7, 2013.
These will be hour-long lectures designed to give students an introduction to coding theory, complexity theory/pseudo-randomness, and compressive sensing/streaming algorithms. We will have a poster session during the workshop; everyone is welcome to bring a poster, but graduate students and postdocs are especially encouraged to present one.
This is the third incarnation of the workshop and the previous two workshops were also held in Ann Arbor in August of 2011 and 2012.
Confirmed speakers:
* Jin-Yi Cai (University of Wisconsin, Madison)
* Shafi Goldwasser (MIT)
* Piotr Indyk (MIT)
* Swastik Kopparty (Rutgers University)
* Dick Lipton (Georgia Tech)
* Andrew McGregor (University of Massachusetts, Amherst)
* Raghu Meka (IAS)
* Jelani Nelson (Harvard)
* Eric Price (MIT)
* Christopher Ré (University of Wisconsin, Madison)
* Shubhangi Saraf (Rutgers University)
* Suresh Venkatasubramanian (University of Utah)
* David Woodruff (IBM)
* Mary Wootters (Michigan)
* Shuheng Zhou (Michigan)
We have some funding for graduate students and postdocs with preference given to those who will be presenting posters. For registration and other details, please look at the workshop webpage: http://eecs.umich.edu/eecs/SPARC2013/
The theory community doesn't seem to have caught on yet. There should be some nice theoretical model that captures the vector and other operations of a GPGPU, and then we should search for algorithms that make the best use of the hardware. The theory world writes algorithms for non-existent quantum computers but not for the machines that we currently use.

Well.
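For readers who haven't met the term, the "vector operations" in question are bulk data-parallel primitives. Here is a minimal sketch of the contrast, using NumPy on a CPU as a stand-in for the idea (my illustration, not a proposed model):

```python
import numpy as np

n = 1_000_000
x = np.random.rand(n)

# Scalar (RAM-model) view: n sequential operations.
y_scalar = np.empty(n)
for i in range(n):
    y_scalar[i] = 2.0 * x[i] + 1.0

# Vector (GPGPU-style) view: one bulk data-parallel operation over all
# n elements -- the primitive a GPGPU cost model would charge for.
y_vector = 2.0 * x + 1.0

assert np.allclose(y_scalar, y_vector)
```

A theoretical model would have to decide how to price that bulk operation relative to the scalar loop, which is exactly where the algorithm design questions start.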
People are really good at telling stories that weave together multiple causes and multiple contexts. Data analysis is pretty bad at narrative and emergent thinking...

I've felt for a while now that it's time to design mechanisms for providing context and narrative to data analysis*. Some of the research my student Parasaran does is on metaclustering: essentially the meta-analysis of different perspectives (clusterings) of data to draw out a larger meaning. We've also just submitted a paper on how to derive local markers of prediction validity in clustering (rather than relying only on global measures of quality). And my seminar this semester is about the larger problems of explanations and accounting in data mining.
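To make the metaclustering idea concrete, here is a minimal sketch assuming scikit-learn; it is my toy illustration of the general approach, not the actual method from that research: generate many clusterings of one dataset, measure pairwise disagreement between them, and then cluster the clusterings themselves.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# One dataset, many "perspectives": k-means runs with different k
# and different random seeds.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
clusterings = [
    KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(X)
    for k in (2, 3, 4, 5) for seed in range(5)
]

# Dissimilarity between two clusterings: 1 - adjusted Rand index.
m = len(clusterings)
D = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        D[i, j] = 1.0 - adjusted_rand_score(clusterings[i], clusterings[j])

# Group the clusterings into "meta-clusters" of similar perspectives.
meta = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                               linkage="average").fit_predict(D)
print("meta-cluster of each clustering:", meta)
```

The meta-clusters group runs that see the data the same way; the divergent groups are precisely the "multiple perspectives" from which a larger meaning can be drawn.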