Tuesday, April 10, 2007

Measure concentration

I've been preparing a lecture on the Johnson-Lindenstrauss Lemma and dove deep into the beautiful world of measure concentration. Matousek's chapter on the topic is excellent, but I found that the flow of the results (from measure concentration towards the JL Lemma) was not to my taste, either personally or pedagogically: it's possible to discuss the lemma algorithmically, using elementary proofs, without having to uncover the deep measure-concentration machinery chugging below.
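The "algorithmic" reading of the lemma really is elementary: project onto a random Gaussian matrix and check that pairwise distances survive. Here is a minimal sketch (not from the post; the function name and the constant 4 in the target dimension are one common textbook choice, not a canonical one):

```python
# A minimal sketch of the algorithmic Johnson-Lindenstrauss argument:
# project onto a random Gaussian matrix and inspect the distortion.
import numpy as np

rng = np.random.default_rng(0)

def jl_project(points, eps):
    """Map n points in R^d down to k = O(log n / eps^2) dimensions."""
    n, d = points.shape
    k = int(np.ceil(4 * np.log(n) / eps**2))  # illustrative constant
    # Entries are N(0, 1/k), so E||Rx||^2 = ||x||^2 for every fixed x;
    # concentration of ||Rx||^2 around its mean does the rest.
    R = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))
    return points @ R.T

points = rng.normal(size=(50, 1000))   # 50 random points in R^1000
proj = jl_project(points, eps=0.5)     # lands in R^63

# Distortion of distances from one point to all the others:
orig = np.linalg.norm(points[0] - points[1:], axis=1)
new = np.linalg.norm(proj[0] - proj[1:], axis=1)
print(new / orig)  # ratios clustered around 1
```

The union bound over all pairs of points is what forces the log n in the target dimension; nothing deeper than a Gaussian tail estimate is needed.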

In that respect, Alexander Barvinok's lecture notes on measure concentration prove quite useful. He starts from simple measure concentration on the sphere, builds up to the JL Lemma, and then carries on to Lévy's theorem, Dvoretzky's theorem, and the Brunn-Minkowski inequality. He starts with a good intuitive definition of measure concentration:
Any "reasonable" function defined on a "large" probability space takes values close to its average with high probability
which nicely ties it together (in my mind) with other large deviation results in information theory (he also has a section on rate distortion and Cramér's work).
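That intuitive statement has a precise instance in Lévy's theorem on the sphere, which the notes build toward (stated here modulo the exact constants, which vary between sources): if $f$ is a 1-Lipschitz function on the unit sphere $S^{n-1}$ with median $m_f$, then under the uniform measure,

```latex
\Pr\big[\,|f(x) - m_f| \ge \varepsilon\,\big] \;\le\; 2\,e^{-\varepsilon^2 n/2}.
```

In high dimension, a "reasonable" (Lipschitz) function on the "large" space $S^{n-1}$ is nearly constant, exactly as the quoted slogan promises.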

Another useful nugget from Barvinok's notes is a thorough drilling on the use of the "Laplace trick": the technique of taking the exponential of a random variable and applying Markov's inequality in order to prove tail bounds (the standard proof of the Chernoff bound is one example of this).
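Spelled out, the trick is just Markov's inequality applied to $e^{\lambda X}$: for any $\lambda > 0$,

```latex
\Pr[X \ge t] \;=\; \Pr\big[e^{\lambda X} \ge e^{\lambda t}\big]
\;\le\; e^{-\lambda t}\,\mathbb{E}\big[e^{\lambda X}\big],
```

and one then minimizes the right-hand side over $\lambda$. When $X$ is a sum of independent variables, the moment generating function factors into a product, which is exactly where Chernoff-type bounds come from.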

3 comments:

  1. "I've been preparing a lecture on the Johnson-Lindenstrauss Lemma"

    For what class?

    ReplyDelete
  2. For my computational geometry class actually.

    ReplyDelete
  3. There is also my "own" version of the class notes, which follows Matousek's presentation. See here. This also contains the alternative "shorter" proof.

    ReplyDelete
