Tuesday, May 12, 2009

Joint distributions aren't that bad

I was reading Noam Nisan's exposition of correlated equilibria. It's interesting to note that finding a correlated equilibrium is computationally easy: you just have to solve a linear program whose variables are the probabilities of the underlying joint distribution over action profiles. This is in contrast to Nash equilibria.
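
To make this concrete, here is a minimal sketch of that LP for a two-player game, solved with scipy.optimize.linprog. The game (Chicken) and the welfare-maximizing objective are my own illustrative choices, not anything from Nisan's exposition: one LP variable per entry of the joint distribution, and one linear "no incentive to deviate" constraint per player per ordered pair of actions.

```python
# Sketch: a correlated equilibrium as an LP.
# The game (Chicken) and the objective are illustrative choices.
import numpy as np
from scipy.optimize import linprog

# Payoff matrices; strategies are 0 = Dare, 1 = Chicken.
A = np.array([[0.0, 7.0],
              [2.0, 6.0]])   # row player
B = np.array([[0.0, 2.0],
              [7.0, 6.0]])   # column player
n, m = A.shape

# LP variables: p[i, j] = probability the mediator recommends (i, j),
# flattened row-major into a vector of length n * m.
def idx(i, j):
    return i * m + j

ineqs = []  # rows of A_ub, with A_ub @ p <= 0
# Row player: switching from recommended i to i2 must not help:
# sum_j p[i, j] * (A[i2, j] - A[i, j]) <= 0 for all i2 != i.
for i in range(n):
    for i2 in range(n):
        if i2 != i:
            row = np.zeros(n * m)
            for j in range(m):
                row[idx(i, j)] = A[i2, j] - A[i, j]
            ineqs.append(row)
# Column player: the symmetric constraints on recommended columns.
for j in range(m):
    for j2 in range(m):
        if j2 != j:
            row = np.zeros(n * m)
            for i in range(n):
                row[idx(i, j)] = B[i, j2] - B[i, j]
            ineqs.append(row)

res = linprog(
    c=-(A + B).ravel(),              # maximize expected total payoff
    A_ub=np.array(ineqs), b_ub=np.zeros(len(ineqs)),
    A_eq=np.ones((1, n * m)), b_eq=np.array([1.0]),  # sums to 1
    bounds=[(0, 1)] * (n * m),
)
print(res.x.reshape(n, m))           # a correlated equilibrium
```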

What struck me is that this is a good example of a setting where a joint distribution isn't that bad. In machine learning especially, trying to "learn" a joint distribution is hard, first because the number of parameters blows up exponentially in the number of variables, and second because of the nasty correlations one has to account for. In fact, variational methods often use the trick of replacing a joint distribution by the "closest" product distribution and optimizing over that instead.
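
As a toy illustration of that trick (my own sketch, nothing from the post): the standard mean-field coordinate-ascent updates for approximating a fixed joint distribution p(x, y) over two binary variables by the closest product distribution q(x)q(y).

```python
# Sketch: mean-field approximation of a correlated joint p(x, y)
# by a product distribution qx(x) * qy(y). The joint p is made up.
import numpy as np

p = np.array([[0.40, 0.10],    # strongly correlated joint p(x, y)
              [0.10, 0.40]])
logp = np.log(p)

qx = np.array([0.5, 0.5])      # initial marginals
qy = np.array([0.6, 0.4])

for _ in range(50):
    # Mean-field updates: qx(x) is proportional to
    # exp(E_{qy}[log p(x, y)]), and symmetrically for qy;
    # each update cannot increase KL(q || p).
    qx = np.exp(logp @ qy);   qx /= qx.sum()
    qy = np.exp(logp.T @ qx); qy /= qy.sum()

print(np.outer(qx, qy))  # near-uniform: the x-y correlation is lost
```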

Here though, "replacing by a product distribution" takes you from a correlated equilibrium problem to a Nash equilibrium problem: the individual players' probabilities now multiply, so the linear incentive constraints become nonlinear, and the resulting problem is PPAD-complete.
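
A small follow-on sketch (again my own illustration): under the product restriction p[i, j] = x[i] * y[j], the deviation constraints become bilinear in (x, y). Checking a candidate pair is still easy; it's finding one that is PPAD-complete.

```python
# Sketch: verifying (not finding) a Nash equilibrium is easy.
import numpy as np

def nash_gap(A, B, x, y):
    """Most either player can gain by unilaterally deviating."""
    gap_row = A.dot(y).max() - x.dot(A).dot(y)  # row player's regret
    gap_col = x.dot(B).max() - x.dot(B).dot(y)  # column's regret
    return max(gap_row, gap_col)

A = np.array([[0.0, 7.0], [2.0, 6.0]])  # Chicken again
B = np.array([[0.0, 2.0], [7.0, 6.0]])
x = np.array([1/3, 2/3])                # the mixed Nash of Chicken
y = np.array([1/3, 2/3])
print(nash_gap(A, B, x, y))             # ~0: no profitable deviation
```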

1 comment:

  1. It is curious. There are other examples--for instance, single- versus multi-prover proof systems in complexity theory. Having multiple non-communicating provers can be seen as imposing a product-distribution or other factorization-type constraint on some variables, but this tends to make the problem (of determining the value of a given protocol) harder rather than easier.
