Ironically enough, given the geometric nature of an LP, this engine has not been particularly useful for geometric approximation problems. The high-level reason (somewhat over-simplified) is that it is generally hard to encode the specifics of geometry in a linear program. What this means, for example, is that if you want to solve a geometric covering problem and can't encode the geometry in your LP, you're stuck with the log n approximation that combinatorial set cover gives you, in spite of the fact that there are numerous problems where exploiting the geometry improves the approximation bound considerably.
To put it mildly, this is a pain in the neck. It basically means that you have to design methods that work for a specific problem, and there are only a few general purpose techniques available, many of them outlined in this survey.
But if your problem can be framed as a covering/hitting problem, then there's a general body of work that has provided better and better results (and, as an added bonus, actually has an LP formulation). The story dates back to a paper on iterative reweighting by Clarkson, and actually goes back even earlier if you jump to the ML literature: Arora, Hazan and Kale have a nice survey on this general approach.
But I digress. The paper by Bronnimann and Goodrich put the technique in the language we're familiar with today. In short:
Given a range space that admits epsilon-nets of size (1/epsilon)f(1/epsilon), you can solve the hitting set problem to an approximation of f(OPT).
In particular, range spaces that admit epsilon-nets of size 1/epsilon admit constant-factor approximations for the hitting set problem. This led to a "formula" for approximating these problems: find an epsilon-net of an appropriate size, and declare victory. There's been much work on constructing epsilon-nets since then, partly motivated by this.
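To make the recipe concrete, here is a minimal Python sketch of the reweighting loop behind this approach. The helper build_net, the parameter names, and the cap on iterations are my own placeholders rather than anything from the Bronnimann-Goodrich paper; all the geometry lives inside build_net, which is assumed to return a weighted epsilon-net as a set of points.

def bg_hitting_set(points, ranges, k, build_net, max_iters=10000):
    """Iterative reweighting for hitting set, assuming OPT <= k (sketch)."""
    # points:  list of hashable ground-set elements
    # ranges:  list of sets of points (each range must be hit)
    # build_net(points, weight, eps): assumed helper returning a weighted
    #          eps-net of the range space as a Python set
    weight = {p: 1.0 for p in points}
    eps = 1.0 / (2.0 * k)
    for _ in range(max_iters):
        net = build_net(points, weight, eps)
        missed = next((r for r in ranges if not (r & net)), None)
        if missed is None:
            return net                     # the net hits every range
        # A missed range has weight less than eps times the total weight,
        # yet an optimal hitting set of size k must contain one of its
        # points, so doubling these weights makes measurable progress.
        for p in missed:
            weight[p] *= 2.0
    return None                            # give up: k was probably too small

One would run this with guesses k = 1, 2, 4, ... until a hitting set is returned; the size of the net that build_net produces is exactly what determines the approximation factor, which is why the hunt for small epsilon-nets matters.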
Another angle to this story was put forward by Clarkson and Varadarajan in their 2005 SoCG paper: the brief summary from that paper was:
Given a range space in which you can bound the complexity of the union of a random subcollection of size r by rf(r), you can find an epsilon net of size (1/epsilon)f(1/epsilon).
This approach gave another tool for finding good approximations for covering problems (and in particular gave a nice constant factor approximation for a thorny guarding problem).
At SoCG 2009, Kasturi Varadarajan presented an improved bound along these lines. His main result improves this transfer from union complexity to net size: modulo some technical details,
Given a range space in which you can bound the complexity of the union of a random subcollection of size r by rf(r), you can find an epsilon net of size (1/epsilon)log f(1/epsilon).
Although this doesn't improve results in the constant factor regime, where f() is a constant, it makes a significant difference when f() is non-constant.
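Putting the three statements above side by side (in rough O() notation, with the technical conditions suppressed):

\begin{align*}
\text{Bronnimann--Goodrich:}\quad & \epsilon\text{-nets of size } \tfrac{1}{\epsilon} f(\tfrac{1}{\epsilon}) \;\Longrightarrow\; f(\mathrm{OPT})\text{-approximate hitting sets},\\
\text{Clarkson--Varadarajan:}\quad & \text{union complexity } r f(r) \;\Longrightarrow\; \epsilon\text{-nets of size } O\!\left(\tfrac{1}{\epsilon} f(\tfrac{1}{\epsilon})\right),\\
\text{Varadarajan / Aronov--Ezra--Sharir:}\quad & \text{union complexity } r f(r) \;\Longrightarrow\; \epsilon\text{-nets of size } O\!\left(\tfrac{1}{\epsilon} \log f(\tfrac{1}{\epsilon})\right).
\end{align*}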
The interesting story here is that Kasturi's result was simultaneous with another paper by Aronov, Ezra, and Sharir that just appeared in STOC 2009. They get the same bound without the technical details that limit his result; however, their methods are quite different.
Returning to LPs, what I failed to mention is that the epsilon-net construction technique used by Bronnimann and Goodrich can be seen as a byproduct of an LP algorithm, as was shown by Even, Rawitz and Shahar, and by Philip Long before that (thanks, Sariel).
So what is your high-level take on why the epsilon-net techniques generally don't seem to generalize to weighted problems (e.g. weighted set cover), the way LP-based techniques usually do? I understand why the proofs break down, but I really don't have a feel for whether the weighted geometric problems are inherently harder than the unweighted ones, or whether it's just a limitation of the proof/analysis techniques.
"if you want to solve a geometric covering problem and can't encode the geometry in your LP, you're stuck with the log n approximation that combinatorial set cover gives you"
My favorite example is the rectangle covering problem (covering an orthogonal polygon with a minimum number of axis-parallel rectangles). The best known approximation factor is sqrt(log n), using a combinatorial approach. LP-based techniques don't seem to provide any new insights. I am lost :(
Suresh, your general comment that the LP formulation has to explicitly capture the geometry to be useful is not correct, in particular in the context of the covering problems that you discuss in the post. The paper of Even et al. that you mentioned towards the end basically shows that the eps-net based approximations can be equivalently viewed as LP-based algorithms. Therefore the plain vanilla set cover LP that does not explicitly encode any geometry does well! What this says is that worst-case examples for the standard set cover LP cannot arise in certain geometric settings. The SoCG'09 paper of Ken Clarkson, Sariel Har-Peled and myself shows how the LP can be used more explicitly in the multicover setting, where points may need to be covered multiple (and possibly different) times.
For the weighted case, we do not know if the LP is good or bad. No lower bounds on the integrality gap worse than those for the unweighted case are known. However, the upper bound techniques via eps-nets do not generalize to the weighted case directly (see the paper of Even et al. for a weak result). Sariel and I found one interesting case where the LP has an O(1) gap even in the weighted case: covering points by axis-aligned squares (or bounded aspect-ratio rectangles).
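For reference, the "plain vanilla" set cover LP being discussed here is just the standard relaxation below (my transcription); setting every cost c_S = 1 gives the unweighted version, and no geometry appears anywhere in it:

\begin{align*}
\min\ & \sum_{S \in \mathcal{F}} c_S\, x_S\\
\text{s.t.}\ & \sum_{S \ni p} x_S \ge 1 \quad \text{for every point } p,\\
& x_S \ge 0 \quad \text{for every } S \in \mathcal{F}.
\end{align*}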
Another example of the standard LP doing well in a geometric setting is the maximum independent set problem. The paper of Timothy Chan and Sariel Har-Peled gives algorithms for independent set of pseudo-disks in the weighted case using an LP relaxation. Their LP implicitly adds clique inequalities, which are quite useful.
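For intuition (this is my paraphrase, not necessarily the exact formulation in the Chan-Har-Peled paper), the natural LP relaxation for weighted independent set of regions D_1, ..., D_n with weights w_i is:

\begin{align*}
\max\ & \sum_i w_i\, x_i\\
\text{s.t.}\ & \sum_{i \,:\, p \in D_i} x_i \le 1 \quad \text{for every point } p \text{ of the arrangement},\\
& 0 \le x_i \le 1.
\end{align*}

Regions containing a common point pairwise intersect, so each point constraint is a clique inequality of the intersection graph; as I read the comment, that is the sense in which clique inequalities enter implicitly.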
Chandra, you're right: I was oversimplifying. Actually, I have a post queued up on the Chan-Har-Peled result, and I was going to mention your paper with Ken and Sariel as well.
The ERS paper is not the earliest reference. Philip Long made the same observation well before they did. The ref:
@inproceedings{l-upsda-01,
  author    = {P. M. Long},
  title     = {Using the Pseudo-Dimension to Analyze Approximation Algorithms for Integer Programming},
  booktitle = {Proc. Workshop on Algorithms and Data Structures (WADS)},
  series    = {Lecture Notes in Computer Science},
  volume    = {2125},
  year      = {2001},
  pages     = {26--37},
}
Just giving credit where it is due...
--Sariel
Can anyone please provide links or sources from which we can learn about epsilon-nets from the basics?