We're always searching for new ways to communicate cool ideas in theoryCS to the outside world, and the list gets more and more esoteric each time one thinks about it. But in my interactions with people generally unfamiliar with algorithms, and with theoretical computer science in general, I'm constantly amazed at how counter-intuitive the very notion of an approximation appears to be.
A typical conversation goes something like this:
Person: Oh this problem is NP-hard so we ran this heuristic
Me: Have you tried approximating it?
P: What do you mean? Our heuristic seems to be reasonable.
M: I mean an algorithm that guarantees that it'll get close to optimal.
P: But how can you know that? It's hard to find the optimal solution, global minimum, yadda yadda yadda...
M: We can often prove an answer is close to OPT without knowing OPT itself.
P: [total silence]
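To make the exchange concrete, here's a minimal sketch (my own example, not from the conversation) of proving an answer is within a factor of 2 of OPT without ever computing OPT, using vertex cover. The matching we build is itself the certificate:

```python
# Sketch: certifying a 2-approximation for vertex cover
# without ever computing OPT.

def vertex_cover_2approx(edges):
    """Greedy maximal matching; its endpoints form a vertex cover.

    Any cover must use at least one distinct vertex per matching edge,
    so |matching| <= OPT, while our cover has exactly 2*|matching|
    vertices.  Hence |cover| <= 2 * OPT, proved without knowing OPT.
    """
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    cover = matched                 # endpoints of the matching
    lower_bound = len(matching)     # <= OPT
    return cover, lower_bound

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
cover, lb = vertex_cover_2approx(edges)
print(len(cover), ">= OPT >=", lb)  # cover size is within 2x of OPT
```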
This kind of thing happens often in research conversations, as well as in the classroom (where it's less surprising, I guess). In fact, one person was so amazed by the idea that even computing an approximation could itself be NP-hard that we spent a good semester plowing through the PCP theorem (or at least parts of it).
I'm not dissing heuristics: it's often the case that either the approximation guarantees one can show are too weak, or the claimed bounds don't match the empirical performance. But I do think it's important to at least ask whether provable approximations exist, much in the way people in CS are now reasonably comfortable checking (or asking the resident theoretician!) whether a problem is NP-hard.
But you can't ask till you know. This is one of the main reasons I teach a smattering of approximation algorithms in my graduate algorithms class (and why I teach some randomization as well). The concept of an approximation (especially when you can't find the actual answer) is a surprisingly slippery conceptual leap, and it requires mental agility that goes beyond the standard "algorithms toolkit" that focuses on questions of performance.
It's very useful to give a concrete approach. The most typical is to analyze the optimal solution and tease out a lower bound. In the case of metric TSP, this lower bound is very intuitive, and the factor-2 algorithm can be explained to someone with no background in just a couple of minutes, with no paper or whiteboard (and people are happy to see that elementary tools like DFS and MST go so far).
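To spell that argument out, here's a minimal self-contained sketch (my own code, not the commenter's) of the MST-shortcutting 2-approximation: the MST weighs at most OPT (delete any one tour edge to get a spanning tree), and shortcutting a DFS traversal of the MST at most doubles it, by the triangle inequality:

```python
import math

def tsp_2approx(points):
    """Metric TSP tour of cost <= 2 * OPT for points in the plane."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])

    # Prim's algorithm for the MST rooted at vertex 0.
    parent = [0] * n
    best = [d(0, j) for j in range(n)]
    in_tree = [False] * n
    in_tree[0] = True
    children = [[] for _ in range(n)]
    for _ in range(n - 1):
        u = min((j for j in range(n) if not in_tree[j]), key=lambda j: best[j])
        in_tree[u] = True
        children[parent[u]].append(u)
        for j in range(n):
            if not in_tree[j] and d(u, j) < best[j]:
                best[j], parent[j] = d(u, j), u

    # DFS preorder of the MST; skipping repeated vertices is the
    # "shortcutting" step, which never increases cost in a metric.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

print(tsp_2approx([(0, 0), (0, 1), (1, 1), (1, 0)]))
```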
What's even nicer is that, if the person seems interested, giving the 1.5-factor solution is hardly more work.
AND THEN you can even very easily talk about how dropping the metric property destroys everything, since an approximation for the general case would give you a Hamiltonian path oracle.
The first time someone was confused by approximation algorithms, this is what I went through, and they seemed very happy.
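For that last claim, the reduction can be sketched concretely (my illustration; the weights and threshold are the standard textbook choice). An α-approximation for general TSP would decide Hamiltonicity exactly:

```python
# Sketch: why general (non-metric) TSP has no constant-factor
# approximation unless P = NP.

def reduce_ham_cycle_to_tsp(n, graph_edges, alpha):
    """Weights: 1 on edges of G, alpha*n + 1 on non-edges.

    If G has a Hamiltonian cycle, OPT = n and an alpha-approximation
    must return a tour of cost <= alpha * n.  If not, every tour uses
    at least one non-edge and costs > alpha * n.  So comparing the
    returned cost against alpha * n decides Hamiltonicity.
    """
    heavy = alpha * n + 1
    weights = {}
    for i in range(n):
        for j in range(i + 1, n):
            is_edge = (i, j) in graph_edges or (j, i) in graph_edges
            weights[(i, j)] = 1 if is_edge else heavy
    return weights, alpha * n   # TSP instance + decision threshold
```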
ALSO, it is fun to talk about approximation algorithms for problems which are NOT NP-hard. For instance, convex solvers only guarantee getting ε-close to the optimum.
I trailed off at the end there, sorry; most people ignore the log(1/ε) factor in convex optimization, so it's fun to point out. But that wasn't an example of what I meant by the ALSO.
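For readers who haven't seen it, the log(1/ε) remark refers to linearly convergent methods on strongly convex objectives; a minimal toy sketch (my own example) on a one-dimensional quadratic:

```python
# Sketch of the log(1/eps) phenomenon: gradient descent on a strongly
# convex quadratic converges linearly, so reaching error eps takes
# O(log(1/eps)) iterations.

def gd_iterations(a, b, eps, x0=0.0):
    """Minimize f(x) = a/2 * (x - b)^2 with step size 1/(2a).

    Each step halves the distance x - b to the optimum, so the
    iteration count grows like log(1/eps)."""
    x, steps = x0, 0
    while a / 2 * (x - b) ** 2 > eps:
        x -= (1 / (2 * a)) * a * (x - b)   # gradient step
        steps += 1
    return steps

for eps in (1e-2, 1e-4, 1e-8):
    print(eps, gd_iterations(1.0, 1.0, eps))  # steps scale with log(1/eps)
```

Squaring the target accuracy only adds a constant number of iterations, which is exactly why the log(1/ε) term tends to get ignored in practice.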
Person: Oh this problem is NP-hard so we ran this heuristic
Me: Have you tried approximating it?
P: Sure, there's an old linear-time PTAS. If we mentioned it, we couldn't have written papers about the heuristic.
Results are nothing, technique is everything.
I don't think the notion of an approximation is CS-theory specific. The concept of showing upper and lower bounds exists in other fields too, and if these bounds are close, then we have an approximation. (Although it's possible that outside CS, people don't usually care much about approximations that are as bad as an arbitrary constant factor (or even, say, O(\sqrt{n})) away from what we are looking for!)
I agree that the concept of an approximation algorithm is very important, and should be taught as part of standard theory. (I have 1+ lecture on approximation algorithms, and 1 lecture on heuristic algorithms, in my undergraduate algorithms class.) The DFS-MST algorithm for TSP is one of my standard examples, and it is really nice to teach. Students should understand that the question "Is there a good approximation algorithm?" should be one of their standard responses to "This problem is NP-hard."
In practice, my experience is that most people don't care about worst-case guarantees; they care about performance on their instances. (The real problem with most approximation algorithms is not that the guarantees are too weak or that claimed bounds don't match empirical performance; it's that they suck compared even to easy heuristics, precisely because they're designed for the worst case. A general exception is greedy algorithms, where the bounds don't generally match actual performance, but greedy algorithms often provide a reasonable baseline.) So it's no wonder that approximations aren't part of the usual conversation.
There is an entire field outside theoretical computer science called "approximation theory". And yes, they care very much about worst-case bounds. And yes, they approximate functions that are hard to evaluate or for which no closed form is known.
Similarly, in numerical analysis, they have a priori bounds.
Of course, these bounds are not "computational" bounds, but the idea is the same.
However, there are many, many, many very important and practical problems where this bound-based research is just not very productive. What people care about is what happens on "real" data. It is very difficult to model this "real" data in a way that will allow theoretical investigation.
This is not to say that theoretical models are not useful, but they have to relate to your actual experiments.
I think we're lucky that this useful notion of approximation is as straightforward as it is; other areas have it much worse.
For example, I remember a Scientific American puzzle column by Dennis Shasha that made an audacious attempt to introduce readers to the concept of regret ratios (a widely used, if not uncontroversial, notion of approximation for online algorithms):
http://tinyurl.com/628n8r
A good attempt (and a neat area for puzzles), but how many readers would've absorbed the idea?
We do worst-case complexity analysis because experience shows that the worst-case running time of an algorithm is in most cases a good proxy for its actual running time. In the case of approximation algorithms, worst-case analysis is generally a substantial overestimation of average-case performance, so if we hope to get applied people to use approximation-algorithm results, we need to give them a better measure. For example, we can try to characterize exactly where the heuristic fails and parameterize on that, e.g., "works best on graphs of low degree." That way, if the user has reason to believe their graphs have low degree in practice, they get a much better picture of actual performance.
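As a toy illustration of that kind of parameterization (my example, not the commenter's): greedy independent set carries a guarantee that is already meaningful whenever the maximum degree is small:

```python
# Sketch: a degree-parameterized guarantee.  Greedy independent set
# returns at least n / (max_degree + 1) vertices, so the worst-case
# bound is informative exactly on low-degree graphs.

def greedy_independent_set(adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    alive = set(adj)
    chosen = []
    while alive:
        # Pick a vertex of lowest remaining degree.
        v = min(alive, key=lambda u: len(adj[u] & alive))
        chosen.append(v)
        # Each pick removes at most max_degree + 1 vertices,
        # so len(chosen) >= n / (max_degree + 1) at the end.
        alive -= adj[v] | {v}
    return chosen

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_independent_set(adj))
```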
Andy: I've learnt 'regret ratio' as 'competitive ratio', and in fact I've used it (in the classic ski vs. rent formulation) to explain approximations as well. The difference, which in my mind almost makes it easier, is that you can imagine the oracle as someone who can see the future, and somehow that seems more palatable.
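For readers who haven't seen the ski vs. rent argument, here's a minimal sketch (my own example): the break-even strategy pays at most 2 − rent/buy times what the future-seeing oracle pays, no matter how long the season lasts:

```python
# Sketch of the ski-rental competitive ratio: rent until the rent paid
# would reach the purchase price, then buy.

def breakeven_cost(days, rent=1, buy=10):
    """Rent while total rent stays below the purchase price, then buy."""
    threshold = buy // rent              # day on which we buy
    if days < threshold:
        return days * rent               # never bought; matches OPT
    return (threshold - 1) * rent + buy  # rented, then bought

def opt_cost(days, rent=1, buy=10):
    """The clairvoyant oracle knows the season length in advance."""
    return min(days * rent, buy)

worst = max(breakeven_cost(d) / opt_cost(d) for d in range(1, 100))
print(worst)   # 1.9, i.e. the 2 - rent/buy bound for rent=1, buy=10
```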
Daniel/Alex: I think what I wanted to point out (which Michael cottoned on to) was not so much that "everyone should use approximations", which I definitely don't believe, for the reasons you both outline. What is more important is the very idea that an approximation is in fact possible at all.
P.S. Daniel, I've actually talked with mathematicians who get hung up on this whole 'global optimum' thing and don't register the possibility of approximations. I'm aware of approximation theory in general, but it seems to me that there's something about the adversarial nature of approximation ratios that's quintessentially a CS notion (and don't get me started on PCPs).
"but it seems to me that there's something about the adversarial nature of approximation ratios that's quintessentially a CS notion"
Funny you should mention that, as I was just thinking that a problem with our approach to heuristics in TCS is that we are over-trained in adversarial arguments.
If someone comes up with a heuristic, we can almost instinctively find the case where it will fail. What if we also equipped our students, through the curriculum, with techniques that would let them just as instinctively find a subset of the inputs on which the heuristic is optimal?