Much of my time is spent on the interface between algorithms research and other areas, in the dreaded trenches of 'algorithmic modelling'. Algorithmic modelling can be a tremendously useful tool if wielded skillfully, and in the few cases where one hits the jackpot, it provides an instant argument for the value of theoryCS outside computer science. Most importantly, it provides a rationale for the intrinsic value of the algorithmic method: thinking algorithmically leads to new ways of framing computational problems, and hooks problems in other domains into the engine of algorithmic improvements, so that better algorithm design leads quickly to better solutions to "real-world" problems.
Such work, however, can be an endless source of frustration. Most commonly, there are cultural differences between areas, and terminology barriers that need to be overcome. But above all, there's a sense of apathy towards any effort at producing formal guarantees for computational methods (whether on running time or on solution quality). In general, methods based on formal algorithmic analysis must still prove themselves in the experimental setting, rather than possessing any intrinsic value by virtue of being provably correct or having running-time and quality guarantees.
This in itself can be defended: if the main bottlenecks in providing an effective practical solution to a problem are not computational (though they often are), then it's not clear that a practitioner needs to care about esoteric notions like formal guarantees (which can often be quite weak). What is harder to surmount is the 'I've dug my hole and I like it' phenomenon.
For example, the RMS (root-mean-square) distance is commonly used to compare two 3D protein models. But I've never heard a fundamental argument for why this measure is superior to other possible candidates for matching shape. What's worse, the RMS measure is harder to minimize (under transformations) than other, equally plausible measures of shape similarity. Yet, as the status quo, it's hard to dislodge, short of voluminous studies arguing for the relative merits of a different measure based on comparisons across shapes.
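To make the status quo concrete: when the two models come as corresponded 3D point sets (as they typically do when comparing two models of the same protein backbone), minimizing the RMS distance over rigid motions has a well-known closed form via the Kabsch algorithm. Here is a minimal sketch under that assumption; the function name and interface are mine, not from any particular package.

```python
import numpy as np

def rms_distance(P, Q):
    """RMS distance between two corresponded (n, 3) point sets,
    minimized over rigid motions via the Kabsch algorithm."""
    # Remove the optimal translation by centering both point sets.
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    # The optimal rotation comes from the SVD of the 3x3 cross-covariance.
    U, _, Vt = np.linalg.svd(P.T @ Q)
    # Flip one singular direction if needed so R is a proper rotation
    # (determinant +1) rather than a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # RMS of the residuals after rotating P onto Q.
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))
```

Note that this closed form covers only the corresponded, rigid-motion setting; the difficulty alluded to above arises in broader settings, such as unknown correspondences or richer transformation groups.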
Lest it seem that I'm merely whining, Barbie-style, because 'Modelling is hard', let me summarize my main point:
The computational efficacy and guarantees associated with a particular modelling technique are not in and of themselves sufficient to convince a practitioner that this approach is valid, even if, in all other respects, the method is identical to other, more entrenched methods.
This makes me wonder about the best way to do algorithmic modelling, if one chooses to do it at all.
Your main point --
"The computational efficacy and guarantees associated with a particular modelling technique are not in and of themselves sufficient to convince a practitioner that this approach is valid, even if, in all other respects, the method is identical to other, more entrenched methods."
I think your assessment of how things work holds true. (Although, honestly, I don't think "valid" is the word you're looking for; perhaps "worthwhile".) I wonder, however, why you are surprised? Usually these other fields have a huge "sunk cost" in the current framework, possibly including personalities who have personal reasons for promoting that framework. If we can't show them something that gives demonstrably better results in practice, it's not clear why they should change.
I'm guessing your response is that getting better results may be a multi-step process: first you have to develop the right models, and only then do you eventually end up with better answers. But I imagine it's hard to make a large commitment (switching modelling frameworks) without clear evidence of some immediately realizable gain.
In some ways, it sounds a lot like an economics dictum I've heard. Making a new product that's 10% better is unlikely to be enough to enter and change the market when there are well-known, entrenched players. You need products that are at least twice as good to break into such a market.
All of this is absolutely true. The 'game changers' in algorithmic modelling are where the impact is, rather than in the slow accretion of sound methods. Which is why, over time, I've moved away from simple modelling tasks.