I was listening to a talk today where the speaker made the following statement: 'We know that this works well in theory, and now we'd like to see if it works well in practice'. It is a statement I have probably heard hundreds of times over the years, and it is in no way noteworthy or unusual.

However, it did get me thinking. It is by now a truism, one that no one will seriously dispute, that there is often a gap between the theoretical predictions offered by an algorithm analysis and the algorithm's behaviour in practice. Contrary to what many people would conclude, this is of course not a black mark on the value of theoretical work. The power of algorithm analysis, poly-time, and even the much maligned O() notation lies in the invariance of (classical) machine definitions under poly-time transformations, and in the powerful mathematical structure that complexity theory acquires as a result.

This structure comes with a tradeoff: we get mathematical elegance, but lose a certain modicum of "predictiveness" when it comes to analysing the behaviour of an algorithm "in practice". And I think herein lies the true value of algorithm engineering. If the algorithm that has better theoretical guarantees also does better in practice (and this happens more often than one might think), one's task is complete; if not, though, all is not lost. Often, peculiarities in the data cause certain algorithms to behave better than others, even though from an asymptotic perspective a different outcome would be expected. Sometimes, worst-case running time is not even the right measure: output-sensitivity in geometric algorithms is an example of a measure that captures desirable properties a blunt worst-case bound misses.
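To make the output-sensitivity point concrete, here is a sketch (mine, not from any specific source the post cites) of gift wrapping for the convex hull: it runs in O(nh) time, where h is the number of hull vertices, so when the hull is small it can beat an O(n log n) algorithm even though its worst case (h = n) is quadratic.

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; > 0 means b lies left of o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def jarvis_hull(points):
    """Gift wrapping (Jarvis march): convex hull in counterclockwise order.

    Runs in O(n*h) time -- output-sensitive in the hull size h."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    hull = []
    start = pts[0]                 # lexicographically smallest point is on the hull
    point = start
    while True:
        hull.append(point)
        candidate = None
        for p in pts:
            if p == point:
                continue
            if candidate is None:
                candidate = p
                continue
            turn = cross(point, candidate, p)
            # prefer p if it is clockwise of the current candidate,
            # or collinear with it but farther from the current point
            if turn < 0 or (turn == 0 and
                            (p[0] - point[0])**2 + (p[1] - point[1])**2 >
                            (candidate[0] - point[0])**2 + (candidate[1] - point[1])**2):
                candidate = p
        point = candidate
        if point == start:         # wrapped all the way around
            break
    return hull

# a square with an interior point: h = 4 regardless of how many inner points we add
print(jarvis_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```

When h stays small while n grows (points clustered inside a few extremes), the O(nh) bound is the one that predicts observed behaviour, not the O(n^2) worst case.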

But ultimately, the goal of algorithm engineering, or of good experimental algorithmic work, is to tease out the tradeoffs, parameters and special cases that govern which algorithm is the right one for a specific setting. If I am a practitioner, I am less interested in knowing which algorithm has the best asymptotic efficiency than in knowing which algorithm provides the best performance for my situation. This does not mean that we should throw proofs out of the window: it means that often a much more delicate analysis of an algorithm's behaviour is required to determine what works better.

I think I just wrote an ALENEX manifesto. Oh well :)