Two recent articles of his are exemplars:
1. Sorting out the genome, from September-October 2007. Here's a key extract:
Given two arrangements of a set of genes—say a b c d e f g and f e b a g c d—how do you determine what sequence of reversals produced the transformation? This example has a three-step solution, which you can probably find in a few minutes with pencil and paper. For larger genomes and longer chains of reversals, however, trial-and-error methods soon falter. Is there an efficient algorithm for identifying a sequence of reversals that converts one permutation into another?

The article proceeds to lay out the entire history of sorting with reversals, pancake sorting, and related problems. It's a great case study of algorithms research in an application domain: how to model a problem, come up with algorithms, discover that the model doesn't quite capture what you want, rethink the model, come up with more algorithms, and so on. It touches on dynamic programming, greedy algorithms, NP-hardness, and approximations. I liked it so much I'll be devoting a lecture in my algorithms class to it, to illustrate how algorithms research works "in the real world".
The genetic reversal problem lies at the intersection of biology, mathematics and computer science. For some time, the prospects for finding a simple and efficient solution seemed dim, even with the most powerful tools of all three disciplines. But the story has a happy ending. A little more than a decade ago, computing gene reversals was still a subtle research problem; now it can be done with such ease that it's a matter of routine technology. If you need to know the "reversal distance" between two genomes, you can go to a Web site and get the answer in seconds.
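To get a feel for the problem, here's a small brute-force sketch (my own Python, not code from the article) that finds a shortest sequence of reversals by breadth-first search over permutations. It confirms the three-step solution for the seven-gene example in the extract, but the state space grows factorially, so, exactly as the extract warns, trial-and-error of this sort soon falters on larger genomes.

```python
from collections import deque

def shortest_reversal_sequence(start, goal):
    """Breadth-first search for a shortest sequence of reversals turning
    `start` into `goal`. Exhaustive, so only viable for tiny genomes:
    the state space has n! permutations."""
    start, goal = tuple(start), tuple(goal)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        perm, path = frontier.popleft()
        if perm == goal:
            return path
        n = len(perm)
        for i in range(n):
            for j in range(i + 2, n + 1):
                # Reverse the segment perm[i:j].
                child = perm[:i] + perm[i:j][::-1] + perm[j:]
                if child not in seen:
                    seen.add(child)
                    frontier.append((child, path + [(i, j)]))
    return None  # never reached: any permutation can be sorted by reversals

# The seven-gene example from the extract: a three-step solution exists.
steps = shortest_reversal_sequence("abcdefg", "febagcd")
print(len(steps), "reversals:", steps)   # prints 3 and the reversed segments
```

A breakpoint-counting argument shows three reversals is also a lower bound for this instance, so the search really does return an optimal answer here; the article's history of the problem turns on exactly this kind of bound.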
2. Conquering divide, bit-player, October 9th, 2007.
The "hook" for this article is is Eric Allender's review article on the true complexity of division (short summary: Division is complete for DLOGTIME-uniform TC^0), which is a must-read in its own right. What Hayes discusses in his article is the reason to even ask this question, starting with the general lack of symmetry between grade-school methods for multiplication and division. I found this particularly insightful:
Why is division so hard?
At times I feel this is a dumb question, and that the universe doesn’t owe us an answer to such questions. Some things are just harder than others; that’s the way the world works. We can count ourselves lucky that multiplication has a polynomial-time algorithm; we have no right to expect that our luck will hold when we turn to division.
But then I get to thinking about physical systems that multiply and divide, where the two operations seem perfectly symmetrical. An example is an electrical circuit governed by Ohm’s law.
If we attach the terminals of a voltmeter to opposite ends of a resistor, we observe Ohm’s law in the form E = IR: Voltage is equal to the product of current and resistance. If we install an ammeter in series with the resistor, we see the effects of the law I = E/R: Current is equal to voltage divided by resistance. If you prefer pipes to wires, you can do the hydraulic version of the experiment, and there are also lots of mechanical schemes, where the same principle is illustrated with levers, gears, pulleys and such. In all of these systems, nature seems to divide just as easily as it multiplies. Why can’t we do the same with pencil and paper, or transistors, or neurons?
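To make the asymmetry of the grade-school methods concrete, here is a rough Python sketch (my illustration, not Hayes's or Allender's). In schoolbook multiplication the partial products don't depend on one another, which is one intuition for why multiplication parallelizes so well; in long division, each quotient digit needs the remainder left behind by the previous step, so the naive method is stubbornly sequential.

```python
def schoolbook_multiply(a, b):
    """Grade-school multiplication: each partial product a * digit * 10**shift
    is independent of the others, so they could all be computed in parallel."""
    total = 0
    for shift, digit in enumerate(reversed(str(b))):
        total += a * int(digit) * 10**shift  # no dependence between terms
    return total

def schoolbook_divide(n, d):
    """Grade-school long division: each quotient digit depends on the
    remainder carried over from the previous step -- an inherently
    sequential chain, unlike the independent partial products above."""
    quotient, remainder = 0, 0
    for digit in str(n):
        remainder = remainder * 10 + int(digit)  # bring down the next digit
        q = 0
        while remainder >= d:  # trial subtraction finds the next digit
            remainder -= d
            q += 1
        quotient = quotient * 10 + q
    return quotient, remainder

assert schoolbook_multiply(127, 86) == 127 * 86
assert schoolbook_divide(10922, 86) == (10922 // 86, 10922 % 86)
```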
You need to keep in mind consumption of energy. Nature's algorithm, as you describe it, uses an amount of energy that is linear in the magnitude of the denominator to obtain a fixed number of digits in the answer.

In contrast, we can easily divide using algorithms whose running time is polylog(denominator).
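For what it's worth, here is a minimal Python sketch of one such algorithm, Newton–Raphson reciprocal (a standard technique, named here as my own illustration rather than taken from the comment): it computes a/b using only multiplication and subtraction, and since the error squares at every step, the number of iterations grows only with the logarithm of the desired precision.

```python
def newton_divide(a, b, iterations=6):
    """Approximate a / b for a positive integer b using Newton's method,
    whose update x <- x * (2 - b*x) needs no division at all.
    Convergence is quadratic: the correct digits double each iteration,
    so ~6 iterations already exceed double-precision accuracy."""
    x = 2.0 ** -b.bit_length()   # initial guess within a factor of 2 of 1/b
    for _ in range(iterations):
        x = x * (2.0 - b * x)    # only multiply and subtract
    return a * x

print(newton_divide(355, 113))   # ~3.14159..., close to 355/113
```

The same idea, carried out in fixed-point integer arithmetic with doubling precision, is how big-number libraries reduce division to multiplication, which is why division inherits multiplication's fast running time up to logarithmic factors.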