Comments on The Geomblog: No Free Lunch Theorems
Suresh Venkatasubramanian (http://www.blogger.com/profile/15898357513326041822)

Yaroslav Bulatov (2005-11-25, 19:01):

"because they are extremely general hammers that typically don't have provable performance or running time guarantees"

Actually, the neural network community has done quite a bit of interesting research into performance guarantees. For instance:

- P. L. Bartlett, "For valid generalization, the size of the weights is more important than the size of the network", in which the author gives bounds on the misclassification rate in terms of the size of the weights.
- Work by the school led by Watanabe, studying how the algebraic geometry of the set of best-fitting models (which has dimension greater than zero when the model is non-identifiable) relates to the statistical efficiency of estimators, in particular for neural networks.
- Work by the school of Amari, studying the differential geometry of neural networks as well as the statistical aspects of neural network estimators.
- John Moody's 1991 "Generalized Prediction Error" penalty, which generalizes AIC to non-linear models, in particular neural networks.

Neural networks are often used as non-parametric estimators, i.e. "general hammers", but they can also be designed for a particular task. Check out Yann LeCun's website for some impressive applications; in particular http://www.cs.nyu.edu/~yann/research/dave/index.html
(http://yaroslavvb.blogspot.com)

Alex Lopez-Ortiz (2005-09-03, 15:56):

"The problem I have with this is that I can think of far more places where human creativity trumps what a computer has been trained to do. This is not a knock on AI per se."

For any given problem, my money would first be on formal analysis yielding an efficient solution (that's why I'm a theoretician). If, on the other hand, we have good evidence that finding a solution analytically is very difficult, and yet a good feasible solution is needed regardless, then I'd place my money on a computer search.

Depending on the problem, the search heuristic would differ. GAs excel on problems where the search space is not isomorphic to a subset of R^n; the fact that non-standard combination operators are an integral part of the heuristic makes GAs better suited to these types of problems.

For other problems, other heuristics are better.

"I don't view the NFL theorems as negative results; more as warnings about merchants bearing snake oil. In that respect, they are useful."

Sure, there were (are?) some over-enthusiastic proponents of GAs, and the NFL theorems help to keep things in perspective.
Suresh (2005-09-03, 10:54):

I don't view the NFL theorems as negative results; more as warnings about merchants bearing snake oil. In that respect, they are useful.

Passerby (2005-09-03, 10:49):

The moral of the NFL story is that we do not live in a world where all functions are equally likely or equally interesting. No Free Lunch states that in the space of all possible functions, for every one on which an algorithm works there will be one on which it doesn't. For instance, for every function with a gradient that leads to an optimum, there will be one whose gradient is deceptive and leads away from the optimum. Thus a gradient-descent algorithm will perform, on average, no better than a gradient-opposing one.

The argument relies on the presumption that we are talking about all possible functions. In reality, though, most possible functions would be considered "random" (a can of worms, yes, but appropriate here). I have a friend who likes to say that if you took the number of functions that have been studied by humans and divided it by the number of possible functions, the result would be, essentially, zero.

It seems to me that there is something different about the subset we do work with. One clue is that most of those functions are encoded as formulas rather than lookup tables. In other words, they are orderly, compressible, simple. A function that can only be described by a lookup table is a random one, by several definitions.
Even when we are working with raw data, we are almost always searching for the compressible form, the formula that summarizes the underlying function.

So ... the No Free Lunch theorems apply in a kind of dreamworld, the Land of Infinite Possibilities, and not in the world of real working mathematics, where systems cohere in an orderly way. In the years since their publication, it has seemed to me that the great controversy blew up, blew around a little bit, and then blew away. NFL is not a real challenge to anybody.

Suresh (2005-09-03, 09:33):

"The point is that in some instances the formal study of the problem is more difficult than having the computer automatically search for a solution."

I think this is the main point. In other words, the GA search strategy is a "conjecture-generating device", used to generate ideas where humans have missed out.

The problem I have with this is that I can think of far more places where human creativity trumps what a computer has been trained to do. This is not a knock on AI per se. My point is merely that in all kinds of "pattern matching" tasks that require one to make creative leaps, it has been hard to get a computer to do it independently.

So to me, as a conjecture-generating machine, the GA doesn't have a good cost-benefit ratio; and as an engineering heuristic, it leaves me, the algorithm designer, in the dark about what's really going on, with no ability to explain its behaviour, or even to predict what kinds of problems might be amenable to this kind of approach (local search and PLS, maybe?).
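Passerby's averaging argument can in fact be checked exhaustively on a tiny domain. The sketch below (an illustration of my own; the names and the choice of domain are invented) enumerates every function from a 4-point space to {0, 1, 2} as a lookup table, and shows that every fixed, non-repeating search order attains exactly the same average best-value-found:

```python
from itertools import product, permutations

X = tuple(range(4))                         # a 4-point search space
FUNCS = list(product(range(3), repeat=4))   # all 81 functions f: X -> {0, 1, 2}, as lookup tables

def best_after_k(f, order, k):
    """Best objective value seen in the first k evaluations of a fixed visit order."""
    return max(f[x] for x in order[:k])

# Average, over ALL functions, of each non-adaptive search order's performance:
averages = {order: sum(best_after_k(f, order, 2) for f in FUNCS) / len(FUNCS)
            for order in permutations(X)}
print(set(averages.values()))               # a single value: no visit order beats another
```

This only covers fixed visit orders; the NFL theorems also cover adaptive algorithms, but the same relabeling symmetry over the set of all functions drives the general result.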
Alex Lopez-Ortiz (2005-09-03, 09:14):

The point is that in some instances the formal study of the problem is more difficult than having the computer automatically search for a solution. A good example is the evaluation function of Deep Thought/Deep Blue. AI chess researchers had spent an enormous amount of time trying to understand the ins and outs of evaluation functions. In contrast, the Deep Blue team (which had relatively little knowledge of chess or AI) created a very good evaluation function using a GA search technique. (The function was later refined further by hand, through the efforts of Joel Benjamin.)

Another example is the design of an amplifier circuit. This is a well-studied yet quite difficult problem; Koza's team uncovered a new design that was in many ways superior to existing designs.

In both cases the parallel GA search consumed several years of CPU time.

Suresh (2005-09-02, 14:10):

I guess what irks me is the claim to 'generality', and the use of these methods as hammers. The way I see it, this is a catch-22. Either you believe that GAs are general and require no problem-specific tuning, in which case the NFL theorems apply; or you believe that specific fine-tuning of a GA is required to obtain local/global maxima, in which case why on earth wouldn't you spend the same time trying to understand the problem formally?

TSP (and I should have linked the paper) is a good example of this.
A GA-like algorithm is one of the best practical heuristics for solving TSP instances, but this has come after a wealth of understanding of the problem; in fact, even to establish the quality of the GA-based heuristic, we need a better formal understanding of TSP lower bounds.

Alex Lopez-Ortiz (2005-09-01, 21:24):

Isn't "blind search" a bit of a misnomer (**)?

Usually the combination operators and fitness function in GAs are highly fine-tuned to the specific problem being solved. While it is true that such fine-tuning generally never explicitly involves the first derivative, in some (many?) cases it implicitly encodes such information.

For example, in one specific (and rather successful) application that I have in mind, one of the combination operators was an exhaustive local search of the neighbourhood of the current solution, with the local maximum in this neighbourhood being the sole survivor (first derivative, anyone?).

One of John Koza's students once told me that in their Stanford group they used to spend a month or two fine-tuning the combination operators and fitness function before unleashing the search. The hardest part was doing so without biasing the shape of the solution. In other words, one wants combination operators that preserve whatever is good about the current solution, rather than combination operators that will make the current solution look the way you believe good solutions ought to look. [Koza's group has the distinction of having filed and obtained a patent for a device discovered by a GA.]

(**) Admittedly, the "blind search" moniker was advanced by some over-enthusiastic GA researchers.
In principle this would be a "nice" property of GAs, because if true it would obviate the need to program a refined search algorithm.
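The combination operator Alex Lopez-Ortiz describes, an exhaustive local search of the current solution's neighbourhood in which the local maximum is the sole survivor, can be sketched as a toy GA over bit strings. This is an invented illustration (the objective function, parameters, and names are my own, not the Stanford group's actual setup):

```python
import random

def fitness(bits):
    """Toy objective over bit tuples: number of adjacent equal pairs."""
    return sum(1 for a, b in zip(bits, bits[1:]) if a == b)

def local_search(bits):
    """The combination operator described above: exhaustively search the
    Hamming-distance-1 neighbourhood; the local maximum is the sole survivor."""
    best, improved = bits, True
    while improved:
        improved = False
        for i in range(len(best)):
            cand = best[:i] + (1 - best[i],) + best[i + 1:]
            if fitness(cand) > fitness(best):
                best, improved = cand, True
                break
    return best

def ga(n_bits=12, pop_size=20, generations=25, seed=1):
    rng = random.Random(seed)
    pop = [tuple(rng.randrange(2) for _ in range(n_bits)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        for _ in range(pop_size):
            a, b = rng.choice(parents), rng.choice(parents)
            cut = rng.randrange(1, n_bits)          # one-point crossover
            children.append(local_search(a[:cut] + b[cut:]))
        pop = children
    return max(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

Because every child passes through local_search, each member of the final population is a local optimum of the objective; crossover is what lets the search jump between basins, which is exactly why such an operator implicitly encodes first-derivative-like information.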