Comments on The Geomblog: SODA 2007: Day 1

Arvind Narayanan (2007-02-16 18:37):
BTW, I think the Micali et al. paper is an excellent one. I have nothing against it; I'm just pointing out that there are other things in the world besides polynomial time.

Arvind Narayanan (2007-02-16 18:34):
"In my view, the whole 'polytime = efficient' is completely misunderstood, but that's a story for another day."

I'd love to hear about it. I've seen the standard arguments in its defense from theory people, and I'm not really convinced.

"Maybe we do need better models for real data, but where might they come from? A small-space source is not such a well-understood entity, is it?"

That's precisely my point. More work needs to be done on understanding them. For instance, there is this recent paper, http://theory.lcs.mit.edu/~cpeikert/pubs/mpsw.pdf ("Optimal error correction against computationally bounded noise"), based on the idea that the universe is a computer, and that this includes noise. I really feel that looking at space-bounded noise is as important as looking at computationally bounded noise; I was exploring this with a couple of other people, and I still have a two-page draft of ideas buried under a mountain of other stuff. But really, proper theory people should be looking at this.

"We don't even know if L is strictly contained in P!!!"

A complete red herring.
We don't know if P is strictly contained in NP either, but that didn't stop anyone.

P.S. Blogger has the worst commenting system ever. In particular, I have no way to be notified of responses, I hate captchas, and the text box is too small.

Suresh Venkatasubramanian (2007-02-11 03:12):
The Google model of computation doesn't extend to everything, though: if I have a small system running experiments on some data sets, I can't use the Google model to deal with my data. I still have to work with my one or two machines and do what I can with them.

It is indeed true that as sizes get larger, asymptotic bounds become *more* relevant. But at the same time, the difference between 2N and 150N also becomes important, especially for streaming problems, where *anything* superlinear is hopeless.

In my view, the whole 'polytime = efficient' is completely misunderstood, but that's a story for another day. Maybe we do need better models for real data, but where might they come from? A small-space source is not such a well-understood entity, is it? We don't even know if L is strictly contained in P!!!

Arvind Narayanan (2007-02-11 00:14):
This doesn't make any sense. You could call me an applied theorist: I read theory papers and do some theory, but I also write a lot of code, and I work with large datasets.

First, there is no evidence at all that computers aren't getting faster the way they used to.
Even if individual components don't get faster, their cost drops, so you can compute massively in parallel, as Google does.

Second, as data sizes get larger, the input size parameter N increases, so the gap between, say, quadratic and log-linear time grows ever wider, making the constants *less* important.

If theory gets bad press, it is for things like the obsession with "polynomial time = efficient". My pet peeve is that nontrivial models for the input distribution of the data are never considered; only the two extremes of worst-case and uniformly random are ever discussed. Yet real-life data is always somewhere in between, and you need a model for it. Theory has the tools to do this: for instance, "a data stream generated by a small space source." Yet such modeling is rarely done.
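A quick back-of-the-envelope check of the point about constants, reusing the 2 vs 150 constants from the earlier comment as purely hypothetical costs: for small N a quadratic algorithm with a tiny constant beats a log-linear one with a huge constant, but the asymptotics take over long before N reaches realistic dataset sizes.

```python
import math

# Hypothetical step counts: a "fast" quadratic algorithm at 2*N^2
# versus a "slow" log-linear algorithm at 150*N*log2(N).
def quadratic(n):
    return 2 * n * n

def log_linear(n):
    return 150 * n * math.log2(n)

for n in [10, 100, 1000, 10**6]:
    ratio = quadratic(n) / log_linear(n)
    print(f"N={n:>8}: quadratic/log-linear cost ratio = {ratio:.2f}")
```

At N = 10 the quadratic algorithm is cheaper despite its worse growth rate, but by N = 10^6 it is hundreds of times more expensive, which is the sense in which constants matter less as N grows.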
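To make "a data stream generated by a small space source" slightly more concrete, here is a toy sketch under my own assumptions (this is an illustration, not a definition from any paper): a probabilistic finite automaton whose only memory is its current state, so generating the stream uses O(log num_states) bits of space.

```python
import random

def small_space_stream(n, num_states=4, seed=0):
    """Toy 'small-space source': a probabilistic finite automaton.

    The generator's only memory across emissions is the current state
    index, i.e. O(log num_states) bits. All tables and parameters here
    are hypothetical choices for illustration.
    """
    rng = random.Random(seed)
    # Each state has its own bias for emitting a 1, plus fixed
    # transitions indexed by the bit just emitted.
    bias = [rng.random() for _ in range(num_states)]
    nxt = [(rng.randrange(num_states), rng.randrange(num_states))
           for _ in range(num_states)]
    state, out = 0, []
    for _ in range(n):
        bit = 1 if rng.random() < bias[state] else 0
        out.append(bit)
        state = nxt[state][bit]  # bounded memory: just the state index
    return out

print("".join(map(str, small_space_stream(30))))
```

A source like this sits between the two extremes the comment complains about: with one state and bias 0.5 it degenerates to the uniformly random stream, while more states can encode correlations and structure, which is roughly the "somewhere in between" regime real data occupies.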