One of the more striking examples of the power of randomization is its use in estimating the volume of a polytope. Given an oracle that declares (for a given point) whether it's inside or outside the polytope, a randomized algorithm can get an arbitrarily good approximation to the volume, even though the problem is #P-complete. A deterministic algorithm, on the other hand, will fail miserably: with polynomially many oracle queries, it provably cannot achieve better than an exponential (in the dimension) approximation factor.
A related question that I was curious about is the surface area estimation problem. Given a polytope defined as the intersection of halfspaces (we can assume that the input promises to yield a bounded polytope for now), can we estimate its surface area efficiently? One can use a volume oracle on each hyperplane to determine the area of the face it contributes (if any), but I was wondering if there was a more direct method, especially since that approach is far more wasteful than necessary in small dimensions.
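Even though the clever algorithms use random walks inside the body, the basic membership-oracle model is easy to demonstrate. Here's a minimal Monte Carlo sketch (the function name and interface are my own invention): sample uniformly from a bounding box and count oracle hits. This naive rejection approach is only sensible in small dimensions, since the hit rate decays exponentially with the dimension; that decay is precisely why the sophisticated random-walk samplers are needed.

```python
import random

def estimate_volume(inside, bbox, samples=100_000, rng=random.Random(0)):
    """Naive Monte Carlo volume estimate using only a membership oracle.
    `inside(p)` is the oracle; `bbox` is a list of (lo, hi) per coordinate.
    Only sensible in low dimensions: the hit rate decays exponentially
    with the dimension."""
    hits = sum(inside([rng.uniform(lo, hi) for lo, hi in bbox])
               for _ in range(samples))
    box_vol = 1.0
    for lo, hi in bbox:
        box_vol *= hi - lo
    return box_vol * hits / samples

# example: the triangle x >= 0, y >= 0, x + y <= 1 has area 1/2
triangle = lambda p: p[0] + p[1] <= 1   # the bbox already enforces p >= 0
vol = estimate_volume(triangle, [(0, 1), (0, 1)])
```

With 100,000 samples the estimate lands within a percent or so of 1/2; the same loop works for any oracle and bounding box, but the sample budget needed blows up as the polytope occupies a vanishing fraction of its box.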
There's even a practical reason to do fast surface area estimation, at least in 3D: this is used as a heuristic for building efficient space partitionings for ray tracing.
Ruminations on computational geometry, algorithms, theoretical computer science and life
Sunday, April 29, 2007
Estimating the surface area of a polytope
Labels:
research
Thursday, April 26, 2007
Things not stated on the Korean visa form
If you're travelling to Korea for SoCG, and you need a visa, then there are a few things not mentioned on the Korean consulate web page:
- If you're a permanent resident, you have to attach a copy of your green card
- If you work for a company, you need a letter from your employer. If you are exalted enough to be employed by a university in the honorable profession of "professor", you don't need one.
- Processing time is about one week.
Labels:
miscellaneous,
socg
Tuesday, April 24, 2007
It's the Sans Serif smackdown
In the right corner: HELVETICA !!
Helvetica, which had its world premiere at the conference, presents the life story of something all of us encounter on a daily (or even hourly) basis. Created in 1957 by the Swiss modernist designer Max Miedinger as a response to the cluttered typography and design of the postwar era, Helvetica's clean neutrality and balanced use of the empty space surrounding letters quickly made it a go-to font for public signage, advertising, corporate logos and works of modernist design around the world
In the left corner: COMIC SANS MS:
[...]
Filmmaker Gary Hustwitt revels in his fascination with something so commonplace that it blends almost entirely into a context-less background, becoming a detective of sorts to unveil the myriad everyday places Helvetica is hiding (“It's a disease,” Hustwitt said of his obsessive font-spotting).
Earz: I found a weird website on typography, it was written in Italian I think, and had images of a gravestone lettered in comic sans. What does that say to you?
That would only be appropriate if the deceased were a clown or comedian, but other than that, I'd come back to haunt whoever did that if I were the dead guy.
Personally, I prefer Trebuchet MS.
p.s. In the top corner, ARIAL:
It's been a very long time since I was actually a fan of Helvetica, but the fact is Helvetica became popular on its own merits. Arial owes its very existence to that success but is little more than a parasite—and it looks like it's the kind that eventually destroys the host. I can almost hear young designers now saying, "Helvetica? That's that font that looks kinda like Arial, right?"
Labels:
humor
Monday, April 23, 2007
Miscellaneous links
It's a good day for math blogging. Gil Kalai, guest blogging for Terence Tao, explains the weak ε-net conjecture for convex sets (and can I say that I'm very jealous of the impeccable LaTeX typesetting on wordpress.com). Leo Kontorovich presents an interesting example of the breakdown of discrete intuition from the Steele book on the Cauchy-(Bunyakovsky)-Schwarz inequality. Andy Drucker talks about why convexity arises naturally when thinking about multi-objective optimization.
Labels:
blogosphere,
research
Friday, April 20, 2007
FOCS/STOC Reviewing
Bill Gasarch has a long list of questions about paper review processes at FOCS/STOC (what happened to SODA?) following up on Vijay Vazirani's not-radical-enough guest post about submitting videos along with submissions. Since it's tedious to post a list of answers to so many questions in comments, I'll post it here, and let Blogger trackback work its magic.
- Is the community really driven by these conferences? An underlying assumption of these discussions has been that someone judges us based on the number of STOC/FOCSs we have. Who is this mysterious someone? Is it our departments? Our colleges? Ourselves? Granting agencies?
Well, it's not really that mysterious: I'd say all of the above.
- Is it bad that we are so judged? PRO: It's good to have a central place where you know the good papers are. CON: The rest of the items on this list are about the problems in judging quality. CON: Some of these papers are never put into proper journal form. CAVEAT: Is the journal-refereeing system so good that we can decry that it is lacking here?
Nothing wrong with being judged. However, the assumption that FOCS/STOC is the repository of all "good papers" is problematic. And personally, I'm tired of people complaining all the time about journal refereeing. The SAME people who review papers cursorily for conferences are the ones who do it for journals. If conference reviewing is "OK", what makes journal refereeing so terrible?
- Other fields do not have high-prestige conferences; why do we, and is it a good thing? Our field moves fast, so we want to get results out fast. It is not clear that FOCS/STOC really do this. Using the web and/or blogs can get the word out. Important new results in theory get around without the benefit of conferences. For results just below that threshold, it's harder to say.
I'm sorry. I'm having a hard time getting through the fog of hubris around this statement. I fail to understand how we can have the gall to stand up and say that CS theory moves a lot faster than any field that abjures conferences. Bio journals publish every few weeks, and there are mountains of papers that show up. For a closer-to-home example of the herd mentality that often drives research in theoryCS, papers in string theory appear at high rates without needing the acceleration of conferences.
- Are the papers mostly good?
As I was saying to someone the other day, it's unlikely that you'll find more than a couple of bad FOCS/STOC papers in each proceedings. So, yes.
- Is there a name-school bias? Is there a name-person bias? Some have suggested anonymous submissions to cure this problem.
After a recent SODA, someone levelled this complaint against the committee, both in terms of committee composition and paper acceptance. There is some info on this matter here.
- Is there an area bias? There are several questions here: (1) is the list of topics on the conference announcement leaving off important parts of theory? (2) is the committee even obeying the list as is? (3) have some areas just stopped submitting?
I'd argue that there is (mostly) faddism rather than bias; certain topics gain a certain cachet in certain years, and papers that in other years might not have cracked a FOCS/STOC list often do so. However, (3) is definitely true to a degree for computational geometry and data structures. I've lost count of the number of people who claim that they only submit to STOC/FOCS because of tenure pressure. It's not clear to me whether actions match words in this regard, but the prevailing sentiment, at least in these two areas, is strong.
- Is there a hot-area bias?
YES: see above.
- Is there a mafia that controls which topics get in?
I've never been on a FOCS/STOC committee, but I'd strongly doubt this. I think our community is full of strong personalities, and it's hard to imagine that many people working together with such cohesion :)
- Is there a bias towards people who can sell themselves better? Towards people who can write well?
Undoubtedly so. However, this cannot be viewed as a problem per se. It's just a function of human nature; if you understand something better, or it seems clearer, you will often view it favorably. And there's nothing wrong in creating an evolutionary pressure towards better writing and "framing".
- Is there a bias towards making progress on old problems rather than starting work on new problems?
Not really. I think there's a mix of preferences, and that reflects individual interests.
- Is there a bias towards novel or hard techniques?
Again, I suspect people have their own viewpoints; some like beautiful proofs, some like novel techniques, some are impressed by technical wizardry, and so on.
- Is it just random? Aside from the clearly good and clearly bad papers, is it random? Is even determining clearly good and clearly bad also random? One suggestion is to make it pseudo-random by using NW-type generators. This solves the problem in that, since it really is random, it is less prestigious, and most of the problems on this list go away. It would also save time and effort, since you would not need a program committee.
Well, since the vast majority of submissions to any conference are in the mushy middle, we have to invoke the proposition that I think is attributed to Baruch Awerbuch: the probability of a paper being accepted is a random variable whose expectation is a function of the paper quality, and whose variance is a function of the program committee.
- Are there many very good papers that do not get in? It has been suggested that we go to double sessions so that more get in. If the quality of papers has been going up over time, this might make sense and would not dilute quality.
This goes back to the dual conflicting roles of a conference: quality stamping and dissemination. You will almost certainly dilute the first by increasing paper acceptances, but you will only help the second.
- Is 10 pages too short for submissions? This was part of Vijay's video suggestion. Are figures and diagrams counted in those 10 pages? If they are, they shouldn't be.
Too short for what? Reviewers, with the number of papers they need to review, can hardly be expected to read 20-page submissions. And as an aside, if you can spend the time on a video review to explain the paper, why can't you use the same time to write a GOOD outline of the main ideas in one section of the paper itself?
- Are many submissions written at the last minute and hence badly written?
YES
- Are many submissions written by the authors taking whatever they have by the deadline and shoving it into a paper?
OFTEN
- Since the conference is about all of theory, can any committee do a good job? Vijay was partially addressing this problem by trying to find a way to make their job easier.
Committees try, with extensive outsourcing, but it's essentially an impossible task, and one should not expect perfection.
- Do other conferences have these problems? That is, do the more specialized conferences have similar problems? Why or why not?
Allowing PC members to submit changes the review dynamics tremendously. It increases the reviewer pool and reduces reviewer load, but also reduces the quality of review ratings. Most other communities allow PC submissions, so it's really hard to compare. I am NOT advocating going this route.
- Do you actually get that much out of the talks? If not, is it still valuable to go for the people you meet in the hallways?
Schmoozing is much more effective for me than attending talks. But that's just me. Others may have different viewpoints.
- For all the items above, even if true, are they bad? Some have suggested that bias towards big names is okay.
Bias towards "big-names" is braindead. Bias towards quality is perfectly fine. A correlation between "big-names" and quality does not imply that one should be replaced by the other.
Labels:
community
Tuesday, April 17, 2007
On judgements
Paul Graham has a short note on judgements which seems apropos to dealing with rejection of various kinds (papers, jobs, grants, etc):
Our early training and our self-centeredness combine to make us believe that every judgement of us is about us. In fact most aren't. This is a rare case where being less self-centered will make people more confident. Once you realize how little most people judging you care about judging you accurately—once you realize that because of the normal distribution of most applicant pools, it matters least to judge accurately in precisely the cases where judgement has the most effect—you won't take rejection so personally.
Labels:
miscellaneous
Monday, April 16, 2007
Factoidals
Brian Hayes has a very interesting article in American Scientist on the 'factoidal', a stochastic version of the factorial that he coined. It's defined as follows: for input n, repeatedly pick integers uniformly at random between 1 and n, multiplying them together, until you pick a 1.
He expands at some length on the stochastic properties of this function: unsurprisingly, the arithmetic mean of samples of this function doesn't converge with more samples, but the geometric mean does. What's a little more surprising is that the distribution isn't log-normal. Since taking logs turns the product into an average of a collection of random numbers between 0 and log n, one might expect the average of the logs to display normality, but that doesn't happen.
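To see the contrast between the two means empirically, here's a small simulation sketch (the name `factoidal` follows Hayes; the sampling code and parameters are my own):

```python
import math
import random

def factoidal(n, rng):
    """One 'factoidal' sample: multiply uniform draws from {1,...,n}
    until a 1 is drawn (the terminating 1 contributes nothing)."""
    prod = 1
    while True:
        x = rng.randint(1, n)
        if x == 1:
            return prod
        prod *= x

rng = random.Random(7)
samples = [factoidal(10, rng) for _ in range(20000)]

# the geometric mean (i.e. the exponentiated average of the logs) is stable...
geo = math.exp(sum(math.log(s) for s in samples) / len(samples))
# ...while the arithmetic mean keeps jumping as rare, huge samples arrive
arith = sum(samples) / len(samples)
```

Rerunning with different seeds or sample counts shows `geo` settling down while `arith` swings by orders of magnitude, which is the behavior Hayes describes.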
The explanation is a nice take-away nugget from this article. In short, work by William Reed and Barry Hughes from 2002 shows (this example taken from their abstract) that if you take an exponentially growing process (for example, X(t) = e^{μt}), and truncate it at a random time given by an exponential process (for example, with parameter ν), then the resulting random variable exhibits a power law distribution (with tail exponent ν/μ).
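A one-line calculation (using the notation above: growth X(t) = e^{μt}, kill time T exponential with parameter ν) shows where the power law comes from:

```latex
\Pr[X(T) > x] = \Pr\left[e^{\mu T} > x\right]
             = \Pr\left[T > \frac{\ln x}{\mu}\right]
             = e^{-\nu \ln x / \mu}
             = x^{-\nu/\mu}, \qquad x \ge 1,
```

so the tail exponent is ν/μ, regardless of the fine details of the process.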
Labels:
research
Thursday, April 12, 2007
Latex For Blogger
This is a neat plugin. And just to test it, here's the Johnson-Lindenstrauss Lemma:
For any 0 < ε < 1 and any integer n, let k be a positive integer such that k ≥ 4(ε²/2 − ε³/3)⁻¹ ln n.
Then for any set V of n points in ℝ^d, there exists a map f : ℝ^d → ℝ^k such that for all u, v ∈ V,
(1 − ε) ‖u − v‖² ≤ ‖f(u) − f(v)‖² ≤ (1 + ε) ‖u − v‖².
Not bad at all !
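To make the lemma concrete, here's a sketch of one standard construction (a scaled random Gaussian matrix, as in the Dasgupta-Gupta proof of the lemma); the function name and interface are my own:

```python
import math
import random

def jl_project(points, eps, rng=random.Random(0)):
    """Project n points in R^d into R^k, with k chosen as in the lemma,
    via a random Gaussian matrix scaled by 1/sqrt(k)."""
    n, d = len(points), len(points[0])
    k = max(1, math.ceil(4 * math.log(n) / (eps ** 2 / 2 - eps ** 3 / 3)))
    R = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(row[j] * p[j] for j in range(d)) for row in R] for p in points]

# 20 points in R^500 drop to R^k (here k = 144), with all pairwise
# squared distances preserved up to 1 +/- eps with high probability
rng = random.Random(1)
pts = [[rng.gauss(0, 1) for _ in range(500)] for _ in range(20)]
proj = jl_project(pts, eps=0.5)
```

Note that the target dimension k depends only on n and ε, not on the original dimension d, which is the whole point of the lemma.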
Labels:
latex
Tuesday, April 10, 2007
Knuth Prize
Nancy Lynch from MIT has been awarded this (one-and-a-half) year's Knuth Prize for her work on the reliability of distributed computing. I remember first hearing about her work when I studied distributed computing and Byzantine agreement problems back at IITK.
This has been a good (and well deserved) year for women in computing: first Frances Allen, and now Nancy Lynch.
Labels:
awards
The Cauchy-BUNYAKOVSKY-Schwarz inequality
Episode #2583 of "By the law of large numbers (of Russian mathematicians), your theorem has been proved by some Russian before you even thought of it":
Viktor Bunyakovsky worked on Number Theory as well as geometry, mechanics and hydrostatics. He discovered the Cauchy-Schwarz inequality 25 years before Cauchy or Schwarz.
Labels:
humor
Measure concentration
I've been preparing a lecture on the Johnson-Lindenstrauss Lemma and dove deep into the beautiful world of measure concentration. Matousek's chapter on the topic is excellent in this regard, but I found that the flow of the results (from measure concentration towards the JL Lemma) was not to my taste either personally or pedagogically (it's possible to discuss the lemma algorithmically using elementary proofs without having to uncover the deep measure concentration machinery chugging below).
In that respect, Alexander Barvinok's lecture notes on measure concentration prove to be quite useful. He starts from simple measure concentration on the sphere, builds up to the JL Lemma, and then carries on forward into Levy's theorem, Dvoretzky's theorem and the Brunn-Minkowski inequality. He starts with a good intuitive definition of measure concentration:
Any "reasonable" function defined on a "large" probability space takes values close to its average with high probability
which nicely ties it together (in my mind) with other large deviation results in information theory as well (he also has a section on rate distortion and Cramer's work).
Another useful nugget from Barvinok's notes is a thorough drilling on the use of the "Laplace trick": the technique of taking the exponential of a random variable and applying the Markov inequality in order to prove tail bounds (the standard proof of the Chernoff bound is one example of this).
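In its generic form, the trick reads: for any t > 0, exponentiating preserves the event and Markov's inequality bounds the result, after which one optimizes over t (choosing t appropriately for a sum of independent variables is what yields the Chernoff bound):

```latex
\Pr[X \ge a] = \Pr\left[e^{tX} \ge e^{ta}\right]
           \le e^{-ta}\,\mathbb{E}\left[e^{tX}\right]
           \quad \text{for all } t > 0,
\qquad\text{hence}\qquad
\Pr[X \ge a] \le \inf_{t > 0}\, e^{-ta}\,\mathbb{E}\left[e^{tX}\right].
```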
Labels:
research
Wednesday, April 04, 2007
Inventor of 'idf' passes on
Karen Sparck-Jones, professor at Cambridge University, winner of the Lovelace Medal and the Allen Newell Award (among other awards), died today. She was a pioneer in NLP and information retrieval, and is also the inventor of one of the most fundamental notions in information retrieval: the concept of IDF (or inverse document frequency).
Information retrieval methods are at the heart of web search today, even in the age of link structures and PageRank. One of the most elementary and yet most enduring ideas in information retrieval is the concept of a document as a "bag of words": namely, that even if we encode a document as just a (weighted) set of the terms occurring in it, we can extract meaningful information from it. The 'bag-of-words' model itself dates back at least to Gerard Salton's IR book in 1968; it is probably much older.
Once you think of documents as bags of "terms", you need a way of weighting these terms in a way that allows "similar" documents to have a similar set of weights. Term frequency (TF) is one way of doing this: weight a term with a number proportional to how often it appears. This makes some sense; a document about soccer might use the word 'football' or 'soccer' fairly often, as compared with a document on cricket.
However, words like 'a', 'an' and 'the' also appear very often in a document. How does one prevent these terms from washing away any semantic content one might have? One idea is to weight the term frequency by something called 'inverse document frequency' (IDF). For a given collection of documents, count how many documents a fixed term appears in. This number is the term's document frequency. Clearly, a term that appears in all documents can't be a useful distinguisher of a document, and so the larger this number is, the less relevant the term. Therefore, by multiplying the term frequency by a function inversely related to the document frequency (if p is the fraction of documents a term appears in, the IDF is often set to log(1/p)), you get a more accurate estimate of the importance of the term.
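As a concrete illustration, here's a minimal TF-IDF sketch over tokenized documents (the function name and the toy corpus are mine; real IR systems layer smoothing, normalization and stemming on top of this):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document term weights: tf(t, d) * log(N / df(t))."""
    N = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    return [{t: tf * math.log(N / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

docs = [["the", "soccer", "ball"],
        ["the", "cricket", "bat"],
        ["the", "the", "soccer", "field"]]
weights = tf_idf(docs)
# "the" appears in every document: idf = log(3/3) = 0, so it washes out,
# while "soccer" gets weight log(3/2) in the documents that contain it
```

Note how the stopword problem from the paragraph above is handled automatically: no hand-curated stopword list is needed, because ubiquitous terms get zero weight by construction.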
This to me is a very modern way of dealing with documents, and yet the idea itself was proposed by Karen Sparck-Jones back in 1972, well before we could imagine search engines, and fast computers that could crunch these numbers in a jiffy. I'm not an expert on IR methods, but I dare say that a simple TF-IDF computation is, even today, a first preprocessing step when organizing document collections.
(Via Hal Daume)
Tuesday, April 03, 2007
Generalized(?) Dyck Paths
A Dyck path is a useful metaphor for dealing with Catalan numbers. Suppose you are walking on an n X n grid, starting at (0,0). You can make a horizontal move to the right or a vertical move upwards. A Dyck path is a path in this grid where
- the path ends at (n,n)
- the path may touch (but never goes over) the main diagonal

People have studied "generalized" Dyck paths, where the grid is now rectangular (n X m), and the step lengths are appropriately skewed. However, what I'm interested in is the following seemingly simple generalization:
Let an (n,k)-Dyck path be a Dyck path with the modification that the path, instead of ending at (n,n), ends at (n,k) (k <= n). What is the number of (n,k)-Dyck paths?
It seems like this should have been looked at, but I've been unable so far to find any reference to such a structure. I was wondering if readers had any pointers? Note that this number is at most C_{n-1}, since any such path can be completed into a unique Dyck path.
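While I don't know a reference, the counts themselves are easy to tabulate. Here's a small dynamic-programming sketch (names mine) counting monotone lattice paths from (0,0) to (n,k) that never rise above the main diagonal, i.e. the hypothesized (n,k)-Dyck paths; for k = n it recovers the Catalan numbers, which serves as a sanity check.

```python
def count_nk_dyck(n, k):
    """Number of right/up lattice paths (0,0) -> (n,k), k <= n,
    that always satisfy y <= x (may touch but never cross the diagonal)."""
    paths = [[0] * (k + 1) for _ in range(n + 1)]
    paths[0][0] = 1
    for x in range(n + 1):
        for y in range(min(x, k) + 1):
            if x > 0:
                paths[x][y] += paths[x - 1][y]   # arrive via a horizontal step
            if y > 0:
                paths[x][y] += paths[x][y - 1]   # arrive via a vertical step
    return paths[n][k]

# count_nk_dyck(n, n) gives the Catalan numbers 1, 1, 2, 5, 14, ...
```

Printing a few rows of the resulting triangle may also help in spotting the closed form (or in searching the OEIS for it).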
Labels:
research