Wednesday, December 13, 2006

Three thoughts on Anatoly Vershik's article...

Via Peter Woit and Luca Trevisan comes a pointer to an article by Anatoly Vershik in the new Notices of the AMS, lamenting the role of money prizes in mathematics. Three thoughts:
  • "the newspapers, especially in Russia, are presently “discussing” a completely different question: Is mathematical education, and mathematics itself, really necessary in contemporary society ". At the risk of sounding patronizing, I find it terribly worrisome that the place that spawns such amazing mathematicians, and has such a legendary training program for scientists, should even indulge in such a discussion. Especially now, with all the handwringing in the US about the lack of mathematical training at school level, it seems a particularly bad time to abdicate what is a clearly a competitive advantage.

  • He talks about not understanding "the American way of life" as regards how money is viewed. There's a juxtaposition of images that I've always been struck by, and that tennis lovers will recognize: at Wimbledon, the winner is crowned with a fanfare, royalty, and a trophy (or plate); the prize money is never really discussed. At the US Open, on the other hand, along with the fanfare comes the huge check handed out by some corporate sponsor while the PA blares out the amount. The trophy presentation, although making for good photo-ops, seems almost anticlimactic.

    I am a little skeptical though whether offering prizes like the Clay prize convinces people that mathematics is a lucrative profession. After all, this hasn't happened for the Nobel prizes.

  • On the false-duality: I've heard a variation of this argument many times. It goes basically like this: "Either you're interested in subject X and don't need motivation, or you aren't, in which case no amount of motivation is going to help". This is possibly true for identifying students likely to make the transition to being professionals in subject X. In fact, I've heard an anecdote from the world of music, about a maestro who would tell all his students that they would fail professionally at being musicians. His argument was that only the ones who cared enough to prove him wrong had what it took to survive.

    One has to realize though that the teaching of a subject is not about creating Mini-Mes: only a small fraction of the students we come in contact with will become professional computer scientists/mathematicians/whatever. But a large fraction of these students will vote, many of them will go on to positions of influence in industry or government, and they will all contribute to a general awareness of the discipline. So it's a mistake to give up on motivating students; even if they never end up proving theorems for a living, a better appreciation for those who do will help all of us.



Sunday, December 10, 2006

Sorting algorithm animations

Pat Morin has a great collection of Java sorting applets. You can line up three at a time and have them sort a random permutation of one through N. It's fun to see insertion sort or bubble sort earnestly plodding along long after merge sort or quick sort has blazed through.



Tuesday, December 05, 2006

Author ordering and game theory.

There are typically two major ways of ordering authors on a paper. In theoryCS, we (mostly) use the lexicographic ordering (by last name); in many other areas, author ordering by relative contribution is common. There are variants: advisors might list themselves last by default even within a lexicographic or contribution-based system, and middle authors may not always be ordered carefully, etc etc.

Since paper authorship conveys many important pieces of information, author ordering is an important problem. It's an even bigger problem if you have hundreds of authors on a paper, some of whom may not even know each other (!). This is apparently becoming common in the HEP (high energy physics) literature, and an interesting article by Jeremy Birnholtz studies the problem of authorship and author ordering in this setting. The study is sociological; the author interviews many people at CERN, and derives conclusions and observations from their responses.

As one might imagine, not too many of the problems of 1000-author papers are translatable to our domain. After all, these papers are bigger than our conferences, and I doubt anyone has ever needed a "publication committee" when writing their paper. And yet, the interviews reveal the same kinds of concerns that we see all the time. Is a certain ordering scheme shortchanging certain authors ? Did a certain author do enough to merit authorship ? Who gets to go around giving talks on the work ?

Towards the end of the paper, the author makes an interesting (but unexplored) connection to game theory. The players in this game are the authors, and what they are trying to optimize is perceived individual contributions by the community (the "market"). Intuitively, lexicographic ordering conveys less information about author contributions and thus "spreads" contributions out: however, it's not symmetric, in the sense that if we see a paper with alphabetically ordered authors, it could be a product of a truly relative contribution ordering that yields this ordering, or a lexicographic ordering. In that sense, authors with names earlier in the alphabet are disadvantaged, something that seems counter-intuitive.

As it turns out, there's been some work on the equilibrium behaviour of this system. To cite one example, there's a paper by Engers, Gans, Grant and King (yes, it's alphabetically ordered) that studies the equilibrium behaviour of author ordering systems with a two-author paper in a market. Their setup is this:
  • The two players A and B decide to put in some individual effort.
  • The relative contribution of each (parametrized by the fraction of contribution assigned to A) is determined as a (fixed but hidden) stochastic function of the efforts.
  • The players "bargain" to determine ordering (lexicographic or contribution). The result is a probability of choosing one kind of ordering, after which a coin is tossed to determine the actual ordering
  • The work is "published", and the market assigns a value to the paper as a whole, and a fraction of this value to A, based on public information and other factors.
Now all of these parameters feed back into each other, and that's where the game comes from. What is a stable ordering strategy for this game ? It turns out that lexicographic ordering does yield equilibrium behaviour, and contribution-based ordering does not.
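To make the setup above concrete, here's a deliberately toy Monte Carlo sketch of one round of the game, in Python. The functional forms (the Beta draw for contributions, the market's fixed guesses) are my inventions purely for illustration; the paper leaves them abstract.

```python
import random

def one_paper(effort_a, effort_b, p_alpha):
    """One round of a toy author-ordering game.

    effort_a, effort_b: efforts chosen by the two players (step 1).
    p_alpha: bargained probability of alphabetical ordering (step 3).
    Returns the market value credited to A and to B (step 4).
    """
    # step 2: hidden stochastic contribution -- A's share of the work
    share_a = random.betavariate(1 + effort_a, 1 + effort_b)
    # step 3: coin toss between alphabetical and contribution ordering
    alphabetical = random.random() < p_alpha
    a_is_first = True if alphabetical else share_a >= 0.5
    # step 4: the market values the paper by total effort, then splits it;
    # alphabetical ordering is uninformative (a 50/50 guess), while
    # contribution ordering signals who did more.
    value = effort_a + effort_b
    guess_a = 0.5 if alphabetical else (0.75 if a_is_first else 0.25)
    return value * guess_a, value * (1 - guess_a)

# A's average credit under pure alphabetical vs pure contribution ordering
for p in (1.0, 0.0):
    payoff_a = sum(one_paper(2.0, 1.0, p)[0] for _ in range(100_000)) / 100_000
    print(f"p_alpha={p}: A's average credit = {payoff_a:.2f}")
```

The equilibrium analysis is what happens when the efforts and the bargaining probability are allowed to respond to these payoffs.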

What's even more interesting is that if we look at merely maximizing research output (the external "quality" of the paper), then this is not maximized by lexicographic ordering, because of the overall disincentive to put in more effort if it's not recognized. However, this does not suggest that always using contribution-based ordering is better; the authors have an example where this is not true, and one intuition could be that if there's a discrepancy between the market perception of contribution and individual contributions, then there is a disincentive to deviate too much from the "average" contribution level.

It's all quite interesting. Someone made a comment to me recently (you know who you are :)) about how assigning papers to reviewers made them value research into market-clearing algorithms. I like the idea of applying game theory to the mechanisms of our own research.

(HT: Chris Leonard)

Previous posts on author ordering here, and here.



Friday, November 24, 2006

European Workshop on Computational Geometry.

Like CCCG and the Fall Workshop on Computational Geometry, the EWCG is a workshop where you can present results that will appear in expanded form in a conference or journal.

Submission deadline: Jan 8, 2007.
Workshop dates: Mar 19-21, 2007, Graz, Austria.





Monday, November 20, 2006

On your marks, get set....

Gentlefolk, STAARRT YOUR LATEX !!!!!!!!!
Program Title: Theoretical Foundations (TF07) Program Solicitation
Date: February 19, 2007
  • Approximately 15 small awards at $60,000/year or less will be made. For example, projects by new faculty may require NSF support for only one student or for summer salary. Most small awards will go to (or preference will be given to) PI's who have not previously been a PI or coPI on an NSF award.
  • Up to 55 awards will be made with an average grant size of $125,000/year for durations up to three years.
  • Up to 5 awards of up to $500,000/year for well-integrated projects of larger scope are anticipated.
The STOC deadline was over ONE WHOLE HOUR ago ! Get cracking !

p.s Thanks, Chandra.

Thursday, November 16, 2006

On writing versus blogging

I've had this blog for going on two and a half years now, and it's interesting to see how writing posts has influenced my "other" writing.

There is the obvious difference in interface. When I add citations to a paper, I almost wish I had a tool that could highlight text and add a clickable link to a reference (the emacs extension RefTeX is great, though: it makes adding citations, references and labels blindingly easy).

I structure sentences differently as well. Links in blogs are added en passant, without interrupting the text. Citations in papers have to be worked into the sentence structure carefully (that is, if you believe the maxim that a citation is not a noun). This causes no end of confusion when I write sentences in a paper; I often have to rephrase the sentence to conform to "normal" citation format. I will add though that the parenthetical style of mentioning citations makes sense with written documents, but with online hyperlinked PDF documents I would actually prefer blog-style linking. But then again, we still have paper proceedings, so there's a long way to go for that...

It's natural that a blog post is more chatty and personal, and a paper is more formal. But writing a blog has encouraged me to be less stuffy and more breezy in parts of a paper that merit it (introductions, discussion, etc). This can only be a good thing; as the famous war cry goes, 'Eschew obfuscation' !

Writing a blog also shakes out some of the ghastlier linguistic tics that infect my writing. It's actually shocking how many there are, and how easily they evade detection.

I wouldn't recommend writing a blog solely as a way of improving (or expanding) your writing skills. But it does have benefits beyond being a soapbox for one's bloviations.



Tuesday, November 14, 2006

Chernoff bounds, error correcting codes, and automata.

Andy Drucker, a first-year grad student at UCSD, has an interesting Math/CS blog. Some of the highlights are a post on showing constructive Chernoff bounds via error-correcting codes, and a love-poem to automata :).



Monday, November 13, 2006

Reversal in the decline of foreign students in the US

This can only be good news:
The number of new foreign students coming to the United States grew this school year, after several years of weakness that followed the terrorist attacks of 2001, according to a survey to be released today by the Institute of International Education. [..]

According to the survey, conducted by the institute and other education groups, the number of new international students at American colleges and universities increased 8 percent this fall over last, to 142,923.

Another sign of a turnaround was a sharp upturn in student visas, said Allan E. Goodman, president of the institute. Dr. Goodman said the State Department issued a record 591,050 student and exchange visas in the 12 months ending in September, a 14 percent increase over the previous year and 6 percent more than in the year leading up to the 2001 attacks.

It's not that admitting more foreign students is a good thing in and of itself; it's more that this is a useful indicator of how competitive the US is in the marketplace of "idea generation". For decades, the US had a near-monopoly on the "idea factories"; in recent years there's been growing competition from the rest of the world (especially China), which has capitalized on the panic and overreaction following 9/11.

Update (11/17): On the other hand, there's this:
The latest IIE Open Doors report finds that the number of international students enrolled in computer and information science programs in the U.S. declined in academic year 2005/2006, as it has each year since 2002/2003. This occurred even as the number of new foreign students in all programs increased between the Fall of 2004 and 2005 and as total enrollment of foreign students stabilized.
It may not be surprising that foreign student enrollment in CIS has dropped. After all, there's a national trend of dropping enrollment in computer science. The question is whether this drop is more than the overall national trends, and how different the undergraduate/graduate enrollment statistics are.



Tuesday, November 07, 2006

Voting

I am a politics junkie, which given my non-citizen status is about as useful a pastime as learning Klingon ("Heghlu'meH QaQ jajvam !!!!"). However, my life is deeply affected by laws passed here, and ensuring that not-completely brain-dead zombies come to power is a GOOD THING ("Hab SoSlI' Quch!"). So thanks to all of you that voted, are voting, or will be voting, and no thanks to Lance's prediction page for wasting so many hours of my life :).

Warning labels, continued...

This could become a nice series. Luca has a "creationist"-style warning for NP-hardness in the comments, and here's another:

WARNING: The next several slides (pages) contain equations. Your soul may be crushed.




Monday, November 06, 2006

Warning label for an NP-hardness proof

A note about the proofs: Transformation arguments can be intricate and highly formal. Furthermore, since the polynomially-equivalent problem might not bear any obvious intuitive relation to the voting problem, the proof might establish computational difficulty without seeming to explain its source. In this sense, the conclusion is more important than the argument.
From "How hard is it to control an election?", by Bartholdi, Tovey, and Trick.




Friday, November 03, 2006

Tamper-proof voting systems

Digg is a collaborative news site, much like Slashdot, where people post articles that seem interesting and topical. Through a voting scheme, articles get "dugg", and rise to the top of the site. In fact, getting "dugg" is now an inadvertent denial-of-service attack much like being "slashdotted" used to be.

The problem is that people attempt to game the system to bump up traffic to their sites, by forming voting coalitions, making fake accounts, etc etc. Needless to say, there can often be real money (in the form of advertising) behind this, so there's a lot of incentive to cheat the system.

This is not a new problem; Google and other search engines have battled search engine optimizers for a long time now. There are companies that claim to improve your placement in Google's search results (for a small fee, of course). An interesting side effect of all this gamesmanship is that search engines like Google and aggregators like Digg have to keep many details of their ranking process secret. Obviously, some of this is for IP protection, but protecting against vote riggers is also a major concern. A further consequence of this lack of transparency is that such sites are now vulnerable to lawsuits by disgruntled sites complaining about bias in search results; Google has had to fend off lawsuits based on claims of bias leading to monetary damage.

The latest such problem has hit Digg, where in an attempt to block users trying to game votes on articles they want to push, the management has managed to freeze out and frustrate some of its most prolific users. A user-driven site like Digg that has many competitors can't really afford to annoy its most valuable contributors.

So (finally), here's the technical question. Results like Arrow's theorem (and, for manipulation specifically, the Gibbard-Satterthwaite theorem) tell us that reasonable voting schemes can always be gamed, but I don't know if the result is constructive. In other words, even if a manipulating strategy is guaranteed to exist, it may not be easy to find one.

So, in the spirit of RSA, is there a way of designing a voting scheme that can be published (thus addressing issues of transparency), but is computationally intractable to game ? Any cryptographers know if this has been studied ?



Thursday, November 02, 2006

CS beyond algorithms: Would that it were so....

From an NYT article on a new research program in "Web science":

Ben Shneiderman, a professor at the University of Maryland, said Web science was a promising idea. “Computer science is at a turning point, and it has to go beyond algorithms and understand the social dynamics of issues like trust, responsibility, empathy and privacy in this vast networked space.”
In fact I'd be happy if this were so. Alas, I spend more time encountering computer science that hasn't even discovered algorithms yet !



Wednesday, November 01, 2006

The Snowblowing (or leaf raking) problem

Lifehacker asks:
Reader David wants to know the best way to rake leaves. My first thought: "Get someone else to do it." But seriously, assuming you have a square or rectangular yard, what's the most efficient raking pattern? Should you make multiple smallish piles or aim for one big one (the latter being best for running jumps, of course)? Surely there's a mathematical or engineering principle that can be applied here.
As it turns out, this year's WAFR (Workshop on the Algorithmic Foundations of Robotics) has a paper on essentially this problem (with leaves replaced by snow), by Arkin, Bender, Mitchell and Polishchuk:
We introduce the snowblower problem (SBP), a new optimization problem that is closely related to milling problems and to some material-handling problems. The objective in the SBP is to compute a short tour for the snowblower to follow to remove all the snow from a domain (driveway, sidewalk, etc.). When a snowblower passes over each region along the tour, it displaces snow into a nearby region. The constraint is that if the snow is piled too high, then the snowblower cannot clear the pile.
This is not to be confused with the Snowplow problem, a puzzle in elementary differential equations:
One day it started snowing at a heavy and steady rate. A snowplow started out at noon, going 2 miles the first hour and 1 mile the second hour. What time did it start snowing?
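For the curious, the classic solution runs as follows (under the standard assumption that the plow's speed is inversely proportional to the snow depth):

```latex
% Snow starts T hours before noon at constant rate r, so at time t
% (hours past noon) the depth is r(t+T) and the plow moves at c/(t+T):
\int_0^1 \frac{c\,dt}{t+T} = 2, \qquad \int_1^2 \frac{c\,dt}{t+T} = 1 .
% Dividing: \ln\frac{1+T}{T} = 2\ln\frac{2+T}{1+T}, i.e. (1+T)^3 = T(2+T)^2,
% which collapses to T^2 + T - 1 = 0, so T = (\sqrt{5}-1)/2 \approx 0.618.
% It started snowing about 37 minutes before noon, around 11:23 am.
```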
p.s Actually, it's true. The slides are more interesting...



Saturday, October 21, 2006

There, but for the grace of god, ...

This week's issue of the NYT Magazine has a long story on a case of scientific fraud from the University of Vermont, where a prominent researcher was sentenced to a year and a day in jail (as well as being banned from ever getting public funding) for falsifying results in dietary studies.

It seems unnecessary (and too easy) to blame the researcher involved; the story is damning enough. What struck me though, reading through the description of how events transpired, was how banal, how mundane the fraud was, and how utterly common the driving forces were: the usual toxic mix of a desire for fame, the pressure to get money, and the way universities encourage people to bring in grants.
Steven Heymsfield, an obesity researcher at Merck Pharmaceuticals in New Jersey, [...] added that Poehlman’s success owed more to his business sense and charisma than to his aptitude as a scientist.

“In effect, he was a successful entrepreneur and not a brilliant thinker with revolutionary ideas,” Heymsfield wrote me via e-mail. “But deans love people who bring in money and recognition to universities, so there is Eric.”





Friday, October 13, 2006

Computer scientists sit in a cube and program all day...

Science+Professor+Woman=Me has an interesting experience talking to a bunch of freshmen about science (emphasis mine):
...today I gave a guest lecture to a group of freshmen who all said they were not interested in science. It turns out that they had no idea what science really involves. I listed a bunch of scientific questions and asked if these were things they wanted to know. Yes! They did. So we talked about these for a while, and then they thought of more questions, and it was very fun. We also talked about how research is done - how you come up with the questions, how you go about answering, discovering, testing. The students said they hadn't known that these were the kinds of things that scientists did. They imagined that we just worked in our labs making chemicals or looking at data on computer monitors all day. I doubt if any of them were inspired to become scientists, but I felt pretty good about changing their perceptions of science and scientists.
I wonder what would happen if we did this for CS.



Monday, October 09, 2006

Deadlines, manic behaviour and happiness

Much as I rant about the idiosyncratic "conference as primary publication venue" nature of computer science, there's no denying the pleasures of working down to the wire to meet a deadline. The adrenaline rush, the highs (and the post-deadline lows), and the feeling that my mind is working at 200 mph... aaah. It's like a drug habit I can't shake (which is why, in spite of vowing after each deadline never to do it again, I ... do it again. Classic addict behaviour).

It turns out that all I'm really doing is maximizing happiness. Who'da thunk it ?
When people are made to think quickly, they report feeling happier as a result. They also say they are more energetic, more creative, more powerful, and more self-assured. In short, they reported a whole set of experiences associated with being "manic."
And if your paper, written with the sweat of your fevered brow, fueled by zillions of cups of coffee, delivered by divine inspiration from the Book to your mind, gets rejected ? Just think quickly:
...the effect of thought speed was just as powerful as the effect of the content of the thoughts. In other words, the speed of people's cognitive processing was just as important as what they processed in determining their mood. Even thinking sad thoughts at a fast pace made people relatively happy.


Saturday, October 07, 2006

"We're making it less random to make it feel more random."

Maybe we should use the iPod to explain all concepts in theoretical computer science:
Steven Levy really liked Steely Dan, but so too, it seemed, did his iPod. Like a lot of people, he began to wonder about its shuffle - was the random function really random or a result of dirty tricks, blunders... or even telepathy?
Read more about it at the Guardian (HT: The Mathematics Weblog)





Tuesday, October 03, 2006

On Models For Research

In a review of Lee Smolin's new book, 'The Trouble With Physics', Sean Carroll writes:
Faculty positions and grant money are scarce commodities, and universities and funding agencies are naturally risk-averse. Under the current system, a typical researcher might spend five years in graduate school, three to six as a postdoc, and another six or seven as an assistant professor before getting tenure – with an expectation that they will write several competent papers in every one of those years. Nobody should be surprised that, apart from a few singular geniuses, the people who survive this gauntlet are more likely to be those who show technical competence within a dominant paradigm, rather than those who will take risks and pursue their idiosyncratic visions.
It's worth pointing out here that there are many different models for being a successful researcher. And when I say successful, all I mean is that you contribute interesting results to the community and your work is appreciated. Indeed, finding out what model works for you is an important part of developing your identity as a researcher.

We develop our sense of what the 'ideal' researcher looks like from people around us: the advisor, the mentor, the researcher whose papers we pore over. Invariably, some will influence us more than others, and we'll start adopting, almost unconsciously, some of their style and taste for problems and lines of attack. All of this is good, and natural. But it's important to remember that just as you form your own identity as a person by drawing on influences and modifying them, you must do the same as a researcher.

It's worth pointing out because I don't know how deeply we think about models of research, and what style of work makes us happier (problem solver ? theory builder ? voluminous production ? multiple collaborations ? sitting in a room and contemplating ?). Once you find your "comfort zone", you'll be a lot more content with your work, and in all likelihood you'll produce quality work.



Flying while brown, part II

I used to call it 'Travelling while Asian', but let's face it: it's the brown skin that counts.

Here's the latest, from Boing Boing:

A 32-year-old man speaking Tamil and some English about a sporting rivalry was questioned at Sea-Tac Airport and missed his flight Saturday because at least one person thought he was suspicious.

[....]

The man was speaking Tamil, a language largely used in India, Sri Lanka and Singapore, on his cell phone at the departure gate and on the aircraft. An off-duty airline employee heard the conversation and informed the flight crew.

All those years I spent fending off attempts to teach me Tamil by my parents, grandparents, aunts, uncles, cousins, great-aunts and great-uncles and second cousins twice removed are now finally worth it ! I am safe !!

I wonder what will happen if I speak Hindi....


Thursday, September 28, 2006

Four legs good, two legs bad...

Not that it will do me much good when the Komitat comes to round up all foreigners, but:
Thank you for joining the American Civil Liberties Union. Your gift will help us continue the fight for our basic civil liberties. Below is your donation information.


Ready to do more?

Invite your friends to join in standing up for freedom with the ACLU

Contact Information

Name: Suresh Venkat

Address: xxxx -xxxxx

Email Address: sureshv@gmail.com





Wednesday, September 27, 2006

"Who are you ? How did you get in my house ?"

xkcd (funny geeky webcomic) cites Don Knuth on array indices.



A new voting scheme

Ron Rivest has just put out a voting scheme that is remarkable for its simplicity. It addresses the following questions:
  • How do you tell if your vote was counted
  • How might you get proof that you voted
  • How can this be concealed from (a) someone trying to coerce you (b) someone trying to find out your vote or tamper with it (c) from you !
I'm not an expert on voting schemes, so I'll leave the detailed analysis to the experts. I will note that the scheme is NOT cryptographic, and is surprisingly elegant in its basic method. One major drawback: it requires each voter to vote on 3 ballots in a specific way; in the era of hanging chads and butterfly ballots, this is asking a lot of voters.

Here's the idea: suppose you have to vote on one of two candidates Alice and Bob. You vote three times, once on each of three ballots. If you prefer Alice, you vote for her on two of the three ballots, and vote for Bob on the remaining ballot. As a receipt, you take back a copy of one of the three ballots you used. The three ballots are separated and cast individually as separate ballots into the counting box.

The neat idea here is that each candidate gets n + C votes, where C is the number of people who voted for them. So it's still easy to determine winners, and because the single receipt ballot could have come from any combination of votes, the true intention of the voter cannot be determined from their receipt. There is some heuristic reasoning about the possibility of tampering and where the main weaknesses lie.
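Here's a minimal two-candidate sketch of the counting in Python. It deliberately ignores ballot IDs and receipts (which is where the real scheme's subtlety lives); it only illustrates the n + C arithmetic:

```python
import random

def three_ballot_election(preferences):
    """Each voter marks their preferred candidate on two of three ballots
    and the other candidate on one; the ballots are cast separately."""
    box = []
    for pref in preferences:               # pref is 'Alice' or 'Bob'
        other = 'Bob' if pref == 'Alice' else 'Alice'
        ballots = [pref, pref, other]
        random.shuffle(ballots)
        box.extend(ballots)
    return box

voters = ['Alice'] * 60 + ['Bob'] * 40
box = three_ballot_election(voters)
n = len(voters)
# each candidate receives n + C marks, where C is their true vote count,
# so subtracting n recovers the tally: Alice gets 160 - 100 = 60 here
print(box.count('Alice') - n, box.count('Bob') - n)
```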

An important side effect of the fact that you can't determine a vote from a receipt is that vote selling is not possible: how will you prove to the vote buyer that you voted as they asked ?

(From comp.risks via Adam Buchsbaum)



Monday, September 25, 2006

Mathematics as blood sport

I for one am glad that we have left the challenge era of mathematics far behind us. From Wikipedia's entry on the cubic equation:

In 1530, Niccolo Tartaglia (1500-1557) received two problems in cubic equations from Zuanne da Coi and announced that he could solve them. He was soon challenged by Fiore, which led to a famous contest between the two. Each contestant had to put up a certain amount of money and to propose a number of problems for his rival to solve. Whoever solved more problems within 30 days would get all the money.

Tartaglia received questions in the form x³ + mx = n, for which he had worked out a general method. Fiore received questions in the form x³ + mx² = n, which proved to be too difficult for him to solve, and Tartaglia won the contest.

Later, Tartaglia was persuaded by Gerolamo Cardano (1501-1576) to reveal his secret for solving cubic equations. Tartaglia did so only on the condition that Cardano would never reveal it. A few years later, Cardano learned about Ferro's prior work and broke the promise by publishing Tartaglia's method in his book Ars Magna (1545) with credit given to Tartaglia. This led to another competition between Tartaglia and Cardano, for which the latter did not show up but was represented by his student Lodovico Ferrari (1522-1565). Ferrari did better than Tartaglia in the competition, and Tartaglia lost both his prestige and income.

You have to wonder: when you lay down a mathematical challenge, do you throw a quill at the feet of your rival ?



Thursday, September 21, 2006

Turingistan

Bernard Chazelle has updated what has been called 'The IPod essay'. An excerpt:
Let's try a thought experiment, shall we? You're the unreconstructed Algorithm skeptic. Fresh from splitting your playlist, Alice, naturally, is the advocate. One day, she comes to you with a twinkle in her eye and a question on her mind: “What are the benefits of the central law of mechanics?” After a quick trip to Wikipedia to reactivate your high school physics neurons and dust off the cobwebs around them, you reply that F=ma does a decent job of modeling the motion of an apple as it is about to crash on Newton's head: “What's not to like about that?” “Oh, nothing,” retorts Alice, “except that algorithms can be faithful modelers, too; they're great for conducting simulations and making predictions.” Pouncing for the kill, she adds: “By the way, to be of any use, your vaunted formulas will first need to be converted into algorithms.” Touché.
Much fun. Go read.



Wednesday, September 20, 2006

The magic of √2

Bill Gasarch has an entertaining dialogue on √2 in the latest issue of SIGACT News (which also has an interesting variant of art-gallery problems in the Geometry column). It's a remarkable coincidence, because I was just about to post an entry about a 7th grade classroom problem that revolves around properties of √2). Unlike T, T, F, S, E, this problem actually does have some really nice math behind it.

The problem is as follows:
You live on a street where the houses are numbered 1, 2, 3, etc. You notice that the sum of house numbers prior to yours equals the sum of house numbers after yours. What is your house number, and how many houses are there on the street ? Your answer should be in the 30s.
Some quick algebra on n, the number of houses, and k, your house address, reveals that the numbers satisfying this condition yield a square triangular number. Namely, n and k must satisfy the equation

n(n+1)/2 = k²

It turns out that numbers satisfying this equation can be derived from solutions to Pell's equation:

x² − 2y² = 1

where x, y are integers. Pell's equation gives good rational approximations to √2, in the form x/y, where x, y are solutions. What's more, if x/y is a convergent in the continued fraction of √2, then x²y² is a square triangular number (i.e., it can be written both as n(n+1)/2 and as k²).
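A few lines of Python make the correspondence concrete: successive convergents p/q of √2 satisfy p' = p + 2q, q' = p + q, and each one yields a house/street pair. (A sketch; I'm starting from the convergent 1/1.)

```python
from math import isqrt

p, q = 1, 1
for _ in range(6):
    k = p * q                              # k^2 is a square triangular number
    n = (isqrt(8 * k * k + 1) - 1) // 2    # solve n(n+1)/2 = k^2 for n
    assert n * (n + 1) // 2 == k * k
    print(f"house {k} on a street of {n} houses")
    p, q = p + 2 * q, p + q                # next convergent of sqrt(2)
```

The third line of output, house 35 on a street of 49 houses, is the intended 7th-grade answer.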

There's also a general form of the solution, that I first found (courtesy David Applegate) in the Hardy/Wright book on number theory. The "trick" here is to realize that the appropriate way to solve this is over the field of numbers of the form a + b√2.

It's rather satisfying that a simple extra credit problem for a 7th grade math class can yield such nuggets.



STOC 2007 Deadline

STOC 2007 approaches. As before, you can add the deadline to a Google calendar by clicking here.

Sunday, September 17, 2006

Looming submission deadlines.

Apropos of my post on implementing geometric algorithms, it seems like a good time to mention the upcoming ALENEX deadline.
The aim of the ALENEX workshop is to provide a forum for presentation of original research in the implementation and experimental evaluation of algorithms and data structures.
If you use Google Calendar, click to add the deadline to your calendar.

Another deadline that's coming up even sooner is for the Fall Workshop on Computational Geometry, the annual math-conference-style event for geometers. This year it's at Smith College on Nov 10-11, and is being run by Ileana Streinu. One of the special events this year is a 3D printing demo by Joe O'Rourke. The deadline is Sep 22, and it should already be in your calendar by now, but if not, click here.

p.s if you want to create buttons like above for your event, here's the link.

Thursday, September 14, 2006

Implementing geometric algorithms.

Over at bit-player, Brian Hayes has a tale of woe about implementing a geometric algorithm. Not even a complicated one at that, merely an algorithm to check if two line segments intersect. His story is the usual one: general position is a fiction thought up by sadistic geometers (but of course !), and even the simplest of ideas needs many special cases to take care of degenerate data.

Apart from being surprised that he didn't have problems with precision (how DO you tell if both endpoints of a line segment are identical ?), I sympathize with his plight. As anyone who's ever implemented a geometric algorithm will tell you, it's much harder than it seems, and to an extent, the idealized algorithms geometers design can be hard to implement. CGAL and LEDA have made life a lot easier, but robustness and exact computation are still important (if highly uncool) areas of computational geometry.
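For the record, here is the textbook-style test (CLRS flavor) in Python, with integer coordinates so that the orientation predicate is exact. Even this "simple" problem needs the four collinear special cases at the end:

```python
def orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): positive for a left turn,
    negative for a right turn, zero if collinear. Exact on integer inputs."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def on_segment(p, q, r):
    """Given that p, q, r are collinear, is r inside the box spanned by p, q?"""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
            min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(a, b, c, d):
    d1, d2 = orient(c, d, a), orient(c, d, b)
    d3, d4 = orient(a, b, c), orient(a, b, d)
    # proper crossing: each segment's endpoints strictly straddle the other
    if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and \
       ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)):
        return True
    # degeneracies: an endpoint lying on the other (closed) segment
    if d1 == 0 and on_segment(c, d, a): return True
    if d2 == 0 and on_segment(c, d, b): return True
    if d3 == 0 and on_segment(a, b, c): return True
    if d4 == 0 and on_segment(a, b, d): return True
    return False

print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # True: proper crossing
print(segments_intersect((0, 0), (2, 2), (2, 2), (4, 0)))  # True: shared endpoint
```

And this still dodges the precision issue: with floating-point inputs, the sign of the orientation test itself can come out wrong, which is exactly what CGAL's and LEDA's exact arithmetic is for.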


Wednesday, September 13, 2006

Matrix wizardry

While we're on the topic of books,
vec(AXB) = (Bᵀ ⊗ A) vec(X)
This identity is your best friend if you're doing any kind of matrix calculus. To know more about it, and all you might ever want to know about Kronecker products, the vec() operator, and matrix calculus, check out The Matrix Cookbook, a 50+ page reference guide for all things matrix and calculus.



p.s the reason the identity is so handy is that it allows you to get the variable matrix X out from in between A and B.
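A quick numerical sanity check (a sketch using numpy; note that vec() here must be column-stacking, i.e. Fortran order, or the identity silently fails):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

vec = lambda M: M.reshape(-1, order='F')   # stack the columns of M
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)             # (B^T kron A) vec(X)
print(np.allclose(lhs, rhs))               # True
```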


New textbooks in theoryCS...

There appears to be an outpouring of new theoryCS textbooks. I had mentioned the Mitzenmacher/Upfal book on probability and randomized algorithms some time ago. Recently, Jon Kleinberg and Éva Tardos published Algorithm Design, the book with THE MOST elaborate real-world exercises I have seen. Sanjoy Dasgupta, Christos Papadimitriou and Umesh Vazirani have another algorithms text coming out as well.

The not-so-latest addition to this list is a new book on complexity theory by Sanjeev Arora and Boaz Barak. For a long time now, Papadimitriou's book has been the definitive text in this area, but it has begun showing its age, especially with all the new developments in complexity theory. The Arora/Barak book is not yet published, but has been placed online for comments.

The magic of Papadimitriou's book was in making complexity classes spring to life as the denizens of the Zoo that they really are. Complexity theory can be a dry topic if not imbued with a sense of direction and purpose, helping us understand the WHY of how these classes came to be. If my cursory scanning of the Arora/Barak book indicates anything, it is that this new book is a worthy successor.

The two books cover similar material in the foundational sections; where I think the new book takes off is in its coverage of the more "modern" aspects of complexity theory: the profound results on pseudorandomness, hardness and cryptography, interactive proofs and approximations, natural proofs, communication complexity and lower bound methods, and even quantum computing.



Thursday, September 07, 2006

The Indian grad student experience, and more...

I occasionally read the Chronicle of Higher Education: I guess it's de rigueur for my academic colleagues. It's often too stuffy for me, but this issue has two articles that are real gems (and had me nodding my head in agreement paragraph after paragraph).

Unless you've hung around Indian grad students, you have no idea of the kind of organizational skills that we can muster up when it comes to helping fellow Indian students acclimatize. I recall being picked up at SFO by a "senior" grad student and driven to Stanford, and supplied with all kinds of useful information when I got there, (including copies of all the variations of the driving tests the California DMV uses!). Apparently, things have become much more sophisticated: check out this description of the Indian student organization at NC State (and yes, I do own a copy of The Inscrutable Americans; the letter Gopal writes to his father at the beginning of the book is priceless).

The other article could have easily been written about my parents, and their general bewilderment at the kind of work I do (I remember the first time I went to work in shorts while my father was visiting, and how scandalized he was). It's hilarious: do check it out.



Wednesday, September 06, 2006

Data hosting on the web

I recently lost my cellphone, and all my contacts with it. I got a new phone (off eBay, no less) and Cingular sent me a SIM card so I could preserve my number. But my contacts were all gone. Now I hear about a new service called ZYB.com that will maintain a web backup of your contacts. The idea is that they collect your contact info (via SMS) and store it, and you can retrieve it (again via SMS) on any phone that allows for text messaging.

It's a cool idea: although there are doohickeys you can get to sync your palm/outlook/PC addressbook with a phone, they are usually vendor specific. This is more general, and is aligned with the whole "store your data on the web" philosophy that's all the rage nowadays (mail, calendars, online spreadsheets, ...)

Privacy of course is the issue, especially with contact info, a gold mine of real telephone numbers and often email addresses. ZYB of course makes all the right noises about privacy, but ultimately you have to trust them.

Why must that be ? It can't be that hard to store the data encrypted, decrypting on the fly only when the user provides a pass-phrase. The problem with SMS specifically is that the user might have to type the pass-phrase out in the open, but maybe there's an IPsec equivalent of SMS out there ? And even if there isn't, wouldn't encrypting the data under a key that only the user can derive obviate the need to trust a private entity ?
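A minimal sketch of what I have in mind, in Python (the passphrase, contact data, and iteration count are invented for illustration; the recipe is just PBKDF2 plus an authenticated cipher from the `cryptography` package):

```python
import base64, hashlib, os
from cryptography.fernet import Fernet   # pip install cryptography

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    # stretch the passphrase into a 32-byte key
    raw = hashlib.pbkdf2_hmac('sha256', passphrase.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)

salt = os.urandom(16)                    # stored with the blob; not secret
f = Fernet(key_from_passphrase('correct horse battery staple', salt))
blob = f.encrypt(b'Mom,+1-555-0100;Pizza,+1-555-0199')

# the service stores only (salt, blob) and never sees the contacts;
# any device with the passphrase can recover them:
print(f.decrypt(blob).decode())
```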

Given how ubiquitous crypto has become, I think I'd need to be convinced why obvious schemes like this can't be used before handing over data to an entity that I have to "trust".

p.s another point that came up in discussion was the lifetime of such a company. What's the guarantee they won't go belly up in a year and sell your data, or worse, lose it ?



SODA results trickling in...

The full list should be available soon. Unusually, no information about accepts/submits was provided in the author notification.

Here's our (accepted) paper.

Update: 135 papers accepted, with some caveats. There were 4 merges; for two of them, I believe there will be two separate papers in the (already extra-large) proceedings but only one talk. The rationale for this escapes me. Effectively there will be 137 papers in the proceedings, which translates to a 36% acceptance rate.

Update II: The list of accepted papers is here.


Thursday, August 31, 2006

Undergrads and arXiv ?

Via Daniel Lemire, this prediction:
Today’s undergrads have never thought about the world any differently – they’ve never functioned without IM and Wikipedia and arXiv, and they’re going to demand different kinds of review for different kinds of papers.
< snarky researcher on>
If you are an undergrad who's never functioned without the arXiv, please please pretty please apply for a summer job at AT&T, and request me as your mentor.

I shall now wait for a deluge of applications to clog my mailbox.

< snarky researcher off>



Wednesday, August 30, 2006

Planar graphs and Steinitz's theorem

One way of representing a 3-connected planar graph is as the 1-skeleton (i.e. the vertices and edges) of a convex polytope. A neat way of visualizing this is by shining a light above one face of the polytope and looking at the shadow formed (see David Eppstein's beautiful picture). Steinitz's theorem says that this is in fact an exact characterization: the edge graphs of convex polytopes are exactly the 3-connected planar graphs.

There are different ways this convex polytope can be constructed, and it can be made to take various forms. Oded Schramm, in a paper titled 'How to Cage an Egg' that should go on my list of cool paper titles, shows that given any convex polytope and any smooth strictly convex body in R³, there is a combinatorially equivalent convex polytope that can be embedded such that each edge touches the surface of the convex body (in particular, we could use a sphere). There are many other results of this flavor; Ziegler's book on polytopes has more, as does Jeff Erickson's page on constructing convex polytopes.

But the question that puzzles me is this. Steinitz's original proof yields a polytope with rational coordinates (and thus integer coordinates). However, the values involved are doubly exponential in the number of vertices. Subsequently, Onn and Sturmfels showed that a "mere" single exponential suffices (specifically, their bound on the coordinates is of the form n^(169n³)). It is open whether a polynomial (quadratic?) bound suffices.

Enter László Lovász, and the Colin de Verdière number of a graph. It would take too long to explain this number here (van der Holst, Lovász and Schrijver have a survey); basically it's the second eigenvalue of an associated matrix and has the interesting property of correlating with planarity, outer planarity and linkless embeddability for different values of the number. Lovász and Schrijver showed that a Colin de Verdière matrix (a matrix achieving the number) can be used to embed a planar graph on the sphere (using the geodesics as edges) so that each face is the intersection of the sphere with a convex polyhedral cone.

Lovász, in later work, then showed that this matrix can be used to generate a Steinitz representation of a planar graph as well. Which finally brings me to the question I wanted to ask:

Is there anything known about the size complexity of the Steinitz representation that the Lovász construction generates ? My references for the size of the Steinitz representation are quite old (the Ziegler book and the Onn/Sturmfels paper) and maybe there is a known polynomial upper bound ? If not, could the Lovász construction be a good candidate ?

It's worth noting that for higher dimensional polytopes, it is known that a doubly exponential representation is the best one can hope for. As often happens, the 3D case is special in this regard.

Update: David Eppstein points out that Chrobak, Goodrich and Tamassia showed a bound of O(n log n) bits per vertex (instead of roughly n³) in SoCG 1996 (they apparently weren't aware of the Onn/Sturmfels result though).

Update II: Isn't the internet great ! Another commenter points out a monograph by Jürgen Richter-Gebert on the realization of polytopes. Here's where things get interesting. The monograph (dated 1996) has two Steinitz realizations. One is a general bound that uses 18n² bits per vertex (better than Onn/Sturmfels), and the other, which applies to simplicial polytopes (specifically, realizations of the 3-connected planar graphs that Chrobak et al consider), uses a linear number of bits per vertex (something like n·log 43) ! So this work improves on the Chrobak et al work, while being roughly concurrent (both author groups were unaware of each other's work).

Of course all of this predates the Lovász construction, so the possibility of further improvement still exists.


Saturday, August 26, 2006

Mathematics and Wikipedia

From yet more ink spilled over l'affaire Perelman, a description of mathematics apropos for these modern times:
Mathematics is supposed to be a Wikipedia-like undertaking, with thousands of self-effacing scriveners quietly laboring over a great self-correcting text.




A language for computational topology

sigfpe makes a case for Haskell, using simple Haskell code to compute Betti numbers.


Tuesday, August 22, 2006

The "I-Team" vs the A-Team.

From Terence Tao's Fields Medal announcement:
He did this work in collaboration with four other mathematicians, James Colliander, Markus Keel, Gigliola Staffilani, and Hideo Takaoka. Together they have become known as the "I-team", where "I" denotes many different things, including "interaction"... The word "interaction" also refers to interactions among the team members, and indeed collaboration is a hallmark of Tao's work.
Consider this, from the A-Team:
I like mathematical progressions, but we're really picky about whom we work for.
Given one of the things Tao won the Fields Medal for, this is strangely appropriate.



Replacing impossibility by infeasibility

One of the great conceptual breakthroughs in cryptography was the idea that an impossibility - this code cannot be cracked - can be replaced by infeasibility - this code is computationally hard to crack - with great success. Indeed, the notion that all you need to do is fool a computationally bounded adversary is at the core of derandomization.

It's an idea that transcends theoryCS - that you can model entities as computationally bounded objects, and can then prove interesting things about them that you couldn't have if the entity were all-powerful. And it's realistic ! As long as you're willing to believe in the effective variant of the Church-Turing thesis (all effective computations are in BPP; we'll pretend quantum computing doesn't exist for now), it accurately models any computational entity you can choose to build.

Which brings us to social choice and voting theory. As you may well be aware, Arrow's theorem is a rather depressing impossibility result for designing a "fair" election. Either you lose some nice property (no dictatorship!), or some adversary can find a way to beat the system another way. But given all of the above, a more plausible question one might ask is: can a computationally bounded adversary crack an election ? (Disclaimer: I have no evidence that Diebold is a computationally bounded adversary; it certainly has no resource constraints)

What got my attention were two new papers on the arxiv, one by Faliszewski, Hemaspaandra, and Hemaspaandra, and the other by Hemaspaandra, Hemaspaandra, and Rothe.

The first paper asks the question that Katherine Harris must have surely wondered about, "How Hard Is Bribery in Elections?". The short answer is, "It depends", and the long answer is that depending on the type of voting scheme and who's doing the rigging (a briber or the voters themselves), the problem goes between being in P and being NP-Complete.

The second paper studies the problem of when the organizer of an election seeks to rig it:
Electoral control refers to attempts by an election's organizer (``the chair'') to influence the outcome by adding/deleting/partitioning voters or candidates.
In some circles, this is also known as the venue selection process for SODA. What the authors show is that by combining elections in a certain way, the overall election can be made resistant (i.e. NP-hard) to control by the organizer.
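To get a feel for why the answer is "it depends", here's a sketch of the easy end of the spectrum: plurality voting where every bribe costs the same. A greedy that keeps flipping a supporter of the current front-runner is optimal here (each bribe is +1 for your candidate and -1 for the strongest rival); the hardness in these papers comes from richer voting rules and weighted or priced voters. The code and its tie-means-winning convention are my own illustration:

```python
from collections import Counter

def min_bribes(votes, c):
    """Fewest unit-cost bribes to make candidate c a plurality (co-)winner."""
    votes, bribes = list(votes), 0
    while True:
        tally = Counter(votes)
        rivals = {x: k for x, k in tally.items() if x != c}
        if not rivals or tally[c] >= max(rivals.values()):
            return bribes
        strongest = max(rivals, key=rivals.get)
        votes[votes.index(strongest)] = c    # bribe one of their supporters
        bribes += 1

print(min_bribes(['A', 'A', 'A', 'B', 'B', 'C'], 'C'))   # -> 1 (a tie suffices)
```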



Saturday, August 19, 2006

Originality vs understanding, and the nature of creativity.

One of the comments in the SIGGRAPH forum I talked about earlier was about the (mistaken) emphasis on original trivial contributions versus "incremental" but more profound contributions. The statement is of course stacked (who would argue in favor of "trivial" contributions !), but it brings up a point worth noting.

You can be creative while being original, and indeed this is the standard way one thinks about creativity. But solely emphasizing originality misses a deeper truth about why we do research, which is the acquiring of wisdom and understanding. Indeed, it takes a certain, different kind of creativity to deeply understand certain concepts, often by establishing connections between known areas or making subtle observations about the relationships between entities. Indeed, in computational geometry, we have a name for one such kind of creativity: "Chan's method" :)

Originality and understanding are part of the larger process of acquiring knowledge and wisdom about a domain. A new area of knowledge profits from originality because problems start blossoming forth, and ideas come fast and furious. Over time, deep understanding leads to the building of connections among known concepts, and this further strengthens the area. Mature areas like mathematics find great value in the establishing of connections; the whole of category theory can be viewed as one gigantic connection generator.

Computer science is a young discipline; like all young disciplines, new ideas are valued, and they come so fast that there is often little time to reflect and make connections. My personal belief (your mileage may vary) is that at least in the realm of algorithms and geometry we've reached a kind of consolidation limit point, where more and more often we are running up against problems that call for brand new ways of thinking, and where "more of the same" kinds of results seem less interesting. In such a setting, results that prove less, but connect more, seem more valuable to me.




Wednesday, August 16, 2006

SIGGRAPH, hiring, and peer review.

There's been an interesting flareup in the graphics community prompted by the posting of a note by Michael Ashikhmin on his web page. Ashikhmin is a known graphics researcher (with SIGGRAPH papers to his credit) who recently (June this year) proclaimed his desire to leave the graphics community because of
my deep disgust for the state of affairs within computer graphics research community and my inability to fit well within existing system
His grievances are many, and appear to be a direct consequence of the hegemonic nature of SIGGRAPH, by far the most prestigious conference in graphics. Specifically, he argues that all the usual problems with conference reviewing (extreme subjectiveness, poor quality of the reviews, clique-formation) are exacerbated by the overwhelming influence SIGGRAPH papers have on one's career (being hired, advancing in one's career, getting tenure, etc).

None of the objections are particularly novel; indeed, I have yet to go to a theory conference where people have NOT complained about the reviewers. However, what takes this beyond your ordinary embittered-researcher rant is that many prominent researchers in graphics have publicly agreed with him.

Michael says in his letter that senior graphics researchers recommended that he host a forum devoted to discussing this issue, and once he set the forum up, many well known and respected graphics researchers (John Hart, Marc Levoy, and Jonathan Shewchuk among others), commented publicly on the matter. In fact, Marc Levoy (who's the papers chair for SIGGRAPH 2007) went further and organized a town hall meeting at SIGGRAPH 2006 to discuss paper reviewing procedures (I don't know what transpired there).

There are many comments on the forum from anonymous commenters who claim to be published authors at SIGGRAPH. As far as I can tell, not one person disagrees with the primary claims that Michael makes, although Marc does attempt to mount a defense of the paper review process, while still acknowledging the main problems, and outlining strategies that he will employ in 2007 to fix some of them.

Many good suggestions were made. One of the primary ones was to add a new SIGGRAPH-like conference so that one venue didn't have to take all the load (STOC and FOCS were cited favorably here). Prohibiting committee members from submitting was another idea (again, the theory community was cited), although this was frowned upon by Marc Levoy, who complained that he wouldn't be able to find people for a committee (he did aver that he wouldn't be submitting anything).

This is probably the first time I've seen this kind of discussion take place (partly) on the record without dismissing the complaint as sour grapes. The question of course is whether anything will come of it in the long term. It's worth mentioning that even in our conservative (by nature, not by politics) research world, change can happen, and can often happen rapidly. Witness the collective revolt by academicians of all stripes against Elsevier, and closer to home, consider the split in ICRA (one of the main robotics conferences) to form RSS (Robotics: Science and Systems).



Tuesday, August 15, 2006

Poincaré's conjecture

Dennis Overbye has a great article in the NYT today about the resolution(?) of Poincaré's conjecture. He does an excellent job describing both the conjecture itself and the main line of attack that Perelman follows, while narrating the history of the proof both accurately and without unnecessary romanticization of the mathematicians involved.

For a more detailed nontechnical survey that goes more into the roles of the different players, see Allyn Jackson's article in the latest Notices of the AMS. A more technical account is presented in a manuscript by Shing-Tung Yau.



Thursday, August 10, 2006

Guns don't kill people, people kill people.

I was watching CNN, and a talking head (who happened to be a former Somebody in Israeli aviation) was making a point that I thought was quite reasonable. He said that it's too hard to focus on the method of making explosives, since explosives sufficient to destroy a plane in mid-air can be made from a variety of chemicals. It was more important, in his view, to focus on the people executing the missions; it was much harder to hide oneself from an acute observer than it was to hide chemicals.

Israeli immigration is notorious for doing exactly this; I've never been to Israel, but colleagues tell me of being interrogated to within an inch of their lives about the details of theorems in the papers they were presenting. After 9/11, I remember similar arguments being made in the US, but they quickly got bogged down in issues of racial profiling, and we quickly found that randomly pulling out gentle Caucasian grandmothers from Iowa was a reasonable thing to do.

So what's the computer science angle here ? If we think of computer security, I'd argue that many of the discussions revolve around blocking techniques: this attack on that cryptosystem, that weakness in this OS, and so on. It seems like the security precautions being put into place on the airlines now are just like that: a weakness is revealed, a (usually overblown) patch is put in place, and so on.

This approach appears completely unsuited for protecting against "real" security hacks though ! The much-derided 'humint' approach appears to be far more effective (and far less overtly intrusive to the population at large).



Precision, recall, and bottles of water.

The latest airline explosion plot has sparked a predictable overreaction by the much-beloved TSA, which once again demonstrates that it lacks a basic understanding of precision and recall.

Let's recap, shall we ? Suppose you're trying to find all elements of a subset S in a universe U, and return as an answer the set H.

Precision measures the quality of your answer; what fraction of the elements of H are indeed in S ? The higher the precision, the lower the false-positive rate.

Recall measures the efficacy of the procedure; what fraction of the elements of S did you actually find ? The higher the recall, the lower the false-negative rate.

As any statistician will tell you, false positives and false negatives trade off against each other: tune a procedure to cut down on one, and the other tends to creep up. Search engines need high precision (they also need good ranking, but that's a different story).
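To make the definitions concrete, here's a minimal sketch in Python; the universe and the sets here are made up purely for illustration:

```python
# S: the set we want to find (within some universe U); H: the answer returned.
S = {"plotter1", "plotter2"}                       # actual threats
H = {"plotter1", "grandma", "tourist", "student"}  # what the screening flagged

true_positives = H & S
precision = len(true_positives) / len(H)  # fraction of flagged items that are threats
recall    = len(true_positives) / len(S)  # fraction of threats that got flagged

print(f"precision = {precision:.2f}")  # 0.25: three innocents flagged
print(f"recall    = {recall:.2f}")     # 0.50: one threat slipped through
```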

The TSA is clearly going for high recall. Banning bottles of water will surely eliminate any future plans for liquid explosives that use water, but it also eliminates the many (how shall I say) *innocent* uses of water. The precision, needless to say, is abysmal.

p.s. I don't mind the short-term enforcement of stricter guidelines while law enforcement attempts to figure out what kinds of explosives were being designed. I just have no faith that the new draconian regulations will be relaxed any time soon, considering how long it took before we were allowed to carry nail clippers on board flights again.


Categories:

Thursday, August 03, 2006

Timestamping using the arXiv...

When is it appropriate to post an article to the arXiv ? You could opt to post when
  • A journal/conference accepts the paper. You could also just post it on your website at that point, although the arXiv does allow for better dissemination.
  • You submit to a journal/conference. I'm told that this is often the model in areas of physics. Seems reasonable enough, although at least in CS, the fear of being "scooped" might prevent you from doing so, not to mention the often-byzantine "double-blind" policies of some conferences.
  • You've dried yourself off from the shower you were taking when inspiration struck. Ok, so maybe I'm exaggerating slightly, but...
Can the arXiv be a valid reference for time-stamping ? In other words, can you use a document posted on the arXiv as proof of 'prior art' ? Consider the following scenario: Santa and Banta* are both working furiously (and independently) on a brand new result in synthetic polymorphic differential logic. Santa submits his paper to the ACM Symp. on SPDL, while Banta misses the deadline. Banta, smartly enough, posts his manuscript on the arXiv, while Santa's paper is deemed not algorithmic enough and is rejected summarily from SPDL.

When IEEE SPDL comes along, who claims precedence ? Certainly not Santa, since he has no document to be cited. But can Banta claim precedence merely by posting on the arXiv ? There has been no peer review, after all, and his proof could be full of holes.

It would be easy enough to declare that Banta has no claim to precedence, since there is no peer-reviewed citable work available. But there are three problems with this:
  • It negates the value of the arXiv ! After all, if I cannot claim any kind of precedence, but someone can pick over my result and improve it, what incentive do I have for posting anything ? One answer could be that my result is cast-iron and can't be improved, but that happens far less often than one might like, and cannot be the answer in all situations.
  • It ignores common practice within the community ! People will often talk about new results they have, and if the result is important enough, word will spread, and ownership will be (informally) established.
  • People cite technical reports all the time ! One of the founding papers of streaming algorithms was (and has remained) a DEC technical report.
Personally, I'd still hold up the peer-reviewed publication as the best model for timestamping. But this also explains why the arXiv appears more popular among physicists and mathematicians, who publish primarily in journals. After all, the likelihood of a paper being rejected by a journal can be driven far lower than the corresponding likelihood at a conference, and so posting to the arXiv is a relatively risk-free venture. I also suspect that people hold off on more controversial work, sending it to the arXiv only once it has been accepted at a peer-reviewed venue.



* As my Indian readers will know, Santa and Banta often star in jokes making fun of the lack of intelligence of a particular ethnic community in India. At least in this anecdote, they are both smart researchers; consider it my contribution to ethnic amity. :)


Categories:

Musings on the arXiv.

Things that make you go 'hmmmm', or why arXiv can't replace peer review:
The polynomial solution of graph isomorphism problem is obtained by consideration symmetry properties of regular $k$-partitions that, on one hand, generalize automorphic $k$-partitions (=systems of $k$-orbits of permutation groups), and, on other hand, schemes of relations (strongly regular 2-partitions or regular 3-partitions), that are a subject of the algebraic combinatorics.
Although I appreciate the value of the arXiv, there are some things about it that I still don't get.
Take the paper excerpted above. It claims to solve graph isomorphism in polynomial time, a result that, if true, would be quite momentous. The abstract already warns me that the technical material will be complex, and (call me parochial, shallow, what-have-you) the complete absence of grammatical structure does raise some tiny red flags.

So where does this leave me ? I'd like to know whether there is any merit to this paper. In the absence of peer review, I'd have to wade through it myself, or hope that someone with more technical expertise in the material does it for me, and announces their verdict in a forum where I can hear about it.


Categories:

Monday, July 24, 2006

Where duality helps

If you do geometry (or approximation algorithms, for that matter) long enough, it becomes second nature to flip between primal and dual space. Points become hyperplanes, convex hulls become envelopes, and so on.

Here's a question that has been puzzling me: What is a good example of a problem where flipping between primal and dual (or even just going from one to the other) is the key to its solution ? Now obviously, there are NP-hard problems that admit a primal-dual solution, where by definition we need the dual space. I don't mean these kinds of problems.

I'm merely wondering if there is a relatively simple problem with a relatively simple solution that would be hard to explain without duality. The easiest example that comes to mind is the fact that the complexity of the convex hull in 3D is linear (because the complexity of the lower envelope of planes in 3D is linear). Another potential example is the rotating calipers method for finding the diameter/width etc.: it can be much easier to see what's going on in the dual space.
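For concreteness, here's a tiny sketch (my own toy example, with made-up coordinates) of the standard point-line duality in the plane, where the point (a, b) maps to the line y = ax - b and the line y = mx + c maps to the point (m, -c); the sketch checks that above/below relationships are preserved:

```python
def dual_line(p):
    """Dual of the point (a, b): the line y = a*x - b, as (slope, intercept)."""
    a, b = p
    return (a, -b)

def dual_point(line):
    """Dual of the line y = m*x + c: the point (m, -c)."""
    m, c = line
    return (m, -c)

def above(p, line):
    """Is the point p strictly above the line y = m*x + c ?"""
    m, c = line
    return p[1] > m * p[0] + c

p = (2.0, 5.0)   # a point
l = (1.0, 1.0)   # the line y = x + 1

print(above(p, l))                         # True: p lies above l
print(above(dual_point(l), dual_line(p)))  # True: the relationship is preserved
```

The punchline is that incidence and above/below are both preserved, so a statement about points lying above lines becomes, for free, a statement about lines lying above points.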


Categories:

Tuesday, July 18, 2006

Windows and Linux, side by side

Captain's log, Sector 43, 3rd Quadrant, HELL. Temperature, -19 F:

Microsoft Corp. on Monday said it is teaming up with Linux supplier XenSource to allow computers to run the upcoming version of Microsoft's Windows server operating system on computers that are simultaneously running Linux software.

In the latest sign that it is dropping its resistance to Linux, Microsoft of Redmond, Washington is teaming up with Palo Alto, California-based XenSource to compete with VMware, now the biggest supplier of so-called "virtualization" software.




Categories:

Thursday, July 13, 2006

Visa problems when travelling to the US ?

John Langford advertises an effort to document cases of visa problems for researchers trying to enter the US.
A serious effort is under way to raise this as an issue in need of fixing. Over the long term, effectively driving research conferences to locate outside of the US seems an unwise policy. Robert Schapire is planning to talk to a congressman. Sally Goldman suggested putting together a list of problem cases, and Phil Long set up an email address immigration.and.confs@gmail.com to collect them.

If you (or someone you know) has had insurmountable difficulties reaching a conference in the US, please send an email with:

Name:
Email address:
Conference:
Difficulty:
Details: (be brief please)

We expect most of the problem cases are students, so don’t be shy.
I'll second that. DON'T BE SHY. Efforts like this can actually work, and what makes them happen is access to the right people, and a good set of cases to fight for. The more cases there are (and all of us know of at least a few), the more powerful an argument we have. So please send mail to the above address.


Categories:

Tuesday, July 11, 2006

7 blasts in Bombay

Developing...

All the blasts were along the Western Railway, in first-class compartments, during the evening rush hour (around 18:30). Death tolls are unconfirmed, but expected to be high. Mumbai Help is the central information source right now.

Friday, July 07, 2006

EurekaUK: Great discoveries coming out of Britain in the last 50 years.

An interesting list; this item in particular caught my eye:
Section four: Discoveries for the digital age

Manchester: birth of the first working computer
Two University of Manchester scientists, Freddie Williams and Tom Kilburn, are credited with running the world's first stored-program computer.

I grew up thinking that one of ENIAC/EDSAC/EDVAC was the first stored-program computer. It turns out that ENIAC came close to being one, but wasn't. There's more info at Wikipedia. According to Wikipedia, the Manchester "Baby" was developed in 1948, which was more than 50 years ago :)



Categories:

SODA submits: stunning drop

Muthu announces that there were 380 submissions to SODA this year. This is a stunning drop: last year drew some 450 submissions, and the year before, 487 (although many of those were short papers). That's a decline of roughly 15% in one year, and the lowest submission count since 2002, coinciding with a much-larger-than-normal increase in the committee size.



Categories:

Wednesday, June 28, 2006

FOCS 2006

So the FOCS paper list is out, and there appear to be a number of very interesting geometry papers on it. I'll have to wait till people start putting papers online to comment on them in detail, but even looking at geometry alone, there are papers on point location, the upper bound theorem, clustering and k-means, and algorithms for hyperbolic spaces.



Categories:

Wednesday, June 21, 2006

Breaking news: soccer balls no longer polyhedral !

The soccer ball provides a great example of Euler's formula (I'm told it's also fun to kick it around and stuff). It consists of 20 hexagons and 12 pentagons, and a standard question one can ask is: why can't we tile the whole surface with hexagons ? This shape is also called the Buckyball, and is a special case of a carbon allotrope called a fullerene.
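For completeness, here's the standard counting argument, written out in LaTeX; it assumes (as on a soccer ball) that exactly three faces meet at each vertex:

```latex
% Hexagons alone: each face has 6 edges, each edge borders 2 faces,
% and each vertex meets 3 edges:
\[
  2E = 6F, \qquad 2E = 3V
  \quad\Longrightarrow\quad
  V - E + F = \tfrac{2E}{3} - E + \tfrac{E}{3} = 0,
\]
% contradicting Euler's formula $V - E + F = 2$ for the sphere.
% With $P$ pentagons and $X$ hexagons ($F = P + X$), the same count gives
\[
  2E = 5P + 6X, \qquad 3V = 2E,
\]
\[
  6(V - E + F) = -2E + 6F = -(5P + 6X) + 6(P + X) = P,
\]
% so Euler's formula forces exactly $P = 12$ pentagons.
```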

Apparently, the soccer balls being used at the World Cup are no longer polyhedral. They consist of 14 pieces, many of which look a lot like the squashed oval shape you see on tennis balls and baseballs. This makes the ball rounder and faster, and apparently gives it baseball-like effects when moving through the air. Players (and especially goalkeepers) have been complaining about this, but then they complain every World Cup, so....


Categories:

Tuesday, June 20, 2006

Changing the way power-law research is done.

From time to time, I have had Michael Mitzenmacher comment on issues relating to power-law research. Michael now has an editorial in the latest issue of Internet Mathematics, on new directions for power-law research.

I highly recommend reading it, whether you work in this area or not. It addresses the main point that has always made me uncomfortable about power-law research: that almost anything looks like a power-law if you squint hard enough. Michael makes the argument that
while numerous models that yield power law behavior have been suggested, and in fact the number of such models continues to grow rapidly, no general mechanisms or approaches have been suggested that allow one to validate that a suggested model is appropriate.
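To see how easy it is to fool oneself, here's a minimal toy experiment (mine, not from the editorial): samples drawn from a lognormal distribution, which is not a power law, still produce an excellent straight-line fit on a log-log plot:

```python
import numpy as np

rng = np.random.default_rng(0)
# Lognormal samples: emphatically NOT power-law distributed.
samples = rng.lognormal(mean=3.0, sigma=2.0, size=200_000)

# Empirical frequencies on logarithmically spaced bins.
bins = np.logspace(0, 5, 50)
counts, edges = np.histogram(samples, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers

# Fit a straight line in log-log space over the upper tail.
mask = (counts > 0) & (centers > np.exp(3.0))
logx, logy = np.log(centers[mask]), np.log(counts[mask])
slope, intercept = np.polyfit(logx, logy, 1)
resid = logy - (slope * logx + intercept)
r2 = 1 - np.sum(resid**2) / np.sum((logy - logy.mean())**2)

print(f"'power-law exponent': {slope:.2f}, R^2 = {r2:.3f}")
# R^2 typically comes out well above 0.9, "validating" a power law
# that was never there -- which is exactly the validation problem.
```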
There is a larger point here about "algorithms on data" and "algorithms on structures" that I want to talk about, but I'll keep that for a later post.


Categories:

Wednesday, June 14, 2006

All very blunt-rollin' shiznit

Dick Karp recommends: "Keep the party crackin while I'm steady rappin':"

Lance Fortnow is stoked. "Real niggas recognize the realness"

Ok, maybe not.



Categories:

Tuesday, June 13, 2006

SoCG 2006: The "shape" of things to come...

Sedona is very hot. Not hot enough to fry an egg on the pavement, but still very hot. Hot enough that any hiking activities must be conducted at 5:30 AM or so, so that one can be back in one's air-conditioned, ozone-layer-killing hotel by 7:45, when things really start heating up.

On Monday, a bunch of us had hiked out to the Bell Rock, a rather large bell-shaped rock near the hotel. On Wednesday (I was too cowardly to brave the elements on Tuesday), we decided to climb as far up the rock as we could go. Lest you think that I am some kind of rock climbing savant, I will assure you that no such thing is true. The rocks were rough enough and the trails well marked enough that we could get pretty high up without needing technical apparatus of any kind.

Here, in all its beauty, is Bell Rock:

[photo: Bell Rock]
From this angle, it's hard to explain how high we got, but it was high enough :). It started raining as we descended. The rocks proceeded to get very slick, but in every other way the rain was highly welcome. The rest of the morning was pleasantly cool as a result; so much so that it was hard to stay indoors and attend the first session.

Even though it was day 3 of the conference, which usually means my brain is fried and I am longing to return home, I really wanted to attend the first session. You see, this session was one of many on the burgeoning area of computational topology, a topic that now has a significant presence within computational geometry.

In a sense, there is no surprise that computational topology has become an important area. One of the main application areas of geometry is the understanding of shape, whether it be the triangulated meshes that constitute shapes in an animation, or the intricate surfaces induced by electrostatic forces at the surface of a protein molecule.

Topology provides much of the underlying mathematics of shape. Not all of it: metric properties of the shape are obviously important. But a good deal of it. A classic example of this is the following question:
given points sampled from a shape, under what conditions can I reconstruct a shape that is geometrically and topologically similar to the original shape ?
Many results over the years have attempted to answer this question via "sampling criteria" of the form: "if you sample points so that they satisfy some condition SC, then there's an algorithm that will correctly reconstruct the shape within some error bounds". The goal is to find as relaxed a condition SC as possible, given that we often don't have control over the sampling process.

Much work at this year's SoCG was on new sampling criteria and new ways of representing the medial axis (an important structure that acts as a kind of "skeleton" of a shape). In other words, the mathematics and the algorithmics of reconstructing shapes.

Another thread of work is on a notion called 'persistence'. Persistence is a little tricky to define intuitively (and I strongly recommend Afra Zomorodian's excellent monograph), but the basic idea is that you first create a sequence of evolving structures for a shape. One example of this could be the level sets of a terrain as you increase the "level". Then, you look for features that are "long-lasting", or "persistent" within this sequence. Such features are likely to be more permanent parts of the shape, and can be used to identify key parts of the shape. Much work has now gone into computing persistence, doing it in a stable fashion, and trying to generalize it in various settings.
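To make this slightly more concrete, here's a toy sketch (my own illustration, using the standard union-find formulation) of 0-dimensional persistence for a function sampled along a line: sweep the level upward, and record a (birth, death) pair whenever two components of the sublevel set merge:

```python
def persistence_pairs(values):
    # A component is "born" at a local minimum and "dies" when it merges
    # with an older component (one born at a lower value).
    n = len(values)
    parent = list(range(n))
    processed = [False] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda k: values[k]):
        for j in (i - 1, i + 1):  # neighbors on the line
            if 0 <= j < n and processed[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if values[ri] < values[rj]:
                    ri, rj = rj, ri  # ri is now the younger component
                if values[i] > values[ri]:  # skip zero-persistence pairs
                    pairs.append((values[ri], values[i]))  # (birth, death)
                parent[ri] = rj
        processed[i] = True
    return pairs  # the component of the global minimum never dies

# Three valleys, at heights 0.0, 1.0, and 0.5; the shallower ones die
# when the rising level spills over the peaks separating them.
print(persistence_pairs([0.0, 3.0, 1.0, 5.0, 0.5, 4.0]))
# [(1.0, 3.0), (0.5, 5.0)]
```

The long-lived pair (0.5, 5.0) corresponds to a deep, prominent valley; the short-lived pair (1.0, 3.0) is the kind of feature one might dismiss as noise.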

In general, persistence is a way of identifying important elements of a shape, and can often be a "subroutine" in higher level topological analysis of shape.

A third thread that's somewhat distinct from the above, but deals squarely with "algorithmic topology" is exemplified by work on computing paths on shapes of different topological characteristics. Applications of this work are less immediate, but one can imagine using such algorithms to deal with the manifolds generated via surface reconstruction etc. I'm sure Jeff Erickson has more to say about this.

To work in this area, you need a very good grasp of topology (combinatorial, algebraic), some Morse theory, and of course good algorithmic and geometric intuition. The first two topics are not usually covered in a graduate degree in algorithms, and that makes this field a bit harder to get into, but its ranks are growing.

Computational topology is definitely a new and challenging direction in geometry. I wouldn't be surprised if it ended up having its own conference sooner or later: I hope not though. The cross-fertilization between topology and more traditional computational geometry is one of the more interesting trends in the field right now.



Categories:

Euler meets Homer... D'oh !

More news on Jeff Westbrook, our theory "mole" inside Hollywood (he's not Jennifer Garner, but he does just fine). Jeff now writes for The Simpsons, and is apparently trying to get an Euler's theorem reference into the show.

(HT: David Poole via Chris Volinsky)


Categories:
