Sunday, October 31, 2004

E-voting: The Blog

A new computer science blog, this one on e-voting. It has been started by a number of experts in e-voting, including AT&T Labs alumnus Avi Rubin. Here is the first message, posted by Ed Felten:

Welcome to evoting-experts.com.

Our panelists will be posting news and commentary on e-voting issues. We hope our site will serve as a central source of information and insight about e-voting, straight from some of the leading independent experts.

This is a non-partisan site. Our goal here is not to advance any political agenda, but to help ensure that all votes are counted fairly and accurately, and to provide honest expert commentary to the public and the press.


Given the amount of scrutiny that this election will face, and the amount of FUD that will be generated by the parties and the media over voting processes, this will hopefully be a good resource for understanding the real issues behind e-voting and how it affects this year's prelude to a recount.

Thursday, October 28, 2004

Mathematics vs Statistics

This amusing excerpt is from an interview of Bradley Efron, inventor of the bootstrap method in statistics:
I thought I was going to be a mathematician. I went to CalTech, and I think I would have stayed a mathematician if mathematics was like it was a hundred years ago where you computed things, but I have no talent at all for modern abstract mathematics. And so I wanted to go into something that was more computational. After CalTech I came to Stanford. And statistics was definitely better.
Incidentally, Efron's book is a very nice introduction to the bootstrap method, striking that delicate balance between 'So how do I use this stuff?' and 'So why does this really work?'.
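For the curious, the simplest version of the bootstrap is only a few lines of code. Here is a minimal sketch of the percentile bootstrap (the function names and the toy data are mine, purely illustrative): resample the data with replacement, recompute the statistic on each resample, and read a confidence interval off the empirical quantiles.

```python
import random

def bootstrap_ci(data, stat, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a statistic.

    Resample the data with replacement, recompute the statistic on each
    resample, and read the interval off the empirical quantiles.
    """
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

sample = [2.1, 2.4, 1.9, 2.8, 3.0, 2.2, 2.5, 2.7, 1.8, 2.6]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(sample, mean)
```

Efron's book goes well beyond this, with bias-corrected and accelerated intervals among other refinements; this is only the 'so how do I use this stuff' end of the balance.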

Wednesday, October 27, 2004

Some NSF info

I would have called this 'NSF news' except for the fact that it is over three months old. The CRA had a conference in Snowbird in July, and among the presentations there was one on trends in NSF funding, by Greg Andrews from CISE. The presentation itself is quite short: some interesting facts...
  • Submissions to CISE have gone up 125% from 1997-2003, and are expected to grow even faster in 2004.
  • The CCF (which includes most of theory, graphics and geometry, but not databases) had a 15% accept rate for CAREER awards, but only 5% for other proposals.
  • It appears that there will be no non-CAREER competition for CCF grants in FY 2006. This is attributed to budget pressures, and it is indicated that things will be 'back to normal' in FY 2007.
As a comparison, acceptance across the board in CISE is roughly 16%. CNS (Computer and Network Systems) had much higher success rates (30% CAREER, 17% otherwise).

Not being at a university, I don't know how much of this is already old news. This post was prompted by a lunchtime discussion on pressures to publish, write grants etc. It also appears from anecdotal evidence that there are more people submitting more proposals than ever (akin to the situation with paper submission to conferences).

Tuesday, October 26, 2004

Dynamic Optimality - some progress....

A binary search tree is probably the most basic data structure in computer science, and is one of the first structures you come across when learning algorithms. Given a set of keys and an order defined on the keys, a binary search tree maintains the property that the key of a node is greater than all keys of left descendants and less than all keys of right descendants.

One of the most important outstanding problems in data structures is designing an optimal BST. Structures like red-black trees and splay trees can be shown to take O(log n) time (amortized) per insert/delete/find operation, and this is tight, in the sense that there exist sequences of updates of length m for which Omega(m log n) tree operations are required. For charging purposes, a single 'splay' or 'rotation' takes one unit of time.

However, if we restrict ourselves to a static universe, with only searches, can we do better with respect to the optimal offline solution ? The Dynamic Optimality conjecture, first stated by Sleator and Tarjan, posits that splay trees cost only a constant factor more than an optimal offline algorithm. Note that this optimal algorithm may change the tree (hence the term 'Dynamic'); if the optimal offline algorithm is forced to choose a fixed tree, then splay trees are asymptotically optimal.

A new FOCS 2004 paper by Demaine, Harmon, Iacono and Patrascu takes a crack at the Dynamic Optimality conjecture, presenting an algorithm that comes within a factor of O(log log n) of the optimal offline solution.
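To make the accounting concrete, here is a toy Python sketch of the unit-cost operation itself, the rotation, driving the simpler 'move-to-root' heuristic. This is not splaying proper (splay trees use zig-zig/zig-zag double rotations to get the amortized bounds), and certainly not the machinery of the FOCS paper; all the names are mine. Each rotation costs one unit, and the conjecture asks how few of them an offline adversary could get away with.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # standard BST insertion: smaller keys go left, larger go right
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def rotate_right(y):
    # one unit of time: lift y.left above y, preserving the BST order
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    # the mirror image of rotate_right
    y = x.right
    x.right, y.left = y.left, x
    return y

def access(root, key):
    """Find key (assumed present) and rotate it all the way to the root:
    the 'move-to-root' heuristic. Splay trees do something subtler to
    achieve amortized O(log n), but the unit-cost operation is the same."""
    if root.key == key:
        return root
    if key < root.key:
        root.left = access(root.left, key)
        return rotate_right(root)
    root.right = access(root.right, key)
    return rotate_left(root)

def inorder(t):
    return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)
```

After any sequence of accesses, the in-order traversal is unchanged: rotations rearrange the tree without disturbing the search-tree order, which is exactly why they are the right currency for this problem.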

Monday, October 25, 2004

Elections can be educational !

Jordan Ellenberg, novelist and math prof at Princeton, does the impossible: he provides a lucid explanation of both Bayesian analysis and Nash equilibria in the context of electoral strategy.

He also reads an awful lot...

Update: We could have a course on electoral math: Tall, Dark and Mysterious points out yet another set of articles on election math, this time focussing on the Banzhaf power index (a method for determining the relative power of blocs in a block voting system). For extra credit, consider the following:
The runtime of the programs doing these computations is already pretty high (O(2^n)), but I wonder if there are any probabilistic variations on this index as applied to the electoral college.
Is there a more efficient approach ?
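For concreteness, here is a brute-force sketch of the Banzhaf computation in Python (my own naming, and it assumes the quota is attainable): a bloc's power is its fraction of 'swings', winning coalitions that fail without it, and enumerating all coalitions is exactly the O(2^n) cost mentioned above.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf power index by brute force.

    For each bloc, count the coalitions in which it is critical: the
    coalition meets the quota with it and fails without it. Enumerates
    all 2^n coalitions, hence exponential time.
    """
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:
                        swings[i] += 1
    s = sum(swings)
    return [sw / s for sw in swings]
```

For example, banzhaf([4, 2, 1], 4) returns [1.0, 0.0, 0.0]: with a quota of 4 the first bloc is a dictator and the others are powerless despite holding nonzero weight, which is the kind of surprise that makes the index interesting for bloc voting systems.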

INDUCE is dead ?

A few months back, I had mentioned an abomination known as the INDUCE Act, that would generalize wildly the class of actions that could be construed as copyright infringement (e.g. making a device that could be used for copyright infringement). It is worth pointing out here that the software industry (one of the largest holders of copyrights) was against this bill.

Congress (or more specifically Orrin Hatch) was trying to get this bill passed, but it appears to be DOA, at least for this term. Read more about it in this engadget interview with Wendy Seltzer, an attorney with the EFF.

Saturday, October 23, 2004

Back to my roots :)

The first computer I ever programmed on was the ZX Spectrum 48k (yep, 48K memory)



External storage was a tape recorder (and watch that volume control otherwise the data transfer gets hosed !). I wrote BASIC programs, and a lot of assembly code video games. Come to think of it, I learnt some of my first AI techniques on this machine.

I also got an early lesson in technology envy; a friend of mine then acquired the 128k Spectrum, and thus I went very quickly from king of the hill to insanely jealous second-best :)

This nostalgic rant was brought upon by this site, allegedly one of the oldest continually running websites on the web.

Thursday, October 21, 2004

Didactic Writing/Research

William Gibson talks about writing and didactic novels, arguing that a novel loses aesthetic quality if it seeks to further a point of view. In his words,
...no genuinely valuable interrogation of reality can take place, and the result will be a literary virtuality built as exclusively from the author's expressed political philosophy as that author can manage. This is best understood, an excellent teacher of mine said, by asking ourselves whether or not a fascist can write a good novel.

...A fascist can't write a good novel because writing a good novel, in the end, is about relinquishing control of the text.
In a way, this could be true for research as well. In theoretical work we often possess a hammer, and go hunting nails to pound, but some of the best kind of research is the kind that beautifully slots into the problem being addressed, to the extent that one can only say 'How else could this problem have been tackled ?', and yet, possesses a generality (akin to universal truths in good literature) that appeals to our shared aesthetic of beauty and enriches the field as a whole.

Proof parodies...

More proof parodies, this set more in the line of philosophical proofs rather than mathematical proofs (via Oxblog).

All the 'proofs' deal with proving the claim 'p', and here is one of the best:
Most people find the claim that not-p completely obvious and when I assert p they give me an incredulous stare. But the fact that they find not-p obvious is no argument that it is true; and I do not know how to refute an incredulous stare. Therefore, p.

Counter-examples

Mathematics has a long tradition of using counter-examples as a way of illuminating structure in theory. Especially in more abstract areas like topology, canonical counterexamples provide a quick way of teasing out fine structure in sets of axioms and assumptions.

A brief foray through Amazon.com revealed catalogues of well-known counterexamples in topology, analysis, and graph theory. On the web, there are pages on counterexamples in functional analysis, Clifford algebras, and mathematical programming.

What would be good candidate areas for a list of counter-examples in theory ? Complexity theory springs to mind: simple constructions (diagonalization, what have you) that break certain claims.

In combinatorial geometry, one might be able to come up with a list of useful structures. Personally, I find the projective plane to be a useful example to demonstrate the limits of combinatorial arguments when reasoning about geometric objects.


Wednesday, October 20, 2004

FOCS attendance.

Adam Klivans notes that FOCS attendance is down, to about 172 registered attendees (which is like the reverse of announced attendance at sports events; more people show up than the official registered list), in comparison to STOC 2004, which had 100 more people.

The number of papers accepted at STOC this year was 72, in comparison to 62 at FOCS. In general though, (and I only went back a few years because I got tired of opening proceedings), STOC appears to accept 15-20 papers more than FOCS on average (75-80 vs 60-65). There was no significant submission increase (surprising given the trend lines for other conferences), and so one can only surmise that location had at least something to do with the low attendance. After all, if you are submitting (and getting accepted) more papers than before, and if funding is down, you'd have to be pretty careful about choosing meetings to go to, especially if you are not presenting, and are thus not constrained to attend.

Although the number of accepted papers is not out of the norm for FOCS, one does have to wonder whether there really are so few papers worth accepting ? I suspect this is constrained by the whole multi-track vs single-track, conference-as-prestige-stamp vs conference-as-meeting-place issue, and FOCS represents one extreme point.

Tuesday, October 19, 2004

Making scientific promises

Today, Arnold Schwarzenegger endorsed a proposition supporting funding for embryonic stem cell research. One of the many claims made by supporters of stem cell research is that it can help find a cure for Alzheimer's Disease, which afflicts nearly 3% of Americans between the ages of 65 and 74.

The concept of using embryonic stem cells to cure such diseases is tantalizing: in principle, the idea that such cells can be "nudged" into forming different kinds of adult cells indicates that cells (like brains cells) that do not regenerate can be replaced/replenished using stem cells.

What worries me though are the kinds of claims that are being made on behalf of stem cell research. Strategically, one can understand the desire to relate this to actual disease prevention (almost all NIH grant proposals mention a connection to cancer in the first few paragraphs !), but it also appears at least plausible that there is a long way to go from the basic science of stem cell development to an actual disease treatment. Suppose a cure for Alzheimer's is really fifty years away, or longer. Is there a risk of a 'crying wolf' effect, where the promises of the research are so far off that policy makers start becoming more skeptical ?

The reason I even bring this up is because I am reminded of a similar plight that overtook AI after its heyday in the 60s. Extreme amounts of hype, and the claim that soon computers could mimic humans, gave way to a serious backlash, and then finally a more nuanced understanding of the potential and limits of AI-related disciplines (fields like robotics/vision/learning appeared to flourish once they were not bound to the chains of "intelligence" and had more specific, local goals).

This may sound heretical, but sometimes working away from the limelight can be a lot better for a field; the real questions can be answered without having to worry about politics and controversy entering the picture (as in warming, global).

There is an element of blaming the victim here, I admit. After all, stem cell researchers would probably have been content to labor in obscurity if the issue hadn't been brought front and center by administration policy way back in 2001. Critics may complain that there is a serious ethical issue at stake here; I actually feel the ethical dilemma is more manufactured than real, hinging as it does on 'angels on the point of a pin'-like discussions about exactly when life starts.

Monday, October 18, 2004

Fall CG Workshop: Deadline approaching

As in, tomorrow !

This year's workshop has a special emphasis on open problems, and as always, previously published work is also welcome. The CG Workshop is one of the nicest venues to meet with other geometers and discuss new and old problems.

Sunday, October 17, 2004

Interview with Michael Atiyah and Isadore Singer

Via Not Even Wrong, a fascinating interview with Michael Atiyah and Isadore Singer, winners of the 2004 Abel Prize.

I took the liberty of reproducing some sections of this rather long interview: it was a pleasure to hear their views on matters that we all wrestle with.

On proofs:
Any good theorem should have several proofs, the more the better. For two reasons: usually, different proofs have different strengths and weaknesses, and they generalize in different directions - they are not just repetitions of each other. And that is certainly the case with the proofs that we came up with. There are different reasons for the proofs, they have different histories and backgrounds. Some of them are good for this application, some are good for that application. They all shed light on the area. If you cannot look at a problem from different directions, it is probably not very interesting; the more perspectives, the better!
On the specialization in math: this has been a topic of discussion in theory as well, and their holistic view of mathematics is comforting to those of us who see the inevitable splitting of the subareas of theoretical computer science. It also reminds me of Avi Wigderson's lecture at STOC this year on the way different areas in theory are connected.
It is artificial to divide mathematics into separate chunks, and then to say that you bring them together as though this is a surprise. On the contrary, they are all part of the puzzle of mathematics. Sometimes you would develop some things for their own sake for a while e.g. if you develop group theory by itself. But that is just a sort of temporary convenient division of labour. Fundamentally, mathematics should be used as a unity. I think the more examples we have of people showing that you can usefully apply analysis to geometry, the better. And not just analysis, I think that some physics came into it as well: Many of the ideas in geometry use physical insight as well - take the example of Riemann! This is all part of the broad mathematical tradition, which sometimes is in danger of being overlooked by modern, younger people who say "we have separate divisions". We do not want to have any of that kind, really.
On why researchers tend to get specialized (too) quickly:
In the United States I observe a trend towards early specialization driven by economic considerations. You must show early promise to get good letters of recommendations to get good first jobs. You can't afford to branch out until you have established yourself and have a secure position. The realities of life force a narrowness in perspective that is not inherent to mathematics. We can counter too much specialization with new resources that would give young people more freedom than they presently have, freedom to explore mathematics more broadly, or to explore connections with other subjects, like biology these day where there is lots to be discovered.
And finally, an eloquent argument for simplicity in proofs:
The passing of mathematics on to subsequent generations is essential for the future, and this is only possible if every generation of mathematicians understands what they are doing and distils it out in such a form that it is easily understood by the next generation. Many complicated things get simple when you have the right point of view. The first proof of something may be very complicated, but when you understand it well, you readdress it, and eventually you can present it in a way that makes it look much more understandable - and that's the way you pass it on to the next generation! Without that, we could never make progress - we would have all this messy stuff.

Wednesday, October 13, 2004

Just one theorem...

Maria Farrell laments that learning statistics does require mathematical skill, contrary to what instructors might think. However, the really interesting quote comes from a commenter at Kevin Drum's place:
if you understand the central limits theorm, then stats is (are?) relatively easy to understand. the problem is to get an instructor who understands and can teach the central limits theorm. i had one when i was getting my master's degree at a southern state directional university. the stats prof at the national university where i got my ph.d. was so impressed that i understood the SLT that he exempted me from 2 semesters of graduate statistics, and that hasn't hurt me in the least in my profession.
2 semesters for knowing the CLT ? I wonder if proving an NP-hardness result in the right direction warrants exemption from an algorithms class :)
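In that spirit, a toy simulation (entirely mine, purely illustrative) makes the CLT's point faster than two semesters might: draws from a badly skewed distribution, whose averages nonetheless pile up around the true mean with spread shrinking like 1/sqrt(n).

```python
import random
import statistics

random.seed(1)

def skewed():
    # squares of uniforms: a heavily right-skewed distribution
    # with true mean E[U^2] = 1/3
    return random.random() ** 2

# averages of many skewed draws concentrate around 1/3, and their
# spread shrinks like sigma / sqrt(n): the CLT in action
means = [statistics.mean(skewed() for _ in range(400)) for _ in range(500)]
spread = statistics.stdev(means)
```

The individual draws look nothing like a bell curve, but a histogram of `means` does, which is the whole trick behind most of classical statistics.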

Derrida and the 'two cultures'

I often find it amusing to rant (in appropriate company) about the lack of understanding of (or even interest in) science and mathematics shown by people in the 'humanities', even though at the same time, the exclusivity and inaccessibility of what I do can be a source of (shameful) joy.

But to be fair, the argument goes both ways. I like to think that I try to read and be aware of the arts (to whatever extent possible), but I know nothing about probably the most important literary theorist of this century, a revolutionary probably comparable to Einstein in the way he shook the foundations of his discipline.

Jacques Derrida died last week, and beyond the Cliff Notes-like 'Derrida... deconstructionism...text is everything...no meaning outside the text...', there is little I can say about his work.

Is it shameful ? Probably not. Should I stop complaining about the lack of awareness of the sciences among humanities folks ? Probably yes. Should I stop talking like Donald Rumsfeld ? Most certainly...

uh..Duh !!

Caffeine Withdrawal is Real!

Tuesday, October 12, 2004

Computing with reals...

Dealing with real numbers (as opposed to arbitrary fixed precision rationals) has always been an annoying problem facing the theory of computation. Many problems that we tend to address in theory are combinatorial in nature, and so it has been possible to either ignore the issue of how one deals with reals (ed: hmm...'deals with reals'...I like the sound of that), or pay a token obeisance via notions like strong NP-completeness.

Why is dealing with real numbers tricky ? Well, the first problem is one of representation: clearly we can't write down a real number. The second one is one of computation; assuming we have some black box that can store a real number, how can we add two such numbers ? multiply them ? do anything more complicated ? And how do we assign a cost to these operations ?

These are very complex questions, and there are far too many angles to cover in a single entry (or even in a single survey). But to emphasize that this is not merely a toy intellectual question, let me present some examples.
  • TSP in the plane is not known to be in NP. The problem here is that the computation of a distance involves calculating square roots, and this introduces precision issues with the number of bits needed to represent an answer.
  • A similar situation arises for Minimum Weight Triangulation, one of the original vexing problems in CG.
  • A careless use of operators can collapse complexity classes rather easily. As Jeff points out in this post, the floor function can be exploited to collapse PSPACE to P !
It is therefore rather important to understand the complexity of operations on reals and be very careful about the definitions one uses in this regard. In practice what one often does is assume a set of operations and assign unit cost to each (the so-called unit-cost RAM model). Of course, the floor example above shows that we have to be somewhat careful in doing so.
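Two toy experiments in Python (mine, purely illustrative) show both halves of the problem: floating point silently breaks exact identities involving square roots, while exact arithmetic at unit cost lets operands grow exponentially, which is the loophole behind tricks like the floor-function collapse.

```python
import math

# 1. With floating point, even the simplest identity involving a square
#    root fails, which is why exact distance comparisons in Euclidean
#    TSP are so delicate:
assert math.sqrt(2) ** 2 != 2   # evaluates to 2.0000000000000004

# 2. With exact arithmetic at unit cost, k multiplications can build a
#    number with 2^k bits. Repeated squaring of 2:
x = 2
for _ in range(10):
    x = x * x          # each squaring doubles the bit length
# ten "unit cost" operations later, x = 2^1024, a 1025-bit number
assert x == 2 ** 1024
assert x.bit_length() == 1025
```

The second experiment is why the unit-cost RAM model must restrict its operation set: allow floor (or unrestricted multiplication) at unit cost and you can smuggle exponential amounts of work into polynomially many steps.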

The work by Ben-Or, and later by Steele and Yao, on algebraic decision trees is based on performing single algebraic operations with unit cost, and then proving lower bounds on the complexity of algorithms in terms of the number of "sign tests" needed. One can think of this as a generalization of a simple comparison test, and it is fairly effective at proving lower bounds in settings where standard models don't work too well.

In fact, one can generalize algebraic sign tests to polynomial satisfiability tests; given a collection of polynomials over a set of variables, the decision is an appropriate sign signature (this poly must be positive, that one must be negative, etc). These are the so-called semi-algebraic sets, and going back to Tarski's work on the decidability of the first-order theory of the reals (it is decidable to check whether a given sign signature can be achieved), much work has been done on understanding the structure of these sets. It is worth noting that the Collins decomposition, a kind of 'vertical decomposition' for semi-algebraic sets, is one of the tools exploited by Mulmuley in his proof separating P from NC without bit operations (Mishra has a nice survey on computational algebraic geometry).


The idea for this post started when Chandra Chekuri pointed me to a survey by Lenore Blum in the Notices of the AMS titled 'Computing over the Reals: Where Turing Meets Newton'. What spurred me further was a discussion I had at dinner the other night; a graduate student was asking the following question:

Is the Mandelbrot set undecidable ?

The discussion continued onto a longer argument over the algorithms from numerical analysis and how they compare to algorithms in the traditional 'Turing' sense that operate over 0-1 inputs.

The Blum survey is a fascinating overview of the landscape of real computations (Blum, Shub and Smale have a book on complexity and real computation as well). Critically, it develops tools for talking about the complexity of algorithms on reals, by developing ideas of computation over a ring with "black-box" rational, algebraic operators as atomic computing units.

Thus, an answer to the above question becomes immediate:

The Mandelbrot set is undecidable.
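To see what decidability even means here, note that classically one can only semi-decide escape. A hedged Python sketch (names mine): iterate z -> z^2 + c; once |z| > 2 the orbit provably diverges, so non-membership can be certified, but 'still bounded after N steps' certifies nothing. The BSS model, with exact real arithmetic as the atomic operation, is what lets the question be posed (and answered) precisely.

```python
def escapes(c, max_iter=1000):
    """Return True iff c is certified OUTSIDE the Mandelbrot set: the
    orbit of 0 under z -> z^2 + c leaves the disk |z| <= 2 within
    max_iter steps (once |z| > 2, the orbit provably diverges).

    Returning False only means 'not escaped yet', never 'in the set':
    membership is semi-decidable in one direction only, which is the
    crux of the (un)decidability question.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False
```

For instance, escapes(1) is True (the orbit 0, 1, 2, 5, ... blows up) while escapes(-1) is False (the orbit cycles 0, -1, 0, -1, ...), and no finite cutoff will ever promote that False into a proof of membership.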

She also talks about the P vs NP question over complex numbers and the reals, (the "classical" P vs NP question is really over Z2), and establishes a connection between Hilbert's Nullstellensatz and satisfiability via the choice of ring. A quote from the article:
I have endeavored to give an idea of how machines over the reals tempered with condition, approximation, round-off, and probability enable us to combine tools and traditions of theoretical computer science with tools and traditions of numerical analysis to help better understand the nature and complexity of computation.
Studying computation over reals is important stuff. Computational complexity theory cannot claim to be a definitive theory of computation if it has no language to express the vast array of numerical algorithms that operate on reals. Moreover, at least a few people seem to think that the inability of a Turing machine to represent non-discrete computations is a severe blow to the Church-Turing thesis. Whether this is plausible or not, a well developed theory of computing with reals would go a long way towards clarifying the matter.

Sunday, October 10, 2004

Graph Drawing 2004: Conference report...

I have noted in the past that graph drawing is an interesting area on the boundary of geometry, combinatorics, graph theory, and visualization. This year's graph drawing conference was held in Harlem at the City College from Sep 29 to Oct 2, and Yehuda Koren was kind enough to submit a conference report. Here it is, with minor edits:



Graph Drawing 2004 was held on Sept. 29-Oct. 2, in City College, NY.

More than 50 full papers were presented, dealing with all aspects of drawing graphs/digraphs. These included works ranging from pure graph theory to the design of user interfaces, geometry-related papers, heuristics for graph embedding, and much more.

The invited speakers were Paul Seymour from Princeton University, who spoke on "The Structure of Claw-Free Graphs", and Erik Demaine from MIT, on "Fast Algorithms for Hard Graph Problems: Bidimensionality, Minors, and Local Treewidth" (ed: Erik and Mohammad Taghi Hajiaghayi have two papers on this topic in SODA 2005)

As usual, some talks were more interesting, some less, and some I missed. To give some impression to those of you who somehow missed this event, I want to mention here three works with a strong geometric motivation.

1. "Partitions of Complete Geometric Graphs into Plane Trees" by P. Bose, F. Hurtado, E. Rivera-Campo and D.R. Wood.

They deal with a known open problem: "Does every complete geometric graph with an even number of vertices have a partition of its edge set into plane spanning trees?"

A geometric graph is one in which the vertices are a set of points in the plane in general position (ed: amen) (that is, no three are collinear) and edges are closed segments connecting the points. A plane spanning tree contains all n vertices of the original graph, and no two of its edges intersect.

Of course, a complete graph with n vertices has n(n-1)/2 edges, so the partition must be into n/2 spanning trees. The answer is known to be affirmative for convex complete graphs (where convex means that the vertices are in convex position), so what was left for the authors in this case was to characterize all such partitions for convex complete graphs, which is pretty nice.

The authors also show the existence of such partitions for a broader family of graphs. And for dessert, they show that when the trees are not necessarily spanning (and then we may partition into more than n/2 trees), an upper bound on the number of trees is n - sqrt(n/12).

2. "Reconfiguring Triangulations with Edge Flips and Point Moves", by G. Aloupis, P. Bose and P. Morin

The authors deal with transformations between different triangulations. They show that O(n log n) edge flips and point moves can transform any geometric near-triangulation on n points to any other near-triangulation on n (different) points, improving a previous O(n^2) bound. They also suggest a more general point move that further reduces the complexity to O(n).


3. "No-three-in-line-in-3D" by A. Por and D. Wood.

Their work is based on Erdős's proof in 1951 (of a problem proposed by Dudeney in 1917) that one can place O(n) points in an n x n grid such that no three points are collinear.

The authors prove a smooth generalization to 3-D: one can place O(n^2) points in an n x n x n grid with no three collinear. They also consider additional generalizations more relevant to graph drawing.
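(Erdős's 2-D construction is simple enough to check by brute force in a few lines of Python; a sketch with my own names, not from the paper. For a prime p, take the points (x, x^2 mod p): any non-vertical line meets the parabola y = x^2 in at most two points mod p, so no three of these p grid points are collinear.)

```python
from itertools import combinations

def erdos_points(p):
    """Erdos's no-three-in-line construction: for a prime p, the p points
    (x, x^2 mod p) on the p x p grid. A line meets the parabola y = x^2
    in at most two points mod p, so no three points are collinear."""
    return [(x, x * x % p) for x in range(p)]

def collinear(a, b, c):
    # zero cross product <=> the three points lie on a common line
    return (b[0] - a[0]) * (c[1] - a[1]) == (b[1] - a[1]) * (c[0] - a[0])

pts = erdos_points(13)
ok = not any(collinear(a, b, c) for a, b, c in combinations(pts, 3))
```

Integer collinearity implies collinearity mod p, and a quadratic has at most two roots mod a prime, which is the entire proof; the brute-force check merely confirms it for small p.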

Saturday, October 09, 2004

Academic blogging

Academic blogging appears to be a new meme on the blogosphere, and there are some interesting thoughts on the value of academic blogging. An article in the Guardian by Jim McClellan describes some of the ways in which academic bloggers (defined roughly as bloggers writing from academe or on academic disciplines) use blogs. There are the obvious:
Blogging is allowing academics to develop and share their ideas with an audience beyond the universities
the fairly obvious:
Academic researchers are drawn to blogs because they're useful knowledge management tools. MacCallum-Stewart says that her site quickly became a kind of "mind gym", a place to test out and develop ideas and to hone her prose style. The social networking side of blogging became very important here, she says. Her blog helped her build links and share ideas with researchers in the area at other universities.
and the more creative:
University tutors are also experimenting with blogs as teaching tools, using them to disseminate links and information to classes, sometimes as places where students can collaborate on group projects.
Crooked Timber has a stunning list of academic bloggers, and it is not even close to being comprehensive (especially in the area of scientific blogs). [They didn't have a section on computer science, an omission I hoped to rectify :), and now they do: look under Computers/Media/Communications.] And this year's BloggerCon has a session on blogging in the academic world.

I don't know if blogging can create discussions where none exist, but I have seen many interesting discussions over at Lance's blog (and some here) on topics that are either technical or fairly closely related to technical matters (mechanics of our academic process etc). I am (now) naturally suspicious of any claim that a new technology can revolutionize existing systems, but it is definitely plausible that academic blogging can change in some small way the manner in which research is conducted, as well as influence (more obviously) the dissemination of research into the mainstream.

Wednesday, October 06, 2004

Kolmogorov and poetry

An interesting anecdote about Andrei Kolmogorov, from Moscow Summer, by Mihajlo Mihajlov:
Voznesensky triumphantly told me about the well-known Soviet mathematician and designer of electronic computers Andrei Kolmogorov, who for several years analyzed texts and sent his assistant to poetry-readings with the intention of finding the 'linguistic key' to versification and constructing a computer which would replace various poets.
What Kolmogorov needed was Trurl's Electronic Bard....

Blogging = XX or XY ?

Tall, Dark And Mysterious talks about women in math, and says this:
On the academic front: long hair aside, there are few ways in which I am conventionally feminine. One way, however, in which I adhere closely to my biological (or socialized?) fate is that I am interested in, and good at, distilling my subjects of interest to non-experts.
Hmm. That sounds like blogging to me. So does blogging reveal the feminine side ? Or is it just an ego trip...

Read the article: it's good...


(Via 3d Pancakes)

Tuesday, October 05, 2004

Although I disagreed with Lance and Jeff on whether theory folks are too nice to ask hard questions, let us assume for argument's sake that there is some merit to this argument (there definitely is some just in comparison with other non-CS fields). What might be the cause of this restraint ?

A distinct possibility is the fairly objective nature of our discipline itself. Suppose that tomorrow I present a paper describing an algorithm that can do range searching on collections of fetid schizoid tapeworms in logarithmic time. Now, what is clearly true is that my algorithm takes logarithmic time; about this there can be no objection. However, it is likely that some folks will find the area of fetid schizoid tapeworms somewhat uninteresting; they might be partial towards energetic mobile ringworms instead. There might still be others that claim that most schizoid tapeworms are merely malodorous, for which logarithmic time range searching queries are trivial.

However, they cannot dispute the objective validity of my algorithm, and their dislike of my choice of object is not in itself objectively defensible.
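The tapeworms are (thankfully) fictional, but logarithmic-time range searching is real. As a minimal one-dimensional sketch (the function name and data here are purely illustrative), counting the items of a sorted list that fall in a query interval takes just two binary searches:

```python
from bisect import bisect_left, bisect_right

def range_count(sorted_items, lo, hi):
    """Count items x with lo <= x <= hi in O(log n) time,
    assuming sorted_items is in ascending order."""
    return bisect_right(sorted_items, hi) - bisect_left(sorted_items, lo)

# A toy "collection" standing in for the tapeworms of the post.
lengths = sorted([3, 1, 4, 1, 5, 9, 2, 6])
print(range_count(lengths, 2, 5))  # counts 2, 3, 4, 5 -> 4
```

Whether anyone *cares* about the collection being searched is, of course, exactly the subjective question the post is about.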

This, to me, is one possible reason why theoreticians don't argue as much. It is not that we don't have strong opinions; talk to people one-on-one and they will often criticize results on non-technical grounds. It is that people may be less comfortable with non-formal reasoning processes.

Aha, but then you will say that "pure" mathematicians should not be the cantankerous bunch that they often can be. Indeed. However, theoretical computer science is closest in spirit to combinatorics, a field defined not by a foundational structure of axioms and theorems, but by a myriad of problems all connected together in countless different ways. It is hard to make objective judgements about the value of problems; most problems are quite interesting to think about in the abstract, at least to someone. It is a lot easier to evaluate the quality of a theoretical edifice, and in fact one of the knocks on combinatorics is precisely the "apparent" lack of such an edifice.

Monday, October 04, 2004

foot-in-mouth...

I can only hope the folks at the University of Michigan don't read my blog...

New CS blogs

Two new blogs to add to the blogroll:
Maybe soon we can have our own carnival !

Too nice ?

Lance asks, and Ernie elaborates on, the following question:
Are theoretical computer scientists too nice ?
Lance uses evidence (lack of questioning at theory seminars) to infer niceness. Jeff goes further and argues that this is true well beyond theory. I'm not sure I agree. I think this issue goes to the heart of another discussion that Lance initiated: the balkanization of theory into subfields.

The assumption is that if a theoretician gives a talk that is attended by other theoreticians, then lively discussion should ensue. However, given the vast scope of theory, how realistic is it for an audience consisting of (say) geometers to be able to discuss details of a speaker's talk on (say) the latest randomized rounding trick for LP-based approximations ? (example chosen completely at random :)).

Obviously one can discuss things like the general motivation, etc., but as some commenters pointed out, the motivation is often not the most interesting part of a work. And in theory talks, where one can measure fairly objectively the contribution of a particular work, discussions about contribution are either trivial, or require a much deeper understanding of the sub-area of the talk.

The reason I bring this up is because it is NOT my experience that theory talks are viewed as missives from the Gods, to be watched reverentially and applauded solemnly. At the Math Seminar talks at AT&T, we have heated discussions, and it is very hard for a speaker to get beyond the first ten slides without some serious pounding :). I think the 'message from God' model applies more to conference presentations, where
  1. the problem of sub-area matching comes up: there are probably only a few people who really understand the speaker's work, and they probably figure they can catch the person later on so as not to bore the audience (I have felt that way, and maybe I am wrong to feel so)
  2. Time is really limited: you can't get into any kind of discussion, and it takes a while to grok some of the more technical details. Although Lance uses the Econ Workshop as a model for aggressive questioning, that is really a seminar, and not a typical conference venue
To support this, I can offer as evidence our one-hour AT&T Cookie Talks, which are typically less theoretical and directed at a broader audience of systems, database, programming languages, and algorithms folks. In these talks there are lots of discussions, and these can often get acrimonious (we sometimes have to warn visiting speakers !).

Nobel Prize for Medicine, or coffee rules !

From the NYT:
American researchers Richard Axel and Linda B. Buck shared the 2004 Nobel Prize in physiology or medicine on Monday for their work on the sense of smell -- showing how, for example, a person can smell a lilac in the spring and recall it in the winter
But this is the best quote:

Told of his honor, Axel told Swedish public radio: "That's really marvelous, I'm so honored."

[... snip ...]

Asked what he would do first, he replied: "I'm going to have a cup of coffee."

Amen to that....

Saturday, October 02, 2004

NYT: bastion of luddites the world over

I shall now propose a new law of technology discovery:

If the New York Times reports on a new technological phenomenon, the only people who don't know about it are the ones without computers, or any technology whatsoever.
The latest issue of 'Tech Files' is an article by James Fallows (a journalist whose reporting in the Atlantic is always magnificent) on new trends in technology to ease the consumer's life, which talks about Firefox as a new alternative to IE.

Sigh..

He also gets some important points wrong. See the updates at the end of this post....

I also keep using IE, meanwhile, because parts of the Internet are coded to accept nothing else. The handy Google toolbar, for example, works only with IE.

Actually, Firefox has a default built-in Google toolbar, and in fact allows plugins for a number of different search engines (like Opera, from what I am told). It also has a cool extension to load a page in IE if necessary. This can be convenient for some annoying, misbehaved pages.



Update: Talk about the speed of the internet. I emailed NYT and got a response from James Fallows immediately (I guess the fact that I had commented on an article that hadn't yet been published helped somewhat :)).

He points out correctly that the Google toolbar itself cannot be used on Firefox. However, one can search Google using the search toolbar extension that Firefox provides, and that I was referring to.

Update II: Ken Clarkson points out that in fact Firefox DOES have a google toolbar. So I was wrong about being wrong....

Friday, October 01, 2004

Here we go again

The H-1 visa is a work permit that allows foreigners to work full-time in the US (student internships are different). There is a quota of 65,000 H-1 visas each year; these are issued starting in October and continue to be issued until the quota runs out.

In the late 90s, in response to the growing tech boom and the need for extra manpower, Congress agreed to increase the limit to 195,000 for a three year period, ending in 2002. This was necessitated by an absurd situation where people had to apply for visas in March to have any hope of being far enough along in the queue to get a visa by October. Since most graduating students only got jobs in May/June....

The end of the three-year period coincided with the tech crash, and the extra quota was no longer needed. Plus, with outsourcing quickly becoming a political hot potato, it became politically infeasible to keep a larger quota.

And so here we are today, having come full circle (emphasis mine):
A federal official on Friday said the annual limit for the controversial guest worker program has been met for fiscal year 2005, which runs from Oct. 1, 2004, to Sept. 30, 2005. United States Citizenship and Immigration Services, which processes applications for the H-1B program, is no longer accepting petitions for visas for initial employment for this fiscal year, said the official, who spoke on condition of anonymity.

The visas, which allow skilled foreign workers to work in the United States for up to six years, have frequently been used by technology companies. That the cap has been reached as of the first day of the fiscal year is sure to stir up debate over the visa program. Businesses are seeking an exemption from the annual cap for foreign students graduating from U.S. schools with master's and doctorate degrees. Labor groups oppose the proposal.

If you are reading this and wondering why you should care, here's why:
  1. If you are a graduating foreign student, you have to plan your job search very carefully. Given that you will probably have to file an H-1 application in Mar/Apr, you need to have a job lined up as well as a commitment from your employer to file for the visa.
  2. If you are involved in any way in recruiting in an academic or corporate setting, you should know that any foreigner you hire will have this problem, and so you need to be able to act quickly on filing their visa petition. Since most students typically only have a year's worth of grace via their student visa, you only get one shot at filing the H-1 petition.
Although I now have this thing, I spent my share of time in H-1 hell. It is not pleasant.

Update: A reader points out:
The H-1B cap doesn't apply to nonprofit educational institutions (such as universities)

Friday moment of zen...

From Tim Gowers:
If you are trying to solve a problem, see if you can adapt a solution you know to a similar problem.

By using this principle, one can avoid starting from scratch with each new problem. What matters is not the difficulty of the problem itself but the difficulty of the difference between the problem and other problems whose solutions are known.

Self-absorption, redux...

On the self-absorption of science, a pithy line from Bitch, Ph.D:
People are not brains on sticks. People have lives.

