Sunday, December 21, 2008

More experiments in algo-teaching

(ed. note: think of this as a 'theory'-chaser to take the bad taste of cricket-blogging out of your mouth)

Another experiment that I continued this time was a set of lectures on "fun topics". The idea is to convey some ideas about a fun topic in theoryCS without too much jargon, but with enough meat to indicate that there's something deeper going on beneath the fun stuff.

I run these lectures at the end of the semester (when everyone's worn out already :)), and throw them open to the general public: both times I did this, I got substantial attendance from people not in my class, and at least in one case, someone who attended the lecture last year actually decided to take my class this year.

Not all topics are amenable to such a treatment, but my hope is to build up a suite of such lectures: the nice thing is that they can then be reused for other venues: for example, I've given variants of these talks to freshmen and high school students, and will do an undergraduate colloquium in the math department next semester.

For all of these lectures, I've pilfered shamelessly from the original works, as well as great websites developed by the authors. This post can be viewed as a shout-out and a thank you.

1. Pancake flipping:
Brian Hayes wrote a great article on pancake flipping for the American Scientist. In it, he links it to the problem of genomic rearrangement, and in doing so, ends up with a beautiful tale of the interaction between theoretical problems and practical constraints, all told in context of a topic that everyone can relate to.

I made two-color pancakes in different sizes, and distributed them to the students at the start of class: I briefly described the problem, and let them go at it. It was quite entertaining, and put the more formal discussion later on in context.

2. Zero knowledge proofs
ZK proofs are great for this kind of setting: they are completely counter-intuitive (and so have the 'bizarre' factor), and are easily explained using popular metaphors. I used Moni Naor's Sudoku demo page, as well as the version of the proof that involves slicing and dicing the puzzle. This was done with class participation (I was the prover, and there were multiple verifiers).
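For readers who want to see the commit-and-challenge idea in code, here is a minimal sketch. To keep it short it simulates one round of the textbook zero-knowledge protocol for graph 3-coloring rather than Naor's Sudoku protocol, but the structure (commit to a randomly permuted solution, answer a random challenge, repeat) is the same idea the demo relies on. The graph, the coloring and the hash-based commitments below are illustrative assumptions, not anything from the actual lecture.

import hashlib
import os
import random

# One round of the classic commit-and-reveal zero-knowledge protocol for
# graph 3-coloring: the verifier only ever sees two opened commitments,
# for a randomly chosen edge, so nothing about the coloring leaks.

def commit(value):
    # Hash commitment to a small integer: returns (commitment, opening nonce).
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + bytes([value])).hexdigest(), nonce

def opens_to(digest, value, nonce):
    return hashlib.sha256(nonce + bytes([value])).hexdigest() == digest

def zk_round(edges, coloring):
    # Prover: randomly permute the three colors, then commit to every vertex.
    perm = random.sample(range(3), 3)
    permuted = {v: perm[c] for v, c in coloring.items()}
    commitments = {v: commit(c) for v, c in permuted.items()}
    # Verifier: challenge with a uniformly random edge.
    u, v = random.choice(edges)
    # Prover opens only those two vertices; verifier checks the openings
    # and that the two endpoint colors differ.
    (du, nu), (dv, nv) = commitments[u], commitments[v]
    return (opens_to(du, permuted[u], nu) and opens_to(dv, permuted[v], nv)
            and permuted[u] != permuted[v])

# A 5-cycle with a valid 3-coloring; repeating the round many times drives
# a cheating prover's success probability down exponentially.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
print(all(zk_round(edges, coloring) for _ in range(200)))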

The second demo I ran was the Yao protocol for the Millionaire's problem (how do two millionaires determine who is richer, without either learning the other's worth). Again, I did this interactively with class volunteers and an RSA applet to speed things along. More details here.

3. Quantum Computing
No demos for this one, but I gave a crude high level view of what quantum computing is about (about one step up from the "do everything in parallel" explanation). The main goal here was to convey the basic ideas of what a qubit is, what a quantum circuit looks like, and how the Bell inequalities show that quantum computing is much more bizarre than classical randomness. Here, the Dave Bacon/Umesh Vazirani lecture notes proved indispensable, coupled with the Nielsen-Chuang book, and a recent blog post by Michael Nielsen.

4. Computational Origami
I haven't quite worked the kinks out of this one, but the basic idea is to demonstrate the principles behind computational origami (and where the 'computation' comes in) by looking at the question: what shapes can you make by folding, followed by a single cut?

There's a cool backstory to this: essentially the first example of such an algorithm was how Betsy Ross designed the five-pointed star for the American flag, and it lends itself to a nice demo in class.

Secondly, it's a classic example of resource-bounded computation: limiting yourself to one cut. Thus, it makes for a good illustration of how computation appears in all kinds of problems.

Thirdly, computational origami actually shows up in many real-world problems, most notably one involving folding mirrors for a telescope to be launched into space. If we're trying to convey a sense of computational thinking, this is a great example.

The actual problem itself was solved in this paper by Demaine, Demaine and Lubiw. Unfortunately, I have yet to find a way of explaining it that will make sense to non-experts, and that's one weakness with this particular story: Erik's page has some nice examples to demo, but it's hard to convey the deeper algorithmics behind the problem.

Lessons learned:
  • As always, know your audience. What works for a graduate class doesn't work for sophomores, and certainly doesn't work for eighth-graders :)
  • Interactivity is key: if you allow people to participate, they're more involved, and will probably take away something positive
  • Keep it light: resist the urge to get too mathematical. There are many beautiful topics in theoryCS that can be explained without jargon, and many others for which, with some effort, the jargon can be removed.
  • A 'bizarro' factor helps: In general lectures that I've given, I find that presenting something completely counter to people's expectations catches their attention immediately, and leaves a lasting impression. ZK proofs have that property. Bell's inequality sort of does, but the impact would be more immediate if I could actually run a quantum experiment to demonstrate violation of Bell's inequality ;)
  • Relate it to the real world: again, it's not enough to cite applications: one should try to show them as far as possible. In this regard, the Betsy Ross story is great, because it relates to something everyone (in this country) knows.
My goal is to add one or two such lectures to my arsenal each time I teach the class. For next time, I'm seriously considering running a live auction of some kind to demonstrate some concept in algorithmic game theory (suggestions on how best to do this are always welcome). Are there other such topics that might lend themselves to an entertaining, edifying and educational experience ?

On run-chases

(ed. note: this is about cricket, not algorithms, or geometry, or computer science. you have been warned)

South Africa, the Netherlands of cricket, finally won a big game, beating Australia in Perth after a historic run-chase of 414. This of course follows India's famous run-chase, beating England in Chennai by chasing down 387. As Cricinfo points out, 9 of the top 25 run chases have come in the last 8 years (in a tally that dates back to 1902).

There's a detailed statistical analysis of these chases, but no speculation as to why they're becoming more frequent. The answer seems obvious to me though: the increasing scores in one-day cricket. A quick search of Statsguru indicates that of the 258 team totals above 300 (by the first or second team) in one-day games, 179 happened after 2000. Even to a casual observer, it's clear that the scores in one-day games have gone up (and don't get me started on Powerplays).

Frankly, when all this fuss was being made about India's run chase, I couldn't quite understand why, because if you think of this as a one-day game, it's not too hard (and in fact India's coaches thought the same way!).

All in all, two exciting Test matches (and how often have we been able to say that?)

Wednesday, December 17, 2008

Videotaping talks

Videolectures.net is a company that has taken on the job of videotaping and packaging conference talks. They're based out of Slovenia, and offer a good service: they take your talk and your slides, and sync up the talk video and slides so someone watching later on can follow along.

For example, you can see my talk at ETVC, and here are the other talks. I first heard about this company when I was googling a paper and discovered that the ICML talk was online.

I originally thought that they only handled events in Europe, but they appear to have covered this year's KDD as well (although the coverage appears strangely limited).

Saturday, December 13, 2008

Practical applications of 1-medians

From Optimal Home Location:
Have you been looking across many different neighborhoods for a place to buy or rent? Have you been contemplating whether to buy closer to your job, your spouse's job or your kids' school? Do not worry - this is not a simple geometric triangulation but a centuries old mathematical problem. Optimal Home Location tool is synthesizing math algorithms and Google Maps together in order to pinpoint optimal home location for any commute scenario. It is easy to use and fun to play with. All you need to do it [sic] enter all the addresses your family routinely visits throughout the day and then click on the map icons to define commute route for each member of your family. The tool automatically computes optimal home location that will minimize total combined commute for all members of your family

Monday, December 08, 2008

NSF bleg

This appears in the supplemental documents section of the upcoming CISE call:
In the Supplementary Documents Section, include a list of all PIs, Co-PIs, Senior Personnel, paid Consultants, Collaborators and Postdocs to be involved in the project. This list should be numbered and include (in this order) Full name, Organization(s), and Role in the project, with each item separated by a semi-colon. Each person listed should start a new numbered line.

Does anyone know what a "Collaborator" is ? Is it merely your other-institution co-PI on a collaborative proposal ?

Saturday, December 06, 2008

I have a number of responses to the people who commented on my programming project post: I thought I'd post them here, rather than in the comment fields.

* On rolling your own: I'm intrigued by the many suggestions to use Sphere Online. I've heard of topcoder and Project Euler before, and decided not to use topcoder for ease of pedagogy (I wanted problems where the key algorithmic idea was "preidentified", so I could give a DP problem in the homework on dynamic programming). I was also not convinced that topcoder focused on the algorithmic aspects of the problem as opposed to raw speed: the fact that it was set up as a time limited competition by default was also a pain in the neck.

* On copying: this is an unsolvable problem IMO. Since I was choosing problems from the pre-canned list at the ACM server, I was at the mercy of the online solution providers. Judging by the results, my students are either very honest, or don't know how to find these sites :). I've spied on the related forums, and they tend to be somewhat militaristic about not letting people post code directly, although hints are always supplied. As an aside, for theory problems this is a royal pain: I've had to mask things in various ways to prevent a google search, and I have my own way of creating problems that I'm happy to reveal to someone who asks me directly :).

* On what to do for geometry: the problem is not the lack of a good code base. In fact it's the reverse problem. CGAL, for example, has many quality solutions already pre-coded, so I can't even ask students to use CGAL as a base framework. Given the limited time I have before the semester starts, it's debatable how much coding and test generation I can do on my own, so stay tuned...

I am less than smart :)

I posted my note on programming assignments, and then wondered why there were no comments. It turns out that I forgot to monitor my moderation list, and when I checked, there were tons of comments ! Apologies to all the commenters: your comments are available now, and I'll start replying shortly.

Tuesday, December 02, 2008

programming assignments

Inspired in part by Michael Mitzenmacher's exhortation:
Here's my claim: theory does untold damage to itself every year by not having programming assignments in the introductory classes on algorithms and data structures.
I tried an experiment in my algorithms class this year. Using the ACM programming competition online judge as an evaluation tool, I included one programming problem in each of the first three assignments in my class (later assignments went into approximations and randomization, which the site didn't really cater to). Coupled with this, I used the ACM Programming Challenges book by Skiena and Revilla to guide my choice (they break the problems down nicely by dominant technique).

The way the system works is this: you're given a problem definition and an input/output specification. You write your code in one of three languages, using a specific compiler flag sequence, and then upload your code. The system runs your code through a battery of tests, and then reports whether you had
  • compile-time errors
  • failure to terminate in the prespecified time limit
  • incorrect answers
  • run-time errors
  • all correct answers
What worked:
  • Students actually liked the idea of programming assignments: I received numerous variations of the comment, "I didn't understand the algorithm, but once I coded it...". They were less enthused by the server, but more on that later.
  • People started getting quite competitive: the site maintains stats on the best implementation thus far, and even after students satisfied the requirements of the assignment, by matching the desired time limit, they tried to optimize their code further.
  • The questions are well-designed, and usually need algorithmic tricks rather than hacks. For example, one question is to compute the closest pair of points, and any optimized brute force algorithm will fail, but the standard deterministic divide and conquer will work fine (a generic sketch of that algorithm follows this list).
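Here's that sketch, in Python with made-up sample points; the judge's actual problems have their own I/O formats and time limits, so this is only the algorithmic core, not a submission.

from math import dist, inf   # math.dist needs Python 3.8+

def closest_pair(points):
    # Smallest pairwise distance among 2D points, O(n log^2 n) overall.
    return _closest(sorted(points))          # pre-sort by x (then y)

def _closest(px):
    n = len(px)
    if n <= 3:                               # brute force the base case
        return min((dist(px[i], px[j]) for i in range(n) for j in range(i + 1, n)),
                   default=inf)
    mid = n // 2
    mid_x = px[mid][0]
    best = min(_closest(px[:mid]), _closest(px[mid:]))
    # Only points within `best` of the dividing line can beat `best`; scan
    # that strip in y-order, where each point has O(1) candidates to check.
    strip = sorted((p for p in px if abs(p[0] - mid_x) < best), key=lambda p: p[1])
    for i, p in enumerate(strip):
        for q in strip[i + 1:]:
            if q[1] - p[1] >= best:
                break
            best = min(best, dist(p, q))
    return best

print(closest_pair([(0, 0), (5, 4), (3, 1), (1, 1), (9, 9), (2, 2)]))  # ~1.4142

(The brute force version of the final print would compare all pairs and time out on the judge's larger inputs; the point of the exercise is exactly that gap.)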
What didn't:
  • The server would occasionally flake, especially a few hours before assignment deadline time :)
  • I/O specifications were often tricky: the problem specifications would leave out details about what one could assume about the input, so simple things like "don't assume that when a range [x,y] is given in the input, x <= y" needed to be discovered.
  • The error messages are cryptic to a fault. Compile-time errors are linked to the point in the code where they occur, but for any other kind of error, you are merely told that an error occurred. This caused major frustration.
  • With Java especially, getting a correct implementation within the time bound seemed harder than with C/C++. Since Java is often the first language students learn, this is annoying, especially since the whole point of such exercises is to abstract away as far as possible the particular idiosyncrasies of a language.
  • From a grading point of view, it's very painful to evaluate each person's submission. There's no easy way except to do it manually.
Overall, my experience was mixed-to-slightly-positive: I'll probably do this again, but I might also include examples where I design the test inputs and do local evaluation. For some problems (like with randomization) I might have to design the problem from scratch.

Now what I'd like is something similar for my computational geometry class :)

Thursday, November 20, 2008

While in Paris...

My latest reason for being off the air has to do with the amazingly bad internet capabilities of Paris hotels. Yes, I'm in Paris, city of lovers, but certainly not of lovers of wifi. There are at least 30 networks visible wherever you go, but they're all secure, so no mooching. The hotel-provided wifi is actually a generic service that costs 22 E/day for connectivity, with all kinds of bandwidth caps and a very slow connection. If I were to splurge for the "business" level, I get the luxury of paying 27 E/day, with unclear benefits (presumably I can now download my bootleg bittorrents (just kidding)).

Other things I've noticed since I last came to Paris: (which is not to say that they are new, just that I just noticed them):
  • Every second store on the podunk street my hotel is on is a fancy clothing store. Clearly the world-wide economic collapse has not hit.
  • Speaking of world-wide economic collapses, it really hurts to have a weak dollar. $7 espressos, sigh...
  • ....but it's always a pleasure to walk into a cafe and order a 'cafe' and just know that something good will appear. This is in contrast to the unbounded depth circuit needed to specify a proper cup of coffee at Starbucks.
  • Speaking of Starbucks, how on earth can they even survive in Paris ? I mean, you go to a Starbucks here, and you get the same experience as in the US, ending with a paper cup of coffee of questionable quality that you drink perched on a high bar stool. On the other hand, you go to a cafe, and they serve you with nice cups, and a little cookie, and let you sit there for hours nursing your coffee, and will even give you the WEP key for their secured WiFi. It's no contest !
  • You can change the world while nursing your coffee. I was staying in the 14th Arrondissement (the Montparnasse area) and had to have a coffee at the Dome cafe, a place apparently frequented by Lenin and Trotsky before the Revolution. I have to say that at the time I went, the clientele looked like they were plotting a revolution... in 1907.... I'd link to a verification of this, but I can't make any sense out of the search results on google.fr
  • Speaking of which, how does one tell Google NOT to return results in French ? Every time I edit the URL to go to google.com, it sends me back to google.fr. Suivant !!!
And why am I blogging from Paris, you might ask ? Well here's why. I'm an invited speaker, no less. Conference blogging was limited because there was no wifi at the conference site either. It's a long story involving military schools (and maybe even NASA). Details will appear shortly.

Friday, November 14, 2008

Coffee..

On the evolution of coffee drinking, by Malcolm Gladwell (he of The Tipping Point and Blink). I particularly like this line about Trotsky:
Give a man enough coffee and he's capable of anything.

Monday, November 10, 2008

Items...

I've noticed an inverse correlation between blogging frequency and "actual work", so boy must I have been working hard !!

Two items of note, as my blog and I pass in the night:
  • Michael Nielsen links to a great way of advertising a speaker: use Wordle on their work (delicious feeds/research papers)
  • ICML 2009 is going to a reviewing model where you can specify which area chairs you want your papers directed to (area chairs and interest areas will be listed). John Langford, one of the area chairs, goes further with what is essentially a personalized reviewer manifesto. An excellent idea ! He lays out his principles, and authors are forewarned.

STOC abstracts due today !

According to Michael M, there are 150 and counting already. Papers are due next week. And here's where you do it !

Tuesday, October 28, 2008

SODA 2009 South Pacific Event

A note from Howard Karloff (please email him for more information):

To SODA '09 attendees:

I have reserved, for SODA attendees and their friends/spouses/significant others, 50 tickets in the loge section at Lincoln Center for the 7:00 PM, Tuesday, Jan. 6 show of "South Pacific," so popular a show that already both Saturday Jan. 3 performances are sold out. If you want to buy one or two tickets, send mail to howard@research.att.com with "South Pacific" as the subject field, specifying the number of tickets you want in the body of the e-mail. Tickets will go to the first SODA attendees who pay by credit card via PayPal. I'll send instructions by e-mail. The tickets are $120 each--the face value is $115, and the extra $5 is to cover the PayPal fee--and are NONREFUNDABLE: once you pay, there will be no refunds. However, I imagine someone at SODA, some random theatregoer, or an ebay user would be thrilled to take tickets to such a popular show off your hands.

So, please, start those e-mails coming. Should the demand far exceed 50 tickets, I may arrange a second outing, to a different play, either Saturday or Tuesday. A few reminders:

"South Pacific" starts at 7PM, not the typical 8PM start
time, on the last day of SODA.

The venue is Lincoln Center, not in the heart of Broadway.

Once you pay, there's no backing out.

See you in New York in a couple months!

Howard Karloff

Friday, September 26, 2008

Announcements

Friday, September 19, 2008

About that whole political outreach business

that I was talking about in this post, here's what I find in my RSS reader today:
The names of seven distinguished scientists nominated by the President to serve on the National Science Board (NSB) were sent to the Senate for confirmation on September 16, 2008. Drawn from industry and universities, and representing a variety of science and engineering disciplines and geographic areas, these four new and three incumbent NSB members were selected for their preeminence in research, education or public service. When confirmed by the Senate, they will serve six-year terms to expire in May of 2014.
...

Diane L. Souvaine, of Massachusetts

Diane Souvaine is a computational geometer, who is professor and department chair of computer science at Tufts University, and holds a secondary appointment in the department of mathematics. Her current research focuses on the design and complexity analysis of geometry algorithms to solve problems from a variety of venues, ranging from computational statistics to geometric modeling to self-assembly of nano-structures. She also directs summer week-long institutes involving computational thinking for middle school mathematics teachers, funded by the Massachusetts Board of Higher Education. From 1992 to mid-1994, she served first as acting associate director and then as acting director of the NSF's Science and Technology Center on Discrete Mathematics and Theoretical Computer Science (DIMACS), while she was associate professor of computer science at Rutgers University. She has been a Fellow at the Radcliffe Institute for Advanced Study, and a member of the School of Mathematics at the Institute for Advanced Study in Princeton, NJ

Very cool indeed...

Wednesday, September 17, 2008

CS advocacy in the political realm

Two articles caught my eye recently:
In the first article, Peter Lee makes the argument (with an assist from Peter Harsha) that there are good political reasons for CS folks to publish in Science, Nature and the like (even if many of us snigger at the kinds of CS research that end up there: I know I have horror stories about articles that are accepted over the objections of the specialist CS reviewers). One argument is that of influence: whether we like it or not, CACM is not the preferred reading material for Congressional aides and staffers that have the ears of elected representatives, but Science/Nature are on the radar. Moreover, these publications have well-honed PR engines to get articles out to reporters: often embarrassingly overhyped, but hey, you know what they say about publicity :).

Which brings me to the second article on Barack Obama's science advisors. What's interesting is that four of the five are from the life sciences. This is not to demean the importance of life sciences in the current policy environment (GMO crops, stem cells and biofuels are all on the political radar), but there are a good number of technical hot-button issues as well (voting machines, copyright issues, electronic eavesdropping, privacy), and it definitely wouldn't hurt to have a CS-oriented person near the "ear of the man", so to speak, to articulate a view of the importance of these issues and how the research community is dealing with them.

I'm a lowly untenured professor, who used to be a lowly lab researcher, so I have no good ideas on getting personally involved in such matters. But I can see (partially thanks to CCC and the SIGACT Theory group) how advocacy can, over time, lead to fundamental changes in the landscape of support for our efforts, so at the very least, I can consider the idea of disseminating my work (when appropriate) beyond our "incremental" conferences and exhort others to do the same.

Tuesday, September 09, 2008

SODA list is out

Some happy, some not so happy :)

Kudos to Claire Mathieu and the PC for not only including the list of papers, but also the list of abstracts !! Finally !!

As a public service, here's a TeX-formatted PDF of the list of abstracts, for those who find serif font easier to read than a stream of text on a web page. Warning, it's 44 pages long, and will have formatting errors. I wrote a perl script for the overall parsing and did lots of local fixing by hand for math-mode stuff, but it's not perfect.

Monday, September 01, 2008

Fonts !

John Holbo at CT does a not-review of books on fonts (or faces ? I'm confused now). In any case, this is clearly a book I need to get.

Tuesday, August 26, 2008

The tensor product trick

Blogging has been slow this summer as I (surprise surprise) was actually working. Today the semester starts, so I expect to be blogging more as I attempt to clear my head from the detritus of semester minutiae.

Terry Tao has a new post up in the "tricks wiki" started by Timothy Gowers. The "trick" can be summarized concisely as: if you want to prove an inequality of the form X < Y, but can only prove X < CY, then take "tensor powers" of the objects X and Y and prove the weaker inequality, and then take a "root".

What's interesting to me is that this is none other than a generalization of the "standard" amplification trick that is most commonly used in hardness results. The easiest application is the proof that CLIQUE admits no absolute approximation: if it did, take graph powers, find the additive-error number, and take square roots, and you have an exact solution (since the value must be integral). Generalizations exist: you can use the same argument to show that CLIQUE admits no constant factor approximation, and even more intricately (invoking the PCP theorem) that independent set has no polynomial-factor approximation.
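To spell out that first example (using one standard convention, the strong graph product $\boxtimes$, for which clique numbers multiply: $\omega(G \boxtimes H) = \omega(G)\,\omega(H)$): suppose an algorithm $A$ guarantees $|A(G) - \omega(G)| \le c$ for every graph $G$. Given $G$ with $\omega(G) \ge 2$ (the edgeless case is trivial), build the $k$-fold strong power $G^{\boxtimes k}$ with $k = c+1$, so that
\[ \omega(G)^k - c \;\le\; A\bigl(G^{\boxtimes k}\bigr) \;\le\; \omega(G)^k. \]
Since $\omega(G)^k - (\omega(G)-1)^k \ge k(\omega(G)-1)^{k-1} \ge k > c$, the value $A\bigl(G^{\boxtimes k}\bigr)^{1/k}$ lands strictly above $\omega(G)-1$ and at most $\omega(G)$, so
\[ \omega(G) \;=\; \Bigl\lceil A\bigl(G^{\boxtimes k}\bigr)^{1/k} \Bigr\rceil, \]
and an additive-error algorithm would solve CLIQUE exactly in polynomial time (for constant $c$). The constant-factor and PCP-based versions mentioned above follow the same template with more work.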

It makes me wonder what other "standard" analysis tricks draw their lineage from generalizations in the world of "real math". For example, tricks involving building exponentially growing covers tend to show up frequently in topology and analysis.

Monday, August 18, 2008

$10 million for complexity theory...

Via His Quantum Holiness, news comes of the NSF Expeditions awards: each award is $10 million, and four were given out. One of them was for complexity theory, and the team is a star-studded list of complexity and algorithms folks, led by Sanjeev Arora. Here's the blurb:
In their Expedition to Understand, Cope with, and Benefit From Intractability, Sanjeev Arora and his collaborators at Princeton University, Rutgers University, New York University and the Institute for Advanced Study will attack some of the deepest and hardest problems in computer science, striving to bridge fundamental gaps in our understanding about the power and limits of efficient algorithms. Computational intractability, a concept that permeates science, mathematics and engineering, limits our ability to understand nature or to design systems. The PIs hope to better understand the boundary between the tractable and the intractable. This has the potential to revolutionize our understanding of algorithmic processes in a host of disciplines and to cast new light on fields such as quantum computing, secure cryptography and pseudorandomness. The research team plans to draw on ideas from diverse fields including algorithms, complexity, cryptography, analysis, geometry, combinatorics and quantum mechanics
Congratulations ! For getting the award, and demonstrating that hard-core complexity theory CAN get funded...

Update: Sanjeev Arora comments...

Tuesday, August 05, 2008

Math != calculation, part 537...

From the NYT, on scoring the i-can't-believe-it's-not-a-sport sport of gymnastics:
The new system is heavy on math and employs two sets of judges, an A panel and a B panel, to do the computations. Two A-panel judges determine the difficulty and technical content of each routine. Six B-panel judges score routines for execution, artistry, composition and technique.

The A-panel judges’ scorecards start at zero, and points are added to give credit for requirements, individual skills and skills performed in succession.

The A panel counts only the gymnast’s 10 most difficult skills, which are ranked from easiest to most difficult (from A to G for women and from A to F for men). An A-level skill, like a back handspring in the floor exercise, is worth one-tenth of a point. The value increases by one-tenth of a point for each subsequent level, meaning a B-level skill is worth two-tenths and an F-level is worth six-tenths.

Required elements add a maximum of 2.5 points to the score. Extra points, either one-tenth or two-tenths, are given for stringing skills together.

Each judge adds the marks, then the two reach a consensus. Elite gymnasts usually have a difficulty score in the 6’s; the toughest routines generally have difficulty scores in the high 6’s or 7’s.

[...]
The system rewards difficulty. But the mistakes are also more costly.

Which is where the judges on the B panel come in. They rate the execution, artistry and technique of a routine, starting at a score of 10.0 and deducting for errors.

This score, called an execution score, is where the perfect 10.0 still exists. But reaching it is unlikely.

A slightly bent knee can be a deduction of one-tenth of a point. A more drastically bent knee can cost three-tenths. In this system, the deductions jump from one-tenth to three-tenths to five-tenths. A fall costs a gymnast eight-tenths. In the old system, a fall was a five-tenths deduction.

The highest and the lowest of the judges’ scores are thrown out. The remaining four scores are averaged to obtain the final B-panel score.

On the scoreboard, the final score appears in big numbers, just above the gymnast’s marks for difficulty and execution.

Apart from my grumble about the level of 'math', it's an interesting way of doing the scoring.
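Just to make the arithmetic concrete, here's a toy version of the two-panel computation; the individual skill values and judges' marks below are invented for illustration, not taken from any real routine.

def a_panel(skill_values, connection_bonus, requirement_credit):
    # Difficulty: sum of the 10 most difficult skills, plus connection bonus
    # and credit for required elements (capped at 2.5 per the description above).
    return sum(sorted(skill_values, reverse=True)[:10]) + connection_bonus + requirement_credit

def b_panel(judge_marks):
    # Execution: each judge starts at 10.0 and deducts; drop the highest and
    # lowest of the six marks and average the remaining four.
    kept = sorted(judge_marks)[1:-1]
    return sum(kept) / len(kept)

difficulty = a_panel([0.6, 0.6, 0.5, 0.5, 0.4, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1, 0.1],
                     connection_bonus=0.2, requirement_credit=2.5)
execution = b_panel([8.8, 8.9, 9.0, 9.0, 9.1, 9.2])
print(difficulty, execution, difficulty + execution)   # about 6.7, 9.0 and 15.7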

I wonder if this could work for conferences: replace 'degree of difficulty' by 'hardness of problem, general hotness of the area, etc', and then you could deduct points for things like
  • More than two digits after the decimal point in the approximation ratio
  • exponent of running time requires the \frac environment
  • More than two parameters in the running time
  • Gratuitous use of O() notation to hide dependencies you don't like (yes I'm talking to you, high dimensional clustering folk)
  • Requiring your winged pigs to be large in dimension, have extra glitter on the wing tips, and carry golden harps in order to make the horses take off (Hi there, complexity denizens)

Tuesday, July 22, 2008

The concept of an approximation

We're always searching for new ways to communicate cool ideas in theoryCS to the outside world, and the list gets more and more esoteric each time one thinks about it. But in my interactions with people who are generally unfamiliar with algorithms and theoretical computer science, I'm constantly amazed at how counter-intuitive the very notion of an approximation appears to be.

A typical conversation goes something like this:

Person: Oh this problem is NP-hard so we ran this heuristic
Me: Have you tried approximating it ?
P: What do you mean ? Our heuristic seems to be reasonable.
M: I mean an algorithm that guarantees that it'll get close to optimal.
P: But how can you know that ? It's hard to find the optimal solution, global minimum, yadda yadda yadda...
M: We can often prove an answer is close to OPT without knowing OPT itself.
P: [total silence]

This kind of thing happens often in research conversations, as well as in the classroom (where it's less surprising I guess). In fact, one person was so amazed at the idea that even computing an approximation could be NP-hard that we spent a good semester plowing through the PCP theorem (at least parts of it).

I'm not dissing heuristics: it's often the case that either the approximation guarantees one can show are too weak, or that the claimed bounds don't match the empirical performance. But I do think it's important to at least ask the question of whether provable approximations exist, much in the way people in CS are now reasonably comfortable checking (or asking the resident theoretician !) if a problem is NP-hard.

But you can't ask till you know. This is one of the main reasons I teach a smattering of approximations in my graduate algorithms class (and why I teach some randomization as well). The concept of an approximation (especially when you can't find the actual answer) is a surprisingly slippery conceptual leap, and requires mental agility that goes beyond the standard "algorithms toolkit" that focuses on issues of performance.
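Since "close to OPT without knowing OPT" is exactly the sticking point, here is the standard textbook example in code form (a sketch of the maximal-matching 2-approximation for vertex cover, not anything specific to the conversations above). The matched edges are disjoint, so any cover, including the optimal one, needs at least one endpoint from each of them; that certifies the output has size at most 2·OPT even though OPT is never computed.

def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        # Greedily grow a maximal matching; take BOTH endpoints of each
        # matched edge into the cover.
        if u not in cover and v not in cover:
            cover.update((u, v))
    # Every edge now has an endpoint in `cover`, and the matched edges are
    # disjoint, so |cover| = 2 * |matching| <= 2 * OPT.
    return cover

print(vertex_cover_2approx([(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]))  # e.g. {1, 2, 3, 4}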

Thursday, July 10, 2008

The Katayanagi Prize

Via the Baconmeister (er.. the Quantum Pontiff), we hear that Christos Papadimitriou and Erik Demaine have received the Katayanagi Prize. Christos gets it for "research excellence":
The Katayanagi Prize for Research Excellence will be awarded annually to an established researcher with a record of outstanding and sustained achievement.
and Erik gets it for "emerging research leadership":
The Katayanagi Emerging Leadership Prize will be awarded annually to an individual recognized as an emerging research leader.
Congratulations to both of them, (and do read the Dr. Dobbs interview with Christos, where he shows that he knows what "Guitar Hero" is :))

Wednesday, July 09, 2008

Edge people...

Bill Gasarch has an interesting post up comparing attendance numbers at SoCG and CCC. He introduces the interesting notion of 'edge people': people in related areas who wouldn't normally attend the conference, but would because of locality.

This of course makes me a little worried about 2010. We have a rather large 'edge' community at the U. of Utah itself within SCI. But Utah as such is fairly geographically isolated (and indeed, there's a general belief that our department suffers a little in perception as a consequence), so we don't get the 'edge' crowd that places like NYC and Maryland potentially get.

I say 'potentially': as always, it's probably possible to get data on this. Specifically, if one had access to historical attendance lists (not numbers) for SoCG, one could first identify candidate 'edge' people (either by subjective eyeballing, or even more numerically as people who only show up "rarely" and publish "rarely"), and then figure if there's a correlation between location and presence of an edge crowd.

Similarly, one might even do this for CCC vs SoCG although as one commenter points out, the larger number of total unique authors on SoCG papers might be sufficient to explain the discrepancy in attendance (Paul Beame's argument about competing conferences is also valid: I couldn't imagine going to FOCS, STOC, SODA and SoCG every single year, in addition to the various DB/data mining conferences I attend).

Wednesday, July 02, 2008

Women in Computing workshop, and a new blogger

Sorelle Friedler, UMD grad student in geometry, tireless staff volunteer at SoCG 2008, and former AT&T summer visitor, has a new blog. One of the first things she writes about is the Women in Theory workshop at Princeton that ran Jun 14-18. Add the feed, folks !

Tuesday, July 01, 2008

Money Money Money

(ed: three NSF posts in a row ? maybe you're a little obsessed with funding ?)

As CRA and USACM are now reporting, the president just signed a bill that among other things allocates 62.7M in supplemental funding for the NSF. However,...
Funding for the National Science Foundation is split 70/30 between education and research programs at the Foundation with $40 million going toward the Robert Noyce Teacher Scholarship Program, which “seeks to encourage talented science, technology, engineering, and mathematics majors and professionals to become K-12 mathematics and science teachers.”
Which means that only 22.7M is left to be divvied up among the many directorates. Optimistically, that's roughly 44 new proposals, over ALL of NSF. Hmmm....

Monday, June 30, 2008

An open letter to journals not mirrored by ACM and DBLP

Dear journal,
If it isn't too much of a problem, could you possibly attach a BibTeX link to each article that you list ? For the sake of my cruelly abused fingers ? Could you ? PLEASE ?

NSF Update, continued: limits on submissions

I had heard a rumor about this a few months ago, and now it's confirmed: there will be a CISE-wide limit on the number of proposals submitted by a PI (or co-PI/senior personnel). Here's the relevant text (from the IIS call, but it's part of the coordinated solicitation):

In any contiguous August through December period, an individual may participate as PI, Co-PI or Senior Personnel in no more than two proposals submitted in response to the coordinated solicitation (where coordinated solicitation is defined to include the Information and Intelligent Systems (IIS): Core Programs, the Computer and Network Systems (CNS): Core Programs and the Computing and Communication Foundations (CCF): Core Programs solicitations). For example, between August 2009 and December 2009, an individual may participate as PI, co-PI or Senior Personnel in one proposal submitted to a core program in CCF and in a second proposal submitted to a core program in CNS, or an individual may participate as PI, co-PI or Senior Personnel in two proposals submitted to an IIS core program, etc.
The key difference is this: in the past, each separate solicitation (whether TF, or IIS, or whatever) had its own separate limit on the number of proposals: now the cap is summed over the coordinated solicitation. I expect there to be major rumbling about this.

One saving grace is that this only applies to the coordinated solicitation: other cross-discipline programs have their own separate caps.

Saturday, June 28, 2008

The social web makes me dizzy.

  1. I have a web page
  2. I have a blog
  3. I have started using twitter.
  4. As it so happens, I have a facebook account
  5. and I often share interesting items via Google Reader
So far so good. But here's where the fun starts.
  1. The shared items from GR appear on my blog
  2. Shared items, and blog posts, appear on my web page via lifestream
  3. facebook updates appear in GR via an RSS feed
  4. blog posts, twitter updates, facebook updates, and GR updates appear on friendfeed
  5. I can read friendfeed via the web, or via twhirl, (and also twitter)
  6. I can post comments to friendfeed posts via twhirl
  7. Someone can post a comment to a blog post seen on friendfeed and the update appears on twhirl. I can reply to this comment and it appears on their friendfeed
It seems that the whole point of the social web is to keep us updating so much that we end up not doing anything worth announcing !

NSF Update

(ed: please ignore if you don't get (or plan to get) funding from the NSF)

A missive from Jeannette Wing sent out today tells of consolidation in the standard solicitations for CCF/CNS/IIS. To quote:
Each of CISE's three divisions will have one solicitation that covers its core activities normally covered in the past by many solicitations. The Coordinated Solicitation will be the simultaneous release of these three solicitations, describing all core research programs within each division.

...

The Cross-Directorate Solicitation will describe cross-cutting research programs, those with interests that cut across the entire CISE Directorate and are managed entirely within CISE. (Excluded are cross-cutting programs joint between CISE and other entities; see Further Notes.) For FY09-FY10, they will be:

- Data-Intensive Computing
- Network Science and Engineering
- Trustworthy Computing
Main bit of note: the "small" solicitations (i.e., for amounts less than 500K) will be due in December. This in all likelihood includes the new Algorithmic Foundations solicitation. This is earlier than the Feb-March deadlines we had the last two years.

Tuesday, June 24, 2008

The F-1 process

I've been out of grad school long enough that I should know better, but PhD Comics still rings true in a way few comics can (xkcd is another). Of course, now I will find myself identifying more with the bearded old professor than the students, but still...

Today's strip sums up the foreign student visa experience perfectly. What they could have also added was a panel where Tajel explains for the umpteenth bazillionth time to a clueless local, "Yes, we need a visa to go to X".

Monday, June 23, 2008

FOCS 08 list ?

The FOCS 2008 page has a link for accepted papers, but it's empty for now. We know at least 4 of the papers that got in, but does anyone know of a list ?

Update: that was quick! here it is.

Tuesday, June 17, 2008

SoCG 2010: The countdown begins

Ok, so it's a fairly long countdown :).

As mentioned earlier, Valerio Pascucci and I will be co-organizing SoCG 2010 somewhere in the Greater Salt Lake Area, including but not limited to the Great Salt Lake itself, the largest man-made excavation on earth, the best powder on earth, and the home of the (not-quite) fastest growing religion on earth.

I have never done this before, and have been consulting previous organizers for advice, things to do, things not to do, and so on. So I thought it would be interesting to blog about the conference organizing process in real-time, so to speak, with the not-so-humble hope that it might help future organizers as well. Don't expect anything juicy, or controversial ! For that you'll have to catch me in an off-the-record moment. As usual, I'll have a tag devoted solely to posts about the conference if you want to track only these posts.

I should also note that although I might carelessly lapse into the first person singular when talking about the organizing, 'I' should be read as short-hand for 'Valerio and myself', especially if something good happens :). Any bumbling should be assumed to be credited to me alone.

Moving on...

Conferences are like weddings, I am realizing, in that the key things to nail down are the venue, venue, and venue (followed by food, and entertainment). Based on a straw poll of people at the conference, it seemed like hosting SoCG on campus and hosting it at Snowbird have roughly equal support (a downtown hotel was less favorable). So the first order of business is to request bids from both Snowbird and the campus conference services. Once we get these, we can start taking a closer look at the numbers that show up.

Scheduling point: the 2010 World Cup will start around the same time as the conference, as pointed out by Marc van Kreveld. Given the size of our European contingent, this is something we definitely need to worry about. On the other hand, maybe we should have country-themed sessions, with national colors and all :)

Saturday, June 14, 2008

A 'Nixon going to China' moment

Via Mathematics under the Microscope, the IMU-commissioned report on the validity of citation statistics as a means for measuring quality. It seems appropriate that this sentence was written (and methinks, could only be written) by mathematicians:
"Numbers are not inherently superior to sound judgments"

Thursday, June 12, 2008

SoCG 2008: Invited Talks

A Discrete Laplace-Beltrami Operator

Alexander Bobenko gave the first invited talk at the conference. As with Mathieu Desbrun's talk from a few years ago, this talk was on the area of 'discrete differential geometry', particularly focusing on a discrete Laplace-Beltrami operator, and how to compute Delaunay triangulations on polyhedral surfaces.

The general principle behind efforts in discrete differential geometry is quite apropos for a CS audience. Rather than doing a 'numerical discretization' of differential geometry to enable computations, the idea is to discretize the entire theory, so that the continuous theory emerges as the limit of the discrete theory.

This goes against the flow of much of what we often see in discrete computer science, where the idea is to relax to an underlying continuous domain in order to learn things about discrete structures. It also makes a lot of sense, because if successful, it gives a 'native' version of differential geometry that's naturally robust without having to worry about the level of discretization etc.

The slides are quite readable on their own, and I'd recommend that anyone who's interested take a look. In summary the idea is this. Given a function on a manifold, you can write down an energy operator (called the Dirichlet energy) such that the gradient of this energy is the Laplace-Beltrami operator on the manifold.

If you do this for piecewise linear functions defined on a simplicial surface in 3-space, then you get an expression for the Laplacian: however, this expression is not intrinsic (it depends on the embedding of the surface in 3-space). In the world of differential geometry, being intrinsic is the whole point, so this is not a happy state of affairs.
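For the record, and modulo the sign and normalization conventions that vary from source to source, the piecewise-linear Dirichlet energy and its gradient are the familiar cotangent expressions:
\[ E(f) \;=\; \tfrac{1}{4}\sum_{\text{edges } ij}\bigl(\cot\alpha_{ij} + \cot\beta_{ij}\bigr)\,(f_i - f_j)^2, \qquad (\Delta f)_i \;=\; \tfrac{1}{2}\sum_{j \sim i}\bigl(\cot\alpha_{ij} + \cot\beta_{ij}\bigr)\,(f_i - f_j), \]
where $\alpha_{ij}$ and $\beta_{ij}$ are the two angles opposite the edge $ij$ in the triangles sharing it. The weights depend on which triangulation of the surface they are computed on, which is where the intrinsic Delaunay triangulation described next comes in.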

It turns out that if you use a triangulation of the surface constructed by using an 'intrinsic Delaunay triangulation' (more on that in a second), and then define an interpolation for a function on the vertices in the usual manner, then the resulting energy function is minimal, and the Laplacian you get *is* intrinsic. An intrinsic Delaunay triangulation is constructed much like you'd construct a regular Delaunay triangulation, with the caveat being that you might have to use geodesics on the surface as "edges" of the triangles, and some triangles might end up being degenerate (so it's not a "normal" triangulation).

But in any case, doing this allows you to work with a well-defined Laplace operator on a discrete structure, and the second part of the talk showed how you can construct discrete minimal surfaces using such an operator.

Geometry is Everywhere (part XLVII)

Ken Clarkson gave the second invited talk, which was a romp through the world of metric spaces and the various notions of dimensions used to describe them. Ken doesn't yet have talk slides online, but somewhere in the union of this, this and this is much of the talk material. The main premise was to weave together different notions of dimension for a metric space, and how they can be used to do geometry in the abstract. If I'm not giving more details, it's because the talk covered so much ground that it's hard to know where even to start. It was wildly entertaining though, and there was a general clamor afterwards for Ken to write a book on this topic (which I hope he does).
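To give at least one concrete member of that zoo (the standard definition; I'm not claiming it's the exact formulation used in the talk): the doubling dimension of a metric space $(X, d)$ is the smallest $k$ such that every ball $B(x, 2r)$ can be covered by at most $2^k$ balls of radius $r$. Euclidean space $\mathbb{R}^d$ has doubling dimension $\Theta(d)$, and many of the results in this area replace an explicit Euclidean or low-dimensional assumption with a bound on this quantity.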

And the "47" ? A cryptic shoutout to Pomona College alumni: if you're one, you probably understand the reference.

Tuesday, June 10, 2008

SODA 2009 deadlines: Jun 26/Jul 3

Via CC: SODA 2009 server is up, and the deadline is Jul 3. New things this time:
  • Pre-submission of abstracts (like SoCG) on June 26
  • Papers must have "complete proofs": any details that can't fit can go to the appendix.
I foresee LONG appendices this time :)

Monday, June 09, 2008

SoCG 2008: Business meeting

Bullets from the business meeting:
  • 140 attendees: higher than the abysmally low 111 from last year, but not as good as the 180 reached when we had SoCG in Brooklyn (with the boat ride and the open bar. aahhh...)
  • 42/130 papers accepted. Immediate rejection of papers whose abstracts hadn't been submitted by the earlier abstract deadline
  • John Hershberger is the 2009 SoCG PC Chair. Wants papers on the themes of coffee, microbrews, and rain. Good thing I just finished my paper on "Maximizing coffee production while minimizing fermentation during the rainy season". This paper will be submitted from an ISP somewhere in Europe with 3 random co-authors added on, as per Monique's instructions for maximizing paper acceptance.
  • SoCG 2010 will be in... TA DA.... Salt Lake City !! Yes, yours truly and Valerio Pascucci (newly arriving in Utah) will be organizing the 2010 sausage festival in the land of fry sauce and wonderbread. Mark your calendars NOW !!
And with that, we broke for dinner.

SoCG 2008: Day 1

A good first day for the conference: we were inside mostly, so the murderous heat didn't affect us in any real way. The conference is hosted in the business school, which is really nice and plush, like all business schools are. Of course, you have to get used to seeing scrolling stock tickers.

The most entertaining talk by far was Jeff Erickson's talk on the homotopic Frechet distance. This is a variant of the Frechet distance where you have obstacles in the space the leash is moving in, and the leash has to move continuously around obstacles (in other words, two adjacent leash positions should be homotopic). They have a polynomial time algorithm in which the main nontrivial step is showing that the set of homotopy classes of paths that can yield optimal paths is bounded.

The Frechet distance is called a man-dog distance because you can think of it as the shortest leash length needed when a man walks on one curve and a dog walks on the other. During the talk, Jeff used Marty and Erik Demaine to demonstrate the algorithm, carefully omitting to specify who was man and who was dog. For obstacles, he used two graduate students on the paper: the deeper significance of "grad student = obstacle" I will leave for the reader to ponder.
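For reference, the (standard, non-homotopic) version of the distance being generalized here is
\[ d_F(P, Q) \;=\; \inf_{\alpha,\beta}\; \max_{t \in [0,1]} \bigl\| P(\alpha(t)) - Q(\beta(t)) \bigr\|, \]
where $\alpha$ and $\beta$ range over continuous, non-decreasing reparametrizations of $[0,1]$ onto itself: the man and the dog each choose their own speed along their curves, but neither may back up, and the leash length is the largest separation ever needed. The homotopic variant constrains, in addition, how the leash itself is allowed to sweep through the space between consecutive positions.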

In the category of "Talk that clearly was something interesting, if only I was smart enough to understand it" was Manor Mendel's talk on p-convexity. The deeper message was very intriguing: namely, that obstructions to embedding trees with small distortion in normed spaces take the form of a kind of strong convexity of the norm.

Many other talks were quite interesting: Timothy Chan's dynamic coresets talk, Tasos Sidiropoulos's talk on "circular embeddings", a way of generalizing the well known Treemap visualization tool so that the regions are no longer necessarily long and skinny, and Ken Clarkson's talk on Johnson-Lindenstrauss projections for manifold distances.

It was a good day.

p.s. Twittering was an interesting experience. I'm not sure what to make of it. It provides more instant gratification to the (few) people following along, and it didn't distract me too much during the talks (I can't say what it did to my neighbours, but I didn't get any dirty looks, so...)

It's definitely different from blogging, which is more long form (yes, it's 2008: a blog post is now officially "long form"). I'll do some more of it tomorrow, and see how it goes.

SoCG 2008

I'm here at SoCG 2008 in College Park, Maryland. It's a sweltering 100 F and likely to get worse tomorrow: I think I'll attend lots of talks :)

I might also try a little experiment. During the conference, I'll post quick updates on twitter, while leaving this blog for longer-length posts. If you don't know what twitter is, it's like a microblogging environment: every post ("tweet") has to be 140 characters or less.

If you have a twitter account, you can follow me. If not, you can track the RSS feed, but most feed readers don't update these feeds that often, so you're likely to get bursts of posts.

p.s twitter tends to croak under heavy load, and the Apple Developer's conference is Monday, so hopefully the tweets will get out.

Sunday, June 08, 2008

A seminar on randomization

I'll be running a graduate seminar on randomization next semester. The goal is to cover, in fourteen 80-minute student presentations, a range of topics all centered around our ability to toss coins.
Some of the constraints:
  • Student background is good, but not extensive (i.e., they are familiar with Chernoff bounds, but not much else; not much complexity background, and Kleinberg-Tardos-level algorithms familiarity)
  • Breadth is important: it's important to me that students become familiar with a wide range of topics rather than drilling deeply down into one (you could run an entire semester-long course on concentration of measure). This goes back to the previous point: at this stage, I'd like students to get as wide an exposure as feasible, in the hope that it will make deeper material more accessible to them later on.
  • Source material should be accessible and presentable (by students) in 80 minute chunks.
Here's a list of topics: I'd love to hear what people think, and get suggestions on changes/additions/deletions. Pointers to source material will also be helpful.
  • A randomized algorithm either has an uncertain running time, or a likelihood of error. How do we analyze such algorithms ? This will lead directly to an investigation of tail bounds on distributions, martingales, and the method of bounded differences. We will also look at the famous minimax principle for proving lower bounds on the complexity of randomized algorithms.
  • We'll take a tour of the many places where randomization leads to more efficient (and more elegant) algorithms (hashing, approximate counting, randomized incremental constructions, randomized rounding)
  • We'll get acquainted with BPP (the randomized analogue of P) and other randomized complexity classes, and see how randomness and hardness are connected. We'll also look at interactive proofs.
  • Where does randomness come from ? If we have a deterministic procedure to generate random bits, how is it random anymore ? We'll talk about pseudorandomness (the art of making more from less).
  • Since it's tricky to make random bits, we'd like to minimize the effort involved. How can we 'derandomize' algorithms ?
  • (if time permits) Quantum computing would appear to be "probability with complex numbers". What makes it similar (and different) to probabilistic computation ?
Saturday, June 07, 2008

    They're coming out of the woodwork !!

    Well, maybe not, but it's getting harder and harder to keep track of all the TCS blogs out there. I just discovered that James Lee has a "trial blog" (a tlog ?) with some very interesting posts. Doing the P vs NC posts has made me jealous of all the wordpress.com folks and their nifty LaTeX embedded in the posts: I think I'll switch over at some point myself.

    Friday, June 06, 2008

    A perspective on solitude

    From Smilla's Sense Of Snow:
    Cantor illustrated the concept of infinity for his students by telling them that there was once a man who had a hotel with an infinite number of rooms, and the hotel was fully occupied. Then one more guest arrived. So the owner moved the guest in room number 1 into room number 2; the guest in room number 2 into number 3; the guest in 3 into room 4, and so on. In that way room number 1 became vacant for the new guest.

    What delights me about this story is that everyone involved, the guests and the owner, accept it as perfectly natural to carry out an infinite number of operations so that one guest can have peace and quiet in a room of his own. That is a great tribute to solitude.
    And here's a more prosaic, but entertaining, take on Sam Cantori and his hotel.

    Update: as commenter hfv points out, I (and Peter Høeg) got the attribution wrong. It's really Hilbert's story.

    Thursday, June 05, 2008

    P vs NC V: Linear PRAMS II

    This is where we left off last time:
    If a set of inputs parametrized by d parameters is accepted in the linear PRAM model with p processors in t time, then there is a way of partitioning the parameter space using hyperplanes, so that each cell of the resulting arrangement can be labelled 0 or 1 so that an input z is accepted iff it lies in a cell labelled with a 1.
    The above lemma describes the geometric structure induced by an efficient computation on a linear PRAM. Now, we'll see how a high parametric complexity problem breaks this structure.
    We will do this by demonstrating that such a problem has too many accepting and non-accepting inputs close to each other to allow for such a cell labelling with these few hyperplanes.

    1. Setup

    We start with some parametrized problem (with parameter $\lambda$) with cardinality n and a bound on the bitsize. Remember that the bitsize refers to the maximum complexity of the coefficients of expressions involving $\lambda$. We also fix the parametric complexity of this problem (as we defined earlier). For technical reasons, we will require that the problem under consideration be homogeneous (scaling values by a constant multiplies OPT by the same value); this does not change anything, since this is true for problems like min-cost flow and max-flow.

    Each numeric parameter in the input can be expressed as some $u\lambda + v$, and without loss of generality we will assume that u and v are integers. We will actually need all parameters to be integral, so thinking of $\lambda$ as the rational $p/q$, we can rewrite all such parameters in the form $up + vq$, yielding a new instance with the same combinatorial structure (again, homogeneity makes things easy).

    We now have a two parameter system that describes an input to the problem. A third parameter will describe the threshold for this (optimization) problem. Specifically, the goal will be to decide whether the value obtained is at least . Let a be some "large enough constant", and consider all inputs where the threshold has bitsize at most and each parameter generated from has bitsize at most .

    This is a three-parameter system (d=3). We can therefore translate the structural result at the top of the post into a concrete statement regarding all problems parametrized in this manner.
    Let b be some large enough constant. Set the desired time bound and the number of processors appropriately, in terms of the parametric complexity $\rho(n)$ and b. Then a linear PRAM algorithm that works on this parametrized input within the specified resource bounds induces a partition of $\mathbb{R}^3$ by a bounded number of planes, where each face can be labelled as specified above.

    There's a more general version of this theorem that makes the tradeoff between time and processors more explicit: I'll skip that for now. Also note that the "5" in the exponent is somewhat arbitrary: it's just chosen to make things work unambiguously.

    2. An affine transformation

    We'll now encounter an ingenious geometric representation of the set of feasible inputs for the parametric problem. Let $\mathrm{OPT}(\lambda)$ be the optimal function value as a function of the parameter $\lambda$, and let G be its function graph. An input I to the problem is parametrized by the three parameters $(p, q, t_0)$, and is feasible when $q \cdot \mathrm{OPT}(\lambda) \ge t_0$, which can be restated (remembering that $\lambda = p/q$) as the condition $\mathrm{OPT}(\lambda) \ge t_0/q$ (dividing through by q and using homogeneity; we will also assume wlog that q is positive).

    At this point, a geometer will look at expressions like $p/q$ and $t_0/q$, and immediately see a projective plane. And that's what happens ! We can think of the space of possible parameter values $(p, t_0, q)$ as inhabiting a projective space, with q playing the role of the z-direction (in which case the action, as it were, happens on the z=1 plane).
    The parameters $(p, t_0, q)$ define a ray in 3D that intersects the z=1 plane in the point $(p/q, t_0/q, 1)$ (you should look at Figure 4.1 in the paper to help visualize what's going on).

    What then happens to the function graph G ? We can think of G as lying in the affine plane z=1 (parametrized by $\lambda$ and the function value), and fan(G) as the set of all points in $\mathbb{R}^3$ that project onto G (via the projective transform). We can now translate the feasibility condition into the geometric condition that the point $(p, t_0, q)$ lies "below" fan(G) in the $t_0$-axis direction.

    It helps to think of $\mathrm{OPT}(\lambda)$ as an ordinary function in the affine plane, in which case the space of feasible values is the set of all points that project to points in the plane below its graph. In other words, all integer points in the parameter space are labelled as 1 if they lie below fan(G) in the $t_0$ direction.
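    Putting the pieces together, in the notation above: an input determined by the integer triple $(p, q, t_0)$ is feasible exactly when

        $q \cdot \mathrm{OPT}(p/q) \;\ge\; t_0 \quad\Longleftrightarrow\quad \mathrm{OPT}(\lambda) \;\ge\; t_0/q \quad\Longleftrightarrow\quad (p, t_0, q) \text{ lies below } \mathrm{fan}(G),$

    where the first equivalence uses homogeneity and $q > 0$, and the second just says that the ray through $(p, t_0, q)$ meets the plane z=1 at the point $(\lambda, t_0/q, 1)$, which lies below the graph G.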

    In the next installment, we'll show that any arrangement of few hyperplanes in this space will not be able to capture all the 0 and 1 points correctly.

    Wednesday, June 04, 2008

    Name of a kind of weight function on a lattice ?

    Suppose we have a lattice with elements min, max, and the usual join and meet operations. I define a function w() that maps elements of the lattice to reals, such that
    • w(min) = 0
    • w(max) = 1
    • If x > y, then w(x) > w(y)
    • let z = join(x,y). Then w(x) + w(y) >= w(z)
    Note that the last condition is slightly weaker than the usual submodularity condition
    w(x) + w(y) >= w(z) + w(u) (where u = meet(x,y)).

    My question is: is there a well-known name for such a weight function ? I've been trying to search various sources, and the closest I came to this was the notion of a 'rank' function for a lattice, which doesn't quite capture this sort-of-metric property.
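    For what it's worth, the conditions are easy to sanity-check on a small example. Here's a minimal sketch (Python; the lattice and the weight are just the simplest illustration I could cook up, nothing standard): the Boolean lattice of subsets of {0,1,2} under union/intersection, with the normalized cardinality w(S) = |S|/3 as the weight.

        from itertools import combinations

        # Toy example: Boolean lattice on {0,1,2}; join = union, meet = intersection.
        ground = frozenset({0, 1, 2})
        elements = [frozenset(s) for r in range(len(ground) + 1)
                    for s in combinations(sorted(ground), r)]

        def w(x):
            # Normalized cardinality: w(min) = 0, w(max) = 1, strictly monotone.
            return len(x) / len(ground)

        def satisfies_conditions(w):
            eps = 1e-12
            ok = abs(w(frozenset())) < eps and abs(w(ground) - 1) < eps
            for x in elements:
                for y in elements:
                    if x > y and not (w(x) > w(y)):      # strict monotonicity
                        ok = False
                    if w(x) + w(y) + eps < w(x | y):     # the join condition above
                        ok = False
            return ok

        print(satisfies_conditions(w))   # True for this particular weight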

    Monday, June 02, 2008

    The pitfalls of being watched..

    Noah Snyder over at Secret Blogging Seminar points out one of the unreported pitfalls of blogging: your colleagues know what you're doing ! More importantly, they know that you're not doing the work you're supposed to be doing with them !

    This is particularly bad when someone's hounding me for a review, or a revision, or something else...

    Wednesday, May 28, 2008

    P vs NC IV: Linear PRAMs I

    1. Prelude

    Taken as a whole, the P vs NC paper is quite intimidating in its size and scope. And yet, when it's cut down to bite-sized blogging chunks, the pieces seem to follow inexorably in a manner that leaves one asking, 'how else could it have been?' There's the key inspiration of using parametric complexity, the magic trick we'll see more of today. But the rest of it uses the kind of geometric arguments that are familiar to anyone who's read papers on the complexity of arrangements. Make no mistake: taking care of issues of bit complexity requires delicate arguments. But the main thrust of the approach is surprisingly easy to explain, ONCE we've accepted the magic trick.

    In this post and the next, we'll begin to see glimmers of the WHY for the parametric argument. In brief, using parametric complexity allows us to navigate an affine subspace of the space of all possible inputs, and then we let geometry take over and guide us.

    2. Linear PRAMs

    We're close to seeing the main killer argument at the heart of the paper. But as with many arguments, it's easier if we look at a skeleton argument applied to a somewhat simpler case. We already saw a toy example of this argument, and now we'll move to a more nontrivial model, that of linear PRAMs.

    In the first post, we defined a linear PRAM as a PRAM-without-bitops in which all multiplications are by a constant. We'll see a slightly stronger lower bound for computations in this model. Specifically, we will show that for a problem with parametric complexity $\rho(n)$ and bitsize $\beta(n)$, there exists a large enough constant b such that the problem cannot be solved on a linear PRAM with p processors in time below a bound determined by $\rho(n)$, $\beta(n)$, p and b. This generalizes the bound that will hold for the stronger model.

    Substituting what we've already seen for the parametric complexity of max-flow, we recover a concrete lower bound on the time needed by a linear PRAM with p processors on an n-node network.

    Notes:
    • For clarity, I'm skipping over the specification of bitlengths: these will fold in later, but throwing in all these numbers right now confuses the flow.
    • As before, the lower bound applies for approximations and randomization: how this happens will be discussed later on.
    3. Bounding the shape and number of branchings

    The high-level proof strategy first requires us to bound the number of possible branchings of an algorithm in this model, and then requires us to describe the "cells" of these branchings in some geometric manner. As we did earlier, we'll say that two inputs x, x' are t-equivalent if the machine executes the same set of instructions on x and x' for the first t steps of the computation.
    This defines an equivalence relation, and our goal will be to estimate the number of equivalence classes in this relation (as a function of t).

    Now let's introduce the parametrization. Suppose that each input x is a function of d parameters $z = (z_1, \ldots, z_d)$, where each particular input variable is expressed as a linear function of the parameters. We'll further assume that these linear forms are integral, and that the parameters range over integer values with some bounded bitlength.

    The partitioning of the inputs x into equivalence classes induces a partitioning of the parameters in the obvious way (two parameter vectors z, z' are t-equivalent if the corresponding x, x' are). Let $N_t$ denote the number of t-equivalence classes in the parameter space. The main result we can show now is that:
    For all t, $N_t$ is bounded by $2^{O(dt \log p)}$.
    Here's the proof argument (for those of you who've waded through parametric search, this will be familiar): First, remember that all operations are linear (one term of each multiplication is a constant), so each arithmetic operation computes some linear function of the input. From this, and the assumption that memory pointers are not functions of the input (this is the circuit view of the computation), it follows that within a fixed t-equivalence class, the contents of each memory location at time t are a fixed linear function of the input: the function depends only on the class and on t.

    Now imagine simulating the execution of the algorithm on an as-yet unspecified parameter vector z (as we do in parametric search). What this means is that the results of arithmetic operations are linear forms defined over the symbolic value of the parameter. When a branch occurs, the branch outcome is determined by comparing two linear forms, which reduces to testing the sign of a linear expression. Let's take some z in a fixed t-equivalence class C. From the above remarks, each branching step involves testing at most p linear forms on z (one per processor), each of which can be viewed as a hyperplane in $\mathbb{R}^d$. If we now look at the arrangement of these hyperplanes, then each cell represents a fixed set of decisions for the branch outcomes. By standard arguments, the complexity of this arrangement is $O(p^d)$.
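    To make the bookkeeping concrete, here's a tiny symbolic simulation (Python; entirely a toy of my own, with made-up names, not anything from the paper): memory cells hold affine forms in the parameters, the only multiplications are by constants, and every branch just records the hyperplane whose sign it tests.

        from fractions import Fraction

        class LinForm:
            """An affine form c0 + c1*z1 + c2*z2 over two parameters (z1, z2)."""
            def __init__(self, coeffs):
                self.coeffs = [Fraction(c) for c in coeffs]   # [c0, c1, c2]
            def __add__(self, other):
                return LinForm([a + b for a, b in zip(self.coeffs, other.coeffs)])
            def __sub__(self, other):
                return LinForm([a - b for a, b in zip(self.coeffs, other.coeffs)])
            def scale(self, c):
                # Multiplication by a constant: the only multiplication a linear PRAM allows.
                return LinForm([Fraction(c) * a for a in self.coeffs])
            def __repr__(self):
                c0, c1, c2 = self.coeffs
                return f"{c0} + {c1}*z1 + {c2}*z2"

        branch_tests = []   # the linear forms whose signs the computation examines

        def branch_if_nonneg(form):
            # A branch contributes the hyperplane {z : form(z) = 0}; which side we land
            # on is exactly what the t-equivalence class of z determines.
            branch_tests.append(form)

        # The parametrization: each input value is a linear form in the parameters.
        x1 = LinForm([0, 1, 0])    # x1 = z1
        x2 = LinForm([3, 0, 2])    # x2 = 3 + 2*z2

        # A few steps of a toy computation: everything stays an affine form.
        m = x1.scale(4) - x2       # 4*z1 - 3 - 2*z2
        branch_if_nonneg(m)        # tests the sign of one linear form
        branch_if_nonneg(m + x1)   # another branch, another hyperplane in parameter space

        print(branch_tests)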

    Now how does this arrangement evolve at the next time step ? Clearly, any single piece of the equivalence class C can be split by the arrangement into smaller pieces, based on decisions made at the next time step. But from above, there are only $O(p^d)$ such pieces possible. Therefore, $N_{t+1} \le N_t \cdot O(p^d)$, yielding the desired bound.

    A corollary is that if we track any particular t-equivalence class, it is split by at most pt linear forms (one for each time step and each processor), and so it is sufficient to check the signs of these forms to determine whether any particular z belongs to C.
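    To state this symbolically: writing $\ell_1, \ldots, \ell_m$ (with $m \le pt$) for these linear forms, there is a sign pattern $(s_1, \ldots, s_m)$ such that

        $z \in C \quad\Longleftrightarrow\quad \mathrm{sign}(\ell_i(z)) = s_i \ \text{ for all } i \le m.$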

    Finally, taking a union over all possible equivalence classes, we get the following result:
    If a set of inputs parametrized by d parameters is accepted in the linear PRAM model with p processors in time t, then there is a way of partitioning the parameter space $\mathbb{R}^d$ using hyperplanes, so that each cell of the resulting arrangement can be labelled 0 or 1, and an input z is accepted iff it lies in a cell labelled 1.
    In the next part of this argument, we will show that a problem with high parametric complexity cannot be represented this way: that is, there must be some cell that contains inputs in the language, and inputs not in the language.
