Ruminations on computational geometry, algorithms, theoretical computer science and life

Thursday, April 30, 2009
New open access proceedings archive
Via Luca Aceto, news of a new effort to create an open access electronic proceedings series (EPTCS). The idea is that you organize your conference/workshop and apply for inclusion of your proceedings in the EPTCS: you handle the reviewing, and they take the papers and manage the long-term archiving (via the arXiv).
I imagine this will only be useful for small conferences/workshops, but it's a good way of making sure you don't have a dead-link problem a few years later, when the person who hosted the conference website leaves their institution and forgets to hand over maintenance to someone else. Also, the arXiv imprimatur will give the papers more visibility than they'd otherwise get.
Labels:
community,
publishing
CG Steering Committee Elections: Part II
Marc van Kreveld has been taking nominations for the next set of members of the CG steering committee. He now has a list of nominees (people nominated at least twice, and willing to serve). His instructions are these:
To vote, send me an e-mail with at most five names from the list below. Put "Vote SC" (or something similar) in the e-mail subject line, or reply to this message. The possibility to vote ends on May 15. Of course, your votes will be kept confidential.

The list is below: much to my embarrassment, I have been nominated. There are many good folks on this list, and I'm not sure explicit campaigning is in good taste, but I will say that I feel the "divide" between CG and the rest of theory has been made into too big a deal, and does no one any favors. In whatever ways the steering committee can help keep us involved and participating in larger theory endeavours, I'd do my best to contribute.
If anyone else on the list below wishes to post anything about their suitability for a spot, I'd be happy to host their posting without comment.
And now, here's the list: you will note that all CG bloggers have been nominated :)
- Dan Halperin
- David Eppstein
- David Mount
- Erik Demaine
- Guenter Rote
- Jack Snoeyink
- Jean-Daniel Boissonnat
- Jeff Erickson
- Joseph Mitchell
- Leo Guibas
- Mark de Berg
- Matthew Katz
- Monique Teillaud
- Otfried Cheong
- Rolf Klein
- Sariel Har-Peled
- Sue Whitesides
- Suresh Venkatasubramanian
Thursday, April 23, 2009
WADS paper list
As previously blogged, the WADS paper list is out (sans abstracts ! come on, WADS people, get with the program).
Two interesting observations:
- WADS is no longer a workshop ! It's the Algorithms And Data Structures Symposium (ADSS?). I see that years of complaining that workshops get no respect have had an effect.
- There seems to be a preponderance of geometry papers (or should I say, OxDE papers :)). Is WADS (ADSS) becoming a satellite CompGeom conference ? This would not be a bad thing at all, considering that SODA is the only current satellite (I kid, I kid). Or maybe WADS (ADSS) is becoming a satellite of SODA/ESA ? That would be reasonable too.
Tuesday, April 21, 2009
Start your engines for Bethesda: STOC early reg deadline Apr 28
Aravind Srinivasan reminds us that the STOC early registration deadline is Apr 28 (only a week away!), so do go ahead and register.
Aravind also notes:
Especially given the current economic climate, ACM is concerned about getting enough registrants and booked hotel rooms for STOC.
This is particularly pertinent for me because I've been having similar conversations with ACM about SoCG 2010: they are generally concerned about attendance based on data from conferences being organized right now, and have been tamping down attendance estimates as a result.
Friday, April 17, 2009
Complexity book out !
The Arora-Barak complexity book is out ! Of course they've had the PDF version around for a while, and I've been using it so extensively I almost HAVE to buy the book now.
Many/multi-core research
There's been a great deal of chatter on the blogs about the multicore workshop being organized at UMD. I'll assume that everyone already knows about the workshop, and will jump straight to some thoughts.
First, two points of context. I'm at a department where there's a LOT of excitement about multicore work from the systems end. For the last two years we've had internal departmental panel discussions on various aspects of multicore research (architecture, compilers, programming languages, verification, etc.), and I've even chipped in little bits on the potential theoretical ramifications.
Secondly, I spent a good few years working on GPU algorithms (a kind of precursor to things like the Cell, and to multicore in general), both on the practical side of implementing algorithms on a GPU, and on the slightly more theoretical side of trying to abstract out the core metaphors defining "GPU-efficient" algorithms.
To an extent, I know whereof I speak, and I have a kind of guarded curiosity about multicore work. I think it's an excellent time to have this workshop. There's huge interest across the board in CS about multicore: the evangelists believe it's a game changer, and even the skeptics agree that it requires some redesign of our thinking about the entire systems "stack" (chip level, memory level, OS level, programming level, etc).
Why am I guarded, though ? It's because the models are changing very rapidly. There's the Cell-style SPU model that looks a lot like a stream processing system. There's the Nvidia CUDA programming model that abstracts a very complicated memory hierarchy over an architecture you don't have direct access to, and there are of course all the actual multicore chips for which the interfaces are evolving. I don't think anyone knows what the right abstractions are for thinking about memory access on multicore systems (and from a theory perspective, this is probably the key question), and so there's always the danger of settling on a model that changes before you have time to explore it in depth (this happened a lot in the early days of GPU work).
But I see no reason why that should stop us from paying close attention and chipping in our contributions. I do think there are abstractions that may not capture all the nitty-gritty of what's happening with multicore systems, but capture some higher-order phenomena that aren't affected too much by the day to day changes. And I think that effective multicore memory modelling could have the same impact as I/O and streaming models of computation did (and these contributions have percolated well beyond the theory community).
The trick here is not to think about multicore in terms of 2/4/8/16 cores, but to imagine a system with millions of cores, small amounts of local memory, and a cost for transferring information. One of the limitations of PRAM in my mind is its ignoring of the real cost of memory access, while focusing on the parallelism: I think a careful modelling of memory transfer is what will make multicore algorithmic research interesting, and in that sense it follows the path from I/O to cache-oblivious models, with streaming thrown in for good measure. Even MapReduce and the MUD models might be useful in this regard.
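The kind of accounting I have in mind can be illustrated with a toy sketch (the class and parameters below are entirely my own invention, not any established multicore or I/O model): charge one unit per block transfer and nothing for work inside a block, and the same N reads can differ by a factor of B in cost — a difference the PRAM is blind to.

```python
# Toy I/O-style cost model: memory is read in blocks of size B, and we
# charge one unit per block transfer. A PRAM-style count would charge
# both access patterns below identically.

class BlockMemory:
    def __init__(self, data, block_size):
        self.data = data
        self.B = block_size
        self.cached_block = None   # index of the block currently "in cache"
        self.transfers = 0         # total block transfers charged

    def read(self, i):
        block = i // self.B
        if block != self.cached_block:  # cache miss: pay for a transfer
            self.cached_block = block
            self.transfers += 1
        return self.data[i]

N, B = 1024, 32
mem = BlockMemory(list(range(N)), B)

# Sequential scan: only N/B transfers.
for i in range(N):
    mem.read(i)
scan_cost = mem.transfers

# Scattered access (stride 37 > B, so every read lands in a new block).
mem.cached_block, mem.transfers = None, 0
for i in range(N):
    mem.read((i * 37) % N)
scattered_cost = mem.transfers

print(scan_cost, scattered_cost)  # 32 vs 1024: same work, 32x the memory cost
```

The same two-level accounting is what the external-memory and cache-oblivious models formalize; the multicore question is what the analogue looks like with many caches and inter-core transfers.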
So do we have a clean model ? No. Should theoreticians stay away ? Well that really depends on your style of operation. Some people like jumping in the mud, and some like to wait till structure emerges. That's really up to the individuals. But if NO ONE jumps in, then we essentially make the statement that theoreticians have no interest in computational models that aren't "sexy". If (as I like to think) algorithms is the true God's language of computation in all its forms, then it's inconsistent to just sit on the sidelines.
I think I just talked myself into attending the workshop.
Monday, April 13, 2009
NSF (and twitter)
Although the NSF has lots of new stimulus money for disbursement, allocation to proposals has been stalled for a while as the agency figures out how much money gets to go where etc. And then a few hours ago came this, on the NSF twitter feed:
Finally, movement! Allocations of the FY09 Funds and ARRA are being finalized, and we expect the apportionment from OMB later this week.

Can I just comment on how cool it is that the NSF has a twitter feed ?
Friday, April 10, 2009
Is HAM SANDWICH PPAD-Complete ?
Reading the algorithmic game theory book, I discover that HAM SANDWICH is in PPAD (not surprising in retrospect when you think of Brouwer's fixed point theorem as the canonical PPAD problem). Is HAM SANDWICH PPAD-Complete ?
Update: I should clarify - this is the problem I'm referring to:
Given n sets of 2n points (each) in n dimensions, is there a single hyperplane that bisects each set (i.e., divides it into two sets of size n)?
In 2D, the problem is to bisect two sets (the ham and the bread).
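For intuition in the plane, here's a brute-force sketch (my own illustrative code — not a PPAD reduction, and not how the efficient algorithms work). It uses the fact that for two odd-size point sets in general position, any bisecting line of an odd set must pass through one of its points, so a ham-sandwich cut can be found among the lines through one point of each set.

```python
import random

def cross_sign(a, b, p):
    """Which side of the directed line a->b the point p lies on (+1, -1, or 0)."""
    v = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return (v > 0) - (v < 0)

def bisected(a, b, pts):
    """The line through a and b bisects pts: at most half strictly on each side."""
    left = sum(1 for p in pts if cross_sign(a, b, p) > 0)
    right = sum(1 for p in pts if cross_sign(a, b, p) < 0)
    return left <= len(pts) // 2 and right <= len(pts) // 2

def ham_sandwich_cut(A, B):
    """Brute force over all lines through one point of A and one of B."""
    for a in A:
        for b in B:
            if bisected(a, b, A) and bisected(a, b, B):
                return a, b
    return None  # cannot happen for odd sets in general position

random.seed(2009)
A = [(random.random(), random.random()) for _ in range(7)]   # the "ham"
B = [(random.random(), random.random()) for _ in range(9)]   # the "bread"
cut = ham_sandwich_cut(A, B)
assert cut is not None  # existence guaranteed by the ham sandwich theorem
```

This is cubic time and purely for intuition; in 2D the cut can actually be computed in linear time (Lo-Matoušek-Steiger), and the PPAD question is about the search problem in high dimensions.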
Labels:
game theory,
geometry,
PPAD
P vs NP seminar
I'm thinking of topics to discuss over the summer with students here, and decided on introducing people to complexity theory via the question, "Why haven't we solved the P vs NP problem?". The title is a little tongue-in-cheek: the real idea is to use P vs NP to explore some concepts in complexity theory, for an audience that has no real familiarity with the topic.
Thus far, I'm working off the "diagonalization-oracles-boolean-circuits-razborov-rudich" story line, with an addition at the end for algebraization, and P vs NC. But since this isn't my real area of expertise, I'm throwing out a request for suggestions: what topics am I missing when discussing the various attacks on P vs NP ?
p.s. I'm using Scott Aaronson's P vs NP survey as a rough starting guide for references.
Wednesday, April 08, 2009
Committees and conferences...
FOCS 2009 PC size: 20
SODA 2009 PC size: 27
ICDE 2010 PC size: 230
IJCAI 2009 PC size: 600
SoCG 2008 attendance: ~150
SODA 2008 attendance: ~300
It's an impressive feat when a conference PC is larger than an entire other conference. It also explains why everyone has more PC memberships than I do :)
Labels:
conferences,
miscellaneous
Wednesday, April 01, 2009
CG Steering Committee Elections
Via Marc van Kreveld comes the announcement of CG Steering Committee elections. The way it works is that people send nominations to Marc before Apr 14. He screens them, checks whether the nominated individuals are willing to serve, and then starts an election on May 1 to pick 5 candidates to run the committee for the next three years. The deadline for voting is May 14.
The current committee is:
- Pankaj Agarwal
- Jeff Erickson
- Marc van Kreveld (secretary)
- Joe Mitchell
- Guenter Rote (chair)
and Marc's email address is marc at cs dot uu dot nl.