Friday, April 17, 2009

Many/multi-core research

There's been a great deal of chatter on the blogs about the multicore workshop being organized at UMD. I'll assume that everyone already knows about the workshop, and will jump straight to some thoughts.

Two points of background. First, I'm at a department where there's a LOT of excitement about multicore work from the systems end. For the last two years we've had internal departmental panel discussions on various aspects of multicore research (architecture, compilers, programming languages, verification, etc.), and I've even chipped in little bits on the potential theoretical ramifications.

Second, I spent a good few years working on GPU algorithms (a kind of precursor to things like the Cell, and to multicore in general), both on the practical side of implementing algorithms on a GPU and on the slightly more theoretical side of trying to abstract out the core metaphors defining "GPU-efficient" algorithms.

To an extent, I know whereof I speak, and I have a kind of guarded curiosity about multicore work. I think it's an excellent time to have this workshop. There's huge interest across the board in CS about multicore: the evangelists believe it's a game changer, and even the skeptics agree that it requires some redesign of our thinking about the entire systems "stack" (chip level, memory level, OS level, programming level, etc).

Why am I guarded, though? Because the models are changing very rapidly. There's the Cell-style SPU model, which looks a lot like a stream processing system. There's the Nvidia CUDA programming model, which abstracts a very complicated memory hierarchy over an architecture you don't have direct access to. And there are of course all the actual multicore chips, for which the interfaces are still evolving. I don't think anyone knows what the right abstractions are for thinking about memory access on a multicore system (from a theory perspective, this is probably the key question), and so there's always the danger of settling on a model that changes before you have time to explore it in depth (this happened a lot in the early days of GPU work).
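
To make the memory-hierarchy point concrete, here's a minimal sketch of the kind of code CUDA asks you to write: the actual computation (a sum) is trivial, and most of the program is about staging data between slow global memory and fast per-block shared memory. The kernel name, block size, and host scaffolding are illustrative assumptions on my part, not any canonical example.

    // Minimal CUDA sketch (assumed parameters throughout): each thread
    // block stages data from slow, large global memory into fast, small
    // on-chip shared memory, reduces locally, and writes one partial sum
    // back to global memory.
    #include <cstdio>

    #define BLOCK 256  // threads per block; a typical (assumed) choice

    __global__ void blockSum(const float *in, float *out, int n)
    {
        __shared__ float buf[BLOCK];   // per-block local memory
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + tid;

        // One (expensive) read from global memory per thread.
        buf[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        // Tree reduction carried out entirely in shared memory.
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s) buf[tid] += buf[tid + s];
            __syncthreads();
        }

        // One (expensive) write back to global memory per block.
        if (tid == 0) out[blockIdx.x] = buf[0];
    }

    int main()
    {
        const int n = 1 << 20;
        const int blocks = (n + BLOCK - 1) / BLOCK;

        float *h_in = new float[n];
        for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

        float *d_in, *d_out;
        cudaMalloc(&d_in, n * sizeof(float));
        cudaMalloc(&d_out, blocks * sizeof(float));
        cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

        blockSum<<<blocks, BLOCK>>>(d_in, d_out, n);

        // Finish the reduction on the host for simplicity.
        float *h_out = new float[blocks];
        cudaMemcpy(h_out, d_out, blocks * sizeof(float), cudaMemcpyDeviceToHost);
        float total = 0.0f;
        for (int i = 0; i < blocks; ++i) total += h_out[i];
        printf("sum = %f\n", total);  // expect 1048576

        cudaFree(d_in); cudaFree(d_out);
        delete[] h_in; delete[] h_out;
        return 0;
    }

Notice the ratio: a one-line algorithm wrapped in thirty-odd lines of data movement. Whatever the right abstraction turns out to be, that's the part it has to account for.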

But I see no reason why that should stop us from paying close attention and chipping in our contributions. I do think there are abstractions that may not capture all the nitty-gritty of what's happening inside multicore systems, but do capture higher-order phenomena that aren't affected too much by the day-to-day changes. And I think that effective modelling of multicore memory could have the same impact as the I/O and streaming models of computation did (contributions that have percolated well beyond the theory community).

The trick here is not to think about multicore in terms of 2/4/8/16 cores, but to imagine a system with millions of cores, small amounts of local memory, and a cost for transferring information. One of the limitations of the PRAM, to my mind, is that it ignores the real cost of memory access while focusing on the parallelism: I think careful modelling of memory transfer is what will make multicore algorithmic research interesting, and in that sense it follows the path from the I/O model to cache-oblivious models, with streaming thrown in for good measure. Even MapReduce and the MUD models might be useful in this regard.
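
As a back-of-the-envelope illustration (with assumed parameters: N items, P cores, and data moving between a large shared memory and small local memories in blocks of size B), consider summing N numbers. A PRAM charges one unit per memory access, so the parallel running time is roughly

    \[ T_{\mathrm{PRAM}} = O\!\left(\frac{N}{P} + \log P\right), \]

no matter where the data sits. A transfer-counting model in the I/O tradition instead charges for block movement, giving on the order of

    \[ Q = O\!\left(\frac{N}{PB} + \log P\right) \]

block transfers for the same computation. The two accountings agree on sequential scans but diverge sharply on irregular access patterns, which is exactly where the interesting multicore questions live. (How to set the parameters, and how to charge for contention, is of course precisely what's up for grabs.)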

So do we have a clean model? No. Should theoreticians stay away? Well, that really depends on your style of operation. Some people like jumping in the mud, and some like to wait till structure emerges. That's up to the individual. But if NO ONE jumps in, then we essentially make the statement that theoreticians have no interest in computational models that aren't "sexy". If (as I like to think) algorithms is God's true language of computation in all its forms, then it's inconsistent to just sit on the sidelines.

I think I just talked myself into attending the workshop.

2 comments:

  1. Suresh -- Just chiming in my agreement. Well said.

    Here's another thought. The claim is that we need good models. If none of us "get our hands dirty" by actually working on these systems, where will the good models come from? Best to be involved...

  2. If there were a model that was general enough to have all current species of multicore computers as special cases, it seems like analyzing and comparing special cases of that model from a theoretical viewpoint could even point implementers in new and (hopefully) better directions.

    Or maybe there isn't a single direction that is better, but at least such a theoretical analysis might give a uniform framework in which to compare different implementations of multicore at a high level.

