Thursday, February 25, 2016

Time to cluster !

After many blog posts, much discussion, and some contract back and forth, I'm happy to announce that +Sergei Vassilvitskii and I have signed a contract with Cambridge University Press to write a book on clustering.

"What's that?", you say. "You mean there's more than one lecture's worth of material on clustering?". 

In fact there is, and we hope to produce a good many lectures' worth of it.

Information on the book will be available at http://clustering.cc. It currently forwards to my blog page collecting some of the posts we wrote, but it will eventually have its own content.

Now the painful part starts.

Sunday, February 14, 2016

Cartograms only exist in years divisible by 4...

Every four years, America suddenly discovers that it likes football (er, soccer). Every four years, America also discovers the existence of the cartogram. The last time I ventured into this territory (here's the link, but link rot has claimed the images), I was accused of being a liberal shill. And here's more on cartograms and the 2004 election.

Wednesday, February 10, 2016

Teaching a process versus transmitting knowledge

Ta-Nehisi Coates is forced to state who he's voting for in the next election. He struggles with the act of providing an answer, because
Despite my very obvious political biases, I’ve never felt it was really my job to get people to agree with me. My first duty, as a writer, is to myself. In that sense I simply hope to ask all the questions that keep me up at night. My second duty is to my readers. In that sense, I hope to make readers understand why those questions are critical. I don’t so much hope that any reader “agrees” with me, as I hope to haunt them, to trouble their sense of how things actually are.
In the last few years, I've had to think a lot about what it means to teach a core class (algorithms, computational geometry) for which most material is already out there on the Web. I think of my role more as an explorer's guide through the material. You can always visit the material by yourself, but a good tour guide can tell you what's significant, what's not, how to understand the context, and what is the most suitable path (not too steep, not too shallow) through the topic.

That's all well and good for teaching content. But when it comes to teaching process, for example with my Reading With Purpose seminar, I have to walk a finer line. I've spent a lot of time with my own thoughts trying to deconstruct how I read papers, and what parts of that process are good and useful to convey, and what parts are just random choices.

I want to make sure my students are "haunted and troubled" by the material they read. I want them to learn how to question, be critical, and find their own way to interrogate the papers. I want them to do the same critical deconstruction of their own process as well.

On the other hand, the "stand-back-and-be-Socratic" mode is very hard to execute without it seeming like I'm playing an empty game that I have no stakes in, and so I occasionally have to share my "answers". I fear that my answers, coming from a place of authority, will push out their less well-formed but equally valid ways of approaching the material.

I deal with this right now by constantly trying to emphasize why different approaches are ok, and by trying to foster lots of class discussion, but it's a struggle.

Note: I'm certain that this is a solved problem in the world of education, so if anyone has useful tips I'm eager to get pointers, suggestions, etc. 

Monday, February 08, 2016

Who owns a review, and who's having the conversation ?

I had floated a trial balloon (this is election season after all!) about making reviews of my papers public. I had originally thought that my coauthors would object to this plan, for fear of retribution from reviewers, or feelings of shame about the reviews (things that I was concerned with).

What I didn't expect was strong resistance from potential reviewers, with people going so far as to say that a) they might refuse to review my papers in the future, or b) I'd be deterring reviewers from doing any kind of reviewing if my idea caught on.

I was surprised by this response. And then I was surprised by my surprise. And I realized that the two perspectives above (mine and the commenters') come from fundamentally different views on the relation between reviewer, author, and arbiter (aka the PC/journal).

This post is an attempt to flesh that out.

Dramatis Personae:

There are three players in this drama: the (A)uthor, (R)eviewer, and the PC chair/editor whom we'll call the (J)udge. A submits a paper, R reviews it, and J (informed by R) makes a final decision.

So how do they interact ?

View 1: A and R have a conversation.

My idea of publishing reviews comes from this perspective. R looks over the paper and discusses it (anonymously) with A, facilitated by J. The discussion could be one-shot, two-shot, or a longer process. The discussion is hopefully informative and clarifying. Ideally it improves the paper. This is an approximation of what happens in a journal review, and I think it is how many authors imagine the process working.

Posting the discussion is then helpful because it provides some context for the work (think of it like the comments section of a page, but chaperoned, or an overheard research discussion at a cafe).

It's also helpful to keep all parties honest. Reviewers aren't likely to write bad reviews if they know the reviews might become public. In fact, a number of conferences that I'm involved with are experimenting with making reviews public (although this is at the behest of J, not A).

View 2: J and R have a conversation

J requests that R make an assessment of the paper. R reads it over, forms an opinion, and then has a short conversation with J. In a conference setting, J has other constraints like space and balance, but R can at least provide a sense of whether the paper is above the bar for publication or not. This is how most reviewers imagine the process working.

At the end of the process, J decides (in concert with R) how much of the review to share with A, ranging from just the decision bit to the entire review (I don't know of any conference that shares the conversation as well).

Who owns the review, and who's having the conversation? 

The difference between these two perspectives seems to be at the root of all the complaining and moaning about peer review in our community (I'm not talking about the larger issues with peer review in say the medical community). Authors think that they're operating in View 1, and are surprised at the often perfunctory nature of the review, and the seeming unwillingness of reviewers to engage in a discussion (when for example there's a rebuttal process).

Reviewers on the other hand live in View 2, and are begrudging at best with comments that are directed at the author. In fact, the harshness and seeming arbitrariness of the review (as perceived by the author) can be explained simply as: they weren't really written for you to read !

The view also changes one's perspective on ownership. If a review is a conversation between J and R, then it's an outrageous idea to let A (who's only getting the review out of kindness) publish it for all to see. But if the review is meant to help A write a better paper, then why can't A publish the discussion ?

So what's the upshot of all of this ? 

There are many good reasons not to publish my reviews. Probably the most important reason (as was pointed out to me) is that the very fact that I can speculate out loud about doing this demonstrates a kind of privilege. That is to say, if I do publish critical reviews of my work, I'm likely to take less of the blame and more of the credit than coauthors who are more disadvantaged (students, minorities, women). If you don't believe me, I encourage you to read Tamara Munzner's series on a major brouhaha in the Vis community triggered by a public review (posted by a reviewer).

Another good reason is that if some of my coauthors object (and so I don't post reviews for papers with them) and others don't (and so I do), that in itself sends signals of the "what are you afraid of" form that can again fall disproportionately on my coauthors.

A third reason is that anonymous reviews never stay that way. Eventually, if enough reviews get posted, some enterprising NLPer will write a simple predictor to identify writing styles in reviews, cluster reviews likely written by the same individual, and then cross-reference the clusters with other available information (for example, PC membership) to deanonymize reviewers.
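To make that concern concrete, here is a minimal, entirely hypothetical sketch of the kind of stylometric clustering I have in mind. The review snippets and the cluster count are invented for illustration; this is not a working attack, just the shape of one.

```python
# Hypothetical sketch of the deanonymization worry above: represent reviews by
# character n-gram features (which capture punctuation and phrasing habits)
# and cluster them, so that reviews landing in the same cluster become
# candidates for having the same author. The review texts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = [
    "The proof of Lemma 3 is incomplete; moreover, the experiments are thin.",
    "This paper studies an interesting problem, however the writing is unclear.",
    "Moreover, the lower bound argument is incomplete; the authors should fix it.",
    "This is an interesting problem, however the evaluation is unconvincing.",
]

# Character n-grams (3-5 characters, within word boundaries) as style features.
features = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(reviews)

# Cluster into a guessed number of "authors"; real stylometry would be far
# more careful, but the point is that style leaks identity at scale.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # reviews sharing a label are candidate same-author pairs
```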

But here are some bad reasons (that were posted in response to my post):

  • Reviewers will be scared away and it's hard enough to get them to review in the first place ? Really? Reviewers have such fragile egos ? This is a classic slippery slope argument with no real basis in truth. And given how many younger researchers are desperate to get a chance to review papers, I suspect that as soon as someone stops, someone else will pick up the slack.
  • Reviewers own the copyright of their writing, and it would be a copyright violation. IANAL, but I don't think the people raising this point are either. And this is very backwards reasoning. We can decide in good faith whether we think posting reviews is a good idea or not, but using legal arguments seems like a cop-out. There are always ways to fix that at PC formation time. 
  • It's ethically wrong to post reviews. I don't understand how it's an ethical issue. The only way there could be an ethical issue is if reviewers were promised that the reviews would stay confidential. But that's never the case: reviewers are exhorted to share the reviews with the authors. And again, this has the causality backward. Whether we should publish reviews or not should not depend on what we currently might have in the way of expectations. 
I should note that NSF proposal reviews (that are much more secret) are shared with the author without conditions, and there's no prohibition against posting them. In fact seeing proposal reviews can be a great way to understand how reviewers think. 

Bottom line: I won't be posting my reviews any time soon, which is a pity because I genuinely think that this provides a degree of accountability for reviewers that they currently don't have. But it was very interesting to think through this out loud and understand the perspective others brought to the discussion. 



Saturday, February 06, 2016

ITA FTW: Bayesian surprise and eigenvectors in your meal.

I've been lounging in San Diego at my favorite conference, the Information Theory and Applications workshop. It's so friendly that even the sea lions are invited (the Marine Room is where we had the conference banquet).

Sadly this year I was mired in deadlines and couldn't take advantage of the wonderful talks on tap and the over 720 people who attended. Pro tip, ITA: Could you try to avoid the ICML/KDD/COLT deadlines next time :) ?

ITA always has fun events that our more "serious" conferences should learn from. This time, the one event I attended was a Man vs. Machine cookoff, which I thought was particularly apropos since I had just written a well-received article with a cooking metaphor for thinking about algorithms and machine learning.

The premise: Chef Watson (IBM's Watson, acting as a chef) designs recipes for dinner (appetizer/entree/dessert) with an assist from a human chef. Basically, the chef puts in some ingredients and Watson suggests a recipe (not from a list of recipes, but from its database of knowledge about chemicals, what tastes 'go well together', and so on). This was facilitated by Kush Varshney from IBM, who works on this project.

Each course is presented as a blind pairing of Watson and human recipes, and it's our job to vote for which one we think is which.

It was great fun. We had four special judges, and each of us had a placard with red and blue sides to make our votes. After each course, Kush gave us the answer.

The final score: 3-0. The humans guessed correctly for each course. The theme was "unusualness": the machine-generated recipes had somewhat stranger combinations, and because Watson doesn't (yet) know about texture, its recipes had a different mouthfeel.

This was probably the only time I've heard the words 'Bayesian surprise' and 'eigenvector' used in the context of food reviews.


Thursday, February 04, 2016

Making all my reviews public (and annotated): A question.

I was reading a post on G+ about a musician who keeps all her performance reviews on her website and annotates them with a response. Not to "fight back", but to add to the reviews (that are occasionally negative).

I'm very tempted to do the same thing myself with my submissions. I think this will provide more clarity about the nature of the review process, about the often honest and revealing discussions that take place behind the peer-review walls, and about how subtleties in the writing can change the perception of a work. I suspect that as a consequence I'll be more circumspect about submitting something half-baked (which might be a good thing). I'll have to be careful not to get defensive in my responses to the reviews (which is always hard). And I may not be able to get away as easily with "changing the introduction" to get a paper in (which happens shockingly often).

Of course the biggest problem will be getting my co-authors (who are often my students) to agree beforehand. So here's my question:
Would you work with me if you knew I was planning to make all my reviews public? 

Monday, February 01, 2016

On "the moral hazard of complexity-theoretic assumptions"

(ed: In a recent CACM editorial, CACM editor-in-chief +Moshe Vardi discussed Babai's result on graph isomorphism and the recent hardness result for edit distance in the context of a larger discussion on the significance of complexity-theoretic assumptions. +Piotr Indyk (one of the authors of the edit distance result and an occasional guest blogger) posted the following as a response. This response has also been posted as a comment on Moshe's post.)

(ed: Update: After Piotr posted the comment, Moshe responded, and then Piotr responded again. Please visit the article page to read the exchange between the two). 

In a recent CACM editorial, Dr. Vardi addresses what he calls "a moral hazard of complexity-theoretic assumptions" and "a growing gap between current theory and practice of complexity and algorithms". Given that the article mentions a paper that I am a co-author of ["Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)", STOC'15], I believe it is appropriate for me to respond. In short, I believe that much of the analysis in the article stems from a confusion between press coverage and the actual scientific inquiry. Indeed, it is unclear whether Dr. Vardi addresses what he believes to be a "media" phenomenon (how journalists describe scientific developments to a broad public) or a "scientific" phenomenon (how and why scientists make and use assumptions in their research and describe them in their papers). In order to avoid conflating these two issues, I will address them one by one, focusing on our paper as an example.

  1. Media aspects: The bulk of the editorial is focused on some of the press coverage describing recent scientific developments in algorithms and complexity. In particular, Dr. Vardi mentions the title of a Boston Globe article covering our work ("For 40 Years, Computer Scientists Looked for a Solution that Doesn't Exist"). As I already discussed elsewhere (https://liorpachter.wordpress.com/2015/08/14/in-biology-n-does-not-go-to-infinity/#comment-4792), I completely agree that the title and some other parts of the article leave a lot to be desired. Among other things, the conditional aspects of the result are discussed only at the end of the article, and therefore are easy to miss. At the same time, I believe some perspective is needed. Inaccuracy or confusion in popular reporting of scientific results is an unfortunate but common and longstanding phenomenon (see e.g., this account https://lpsdp.files.wordpress.com/2011/10/ellipsoid-stories.pdf of the press coverage of Khachiyan's famous linear programming algorithm in the 1970s). There are many reasons for this. Perhaps the chief one is the cultural gap between the press and the scientists, where journalists emphasize accessibility and newsworthiness while scientists emphasize precision. As a result, simplification in scientific reporting is a necessity, and the risk of oversimplification, inaccuracy, or incorrectness is high. Fortunately, more time and effort spent on both sides can lead to more thorough and nuanced articles (e.g., see https://www.quantamagazine.org/20150929-edit-distance-computational-complexity/). Given that the coverage of algorithms and complexity results in the popular press is growing, I believe that, in time, both scientists and journalists will gain valuable experience in this process.
  2. Scientific aspects: Dr. Vardi also raises some scientific points. In particular:
    • Dr. Vardi is critical of the title of our paper, "Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)". I can only say that, given that we state the assumption explicitly in the title, in the abstract, in the introduction, and in the main body of the paper, I believe the title and the paper accurately represent its contribution.
    • Dr. Vardi is critical of the validity of SETH as a hardness assumption: this question is indeed the subject of robust discussion and investigation (see e.g., the aforementioned Quanta article). My best guess, and that of most of the people I have discussed this with, is that the assumption is true. However, this is far from a universal opinion. Quantifying the level of support for this conjecture would be an interesting project, perhaps along the lines of similar efforts concerning the P vs. NP conjecture (https://www.cs.umd.edu/~gasarch/papers/poll2012.pdf). In any case, it is crucial to strengthen the framework by relying on weaker assumptions or by replacing one-way reductions with equivalences; both are subjects of ongoing research. However, even the existing developments have already led to concrete benefits. For example, failed attempts to prove conditional hardness of certain problems have led to better algorithms for those tasks.

Finally, let me point out that one of the key motivations for this line of research is precisely the strengthening of the relationship between theory and practice in complexity and algorithms, a goal that Dr. Vardi refers to as an important challenge. Specifically, this research provides a framework for establishing evidence that certain computational questions cannot be solved within concrete (e.g., sub-quadratic) polynomial time bounds. In general, I believe that a careful examination of the developments in algorithms and complexity over the last decade would show that the gap between theory and practice is shrinking, not growing. But that is a topic for another discussion.
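(ed: For readers who want to see the quadratic algorithm under discussion, here is a minimal illustrative sketch of the classical Wagner-Fischer dynamic program. It is my addition, not part of Piotr's response: it runs in O(nm) time, and the STOC'15 result gives evidence, conditional on SETH, that no algorithm can run in O(n^(2-δ)) time for any constant δ > 0.)

```python
# Classical dynamic program (Wagner-Fischer) for edit distance: O(n*m) time.
# The conditional lower bound discussed above says that, assuming SETH, no
# algorithm can run in O(n^(2 - delta)) time for any constant delta > 0.
def edit_distance(a: str, b: str) -> int:
    n, m = len(a), len(b)
    prev = list(range(m + 1))  # distances from the empty prefix of a to b[:j]
    for i in range(1, n + 1):
        curr = [i] + [0] * m
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete a[i-1]
                          curr[j - 1] + 1,     # insert b[j-1]
                          prev[j - 1] + cost)  # substitute or match
        prev = curr
    return prev[m]

print(edit_distance("kitten", "sitting"))  # prints 3
```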
