Rambles around computer science
Diverting trains of thought, wasting precious time
Mon, 02 Feb 2015
Thoughts on peer review
My PLDI submission was rejected. I'm not too sad about this: the
reviews were basically encouraging, and on balance I agree that the
paper could still be improved and was arguably not (quite) ready yet.
However, despite certain innovations, namely two-phase review and
rebuttal, the conference review process is as creaky as ever. This is
rather dispiriting, and the longer I spend doing computer science
research, the more bizarre our system seems. I filled in the anonymous
post-conference survey, but since my comments apply more widely than to
a single instance of a single conference, I feel like forgoing my
anonymity and re-posting them here (lightly edited). I hope they don't
have too much of a sour-grapesy flavour—at least, that's not really how
I'm feeling.
- The main disappointment is that although the review and rebuttal
process is sound on paper, the reality is one of overworked reviewers
writing "lazy"-seeming reviews (I'm not calling them lazy people), which
leaves the review and rebuttal process feeling hasty and haphazard.
- One example: each of my paper's negative PLDI reviews, even in its
post-rebuttal state, contains one or more substantial technical
misunderstandings or apparently-rebutted criticisms (either left
standing in the review or, in one case, repeated without any
acknowledgement of my having rebutted it). How should I interpret this?
Is it that my rebuttal wasn't considered valid, or that it wasn't read
or understood?
- Another example: the summary verdict included with the notification,
presumably written after the PC meeting, is a nice idea. But it bears a
frustratingly unclear relation to the overall spectrum of criticisms
advanced by the reviews. My impression from the reviews (even after the
post-rebuttal updates) was that my paper needs more thorough evaluation;
the summary says it mainly needs clearer exposition. It's as if the
summary was written by a single reviewer (apparently the nuttiest)
rather than reflecting a consensus. This discrepancy creates a lot of
uncertainty for me as an author, and so makes the "feedback" a lot less
actionable.
- The only answer seems to be more thorough and careful reviewing. That
can happen only if we reduce the reviewer load generated by repeated
resubmission, which in turn means publishing more papers overall as a
field. I personally favour a fully journal-style model of publication,
rather than the frenzy of conference submission and review cycles.
Conferences themselves should be events intended for discussion rather
than for publication and dissemination.
Currently we have the bizarre situation where publication rates are
limited by what can be fitted into an essentially fixed calendar of
three-day events, and where researchers don't attend conferences except
to present published papers. I know that PLDI is necessarily a
conference, but if its attitude to presentation slots and publications
could move further towards embracing a more progressive model, I'd be
much more hopeful about our discipline.
Something I didn't elaborate on in my survey response: what would this
“progressive model” be? It might involve reserving a smaller
part of the programme for short presentations of recent journal
publications, and a larger part of the programme for short presentations
of high-quality in-progress work. This would be work at a substantially
more advanced stage than a typical workshop paper—possibly
reasonably mature already, but not yet accepted for publication in a
journal. Initially, it seems important to ring-fence the slots for the
latter kind of presentation, to avoid having already-published work
always trump the not-yet-published stuff. Eventually, in a
well-functioning community there would not be much call for the former
kind of slot, since most journal publications would describe things that
had been presented at previous conferences.