Missed the trifecta

In an earlier post I wrote about an unusual submission trifecta I had this fall: I submitted to each of NSDI, STOC, and Sigmetrics (a pure systems conference, a pure theory conference, and a hybrid conference) within the span of a couple of months.

As far as I know, no one has managed to complete this triple crown of acceptances in a given year (even when the list is broadened to STOC/FOCS, Sigcomm/NSDI, and Sigmetrics/Performance), though there is at least one person who has come quite close: Brighten Godfrey, who managed to have an NSDI and Sigmetrics paper within 12 months of each other, with a SODA paper in between…

Well, we have now heard back from all three and, unfortunately, we didn’t succeed with the triple crown. We got an accept from NSDI and two accepts from Sigmetrics, but didn’t make it into STOC. Drat, so close! So, the triple crown remains elusive for another year…

Three very different communities

Given the timing of the notifications, I’ve been thinking a lot about the contrasts between the communities. Of course, the work itself is very different, but there are many other contrasts across the three communities these conferences represent, beyond just the work, that make it difficult to co-exist in all of them.

Even something as simple as the style of paper titles tends to be distinctive in each community. For example, just from the titles of the papers we submitted, you can probably tell which was submitted to which conference:

  • GRASS: Trimming stragglers in approximation analytics
  • Pricing data center demand response
  • Energy procurement strategies in the presence of intermittent sources
  • The complexity of Nash equilibria as revealed by data

The first follows the common systems trope of “strained acronym: detailed topic description with slight relationship to the acronym”… it was submitted to NSDI. Of course, the STOC paper has “complexity” in the title, and the Sigmetrics paper titles each combine a theory-friendly term (e.g., “pricing” and “strategies/intermittent”) with something practical (e.g., “data centers” and “energy procurement”).

Beyond the titles, there are many other stylistic differences between the communities. I will eventually put together a post on the differences in writing papers for these communities, but at the moment, since we received the reviews from these conferences so close together, what strikes me most is the difference in the style of the reviews. Until I saw them back-to-back, it hadn’t hit me how different the reviews are.

Some differences between the reviews are easy to anticipate. NSDI/Sigcomm and Sigmetrics/Performance reviews focus more on the practical implications and applicability of the work than STOC/FOCS reviews do, but Sigmetrics/Performance reviewers are more willing than NSDI/Sigcomm reviewers to overlook weaknesses on the application side if there is a strong analytic contribution. This is all very natural given the targets of the communities.

The lengths of the reviews are also pretty predictable: Sigmetrics/Performance and NSDI/Sigcomm reviews tend to be long and detailed, roughly 2-3+ times the length of STOC/FOCS reviews. There also tend to be more reviewers per paper at Sigmetrics/Performance and NSDI/Sigcomm than at STOC/FOCS. These differences are natural given that the PCs at Sigmetrics/Performance and NSDI/Sigcomm are much larger than those of STOC/FOCS: with a smaller PC, each STOC/FOCS reviewer must handle far more papers, so you get fewer reviews and less detail in each one.

Surprisingly, though, while one might expect STOC/FOCS reviews to focus heavily on the analytic parts of the paper and to engage with those parts more deeply than Sigmetrics/Performance or NSDI/Sigcomm reviews, in my experience this is simply not the case! For example, in this year’s reviews, there was more discussion of the analytic parts of the work in our NSDI review than in our STOC review! …and the Sigmetrics reviewers provided, by far, the most insightful and detailed comments about the analytic parts of the papers we submitted (including detailed discussions of proof techniques, alternative proofs, and generalizations).

Those examples come from this year, but thinking back over reviews I’ve seen from these and related conferences in previous years (for my papers and other people’s papers), this pattern seems not out of the ordinary… I guess there are not too many folks who submit across these three communities, but I’m curious whether this is a consequence of the work I’m most familiar with being more “core” to NSDI/Sigcomm and Sigmetrics/Performance than to STOC/FOCS, or whether it is really a difference in reviewing style. It could also be a consequence of the overload felt by people on the STOC/FOCS PCs…

Another big difference is that the Sigmetrics reviews were, by far, the most concerned with the citation of, and differentiation from, related work. Having been on the PC many times in the past, I’ve seen this consistently. The PC is often very concerned with making sure that work from other fields doesn’t come to the middle-ground conference because it “wasn’t good enough” for the core conference (e.g., NSDI/Sigcomm or STOC/FOCS). As a result, the placement of the work in the broader field is often discussed at length during the PC meeting. It’s interesting that this doesn’t show up nearly as much in NSDI/Sigcomm or STOC/FOCS reviews, though…

I wonder how many good papers Sigmetrics loses because of this stylistic issue, since people from other communities aren’t used to including such a detailed related work section in their papers. Personally, I know that last year at Sigmetrics we had a paper accepted only as a poster (instead of a full paper) explicitly because of reviewers’ worries about its treatment of related work in machine learning… but the same paper was accepted to COLT a few months later. Have others had similar experiences?
