Beyond worst-case analysis

Well, last week was the final workshop in the semester-long “Algorithms & Uncertainty” program that I helped to organize at the Simons Institute.  There are a few weeks of reading groups / seminars / etc. left, but the workshop marks the last “big” event.  It’s sad to see the program wrapping up, though I’m definitely happy that I will be spending a lot less time on planes pretty soon — I’ve been flying back and forth every week!

The last workshop was on “Beyond worst-case analysis” and the organizers (Avrim Blum, Nir Ailon, Nina Balcan, Ravi Kumar, Kevin Leyton-Brown, Tim Roughgarden) did a great job of putting together a really unique program.  Since there’s no single way to go beyond worst-case, the week included a huge array of interesting approaches for getting results beyond what is possible with worst-case analysis, whether it be learning commonalities across different types of instances, semi-random / smoothed analysis models, or extracting features that enable learning which algorithmic approach to use.

If you’re interested in getting a broad overview of the approaches, I highly recommend watching Avrim’s introductory talk, which gave a survey of a ton of different approaches for going “beyond worst-case” that have received attention recently.  Tim also has a great course on the topic that is worth checking out…

My own contribution was the final talk of the week, which gave an overview of our work on how to use properties of prediction noise to help design better online optimization algorithms.  The idea is that predictions are crucial to almost all online problems, but we don’t really understand how properties of prediction error should influence algorithm design.  Over the last few years we’ve developed some models and results that allow initial insights into, e.g., how correlation structure in prediction errors changes the “optimal” algorithm form.  Check out the video for more details!
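To give a flavor of why correlation structure matters (this is just a toy sketch, not the model from the talk, and every name and parameter below is made up purely for illustration): if prediction errors are independent, combining many predictions averages the noise away, but if the errors are strongly correlated, averaging buys almost nothing, and the “optimal” way to use the predictions changes.  A minimal Python simulation of that contrast:

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_prediction_error(k, rho, trials=20000):
    """RMS error of the average of k noisy predictions whose errors are
    jointly Gaussian, unit variance, with pairwise correlation rho."""
    cov = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)
    errors = rng.multivariate_normal(np.zeros(k), cov, size=trials)
    return np.sqrt(np.mean(errors.mean(axis=1) ** 2))

for rho in [0.0, 0.5, 0.9]:
    print(f"rho={rho}:",
          [round(avg_prediction_error(k, rho), 3) for k in (1, 4, 16)])

# With rho = 0 the error of the averaged prediction shrinks roughly like
# 1/sqrt(k); with rho close to 1, extra predictions barely help, so an
# algorithm built around naive averaging is no longer the right choice.
```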

Algorithms in the Field

One of the great new NSF programs of recent years is “Algorithms in the Field,” a joint initiative of the CCF, CNS, and IIS divisions in CISE.  Its goal is almost a direct match with what I try to do with my research:  it “encourages closer collaboration between (i) theoretical computer science researchers [..] and (ii) other computing and information researchers [..] very broadly construed”.  The projects it funds are meant to push the boundaries of theoretical tools and apply them in an application domain.

Of course this is perfectly suited to what we do in RSRG at Caltech!  We missed the first year of the call due to bad timing, but we submitted this year and I’m happy to say it was funded (over the summer when I wasn’t blogging)!

The project is joint with Steven Low, Venkat Chandrasekaran, and Yisong Yue and has the (somewhat generic) title “Algorithmic Challenges in Smart Grids: Control, Optimization, and Learning.”

For those who are curious, here’s the quick-and-dirty summary of the goal, taken directly from the proposal.
