Beyond worst-case analysis

Well, last week was the final workshop in the semester-long “Algorithms & Uncertainty” program that I helped organize at the Simons Institute.  There are a few weeks of reading groups / seminars / etc. left, but the workshop marks the last “big” event.  It’s sad to see the program wrapping up, though I’m definitely happy that I will be spending a lot less time on planes pretty soon — I’ve been flying back and forth every week!

The last workshop was on “Beyond worst-case analysis,” and the organizers (Avrim Blum, Nir Ailon, Nina Balcan, Ravi Kumar, Kevin Leyton-Brown, Tim Roughgarden) did a great job of putting together a really unique program.  Since there’s no “one way” to go beyond worst-case, the week included a huge array of interesting approaches for getting results beyond what worst-case analysis allows, whether by learning commonalities across different types of instances, using semi-random / smoothed analysis models, or extracting features that make it possible to learn which algorithmic approach to use.

If you’re interested in a broad overview, I highly recommend watching Avrim’s introductory talk, which surveys a ton of the approaches for going “beyond worst-case” that have received attention recently.  Tim also has a great course on the topic that is worth checking out…

My own contribution was the final talk of the week, which gave an overview of our work on using properties of prediction noise to design better online optimization algorithms.  The idea is that predictions are crucial to almost all online problems, yet we don’t really understand how properties of prediction error should influence algorithm design.  Over the last few years we’ve developed models and results that give initial insights into, e.g., how the correlation structure of prediction errors changes the form of the “optimal” algorithm.  Check out the video for more details!
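To make the “correlation structure matters” point a bit more concrete, here is a toy simulation. It is purely my own illustration, not the model or algorithm from the talk: it tracks a random-walk target under quadratic tracking and switching costs with a simple receding-horizon scheme, and lets you swap between two hypothetical prediction-error models that have the same marginal error variance but different correlation across the lookahead horizon, so you can compare how the average cost varies with the lookahead window under each error model.

```python
# Toy illustration (my own sketch, not the model or algorithm from the
# talk): 1-D tracking with switching costs, solved by a simple
# receding-horizon scheme, under two hypothetical prediction-error
# models with matched marginal variance but different correlation.

import numpy as np

rng = np.random.default_rng(0)
T, beta = 2000, 10.0
y = np.cumsum(rng.normal(size=T))  # target trajectory: a random walk

def predictions(t, w, correlated):
    """Noisy predictions of y[t:t+w] made at time t (hypothetical models)."""
    k = np.arange(1, w + 1)
    if correlated:
        err = np.cumsum(rng.normal(size=w))      # errors accumulate: std ~ sqrt(k)
    else:
        err = rng.normal(size=w) * np.sqrt(k)    # independent, matched std
    horizon = y[t:t + w]
    return horizon + err[:len(horizon)]

def avg_cost(w, correlated):
    """Receding horizon: minimize predicted cost over the next w steps,
    commit only to the first action, repeat.  Per-step cost is
    (x_t - y_t)^2 + beta * (x_t - x_{t-1})^2."""
    x_prev, total = 0.0, 0.0
    for t in range(T):
        yhat = predictions(t, w, correlated)
        h = len(yhat)
        # The horizon subproblem is a tridiagonal quadratic; solve exactly.
        A = np.zeros((h, h))
        b = yhat.copy()
        for i in range(h):
            A[i, i] = 1.0 + beta + (beta if i < h - 1 else 0.0)
            if i > 0:
                A[i, i - 1] = A[i - 1, i] = -beta
        b[0] += beta * x_prev
        x = np.linalg.solve(A, b)[0]  # commit to the first step only
        total += (x - y[t]) ** 2 + beta * (x - x_prev) ** 2
        x_prev = x
    return total / T

for correlated in (False, True):
    label = "correlated" if correlated else "i.i.d.    "
    costs = {w: round(avg_cost(w, correlated), 2) for w in (1, 3, 5, 10)}
    print(label, costs)
```

(The horizon subproblem is solved exactly here only because quadratic costs make it a small tridiagonal linear system; for general convex costs you would call a solver instead.)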

A tale of two metrics: competitive ratio and regret

Throughout our work on data center workload management over the past few years, one theme kept emerging: online convex optimization.  Whether we were looking at load shifting, dynamic right-sizing, geographical load balancing, or data center demand response, we consistently ended up with an online convex optimization problem to solve.  So I wanted to write a short post highlighting something particularly surprising (at least to us) that we uncovered last year.
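For context before you click through, here are the two metrics as they are usually defined for online convex optimization, in my notation rather than the paper’s: an algorithm ALG plays a point x_t as the convex cost functions c_t arrive; the competitive ratio measures the multiplicative gap to the dynamic offline optimal, while regret measures the additive gap to the best static point in hindsight.

```latex
% Competitive ratio: compare to the dynamic offline optimal,
% which may move at every step.
\mathrm{CR} \;=\; \sup_{c_1,\dots,c_T}
  \frac{\sum_{t=1}^{T} c_t\bigl(x_t^{\mathrm{ALG}}\bigr)}
       {\min_{x_1,\dots,x_T} \sum_{t=1}^{T} c_t(x_t)}

% Regret: compare to the best single (static) point in hindsight.
\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} c_t\bigl(x_t^{\mathrm{ALG}}\bigr)
  \;-\; \min_{x} \sum_{t=1}^{T} c_t(x)
```

The two benchmarks, dynamic versus static, are what allow the metrics to pull an algorithm in very different directions.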
