(Nearly) A year later

It’s been one year since I started as executive officer (Caltech’s name for department chair) for our CMS department…and, not coincidentally, it’s been almost that long since my last blog post!  But now, a year in, I’ve got my administrative legs under me and I think I can get back to posting at least semi-regularly.

As always, the first post back after a long gap is a news-filled one, so here goes!

Caltech had an amazing faculty recruitment year last year!  Caltech’s claim to fame in computer science has always been pioneering disruptive new fields at the interfaces of computing — quantum computing, DNA computing, sparsity and compressed sensing, algorithmic game theory, … Well, this year we began an institute-wide initiative to redouble our efforts on this front, and it yielded big rewards: we hired six new mid-career faculty working at the interfaces of computer science with other fields!  That is an enormous number for Caltech, where the whole place has only 300 faculty…

Continue reading

A report from “The Mathematics of Planet Earth”

The Mathematics of Planet Earth is a joint initiative of a consortium of mathematical sciences organizations around the world (organized nominally by DIMACS) with the goal of showcasing how mathematics can be useful in tackling our world’s problems.  It started last year as a year-long focus but has since been extended and will continue in the coming years as well.   I’ve been to a few events organized under this program, but the reason for this post is to highlight the recent workshop on “Data-aware energy use” held at UCSD a week or so ago.

Continue reading

A tale of two metrics: competitive ratio and regret

Throughout our work on data center workload management over the past few years, one common theme has emerged: online convex optimization. Whether we were looking at load shifting, dynamic right-sizing, geographical load balancing, or data center demand response, we consistently ended up with an online convex optimization problem to solve. So, I wanted to do a short post here highlighting something particularly surprising (at least to us) that we uncovered last year.
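
To make the setting concrete, here’s a toy sketch of the online convex optimization protocol (my illustration here, not code from any of our papers): in each round you commit to an action, and only then is that round’s convex cost revealed.  Online gradient descent, shown below on made-up quadratic costs, is the textbook way to get low regret, i.e., low cost relative to the best fixed action in hindsight; the competitive ratio instead measures cost relative to the dynamic offline optimal, and the relationship between the two metrics is what the post is about.

```python
import numpy as np

def online_gradient_descent(grads, x0, eta=0.1, lo=0.0, hi=1.0):
    """Projected online gradient descent over the box [lo, hi]^d.

    grads: one gradient oracle per round, revealed only after acting.
    Returns the sequence of actions taken.
    """
    x = np.asarray(x0, dtype=float)
    actions = []
    for grad in grads:
        actions.append(x.copy())                # commit to x_t before cost_t is revealed
        x = np.clip(x - eta * grad(x), lo, hi)  # gradient step, then project back
    return actions

# Toy instance: quadratic costs f_t(x) = ||x - b_t||^2 with drifting targets b_t.
rng = np.random.default_rng(0)
targets = [rng.uniform(0.0, 1.0, size=2) for _ in range(50)]
grads = [lambda x, b=b: 2.0 * (x - b) for b in targets]
actions = online_gradient_descent(grads, x0=np.zeros(2))
print(actions[-1])  # the action drifts toward the recent targets over time
```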

Continue reading

It’s almost time for NEGT again

Every fall since 2009, the folks in Southern California who work at the border of economics and CS/EE have gotten together for the Network Economics and Game Theory (NEGT) workshop.  The hosting duties rotate between USC, UCLA, and Caltech, and this year the honor falls to us here at Caltech.

We’ve just finalized the program — and it’s a great one.  So, if you’re in the area, come on by!

We’re holding it on Nov 20-21.  We’ll start each day at a very reasonable 10am so that folks can avoid LA traffic in the morning, and we’ll end both days with a reception so that you can avoid traffic on the way home, too.  Markus Mobius (MSR) and Tim Roughgarden (Stanford) are the keynotes, and we have a great list of invited speakers from all across Southern California to round out the program.

Attendance is free, but please register early, if possible, so that we can plan the catering!  Also, we’ll have a poster session for students to present work (and work-in-progress).  If you’re interested, just sign up when you register.

Communication and Power Networks: Forward Engineering (Part II)

In Part I of this post, I explained the idea of reverse and forward engineering as applied to TCP congestion control.   Here, I will describe how forward engineering can help in designing ubiquitous, continuously acting, distributed algorithms for load-side participation in frequency control in power networks. One key difference is that, whereas on the Internet both the TCP dynamics and the router dynamics can be designed to obtain a feedback system that is stable and efficient, a power network has its own physical dynamics with which our active control must interact.
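
To give a feel for that interaction, here is a toy sketch (illustrative only, with made-up parameters; not the model or controller from our work) of a single-bus swing-equation model in which loads respond proportionally to the frequency deviation.  After a step loss of generation, load-side feedback shrinks the steady-state frequency deviation from p/D to p/(D+K):

```python
import numpy as np

# Toy single-bus swing-equation model (all parameters are illustrative):
#   M * dw/dt = -D * w + p_step - u(w)
# where w is the frequency deviation, p_step a sudden generation loss,
# and u(w) the load-side control response.
M, D = 10.0, 1.0     # inertia and damping constants (assumed)
p_step = -1.0        # step disturbance: sudden loss of generation
K = 5.0              # load-side feedback gain (assumed)
dt, T = 0.01, 100.0  # integration step and horizon

def simulate(load_control):
    w, trace = 0.0, []
    for _ in range(int(T / dt)):
        u = K * w if load_control else 0.0    # loads back off as frequency drops
        w += dt * (-D * w + p_step - u) / M   # forward-Euler integration
        trace.append(w)
    return np.array(trace)

no_ctrl = simulate(load_control=False)    # settles near w = p_step / D = -1.0
with_ctrl = simulate(load_control=True)   # settles near w = p_step / (D + K) = -1/6
print(no_ctrl[-1], with_ctrl[-1])         # smaller steady-state deviation with control
```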

Continue reading

Communication and Power Networks: Forward Engineering (Part I)

This blog post will examine another interesting contrast between communication and power networks: designing distributed control through optimization.  This point of view has been successfully applied to understanding and designing TCP (Transmission Control Protocol) congestion control algorithms over the past decade and a half, and I believe it can be equally useful for thinking about some of the feedback control problems in power networks, e.g., frequency regulation.
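
To preview the flavor of that theory, here is a minimal sketch of the network utility maximization (NUM) dynamics that underlie this viewpoint (a toy illustration with assumed numbers, not a description of any deployed TCP variant): sources pick rates selfishly given the prices along their routes, links raise their prices when congested, and together these local updates solve a global optimization problem that no single agent sees.

```python
import numpy as np

# Toy network-utility-maximization (NUM) dynamics in the spirit of
# Kelly-style congestion control; numbers are illustrative.
#
#   maximize  sum_i log(x_i)   subject to  R @ x <= c
#
R = np.array([[1.0, 1.0, 0.0],   # routing matrix: R[l, i] = 1 if source i uses link l
              [0.0, 1.0, 1.0]])
c = np.array([1.0, 2.0])         # link capacities (assumed)
p = np.ones(2)                   # link "prices" (dual variables)
gamma = 0.05                     # price update step size

for _ in range(5000):
    q = R.T @ p                               # total price along each source's route
    x = 1.0 / np.maximum(q, 1e-3)             # rate maximizing log(x_i) - q_i * x_i
    y = R @ x                                 # aggregate flow on each link
    p = np.maximum(p + gamma * (y - c), 0.0)  # raise price where demand exceeds capacity

print(np.round(x, 3))  # converges near the NUM optimum, approx. [0.577, 0.423, 1.577]
```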

Even though this simple and elegant theory does not account for many important details that an algorithm must deal with in a real network, it has been successfully put into practice.  Any theory-based design method can only provide the core of an algorithm, around which many important enhancements must be developed to create a deployable product. The most important value of a theory is to provide a framework for understanding issues, clarifying ideas, and suggesting directions, often leading to a new opportunity or a simpler, more robust, and higher-performing design.

In Part I of this post, I will briefly review the high-level idea using TCP congestion control as a concrete example.  I will call this design approach “forward engineering,” for reasons that will become clear later.   In Part II, I will focus on power: how frequency regulation is done today, the new opportunities that are in the future, and how forward engineering can help capture these new opportunities.

Continue reading

A report from (2 days of) Stochastic Networks

June is a month dominated by conference travel for me, with three of my favorite conferences all typically happening back-to-back.  The third (and final) of these this year was Stochastic Networks.  The little one at home kept me from joining for the whole conference, but I was happy to be able to come for the first two days.

Stochastic Networks is an applied probability conference of a type that doesn’t happen often enough in computer science.  Basically, it consists of 20-25 invited hour-long talks over a week.  The talks are mostly by senior folks, with a few junior folks thrown in, and are of extremely high quality.  And, if you do the math, that averages out to 4-5 talks per day, which means the days leave a lot of time for conversation and interaction.  Because of the quality of the speakers, lots of folks attend even if they aren’t presenting (making for an audience of 100+ people, I’d guess), so it becomes a very productive event, both in terms of working with current collaborators and in terms of starting up new projects.

Continue reading