Algorithms in the Field

One of the great new NSF programs in recent years is “Algorithms in the Field,” a joint initiative from the CCF, CNS, and IIS divisions in CISE.  Its goal is almost a direct match with what I try to do with my research:  it “encourages closer collaboration between (i) theoretical computer science researchers [..] and (ii) other computing and information researchers [..] very broadly construed”.  The projects it funds are meant to push the boundaries of theoretical tools and apply them in an application domain.

Of course, this is perfectly suited to what we do in RSRG at Caltech!  We missed the first year of the call due to bad timing, but we submitted this year and I’m happy to say it was funded (over the summer when I wasn’t blogging)!

The project is joint with Steven Low, Venkat Chandrasekaran, and Yisong Yue and has the (somewhat generic) title “Algorithmic Challenges in Smart Grids: Control, Optimization, and Learning.”

For those who are curious, here’s the quick and dirty summary of the goal…taken directly from the proposal.

We are at the cusp of an historic transformation of our energy systems. Driven by the goal of sustainability, the management of the electricity grid is becoming more distributed, dynamic, and open. Over the coming decade the grid will grow into the largest and most complex example of the internet of things, with hundreds of millions of active endpoints that introduce frequent, random, and large fluctuations in supply, demand, voltage and frequency. However, there are significant challenges facing grid operators as this transformation begins. A key bottleneck in this transformation is algorithmic. In particular, the current algorithms managing the design and operation of the grid often do not scale or are not appropriate for such an enormous, distributed, dynamic network of distributed energy resources. Further, the development of algorithms that do apply to this emerging smart grid requires pushing the boundaries of control, optimization, and learning. The goal of the proposal is to tackle the algorithmic challenges underlying the transformation of the power grid. Concretely, we focus on three core algorithmic challenges facing the power grid during this period of transformation: control, optimization, and learning.

Control: Control problems in power systems are crucial and varied, e.g., frequency and voltage control. What makes these problems so difficult are the physical laws that define the dynamics of the system. We propose to study an optimization-based approach to the design of feedback controllers for cyber-physical networks such as smart grids. The method designs the controller state, the information structure of the cyber network, and the distributed control policies so that the closed-loop system is asymptotically stable, and every equilibrium point of the closed-loop system is an optimal solution of a given optimization problem.
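
To make the “equilibrium = optimum” idea concrete, here is a minimal sketch (my own toy example, not the controller design from the proposal): the plant is a scalar integrator, the feedback law is the negative gradient of a convex cost, and the unique closed-loop equilibrium is exactly the minimizer of that cost.

```python
# Toy illustration (not the proposal's method): drive a scalar integrator
# x' = u with the feedback u = -grad f(x) for the convex cost f(x) = (x - 3)^2.
# The unique closed-loop equilibrium is x* = 3, which is exactly argmin f.
def f_grad(x):
    return 2.0 * (x - 3.0)      # gradient of f(x) = (x - 3)^2

x, dt = 0.0, 0.01               # initial plant state and Euler step size
for _ in range(2000):
    u = -f_grad(x)              # optimization-based feedback law
    x = x + dt * u              # integrator dynamics x' = u

print(x)                        # converges to ~3.0, the minimizer of f
```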

Optimization: Optimal power flow problems underlie numerous power system applications; however, they are challenging because the power flow equations that define them are nonconvex. Such problems belong to a class of optimization problems termed exponential programs (EPs). In preliminary work we have developed, for a special class of EPs called signomial programs, a new hierarchy of convex relaxations based on relative entropy optimization. Our goal in the proposed work is to extend this approach to develop a complete hierarchy of convex relaxations for EPs based on relative entropy optimization. This will immediately yield a fundamentally new approach for solving optimal power flow problems in power systems, one with the potential to have enormous practical impact.
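
For a flavor of what a relative entropy relaxation looks like, here is a toy sketch in cvxpy (my own made-up example in the spirit of the signomial programming literature, not the formulation from the proposal): the signomial f(x) = exp(2x) + exp(-x) is nonconvex in x, yet a certified lower bound on it is obtained by solving a small convex relative entropy program.

```python
# Toy relative-entropy lower bound for a signomial (made-up example, not the
# proposal's formulation).  If the exponents "balance" (alpha @ nu == 0) for
# some nu >= 0, then gamma = -D(nu, e*c) certifies f(x) >= gamma for all x.
import numpy as np
import cvxpy as cp

alpha = np.array([2.0, -1.0])   # exponents of the two terms of f
c = np.array([1.0, 1.0])        # their (positive) coefficients

nu = cp.Variable(2, nonneg=True)
gamma = -cp.sum(cp.rel_entr(nu, np.e * c))     # -D(nu, e*c), a concave objective
prob = cp.Problem(cp.Maximize(gamma), [alpha @ nu == 0])
prob.solve()

print("certified lower bound:", prob.value)    # ~1.8899
print("true minimum of f:    ", 2**(-2/3) + 2**(1/3))
```

In this one-dimensional example the bound happens to be tight; the point is only that the certificate comes from a convex program even though f itself is not convex.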

Learning: One of the biggest challenges in cyber-physical networks in general, and power systems in particular, is that data about the system is too expensive (or simply impossible) to obtain in real time. This leads to challenges for control and optimization because even simple optimization and control algorithms usually involve evaluating the objective value for many (even all) possible options, e.g., greedy algorithms. Clearly, this is prohibitively expensive. Our preliminary results have shown that it is often possible to efficiently learn a near-optimal policy, despite not having access to the objective function at run time. Our goal in this proposal is to extend these preliminary results to optimization and control problems facing power systems in order to allow them to “learn to optimize” in real time.
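
To illustrate the flavor of “learn to optimize” (again, a toy sketch of my own, not the proposal's algorithms): offline we can afford to brute-force the best decision on sampled problem instances, then train a simple classifier to imitate those decisions; at run time the learned policy picks a decision from instance features without evaluating the objective at all.

```python
# Toy "learn to optimize" sketch (my own illustration, not the proposal's
# method): brute-force labels offline, imitate them online with no access
# to the (assumed expensive) objective function.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
actions = np.linspace(0.0, 1.0, 11)            # a small discrete action set

def expensive_cost(theta, a):
    # Stand-in for an objective that is too costly to query at run time.
    return (a - theta[0]) ** 2 + 0.1 * np.sin(5 * a * theta[1])

# Offline phase: label each sampled instance with its brute-force best action.
thetas = rng.uniform(size=(500, 2))
labels = np.array([np.argmin([expensive_cost(t, a) for a in actions]) for t in thetas])
policy = DecisionTreeClassifier(max_depth=6).fit(thetas, labels)

# Online phase: choose actions from features alone, never touching the objective.
test = rng.uniform(size=(5, 2))
for t, idx in zip(test, policy.predict(test)):
    print(t, "->", actions[idx])
```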

So, if you’re thinking of where to apply for grad school or a postdoc and these projects appeal to you — apply to Caltech!  We’re hiring…
