Algorithms in the Field

One of the great new NSF programs in recent years is “Algorithms in the Field,” a joint initiative of the CCF, CNS, and IIS divisions in CISE.  Its goal is almost a direct match with what I try to do in my research: it “encourages closer collaboration between (i) theoretical computer science researchers [..] and (ii) other computing and information researchers [..] very broadly construed”.  The projects it funds are meant to push the boundaries of theoretical tools by applying them in an application domain.

Of course this is perfectly suited to what we do in RSRG at Caltech!  We missed the first year of the call due to bad timing, but we submitted this year and I’m happy to say it was funded (over the summer when I wasn’t blogging)!

The project is joint with Steven Low, Venkat Chandrasekaran, and Yisong Yue and has the (somewhat generic) title “Algorithmic Challenges in Smart Grids: Control, Optimization, and Learning.”

For those who are curious, here’s the quick and dirty summary of the goal…taken directly from the proposal.


Videos on universal laws and architectures

I’ve posted the beginnings of what I hope will become an extensive library of videos, papers, notes, and slides exploring in more detail both illustrative case studies and theoretical foundations for the universal laws and architectures that I superficially referred to in my previous blog posts.  For the moment, these are simply posted on Dropbox, so be sure to download them, since viewing them in a browser may only give a preview…

I’m eager to get feedback on any aspects of the material, and all the sources are available for reuse.

In addition to the introductory and overview material, of particular interest might be a recent paper on heart rate variability, one of the most persistent mysteries in all of medicine and biology, which we resolve in a new but accessible way.  There are tutorial videos in addition to the paper for download.

Rigor + Relevance beyond Caltech

I’ve been busy traveling for most of the last month, so I missed my chance for timely postings about a couple of exciting happenings that highlight how theory can impact practice. But here are two that I can’t resist a quick post about, even if it is a bit late.

First, this year’s ACM SIGCOMM award recipient is George Varghese, for “sustained and diverse contributions to network algorithmics, with far reaching impact in both research and industry.”  I think that’s a pretty nice summary. George’s work has been an inspiration for me since grad school, when I worked through his book on Network Algorithmics. He is one of the masters at finding places where theoretical/algorithmic foundations can have practical impact.  He’s also a master at communicating theoretical work to systems folks.  As a result, he’s one of the few people who can consistently get theory-driven work into SIGCOMM, which makes him the envy of many in the community.  It’s really great to see someone with such strong theory chops being recognized by the networking systems community.

The second piece of news is another highlight of the impact of Leslie Lamport‘s work.  In case you missed it, Mani wrote a great post about him when he received the Turing Award earlier this year.  I finally got to see him give a technical talk at the Microsoft Faculty Summit this year, and was excited to hear how some of his work on formal specification and model checking was finally being adopted by industry.  He mentioned in his talk that Amazon was beginning to adopt it for some large internal projects, and was even budgeting development time for TLA+, a formal specification language that Leslie invented.  After hearing this, I started to dig around, and it turns out that James Hamilton recently blogged about incorporating TLA+ into the development of DynamoDB.


Communication and Power Networks: Forward Engineering (Part II)

In Part I of this post, I explained the idea of reverse and forward engineering, applied to TCP congestion control.  Here, I will describe how forward engineering can help the design of ubiquitous, continuously acting, and distributed algorithms for load-side participation in frequency control in power networks. One of the key differences is that, whereas on the Internet both the TCP dynamics and the router dynamics can be designed to obtain a feedback system that is stable and efficient, a power network has its own physical dynamics with which our active control must interact.
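To give a feel for those physical dynamics (in a deliberately simplified, per-bus notation of my own, not the full model used in our papers), the frequency deviation $\omega_j$ at bus $j$ evolves roughly according to a swing equation of the form

$$
M_j \, \dot{\omega}_j \;=\; P^{in}_j \;-\; \big( d_j + D_j \omega_j \big) \;-\; \sum_{k} P_{jk},
$$

where $M_j$ is the inertia at bus $j$, $P^{in}_j$ is the change in power injection (e.g., from a generator or a renewable source), $d_j$ is the controllable load, $D_j \omega_j$ is the frequency-sensitive uncontrollable load, and the $P_{jk}$ are the power flows to neighboring buses. The forward-engineering question is how each controllable load $d_j$ should respond to its locally measured $\omega_j$ so that the closed-loop system behaves like a distributed algorithm for a sensible optimization problem: rebalancing power after a disturbance at minimum aggregate disutility to the loads.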


Communication and Power Networks: Forward Engineering (Part I)

This blog post will contrast communication and power networks along another interesting dimension: designing distributed control through optimization.  This point of view has been applied successfully to understanding and designing TCP (Transmission Control Protocol) congestion control algorithms over the last decade and a half, and I believe it can be equally useful for thinking about some of the feedback control problems in power networks, e.g., frequency regulation.

Even though this simple and elegant theory does not account for many important details that an algorithm must deal with in a real network, it has been successfully put into practice.  Any theory-based design method can only provide the core of an algorithm, around which many important enhancements must be developed to create a deployable product. The most important value of a theory is to provide a framework to understand issues, clarify ideas, and suggest directions, often leading to a new opportunity or a simpler, more robust, and higher-performing design.

In Part I of this post, I will briefly review the high-level idea using TCP congestion control as a concrete example.  I will call this design approach “forward engineering,” for reasons that will become clear later.   In Part II, I will focus on power: how frequency regulation is done today, the new opportunities that are in the future, and how forward engineering can help capture these new opportunities.
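To make the flavor of this concrete before diving in, here is a minimal numerical sketch (my own toy example, not any deployed TCP variant) of the optimization view of congestion control: each source picks the rate that maximizes its own utility minus the “price” of its path, while each link raises its price whenever offered traffic exceeds capacity. With concave (here, logarithmic) utilities, the iteration settles at the utility-maximizing rates.

```python
import numpy as np

# Toy example, purely illustrative (not any deployed TCP variant):
# 3 sources share 2 links; R[l, i] = 1 if source i's path uses link l.
R = np.array([[1., 1., 0.],
              [0., 1., 1.]])
c = np.array([1.0, 2.0])   # link capacities
p = np.zeros(2)            # link "congestion prices" (dual variables)
x_max = 5.0                # cap on sending rates, for numerical sanity
step = 0.05                # price update step size

for _ in range(5000):
    q = R.T @ p                                    # total price along each source's path
    # Source i maximizes log(x_i) - q_i * x_i, i.e. sets 1/x_i = q_i (capped at x_max).
    x = np.minimum(1.0 / np.maximum(q, 1e-9), x_max)
    y = R @ x                                      # traffic offered to each link
    p = np.maximum(p + step * (y - c), 0.0)        # links raise price when overloaded

print("equilibrium rates:", np.round(x, 3))        # approx. proportionally fair allocation
print("link loads:", np.round(R @ x, 3), "vs capacities:", c)
```

The point is not the specific update rules, which are stand-ins, but that the equilibrium of the feedback loop is the solution of an optimization problem chosen up front; that is what “forward engineering” will mean below.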


Communication and Power Networks: Flow Optimization (Part II)

In Part I of this post, we saw that the optimal power flow (OPF) problem in electricity networks is much more difficult than congestion control on the Internet, because OPF is nonconvex.  In Part II, I will explain where the nonconvexity comes from and how to deal with it.

Source of nonconvexity

Let’s again start with congestion control, which is a convex problem.

As mentioned in Part I, corresponding to each congestion control protocol is an optimization problem, called network utility maximization. It takes the form of maximizing a utility function over sending rates subject to network capacity constraints. The utility function is determined by the congestion control protocol: a different design to adapt the sending rate of a computer to congestion implies a different utility function that the protocol implicitly maximizes. The utility function is always increasing in the sending rates, and therefore, a congestion control protocol tends to push the sending rates up in order to maximize utility, but not to exceed network capacity. The key feature that makes congestion control simple is that the utility functions underlying all of the congestion control protocols that people have proposed are concave functions. More importantly, and in contrast to OPF, the network capacity constraint is linear in the sending rates. This means that network utility maximization is a convex problem.
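For readers who like to see it written out, here is the generic form of network utility maximization (a sketch in my own placeholder notation):

$$
\begin{aligned}
\max_{x \ge 0} \quad & \sum_i U_i(x_i) \\
\text{subject to} \quad & R\,x \le c,
\end{aligned}
$$

where $x_i$ is the sending rate of source $i$, $U_i$ is the concave utility implicitly maximized by its congestion control protocol, $R$ is the 0-1 routing matrix ($R_{li} = 1$ if source $i$’s path uses link $l$), and $c$ is the vector of link capacities. A concave objective with linear constraints is exactly what makes the problem convex.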


Communication and Power Networks: Flow Optimization (Part I)

I discussed in a previous post that digitization (the representation of information by zeros and ones, and its physical implementation and manipulation) and layering have allowed us to confine the complexity of physics to the physical layer and insulate high-level functionalities from this complexity, greatly simplifying the design and operation of communication networks.  For instance, routing, congestion control, search, ad markets, and so on do not need to deal with the nonlinearity of an optical fiber or a copper wire; in fact, they don’t even know what the underlying physical medium is.

This is not the case for power networks.  

The lack of an analogous concept of digitization in power means that we have been unable to decouple the physics (Kirchhoff’s laws) of power flow from high-level functionalities.  For instance, we need to deal with power flows not only in deciding which generators should generate electricity, when, and how much, but also in optimizing network topology, scheduling the charging of electric vehicles, pricing electricity, and mitigating the market power of providers.  That is, while the physics of the transmission medium is confined to a single layer in a cyber network, it permeates the entire infrastructure in a cyber-physical network and cannot be designed away.
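To see what “dealing with the physics” means concretely, in one common formulation (the bus injection model, written here in my own notation just to show the flavor), the complex power $s_j$ injected at bus $j$ must satisfy

$$
s_j \;=\; \sum_{k \,:\, k \sim j} V_j \left( V_j^* - V_k^* \right) y_{jk}^{*},
$$

where $V_j$ is the complex voltage at bus $j$ and $y_{jk}$ is the admittance of the line between buses $j$ and $k$. These Kirchhoff-law constraints are quadratic in the voltages, and every one of the tasks listed above, from generator dispatch to EV charging to pricing, has to respect them.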

How difficult is it to deal with power flows?

This post (and the one that follows) will illustrate some of these challenges by contrasting the problem of congestion control on the Internet with that of optimal power flow (OPF) in electricity.
