This blog post will contrast another interesting aspect of communication and power networks: designing distributed control through optimization. This point of view has been successfully applied to understanding and designing TCP (Transmission Control Protocol) congestion control algorithms over the past decade and a half, and I believe that it can be equally useful for thinking about some of the feedback control problems in power networks, e.g., frequency regulation.
Even though this simple and elegant theory does not account for many important details that an algorithm must deal with in a real network, it has been successfully put into practice. Any theory-based design method can only provide the core of an algorithm, around which many important enhancements must be developed to create a deployable product. The most important value of a theory is to provide a framework to understand issues, clarify ideas, and suggest directions, often leading to a new opportunity or a simpler, more robust, and higher-performing design.
In Part I of this post, I will briefly review the high-level idea using TCP congestion control as a concrete example. I will call this design approach “forward engineering,” for reasons that will become clear later. In Part II, I will focus on power: how frequency regulation is done today, the new opportunities on the horizon, and how forward engineering can help capture these new opportunities.
In Part I of this post, we have seen that the optimal power flow (OPF) problem in electricity networks is much more difficult than congestion control on the Internet, because OPF is nonconvex. In Part II, I will explain where the nonconvexity comes from, and how to deal with it.
Source of nonconvexity
Let’s again start with congestion control, which is a convex problem.
As mentioned in Part I, corresponding to each congestion control protocol is an optimization problem, called network utility maximization. It takes the form of maximizing a utility function over sending rates subject to network capacity constraints. The utility function is determined by the congestion control protocol: a different design for adapting a computer’s sending rate to congestion implies a different utility function that the protocol implicitly maximizes. The utility function is always increasing in the sending rates, so a congestion control protocol tends to push the sending rates up in order to maximize utility, but not to exceed network capacity. The key feature that makes congestion control simple is that the utility functions underlying all of the congestion control protocols proposed so far are concave. More importantly, and in contrast to OPF, the network capacity constraint is linear in the sending rates. Together, these two facts mean that network utility maximization is a convex problem.
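The convex structure described above can be made concrete with a toy instance. The sketch below is my own illustration, not from the post: the two-flow network, the log utility functions, and the step size are all assumptions chosen for simplicity. It solves a tiny network utility maximization by dual decomposition, the price-based mechanism that congestion control protocols implicitly implement.

```python
import numpy as np

# Toy network utility maximization (NUM), solved by dual decomposition.
# Illustrative assumptions: 2 flows share one link of capacity c = 1.0,
# and each flow has utility U(x) = log x, which is concave, so the
# problem is convex.

c = 1.0      # link capacity (the linear constraint: x1 + x2 <= c)
p = 1.0      # link "price" (dual variable), e.g., delay or loss
step = 0.01  # price update step size

for _ in range(5000):
    # Each source maximizes U(x) - p*x on its own; for U = log,
    # the best response is x = 1/p.
    x = np.array([1.0 / p, 1.0 / p])
    # The link raises its price when demand exceeds capacity,
    # and lowers it otherwise (prices stay positive).
    p = max(1e-6, p + step * (x.sum() - c))

# At the optimum, both flows share the link equally.
print(np.round(x, 3))  # -> [0.5 0.5]
```

With log utilities the unique optimum splits the capacity evenly; the link “price” plays the role that queueing delay or packet loss plays in a real protocol, and the iteration converges precisely because the problem is convex.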
I have discussed in a previous post that digitization (the representation of information by zeros and ones, and its physical implementation and manipulation) and layering have allowed us to confine the complexity of physics to the physical layer and insulate high-level functionalities from this complexity, greatly simplifying the design and operation of communication networks. For instance, routing, congestion control, search, ad markets, and so on do not need to deal with the nonlinearity of an optical fiber or a copper wire; in fact, they don’t even know what the underlying physical medium is.
This is not the case for power networks.
The lack of an analogous concept of digitization in power means that we have been unable to decouple the physics (Kirchhoff’s laws) of power flow from high-level functionalities. For instance, we need to deal with power flows not only in deciding which generators should generate electricity when and how much, but also in optimizing network topology, scheduling the charging of electric vehicles, pricing electricity, and mitigating the market power of providers. That is, while the physics of the transmission medium is confined to a single layer in a cyber network, it permeates the entire infrastructure in a cyber-physical network, and cannot be designed away.
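To see concretely why the physics cannot be designed away, consider the simplest power flow relation. The snippet below is my own illustrative sketch (the lossless-line model and the particular voltage and reactance values are assumptions, not from the post): real power over an AC line is a sinusoid in the voltage angle difference, whereas an Internet link just adds up sending rates linearly.

```python
import numpy as np

# Illustrative sketch: on a lossless AC line with reactance X, the real
# power transferred between buses with voltage angles t1 and t2 is
#     P = (V1 * V2 / X) * sin(t1 - t2),
# a consequence of Kirchhoff's laws. The sinusoid makes the feasible set
# of power flows nonconvex -- unlike a link on the Internet, where flow
# is simply the linear sum of sending rates.

def line_flow(theta1, theta2, V1=1.0, V2=1.0, X=0.5):
    """Real power (per unit) over a lossless line."""
    return (V1 * V2 / X) * np.sin(theta1 - theta2)

# Doubling the angle difference does NOT double the flow:
print(line_flow(0.2, 0.0))  # ~0.397
print(line_flow(0.4, 0.0))  # ~0.779, not 2 * 0.397
```

It is exactly this nonlinearity, hidden below the physical layer in a communication network but exposed to every function in a power network, that makes OPF so much harder than congestion control.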
How difficult is it to deal with power flows?
This post (and the one that follows) will illustrate some of these challenges by contrasting the problem of congestion control on the Internet and that of optimal power flow (OPF) in electricity.
In Part I of this post, we have seen how a layered architecture has transformed the communication network. What is so difficult about a layered architecture for the power network? Let’s again look first at its role in the communication network.
DARPA started a packet network in 1969 with four nodes at UCLA, UCSB, SRI (Stanford Research Institute), and the University of Utah, which grew into today’s Internet. The early-to-mid 1990s were when the world at large discovered the Internet. The release of the Mosaic browser in 1993 by the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign, probably played the most visible role in triggering this transition. But the 1990s were also the time when multiple technologies and infrastructures came together to ready the Internet for prime time. What exactly was the role of layering?
Adam took the courageous dive into the world of blogging about us at Caltech living in “the gap.” I dare not commit to regular contributions as he admirably has, but I have agreed to write a series of posts contrasting R&D for power and communication networks. This is the first in this series, and my first-ever blog post!
Smart grid is in vogue these days, for good reasons. As always, excitement attracts people, ideas, and resources; but, if not well managed, it can create disillusionment that pushes the pendulum back. It’d be impossible to predict how the current resurgence of interest in power systems R&D will play out, but the confluence of powerful forces will likely (and hopefully) drive dramatic advances in the coming decades. We plan to chat about these in this space over the coming months.