Communication and Power Networks: Forward Engineering (Part II)

In Part I of this post, I explained the idea of reverse and forward engineering, applied to TCP congestion control. Here, I will describe how forward engineering can help the design of ubiquitous, continuously acting, and distributed algorithms for load-side participation in frequency control in power networks. One of the key differences is that, whereas on the Internet both the TCP dynamics and the router dynamics can be designed to obtain a feedback system that is stable and efficient, a power network has its own physical dynamics with which our active control must interact.
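To give a flavor of what forward engineering means here (a simplified sketch with generic notation, not the exact formulation developed in the post): suppose a sudden imbalance \(\Delta P\) opens up between generation and load. Load-side frequency control can be posed as the loads collectively solving

\[ \min_{d} \; \sum_j c_j(d_j) \quad \text{subject to} \quad \sum_j d_j = \Delta P, \]

where \(d_j\) is the demand adjustment of load \(j\) and \(c_j\) is a convex disutility for deviating from its preferred consumption. The appealing observation, made precise in the literature on load-side frequency control, is that the frequency deviation plays the role of the Lagrange multiplier on the balance constraint; each load responding to its locally measured frequency then acts as one step of a distributed algorithm for this problem, with the network’s own physical dynamics carrying out part of the computation.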

Continue reading

Communication and Power Networks: Forward Engineering (Part I)

This blog post will contrast communication and power networks in another interesting aspect: designing distributed control through optimization.  This point of view has been successfully applied to understanding and designing TCP (Transmission Control Protocol) congestion control algorithms over the last decade and a half, and I believe that it can be equally useful for thinking about some of the feedback control problems in power networks, e.g., frequency regulation.
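As a toy illustration of this point of view (a sketch of the idea, not any deployed TCP variant; the routing matrix, weights, and step size below are made up for the example), here is a small primal-dual simulation in Python. Two sources sharing one link adapt their rates to a congestion price, and the pair of update rules solves a utility maximization problem in a distributed way:

    import numpy as np

    # Toy primal-dual dynamics for network utility maximization:
    #   maximize  sum_s w_s * log(x_s)   subject to  R x <= c.
    R = np.array([[1.0, 1.0]])   # routing: sources 1 and 2 share link 0
    c = np.array([1.0])          # link capacity
    w = np.array([1.0, 1.0])     # utility weights
    x = np.array([0.1, 0.1])     # sending rates (primal variables)
    p = np.array([0.0])          # link price, e.g. queueing delay (dual variable)
    step = 0.01

    for _ in range(20000):
        q = R.T @ p                                   # path price seen by each source
        x = np.maximum(x + step * (w / x - q), 1e-6)  # sources: climb utility net of price
        p = np.maximum(p + step * (R @ x - c), 0.0)   # links: raise price on excess demand

    print(x)  # approaches [0.5, 0.5]: equal shares, i.e., proportional fairness

At the fixed point, \(w_s / x_s\) equals the path price, which is exactly the optimality condition for this utility maximization problem; this is the sense in which a congestion control protocol can be read as a distributed algorithm for an optimization problem it never explicitly writes down.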

Even though this simple and elegant theory does not account for many important details that an algorithm must deal with in a real network, it has been successfully put into practice.  Any theory-based design method can only provide the core of an algorithm, around which many important enhancements must be developed to create a deployable product. The most important value of a theory is to provide a framework to understand issues, clarify ideas, and suggest directions, often leading to a new opportunity or a simpler, more robust, and higher-performing design.

In Part I of this post, I will briefly review the high-level idea using TCP congestion control as a concrete example.  I will call this design approach “forward engineering,” for reasons that will become clear later.   In Part II, I will focus on power: how frequency regulation is done today, the new opportunities on the horizon, and how forward engineering can help capture them.

Continue reading

Communication and Power Networks: Flow Optimization (Part II)

In Part I of this post, we saw that the optimal power flow (OPF) problem in electricity networks is much more difficult than congestion control on the Internet, because OPF is nonconvex.   In Part II, I will explain where the nonconvexity comes from and how to deal with it.

Source of nonconvexity

Let’s again start with congestion control, which is a convex problem.

As mentioned in Part I, corresponding to each congestion control protocol is an optimization problem, called network utility maximization. It takes the form of maximizing a utility function over sending rates subject to network capacity constraints. The utility function is determined by the congestion control protocol: a different design for adapting a computer’s sending rate to congestion implies a different utility function that the protocol implicitly maximizes. The utility function is always increasing in the sending rates, so a congestion control protocol tends to push the sending rates up in order to maximize utility, but not to exceed network capacity.

The key feature that makes congestion control simple is that the utility functions underlying all of the congestion control protocols that people have proposed are concave. Equally important, and in contrast to OPF, the network capacity constraint is linear in the sending rates. Together these mean that network utility maximization is a convex problem.
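In symbols (generic notation, just to make the structure explicit), network utility maximization reads

\[ \max_{x \ge 0} \; \sum_s U_s(x_s) \quad \text{subject to} \quad \sum_{s:\, l \in \text{path}(s)} x_s \le c_l \ \text{ for every link } l, \]

where \(x_s\) is the sending rate of source \(s\), \(U_s\) is the concave utility function implied by the protocol, and \(c_l\) is the capacity of link \(l\). A concave objective maximized over a feasible set defined by linear inequalities is a convex problem, which is why congestion control is analytically tractable.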

Continue reading

Communication and Power Networks: Flow Optimization (Part I)

I have discussed in a previous post that digitization (the representation of information by zeros and ones, and its physical implementation and manipulation) and layering have allowed us to confine the complexity of physics to the physical layer and insulate high-level functionalities from this complexity, greatly simplifying the design and operation of communication networks.  For instance, routing, congestion control, search, ad markets, and so on do not need to deal with the nonlinearity of an optical fiber or a copper wire; in fact, they don’t even know what the underlying physical medium is.

This is not the case for power networks.  

The lack of an analogous concept of digitization in power means that we have been unable to decouple the physics (Kirchhoff’s laws) of power flow from high-level functionalities.  For instance, we need to deal with power flows not only in deciding which generators should generate electricity, when, and how much, but also in optimizing network topology, scheduling the charging of electric vehicles, pricing electricity, and mitigating the market power of providers.   That is, while the physics of the transmission medium is confined to a single layer in a cyber network, it permeates the entire infrastructure in a cyber-physical network, and cannot be designed away.

How difficult is it to deal with power flows?

This post (and the one that follows) will illustrate some of these challenges by contrasting the problem of congestion control on the Internet and that of optimal power flow (OPF) in electricity.

Continue reading

Universal Laws and Architectures (Part III)

This post is the final piece of a discussion about a research program to address challenges surrounding the formalization of the study of “architecture.” If you missed them, be sure to start with Part I and Part II, since they provide context for what follows…

The pedagogy and research challenge so far

What is (I claim) universal and (somewhat unavoidably) taken for granted in both the doing and telling of the stories so far is the enormous hidden complexity in the layered architectures: mostly in what I am calling the respective OSes, but in the layered architectures generally, just as it is in our everyday use of computers, phones, and the Internet.  The “hourglass” in apps/OS/HW reflects diversity in apps and hardware, and the lack of diversity in OSes.  There is no hourglass in complexity, at least within a single system, and the OSes can be highly complex.  This can be effective as long as that complexity is largely hidden, though gratuitous complexity can erode robustness and efficiency.  Layered architectures are most effective exactly when they disappear and everything just learns, adapts, evolves, and works, like magic.  Matches won and antibiotics survived, and the architecture makes it look easy.  It is this hidden complexity that we must both reverse and forward engineer, with both theory and technology.

Continue reading

Universal Laws and Architectures (Part II)

This post is a continuation of a discussion about a research program to address an essential but (I think) neglected challenge involving “architecture.” If you missed it, be sure to start with Part I, since it provides context for what follows…

Basic layered architectures in theory and practice

If architecture is the most persistent and shared organizational structure of a set of systems and/or a single system over time, then the most fundamental theory of architecture is due to Turing, who first formalized splitting computation into layers of software (SW) running on digital hardware (HW).  (I’m disturbed that Turing has been wildly misread and misinterpreted, but I’ll stick with those parts of his legacy that are clear and consistent within engineering, having to do with computation and complexity.  I’ll avoid Turing tests and morphogenesis for now.)

Continue reading

Universal Laws and Architectures (Part I)

Steven’s previous posts discussing architectural issues in communication and electricity networks (part I, part II) naturally lead to a discussion of architecture more broadly, which has been a topic of interest for quite a while at Caltech…

In particular, the motivation for this blog post is to start a discussion about a research program to address an essential but (I think) neglected challenge involving “architecture.” For me, architecture is the most persistent and shared organizational structure across a set of systems and/or within a single system over time.  But nothing in this subject is clear or resolved.

Continue reading