Communication and Power Networks: Forward Engineering (Part I)

This blog post will examine another interesting contrast between communication and power networks: designing distributed control through optimization.  This point of view has been applied successfully to understanding and designing TCP (Transmission Control Protocol) congestion control algorithms over the last decade and a half, and I believe it can be equally useful for thinking about some of the feedback control problems in power networks, e.g., frequency regulation.
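
To make "designing distributed control through optimization" concrete, here is a minimal sketch of the idea in the TCP setting: pose a network utility maximization problem and read a distributed algorithm off its dual. Everything below (topology, the log utilities, weights, step size) is a made-up illustration under those assumptions, not any deployed TCP variant.

```python
import numpy as np

# A minimal sketch of "forward engineering": treat rate control as network
# utility maximization with utilities U_s(x) = w_s * log(x), and run a
# dual-gradient algorithm on it. All numbers are illustration values.

R = np.array([[1, 1, 0],       # routing matrix: R[l, s] = 1 if source s uses link l
              [1, 0, 1]])
c = np.array([1.0, 2.0])       # link capacities
w = np.array([1.0, 2.0, 3.0])  # source weights

p = np.ones(2)                 # link "prices" (dual variables)
gamma = 0.05                   # price-update step size

for _ in range(5000):
    q = R.T @ p                                # path price seen by each source
    x = w / np.maximum(q, 1e-6)                # source update: argmax of U_s(x) - q_s * x
    y = R @ x                                  # aggregate rate on each link
    p = np.maximum(0.0, p + gamma * (y - c))   # links raise prices when overloaded

print("equilibrium rates:", np.round(x, 3))
```

The point of the sketch is that each source needs only the price along its own path and each link needs only its own traffic, so the optimization problem decomposes into a distributed protocol.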

Even though this simple and elegant theory does not account for many important details that an algorithm must deal with in a real network, it has been successfully put into practice.  Any theory-based design method can only provide the core of an algorithm, around which many important enhancements must be developed to create a deployable product. The most important value of a theory is to provide a framework for understanding issues, clarifying ideas, and suggesting directions, often leading to a new opportunity or a simpler, more robust, and higher-performing design.

In Part I of this post, I will briefly review the high-level idea using TCP congestion control as a concrete example.  I will call this design approach “forward engineering,” for reasons that will become clear later.  In Part II, I will focus on power: how frequency regulation is done today, the new opportunities on the horizon, and how forward engineering can help capture them.

Continue reading

Communication and Power Networks: Flow Optimization (Part II)

In Part I of this post, we have seen that the optimal power flow (OPF) problem in electricity networks is much more difficult than congestion control on the Internet, because OPF is nonconvex.   In Part II, I will explain where the nonconvexity comes from, and how to deal with it.

Source of nonconvexity

Let’s again start with congestion control, which is a convex problem.

As mentioned in Part I, corresponding to each congestion control protocol is an optimization problem, called network utility maximization. It takes the form of maximizing a utility function over sending rates, subject to network capacity constraints. The utility function is determined by the congestion control protocol: a different design for adapting a computer’s sending rate to congestion implies a different utility function that the protocol implicitly maximizes. The utility function is always increasing in the sending rates, so a congestion control protocol tends to push sending rates up in order to maximize utility, but not beyond network capacity. The key feature that makes congestion control simple is that the utility functions underlying all of the congestion control protocols that have been proposed are concave. Equally important, and in contrast to OPF, the network capacity constraint is linear in the sending rates. A concave objective together with linear constraints means that network utility maximization is a convex problem.
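
To make that structure concrete, here is a generic statement of the problem described above; the notation is standard (x_s is the sending rate of source s, U_s its concave increasing utility, L(s) the links on its route, c_l the capacity of link l) and is not taken verbatim from the posts.

```latex
% Network utility maximization (NUM): concave objective, linear constraints,
% hence a convex problem. Notation is generic, not specific to any protocol.
\begin{align*}
  \max_{x \,\ge\, 0} \quad & \sum_{s} U_s(x_s) \\
  \text{subject to} \quad  & \sum_{s \,:\, l \in L(s)} x_s \;\le\; c_l
      \qquad \text{for every link } l .
\end{align*}
```

Since each U_s is concave and the capacity constraints are linear, the problem is convex; in OPF, by contrast, the analogous constraints come from Kirchhoff’s laws and are quadratic in the voltages, which is roughly where the nonconvexity enters.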

Continue reading

Communication and Power Networks: Flow Optimization (Part I)

I have discussed in a previous post that digitization (the representation of information by zeros and ones, and its physical implementation and manipulation) and layering have allowed us to confine the complexity of physics to the physical layer and insulate high-level functionalities from this complexity, greatly simplifying the design and operation of communication networks.  For instance, routing, congestion control, search, ad markets, and so on do not need to deal with the nonlinearity of an optical fiber or a copper wire; in fact, they don’t even know what the underlying physical medium is.

This is not the case for power networks.  

The lack of an analogous concept of digitization in power means that we have been unable to decouple the physics (Kirchhoff’s laws) of power flow from high-level functionalities.  For instance, we need to deal with power flows not only in deciding which generators should generate electricity, when, and how much, but also in optimizing network topology, scheduling the charging of electric vehicles, pricing electricity, and mitigating the market power of providers.  That is, while the physics of the transmission medium is confined to a single layer in a cyber network, it permeates the entire infrastructure of a cyber-physical network and cannot be designed away.

How difficult is it to deal with power flows?

This post (and the one that follows) will illustrate some of these challenges by contrasting the problem of congestion control on the Internet and that of optimal power flow (OPF) in electricity.

Continue reading

Can 19th-century technology solve the energy storage dilemma?

Energy storage is basically a holy grail for the power system community these days.  If we had cost-effective, large-scale energy storage, many of the challenges that go along with incorporating renewable energy into the grid would disappear.  But, we don’t, and the basic feeling is that we need some sort of new idea to get there…

My bet is that if you ask a grade-schooler how best to store energy, one of the first ideas they’d suggest is to roll a heavy rock up a hill when you have excess energy and then, when you want the energy back later, extract it as the rock rolls down the hill…

Over the last few years I’ve been suggesting to folks that this idea isn’t as crazy as it sounds, and it seems that others have been thinking along similar lines! Dave Rutledge recently pointed me to a new energy storage startup called ARES that does essentially that.
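
A quick back-of-envelope calculation gives a feel for the scale involved; the mass and height below are arbitrary illustration values, not figures from ARES, and conversion losses are ignored.

```python
# Back-of-envelope gravity storage: energy = mass * g * height.
# Mass and height are arbitrary illustration values, not ARES figures.

g = 9.81             # gravitational acceleration, m/s^2
mass_kg = 100_000    # a hypothetical 100-tonne loaded rail car
height_m = 100       # hypothetical net elevation gain

energy_kwh = mass_kg * g * height_m / 3.6e6   # 1 kWh = 3.6 MJ
print(f"~{energy_kwh:.0f} kWh stored per trip")   # roughly 27 kWh
```

Even with generous numbers, a single car stores only tens of kilowatt-hours, which is presumably why rail-based designs of this kind rely on long, heavy trains and large elevation gains.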

Continue reading

Death spirals

Among the challenges of renewable integration that often go undiscussed are the “death spirals” associated with adoption. We’ve been thinking a lot about these issues at Caltech over the past few years…

Two motivating stories

To highlight what we mean by a “death spiral,” let us first consider consumers in Southern California who draw a lot of power from the grid. They clearly have an incentive to install rooftop solar, because under tiered pricing the price of each incremental kilowatt-hour increases with total consumption: consume little and the next kilowatt-hour falls into a low-priced tier; consume a lot and it falls into a high-priced one. This convex price structure is an incentive for high consumers to reduce consumption; it is also, however, an incentive to install rooftop solar so that net consumption falls into a lower tier.
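
As a concrete illustration of that incentive, here is a small sketch of an increasing-block (tiered) bill; the tier widths and prices are made-up numbers, not an actual Southern California tariff.

```python
# Tiered (increasing-block) pricing: why rooftop solar is worth more to a
# high consumer. Tier boundaries and prices are made-up illustration values.

def bill(kwh, tiers=((300, 0.15), (400, 0.25), (float("inf"), 0.40))):
    """Total bill: each tier's width is billed at that tier's rate."""
    total, remaining = 0.0, kwh
    for width, rate in tiers:
        used = min(remaining, width)
        total += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return total

solar_kwh = 100  # hypothetical monthly offset from a small rooftop system
for usage in (350, 1200):   # a light consumer and a high consumer
    savings = bill(usage) - bill(usage - solar_kwh)
    print(f"{usage} kWh/month: solar offsets ${savings:.2f} "
          f"({savings / solar_kwh:.2f} $/kWh)")
```

With these made-up tiers, the same 100 kWh of solar output is worth $0.40 per kilowatt-hour to the high consumer but only about $0.20 per kilowatt-hour to the light one.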

But what are the consequences of the fact that incentives for adoption are much stronger for high consumers?

Continue reading

Market Power in Electricity Markets

Over the last few years, I’ve gotten very interested in issues surrounding the incorporation of renewable energy into IT and, more generally, the smart grid.  One of the issues that has particularly grabbed my attention is that of “market power.”

Now, market power is typically one of those fuzzy concepts that academics like to ignore — often with a phrase such as “we assume that agents are price-takers.” I’ve done this plenty of times myself, and often this is an okay way to get insight into a problem.  But, as I’ve gotten involved in electricity markets, it has become more and more clear that you can’t get away with ignoring market power issues in this context.

Unfortunately, quantifying (and even defining) market power is a tricky thing — and if done badly, it can lead to damaging regulatory problems.  But, on the other hand, if it is ignored, the problems can be equally bad.

Continue reading

Universal Laws and Architectures (Part II)

This post is a continuation of a discussion about a research program to address an essential but (I think) neglected challenge involving “architecture.” If you missed it, be sure to start with Part I, since it provides context for what follows…

Basic layered architectures in theory and practice

If architecture is the most persistent and shared organizational structure of a set of systems and/or a single system over time, then the most fundamental theory of architecture is due to Turing, who first formalized splitting computation into layers of software (SW) running on digital hardware (HW).  (I’m disturbed that Turing has been wildly misread and misinterpreted, but I’ll stick with those parts of his legacy that are clear and consistent within engineering, having to do with computation and complexity.  I’ll avoid Turing tests and morphogenesis for now.)

Continue reading