I have discussed in a previous post that digitization (the representation of information by zeros and ones, and its physical implementation and manipulation) and layering have allowed us to confine the complexity of physics to the physical layer and insulate high-level functionalities from this complexity, greatly simplifying the design and operation of communication networks. For instance, routing, congestion control, search, and ad markets do not need to deal with the nonlinearity of an optical fiber or a copper wire; in fact, they don’t even know what the underlying physical medium is.

*This is not the case for power networks.*

The lack of an analogous concept of digitization in power means that we have been unable to decouple the physics (Kirchhoff’s laws) of power flow from high-level functionalities. For instance, we need to deal with power flows not only in deciding which generators should generate electricity when and how much, but also in optimizing network topology, scheduling the charging of electric vehicles, pricing electricity, and mitigating the market power of providers. That is, while the physics of the transmission medium is confined to a single layer in a cyber network, it permeates the entire infrastructure in a cyber-physical network, and cannot be designed away.

*How difficult is it to deal with power flows?*

This post (and the one that follows) will illustrate some of these challenges by contrasting the problem of congestion control on the Internet and that of optimal power flow (OPF) in electricity.

**Congestion control and OPF**

Congestion control is the problem of deciding, for every computer on the Internet, how fast it should send its traffic into the network. If a computer sends too slowly, its application performance suffers, but if it sends too fast, it congests the network, and performance can suffer much more. It is a challenging problem, because there is no centralized authority that coordinates these decisions. The problem must be solved in a distributed fashion by each computer individually in real time using only local information, yet the global behavior that results from the interaction of these local algorithms must be stable, robust, and efficient. A useful way to think about it is to interpret these individual actions as carrying out a distributed algorithm over the network in real time to solve a certain optimization problem (“network is the computer” in a different sense).
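To make this concrete, here is a minimal sketch of that interpretation, using a textbook dual-decomposition model rather than any deployed protocol (the weights, capacity, and step size are all made-up numbers): each source reacts only to a congestion price, the link reacts only to its total traffic, and yet their interaction maximizes a global utility.

```python
# Two sources share one link of capacity c. Each source i wants to
# maximize w_i * log(x_i) minus the congestion price it pays; the link
# raises its price when total traffic exceeds capacity and lowers it
# otherwise. No central coordinator: sources see only the price, and
# the link sees only the aggregate rate.
c = 1.0                # link capacity (hypothetical)
w = [1.0, 2.0]         # per-source utility weights (hypothetical)
p = 1.0                # initial congestion price
step = 0.5             # price update step size

for _ in range(500):
    # Each source's local best response to the current price:
    # maximize w_i*log(x_i) - p*x_i, which gives x_i = w_i / p.
    x = [wi / p for wi in w]
    # The link's local update: raise price if overloaded, lower if idle.
    p = max(1e-6, p + step * (sum(x) - c))

# At equilibrium the rates split the capacity in proportion to the
# weights: x approaches [1/3, 2/3] and the price approaches 3.
print(x, p)
```

The equilibrium of these purely local updates is exactly the solution of maximizing the sum of the two sources’ utilities subject to the link capacity constraint, which is the sense in which the network “computes.”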

For us, OPF in power networks is the problem of minimizing a certain cost function, such as the total power loss over a network, the fuel cost of electricity generation, or some user disutility, subject to constraints such as Kirchhoff’s laws, which govern power flows on a network. OPF is fundamental in power system operations and planning, as it underlies numerous applications such as unit commitment (which generators to use), economic dispatch (how much these generators should generate), state estimation (estimation of the full network state from partial measurements), stability and reliability assessment (will the network converge to a new stable equilibrium in the event of a contingency), volt/var control (provision of reactive power to stabilize voltages), demand response (adaptation of load to fluctuating supply), etc. At the core of each of these applications is an OPF problem.
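A tiny illustration of why Kirchhoff’s laws complicate things: consider a hypothetical two-bus resistive network (all numbers invented for illustration, and far simpler than a real AC power flow model). A generator holds bus 1 at voltage 1.0, and a line of resistance r feeds a load at bus 2. The power-balance constraint is quadratic in the bus-2 voltage, so even this toy problem has two distinct feasible solutions, and “optimal” power flow means picking the one with lower loss.

```python
import math

# Hypothetical two-bus toy: slack bus held at V1, line resistance r,
# load demand p_d at bus 2. Power delivered to the load is
#   V2 * (V1 - V2) / r = p_d     (Ohm's law + power balance),
# which is quadratic in V2: V2^2 - V1*V2 + r*p_d = 0.
V1, r, p_d = 1.0, 0.1, 0.8

disc = math.sqrt(V1 ** 2 - 4 * r * p_d)
V2_high, V2_low = (V1 + disc) / 2, (V1 - disc) / 2  # two feasible voltages

def loss(V2):
    # Resistive loss on the line for a given bus-2 voltage.
    return (V1 - V2) ** 2 / r

# Both voltages satisfy the physics, but the high-voltage solution
# wastes far less power; the generator must supply load plus loss.
print(loss(V2_high), loss(V2_low))
print(p_d + loss(V2_high))  # required generation on the good branch
```

Even here, the feasible set defined by the quadratic constraint is not convex, which is a miniature version of the difficulty discussed below.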

Both congestion control and OPF optimize network flows, both are critical for the operation of their respective networks, both have a long history in research and in practice, and both continue to evolve (understandably slowly) to meet new challenges. Mathematically, the key difference is that congestion control for wireline networks is a *convex* problem, while OPF is a *nonconvex* problem.

This has three important implications.

**Implications of convexity/nonconvexity**

One key insight from optimization theory that has emerged in the last two decades is that convexity is the watershed between what is efficiently computable (in polynomial time) and what is not. This means that, for large enough networks, there is no hope of solving OPF to optimality, unless some hidden convexity structure in OPF is exploited, or distributed algorithms are developed. We will discuss such an approach – solving OPF through convex relaxation – in the next post.

The second implication of convexity is that simple algorithms, such as first-order hill-climbing algorithms, are usually effective. Convexity imposes such a strong structure that even a naïve iterative procedure that, in each iteration, looks at the *local* cost landscape and takes a small step towards reducing the cost (i.e., in some downhill direction, hence the notion of hill-climbing) will eventually arrive at a *global* optimum. This is due to the property that local optimality implies global optimality (the first-order optimality condition is sufficient, not just necessary) for convex problems. For nonconvex problems, on the other hand, such simple algorithms can get stuck in potentially bad locally-optimal points.

The third implication is that simple algorithms tend to be very robust for convex problems. As alluded to earlier, congestion control can be interpreted as exactly such a first-order algorithm carried out by computers and routers on the Internet in real time in a distributed fashion to implicitly solve a global convex optimization problem. Even though the early congestion control algorithms were not designed with an optimization problem in mind, their protocol actions turn out to indeed solve a network utility maximization problem over the Internet. The robustness of first-order optimization algorithms then implies that the dynamic behavior of these congestion control protocols is simple, stable, and predictable. A nonconvex problem, on the other hand, may have multiple equilibrium points (i.e., lots of locally-optimal points), and iterative algorithms for solving it may have complicated dynamics.

So, the take-away from this post is simply that OPF is much more difficult than congestion control, partly because of its nonconvexity. In Part II of this post, we’ll come back and discuss what the source of the nonconvexity is, and how it can be overcome.