In Part I of this post, we saw how a layered architecture transformed the communication network. What is so difficult about a layered architecture for the power network? Let’s again look first at its role in the communication network.
DARPA started a packet network in 1969 with four nodes, at UCLA, UCSB, SRI (Stanford Research Institute), and the University of Utah, that grew into today’s Internet. The early to mid 1990s were when the world at large discovered the Internet. The release of the Mosaic browser in 1993 by the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign, probably played the most visible role in triggering this transition. But the 1990s were also when multiple technologies and infrastructures came together to ready the Internet for prime time. What exactly was the role of layering?
One of the most useful features of layering is that it insulates higher-layer mechanisms from the physics of the transmission medium. All the nonlinearity of signal generation, propagation, and detection in airwaves, copper wire, or optical fiber is hidden in the physical layer of the protocol stack, so that the designers of medium access control, routing, congestion control, search algorithms, social networks, etc., need not worry about this complexity. In fact, every layer only needs to know, and conform to, the standardized interfaces of the layers immediately below and above it, and nothing else. Plug and play, and it will all work (in theory; practice is messier, but let’s sweep that under the rug for now).
This insulation is the result of two major conceptual ideas and their physical implementation. One is layering, as discussed above. The other is digitization. The purpose of a communication network is to transfer information. Once we discovered that information can be represented by zeros and ones, and invented efficient ways to represent, transmit, detect, and control them, the rest more or less fell into place… in retrospect, of course! We owe this to the pioneering contributions of many giants at (and outside) AT&T Bell Labs: Nyquist, Shannon, Bardeen, Brattain, Shockley, and others. [I wonder if anyone has tried to trace the fraction of the modern world’s GDP, or welfare, attributable to all the groundbreaking inventions at Bell Labs. What a world treasure!]
In stark contrast to communication networks, both elements are missing in power networks. First, the purpose of a power network is to transfer energy. What corresponding conceptual idea, and what physical implementation, will bring about the simplification that digitization brought to communication?
Second, a power network is a cyber-physical system, one in which physical laws cannot be designed away. A power network is governed by Kirchhoff’s laws. This physics permeates the entire system: not only must the basic operation of the engineering network, such as the control of generators, the setting of transformers, or the switching of capacitor banks, respect Kirchhoff’s laws, but so must everything above, including the real-time pricing of electricity, the market design for ancillary services, demand response mechanisms, the optimization of charging schedules for electric vehicles, etc.
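To see concretely why Kirchhoff’s laws cannot be designed away, here is a minimal sketch using the standard DC (linearized) power-flow approximation; the three-bus triangle network, the susceptance values, and the injection pattern are made up for illustration. Given the injections at each bus, the flow on every line is fully determined by physics; there is no knob analogous to a packet router’s forwarding table.

```python
import numpy as np

# A minimal DC power-flow sketch (a standard linearization of Kirchhoff's laws).
# Three buses connected in a triangle; all line susceptances are 1.0 per unit.
lines = [(0, 1), (1, 2), (0, 2)]
b = {line: 1.0 for line in lines}
n = 3

# Build the bus susceptance matrix B, so that injections satisfy P = B @ theta.
B = np.zeros((n, n))
for (i, j), bij in b.items():
    B[i, i] += bij
    B[j, j] += bij
    B[i, j] -= bij
    B[j, i] -= bij

# Injections: 1.0 unit generated at bus 0 (slack), 1.0 unit consumed at bus 2.
P = np.array([1.0, 0.0, -1.0])

# Fix the slack angle theta_0 = 0 and solve the reduced system for the rest.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows follow from the voltage angles; we cannot choose them independently.
flows = {(i, j): b[(i, j)] * (theta[i] - theta[j]) for (i, j) in lines}
print(flows)  # flow on (0,2) is 2/3; the remaining 1/3 detours through bus 1
```

Note that the 2/3 versus 1/3 split is dictated entirely by the line susceptances, not by any routing decision; changing how power divides among paths requires changing the physical network (which is precisely what devices like FACTS controllers attempt to do).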
I feel that the most important layering concept for power must allow us to insulate higher-level mechanisms from the physics of power flows. But it is not clear what that can be. If there is to be a breakthrough, my guess is that it will come from hardware. Perhaps FACTS devices will eventually allow us to route power at will, inexpensively and ubiquitously. Perhaps storage will change the game, if large-scale, inexpensive, non-toxic, ubiquitous storage ever becomes a reality. Even then, Kirchhoff’s laws dominate.
Information is virtual and can therefore take different physical forms. Energy is fundamentally physical. Perhaps I am completely unrealistic in hoping that layering can make a power network virtual?