Communication and Power Networks: Flow Optimization (Part I)

I have discussed in a previous post that digitization (the representation of information by zeros and ones, and its physical implementation and manipulation) and layering have allowed us to confine the complexity of physics to the physical layer and insulate high-level functionalities from this complexity, greatly simplifying the design and operation of communication networks.  For instance, routing, congestion control, search, ad markets, and so on do not need to deal with the nonlinearity of an optical fiber or a copper wire; in fact, they don't even know what the underlying physical medium is.

This is not the case for power networks.  

The lack of an analogous concept of digitization in power means that we have been unable to decouple the physics (Kirchhoff's laws) of power flow from high-level functionalities.  For instance, we need to deal with power flows not only in deciding which generators should generate electricity, when, and how much, but also in optimizing network topology, scheduling the charging of electric vehicles, pricing electricity, and mitigating the market power of providers.  That is, while the physics of the transmission medium is confined to a single layer in a cyber network, it permeates the entire infrastructure in a cyber-physical network, and cannot be designed away.
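To make the physics concrete, here is a minimal sketch in standard notation (not taken from the post, and ignoring shunt elements and transformers).  For an AC network with buses N and lines E, write V_i for the complex voltage phasor at bus i, y_ij for the admittance of line (i, j), and s_i for the net complex power injection at bus i.  Kirchhoff's and Ohm's laws then give

$$ s_i \;=\; \sum_{j:\,(i,j)\in E} V_i\,(V_i - V_j)^{*}\, y_{ij}^{*}, \qquad i \in N. $$

These constraints are quadratic in V, and every one of the functions above (scheduling generators, reconfiguring topology, charging vehicles, pricing energy) must respect them.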

How difficult is it to deal with power flows?

This post (and the one that follows) will illustrate some of these challenges by contrasting the problem of congestion control on the Internet with that of optimal power flow (OPF) in electricity networks.
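As a preview, and in standard textbook notation that may differ from what the follow-up post actually uses: congestion control can be viewed as implicitly solving a network utility maximization (NUM) problem

$$ \max_{x \ge 0} \;\; \sum_{i} U_i(x_i) \quad \text{subject to} \quad R x \le c, $$

where x_i is the sending rate of source i, U_i is a concave utility function, R is the routing matrix, and c is the vector of link capacities.  The constraints are linear and the objective concave, so the problem is convex.  A basic OPF problem, in contrast, minimizes generation cost subject to the quadratic power flow equations sketched above, plus voltage and line limits, for example

$$ \min_{V,\, s} \;\; \sum_i c_i(\mathrm{Re}\, s_i) \quad \text{subject to} \quad s_i = \sum_{j:\,(i,j)\in E} V_i (V_i - V_j)^{*} y_{ij}^{*}, \quad \underline{v}_i \le |V_i| \le \overline{v}_i, $$

whose feasible set is nonconvex.  Roughly speaking, the gap between these two problems is the gap this pair of posts explores.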

A report from NSDI

Last week, I attended NSDI for the first time in quite a few years… I only managed to be at the conference for a day-and-a-half, but there was a lot of interesting stuff going on even in just that short time.

For me, it's always stimulating to attend pure systems conferences like NSDI, given the contrast in research style with my own.  For example, there were more than a few papers where, somewhere in the implementation, a quite challenging resource allocation problem came up, and the authors just applied a simple heuristic and moved past it without a second thought.  I, on the other hand, would be distracted for months trying to figure out optimality guarantees and the like.  That is, of course, a lot of fun and sometimes pays off, but it's always good to be reminded that simple heuristics are often good enough…

If you only look at four papers, which should they be?

Well, of course, you should start with the best paper award winner:

The topic of this paper highlights that, even though NSDI is a true systems conference, there were definitely a few papers that took a theoretical/rigorous approach to design.  (Of course our paper did, but there were others too!)

Universal Laws and Architectures (Part III)

This post is the final piece of a discussion about a research program to address challenges surrounding the formalization of the study of "architecture." If you missed it, be sure to start with Part I and Part II, since they provide context for what follows…

The pedagogy and research challenge so far

What is (I claim) universal, and somewhat unavoidably taken for granted in both the doing and the telling of the stories so far, is the enormous hidden complexity in the layered architectures: mostly in what I am calling the respective OSes, but in the layered architectures generally, just as in our everyday use of computers, phones, and the Internet.  The "hourglass" in apps/OS/HW reflects diversity in apps and hardware, and a lack of diversity in OSes.  There is no hourglass in complexity, at least within a single system, and the OSes can be highly complex.  This can be effective as long as that complexity remains largely hidden, though gratuitous complexity can erode robustness and efficiency.  Layered architectures are most effective exactly when they disappear and everything just learns, adapts, evolves, and works, like magic.  Matches are won and antibiotics are survived, and the architecture makes it look easy.  It is this hidden complexity that we must both reverse and forward engineer, with both theory and technology.

Universal Laws and Architectures (Part II)

This post is a continuation of a discussion about a research program to address an essential but (I think) neglected challenge involving "architecture." If you missed it, be sure to start with Part I, since it provides context for what follows…

Basic layered architectures in theory and practice

If architecture is the most persistent and shared organizational structure of a set of systems and/or a single system over time, then the most fundamental theory of architecture is due to Turing, who first formalized splitting computation into layers of software (SW) running on digital hardware (HW).  (I’m disturbed that Turing has been wildly misread and misinterpreted, but I’ll stick with those parts of his legacy that are clear and consistent within engineering, having to do with computation and complexity.  I’ll avoid Turing tests and morphogenesis for now.)

Universal Laws and Architectures (Part I)

Steven's previous posts discussing architectural issues in communication and electricity networks (part I, part II) naturally lead to a discussion of architecture more broadly, which has been a topic of interest for quite a while at Caltech…

In particular, the motivation for this blog post is to start a discussion about a research program to address an essential but (I think) neglected challenge involving "architecture." For me, architecture is the most persistent and shared organizational structure across a set of systems and/or within a single system over time.  But nothing in this subject is clear or resolved.
