I’ve posted the beginnings of what I hope will become an extensive library of videos, papers, notes, and slides exploring in more detail both illustrative case studies and theoretical foundations for the universal laws and architectures that I referred to only superficially in my previous blog posts. For the moment, these are simply posted on Dropbox, so be sure to download them, since viewing them in a browser may only show a preview…
I’m eager to get feedback on any aspects of the material, and all the sources are available for reuse.
In addition to the introductory and overview material, of particular interest might be a recent paper on heart rate variability, one of the most persistent mysteries in all of medicine and biology, which we resolve in a new but accessible way. Tutorial videos are available for download in addition to the paper.
This post is the final piece of a discussion of a research program to address challenges surrounding the formalization of the study of “architecture.” If you missed them, be sure to start with Part I and Part II, since they provide context for what follows…
The pedagogy and research challenge so far
What is (I claim) universal, and (somewhat unavoidably) taken for granted in both the doing and the telling of the stories so far, is the enormous hidden complexity in the layered architectures, mostly in what I am calling the respective OSes but in the layered architectures generally, just as in our use of computers, phones, and the Internet. The “hourglass” in apps/OS/HW reflects diversity in apps and hardware, and a lack of diversity in OSes. There is no hourglass in complexity, at least within a single system: the OSes can be highly complex. This can be effective as long as that complexity remains largely hidden, though gratuitous complexity can erode robustness and efficiency. Layered architectures are most effective exactly when they disappear and everything just learns, adapts, evolves, and works, like magic: matches won and antibiotics survived, with the architecture making it all look easy. It is this hidden complexity that we must both reverse and forward engineer, with both theory and technology.
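The hourglass idea above can be sketched in a few lines of code. This is a minimal, purely illustrative sketch (all names here are hypothetical, not from any real OS or from the posts): many diverse apps and many diverse hardware backends are decoupled by one narrow, shared interface, the “waist,” which is where the hidden complexity would live.

```python
class Hardware:
    """Diverse hardware: each backend moves bytes its own way."""
    def transmit(self, data: bytes) -> bytes:
        raise NotImplementedError

class Ethernet(Hardware):
    def transmit(self, data: bytes) -> bytes:
        return data  # stand-in for a wired link

class Radio(Hardware):
    def transmit(self, data: bytes) -> bytes:
        return data  # stand-in for a wireless link

class OS:
    """The narrow waist: one interface hides all hardware diversity.
    In a real system the hidden complexity (framing, retries,
    scheduling, error handling...) would live here."""
    def __init__(self, hw: Hardware):
        self._hw = hw

    def send(self, message: str) -> str:
        return self._hw.transmit(message.encode()).decode()

# Diverse apps are written once, against the waist only.
def chat_app(os: OS) -> str:
    return os.send("hello")

def sensor_app(os: OS) -> str:
    return os.send("42")

# Any app runs over any hardware; neither needs to know the other.
print(chat_app(OS(Ethernet())))   # hello
print(sensor_app(OS(Radio())))    # 42
```

The point of the shape: apps multiply above the waist and hardware multiplies below it, while the waist itself stays narrow, which is exactly the diversity pattern the hourglass describes.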
This post is a continuation of a discussion about a research program to address an essential but (I think) neglected challenge involving “architecture.” If you missed it, be sure to start with Part I, since it provides context for what follows…
Basic layered architectures in theory and practice
If architecture is the most persistent and shared organizational structure of a set of systems and/or a single system over time, then the most fundamental theory of architecture is due to Turing, who first formalized splitting computation into layers of software (SW) running on digital hardware (HW). (I’m disturbed that Turing has been wildly misread and misinterpreted, but I’ll stick with those parts of his legacy that are clear and consistent within engineering, having to do with computation and complexity. I’ll avoid Turing tests and morphogenesis for now.)
Steven’s previous posts discussing architectural issues in communication and electricity networks (Part I, Part II) naturally lead to a discussion of architecture more broadly, which has been a topic of interest at Caltech for quite a while…
In particular, the motivation for this blog post is to start a discussion about a research program to address an essential but (I think) neglected challenge involving “architecture.” For me, architecture is the most persistent and shared organizational structure across a set of systems and/or within a single system over time. But nothing in this subject is clear or resolved.
In Part I of this post, we saw how a layered architecture has transformed the communication network. What is so difficult about a layered architecture for the power network? Let’s again look first at its role in the communication network.
DARPA started a packet network in 1969 with four nodes, at UCLA, UCSB, SRI (Stanford Research Institute), and the University of Utah, that grew into today’s Internet. The early-to-mid 1990s were when the world at large discovered the Internet. The release of the Mosaic browser in 1993 by the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign, probably played the most visible role in triggering this transition. But the 1990s were also when multiple technologies and infrastructures came together to ready the Internet for prime time. What exactly was the role of layering?
Adam took the courageous dive into the world of blogging about us at Caltech living in “the gap.” I dare not commit to regular contributions as he admirably has, but I have agreed to write a series of posts contrasting R&D for power and communication networks. This is the first in that series, and my first-ever blog post!
The smart grid is in vogue these days, for good reasons. As always, excitement attracts people, ideas, and resources, but, if not well managed, it can create disillusionment that pushes the pendulum back. It is impossible to predict how the current resurgence of interest in power systems R&D will play out, but the confluence of powerful forces will likely (and hopefully) drive dramatic advances in the coming decades. We plan to chat about these in this space over the coming months.