In honor of the upcoming Olympics, I figured I’d write a post highlighting something that JK, Bert, and I came up with in the process of writing our book on heavy tails.
One of the topics that is interwoven throughout the book is a connection between “extremal processes” and heavy tails. In case you’re not familiar with extremal processes, the idea is that the process evolves as the max/min of a sequence of random variables. So, for example, given a sequence of i.i.d. samples X_1, X_2, …, the running maximum M_n = max(X_1, …, X_n) is an extremal process.
Of course, the canonical example of such processes is the evolution of world records. So, it felt like a good time to post about them here…
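To make the world-records picture concrete, here is a minimal simulation sketch (my own illustration, not from the book): we draw i.i.d. “performances” and track the running max, which is exactly the record process. The choice of the exponential distribution for the samples is just an assumption for the demo.

```python
import random

def running_max(samples):
    """Return the extremal (running-max) process M_n = max(X_1, ..., X_n)."""
    records = []
    current = float("-inf")
    for x in samples:
        current = max(current, x)  # the record only changes when it is broken
        records.append(current)
    return records

random.seed(0)
# "Performances" drawn i.i.d.; the world record is their running max.
performances = [random.expovariate(1.0) for _ in range(10)]
world_records = running_max(performances)
assert world_records == sorted(world_records)  # a record never decreases
```

Note that the record process is non-decreasing by construction, and new records become rarer over time, which is what makes its long-run behavior so sensitive to the tail of the underlying distribution.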
After a long hiatus without a proper group outing — this weekend, we got (almost) the whole group together and headed out to Santa Monica for a hike. Bose picked a nice ~8-mile route for us up to Parker Mesa and back, which gave us some nice views of the coast as well as the nearby mountains. It’s always fun to get together outside of the office…especially when we can take advantage of the beautiful January weather here in southern California!
This post is the final piece of a discussion about a research program to address challenges surrounding the formalization of the study of “architecture.” If you missed it, be sure to start with Part I and Part II, since they provide context for what follows…
The pedagogy and research challenge so far
What is (I claim) universal and (somewhat unavoidably) taken for granted in both the doing and telling of the stories so far is the enormous hidden complexity in the layered architectures — mostly in what I am calling the respective OSes, but in the layered architectures generally, just as it is in our everyday use of computers, phones, and the Internet. The “hourglass” in apps/OS/HW reflects diversity in apps and hardware, and the lack of diversity in OSes. There is no hourglass in complexity, at least within a single system, and the OSes can be highly complex. This can be effective as long as that complexity is largely hidden, though gratuitous complexity can erode robustness and efficiency. Layered architectures are most effective exactly when they disappear and everything just learns, adapts, evolves, and works, like magic. Matches get won and antibiotics get survived, and the architecture makes it look easy. It is this hidden complexity that we must both reverse and forward engineer, with both theory and technology.
Over the last few years, I’ve gotten very interested in issues surrounding the incorporation of renewable energy into IT and, more generally, the smart grid. One of the issues that has particularly grabbed my attention is that of “market power.”
Now, market power is typically one of those fuzzy concepts that academics like to ignore — often with a phrase such as “we assume that agents are price-takers.” I’ve done this plenty of times myself, and often, this is an okay way to get insight into a problem. But, as I’ve gotten involved in electricity markets, it has become more and more clear that you can’t get away with ignoring market power issues in this context.
Unfortunately, quantifying (and even defining) market power is a tricky thing — and if done badly, it can lead to damaging regulatory problems. But, on the other hand, if it is ignored, the problems can be equally bad.
This post is a continuation of a discussion about a research program to address an essential but (I think) neglected challenge involving “architecture.” If you missed it, be sure to start with Part I, since it provides context for what follows…
Basic layered architectures in theory and practice
If architecture is the most persistent and shared organizational structure of a set of systems and/or a single system over time, then the most fundamental theory of architecture is due to Turing, who first formalized splitting computation into layers of software (SW) running on digital hardware (HW). (I’m disturbed that Turing has been wildly misread and misinterpreted, but I’ll stick with those parts of his legacy that are clear and consistent within engineering, having to do with computation and complexity. I’ll avoid Turing tests and morphogenesis for now.)
I’m very excited to announce that Caltech will introduce a new graduate degree next year: a PhD in Computing and Mathematical Sciences (CMS).
While we didn’t get the approval in time to advertise it before students applied this year, I cannot resist mentioning it right now, since I hope that some of the students that we admit to other programs at Caltech this year will choose to switch over and be part of it…
In Part I and Part II of this post, I went over the conspiracy and catastrophe principles informally and formally… But, since the book we’re writing is on heavy tails, I figured I’d dwell a little longer on the catastrophe principle before moving on. In particular, I still have to get to the third part of the title: “subexponential distributions.”