I’d like to announce a postdoc opportunity at Caltech for those on the energy side of things. The program is run by the Resnick Institute, which is the overarching center for energy research of all forms on campus. It includes the things that we (Steven, Mani, me, etc.) do in power systems, as well as lots of other activities across materials, chemistry, physics, aeronautics, etc. So, it’s a great place for interdisciplinary work.
Here’s the blurb about the postdoc fellowship:
About the Resnick Sustainability Institute Post-Doctoral Fellowship: The Resnick fellows will have support for up to two years to work on creative, cross-catalytic research that complements the existing work of the Caltech faculty, or that creates new research directions within the mission areas of the Resnick Sustainability Institute. Eligible candidates will have completed their PhD within five years of the start of the appointment, and should have secured a commitment from one or more Caltech faculty members to serve as a mentor and provide office/lab space for the length of the fellowship. Candidates can come from any country, provided they are proficient in English. Applications consisting of a research proposal, cover letter, recommendations and CV can be submitted through our website: http://resnick.caltech.edu/fellowships-apply.php. The fellowship will provide an annual salary of $65,000 plus benefits, $6,000/year in research budget, and relocation allowance of $3,000. Any questions can be directed to email@example.com.
Note that this is not the only postdoc program available for folks who want to join RSRG. We also look for postdocs through the CMI program, and that call will come out later in the fall. Applications for CMI tend to be due in December.
A bit of news on the data center front, for those who may have missed it: Facebook recently announced the deployment of a new power-efficient load balancer called “Autoscale.” Here’s their blog post about it.
Basically, the quick and dirty summary of the design is this: adapt the number of active servers so that it is proportional to the workload, and adjust the load balancing to keep each active server “busy enough,” so that the system avoids ending up with lots of very lightly loaded servers.
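To make the mechanism concrete, here is a minimal sketch of an Autoscale-style policy. All of the names, parameters, and thresholds below are illustrative assumptions on my part; Facebook has not published the implementation at this level of detail.

```python
import math

# Hypothetical sketch of an Autoscale-style policy. The capacity numbers,
# headroom, and busy threshold are made-up assumptions for illustration.

def servers_needed(workload_rps, capacity_per_server, headroom=0.25):
    """Keep the active pool proportional to the workload, plus some headroom."""
    return max(1, math.ceil(workload_rps * (1 + headroom) / capacity_per_server))

def route(active_servers, loads, busy_target=0.8):
    """Pack requests onto servers below the busy target, so load concentrates
    on a subset of servers instead of spreading thinly across all of them."""
    for s in active_servers:
        if loads[s] < busy_target:
            return s
    # Everyone is at the target: fall back to the least-loaded active server.
    return min(active_servers, key=lambda s: loads[s])

# Example: 800 req/s at 100 req/s per server -> 10 active servers.
print(servers_needed(800, 100))  # -> 10
```

The servers left out of the active pool are exactly the ones the blog post describes as candidates for idling or batch processing.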
So, the ideas are very related to what’s been going on in academia over the last few years. Some of the ideas are likely inspired by the work of Anshul Gandhi, Mor Harchol-Balter, et al. (who have been chatting with Facebook over the past few years), and the architecture is actually quite similar to the “Net Zero Data Center Architecture” developed by HP (which incorporated some of our work, e.g., these papers, joint with Minghong Lin, who now works with the infrastructure team at Facebook).
While Facebook isn’t the first tech company to release something like this, it’s always nice to see it happen. And, it will give me more ammo to use when chatting with people about the feasibility of this sort of design. It is amazing to me that I still get comments from folks about how “data center operators don’t care about energy”… So, to counter that view, here are some highlights from the post:
“Improving energy efficiency and reducing environmental impact as we scale is a top priority for our data center teams.”
“during low-workload hours, especially around midnight, overall CPU utilization is not as efficient as we’d like. […] If the overall workload is low (like at around midnight), the load balancer will use only a subset of servers. Other servers can be left running idle or be used for batch-processing workloads.”
Anyway, congrats to Facebook for taking the plunge. I hope that I hear about many other companies doing the same in the coming years!
The typical story surrounding data centers and energy is an extremely negative one: Data centers are energy hogs. This message is pervasive in the media, and it certainly rings true. However, we have come a long way in the last decade, and though we certainly still need to “get our house in order” by improving things further, the most advanced data centers are quite energy-efficient at this point. (Note that we’ve done a lot of work in this area at Caltech, and, thanks to HP, we are certainly glad to see it moving into industry deployments.)
But, the view of data centers as energy hogs is too simplistic. Yes, they use a lot of energy, but energy usage is not a bad thing in and of itself. In the case of data centers, the energy used typically displaces greater energy usage elsewhere. In particular, moving things to the cloud is most often a big win in terms of energy usage…
More importantly, though, the goal of this post is to highlight that, in fact, data centers can be a huge benefit in terms of integrating renewable energy into the grid, and thus play a crucial role in improving the sustainability of our energy landscape.
In particular, in my mind, a powerful alternative view is that data centers are batteries. That is, a key consequence of energy efficiency improvements in data centers is that their electricity demands are very flexible. They can shed 10%, 20%, even 30% of their electricity usage in as little as 10 minutes by doing things such as precooling, adjusting the temperature, demand shifting, quality degradation, geographical load balancing, etc. These techniques have all been tested at this point in industry data centers, and can be done with almost no performance impact for interactive workloads!
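As a back-of-the-envelope illustration of how such a shed might be assembled from the techniques above, here is a toy sketch. The shed fractions and response times are made-up assumptions, not measured values; the point is only that stacking several modest knobs reaches the 10–30% range quoted above.

```python
# Toy demand-response planner for a data center. All numbers below are
# illustrative assumptions, not measurements from any real facility.

techniques = [
    # (name, fraction of load it can shed, minutes to take effect)
    ("defer batch workloads", 0.15, 2),
    ("raise cooling setpoint", 0.05, 5),
    ("geographical load balancing", 0.10, 8),
    ("precooling / thermal storage", 0.10, 10),
]

def plan_shed(target_fraction, deadline_minutes):
    """Greedily pick the fastest-acting techniques that fit the deadline
    until the target shed fraction is covered."""
    chosen, total = [], 0.0
    for name, frac, mins in sorted(techniques, key=lambda t: t[2]):
        if mins <= deadline_minutes and total < target_fraction:
            chosen.append(name)
            total += frac
    return chosen, total

# Can we shed 20% of load within a 10-minute window?
plan, shed = plan_shed(0.20, 10)
print(round(shed, 2), plan)
```

Under these assumed numbers, deferring batch work plus a cooling adjustment already covers a 20% shed inside the 10-minute window, which is the flavor of flexibility that makes a data center look like a battery to the grid.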
In Part I of this post, I explained the idea of reverse and forward engineering, applied to TCP congestion control. Here, I will describe how forward engineering can help the design of ubiquitous, continuously acting, and distributed algorithms for load-side participation in frequency control in power networks. One of the key differences is that, whereas on the Internet both the TCP dynamics and the router dynamics can be designed to obtain a feedback system that is stable and efficient, a power network has its own physical dynamics with which our active control must interact.
This blog post will contrast another interesting aspect of communication and power networks: designing distributed control through optimization. This point of view has been successfully applied to understanding and designing TCP (Transmission Control Protocol) congestion control algorithms over the last decade and a half, and I believe that it can be equally useful for thinking about some of the feedback control problems in power networks, e.g., frequency regulation.
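To give a flavor of this optimization view, here is a toy sketch of the classic network utility maximization interpretation of congestion control: sources and links jointly run a distributed gradient algorithm, with links adjusting a “price” (congestion signal) and sources best-responding to it. The topology (one link, two sources), the log utility, and the step size are all illustrative assumptions.

```python
# Toy sketch of congestion control as distributed optimization:
# maximize sum_i U(x_i) subject to sum_i x_i <= c, with U(x) = log(x)
# (proportional fairness). Topology and step size are illustrative.

c = 1.0          # link capacity
gamma = 0.1      # link price step size
p = 1.0          # link "price" (queueing delay / loss in real TCP)
x = [0.5, 0.5]   # source sending rates

for _ in range(2000):
    # Sources: maximize U(x) - p*x, which for U = log gives x = 1/p.
    q = max(p, 1e-6)
    x = [1.0 / q for _ in x]
    # Link: raise the price if demand exceeds capacity, lower it otherwise.
    p = max(0.0, p + gamma * (sum(x) - c))

# At equilibrium, the two sources split the capacity equally: x_i = c/2.
print(round(x[0], 2), round(x[1], 2))  # -> 0.5 0.5
```

The punchline of the reverse/forward engineering story is that deployed TCP variants behave like the source update above for particular utility functions, and conversely that picking the utility and the price dynamics first lets one *design* an algorithm with a provable equilibrium.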
Even though this simple and elegant theory does not account for many important details that an algorithm must deal with in a real network, it has been successfully put to practice. Any theory-based design method can only provide the core of an algorithm, around which many important enhancements must be developed to create a deployable product. The most important value of a theory is to provide a framework to understand issues, clarify ideas, and suggest directions, often leading to a new opportunity or a simpler, more robust and higher performing design.
In Part I of this post, I will briefly review the high-level idea using TCP congestion control as a concrete example. I will call this design approach “forward engineering,” for reasons that will become clear later. In Part II, I will focus on power: how frequency regulation is done today, the new opportunities that are in the future, and how forward engineering can help capture these new opportunities.
Germany has been ahead of the curve in terms of pushing the integration of renewable generation. Its so-called “energy revolution,” or Energiewende, has been both heralded and criticized over the years. In my opinion, the impacts of Energiewende on the energy industry have to a large extent been positive, since the investment has served to motivate and fund a lot of technological advances, especially for solar. However, there have certainly been some big mistakes over the years which, in a few cases, have threatened to hurt investment in renewables in other countries.
Well, June is conference season for me, so despite a new baby at home I went off on another trip this week — sorry honey! This time it was ACM Sigmetrics in Austin, where I helped to organize the GreenMetrics workshop, and then presented one of our group’s three papers on the first day of the main conference.