It seems like I’ve been waiting forever to make this post! Back in the fall, I helped to organize an Alumni College at Caltech centered around the theme of “CS+X”. Now, I’m very excited to announce that the videos from the event are up!
What is an Alumni College, you ask? Well, instead of a homecoming game or something like that, we get alumni back to Caltech by promising a day of research talks, really things more like TED talks! So, Alumni College focuses on a different theme each year, and then does a day of provocative talks on that topic. This year the theme was “Disrupting Science and Engineering with Computational Thinking,” i.e., the disruptive power of CS+X.
As I’ve written about before, we view “CS+X” as what makes Caltech’s approach to computer science so distinctive compared to other schools. We pride ourselves on inventing fields and then leaving them for others once they’re popular so that we can invent the next field. So, “seeding fields and then ceding them”…
In any case, the Alumni College was a day filled with talks on CS+X from researchers at Caltech who are leading new fields… We covered CS+Astronomy, CS+Physics, CS+Biology, CS+Economics, CS+Chemistry, CS+Energy, and so on…
You can watch all of them on YouTube here. Enjoy!
One of the great new NSF programs in recent years is the “Algorithms in the Field” program, a joint initiative from the CCF, CNS, and IIS divisions in CISE. Its goal is almost a direct match with what I try to do with my research: it “encourages closer collaboration between (i) theoretical computer science researchers [..] and (ii) other computing and information researchers [..] very broadly construed”. The projects it funds are meant to push the boundaries of theoretical tools and apply them in an application domain.
Of course this is perfectly suited to what we do in RSRG at Caltech! We missed the first year of the call due to bad timing, but we submitted this year and I’m happy to say it was funded (over the summer when I wasn’t blogging)!
The project is joint with Steven Low, Venkat Chandrasekaran, and Yisong Yue and has the (somewhat generic) title “Algorithmic Challenges in Smart Grids: Control, Optimization, and Learning.”
For those who are curious, here’s the quick and dirty summary of the goal… taken directly from the proposal.
Data centers are where the Internet and cloud services live, and so they have been getting lots of public attention in recent years. If you read technology news or research papers, it’s not uncommon to see IT giants like Google and Facebook publicly discuss and share the designs of the mega-scale data centers they operate. But another important type of data center, the multi-tenant data center (commonly called a “colocation” or “colo”), has been largely hidden from the public and rarely discussed (at least in research papers), although it is very common in practice and located almost everywhere, from Silicon Valley to the gambling capital, Las Vegas.
Unlike a Google-type data center, where the operator manages both the IT equipment and the facility, a multi-tenant data center is a shared facility in which multiple tenants house their own servers and the data center operator is mainly responsible for facility support (power, cooling, and space). Although the boundary is blurring, multi-tenant data centers can generally be classified as either wholesale or retail: wholesale data centers (like Digital Realty) primarily serve large tenants, each with a power demand of 500 kW or more, while retail data centers (like Equinix) mostly target tenants with smaller demands.
You hear a lot about Bitcoin these days — how it is or isn’t the future of currency… In the middle of such a discussion recently, the topic of energy usage came up: Suppose Bitcoin did take over — what would the sustainability impact be?
It’s certainly a complicated question, and I don’t think I have a good answer to it yet. But a first-order question along the way is: how much energy is required per Bitcoin transaction?
There’s a nice analysis of this over at Motherboard, and the answer it comes to is this: a single Bitcoin transaction uses enough electricity to power about 1.5 American households for a day! By comparison, a Visa transaction requires on the order of the electricity needed to power 0.0003 households for a day…
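To put those two figures side by side, here’s the back-of-the-envelope ratio. The per-household number (~30 kWh/day for an average US household) is my own rough assumption, not from the article:

```python
# Back-of-the-envelope comparison of the article's per-transaction figures.
# Assumption (mine, not the article's): an average US household uses ~30 kWh/day.
KWH_PER_HOUSEHOLD_DAY = 30.0

bitcoin_households = 1.5    # household-days of electricity per Bitcoin transaction
visa_households = 0.0003    # household-days per Visa transaction

bitcoin_kwh = bitcoin_households * KWH_PER_HOUSEHOLD_DAY  # ~45 kWh
visa_kwh = visa_households * KWH_PER_HOUSEHOLD_DAY        # ~0.009 kWh

ratio = bitcoin_kwh / visa_kwh  # the household numbers cancel: 1.5 / 0.0003
print(f"Bitcoin: ~{bitcoin_kwh:.0f} kWh, Visa: ~{visa_kwh:.3f} kWh, ratio ~{ratio:.0f}x")
```

So, whatever per-household figure you plug in, the article’s numbers imply roughly a 5000x gap per transaction.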
That’s certainly a much bigger gap than I would’ve guessed, and it highlights that a lot of energy-efficiency improvement would be needed if Bitcoin were to grow dramatically! Of course, it’s not clear whether that is even possible, since the security of the system depends on the difficulty of the computations (and thus on large energy consumption). So, in a sense, the significant energy usage is part of the protection against attacks… I guess there are probably some interesting research questions here.
See the full article for a discussion of how they arrived at these numbers. It’s a back-of-the-envelope calculation, but a pretty reasonable estimate in my mind.
Climate and energy are critical, massive, and complex issues. Whatever we talk about will be just a small piece of the overall puzzle and, by definition, unbalanced. This post collects some tidbits that point to an underlying trend, focusing on the most commonly asked question: “is there a business case for the smart grid?” The trend suggests an indispensable role for the distribution utility of the future.
Accelerating pace of DER (distributed energy resources)
I’m pleasantly surprised by the NYT report today (Dec 1, 2014) that one of the world’s largest investor-owned electric utilities, E.On of Germany, has decided to split itself in two: one company focusing on the less (!) risky business of renewables and distribution, and the other on the riskier conventional generation business of coal, nuclear, and natural gas. “We are seeing the emergence of two distinct energy worlds,” E.On’s CEO said. In case you think this is an irrational, impulsive move, a financial analyst estimated that of E.On’s 9.3 billion euros in pretax profits in 2013, more than half came from the greener, more predictable businesses. The utility industry has entered a period of disequilibrium in recent years, contemplating how best to leverage emerging technologies and evolve its business models (we will return to this point below). The initial response to E.On’s decision: its share price rose about 5% today. E.On said it will present a plan in 2016 to spin off most of the unit that currently holds the conventional generation business.
These last few weeks, the news has been full of lots of seemingly conflicting messages about renewables, so I figured it was worth talking about things a little bit in a post.
First, the good news. The old conventional wisdom that solar can never match prices with conventional generation is just plain false at this point. Deutsche Bank recently released a report highlighting that rooftop solar will reach grid parity (i.e., be as cheap as, or cheaper than, the average electricity bill) in 47 US states in 2016, and that it has already reached grid parity in states accounting for more than 90% of US electricity consumption. Now, 2016 is a pivotal year because those numbers assume the 30% subsidy for solar that is in place today, which goes away in 2016. However, even if the subsidy drops to 10%, parity will be maintained in 36 states, as this plot from the report shows (the x-axis is electricity price per kWh minus the cost of solar per kWh, so positive means savings from solar):
Of course, that still includes a subsidy, so solar costs aren’t matching those of other sources yet. However, that will happen soon if current trends continue for even a few years. Here’s one of my favorite plots in that regard (from a World Bank analysis). I still think it’s crazy how quickly the technology advancements and economics are working in favor of solar.
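The parity comparison in that plot boils down to simple per-kWh arithmetic. Here’s a sketch of the calculation; the prices and subsidy levels below are made-up illustrative numbers, not the report’s state-level data:

```python
def solar_savings_per_kwh(retail_price, solar_cost, subsidy):
    """Savings per kWh from rooftop solar: the retail electricity price
    minus the post-subsidy cost of solar. Positive means solar is at parity.
    Illustrative sketch of the plot's x-axis, not the report's methodology."""
    return retail_price - solar_cost * (1 - subsidy)

# Made-up example numbers, in $/kWh:
retail = 0.12   # average retail electricity price
solar = 0.15    # unsubsidized cost of rooftop solar

at_30 = solar_savings_per_kwh(retail, solar, subsidy=0.30)  # 0.12 - 0.105 = +0.015
at_10 = solar_savings_per_kwh(retail, solar, subsidy=0.10)  # 0.12 - 0.135 = -0.015
print(at_30 > 0, at_10 > 0)  # parity holds at 30% subsidy, is lost at 10%
```

This is why the subsidy step-down matters: a state sitting just above zero on the x-axis at a 30% subsidy can fall below parity at 10%.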
Now for the bad news. An interesting analysis emerged from some of the folks behind Google’s RE&lt;C initiative, which went looking for breakthrough approaches that could make renewable energy cheaper than coal. Their conclusion: today’s renewable energy technologies are not enough to solve climate change, even under best-case forecasts. Thus, in their words, even if RE&lt;C had reached its goal, that goal was “not ambitious enough to reverse climate change.” “To reverse climate change, our society requires something beyond today’s renewable energy technologies.” So, in some sense, we’re too late. But engineers and inventors can do amazing things, so who knows what breakthroughs will come over the next 20 years. For example, one approach that would be a game changer for the models typically used would be a way to pull CO2 from the atmosphere and store it…
Given the push toward renewable energy that is happening these days, it’s natural for folks to start to demonize fossil fuels like coal, gas, etc. I think this is a shame. Cheap energy is a really great thing, and fossil fuels are really good at providing it. I’ve posted before about how important energy usage is for the health and advancement of humankind. So, we have to be careful that a push toward renewable energy sources does not come at the expense of the health and welfare of the population: where cheap fossil fuels can save lives, we should use them without guilt! It’s disappointing when folks like Bill Gates take flak for decisions like this…
But, at the same time, there are also clear reasons to prefer renewable energy to fossil fuels in the long run. Typically, people jump to environmental concerns as the hammer to motivate this. Environmental issues are certainly one strong motivating factor, but for this post, I want to try to make the argument a different way.
In particular, even if we completely ignore all the points about pollution and environmental side-effects (which, of course, we shouldn’t do as a society), I think the value of energy usage for society makes a clear argument for investing in the development of renewables and, eventually, converting as much as possible to renewable energy.
I’d like to announce a postdoc opportunity at Caltech for those on the energy-side of things. The program is run by the Resnick Institute, which is the overarching center for energy research of all forms on campus. It includes the things that we (Steven, Mani, me, etc) do in power systems, as well as lots of other activities across materials, chemistry, physics, aeronautics, etc. So, it’s a great place for interdisciplinary work.
Here’s the blurb about the postdoc fellowship:
About the Resnick Sustainability Institute Post-Doctoral Fellowship: The Resnick fellows will have support for up to two years to work on creative, cross-catalytic research that complements the existing work of the Caltech faculty, or that creates new research directions within the mission areas of the Resnick Sustainability Institute. Eligible candidates will have completed their PhD within five years of the start of the appointment, and should have secured a commitment from one or more Caltech faculty members to serve as a mentor and provide office/lab space for the length of the fellowship. Candidates can come from any country, provided they are proficient in English. Applications consisting of a research proposal, cover letter, recommendations and CV can be submitted through our website: http://resnick.caltech.edu/fellowships-apply.php. The fellowship will provide an annual salary of $65,000 plus benefits, $6,000/year in research budget, and relocation allowance of $3,000. Any questions can be directed to email@example.com.
Note that this is not the only postdoc program available for folks that want to join RSRG. We also look for postdocs through the CMI program, and that call will come out later in the fall. Applications for CMI tend to be due in December.
A bit of news on the data center front, for those who may have missed it: Facebook recently announced the deployment of a new power-efficient load balancer called “Autoscale.” Here’s their blog post about it.
Basically, the quick and dirty summary of the design is this: adapt the number of active servers so that it’s proportional to the workload, and adjust the load balancing to keep the active servers “busy enough,” so that you don’t end up in a situation where lots of servers are very lightly loaded.
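A minimal sketch of that idea (my own illustrative rendering, not Facebook’s actual Autoscale logic): keep just enough servers active to serve the current load at a target per-server utilization, and route requests only to that active pool.

```python
import math

def plan_active_servers(request_rate, per_server_capacity,
                        target_utilization=0.75, min_servers=1):
    """Number of servers to keep active so that each runs 'busy enough'.
    Illustrative sketch only; the parameter names and the 75% target are
    my assumptions, not Facebook's published design."""
    needed = request_rate / (per_server_capacity * target_utilization)
    return max(min_servers, math.ceil(needed))

# At midnight the load drops, so the active pool shrinks and the remaining
# servers can idle or be given batch-processing work:
peak = plan_active_servers(request_rate=9000, per_server_capacity=100)      # 120
midnight = plan_active_servers(request_rate=1500, per_server_capacity=100)  # 20
print(peak, midnight)
```

The key design point is that the load balancer concentrates traffic rather than spreading it evenly: spreading leaves every server lightly loaded (and energy-inefficient), while concentrating frees whole servers to idle.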
So, the ideas are very related to what’s been going on in academia over the last few years. Some of the ideas are likely inspired by the work of Anshul Gandhi, Mor Harchol-Balter, et al. (who have been chatting with Facebook over the past few years), and it’s actually quite similar in architecture to the “Net Zero Data Center Architecture” developed by HP (which incorporated some of our work, e.g. these papers, joint with Minghong Lin, who now works with the infrastructure team at Facebook).
While Facebook isn’t the first tech company to release something like this, it’s always nice to see it happen. And it will give me more ammo to use when chatting with people about the feasibility of this sort of design. It amazes me that I still get comments from folks about how “data center operators don’t care about energy”… So, to counter that view, here are some highlights from the post:
“Improving energy efficiency and reducing environmental impact as we scale is a top priority for our data center teams.”
“during low-workload hours, especially around midnight, overall CPU utilization is not as efficient as we’d like. […] If the overall workload is low (like at around midnight), the load balancer will use only a subset of servers. Other servers can be left running idle or be used for batch-processing workloads.”
Anyway, congrats to Facebook for taking the plunge. I hope that I hear about many other companies doing the same in the coming years!
The typical story surrounding data centers and energy is an extremely negative one: Data centers are energy hogs. This message is pervasive in the media, and it certainly rings true. However, we have come a long way in the last decade, and though we certainly still need to “get our house in order” by improving things further, the most advanced data centers are quite energy-efficient at this point. (Note that we’ve done a lot of work in this area at Caltech and, thanks to HP, we are certainly glad to see it moving into industry deployments.)
But, the view of data centers as energy hogs is too simplistic. Yes, they use a lot of energy, but energy usage is not a bad thing in and of itself. In the case of data centers, energy usage typically leads to energy savings. In particular, moving things to the cloud is most often a big win in terms of energy usage…
More importantly, though, the goal of this post is to highlight that, in fact, data centers can be a huge benefit in terms of integrating renewable energy into the grid, and thus play a crucial role in improving the sustainability of our energy landscape.
In particular, in my mind, a powerful alternative view is that data centers are batteries. That is, a key consequence of energy efficiency improvements in data centers is that their electricity demands are very flexible. They can shed 10%, 20%, even 30% of their electricity usage in as little as 10 minutes by doing things such as precooling, adjusting the temperature, demand shifting, quality degradation, geographical load balancing, etc. These techniques have all been tested at this point in industry data centers, and can be done with almost no performance impact for interactive workloads!
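As a toy illustration of the “data centers are batteries” point (all of the facility numbers below are made up, only the 10-30% shed range comes from the discussion above):

```python
def shed_capacity_mw(facility_mw, shed_fraction):
    """Electricity demand a data center can drop on request, in MW.
    Toy calculation; the 10-30% range reflects techniques like precooling,
    temperature adjustment, demand shifting, and geographical load balancing."""
    return facility_mw * shed_fraction

facility = 20.0  # MW; a made-up example facility
for frac in (0.10, 0.20, 0.30):
    print(f"shed {frac:.0%}: {shed_capacity_mw(facility, frac):.1f} MW in ~10 min")
```

Viewed this way, a single large facility offers several MW of fast-responding flexibility, which is exactly the kind of resource the grid pays batteries and peaker plants to provide.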