June is a month that is dominated by conference travel for me, with three of my favorite conferences all typically happening back-to-back. The third (and final) of these this year was Stochastic Networks. The little one at home prevented me from being able to join for the whole conference, but I was happy to be able to come for the first two days.
Stochastic Networks is an applied probability conference of a type that doesn’t happen often enough in computer science. Basically, it consists of 20-25 invited hour-long talks over a week. The talks are given mostly by senior folks, with a few junior folks thrown in, and are of extremely high quality. And, if you do the math, that works out to an average of 4-5 talks per day, which means the days leave a lot of time for conversation and interaction. Because of the quality of the speakers, lots of folks attend even if they aren’t presenting (which makes for an audience of 100+ people, I’d guess), so it becomes a very productive event, both in terms of working with current collaborators and in terms of starting up new projects.
I was one of the speakers this year, and took the opportunity to try to introduce the stochastic networks crowd to electricity markets. It was a bit of an experiment, because I figured that most of the audience wouldn’t have a background in power systems; so, my goal was really to highlight that the tools and models they are already familiar with could be quite applicable and powerful in this important new area.
To do this, I focused on two classic OR/OM type models (the newsvendor problem and Cournot competition) and tried to show how important insights about electricity markets can come from studying these well-known models. You can judge for yourself how successful I was if you take a look at my slides…
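For readers who haven’t seen the first of these models: the textbook newsvendor problem picks an order quantity under uncertain demand by balancing the cost of ordering too little against the cost of ordering too much, via the critical fractile F(q*) = c_u / (c_u + c_o). A minimal sketch of that generic solution (the parameter values are made up for illustration; this is the standard textbook version, not the electricity-market variant from the talk):

```python
# Classic newsvendor solution for normally distributed demand.
# Numbers below are illustrative, not from the talk or slides.
from statistics import NormalDist

def newsvendor_quantity(mu, sigma, underage_cost, overage_cost):
    """Order quantity q* satisfying F(q*) = c_u / (c_u + c_o)."""
    critical_fractile = underage_cost / (underage_cost + overage_cost)
    return NormalDist(mu, sigma).inv_cdf(critical_fractile)

# Demand ~ N(100, 20); a lost sale costs 3, an unsold unit costs 1.
# The critical fractile is 0.75, so the optimal order sits above the mean.
q = newsvendor_quantity(100, 20, underage_cost=3, overage_cost=1)
print(f"order quantity: {q:.1f}")
```

The point of the model in this context is that the stocking decision under demand uncertainty is structurally similar to procurement decisions an electricity market operator faces.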
There were a number of very interesting talks on the first two days of the conference, including Neil Walton’s talk, in which he summarized the work that recently won the Best Paper award at SIGMETRICS. Neil is really one of the young stars in this area; he has an amazing knack for returning to classic models that have been studied to death and pulling out something truly novel (both creative and impactful).
But, I think the talk that I’ll write about here is Ilze Ziedins’ talk on selfish routing. Having seen so many talks on this topic over the years in the AGT community, I was expecting this talk to largely retread ground I already knew; instead, it highlighted how skewed a view of the area I had from seeing primarily the AGT work on the topic. She focused on issues that can only be studied by modeling the stochastic state of the system, something the routing games literature in the AGT community hardly ever delves into.
One of the cutest pieces of her talk focused on the Downs-Thomson paradox, which I must admit I hadn’t come across before she explained it. It is similar in spirit to Braess’ paradox, which has become a cornerstone of the AGT community, but it’s different — it deals with adding capacity instead of adding a link to a network.
In a nutshell, the Downs-Thomson paradox is that, in a world with both public transportation (e.g., buses/trains) and private transportation (cars), increasing private road capacity can make overall congestion worse, because commuters shift from public transportation (which has economies of scale) to private transportation (which creates negative congestion externalities). This is a great thing to keep in mind when looking at the history of public/private transportation investments in LA as opposed to the Netherlands, for example.
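The mechanism is easy to see in a toy equilibrium model (all parameters below are made up for illustration and are not from Ilze’s talk): road time increases with the number of drivers, transit time decreases with ridership because more riders mean more frequent service, and commuters split until both modes take equally long. Expanding road capacity then moves the equilibrium along the transit curve, so everyone’s commute gets worse:

```python
# Toy illustration of the Downs-Thomson paradox. All parameters are
# invented for the sketch; none come from the talk.

N = 1000.0  # total commuters

def road_time(drivers, capacity):
    """Car travel time grows with traffic per unit of road capacity."""
    return 5.0 + 0.1 * drivers / capacity

def transit_time(riders):
    """Transit time falls with ridership: more riders -> more frequent service."""
    return 25.0 + 2000.0 / riders

def equilibrium_drivers(capacity, lo=1.0, hi=500.0):
    """Bisect for the stable split where both modes take equally long.

    (The bracket [1, 500] isolates the lower, stable crossing; the model
    also has an unstable high-traffic crossing that we ignore.)
    """
    for _ in range(100):
        mid = (lo + hi) / 2
        if road_time(mid, capacity) < transit_time(N - mid):
            lo = mid  # road is faster -> more commuters switch to driving
        else:
            hi = mid
    return (lo + hi) / 2

for cap in (1.0, 2.0):
    x = equilibrium_drivers(cap)
    print(f"capacity {cap:.0f}: {x:.0f} drive, everyone commutes "
          f"{road_time(x, cap):.1f} min")
```

With these numbers, doubling road capacity pulls commuters off transit, thins out the service, and raises the common equilibrium travel time (from about 27.6 to about 28.8 minutes): more capacity, worse commute for everyone.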
Ilze’s talk looked at the role of information revelation with respect to this paradox. In particular, can revealing more information about the congestion in the system mitigate the damaging behavior? The short answer she gave is “yes.” In particular, if agents can do state-dependent routing and have accurate information about current states of the routes, then the paradox mostly disappears…at least in simple networks.