Last week, I had the pleasure of giving a talk at a workshop for the new Initiative for Mathematical Sciences and Engineering at UIUC. It was a very interesting visit, especially given how similar the initiative's perspective is to the Computing and Mathematical Sciences PhD program that we are starting next year at Caltech. I really enjoyed getting some perspective on the challenges they've faced with such a broad interdisciplinary program…
The workshop itself was quite interesting, too (and had a catchy name: Hot TIME, which stands for "Hot Topics at the Interface of Mathematics and Engineering"). The breadth of the initiative showed in the breadth of the talks and people at the workshop, including everything from Networking, Economics, & Learning to Applied Geometry & Topology.
Given the breadth of the workshop, many very interdisciplinary projects were described. One that stuck out for me was Ruth Williams' talk on the first day. I know Ruth's work quite well, as she is an extremely influential applied probabilist and queueing theorist, so it's always great to hear her talks… But, one thing that has excited me about her work recently (and that she highlighted in her talk here) is that she has really found important problems within biology for which queueing-theoretic tools are a perfect fit. This is particularly exciting since queueing as a discipline has focused nearly entirely on service and information systems for the last thirty years.
I won’t try to explain the biology, since it’s beyond me, but I can say that at a high level, she’s been able to show that queueing models are quite accurate predictors in the context of enzymatic processing networks. And, further, it turns out that the models that are most appropriate are variations of the same multi-class queueing networks that many of us use in the context of data centers and call centers!
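To give a rough feel for the connection (this is my own toy sketch, not a model from her paper), the "coupled degradation" setting can be caricatured as two substrate species arriving at random and competing for a single shared enzyme, i.e., a multi-class single-server queue. The jump-chain simulation below (time increments ignored for simplicity) tracks the counts of the two species; the shared enzyme is what couples their dynamics:

```python
import random

def simulate_shared_enzyme(lam1=0.4, lam2=0.4, mu=1.0, steps=10000, seed=0):
    """Jump-chain simulation of two substrate species competing for one
    enzyme -- a caricature of a multi-class single-server queue.
    Returns the trajectory of (count of species 1, count of species 2)."""
    rng = random.Random(seed)
    n1 = n2 = 0
    traj = []
    for _ in range(steps):
        # event rates: an arrival of each species, plus one shared
        # degradation event that is only possible when substrate is present
        rates = [lam1, lam2, mu if n1 + n2 > 0 else 0.0]
        u = rng.random() * sum(rates)
        if u < rates[0]:
            n1 += 1
        elif u < rates[0] + rates[1]:
            n2 += 1
        else:
            # the enzyme finishes one job; pick which species in
            # proportion to current counts (random-order service)
            if rng.random() * (n1 + n2) < n1:
                n1 -= 1
            else:
                n2 -= 1
        traj.append((n1, n2))
    return traj
```

Because both species wait on the same enzyme, congestion in one species slows degradation of the other, which is exactly the kind of coupling that queueing analysis is built to handle.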
She has a few slides up on the topic, if you want to learn more. They're targeted at queueing folks, so you should be able to get an idea of the application even if, like me, you're not well-trained on the bio side. And, a nice starting point for more detailed explanations is her paper: "Queueing up for enzymatic processing: correlated signaling through coupled degradation."
Another highlight for me was hearing about Lav Varshney's work. Lav is a new faculty hire at UIUC who very recently moved from IBM Research. He has a particularly interesting research vision surrounding the idea of "Computational Creativity," which sounds quite fuzzy, but is actually quite rigorous. In some sense, this can be viewed as a next step for Watson at IBM…
While the notion of computational creativity is vague, they've had remarkable success in very concrete ways. In particular, they've been able to use a "computationally creative computer" to automatically design and discover recipes that are flavorful, healthy, and novel. And, they've even tested the recipes using a panel of chefs to quantify and evaluate their performance! The most fun part is that they now have a food truck that serves food designed by their algorithms… so when he gives talks, he can have the food truck outside to show the proof of concept!
If you want to find out more, check out the NPR story on the work: “Computers May Someday Beat Chefs At Creating Flavors We Crave”
For me, it was quite nice to have such breadth in the workshop topics, because it gave me a chance to talk about work that I don't normally get to talk about. These days, it seems that I almost always give talks about topics related to data centers, electricity markets, or the combination of the two. But the breadth of Hot TIME let me take the opportunity to talk about the work that Umang Bhaskar, Siddharth Barman, Federico Echenique, and I have been doing on incorporating an empirical perspective into the algorithmic questions that dominate algorithmic game theory. Informally, I term our work "Empirical Algorithmic Game Theory," since our goal is to ask many of the same questions that the AGT community has been asking, but to ask them starting from data (inferring a model from the data) rather than starting directly from the model. So, from an economist's view, what we're trying to do is add a computational/algorithmic perspective to revealed preference theory.
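To make the revealed-preference side concrete (this is a textbook consistency check, not our algorithm): given observed price vectors and the bundles chosen at those prices, one can test whether the data are consistent with *any* utility-maximizing model by checking the Generalized Axiom of Revealed Preference (GARP). A minimal sketch:

```python
def violates_garp(prices, bundles):
    """Test the Generalized Axiom of Revealed Preference on observed
    (price vector, chosen bundle) pairs. Returns True if the data are
    inconsistent with maximization of any (locally non-satiated) utility."""
    n = len(prices)
    dot = lambda p, x: sum(pi * xi for pi, xi in zip(p, x))
    # direct revealed preference: i R j if bundle j was affordable
    # when bundle i was chosen (and strictly cheaper for the strict relation)
    R = [[dot(prices[i], bundles[j]) <= dot(prices[i], bundles[i])
          for j in range(n)] for i in range(n)]
    strict = [[dot(prices[i], bundles[j]) < dot(prices[i], bundles[i])
               for j in range(n)] for i in range(n)]
    # transitive closure of R (Floyd-Warshall style)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # GARP is violated by a cycle: i revealed preferred to j,
    # while j is directly *strictly* revealed preferred to i
    return any(R[i][j] and strict[j][i]
               for i in range(n) for j in range(n))
```

For example, choosing bundle (1, 1) at prices (1, 2) when (2, 0) was strictly cheaper, and then choosing (2, 0) at prices (2, 1) when (1, 1) was strictly cheaper, forms a preference cycle that no utility function can rationalize. The algorithmic questions start once you ask what can be computed efficiently from such data.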
I'll resist saying more about it now, because I'm planning a 3-post series on the topic pretty soon… but if you're curious, you can check out a few papers we have on the topic. Also, Aaron Roth and Noam Nisan each wrote posts that nicely explain the work…