It seems like I’ve been waiting forever to make this post! Back in the fall, I helped to organize an Alumni College at Caltech centered around the theme of “CS+X”. Now, I’m very excited to announce that the videos from the event are up!
What is an Alumni College, you ask? Well, instead of a homecoming game or something like that, we get alumni back to Caltech by promising a day of research talks, really more like TED talks! Alumni College focuses on a different theme each year and puts on a day of provocative talks on that topic. This year the theme was “Disrupting Science and Engineering with Computational Thinking,” i.e., the disruptive power of CS+X.
As I’ve written about before, we view “CS+X” as what makes Caltech’s approach to computer science so distinctive compared to other schools. We pride ourselves on inventing fields and then leaving them for others once they’re popular so that we can invent the next field. So, “seeding fields and then ceding them”…
In any case, the alumni college was a day filled with talks on CS+X from researchers at Caltech that are leading new fields… We covered CS+Astronomy, CS+Physics, CS+Biology, CS+Economics, CS+Chemistry, CS+Energy, and so on…
You can watch all of them on YouTube here. Enjoy!
As I was flying to the NSDI PC meeting this week, I was catching up on reading and came across an article on privacy in The Atlantic that (to my surprise) pushed nearly the same perspective on privacy that we studied in a paper a year or so ago: privacy as plausible deniability.
The idea is that hacks, breaches, behavioral monitoring, etc. are so common and hard to avoid that relying on tools from crypto or differential privacy isn’t really enough. Instead, someone who really cares about privacy probably needs to take that into account in their actions. For example, you can assume that Google/Facebook/etc. are observing your behavior online and that this is impacting prices, advertisements, etc. Tools from privacy, encryption, etc. can’t really help with this. However, tools that add “fake” traffic can. If an observer knows that you are using such a tool, then you always have plausible deniability about any observed behavior, and if the fake traffic is chosen carefully, it can counter the impact of personalized ads, pricing, etc. There are now companies, such as “Plausible Deniability LLC,” that do exactly this!
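To make the fake-traffic idea concrete, here is a minimal sketch (the function names and decoy pool are illustrative, not any real tool’s API): real queries are interleaved with random decoys, so any individual query in the observed stream is deniable.

```python
import random

# Hypothetical sketch of obfuscation via decoy traffic. The decoy pool and
# interface are made up for illustration; real tools use far richer models
# of plausible cover traffic.
DECOYS = ["weather", "news", "sports scores", "recipes", "movie times"]

def obfuscate(real_queries, decoys_per_query=2, seed=0):
    """Return an observed stream mixing real queries with random decoys."""
    rng = random.Random(seed)
    observed = []
    for q in real_queries:
        observed.append(q)
        observed.extend(rng.choices(DECOYS, k=decoys_per_query))
    rng.shuffle(observed)  # the observer sees an unordered mix
    return observed

stream = obfuscate(["flight to Boston", "hotel deals"])
print(len(stream))  # 6: two real queries hidden among four decoys
```

The point is that once the observer knows decoys are being injected, no single observed query can be attributed to the user with confidence.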
On the research front, we looked at this in the context of the following question: If a consumer knows that their behavior is being observed and cares about privacy, can the observer infer the true preferences of the consumer? Our work gives a resounding “no”. Using tools from revealed preference theory, we show that the observer not only cannot learn, but that every set of observed choices can be “explained” as consistent with any underlying utility function from the consumer. Thus, the consumer can always maintain plausible deniability.
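For readers who want the revealed-preference setup spelled out, here is the standard textbook formulation (illustrative; not necessarily the paper’s exact model):

```latex
% The observer sees $T$ price/choice pairs $(p^t, x^t)$ and asks whether
% some utility function $u$ ``rationalizes'' the data:
\[
  u(x^t) \ge u(x)
  \quad \text{for all } x \text{ with } p^t \cdot x \le p^t \cdot x^t,
  \qquad t = 1, \dots, T.
\]
% Classically, Afriat's theorem says such a $u$ exists if and only if the
% data satisfy GARP (the Generalized Axiom of Revealed Preference). The
% result described above is a strong converse for privacy: a consumer who
% values privacy can choose so that the same observed data remain
% consistent with any underlying utility, so the observer learns nothing
% about the true $u$.
```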
If you want to see the details, check it out here! And, note that the lead author (Rachel Cummings) is on the job market this year!
P.S. The NSDI PC meeting was really stimulating! It’s been a while since I had the pleasure of being on a “pure systems” PC, and it was great to see quite a few rigorous/mathematical papers be discussed and valued. Also, it was quite impressive to see how fair and thorough the discussions were. Congrats to Aditya and Jon on running a great meeting!
Well, last week was the final workshop in the semester-long “Algorithms & Uncertainty” program that I helped to organize at the Simons Institute. There are a few weeks of reading groups / seminars / etc. left, but the workshop marks the last “big” event. It’s sad to see the program wrapping up, though I’m definitely happy that I will be spending a lot less time on planes pretty soon — I’ve been flying back and forth every week!
The last workshop was on “Beyond worst-case analysis” and the organizers (Avrim Blum, Nir Ailon, Nina Balcan, Ravi Kumar, Kevin Leyton-Brown, Tim Roughgarden) did a great job of putting together a really unique program. Since there’s no one way to go beyond worst-case, the week included a huge array of interesting approaches for getting results stronger than worst-case analysis allows, whether by learning commonalities across different types of instances, using semi-random / smoothed analysis models, or extracting features that let you learn which algorithmic approach to use.
If you’re interested in getting a broad overview of the approaches, I highly recommend watching Avrim’s introductory talk, which gave a survey of a ton of different approaches for going “beyond worst-case” that have received attention recently. Tim also has a great course on the topic that is worth checking out…
My own contribution was the final talk of the week, which gave an overview of our work on how to use properties of prediction noise to help design better online optimization algorithms. The idea is that predictions are crucial to almost all online problems, but we don’t really understand how properties of prediction error should influence algorithm design. Over the last few years we’ve developed some models and results that allow initial insights into, e.g., how correlation structure in prediction errors changes the “optimal” algorithm form. Check out the video for more details!
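A toy sketch of why the correlation structure of prediction errors matters (this is an illustrative model I made up for this post, not the talk’s exact formulation): consider online tracking with a switching cost, where the algorithm acts on the average of several noisy predictions of the same future value. Averaging cancels independent errors but cannot remove a shared (perfectly correlated) bias.

```python
# Toy online optimization with switching costs: at each step, minimize
# (x - target)^2 + beta * |x - x_prev|, acting on averaged predictions.
# Hypothetical setup for illustration only.

def soft_step(x_prev, target, beta):
    """Closed-form minimizer of (x - target)^2 + beta * |x - x_prev|."""
    if abs(target - x_prev) <= beta / 2:
        return x_prev  # cheaper to stay put than to move
    return target - beta / 2 if target > x_prev else target + beta / 2

def run(targets, predictions, beta=1.0):
    """Play soft_step against averaged predictions; return total cost."""
    x, cost = 0.0, 0.0
    for y, preds in zip(targets, predictions):
        y_hat = sum(preds) / len(preds)  # average the available predictions
        x_new = soft_step(x, y_hat, beta)
        cost += (x_new - y) ** 2 + beta * abs(x_new - x)
        x = x_new
    return cost

targets = [4.0, 4.0, 4.0, 4.0]
# "Independent" errors: +1 and -1 across the ensemble, so the average is exact.
indep = [[y + 1.0, y - 1.0] for y in targets]
# Perfectly correlated errors: every prediction is off by +1, so the bias stays.
corr = [[y + 1.0, y + 1.0] for y in targets]

print(run(targets, indep))  # 4.5: averaging cancels the noise
print(run(targets, corr))   # 5.5: averaging cannot remove a shared bias
```

Even in this tiny example, the same averaging strategy is optimal under one error structure and systematically biased under another, which is exactly why the “optimal” algorithm form depends on the noise model.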
One of the great new NSF programs in recent years is the “Algorithms in the Field” program, which is a joint initiative from the CCF, CNS, and IIS divisions in CISE. Its goal is almost a direct match with what I try to do with my research: it “encourages closer collaboration between (i) theoretical computer science researchers [..] and (ii) other computing and information researchers [..] very broadly construed”. The projects it funds are meant to push the boundaries of theoretical tools and apply them in an application domain.
Of course this is perfectly suited to what we do in RSRG at Caltech! We missed the first year of the call due to bad timing, but we submitted this year and I’m happy to say it was funded (over the summer when I wasn’t blogging)!
The project is joint with Steven Low, Venkat Chandrasekaran, and Yisong Yue and has the (somewhat generic) title “Algorithmic Challenges in Smart Grids: Control, Optimization, and Learning.”
For those who are curious, here’s the quick-and-dirty summary of the goal, taken directly from the proposal.
People outside of Southern California often don’t appreciate how dense (and strong) the collection of universities in the SoCal region is. Between Caltech, USC, UCLA, UCSD, UCSB, Irvine, Riverside, etc., there’s a lot of exciting stuff going on! And one of the great things about the area is that there’s a strong sense of community. That is really on show at this time of the year…
We’re in the middle of workshop season in the LA area, where every week or so there is a SoCal X workshop. We’ve already had the SoCal Control Workshop; next up is the SoCal Network Economics and Game Theory (NEGT) Symposium (next Friday). The week after, we have the SoCal Theory Day, and the week after that we have the SoCal Machine Learning Day! (The last two are being hosted at Caltech this year.)
So, if you’re in the area — I’ll probably see you at least once over the next few weeks! Be sure to register for the ones you want to attend ASAP…
We always have a large number of postdocs around at Caltech (we usually have ~20+), and this year is no exception. Our application site just went live, so please help me spread the word. We have (multiple) postdoc openings in all of the following areas in CMS:
I am personally looking for postdocs as part of the first four programs. Don’t worry too much about which program is best suited for you when you apply; the backend of the application site is unified so that all faculty can easily see all the applicants. Just be sure to mark the names of the faculty that you are most interested in working with when you go through the application process.
I’m generally looking for postdocs in the areas of Network Economics, Smart Grid, and Online Algorithms, but a few areas that I’m particularly hoping to find people in are: (i) digital platforms (any flavor of research, from measurement to modeling to economic analysis), (ii) markets surrounding data, (iii) electricity markets for demand response and renewables, (iv) online optimization or, more broadly, online algorithms. So, if you’re interested in these areas please apply (and send me mail)!
Over the last year, while I haven’t been blogging, one of the new directions that we’ve started to look at in RSRG is “data markets”.
“Data Markets” is one of those phrases that means lots of different things to lots of different people. At its simplest, the idea is that data is a commodity these days — data is bought and sold constantly. The challenge is that we don’t actually understand too much about data as an economic good. In fact, it’s a very strange economic good: for example, data is freely replicable, so the same dataset can be sold to many buyers without being used up, and traditional economic theory doesn’t straightforwardly apply…