As I was flying to the NSDI PC meeting this week, I was catching up on reading and came across an article on privacy in The Atlantic that (to my surprise) pushed nearly the same perspective on privacy that we studied in a paper a year or so ago… privacy as plausible deniability.
The idea is that hacks, breaches, behavioral monitoring, etc. are so common and hard to avoid that relying on tools from crypto or differential privacy isn’t really enough. Instead, if someone really cares about privacy, they probably need to take that into account in their actions. For example, you can assume that Google/Facebook/etc. are observing your behavior online and that this is impacting the prices and advertisements you see. Tools from privacy, encryption, etc. can’t really help with this. However, tools that add “fake” traffic can. If an observer knows that you are using such a tool, then you always have plausible deniability about any observed behavior, and if the fake traffic is chosen carefully, it can counter the impact of personalized ads, pricing, etc. There are now companies, such as “Plausible Deniability LLC,” that do exactly this!
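To make the mechanism concrete, here is a minimal sketch of what such a tool might do: interleave each genuine query with randomly chosen decoys, so an observer of the resulting stream cannot tell which requests reflect true intent. This is purely illustrative (the query strings and function name are made up for the example), not the implementation of any actual product.

```python
import random

def obfuscated_stream(real_queries, decoy_pool, decoys_per_real=3, seed=0):
    """Interleave each real query with random decoy queries so an
    observer of the stream cannot tell which requests are genuine."""
    rng = random.Random(seed)
    stream = []
    for q in real_queries:
        # Mix the real query into a batch of decoys...
        batch = [q] + rng.sample(decoy_pool, decoys_per_real)
        # ...and shuffle so its position gives nothing away.
        rng.shuffle(batch)
        stream.extend(batch)
    return stream

# Hypothetical example: one real interest hidden among unrelated decoys.
real = ["running shoes"]
decoys = ["garden hose", "tax software", "dog food",
          "flight to oslo", "piano tuner"]
observed = obfuscated_stream(real, decoys)
```

The point is that every query in `observed` is equally plausible a priori, so the user can always deny that any particular one was genuine.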
On the research front, we looked at this in the context of the following question: if a consumer knows that their behavior is being observed and cares about privacy, can the observer infer the true preferences of the consumer? Our work gives a resounding “no.” Using tools from revealed preference theory, we show not only that the observer cannot learn, but that every set of observed choices can be “explained” as consistent with any underlying utility function for the consumer. Thus, the consumer can always maintain plausible deniability.
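As a toy illustration of the non-identifiability at work (this is not the paper’s construction, just a flavor of it): two very different utility functions can generate exactly the same choice at the observed prices and budget, so the observation alone cannot distinguish them. The brute-force demand function below is hypothetical scaffolding for the example.

```python
def demand(utility, prices, budget, grid=50):
    """Brute-force the chosen bundle over points on the budget line."""
    px, py = prices
    best, best_u = None, float("-inf")
    for i in range(grid + 1):
        x = budget / px * i / grid       # amount of good x
        y = (budget - px * x) / py       # remainder spent on good y
        u = utility(x, y)
        if u > best_u:
            best, best_u = (round(x, 3), round(y, 3)), u
    return best

cobb_douglas = lambda x, y: (x * y) ** 0.5   # smooth substitutes
leontief     = lambda x, y: min(x, y)        # perfect complements

# At prices (1, 1) and budget 2, both utilities choose the same bundle,
# so observing that single choice reveals nothing about which one is true.
b1 = demand(cobb_douglas, (1, 1), 2)
b2 = demand(leontief, (1, 1), 2)
```

The paper’s result is far stronger (it covers every sequence of observed choices), but even this tiny case shows why observed behavior can underdetermine preferences.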
If you want to see the details, check it out here! And, note that the lead author (Rachel Cummings) is on the job market this year!
P.S. The NSDI PC meeting was really stimulating! It’s been a while since I had the pleasure of being on a “pure systems” PC, and it was great to see quite a few rigorous/mathematical papers be discussed and valued. Also, it was quite impressive to see how fair and thorough the discussions were. Congrats to Aditya and Jon on running a great meeting!
(I wrote this during the workshop a few weeks ago, and just realized that I never actually hit “publish.” Better late than never, I guess!)
Every year in the fall, all the folks in southern California interested in the intersection of economics and engineering/computer science get together and have a two-day workshop that we call NEGT for “Network Economics and Game Theory.” Hosting duties rotate between USC, UCLA, and Caltech, and this year it was our job. The workshop is just wrapping up and, thanks to our amazing admin Sydney Garstang, everything went wonderfully!
There were lots of great talks, and the slides will eventually start to show up here. Among the many highlights, our two external speakers both gave really great talks. Our first keynote, Tim Roughgarden, gave a great overview of recent results in the area of approximate mechanism design. This is a direction that many folks in the Algorithmic Game Theory community have been pushing on for a while, but Tim showed some very interesting new results. Plus, it is always interesting to see how economists react to this direction, which is very different from the traditional viewpoint. Our second keynote, Markus Mobius, gave a really interesting empirical take on the power of social learning. He showed results from an experiment involving Harvard undergraduates performing a task that required social learning, and was able to test various conjectures about how such learning occurs (as well as the magnitude of social learning that occurs). Given the huge focus in CS on models where we learn from our friends, it was quite interesting to see that the magnitude of such social learning is actually pretty small, and seems to occur only in very specific ways.
Every year (since 2009) in the fall, all of the folks in southern California who work at the intersection of economics and CS/EE get together for the Network Economics and Game Theory (NEGT) workshop. The hosting duties rotate between USC, UCLA, and Caltech, and this year the honor falls to us here at Caltech.
We’ve just finished finalizing the program — and it’s a great one. So, if you’re in the area, come on by!
We’re holding it on Nov 20-21. We’ll have a very reasonable start time each day of 10am so that folks can avoid LA traffic in the morning, and we’ll end both days with a reception so that you can avoid traffic on the way home, too. Markus Mobius (MSR) and Tim Roughgarden (Stanford) are the keynotes, and we have a great list of invited speakers from all across Southern California to round out the program.
Attendance is free, but please register early, if possible, so that we can plan the catering! Also, we’ll have a poster session for students to present work (and work-in-progress). If you’re interested, just sign up when you register.
Net neutrality has been a hot topic in recent months, one with a lot of emotional baggage and rhetoric that makes it difficult to follow the core issues. I’m not going to attempt to unravel things here; after all, it has been the topic of hundreds of research papers over the last decade, so it would take more than a short blog post to really get into the issues. Rather, I want to make a simple point that is often missed.
We are in the middle of a large-scale experiment on the impact of net neutrality, and the effects of a loss of net neutrality have proven disastrous.
Why do I say this? Well, net neutrality has never existed for mobile devices, so by comparing the mobile experience (and contracting) with the wired world, we can already observe the consequences of giving up net neutrality, and they are clearly worrisome.
June is a month that is dominated by conference travel for me, with three of my favorite conferences all typically happening back-to-back. The third (and final) of these this year was Stochastic Networks. The little one at home prevented me from being able to join for the whole conference, but I was happy to be able to come for the first two days.
Stochastic Networks is an applied probability conference, and the type of event that doesn’t happen often enough in computer science. Basically, it consists of 20-25 invited hour-long talks over a week. The speakers are mostly senior folks with a few junior folks thrown in, and the talks are of an extremely high quality. And, if you do the math, that makes an average of 4-5 talks per day, which means that the days leave a lot of time for conversation and interaction. Because of the quality of the speakers, lots of folks attend even if they aren’t presenting (which makes for somewhere around a 100+ person audience, I’d guess), so it becomes a very productive event, both in terms of working with current collaborators and in terms of starting up new projects.
This past week, a large part of our group attended ACM EC up in Palo Alto. EC is the top Algorithmic Game Theory conference, and has been getting stronger and stronger each year. I was on the PC this year, and I definitely saw very strong papers not making the cut (to my dismay)… In fact, one of the big discussions at the business meeting of the conference was how to handle the growth of the community.
Finding out about the increasingly difficult acceptance standards made me even happier that our group was so well-represented. We had four papers on a variety of topics, from privacy to scheduling to equilibrium computation. I’ll give them a little plug here before talking about some of my highlights from the conference…
One of the fun parts of Sigmetrics is all the interesting workshops and tutorials that surround the conference. These really help to create communities around some of the sub-disciplines in the field, and often attract lots of new folks into the area.
For example, for the past few years, there have been three consistent workshops on the menu — Greenmetrics, which focuses on energy issues; WPIN/NetEcon, which focuses on network economics; and MAMA, which focuses on the mathematical side of performance analysis. You see the reflection of this consistency in the program of the main Sigmetrics conference, which has increasingly large sets of papers in each of these areas.