# This year a clever algorithm won the day in rankmaniac

I wrote last year about the mini-projects I use in my “flagship” course, Networks: Structure & Algorithms. We recently finished the first of these mini-projects: rankmaniac.

In rankmaniac, students have to build a system (using Amazon Elastic MapReduce) that identifies the 20 nodes with the highest PageRank in a huge web crawl, and to do so as quickly as possible. Each submission is scored as (mean-squared error of the output ranking of the 20 nodes) × 30 sec + computation time. Thus, there’s a tradeoff between accuracy and computing time, and the goal is to find (and efficiently implement) a parallel algorithm for computing approximate PageRanks very quickly…

The students have one week to spend on this and, each night, their current solutions are run and the scores are posted on a public leaderboard.  Their grades are not impacted by their ranking (only whether they pass two given milestones), but the team at the top of the leaderboard at the end of the week is crowned “rankmaniac” for the year.

## This year’s competition

Every year, students try lots of interesting algorithmic approaches to the problem, but most years it turns out that system optimizations win the day. For the scale of the problems that I can give them, a clean, simple, efficient implementation can typically beat a complex algorithm… But this year was very exciting because the winning team prevailed thanks to a clever algorithmic optimization!

In particular, they used a nice idea from a WWW paper by Kamvar et al.: Extrapolation Methods for Accelerating PageRank Computations. The key idea in the paper is quite simple and effective: perform quadratic extrapolation to reduce the number of iterations computed by the power method. Intuitively, one can think of replacing a power-method step with an extrapolation step every once in a while in order to speed up the computation. You can’t do this too often, or things won’t converge, but you can do it often enough that significant speedups are possible.
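To make the idea concrete, here is a minimal Python sketch of the power method with occasional quadratic-extrapolation steps, in the spirit of the Kamvar et al. paper. This is an illustrative toy (dense matrices, my own parameter names and safety guards), not the winning team’s MapReduce implementation.

```python
import numpy as np

def pagerank_with_extrapolation(A, d=0.85, tol=1e-10, max_iter=500, extrap_every=10):
    """Power method for PageRank with occasional quadratic-extrapolation
    steps (after Kamvar et al.). `A` is a column-stochastic link matrix.
    All parameter choices here are illustrative assumptions."""
    n = A.shape[0]
    # Dense "Google matrix" with damping d -- a sketch, not a scalable layout.
    G = d * A + (1.0 - d) / n * np.ones((n, n))
    x = np.full(n, 1.0 / n)
    history = [x]                       # past iterates x^(0), x^(1), ...
    for k in range(1, max_iter + 1):
        x_new = G @ x
        x_new /= x_new.sum()
        # Every `extrap_every` steps, build a quadratic extrapolate from
        # the last four iterates x^(k-3), x^(k-2), x^(k-1), x^(k).
        if k % extrap_every == 0 and len(history) >= 3:
            x3, x2, x1, x0 = history[-3], history[-2], history[-1], x_new
            y2, y1, y0 = x2 - x3, x1 - x3, x0 - x3
            # Least-squares fit of the gamma coefficients (gamma3 = 1).
            gamma, *_ = np.linalg.lstsq(np.column_stack([y2, y1]), -y0, rcond=None)
            g1, g2, g3 = gamma[0], gamma[1], 1.0
            b0, b1, b2 = g1 + g2 + g3, g2 + g3, g3
            x_ex = b0 * x2 + b1 * x1 + b2 * x0
            if np.all(x_ex > 0):        # keep only well-behaved extrapolates
                x_new = x_ex / x_ex.sum()
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        history.append(x_new)
        x = x_new
    return x
```

On small test graphs this converges to the same vector as plain power iteration; the payoff of the extrapolation steps only becomes visible at scale, which is exactly the rankmaniac setting.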

In the case of rankmaniac, the extrapolation steps meant that the winning team could run fewer iterations than any of the other teams, which was enough to win the day!

# Finding Any Nontrivial Coarse Correlated Equilibrium Is Hard

One of the things one might hope to get from an equilibrium concept in a game is that it might plausibly describe or predict behavior in the real world. That’s one reason negative results suggesting that certain equilibria can’t be computed efficiently are so interesting—they call those equilibrium concepts into question. Of course, there’s much more to it than that, but the existence of efficient algorithms (or, even better, simple, distributed dynamics) for computing equilibria is a big selling point for a solution concept.

In that sense, correlated equilibria (CE) and coarse correlated equilibria (CCE) are much more promising than Nash equilibria (NE). Even in games with many players, there exist a number of natural dynamics that quickly converge to these solution concepts. In particular, these dynamics induce efficient computation of approximate CE and CCE in multiplayer games; by contrast, computation of approximate Nash equilibria is computationally hard in multiplayer games.

If you aren’t so familiar with CEs and CCEs, you can think of them as very similar to Nash equilibria: they also consist of a distribution over players’ action profiles from which no player can benefit by unilateral deviation, and hence represent stable choices of distributions over player actions. The big difference is that, whereas a Nash equilibrium is defined to be a product of independent distributions over actions (one for each player), correlated and coarse correlated equilibria are general (joint) probability distributions—CEs and CCEs allow players to correlate their choices of actions. For the purposes of this blog post, let’s not worry too much about the distinction between CEs and CCEs. Notice that CE and CCE are more general than NE—every NE is a CE (and a CCE), but the converse is not true. So it makes sense that one might be able to compute a CE efficiently, even in settings where computing a NE is hard.
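To make the deviation condition concrete, here is a small Python sketch that checks the CCE conditions for a two-player game; the function name and the example game are my own illustrations, not from the paper.

```python
import numpy as np

def is_cce(u1, u2, sigma, tol=1e-9):
    """Check the coarse-correlated-equilibrium conditions for a two-player
    game. u1/u2 are the row/column player's payoff matrices; sigma is a
    joint distribution over action profiles (same shape as u1)."""
    eu1 = float(np.sum(sigma * u1))    # row player's expected payoff
    eu2 = float(np.sum(sigma * u2))    # column player's expected payoff
    p_row = sigma.sum(axis=1)          # marginal over row actions
    p_col = sigma.sum(axis=0)          # marginal over column actions
    # A CCE deviation commits to one fixed action before the draw, so the
    # deviator faces only the opponent's marginal distribution.
    row_dev_payoffs = u1 @ p_col       # payoff of each fixed row action
    col_dev_payoffs = u2.T @ p_row     # payoff of each fixed column action
    return bool(np.all(row_dev_payoffs <= eu1 + tol)
                and np.all(col_dev_payoffs <= eu2 + tol))
```

For the game of Chicken, for instance, the uniform distribution over (Dare, Chicken), (Chicken, Dare), and (Chicken, Chicken) passes this check even though it is not a product distribution, illustrating a correlated outcome that is stable without being a Nash equilibrium.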

Beyond computation of equilibria, another significant thread of research in algorithmic game theory has been the study of the quality of equilibria, often as measured by the social welfare of the equilibrium or its ratio to the social welfare of the socially optimal outcome (cf. the extensive literature on the price of anarchy (PoA)). Given that we know it is possible to efficiently compute CE and CCE, it is natural to ask: how good are the equilibria we can efficiently compute? For example, do existing efficient dynamics find the best such equilibria, or at least ones that approximately optimize the social welfare? Since the gap between the worst and the best equilibria (CE or CCE), in terms of social welfare, can be large in natural games, it is interesting to understand whether there exist efficient dynamics or algorithms that avoid—at least to some extent—the bad outcomes.

In their notable work, Papadimitriou and Roughgarden [PR] show that determining a socially optimal CE is NP-hard in a number of succinct multiplayer games*. (What’s socially optimal? The equilibrium that maximizes the total (or average) welfare of the players.) This result intuitively follows from the fact that determining an action profile with maximum welfare—i.e., solving the welfare-optimization problem even without equilibrium constraints—is NP-hard in general. The hardness result of [PR] leaves open the question of computing near-optimal CE/CCE, i.e., whether there exist efficient algorithms that compute a CE/CCE with welfare at least, say, alpha times the optimal, for some nontrivial approximation ratio alpha < 1.

In recent work with Sid Barman, a postdoc at Caltech, we consider exactly this question. We establish that, unless P=NP, there does not exist any efficient algorithm that computes a CCE with welfare better than the worst possible CCE, in succinct multiplayer games*. We also establish similar hardness results for computing equilibria under the egalitarian objective or Pareto-optimality.

Analogous hardness results hold for CE. A classical interpretation of a CE is in terms of a mediator who has access to the players’ payoff functions and who draws outcomes from a correlated equilibrium’s joint distribution over player actions and privately recommends the corresponding actions to each player. The equilibrium conditions ensure that no player can benefit in expectation by unilaterally deviating from the recommended actions. Therefore, the problem we study here is exactly the computational complexity of the problem that a mediator faces if she wishes to maximize social welfare.

We also extend the hardness result to approximate CE and CCE. Therefore, while one can efficiently compute an approximate CE/CCE in succinct multiplayer games, one cannot provide any nontrivial welfare guarantees for the resulting equilibrium (unless P=NP).

In addition, we show that this hardness result also holds specifically for potential games (generally considered to be a very tractable class of games), and persists even in settings where the gap between the best and worst equilibrium is large.

We also have some positive results, but I think the negative ones are the most interesting—they say that those appealing dynamics that compute CE/CCE can’t be guaranteed to find anything better than the worst equilibrium. To me, that’s some surprising bad news. One silver lining is that this provides new motivation for studying the price of anarchy (the quality of the worst equilibrium) for CE/CCE, since generally that’s the best thing we can hope to compute.

*Technical Aside (succinct games): In general multiplayer games, the size of the normal-form representation, $N$, is exponential in the number of players; one can compute a CE/CCE that optimizes a linear objective by solving a linear program of size polynomial in $N$, and hence the computational complexity of equilibrium computation is not interesting for general games. However, most games of interest—such as graphical games, polymatrix games, congestion games, local effect games, network design games, anonymous games, and scheduling games—admit a succinct representation (for which the above-mentioned linear program can be exponentially large in the size of the representation), and hence it is such succinctly representable games that we (and previous works) study.
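For concreteness, the linear program mentioned above is the standard one: the variables are the joint probabilities $\sigma(a)$, one per action profile, and (writing it for CCE, with $W(a)$ the social welfare of profile $a$) it reads

```latex
\begin{aligned}
\max_{\sigma} \quad & \sum_{a} \sigma(a)\, W(a) \\
\text{s.t.} \quad & \sum_{a} \sigma(a)\, u_i(a) \;\ge\; \sum_{a} \sigma(a)\, u_i(a_i', a_{-i})
    && \forall \text{ players } i,\ \forall \text{ actions } a_i', \\
& \sigma(a) \ge 0 \;\; \forall a, \qquad \sum_{a} \sigma(a) = 1.
\end{aligned}
```

Both the objective and the deviation constraints are linear in $\sigma$, so for games given in normal form this LP has size polynomial in $N$; for succinct games, the number of variables is exponential in the representation, which is where the hardness bites.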

[PR] Papadimitriou, C. H. and Roughgarden, T. 2008. Computing correlated equilibria in multi-player games. Journal of the ACM 55(3), Article 14.

# I’m Adam Wierman and this is how I work

I’m a big fan of the “I’m XX and this is how I work” series on Lifehacker. It’s quite interesting to see how successful folks manage their time, workspaces, etc. The contrasts are interesting, e.g., between Ira Glass (of This American Life) and Nathan Blecharczyk (of Airbnb).

In any case, I figured that since I’m not getting asked by Lifehacker anytime soon, I’d do one of my own with a bit of an academic slant…

**What are some apps/software/tools you can’t live without?**

Probably the biggest no-brainer is Dropbox, which I use for everything (both work and personal). It’s crucial for keeping projects synced with collaborators/students/TAs/etc. Sometimes we run an SVN repository inside it or use GitHub for tighter management of code, but often Dropbox is enough.

For teaching, I’m a big fan of Piazza.  I really can’t imagine running a large class without it at this point.  Typically questions posted by students get responses within 10 minutes (either from TAs or other students) and then I have an archive of these questions that I can use to improve the course in future years.

For travel, my assistant got me using TripIt, and I’m a big fan now.  It keeps everything in one place and syncs it so I have all the details I need even when offline.  Saves a lot of time on the road…

For presentations, most people complain about PowerPoint, but I think it’s a great tool; the newer versions, especially, are much improved. As an academic, once you create a keyboard shortcut to start a TeX equation in a text box (Alt-1 for me), it’s smooth sailing (at least on Windows).

Of course, email is a large part of my job… I use Gmail for everything and am a big believer in Inbox Zero, so the only things in my inbox are the tasks I want to accomplish in a given day. The key things I use to manage that are Gmail filters (it’s amazing how freeing it is to set up an “AcademicSpam” filter; mine is a little more complicated than the one suggested by PhD Comics) and the Boomerang plugin. I started using Boomerang only recently, but I already can’t imagine not being able to send messages at specific times and have them leave the inbox, only to pop up again at a later, scheduled time, i.e., boomerang them.

Setting up meetings can be a pain, but it’s much easier with when2meet. I don’t know why anyone still uses Doodle…

For writing, I use WinEdt with SumatraPDF (which integrates much more tightly than Adobe Reader; e.g., you can click on the PDF and WinEdt takes you to that part of your source). And, for blogging, I use latex2wp to convert TeX to WordPress format.