Nash Equilibrium” [Rubinstein 2015].
• “The Complexity of Fairness Through Equilibrium” [Othman et al. 2016].
I am indebted to Christos and all my co-authors: Yakov Babichenko, Mark Braverman, Young Kun Ko, Pasin Manurangsi, Abraham Othman, and Omri Weinstein.
I am also grateful to Boaz Barak, Shai Ben-David, Jonah Brown-Cohen, Karthik C. S., Yang Cai, Alessandro Chiesa, Paul Christiano, Constantinos Daskalakis, Shaddin Dughmi, Mika Goos, Rishi Gupta, Elad Haramaty, Noam Nisan, Prasad Raghavendra, Tselil Schramm, Madhu Sudan, Luca Trevisan, Michael Viderman, and anonymous reviewers, for commenting on, correcting, and inspiring much of the content of this book.
The writing of this book was supported in part by the MSR Ph.D. Fellowship (and wonderful MSR summer internships).
This has been an amazing journey. I am most grateful for the opportunities I had to share it with my family and friends.
Aviad Rubinstein
Cambridge, MA
March 2019
PART I
OVERVIEW
1
The Frontier of Intractability
The combination of vast amounts of data, unprecedented computing power, and clever algorithms allows today’s computer systems to drive autonomous cars, beat the best human players at chess and Go, and live stream videos of cute cats across the world. Yet computers can fail miserably at predicting outcomes of social processes and interactions, elections being only the most salient example. And this is hardly surprising, as there are two potent reasons for such unpredictability: One reason is that human agents are driven by emotions and hormones and billions of neurons interacting with a complex environment, and as a result their behavior is extremely hard to model mathematically. The second reason is perhaps a bit surprising, and certainly closer to the concerns of this book: Even very simple and idealized models of social interaction, in which agents have extremely clear-cut objectives, can be intractable.
To have any hope of reasoning about human behavior on national and global scales, we need a mathematically sound theory. Game theory is the mathematical discipline that models and analyzes the interaction between agents (e.g., voters) who have different goals and are affected by each other’s actions. The central solution concept in game theory is the Nash equilibrium. It has an endless list of applications in economics, politics, biology, etc. (e.g., [Aumann 1987]).
By Nash’s theorem [Nash 1951], an equilibrium always exists. Furthermore, once at a Nash equilibrium, players have no incentive to deviate. The main missing piece in the puzzle is:
How do players arrive at an equilibrium in the first place?
After more than six decades1 of attempts by economists (e.g., [Brown 1951, Foster and Young 2006, Hart and Mas-Colell 2003, Kalai and Lehrer 1993, Lemke and Howson 1964, Robinson 1951, Scarf 1967, Shapley 1964]), we still don't have a satisfying explanation.
About ten years ago, the study of this fundamental question found a surprising answer in computer science: Chen et al. [2009b] and Daskalakis et al. [2009a] proved that finding a Nash equilibrium is computationally intractable.2 And if no centralized, specialized algorithm can find an equilibrium, it is even less likely that distributed, selfish agents will naturally converge to one. This casts doubt over the entire solution concept.
For the past decade, the main remaining hope for Nash equilibrium has been approximation. The central open question in this field has been:
Is there an efficient algorithm for finding an approximate Nash equilibrium?
We give a strong negative resolution to this question: our main result rules out efficient approximation algorithms for finding Nash equilibria.3
Our theorem is the latest development in a long and intriguing technical story. The first question we computer scientists ask when encountering a new algorithmic challenge is: is it in P, the class of polynomial time (tractable) problems; or is it NP-hard, like Satisfiability and the Traveling Salesperson Problem (where the best known algorithms require exponential time)? Approximate Nash equilibrium falls into neither category; its complexity lies between P and NP-hard—hence the title of our book. Let us introduce two (completely orthogonal) reasons why we do not expect it to be NP-hard.
The first obstacle for NP-hardness is the totality of Nash equilibrium. When we say that Satisfiability or the Traveling Salesperson Problems are NP-hard, we formally mean that it is NP-hard to determine whether a given formula has a satisfying assignment, or whether a given network allows the salesperson to complete her travels within a certain budget. In contrast, by Nash’s theorem an equilibrium always exists. Deciding whether an equilibrium exists is trivial, but we still don’t know how to find one. This is formally captured by an intermediate complexity class called PPAD.
The second obstacle for NP-hardness is an algorithm (the worst kind of obstacle for intractability). The best known algorithms for solving NP-hard or PPAD-hard problems require exponential (≈ 2^n) time, and there is a common belief (formulated a few years ago as the “Exponential Time Hypothesis” [Impagliazzo et al. 2001]) that much faster algorithms do not exist. In contrast, an approximate Nash equilibrium can be found in quasi-polynomial (≈ n^{log n}) time. Notice that this again places the complexity of approximate Nash equilibrium between the polynomial time solvable problems in P and the exponential time required by NP-hard problems. Therefore, approximate Nash equilibrium is unlikely to be NP-hard—or even PPAD-hard, since we know of no quasi-polynomial algorithm for any other PPAD-hard problem.
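As a quick back-of-the-envelope comparison (taking logarithms base 2), the quasi-polynomial bound can be rewritten as a power of two: n^{log n} = 2^{(log n)^2}. For n = 1000, say, this is roughly 2^100, whereas the exponential bound 2^n is 2^1000.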
As illustrated by these two obstacles, approximate Nash equilibrium is very far from our standard notion of intractability, NP-hardness. In some sense, it is one of the computationally easiest problems that we can still prove to be intractable—it lies at the frontier of our understanding of intractability. Unsurprisingly, the techniques we had to master to prove the intractability of approximate Nash equilibrium are useful for many other problems. In particular, we also prove hardness of approximation for several other interesting problems that either belong to the class PPAD or have quasi-polynomial time algorithms.
1.1 PPAD: Finding a Needle You Know Is in the Haystack
Consider a directed graph G. Each edge contributes to the degree of the two vertices incident on it. Hence the sum of the degrees is twice the number of edges, and in particular it is even. Now, given an odd-degree vertex v, there must exist another odd-degree vertex. But can you find one? There is, of course, the trivial algorithm, which simply brute-force enumerates over all the graph vertices.
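To make the trivial algorithm concrete, here is a minimal Python sketch (mine, not from the text). It assumes the graph is given explicitly as a dictionary mapping each vertex to its list of out-neighbors, and it simply scans all vertices for one, other than v, whose total degree is odd.

def find_other_odd_degree_vertex(graph, v):
    # graph: dict mapping each vertex to the list of its out-neighbors;
    # every vertex is assumed to appear as a key, even if it has no out-edges.
    degree = {u: 0 for u in graph}
    for u, out_neighbors in graph.items():
        for w in out_neighbors:
            degree[u] += 1  # the edge (u, w) adds one to u's out-degree ...
            degree[w] += 1  # ... and one to w's in-degree
    for u in graph:
        if u != v and degree[u] % 2 == 1:
            return u
    return None  # unreachable when v has odd degree, by the parity argument above

# Example: the directed line a -> b -> c; both endpoints have odd degree.
G = {"a": ["b"], "b": ["c"], "c": []}
assert find_other_odd_degree_vertex(G, "a") == "c"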
Now suppose G further has the property4 that every vertex has in- and out-degree at most 1. Thus, G is a disjoint union of lines and cycles. If v has an outgoing edge but no incoming edge, it is the beginning of a line. The algorithm for finding another odd-degree vertex is now a bit more clever: follow the path to the end.
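The path-following algorithm admits an equally short sketch (again mine, and anticipating the successor oracle S of the next paragraph). Starting from v, which has an outgoing but no incoming edge, it repeatedly moves to the successor until it reaches a vertex with no outgoing edge; that endpoint has total degree 1, hence odd.

def find_other_end_of_line(succ, v):
    # succ: dict mapping each vertex to its unique successor; vertices with
    # no outgoing edge are simply absent (or map to None). Since v starts a
    # line rather than a cycle, this walk is guaranteed to terminate.
    u = v
    while succ.get(u) is not None:
        u = succ[u]
    return u  # the other endpoint: in-degree 1, out-degree 0, so odd degree

# Example: the same line a -> b -> c, represented by its successor map.
succ = {"a": "b", "b": "c"}
assert find_other_end_of_line(succ, "a") == "c"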
In the worst case, both algorithms run in time linear in the number of vertices. When the graph is given explicitly, e.g., as an adjacency matrix, this is quite efficient compared to the size of the input. But what if the graph is given as black-box oracles S and P that return, for each vertex, its successor and predecessor in the graph? The amount of work5 required for finding another odd-degree vertex by the exhaustive algorithm, or the path-following