
the community. Balcan et al. [2013] give a polynomial-time algorithm for enumerating (α, β)-communities in the special case where the degree of every node is Ω(n). Arora et al. [2012] consider several semi-random models where the edges inside the community are generated at random, according to the expected degree model. For the general case, the latter paper by Arora et al. gave a simple quasi-polynomial-time (n^{O(log n)}) algorithm for detecting (α, β)-communities whenever α − β is at least some positive constant. (Their algorithm is essentially identical to the meta-algorithm for bilinear optimization that we outlined above.)
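      To make the seed-enumeration idea concrete, here is a minimal Python sketch of the general shape of such a meta-algorithm. It is an illustration under our own assumptions (a 0/1 adjacency matrix, a membership cutoff of (α + β)/2 against the seed), not the precise procedure of Arora et al. [2012]: for every seed of O(log n) vertices, collect the vertices well-connected to the seed and test whether they form an (α, β)-community.

```python
from itertools import combinations

def is_community(adj, members, alpha, beta):
    """Test the (alpha, beta)-community condition: every member has at least
    alpha*|S| neighbors inside S; every non-member has at most beta*|S|."""
    S = set(members)
    for v in range(len(adj)):
        deg_in = sum(1 for u in S if adj[v][u])
        if v in S and deg_in < alpha * len(S):
            return False
        if v not in S and deg_in > beta * len(S):
            return False
    return True

def detect_community(adj, alpha, beta, seed_size):
    """Enumerate all seeds of `seed_size` vertices; taking seed_size =
    O(log n) gives the n^{O(log n)} running time discussed above."""
    n = len(adj)
    cutoff = (alpha + beta) / 2  # assumed membership threshold
    for seed in combinations(range(n), seed_size):
        candidate = [v for v in range(n)
                     if sum(adj[v][u] for u in seed) >= cutoff * seed_size]
        if candidate and is_community(adj, candidate, alpha, beta):
            return candidate
    return None  # no community found at this seed size
```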

      We show that, for all constants α > β ∈ (0, 1], community detection requires quasi-polynomial time (assuming ETH). For example, when α = 1 and β = 0.01, this means that we can hide a clique C, such that every single vertex not in C is connected to at most 1% of C. Our main result is actually a much stronger inapproximability: even in the presence of a (1, o(1))-community, finding any (β + o(1), β)-community is hard.

      Unlike all the quasi-polynomial approximation schemes mentioned above, Arora et al.’s algorithm has the unique property that it can also exactly count all the (α, β)-communities. Our second result is that even counting the number of (1, o(1))-communities requires quasi-polynomial time. A nice feature of this result is that we can base it on the much weaker #ETH assumption, which asserts that counting the satisfying assignments of a 3-SAT instance requires time 2^{Ω(n)}. (Note, for example, that #ETH is likely to be true even if P = NP.)

       1.2.3 VC and Littlestone’s Dimensions

      A common and essential assumption in learning theory is that the concepts we want to learn come from a nice, simple concept class, or (in the agnostic case) that they can at least be approximated by a concept from a simple class. When the concept class is sufficiently simple, there is hope for good (i.e., sample-efficient and low-error) learning algorithms.

      There are many different ways to measure the simplicity of a concept class. The most influential measure of simplicity is the VC Dimension [Vapnik and Chervonenkis 1971], which captures learning in the Probably Approximately Correct (PAC) model. We also consider Littlestone’s Dimension [Littlestone 1987], which corresponds to minimizing mistakes in online learning (see Section 2.5 for definitions). When either dimension is small, there are algorithms that exploit the simplicity of the class to obtain good learning guarantees.

      In Chapter 14 we consider the algorithmic challenge of computing either dimension. In particular, we study the most optimistic setting, where the entire universe and concept class are given as explicit input (a binary matrix whose (x, c)-th entry is 1 iff element x belongs to concept c). In this model, both dimensions can be computed in quasi-polynomial time. Interestingly, the algorithm does not go through the bilinear optimization problem; instead, it exploits the fact that for concept class C, both dimensions are bounded by log |C|. Two decades ago, it was shown that quasi-polynomial time is indeed necessary for both dimensions [Frances and Litman 1998, Papadimitriou and Yannakakis 1996]. The computational intractability of computing the (VC, Littlestone’s) dimension of a concept class suggests that even in cases where a simple structure exists, it may be inaccessible to computationally bounded algorithms.
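      The log |C| bound directly suggests a naive quasi-polynomial algorithm in this explicit model, sketched below in Python for the VC Dimension (a brute-force illustration of the counting argument, not the algorithms analyzed in the papers above): no set larger than log₂ |C| can be shattered, so it suffices to test all subsets of the universe up to that size, which takes roughly |X|^{O(log |C|)} time.

```python
from itertools import combinations

def vc_dimension(matrix):
    """matrix[x][c] = 1 iff element x belongs to concept c.
    Returns the size of the largest shattered subset of the universe."""
    n_elems, n_concepts = len(matrix), len(matrix[0])
    max_size = n_concepts.bit_length()  # VC dimension <= log2(|C|)
    best = 0
    for size in range(1, max_size + 1):
        for subset in combinations(range(n_elems), size):
            # the 0/1 patterns that the concepts realize on `subset`
            patterns = {tuple(matrix[x][c] for x in subset)
                        for c in range(n_concepts)}
            if len(patterns) == 2 ** size:  # subset is shattered
                best = size
                break  # found one of this size; try the next size
        else:
            return best  # no shattered set of this size exists
    return best
```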

      Theorems 14.1 and 14.2 extend the results of Frances and Litman [1998] and Papadimitriou and Yannakakis [1996] to show that the VC and Littlestone’s Dimensions cannot even be approximately computed in polynomial time.

       1.2.4 Signaling

      Many classical questions in economics involve extracting information from strategic agents. Lately, there has been growing interest within algorithmic game theory in signaling: the study of how to reveal information to strategic agents (see, e.g., [Cheng et al. 2015, Dughmi 2014, Dughmi et al. 2013, Emek et al. 2014, Miltersen and Sheffet 2012] and references therein). Signaling has been studied in many interesting economic and game theoretic settings. Among them, ZERO-SUM SIGNALING proposed by Dughmi [2014] stands out as a canonical problem that cleanly captures the computational nature of signaling. In particular, focusing on zero-sum games clears away issues of equilibrium selection and computational tractability of finding an equilibrium.

       Definition 1.3

      ZERO-SUM SIGNALING [Dughmi 2014]. Alice and Bob play a Bayesian zero-sum game where the payoff matrix M is drawn from a publicly known prior. The signaler Sam privately observes the state of nature (i.e., the payoff matrix), and then publicly broadcasts a signal φ(M) to both Alice and Bob. Alice and Bob Bayesian-update their priors according to φ(M) and play a Nash equilibrium of the expected game; but they receive payoffs according to the true M. Sam’s goal is to design an efficient signaling scheme φ (a function from payoff matrices to strings) that maximizes Alice’s expected payoff.
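      To unpack the definition, the following Python sketch evaluates Sam’s objective for a given scheme, under our own simplifying assumptions (a finite explicit prior and a deterministic scheme; the function names are hypothetical): each signal induces a posterior, and since the game is zero-sum, Alice’s expected payoff under that signal is simply the value of the expected payoff matrix, computable by the standard linear program.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(M):
    """Value of the zero-sum game with row-player payoff matrix M, via the
    standard LP: maximize v subject to (x^T M)_j >= v for every column j."""
    n_rows, n_cols = M.shape
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                                      # minimize -v
    A_ub = np.hstack([-M.T, np.ones((n_cols, 1))])    # v - (x^T M)_j <= 0
    A_eq = np.ones((1, n_rows + 1))
    A_eq[0, -1] = 0.0                                 # sum_i x_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_cols), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n_rows + [(None, None)])
    return res.x[-1]

def scheme_payoff(matrices, prior, signal_of):
    """Alice's expected payoff under the deterministic scheme signal_of(i),
    where nature draws matrices[i] with probability prior[i]."""
    by_signal = {}
    for i, p in enumerate(prior):
        by_signal.setdefault(signal_of(i), []).append((p, matrices[i]))
    total = 0.0
    for group in by_signal.values():
        mass = sum(p for p, _ in group)
        expected_game = sum(p * M for p, M in group) / mass  # posterior game
        total += mass * zero_sum_value(expected_game)
    return total
```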

      Dughmi’s main result proves that, assuming the hardness of the PLANTED CLIQUE problem, there is no additive Fully Polynomial Time Approximation Scheme (FPTAS) for ZERO-SUM SIGNALING. The main open question left by Dughmi [2014] is whether an additive PTAS exists. Here we answer this question in the negative: we prove that, assuming ETH [Impagliazzo et al. 2001], obtaining an additive ε-approximation (for some constant ε > 0) requires quasi-polynomial time (n^{Ω̃(log n)}). This result is tight thanks to a recent quasi-polynomial-time (n^{polylog(n)}) algorithm by Cheng et al. [2015]. Another important advantage of our result is that it replaces the hardness of PLANTED CLIQUE with a more believable worst-case hardness assumption (see, e.g., the discussion in Braverman et al. [2015]).

      The main result in this book rules out a PTAS (polynomial-time approximation scheme) for two-player Nash equilibrium. Consider a game between two players, each choosing between a large number (n) of actions. The inputs to the algorithm are two n × n matrices with entries in [0, 1]. The goal is to find, for every constant ε, an ε-approximate Nash equilibrium; i.e., a mixed strategy for each player, such that either player can gain at most ε by deviating to a different strategy.
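      The verification side of this problem is easy, which makes the hardness of the search problem all the more striking: since a best response can always be taken to be pure, the smallest ε for which given strategies form an ε-approximate Nash equilibrium reduces to a few matrix-vector products. A minimal sketch (approximation_error is a hypothetical name):

```python
import numpy as np

def approximation_error(R, C, x, y):
    """Smallest eps such that mixed strategies (x, y) form an eps-approximate
    Nash equilibrium of the game with row payoffs R and column payoffs C."""
    row_gain = np.max(R @ y) - x @ R @ y  # best pure deviation, row player
    col_gain = np.max(x @ C) - x @ C @ y  # best pure deviation, column player
    return max(row_gain, col_gain)
```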

      This has been the central open question in equilibrium computation for the past decade. There were good reasons to be hopeful: there was a quasi-polynomial-time algorithm [Lipton et al. 2003], a series of improved approximation ratios [Bosse et al. 2010, Daskalakis et al. 2007, 2009b, Kontogiannis et al. 2009, Tsaknakis and Spirakis 2008], and several approximation schemes for special cases [Alon et al. 2013, Barman 2015, Daskalakis and Papadimitriou 2009, Kannan and Theobald 2007]. Our main result settles this question in the negative:
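      For context, the quasi-polynomial-time algorithm of Lipton et al. [2003] rests on the fact that some ε-approximate equilibrium is k-uniform, i.e., uniform over a multiset of k = O(log n / ε²) actions, so enumerating all pairs of k-uniform strategies suffices. The Python sketch below illustrates this enumeration; the choice of k (including its constant) is left to the caller, and lmm_search is our hypothetical name.

```python
from itertools import combinations_with_replacement
import numpy as np

def lmm_search(R, C, eps, k):
    """Enumerate all pairs of k-uniform strategies and return the first pair
    that is an eps-approximate Nash equilibrium; with k = O(log n / eps^2)
    such a pair exists, giving an n^{O(log n / eps^2)}-time search."""
    n_rows, n_cols = R.shape
    for supp_x in combinations_with_replacement(range(n_rows), k):
        x = np.bincount(supp_x, minlength=n_rows) / k  # k-uniform strategy
        for supp_y in combinations_with_replacement(range(n_cols), k):
            y = np.bincount(supp_y, minlength=n_cols) / k
            if (np.max(R @ y) - x @ R @ y <= eps and    # row gains <= eps
                    np.max(x @ C) - x @ C @ y <= eps):  # column gains <= eps
                return x, y
    return None
```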

      Main theorem. There exists a constant ε > 0 such that, assuming ETH for PPAD, finding an ε-approximate Nash equilibrium in a two-player n × n game requires time n^{log^{1-o(1)} n}.

      We supplement Theorem 1.1 with a series of other hardness of approximation results for Nash equilibrium in related settings, further establishing the point that even approximate Nash equilibria are intractable. First, we consider different sources of complexity. Our main result shows intractability of approximate Nash equilibria when the complexity of the game arises from each player choosing among many actions. In Theorems 5.1 and 5.2, we prove that in games where each player only has two actions, the complexity can arise from a large number of players; finding an approximate Nash equilibrium in n-player, binary action games is PPAD-hard (settling another open question from Daskalakis’s thesis [Daskalakis 2008]). Alternatively, even if there are only two players and the number of actions is constant, a different source of complexity can be the players’ uncertainty; finding