Lior Silberman asked about applications of the 2-to-2 game theorem to hardness of approximation, and James Lee answered mentioning applications to vertex cover. Let me elaborate a little on vertex cover, and other matters. (Here is the previous post on the 2-to-2 game theorem whose proof was very recently completed by Khot, Minzer, and Safra.)
A vertex cover of a graph G is a set of vertices such that every edge contains a vertex in the set. VERTEX COVER is the algorithmic problem of finding such a set of vertices of minimum size. Famously, this problem is NP-complete; in fact, it is one of the problems on Karp’s original list of 21 NP-complete problems. A matching in a graph is a set of edges such that every vertex is included in at most one edge. Given a graph, there is an easy (greedy) efficient algorithm to find a matching that is maximal with respect to inclusion. Finding a maximal matching with k edges gives us at the same time a vertex cover of size 2k (the endpoints of the matching edges) and a guarantee that the minimum size of a vertex cover is at least k (every cover must contain an endpoint of each matching edge). This yields an efficient 2-approximation, and a very natural question is to find an efficient algorithm with a better approximation ratio. There is by now good evidence that this might not be possible: it is known to follow (Khot and Regev (2003)) from Khot’s unique games conjecture (Khot (2002)). I mention VERTEX COVER and MAX CUT (and the unique games conjecture and PCP) in Section 3.5 of my ICM 2018 survey paper, and the problem described below about the polytope integrality gap is taken from there.
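The greedy matching-based 2-approximation described above can be sketched as follows (a minimal illustration; the function name and the encoding of the graph as a list of edge pairs are my own choices):

```python
def vertex_cover_2_approx(edges):
    """Greedy 2-approximation for VERTEX COVER via a maximal matching.

    Scan the edges; whenever an edge has both endpoints uncovered, add
    both endpoints to the cover.  The chosen edges form a maximal
    matching of some size k, the cover has size 2k, and any vertex
    cover must contain an endpoint of each matching edge, so OPT >= k.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path 1-2-3-4.  The algorithm matches (1,2) and (3,4)
# and returns the cover {1, 2, 3, 4}; the optimum here is {2, 3}.
edges = [(1, 2), (2, 3), (3, 4)]
cover = vertex_cover_2_approx(edges)
```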
NP-hardness results for approximating vertex cover
Johan Håstad (left) and Irit Dinur (right)
In the last two decades there were three improvements on the threshold of NP-hardness for approximating vertex cover. All three were part of major breakthrough papers.
7/6 – Håstad (1.166..)
The paper is: Some optimal inapproximability results, J. of ACM 48 (2001), 798–859
10√5 – 21 – Dinur-Safra (1.3606..) and the importance of being biased
The paper is: On the hardness of approximating minimum vertex cover. Ann. of Math., 162 (2005), 439–485. The 2002 conference version was entitled “The importance of being biased.”
√2 – Khot-Minzer-Safra (1.4142..)
This is given (along with various other applications) in Appendix B of the paper.
Vertex cover and the polytope integrality gap
Given a graph and nonnegative weights on its vertices, the weighted version of vertex cover is the algorithmic problem of finding a set of vertices of minimum weight that covers all edges.
Minimize w_1x_1 + w_2x_2 + \cdots + w_nx_n, where x = (x_1, \dots, x_n) is a 0-1 vector,

Subject to: x_i + x_j ≥ 1 for every edge {i, j}.
Of course, this more general problem is also NP-complete. The linear programming relaxation allows the x_i‘s to be real numbers in the interval [0,1]. The integrality gap for general vertex cover problems is 2: given an optimal solution to the linear programming relaxation, you can just consider the set of vertices i with x_i ≥ 1/2. This set is a vertex cover (each edge constraint forces at least one endpoint to 1/2 or more), and its weight is at most twice the LP value, hence at most twice the weight of an optimal cover. (The standard relaxation of MAX CUT likewise has a well-studied integrality gap.) The integrality gap is important in many parts of optimization theory and the theory of computing. Here is a beautiful problem that I learned from Anna Karlin.
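The relax-and-round argument can be sketched in code. This is an illustrative sketch, not from the post; it assumes `scipy` is available for the LP solver, and the function name and graph encoding are mine:

```python
import numpy as np
from scipy.optimize import linprog

def lp_round_vertex_cover(n, edges, weights):
    """Solve the LP relaxation of weighted vertex cover, round at 1/2.

    minimize  sum_i w_i x_i
    s.t.      x_i + x_j >= 1 for every edge {i, j},  0 <= x_i <= 1.

    S = {i : x_i >= 1/2} covers every edge, and since rounding at most
    doubles each coordinate, w(S) <= 2 * LP value <= 2 * OPT... wait,
    more precisely w(S) <= 2 * LP value <= 2 * OPT.
    """
    A = np.zeros((len(edges), n))
    for k, (i, j) in enumerate(edges):
        A[k, i] = A[k, j] = -1.0          # encodes -(x_i + x_j) <= -1
    res = linprog(c=weights, A_ub=A, b_ub=-np.ones(len(edges)),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    cover = {i for i in range(n) if res.x[i] >= 0.5 - 1e-9}
    return cover, res.fun

# Triangle with unit weights: the LP optimum is 3/2 (all x_i = 1/2),
# the rounded cover is all three vertices, and the true optimum is 2.
cover, lp_value = lp_round_vertex_cover(3, [(0, 1), (1, 2), (0, 2)],
                                        [1.0, 1.0, 1.0])
```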
Consider the integrality gap (called the polytope integrality gap) between the covering problem and its linear programming relaxation when the graph G is fixed. In greater generality, consider a general covering problem of minimizing c^tx subject to Ax ≥ b, where A is a matrix of nonnegative integers. Next, consider the integrality gap between 0-1 solutions and real solutions in [0,1] when A and b are fixed (thus the feasible polyhedron is fixed, hence the name “polytope integrality gap”) and only c (the objective function) varies. The problem is whether, for vertex cover, for every graph G and every vector of weights, there is an efficient algorithm achieving the polytope integrality gap. The same question can be asked for the polytope integrality gap of arbitrary covering problems.
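For a tiny fixed graph one can compute the polytope integrality gap by brute force. The sketch below (names and encoding mine) uses the standard fact that the vertex cover LP has half-integral optimal solutions, so for small n the LP optimum can be found by enumerating x in {0, 1/2, 1}^n; it then takes the worst ILP/LP ratio over a supplied family of weight vectors:

```python
from itertools import product

def polytope_integrality_gap(n, edges, weight_vectors):
    """Worst ratio (integer optimum) / (LP optimum) over a family of
    weight vectors, for a *fixed* graph on vertices 0..n-1.

    Uses half-integrality of the vertex cover LP: its optimum is
    attained at some x in {0, 1/2, 1}^n (brute force; tiny n only).
    """
    def value(x, w):
        return sum(wi * xi for wi, xi in zip(w, x))

    def feasible(x):
        return all(x[i] + x[j] >= 1 for i, j in edges)

    gap = 0.0
    for w in weight_vectors:
        ilp = min(value(x, w) for x in product((0, 1), repeat=n)
                  if feasible(x))
        lp = min(value(x, w) for x in product((0, 0.5, 1), repeat=n)
                 if feasible(x))
        gap = max(gap, ilp / lp)
    return gap

# Odd cycle C5, unit weights: integer optimum 3, LP optimum 5/2
# (all x_i = 1/2), so the ratio for this weight vector is 6/5.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
gap = polytope_integrality_gap(5, edges, [[1, 1, 1, 1, 1]])
```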
Friedgut’s Junta theorem/Bourgain’s theorem/FKN/Kindler-Safra, for Grassmann graphs
For the Boolean cube (or the “Hamming scheme”) there are nice theorems by Friedgut, by Bourgain, by Kindler-Safra, by Friedgut, Kalai, and Naor, and by others asserting that if a function is approximately “low-degree” then it is approximately a “junta” or a “dictator.” These theorems play a role in PCP theory. When you replace the Hamming scheme by more general association schemes, matters become much more involved but still hopeful. (Even for the “Johnson scheme” this is a major challenge, with some recent results.) The Khot-Minzer-Safra paper and the earlier papers mentioned in my previous post prove such results for the Grassmann graphs and also show how such results are useful for PCP purposes.
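To make the “low-degree implies dictator” statements concrete in the simplest (Hamming-scheme) setting, here is a small sketch (names mine) computing the Fourier coefficients of a Boolean function on {-1,1}^n and the weight it places on levels 0 and 1. The FKN theorem says that a Boolean function with almost all of its Fourier weight on these levels must be close to a constant or a dictator:

```python
import itertools
import math

def fourier_coefficients(f, n):
    """Fourier coefficients of f: {-1,1}^n -> {-1,1}, indexed by
    subsets S of {0,..,n-1}:  hat{f}(S) = E_x[f(x) prod_{i in S} x_i]."""
    points = list(itertools.product((-1, 1), repeat=n))
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1))
    return {S: sum(f(x) * math.prod(x[i] for i in S) for x in points)
               / len(points)
            for S in subsets}

# A dictator f(x) = x_0 has all its Fourier weight on the single set
# {0}, so its weight on levels <= 1 is exactly 1.
n = 3
coeffs = fourier_coefficients(lambda x: x[0], n)
level_le1_weight = sum(c * c for S, c in coeffs.items() if len(S) <= 1)
```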
Ahlswede-Khachatrian theorem answering an old conjecture by Erdős-Ko-Rado and “Frankl’s other conjecture”
The paper by Dinur and Safra used beautifully several exciting combinatorial results and ideas. It uses analysis of Boolean functions for biased product probability distributions on the discrete cube. It also uses Erdős-Ko-Rado theory.