## Hardness of Approximating Vertex Cover, Polytope-Integrality-Gap, the Ahlswede-Khachatrian Theorem, and More.

Lior Silberman asked about applications of the 2-to-2 game theorem to hardness of approximation, and James Lee answered, mentioning applications to vertex cover. Let me elaborate a little on vertex cover and other matters. (Here is the previous post on the 2-to-2 game theorem, whose proof was very recently completed by Khot, Minzer, and Safra.)

## Vertex cover

A vertex cover of a graph $G$ is a set of vertices such that every edge contains a vertex in the set. VERTEX COVER is the algorithmic problem of finding such a set of vertices of minimum size. Famously, this is an NP-complete problem; in fact, it is one of the problems on Karp’s original list of 21 NP-complete problems. A matching in a graph is a set of edges such that every vertex is included in at most one edge. Given a graph $G$, there is an easy (greedy) efficient algorithm to find a matching that is maximal with respect to inclusion. Finding a maximal matching with $r$ edges gives us at the same time a vertex cover of size $2r$ and a guarantee that the minimum size of a vertex cover is at least $r$. A very natural question is to find an efficient algorithm with a better approximation ratio. There is by now good evidence that this might not be possible: it is known to follow (Khot and Regev (2003)) from Khot’s unique games conjecture (Khot (2002)). I mention VERTEX COVER and MAX CUT (and the unique games conjecture and PCP) in Section 3.5 of my ICM18 survey paper, and the problem described below about the polytope integrality gap is taken from there.
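The greedy 2-approximation just described can be sketched in a few lines. This is a minimal illustration (the graph representation, an edge list, is my own choice, not from the post): repeatedly pick an uncovered edge and put both its endpoints in the cover; the picked edges form a maximal matching.

```python
def greedy_vertex_cover(edges):
    """Return (cover, matching) from a greedy maximal matching.

    The cover has size 2*|matching|, and any vertex cover must contain
    at least one endpoint of each matching edge, so the cover is at
    most twice the optimum.
    """
    cover = set()
    matching = []
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            matching.append((u, v))
            cover.update((u, v))                # take both endpoints
    return cover, matching

# Example: a 5-cycle, whose minimum vertex cover has 3 vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover, matching = greedy_vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)  # it is a cover
assert len(cover) == 2 * len(matching)                  # size 2r, r = |matching|
```

On the 5-cycle this returns a cover of 4 vertices from a matching of 2 edges, against an optimum of 3, consistent with the factor-2 guarantee.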

## NP-Hardness results for approximating vertex cover

Johan Håstad (left) and Irit Dinur (right)

In the last two decades there were three improvements on the threshold of NP-hardness for approximating vertex cover. All three were part of major breakthrough papers.

### 7/6 (1.1666...) Håstad

The paper is: Some optimal inapproximability results, J. ACM 48 (2001), 798–859.

### $10\sqrt{5}-21$ (1.3606...) Dinur and Safra and the importance of being biased

The paper is: On the hardness of approximating minimum vertex cover. Ann. of Math., 162 (2005), 439–485.  The 2002 conference version was entitled  “The importance of being biased.”

## Vertex cover and polytope-integrality-gap

Given a graph $G$ and nonnegative weights on its vertices, the weighted version of vertex cover is the algorithmic problem of finding a set of vertices of minimum weight that covers all edges.

Minimize $w_1x_1 + w_2x_2 +\cdots + w_nx_n$, where $x = (x_1,x_2,\dots ,x_n)$ is a 0-1 vector,

Subject to: $x_i + x_j \ge 1$ for every edge $\{i,j\}$.

Of course, this more general problem is also NP-complete. The linear programming relaxation allows the $x_i$s to be real numbers in the interval $[0,1]$. The integrality gap for general vertex cover problems is 2: given an optimal solution to the linear programming relaxation, you can just take the set of vertices $i$ with $x_i \ge 1/2$. This is a vertex cover, and the ratio between its weight and that of the optimal cover is at most 2. (The integrality gap for the standard relaxation of MAX CUT is $\log n$.) The integrality gap is important in many parts of optimization theory and the theory of computing. Here is a beautiful problem that I learned from Anna Karlin.
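The rounding step can be made concrete. In this sketch I skip solving the LP and simply plug in the fractional optimum for a 5-cycle with unit weights, which is $x_i = 1/2$ for every vertex (a standard fact, assumed here rather than computed); the code then applies the $x_i \ge 1/2$ threshold from the paragraph above.

```python
# 5-cycle with unit weights; the LP optimum assigns 1/2 to every vertex.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
x = {i: 0.5 for i in range(5)}                     # fractional LP optimum (assumed)

assert all(x[u] + x[v] >= 1 for u, v in edges)     # LP-feasible

cover = {i for i in x if x[i] >= 0.5}              # threshold rounding
assert all(u in cover or v in cover for u, v in edges)  # a valid vertex cover

lp_value = sum(x.values())   # 2.5, a lower bound on the optimum
# Rounded cover has weight 5, the true optimum is 3: the rounded cover
# costs at most twice lp_value, hence at most twice the optimum.
print(len(cover), lp_value)
```

Here the odd cycle also shows the gap is essentially tight: the LP value 2.5 sits well below the integral optimum 3, and the ratio $2r/(r+1)$ on a $(2r+1)$-cycle tends to 2.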

Consider the integrality gap (called the polytope integrality gap) between the covering problem and its linear programming relaxation when the graph $G$ is fixed. In greater generality, consider a general covering problem of minimizing $c^t x$ subject to $Ax \ge b$, where $A$ is a matrix of nonnegative integers. Next, consider the integrality gap between 0-1 solutions and real solutions in $[0,1]^n$ when $A$ and $b$ are fixed (thus the feasible polyhedron is fixed, hence the name “polytope integrality gap”) and only $c$ (the objective function) varies. The problem is whether, for vertex cover, for every graph $G$ and every vector of weights, there is an efficient algorithm achieving the polytope integrality gap. The same question can be asked for the polytope integrality gap of arbitrary covering problems.

## Friedgut’s Junta theorem/Bourgain’s Thm/FKN/Kindler-Safra, for Grassmann Graphs

For the Boolean cube (or the “Hamming scheme”) there are nice theorems by Friedgut, by Bourgain, by Kindler-Safra, by Friedgut, Kalai, and Naor, and by others asserting that if a function is approximately “low-degree” then it is approximately a “junta” or a “dictator.” These theorems play a role in PCP theory. When you replace the Hamming scheme by more general association schemes, matters become much more involved but still hopeful. (Even for the “Johnson scheme” this is a major challenge, with some recent results.) The Khot-Minzer-Safra paper and the earlier papers mentioned in my previous post prove such results and also show how such results are useful for PCP purposes.

## Ahlswede-Khachatrian theorem answering an old conjecture by Erdős-Ko-Rado and “Frankl’s other conjecture”

The paper by Dinur and Safra used beautifully several exciting combinatorial results and ideas. It uses analysis of Boolean functions for biased product probability distributions on the discrete cube. It also uses Erdős-Ko-Rado theory and a result that I will briefly mention. (See also this post on extremal combinatorics, this post on Levon Khachatrian, and this polymath10 post on a more general problem.)

What is the largest size $f(n,k,r)$ of a family of $k$-subsets from an $n$-element set such that every two sets in the family have at least $r$ elements in common?

For $r=1$ this is the Erdős-Ko-Rado theorem that asserts that when $n \ge 2k$ the maximum is ${{n-1} \choose {k-1}}$.

Here is an example: $n=8, k=4$, and $r=2$.

You may think that the best thing to do is to take all 4-subsets containing ‘1’ and ‘2’. There are 15 such sets. But if you take all 4-subsets containing at least 3 elements from $\{1,2,3,4\}$ you get 17 sets such that every two of them have at least two elements in common. Erdős, Ko and Rado posed a conjecture for the case $n=4r, k=2r$, and Peter Frankl posed a conjecture for every $n$, $k$, and $r$. Frankl’s conjecture was settled by Ahlswede and Khachatrian. The proof is beautiful and quite involved.
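The $n=8$, $k=4$, $r=2$ example above is small enough to check by brute force. This sketch builds both families, the “star” of 4-sets containing $\{1,2\}$ and the family of 4-sets with at least 3 elements in $\{1,2,3,4\}$, counts them, and verifies the pairwise-intersection property:

```python
from itertools import combinations

n, k, r = 8, 4, 2
subsets = [frozenset(s) for s in combinations(range(1, n + 1), k)]

# All 4-subsets of [8] containing both 1 and 2.
star = [s for s in subsets if {1, 2} <= s]
# All 4-subsets of [8] with at least 3 elements from {1,2,3,4}.
frankl = [s for s in subsets if len(s & {1, 2, 3, 4}) >= 3]

def intersecting(family, r):
    """True if every two sets in the family share at least r elements."""
    return all(len(a & b) >= r for a, b in combinations(family, 2))

assert intersecting(star, r) and intersecting(frankl, r)
print(len(star), len(frankl))   # 15 vs 17
```

Both families are valid, but the second beats the star, matching the counts in the text ($\binom{6}{2}=15$ versus $\binom{4}{3}\binom{4}{1}+\binom{4}{4}=17$).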

Ahlswede and Khachatrian’s theorem: For every $n,k,r$, there is $m$ such that $f(n,k,r)$ is attained by the family $F(n,k,r,m)= \{S \subset [n]: |S|=k,\ |S \cap [r+2m]|\ge r+m\}$.

See also Filmus’ paper Ahlswede-Khachatrian Theorems: Weighted, Infinite, and Hamming, and a stability result in the paper by Ellis, Keller, and Lifshitz, Stability for the Complete Intersection Theorem, and the Forbidden Intersection Problem of Erdős and Sós (which we mentioned in this post). (Correction:) The interesting paper by Balogh and Mubayi, A new short proof of a theorem of Ahlswede and Khachatrian, does not give a new proof of the “big” theorem of Ahlswede and Khachatrian but rather of another consequence of their main result.

It was a pleasant surprise to see this theorem playing a role in the paper by Irit and Muli.

This entry was posted in Combinatorics, Computer Science and Optimization.

### 3 Responses to Hardness of Approximating Vertex Cover, Polytope-Integrality-Gap, the Ahlswede-Khachatrian Theorem, and More.

1. Yuly Shipilevsky says:

Meanwhile, my P vs NP solution

http://vixra.org/abs/1801.0100

2. Gil Kalai says:

An update on the polytope integrality gap: I mentioned in the post the beautiful problem of whether, for vertex cover, for every graph $G$ and every vector of weights, there is an efficient algorithm achieving the “polytope integrality gap”. Anna Karlin kindly informed me that Mohit Singh got in touch with her after seeing the conjecture on my blog and pointed out that approximating the polytope integrality gap for vertex cover is unlikely to be possible because of its relationship to the fractional chromatic number. Mohit noted that the fractional chromatic number is hard to approximate even when it is constant, assuming the UGC.