Serge Vlăduţ : Lattices with exponentially large kissing numbers

   (I thank Avi Wigderson for telling me about it.) Serge Vlăduţ just arxived a paper with examples of lattices in R^n whose kissing number is exponential in n. The existence of such lattices is a very old open problem, and the previous record was an example where the kissing number is n^{\log n}. (This is based on a 1959 construction of Barnes and Wall and can be found in the famous book by Conway and Sloane.) In 1997 Noga Alon found collections of spheres so that each one touches subexponentially many others. Vlăduţ's result is very exciting!

A crucial step was a result by Ashikhmin, Barg, and Vlăduţ from 2001 showing that certain algebraic-geometry binary linear codes have exponentially many minimal-weight vectors. (This result disproved a conjecture by Nati Linial and me.) The lattice result follows by applying, in a very clever way, early-1980s constructions going back to Bos, Conway, and Sloane and to Barnes and Sloane, and this requires a subtle selection of the AG codes that can be used. To quote the author: “In order to apply Constructions D and E we need specific good curves (the curves in the Garcia-Stichtenoth towers do not perfectly match our construction) and some Drinfeld modular curves perfectly suit our purposes.”

This line of ideas, with appropriate AG codes, looks to me like the most promising avenue toward an exponential lower bound for the Borsuk problem. (See, e.g., this paper and this post.) For that purpose we need AG codes X in which the maximum weight occurs exponentially often (this need not be difficult) and in which every subset of vectors that misses the maximum distance is exponentially smaller than |X|.


Posted in Combinatorics, Geometry | Tagged | 4 Comments

Peter Keevash and Eoin Long: Forbidden vector-valued intersections

Peter Keevash and Eoin Long arxived some time ago a very nice paper:  Forbidden vector-valued intersections, which settled a problem that was first posed here on the blog. Here is the abstract.

We solve a generalised form of a conjecture of Kalai motivated by attempts to improve the bounds for Borsuk’s problem. The conjecture can be roughly understood as asking for an analogue of the Frankl-Rödl forbidden intersection theorem in which set intersections are vector-valued. We discover that the vector world is richer in surprising ways: in particular, Kalai’s conjecture is false, but we prove a corrected statement that is essentially best possible, and applies to a considerably more general setting. Our methods include the use of maximum entropy measures, VC-dimension, Dependent Random Choice and a new correlation inequality for product measures.

Actually, the original motivation for the problem was related to the cap-set problem. (Item 19 in the third post on the cap-set problem and the Frankl-Rödl theorem.) In 2009 I wrote three posts (A, B, C) with some ideas on connections between the cap-set problem and the Frankl-Wilson/Frankl-Rödl theorems. The first post also discusses various ways in which the Frankl-Rödl theorem is more general than Frankl-Wilson.

Other related posts and papers: the Frankl-Wilson theorem; a 2014 paper by Keevash and Long, Frankl-Rödl type theorems for codes and permutations; posts (I, II) on the startling solution of the cap-set problem following the works of Croot, Lev, Pach, Ellenberg, and Gijswijt.


Posted in Combinatorics | Tagged , | Leave a comment

Peter Keevash: More and Easier Designs!

Peter Keevash just posted on the arxiv a couple of new papers on designs. The first is a rewritten version of his original paper, The existence of designs, with a much simpler proof. The second paper, The existence of designs II, contains several new startling results, such as the existence of resolvable hypergraph designs, large sets of hypergraph designs, and decompositions of designs by designs. This is great!

Continue reading

Posted in Combinatorics | Tagged | Leave a comment

Monday February 19: Quantum Coding and High Dimensional Expanders

Tonight at 8:00 Israel time, TCS+ features Dor Minzer on 2-to-2 games.


On Monday, February 19, we will have a special day on “Quantum Coding and High Dimensional Expanders” organized by Gilles Zemor.

Sandwiches will be served for lunch, alongside cut vegetables and pastries.

Our calendar:

Place: Room 130, IIAS, Feldman Building, Givat Ram


10:00 – 11:00 Gilles Zemor. Background on quantum computing and coding.

11:30 – 12:30 Gilles Zemor. Quantum stabilizer codes and quantum LDPC codes.

14:00 – 15:00 Dorit Aharonov. Quantum locally testable codes.

15:15 – 16:15 Gilles Zemor. Quantum LDPC codes and high-dimensional expanders.

Posted in Combinatorics, Computer Science and Optimization, Conferences, Quantum, Updates | Tagged , | 2 Comments

Alef’s Corner: There is Still a Small Gap in the Proof

Image | Posted on by | Leave a comment

My Argument Against Quantum Computers: An Interview with Katia Moskvitch on Quanta Magazine

Quanta Magazine published an interview with me about quantum computers. It was a pleasure discussing this issue with Katia Moskvitch, and I also enjoyed the photo session with David Vaaknin, who took a video of me explaining the importance of quantum noise.

For more information, here are links to my 2014 paper with Guy Kindler, my paper The Quantum Computer Puzzle in the Notices of the AMS, its arxived expanded version, and my ICM2018 paper. Last but not least, my cartoon post: If Quantum Computers are not Possible Why are Classical Computers Possible?

Posted in Computer Science and Optimization, Physics, Quantum | Tagged , , , , | 8 Comments

The Semester Break activities of the High Dimensional Combinatorics and Expanders Special Year

UPDATE: Schedule change! There is an additional workshop on February 5 and a shift in the dates of the others. Here is the calendar.

We now have a semester break at HUJI, but the special year in high dimensional combinatorics at the IIAS, led by Tali Kaufman and Alex Lubotzky, is extremely active.

The Monday program continues during the semester break (Jan 26 to March 18) in the form of “special days”, each devoted to a certain topic. (On a very basic level.)

The first special day, this Monday, January 29, is organized by Anna Gundert and Uli Wagner.
The topic is pseudo-randomness. On January 30 there is an Action Now day (less basic); on February 5: stability and property testing; on February 12: high dimensional random walks; on February 19: error-correcting codes and high dimensional expanders.

I will post updates on the later special days.

On Monday, February 12, we will have a special day on “High Dimensional Random Walks” organized by Ron Rosenthal.

Place: Room 130, IIAS, Feldman Building, Givat Ram
10:00-11:00     Dudi Mass –  Random walks by decreasing differences method 

11:30-12:30     Ron Rosenthal – Random walks on oriented cells – part I

13:45-14:45    Izhar Oppenheim –  Decomposition theorem for high dimensional random walks

15:00-16:00     Amitay Kamber –  Cutoff of random walks on hyperbolic surfaces and Ramanujan complexes

16:30-17:30     Ron Rosenthal – Random walks on oriented cells – part II

Continue reading

Posted in Combinatorics, Updates | Tagged | 1 Comment

Akshay Venkatesh Lectures at HUJI – Ostrowski’s Prize Celebration, January 24 & 25

Thursday January 25, 14:15-15:45 Ostrowski’s prize ceremony and Akshay Venkatesh’s prize lecture: Period maps and Diophantine problems

Followed by a Basic Notions lecture by Frank Calegari, 16:30-17:45: The cohomology of arithmetic groups and the Langlands program

Wednesday January 24, 17:00-18:00: Akshay Venkatesh lectures on Cryptography and the geometry of algebraic equations. The lecture is accessible also to second- and third-year undergraduate students.


Posted in Algebra and Number Theory, Computer Science and Optimization, Geometry, Updates | Tagged , , | Leave a comment

Hardness of Approximating Vertex Cover, Polytope Integrality Gap, the Ahlswede-Khachatrian Theorem, and More.

Lior Silberman asked about applications of the 2-to-2 games theorem to hardness of approximation, and James Lee answered, mentioning applications to vertex cover. Let me elaborate a little on vertex cover and other matters. (Here is the previous post on the 2-to-2 games theorem, whose proof was very recently completed by Khot, Minzer, and Safra.)

Vertex cover

A vertex cover of a graph G is a set of vertices such that every edge contains a vertex in the set. VERTEX COVER is the algorithmic problem of finding such a set of vertices of minimum size. Famously, this is an NP-complete problem; in fact, it is one of the problems in Karp’s original list of 21 NP-complete problems. A matching in a graph is a set of edges such that every vertex is included in at most one edge. Given a graph G there is an easy (greedy) efficient algorithm for finding a matching that is maximal with respect to inclusion. Finding a maximal matching with r edges gives us at the same time a vertex cover of size 2r and a guarantee that the minimum size of a vertex cover is at least r. A very natural question is to find an efficient algorithm with a better approximation ratio. There is by now good evidence that this might not be possible: it is known to follow (Khot and Regev (2003)) from Khot’s unique games conjecture (Khot (2002)). I mention VERTEX COVER and MAX CUT (and the unique games conjecture and PCP) in Section 3.5 of my ICM2018 survey paper, and the problem described below about the polytope integrality gap is taken from there.
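The greedy matching argument above is easy to sketch in code. Here is a minimal, self-contained Python illustration (the graph representation and function names are my own, not from any particular library):

```python
# A sketch (my own illustration) of the classical 2-approximation for
# VERTEX COVER: take a greedy maximal matching, output both endpoints.

def maximal_matching(edges):
    """Greedily add edges whose endpoints are both still unmatched."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def vertex_cover_2approx(edges):
    """The endpoints of a maximal matching with r edges form a vertex cover
    of size 2r, while the matching certifies every cover has size >= r."""
    cover = set()
    for u, v in maximal_matching(edges):
        cover.update((u, v))
    return cover

# Example: a 5-cycle, whose minimum vertex cover has 3 vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # indeed a cover
```

On the 5-cycle the greedy matching has 2 edges, so the algorithm returns 4 vertices against an optimum of 3, comfortably within the factor-2 guarantee.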

NP-hardness results for approximating vertex cover

Johan Håstad (left) and Irit Dinur (right)

In the last two decades there were three improvements on the threshold of NP-hardness for approximating vertex cover. All three were part of major breakthrough papers.

7/6  Håstad  (1.1666..)

The paper is: Some optimal inapproximability results, J. of ACM 48 (2001), 798–859

10√5 - 21  Dinur and Safra  (1.3606..), and the importance of being biased

The paper is: On the hardness of approximating minimum vertex cover. Ann. of Math., 162 (2005), 439–485.  The 2002 conference version was entitled  “The importance of being biased.”

√2 – Khot-Minzer-Safra (1.4142..)

This is given (along with various other applications) in Appendix B of the paper.

Vertex cover and polytope-integrality-gap

Given a graph G and nonnegative weights on its vertices, the weighted version of vertex cover is the algorithmic problem of finding a set of vertices of minimum weight that covers all edges.

Minimize w_1x_1 + w_2x_2 + \cdots + w_nx_n, where x = (x_1,x_2,\dots ,x_n) is a 0-1 vector,

Subject to: x_i + x_j \ge 1 for every edge \{i,j\}.

Of course, this more general problem is also NP-complete. The linear programming relaxation allows the x_i's to be real numbers in the interval [0,1]. The integrality gap for general vertex cover problems is 2: given a solution to the linear programming problem, you can just consider the set of vertices i with x_i \ge 1/2. This is a cover, and the ratio between its size and the optimal one is at most 2. (The integrality gap for the standard relaxation of MAX CUT is log n.) The integrality gap is important in many parts of optimization theory and the theory of computing. Here is a beautiful problem that I learned from Anna Karlin.
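As a sanity check on the factor of 2, here is a small brute-force Python sketch, entirely my own illustration; it relies on the known half-integrality of the vertex cover LP (some optimal fractional solution takes values only in {0, 1/2, 1}), so for toy graphs enumeration suffices:

```python
# Brute-force comparison of the LP and ILP optima for vertex cover on a
# tiny graph (my own illustration; by half-integrality of this LP it is
# enough to enumerate fractional solutions with values in {0, 1/2, 1}).
from fractions import Fraction
from itertools import product

def lp_and_ip_optima(n, edges):
    feasible = lambda x: all(x[i] + x[j] >= 1 for i, j in edges)
    half = [Fraction(0), Fraction(1, 2), Fraction(1)]
    lp = min(sum(x) for x in product(half, repeat=n) if feasible(x))    # fractional
    ip = min(sum(x) for x in product([0, 1], repeat=n) if feasible(x))  # integral
    return lp, ip

# The triangle: the all-1/2 solution is feasible with value 3/2, while any
# integral cover needs 2 of the 3 vertices.
lp_opt, ip_opt = lp_and_ip_optima(3, [(0, 1), (1, 2), (2, 0)])
assert lp_opt == Fraction(3, 2) and ip_opt == 2
```

Rounding the fractional triangle solution at the threshold x_i >= 1/2 selects all three vertices, a cover of size 3 <= 2 * (3/2), matching the factor-2 bound; larger cliques push the gap toward 2.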

Consider the integrality gap (called the polytope integrality gap) between the covering problem and its linear programming relaxation when the graph G is fixed. In greater generality, consider a covering problem of minimizing c^tx subject to Ax \ge b, where A is a matrix of nonnegative integers. Next, consider the integrality gap between 0-1 solutions and real solutions in [0,1] when A and b are fixed (thus the feasible polyhedron is fixed, hence the name “polytope integrality gap”) and only c (the objective function) varies. The problem is whether, for vertex cover, for every graph G and every vector of weights, there is an efficient algorithm achieving the polytope integrality gap. The same question can be asked for the polytope integrality gap of arbitrary covering problems.

Friedgut’s junta theorem/Bourgain’s theorem/FKN/Kindler-Safra, for Grassmann graphs

For the Boolean cube (or the “Hamming scheme”) there are nice theorems by Friedgut, by Bourgain, by Kindler and Safra, by Friedgut, Kalai, and Naor, and by others asserting that if a function is approximately “low-degree” then it is approximately a “junta” or a “dictator.” These theorems play a role in PCP theory. When you replace the Hamming scheme by more general association schemes, matters become much more involved but still hopeful. (Even for the “Johnson scheme” this is a major challenge, with some recent results.) The Khot-Minzer-Safra paper and the earlier papers mentioned in my previous post prove such results and also show how they are useful for PCP purposes.
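As a toy illustration of the exact (non-approximate) case behind these theorems, here is a short Python computation, entirely my own, checking that a dictator on the Boolean cube has all its Fourier weight on a single degree-1 coefficient; the real theorems concern approximate versions and richer schemes:

```python
# Walsh-Fourier expansion of a dictator f(x) = x_1 on {-1,1}^n
# (my own toy illustration of the exact case of "low-degree => dictator").
from itertools import combinations, product

n = 3
cube = list(product([-1, 1], repeat=n))
f = lambda x: x[0]  # the dictator of the first coordinate

def fourier_coeff(S):
    """hat{f}(S) = E_x[f(x) * prod_{i in S} x_i], uniform over {-1,1}^n."""
    total = 0
    for x in cube:
        c = 1
        for i in S:
            c *= x[i]
        total += f(x) * c
    return total / len(cube)

subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]
coeffs = {S: fourier_coeff(S) for S in subsets}

# All Fourier weight sits on the single degree-1 set {0}.
assert coeffs[(0,)] == 1.0
assert all(v == 0 for S, v in coeffs.items() if S != (0,))
```

The theorems quoted above say, roughly, that this picture is stable: a Boolean function whose Fourier weight is only approximately concentrated on low degrees must already be close to such a dictator or junta.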

Ahlswede-Khachatrian theorem answering an old conjecture by Erdős-Ko-Rado and “Frankl’s other conjecture”

The paper by Dinur and Safra beautifully used several exciting combinatorial results and ideas. It uses analysis of Boolean functions under biased product probability distributions on the discrete cube. It also uses Erdős-Ko-Rado theory Continue reading

Posted in Combinatorics, Computer Science and Optimization | Tagged , , , , , , , | 1 Comment

Jacob Fox, David Conlon, and Benny Sudakov: Vast Improvement of our Knowledge on Unavoidable Patterns in Words

I heard a lecture by Benny Sudakov on the remarkable paper  Tower-type bounds for unavoidable patterns in words, by David Conlon, Jacob Fox, and Benny Sudakov.  Here are the slides, and let me let the slides speak for themselves.

The problem

Continue reading

Posted in Combinatorics | Tagged , , , | Leave a comment