Here are our choices. **What would you add? What would you delete?**

Discovery no. 1: The square root of 2 is not a rational number.

Discovery no. 2(A): Euclidean Geometry

Discovery no. 2(B): Non-Euclidean Geometry

Discovery no. 3: The Abel–Galois theorem: there is no solution by radicals to the general equation of degree five and above.

Discovery no. 4(A): Differential and integral calculus (Isaac Newton, Gottfried Leibniz, 17th century).

Discovery no. 4(B): The analysis of complex functions (Augustin-Louis Cauchy, Bernhard Riemann, 19th century).

Discovery no. 5(A): There are various kinds of infinity. For example, there are more real numbers than natural numbers.

Discovery no. 5(B): Gödel’s incompleteness theorem: A sufficiently rich mathematical theory includes true statements that cannot be proven within it.

Discovery no. 6(A): The Gauss elimination method for solving systems of linear equations.

Discovery no. 6(B): Linear programming and the Simplex algorithm for solving it.

Discovery no. 7: The Bell Curve and the Central Limit Theorem

Discovery no. 8: The Prime Number Theorem.

Discovery no. 9 (a): The theory of computability. Undecidable problems.

Discovery no. 9 (b): Computational Complexity theory. The theory of NP-complete problems.

Discovery no. 10: Additional paradigms of mathematical research beyond the paradigm of theorem/proof. Numerical methods, simulations, scientific computation and the development of mathematical models.


This is a belated report on a remarkable breakthrough from 2017. The paper is Short Presburger arithmetic is hard, by Nguyen and Pak.

**Danny Nguyen**

Algorithmic tasks are often intractable. But there are a few miracles where efficient algorithms exist: Solving systems of linear equations, linear programming, testing primality, and solving integer programming problems when the number of variables is bounded. The last miracle is a historic 1983 theorem of Hendrik Lenstra (Here is the paper) and it is the starting point of this post.

**Lenstra’s theorem:** Consider a system of linear inequalities

$Ax \le b$,

where $x$ is a vector of $n$ variables, $A$ is an integral $k$ by $n$ matrix and $b$ is an integral vector of length $k$.

Let $n$ be a fixed integer. There is a polynomial time algorithm to determine if the system has an integral solution.
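To make the setting concrete, here is a naive brute-force feasibility check over a bounded box (the function name and the box bound are my own illustration; this search is exponential, whereas Lenstra’s algorithm decides feasibility in polynomial time for a fixed number of variables):

```python
from itertools import product

def has_integral_solution(A, b, box=10):
    """Naive search for an integral x with A x <= b over the box |x_i| <= box.
    Only an illustration of the problem Lenstra's algorithm solves efficiently."""
    n = len(A[0])
    for x in product(range(-box, box + 1), repeat=n):
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            return True
    return False

# 2x + 2y <= 1 together with -2x - 2y <= -1 forces 2(x + y) = 1: impossible over the integers.
print(has_integral_solution([[2, 2], [-2, -2]], [1, -1]))  # False
# x + y <= 1 together with -x - y <= -1 forces x + y = 1: e.g. (0, 1) works.
print(has_integral_solution([[1, 1], [-1, -1]], [1, -1]))  # True
```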

Of course, the full Integer Programming problem, when $n$ is part of the input, is **NP-complete**. This problem already appeared (second in Karp’s list!) in the Mayflower of NP-complete problems – Karp’s paper.

Sasha Barvinok famously showed in 1993 that, for a fixed number of variables, even counting the number of solutions is in **P**. (Barvinok utilized the short generating function approach pioneered by Brion, Vergne and others.)

Next, I want to describe an amazing 1990 theorem of Ravi Kannan.

Kannan’s theorem considers formulas with one quantifier alternation in the Presburger arithmetic and it asserts that when the number of variables is fixed, there is a polynomial time algorithm to decide if the formula is satisfiable.

(Here is a free version of Kannan’s paper.) Also here the counting problems were tackled with great success. Barvinok and Kevin Woods remarkably showed how to count projections of integer points in a (single) polytope in polynomial time, and subsequently Woods extended this approach to general Presburger expressions Φ with a fixed number of inequalities!

An important strengthening was achieved by Friedrich Eisenbrand and Gennady Shmonin in the 2008 paper Parametric integer programming in fixed dimension. See also the survey chapter by Barvinok Lattice points and lattice polytopes.

You can find the formulation of Kannan’s theorem in full generality a little further down, but let me present now a special case related to the famous Frobenius coin problem. (See this post on GLL for more on Presburger arithmetic.)

Given *k* coins with integral values, the Frobenius coin problem is to determine the largest integer that cannot be expressed as a nonnegative integer combination of the values of the coins. (See also this post on GLL.)

**Theorem (Kannan):** There is a polynomial time algorithm to solve the Frobenius coin problem for every fixed number of coins.
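For intuition, here is the naive dynamic-programming computation of the Frobenius number (the function name and the search limit are my own; this runs in time exponential in the bit size of the coin values, in contrast with Kannan’s polynomial-time algorithm for a fixed number of coins):

```python
from functools import reduce
from math import gcd

def frobenius_number(coins, limit=10000):
    """Largest integer not representable as a nonnegative integer combination
    of the coin values (well defined only when the gcd of the coins is 1)."""
    assert reduce(gcd, coins) == 1
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for n in range(1, limit + 1):
        reachable[n] = any(n >= c and reachable[n - c] for c in coins)
    # for a large enough limit, every unreachable value lies below it
    return max(n for n in range(limit + 1) if not reachable[n])

print(frobenius_number([3, 5]))      # 7, matching the two-coin formula ab - a - b
print(frobenius_number([6, 9, 20]))  # 43, the famous "Chicken McNugget number"
```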

The question of how theory meets practice for the problems discussed in this post is very interesting, but we will not discuss it. Let me remark that Herb Scarf (who along with Kannan played a role in B-L (Before Lenstra) developments) offered another approach to the solution of the Frobenius coin problem and related IP (Integer Programming) problems, based on his theory of maximal lattice-free convex bodies. See this related post.

Given the result of Kannan and later that of Barvinok and Woods, many people expected that Presburger arithmetic would also be in polynomial time for two alternations, or even for any other fixed number of alternations. Nguyen and Pak proved that the problem is **NP-complete** already for two quantifier alternations! Here is the link to the paper Short Presburger arithmetic is hard. Igor Pak’s homepage has a few other related papers.

Let me reproduce here Sasha Barvinok’s MathSciNet featured review of Nguyen and Pak’s paper, which tells the story better than I could.

Presburger arithmetic allows integer variables, integer constants, Boolean operations (∧, ∨, ¬), quantifiers (∃, ∀), equations and inequalities (=, <, >, ≤, ≥), addition and subtraction (+, −) and multiplication by integer constants. It does not allow multiplication of variables (if we allow multiplication of variables, we get Peano arithmetic).

Geometrically, a quantifier-free formula of Presburger arithmetic describes the set of integer points in a Boolean combination of rational polyhedra (that is, in the set obtained from finitely many rational polyhedra by taking unions, intersections and complements). Similarly, a formula of Presburger arithmetic with existential quantifiers only describes the set of integer points obtained from the set of integer points in a Boolean combination of polyhedra by a projection along some coordinates.

Unlike Peano arithmetic, Presburger arithmetic is decidable. Here the authors zoom in on the computational complexity of Presburger arithmetic, once the combinatorial complexity of the formula is bounded in advance. If we fix the number of variables, the validity of a formula with no quantifier alternations (that is, of the type ∃…∃ Φ(x) or of the type ∀…∀ Φ(x)) can be established in polynomial time by Lenstra’s integer programming algorithm [see H. W. Lenstra Jr., Math. Oper. Res. 8 (1983), no. 4, 538–548; MR0727410].

For a fixed number of variables, formulas with one quantifier alternation (∃…∃ ∀…∀ Φ(x)) can also be solved in polynomial time, as shown by R. Kannan [in Polyhedral combinatorics (Morristown, NJ, 1989), 39–47, DIMACS Ser. Discrete Math. Theoret. Comput. Sci., 1, Amer. Math. Soc., Providence, RI, 1990; MR1105115]. The decision procedure can be characterized as a polynomial time algorithm for parametric integer programming.

Suppose now that we fix the number of variables and the number of Boolean operations in advance (and hence get what is called a short formula of Presburger arithmetic). Thus the only parameters of the formula are the numerical values of the constants in the formula. The authors show that deciding validity becomes **NP-complete** if one allows two quantifier alternations. Remarkably, they present an example of a formula

∃z ∀y ∃x Φ(x, y, z)

with an **NP-complete** decision problem, even though Φ contains at most 10 inequalities.

Another remarkable example is an **NP-complete** decision problem for a formula of the type

∃z ∀y ∃x : *Ax + By + Cz ≤ b,*

with at most 24 inequalities.

As the number of quantifier alternations is allowed to increase, the computational complexity in the polynomial hierarchy also moves up. The authors also describe the computational complexity of corresponding counting problems.

The proof is very clever; it uses the continued fraction expansion of a rational number to encode a growing family of intervals, with the help of which the authors build an **NP-complete** problem.


I am very happy to report that yesterday, March 19, 2019, Karen Uhlenbeck was awarded the 2019 Abel prize “for her pioneering achievements in geometric partial differential equations, gauge theory and integrable systems, and for the fundamental impact of her work on analysis, geometry and mathematical physics.” The full citation for the prize and additional material may be found on the Abel Prize page. (Additional sources: I, II, III, IV, V, VI, VII, VIII.)

We move on to the original post:

We start with the classical question and go on with a new version contributed by my son Lior.

**Update:** A few brief comments on the original problem and Lior’s survivor variant are added to the end of the post. (Spoiler warning)

The Monty Hall problem is a famous brain teaser, first posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975. It became famous as a question from a reader’s letter quoted in Marilyn vos Savant’s “Ask Marilyn” column in Parade magazine in 1990.

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

In the current season of the Israeli Survivor television game show, each participant receives a case that contains a stone. One of the stones contains a golden coin. Every week one participant is voted off the game. But if, after being voted off, the participant discovers the golden coin inside his stone, then he stays in the game. In our puzzle, the host gives the best player of the round an opportunity to switch his stone with another stone.

Suppose you are in the Survivor game and there are three remaining participants. One of the other participants is voted off. He opens his case, smashes the stone, and does not find the gold coin. Therefore he is out. The host gives you an opportunity: “Do you want to replace your stone with the stone of the other remaining participant?” Is it to your advantage to switch the stones?

**Lior, who contributed the question, is our youngest son. Here in the picture, from left to right: Hagai (our second child), my wife Mazi, Ilan, Lior, Eran (Neta’s husband), Yoav, and Neta (our first child). Ilan and Yoav are Neta and Eran’s children.** Picture taken by Felix S. Rettberg. ©Felix S. Rettberg

After you have answered the polls, consider this: if your answers to the two polls are different, can you put your finger on the difference between the two scenarios?

An earlier survivor post: Impossibility result for “survivor”.

An overwhelming number of people – 96 percent – gave the right answer to the first problem: it is beneficial to switch. (The probability of winning the car goes up from 1/3 to 2/3.)

This answer is based on various assumptions about the strategy of the host. The problem (as given in Wikipedia) only says: “**the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat.**” The precise strategy of the host is not entirely clear from this formulation of the problem. If, for example, the host’s choice to open another door depends on whether there is a car behind door No. 1, switching may not be a good idea. (For example, if the host opens another door only when there is a car behind door No. 1, you had better not switch.) If the host randomly chooses either door No. 2 or door No. 3, then it makes no difference whether to switch or not.

The problem gives an impression that this is a repeated situation. (And various people have made it explicit.) The game show is played every week, the host always opens one of the other doors, and the door he opens always has a goat behind it, and the host always gives the opportunity to switch. In this case it is indeed beneficial to switch.

Also, 80% of participants gave the right answer to Lior’s variant. In this case switching **does not** make a difference.

When you change one word in the original problem – “**the host, who does not know what’s behind the doors, opens another door, say No. 3, which has a goat**” – then for this version it makes no difference whether or not to switch. However, this version does not specify what would happen if door No. 3 had the car. (The host would be fired?) This difficulty does not arise in Lior’s survivor version.
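The difference between the two scenarios is easy to see in a quick Monte Carlo sketch (the function names and modeling conventions are my own; in the survivor variant the revealed stone is empty by chance, so we condition on that event rather than on a knowing host):

```python
import random

def monty_hall(switch, trials=100_000):
    """Classic Monty Hall: the host knows where the car is, always opens a
    goat door different from your pick, and always offers the switch."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # the host opens some door that is neither your pick nor the car;
        # which of the eligible doors he opens does not affect the odds
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

def survivor(switch, trials=100_000):
    """Lior's variant: the voted-off player's stone turns out to be empty by
    chance, so we simply condition on that event."""
    wins = tries = 0
    while tries < trials:
        coin = random.randrange(3)  # stones: 0 is yours, 1 is voted off, 2 remains
        if coin == 1:               # the voted-off player found the coin:
            continue                # this round does not match the story
        tries += 1
        pick = 2 if switch else 0
        wins += pick == coin
    return wins / trials

print(monty_hall(switch=True))   # close to 2/3: switching helps
print(survivor(switch=True))     # close to 1/2: switching does not matter
```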

In this post I would like to report on two recent related breakthroughs, one by Andreas Holmsen and Dong-Gyu Lee, and the other by Andreas Holmsen. These developments are related to problems that are described in Problems for Imre Bárány’s birthday, and to my earlier 2004 Oberwolfach report “Expectations from commutative algebra“. While at it I will mention some earlier important papers by Pauline Sarrabezolles, by Shay Moran and Amir Yehudayoff, and by Boris Bukh.

You can find the statements of Helly’s theorem, Radon’s theorem, and the colorful Carathéodory theorem in this post. The colorful Helly theorem was first observed by Lovász. The fractional Helly theorem was first proved by Katchalski and Liu.

**The colorful Helly theorem:** Let $\mathcal F_1, \mathcal F_2, \ldots, \mathcal F_{d+1}$ be nonempty families of convex sets in $\mathbb R^d$. If for every choice of a *transversal* – one set from every family – there is a point in common to all the chosen sets, then for one of the families, there is a point in common to all sets in the family.

[Sorry, I got it wrong at first; thanks to Shakhar Smorodinsky for alerting me.]

**The fractional Helly theorem:** For every $\alpha > 0$ there is $\beta > 0$ such that if $F_1, F_2, \ldots, F_n$ are convex sets in $\mathbb R^d$, and at least an $\alpha$ fraction of the $(d+1)$-tuples of the sets have a point in common, then a fraction of at least $\beta$ of the sets have a point in common.

I should mention that these theorems do not follow combinatorially from Helly’s theorem itself. They manifest deep properties of hypergraphs coming from geometry, and it is an important question to give general abstract properties, combinatorial or topological, that lead to such properties.

The result of Holmsen and Lee asserts roughly that the fractional Helly property follows combinatorially from Radon’s theorem! Holmsen’s result asserts that the fractional Helly property follows combinatorially from the colorful Helly theorem!

Radon numbers and the fractional Helly theorem

Authors: Andreas Holmsen, Dong-Gyu Lee

**Abstract:** A basic measure of the combinatorial complexity of a convexity space is its Radon number. In this paper we show a fractional Helly theorem for convexity spaces with a bounded Radon number, answering a question of Kalai. As a consequence we also get a weak epsilon-net theorem for convexity spaces with a bounded Radon number. This answers a question of Bukh and extends a recent result of Moran and Yehudayoff.

Large cliques in hypergraphs with forbidden substructures

**Abstract:** A result due to Gyárfás, Hubenko, and Solymosi (answering a question of Erdős) states that if a graph $G$ on $n$ vertices does not contain $K_{2,2}$ as an induced subgraph yet has at least $c\binom{n}{2}$ edges, then $G$ has a complete subgraph on at least $\frac{c^2}{10}n$ vertices. In this paper we suggest a “higher-dimensional” analogue of the notion of an induced $K_{2,2}$, which allows us to generalize their result to k-uniform hypergraphs. Our result also has an interesting consequence in discrete geometry. In particular, it implies that the fractional Helly theorem can be derived as a purely combinatorial consequence of the colorful Helly theorem.

Let me mention that Gyárfás-type problems in graph theory, and a little on the connection with fractional Helly, were mentioned in the post When a few colors suffice. On this front major advances have happened in a series of recent papers and I will discuss them in a later post. (But, for now, see Alex Scott and Paul Seymour’s A survey of χ-boundedness, and Proof of the Kalai–Meshulam conjecture by Maria Chudnovsky, Alex Scott, Paul Seymour, and Sophie Spirkl.)

How many transversals can we guarantee in colorful Carathéodory? This is a very interesting problem on its own, and it may also be related to Sierksma’s conjecture on the number of Tverberg partitions that we mentioned in this recent post (and in this old one).

The colourful simplicial depth conjecture (2014)

Pauline Sarrabezolles

**Abstract:**

Given $d+1$ sets of points, or colours, $\mathbf S_1, \ldots, \mathbf S_{d+1}$ in $\mathbb R^d$, a colourful simplex is a set $T \subseteq \bigcup_i \mathbf S_i$ such that $|T \cap \mathbf S_i| \le 1$, for all $i \in \{1, \ldots, d+1\}$. The colourful Carathéodory theorem states that, if $\mathbf 0$ is in the convex hull of each $\mathbf S_i$, then there exists a colourful simplex $T$ containing $\mathbf 0$ in its convex hull. Deza, Huang, Stephen, and Terlaky (Colourful simplicial depth, Discrete Comput. Geom., 35, 597–604 (2006)) conjectured that, when $|\mathbf S_i| = d+1$ for all $i \in \{1, \ldots, d+1\}$, there are always at least $d^2+1$ colourful simplices containing $\mathbf 0$ in their convex hulls. We prove this conjecture via a combinatorial approach.

On weak ε-nets and the Radon number (2017)

Shay Moran, Amir Yehudayoff

**Abstract:** We show that the Radon number characterizes the existence of weak ε-nets in separable convexity spaces (an abstraction of the Euclidean notion of convexity). The construction of weak ε-nets when the Radon number is finite is based on Helly’s property and on metric properties of VC classes. The lower bound on the size of weak ε-nets when the Radon number is large relies on the chromatic number of the Kneser graph. As an application, we prove a boosting-type result for weak ϵ-nets.

We already mentioned Boris Bukh’s beautiful counterexample to Eckhoff’s partition conjecture in the 2010 paper Radon partitions in convexity spaces. Let me mention a general object used by Boris which is of much independent interest.

Let $X$ be a set of points in $\mathbb R^d$. For $S \subseteq X$ let $C(S) = \operatorname{conv}(S)$, and let $N(X)$ be the nerve of all the sets $C(S)$. $N(X)$ is a simplicial complex whose vertices are indexed by all nonempty subsets of $X$. The abstract setting we have here is that of a simplicial complex whose vertices are indexed by all subsets of $X$, and such that a collection of vertices which represents a family of subsets with nonempty intersection must be a face of the complex.

It is a notoriously difficult open problem which integers can be written as the sum of three integer cubes. Such integers cannot be 4 or 5 modulo 9, since integer cubes are 0, 1, or −1 modulo 9. Euler proved that 0 has no nontrivial such representation. It can be conjectured that for every other integer such a representation does exist. For more on the problem see this 1996 posting by Noam Elkies.
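The mod 9 obstruction mentioned above can be verified in a few lines (a sketch; residues of cubes are periodic with period 9, so checking one period suffices):

```python
# cubes modulo 9 take only the values 0, 1 and 8 (i.e. 0 and ±1)
cubes_mod9 = {(x ** 3) % 9 for x in range(9)}
# hence sums of three cubes miss exactly the residues 4 and 5 modulo 9
sums_mod9 = {(a + b + c) % 9
             for a in cubes_mod9 for b in cubes_mod9 for c in cubes_mod9}
print(sorted(cubes_mod9))                 # [0, 1, 8]
print(sorted(set(range(9)) - sums_mod9))  # [4, 5]
```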

**33** was the smallest integer for which this was not known, and **Andrew Booker** in Bristol just proved that

33 = 8866128975287528³ + (−8778405442862239)³ + (−2736111468807040)³.

I heard it from Alon Amit on Quora and FB who saw it in Tim Browning’s homepage. Alon’s Quora post contains more information on the problem, and a link to Bjorn Poonen’s paper on undecidability in number theory, which opens with the problem. The earlier record (30) was achieved in 1999 by Michael Beck, Eric Pine, Wayne Tarrant, and Kim Yarbrough Jensen, following an approach suggested by N. Elkies. (Here is the paper.)
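For small values of *k*, representations can often be found by a toy brute-force search (the function name and bound are my own; this is nothing like the Elkies or Booker algorithms, which reach astronomically larger ranges):

```python
def sum_of_three_cubes(k, bound=100):
    """Search for x^3 + y^3 + z^3 = k with |x|, |y| <= bound,
    looking z up in a precomputed table of cubes."""
    cubes = {n ** 3: n for n in range(-2 * bound, 2 * bound + 1)}
    for x in range(-bound, bound + 1):
        for y in range(x, bound + 1):
            z3 = k - x ** 3 - y ** 3
            if z3 in cubes:
                return (x, y, cubes[z3])
    return None

x, y, z = sum_of_three_cubes(29)
print(x ** 3 + y ** 3 + z ** 3)  # 29: any returned triple is a valid representation
print(sum_of_three_cubes(32))    # None: 32 is 5 mod 9, so no representation exists
```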

_______

Let *k* be a positive integer with *k* ≢ ±4 (mod 9). Then Heath-Brown [HB92] has conjectured that there are infinitely many triples *(x, y, z)* ∈ ℤ³ such that

(1) x³ + y³ + z³ = k.

Various numerical investigations of (1) have been carried out, beginning as early as 1954 [MW55]; see [BPTYJ07] for a thorough account of the history of these investigations up to 2000. The computations performed since that time have been dominated by an algorithm due to Elkies [Elk00]. The latest that we are aware of is the paper of Huisman [Hui16], which determined all solutions to (1) with *k* ≤ 1000 and max{|x|, |y|, |z|} ≤ 10¹⁵. In particular, Huisman reports that solutions are known for all but 13 values of *k* ≤ 1000:

*(2) 33, 42, 114, 165, 390, 579, 627, 633, 732, 795, 906, 921, 975. *

Elkies’ algorithm works by ﬁnding rational points near the Fermat curve using lattice basis reduction; it is well suited to ﬁnding solutions for many values of *k* simultaneously. In this paper we describe a diﬀerent approach that is more eﬃcient when *k* is ﬁxed. It has the advantage of provably ﬁnding all solutions with a bound on the smallest coordinate, rather than the largest as in Elkies’ algorithm. This always yields a nontrivial expansion of the search range since, apart from ﬁnitely many exceptions that can be accounted for separately, one has

max{|x|, |y|, |z|} > ∛2 · min{|x|, |y|, |z|}.

_______

And here are: a 2015 Numberphile video with Tim Browning about the problem; a Reddit discussion; a 2013 MO question; The Aperiodical.

And I met Tim Browning last fall at IST, Austria! Tim told me about his work with Will Sawin and others applying the circle method (and other methods from analytic number theory) to algebraic geometry. See the paper by Browning and Sawin A geometric version of the circle method and subsequent papers.

Consider a Brownian motion in three-dimensional space. What is the largest number of points on the path described by the motion which form an arithmetic progression? (Namely, points $x_1, x_2, \ldots, x_n$ on the path such that all the differences $x_{i+1} - x_i$ are equal.)

A 2-D picture. In two dimensions the answer is [~~infinity: oops, my mistake~~] that there is an AP of arbitrarily large length almost surely. Source: Raissa D’Souza

The product ranges over all primes. In other words,

Just heard it from Avinoam Mann.


Philippe Flajolet 1948-2011

I am happy to forward the announcement of two free online courses (MOOCs) by Bob Sedgewick: *Analysis of Algorithms* and *Analytic Combinatorics*.

The *Analysis of Algorithms* page provides access to online lectures, lecture slides, and assignments for use in teaching and learning from the book **An Introduction to the Analysis of Algorithms** by Sedgewick and Flajolet.

The *Analytic Combinatorics* page provides access to online lectures, lecture slides, and assignments for use in teaching and learning from the (famous) book **Analytic Combinatorics** by Flajolet and Sedgewick.

Both courses are appropriate for use by instructors as the basis for a “flipped” class on the subject, or for self-study by individuals. Are these courses for you? The short answer is YES, but for a longer answer below are introductory slides for both courses.

To read more on Philippe Flajolet, see Wikipedia and the obituary on GLL.

My friend Dan Romik wrote an impressive paper, Orthogonal polynomial expansions for the Riemann xi function, about expansions of the Riemann xi function. (I thank Dan for telling me about it.) Dan kindly agreed to write a blog post about his work in May 2019.

Dan Romik (click once to enlarge, twice to enlarge further)

Let me mention also another impressive paper by Michael Griffin, Ken Ono, Larry Rolen and Dan Zagier: Jensen polynomials for the Riemann zeta function and other sequences. (I learned about it from Ken Ono over Facebook. Ken writes: “After almost 2 years and 83 drafts, I am happy to share this paper on the Riemann Hypothesis. Amazingly, this paper has been recommended for publication 6 days after submission!” ). Here is a MO question regarding this development.

It is now the **10-year anniversary of polymath projects** and let me belatedly mention a successful project, polymath15, that took place on Terry Tao’s blog. The problem is related to the Hermite expansion of the zeta function. Roughly speaking, you can apply a noise operator on zeta that causes higher degrees to decay exponentially. (The noise can be “negative” and then the higher degrees explode exponentially.) The higher the amount *t* of noise, the easier the assertion of the Riemann Hypothesis (RH) becomes. Let $\Lambda$, the de Bruijn–Newman constant, be the smallest amount of noise under which the assertion of RH is correct. It was conjectured that $\Lambda \ge 0$, namely that the assertion of RH fails when you apply negative noise, no matter how small. This follows from known conjectures on the spacing between zeroes of the zeta function, since when the assertion of RH is known for some level (positive or negative) of noise, then with a higher level of noise the spacing between zeroes becomes more boring. The conjecture was settled by Brad Rodgers and Terry Tao. (See this blog post by Terry Tao.)

The task of polymath15 (proposed here, launched here, last (11th) post here) was to use current analytic number theory technology, which has already yielded information on the zeroes of the zeta function (in the direction of RH and related conjectures), to deduce sharper upper bounds for $\Lambda$ than the previously known value of 0.5. The collaborative efforts led to

**Theorem (Polymath15):** $\Lambda \le 0.22$.

A remarkable success! (Of course, proving RH was not an objective of polymath15.) There were also interesting comments of general nature regarding the RH and other big conjectures like this nice comment by anonymous.

It is even possible that $\Lambda = 0$ (i.e. RH is true) and for each $t > 0$ there is a simple Hermitian operator (possibly related to a probabilistic interpretation of $H_t$) having the zeros of $H_t$ as its spectrum (i.e. realizing the Hilbert–Polya approach for $H_t$) but no such operator for $t = 0$ (which may explain the failed attempts so far to find it). Since the zeros of $H_t$ converge to those of $H_0$, it is possible that there is a proof of RH via the functions $H_t$ (whose zeros are more regularly distributed) but not via direct attack on $H_0$ itself!

Such a “homotopic approach” to study $H_0$ via $H_t$ recalls similar methods used in the past to solve big problems (e.g. de Branges’ solution – via Loewner’s PDE with flow parameter – of the Bieberbach conjecture, and Perelman’s solution – via Hamilton’s Ricci flow PDE – of the Poincaré conjecture).


**J. Scott Provan** (site)

*The following post was kindly contributed by Karim Adiprasito. (Here is the link to Karim’s paper.) Update: See Karim’s comment on the ideas needed to extend the proof to the general case. See also, in the comment section, references to papers by Balin and Fleming and by Jensen and Yu.*

So, Gil asked me to say a little bit about my proof of the *g*-conjecture (and some other conjectures in discrete geometry) on his blog, and since he bought me many coffees to explain it to him (or if he is to be believed, the department paid), I am happy to oblige.

So, I want to explain a special, but critical, case of the proof. It contains some shadow of the core ideas necessary, but needs some more tricks that I will remark on afterwards.

Also, I wanted to take this opportunity to mention something marvelous that I learned from Leonid Gurvits recently that increased my own understanding of one of the key tricks used. That trick is the following cool lemma.

**Leonid Gurvits**

**PERTURBATION LEMMA:** Consider two linear maps

$\alpha, \beta : V \longrightarrow W$

of two real vector spaces $V$ and $W$. Assume that

$\beta(\ker \alpha) \cap \operatorname{im} \alpha = 0.$

Then a generic linear combination $\gamma$ of $\alpha$ and $\beta$ has kernel

$\ker \gamma = \ker \alpha \cap \ker \beta.$

Cool, no? **Proof, then:** Find a subspace $U$ of $\ker \alpha$ such that

$\ker \alpha = U \oplus (\ker \alpha \cap \ker \beta),$

so that in particular $\beta$ is injective on $U$. Then, for $\varepsilon$ small enough, the image of

$\alpha + \varepsilon \beta$

is

$\operatorname{im} \alpha + \varepsilon \beta(U).$

But if we norm $W$ in any way, then $\varepsilon^{-1}(\alpha + \varepsilon\beta)(u)$, for $u \in U$, approximates $\beta(u)$ as $\varepsilon$ tends to zero, which is linearly independent from $\operatorname{im} \alpha$ by assumption. **WALLA**

Now, how is this used?

Let me set up some of the basic objects.

If $\Delta$ is an abstract simplicial complex on ground-set $[n]$, let $I_\Delta$ denote the nonface ideal in $\mathbb{R}[x_1, \ldots, x_n]$, where $I_\Delta = \langle \mathbf{x}_S : S \notin \Delta \rangle$ and $\mathbf{x}_S = \prod_{i \in S} x_i$.

Let $\mathbb{R}[\Delta] = \mathbb{R}[x_1, \ldots, x_n]/I_\Delta$ denote the face ring of $\Delta$. A collection $\Theta$ of $k$ linear forms in the polynomial ring is a **partial linear system of parameters** if

$\dim \mathbb{R}[\Delta]/\langle\Theta\rangle = \dim \mathbb{R}[\Delta] - k$

for the Krull dimension. If $k = \dim \mathbb{R}[\Delta]$, then $\Theta$ is simply a **linear system of parameters**, and the corresponding quotient $A(\Delta) = \mathbb{R}[\Delta]/\langle\Theta\rangle$ is called an **Artinian reduction** of $\mathbb{R}[\Delta]$.

The *g*-conjecture (as described earlier in Gil’s blog) is implied by the following property:

**(HL)** For every sphere $\Sigma$ of even dimension $d = 2k$, there is an Artinian reduction $A(\Sigma)$ and a degree one element $\ell$ such that the map

$A^k(\Sigma) \xrightarrow{\ \cdot \ell\ } A^{k+1}(\Sigma)$

is an isomorphism.

This is quite a reasonable demand. Indeed, Graebe proved that $A^{d+1}(\Sigma) \cong \mathbb{R}$ and that the resulting pairing

$A^k(\Sigma) \times A^{d+1-k}(\Sigma) \longrightarrow A^{d+1}(\Sigma) \cong \mathbb{R}$

is perfect, so $A^k(\Sigma)$ and $A^{d+1-k}(\Sigma)$ are isomorphic as vector spaces. We shall call this property **(PD)**, because it is a special case of Poincaré pairing.

(HL) is a special case of the Hard Lefschetz Theorem I prove in my paper, and we will prove it for a subset of all triangulated spheres here. Proving it for all spheres implies the *g*-conjecture (and other conjectures, such as the Grünbaum conjecture), and proving the hard Lefschetz theorem in full generality is not much harder.

**Lou Billera**

Let’s recall a cool notion due to Provan and Billera: a pure simplicial $d$-complex is **vertex decomposable** if it is a simplex, or there exists a vertex whose link is vertex decomposable of dimension $d-1$ and whose deletion is vertex decomposable of dimension $d$.
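The recursive definition above can be tested directly on small complexes; here is a naive sketch (all helper names are mine, and the search is exponential, so this is only for toy examples; the purity checks at each step reflect the requirements of the definition):

```python
from itertools import combinations

def maximal(faces):
    """Keep only the inclusion-maximal faces."""
    faces = set(faces)
    return {f for f in faces if not any(f < g for g in faces)}

def link(facets, v):
    return maximal(f - {v} for f in facets if v in f)

def deletion(facets, v):
    return maximal({f for f in facets if v not in f}
                   | {f - {v} for f in facets if v in f})

def is_pure(facets, d):
    return all(len(f) == d + 1 for f in facets)

def is_vertex_decomposable(facets):
    facets = {frozenset(f) for f in facets}
    if len(facets) <= 1:  # a single facet is a simplex
        return True
    d = max(len(f) for f in facets) - 1
    if not is_pure(facets, d):
        return False
    for v in set().union(*facets):
        lk, dl = link(facets, v), deletion(facets, v)
        if (is_pure(lk, d - 1) and is_vertex_decomposable(lk)
                and is_pure(dl, d) and is_vertex_decomposable(dl)):
            return True
    return False

print(is_vertex_decomposable([{0, 1}, {1, 2}, {0, 2}]))  # True: boundary of a triangle
print(is_vertex_decomposable(
    [set(c) for c in combinations(range(4), 3)]))        # True: boundary of a tetrahedron
print(is_vertex_decomposable([{0, 1, 2}, {2, 3, 4}]))    # False: two triangles glued at a vertex
```

Two triangles glued at a vertex fail because no deletion or link stays pure of the right dimension, matching the fact that this complex is not even shellable.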

We restrict our attention to vertex decomposable spheres and disks, and assume that in every step the boundary of the link is vertex decomposable as well.

**THEOREM:** Vertex decomposable spheres satisfy (HL).

We prove this theorem by induction on dimension, the base case of zero-dimensional spheres being clear.

Let’s label the vertices of $\Sigma$ in order of their vertex decomposition, from $1$ to $n$. Now, $\ell$ will be a linear combination of the indeterminates, so let’s assume we have constructed an element $\ell_m$ that uses just the first $m$ of them, and such that $\ell_m$ itself is as close to a Lefschetz element as possible for its kind, that is, the kernel of

$\cdot\,\ell_m : A^k(\Sigma) \longrightarrow A^{k+1}(\Sigma)$

is the intersection of the kernels of the maps

$\cdot\, x_i : A^k(\Sigma) \longrightarrow A^{k+1}(\Sigma),$

where $i$ ranges from $1$ to $m$.

We want to construct a map with this property for every $m$ (which I call the **transversal prime property**). To this effect, we want to apply the perturbation lemma to the maps $\cdot\,\ell_m$ and $\cdot\, x_{m+1}$ with respect to the spaces $A^k(\Sigma)$ and $A^{k+1}(\Sigma)$. Let us denote by $B_m$ the ball given as the union of the neighborhoods of the first $m$ vertices.

For this, we have to find out the kernel of $\cdot\,\ell_m$. But this is the ideal in $A(\Sigma)$ generated by the monomials of faces which are not in the neighborhood of any of the first $m$ vertices. Let’s call it $K_m$. Let’s also look at the image of $\cdot\,\ell_m$, which by Graebe’s theorem is exactly the span of the images of the maps

$\cdot\, x_i : A^k(\Sigma) \longrightarrow A^{k+1}(\Sigma),$

where $i$ ranges from $1$ to $m$.

But then, the kernel is trivial in degree $k$ if and only if the image is everything in degree $k+1$. Why is that? Because with respect to the Poincaré pairing, the kernel (in degree $k$) and the cokernel (in degree $k+1$) are dual.

The ring $A(\Sigma)/K_m$ is obtained by taking the face ring of $B_m$, seen as a quotient of $\mathbb{R}[\Sigma]$, and modding out by the ideal generated by the linear system $\Theta$. But $\Theta$ is of length $d+1$, even though $\partial B_m$ is only of dimension $d-1$. We can remove the vertex for the price of removing one of the linear forms, but then we have the same issue, having a $(d-1)$-sphere and a system of length $d+1$. Still, one too many! Taking a subsystem $\Theta'$ of length $d$, we obtain an Artinian reduction for $\mathbb{R}[\partial B_m]$ via the linear system $\Theta'$, but what happens to the additional linear form $\theta$ of $\Theta$ not in $\Theta'$? It has to act as a Lefschetz element on $A(\partial B_m)$ if we want

$A(\Sigma)/K_m$

to be trivial in degree $k$. But we may assume so by induction! Hence, we can choose $\ell_{m+1}$ as the generic sum of $\ell_m$ and $x_{m+1}$ by the perturbation lemma.

So, ultimately, we can construct a map $\cdot\,\ell_n$ with the transversal prime property. But then its kernel is the intersection of the kernels of

$\cdot\, x_i : A^k(\Sigma) \longrightarrow A^{k+1}(\Sigma),$

where $i$ ranges from $1$ to $n$. But that is trivial. **SABABA.**

Now, we have the Lefschetz theorem for a special class, but that is less than what we want in the end, since vertex decomposable spheres are few and far between (do you see a reason why? there are many). So, what do we do? For a long time, I tried to extend the perturbation lemma to combine more than two maps.

Recently (depending on when Gil puts this post on the blog), I met Leonid Gurvits for the first time at a marvelous little conference at the Simons Institute. I knew that the problem is related to Hall’s Marriage Theorem for operators (I explain this connection a bit further in my paper), but Leonid illuminated this further by pointing me towards several nice papers, starting with his work on Quantum Matching Theory. Indeed, finding a good extension to three and more maps would essentially mean that we could also find Hall-marriage-type theorems for 3-regular hypergraphs, which we know for complexity reasons to be unlikely.

So what can we do instead? Well, it turns out that I only really needed to look at the $k$-skeleton of $\Sigma$ above, and that skeleton does not need to be vertex decomposable. It is enough to find another nicely decomposable manifold that contains the $k$-skeleton of $\Sigma$, and then use some technical topological tricks to connect the local picture to global homological properties.
