The Frankl-Wilson theorem is a remarkable theorem with many amazing applications. It has several proofs, all based on linear algebra methods (also referred to as dimension arguments). The original proof is based on a careful study of incidence matrices for families of sets. Another proof, by Alon, Babai and Suzuki, applies the “polynomial method”. I will describe here one variant of the theorem that we will use in connection with the Borsuk Conjecture (it follows a paper by A. Nilli). I will post separately about more general versions, the origin of the result, and various applications. One application, about the maximum volume of spherical sets not containing a pair of orthogonal vectors, is presented in the next post.
Let $n = 4p$ and suppose that $p$ is a prime.
Theorem: Let $\mathcal F$ be a subset of $\{-1,1\}^n$ with the property that no two vectors in $\mathcal F$ are orthogonal. Then $|\mathcal F| \le 4\sum_{i=0}^{p-1}\binom{n-1}{i}$.
We will prove a slightly modified version which easily implies the Theorem as we stated it above (flipping the sign of one fixed coordinate in every vector preserves all inner products and switches the parity of the number of '1' coordinates, and negating each vector whose first coordinate is $-1$ preserves non-orthogonality; each of these reductions costs a factor of 2):
Let $\Omega$ be the set of vectors $x = (x_1, x_2, \dots, x_n)$ in $\{-1,1\}^n$ such that $x_1 = 1$ and the number of '1' coordinates is even. Let $\mathcal F$ be a subset of $\Omega$ with the property that no two vectors in $\mathcal F$ are orthogonal. Then $|\mathcal F| \le \sum_{i=0}^{p-1}\binom{n-1}{i}$.
The ad-hoc trick:
Claim 1: Let $x, y \in \mathcal F$. If $\langle x, y \rangle \equiv 0 \pmod p$ then $x = y$.
This is a crucial part of the argument and, as far as I can see, the part that you need to replace entirely in other applications of the method.
Claim 1′: For every $x, y \in \Omega$, $\langle x, y \rangle \equiv 0 \pmod 4$.
Given Claim 1′, if $\langle x, y \rangle \equiv 0 \pmod p$ then $\langle x, y \rangle$ is divisible both by 4 and by $p$, hence by $4p$. Since no two vectors in $\mathcal F$ are orthogonal and $|\langle x, y \rangle| \le n = 4p$, this forces $\langle x, y \rangle = \pm 4p$, i.e., $y = x$ or $y = -x$. Since the first coordinate of both vectors is 1 it is not possible that $y = -x$, and it follows that $x = y$.
Proof (of Claim 1′): Suppose that the number of coordinates where $x_i = y_i = 1$ is $a$, the number of coordinates where $x_i = 1, y_i = -1$ is $b$, the number of coordinates where $x_i = -1, y_i = 1$ is $c$, and the number of coordinates where $x_i = y_i = -1$ is $d$. The inner product $\langle x, y \rangle = a - b - c + d$. Now $a+b$ is even and also $a+c$ is even, so $b+c$ is also even and $2(b+c)$ is divisible by 4. $n = 4p$ is divisible by 4 and therefore so is $\langle x, y \rangle = n - 2(b+c)$.
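Claim 1′ is easy to check by brute force for a small case; here is a quick Python sketch with $n = 8$ (divisibility by 4 is all the proof uses, so this case already illustrates it):

```python
from itertools import product

# Brute-force check of Claim 1': for every x, y in Omega the inner
# product <x,y> is divisible by 4.  Small case: n = 8.
n = 8
Omega = [v for v in product([-1, 1], repeat=n)
         if v[0] == 1 and v.count(1) % 2 == 0]

assert len(Omega) == 2 ** (n - 2)   # a quarter of the cube: 64 vectors
assert all(sum(a * b for a, b in zip(x, y)) % 4 == 0
           for x in Omega for y in Omega)
```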
The algebra-puzzle trick

Puzzle: what is the product $(x-a)(x-b)(x-c)\cdots(x-z)$?

The answer is ZERO, because one of the terms is $(x-x)$.
Define a family of polynomials in the variables $x = (x_1, x_2, \dots, x_n)$, over the field $GF(p)$ with $p$ elements, as follows. For every $y \in \mathcal F$ define $P_y(x) = \prod_{j=1}^{p-1} (\langle x, y \rangle - j)$. Here $\langle x, y \rangle = x_1y_1 + x_2y_2 + \cdots + x_ny_n$.
Claim 2: For $x, y \in \mathcal F$, $x \ne y$, we have $P_y(x) = 0$.
Proof: Since $x \ne y$, Claim 1 tells us that $\langle x, y \rangle \not\equiv 0 \pmod p$, so we must have that for some $j$, $1 \le j \le p-1$, $\langle x, y \rangle \equiv j \pmod p$. Therefore one of the terms in the product defining $P_y(x)$ must vanish. (Remember, we work over the field with $p$ elements.) Ahla!
Claim 3: $P_y(y) \ne 0$.
Proof: The inner product of a vector $y \in \Omega$ with itself is $\langle y, y \rangle = n = 4p \equiv 0 \pmod p$. This is important! Therefore $P_y(y) = \prod_{j=1}^{p-1}(0 - j) = (-1)^{p-1}(p-1)! \ne 0$. (It is a product of non-zero numbers modulo $p$. What is $(p-1)!$ modulo $p$? This is a result of a different Wilson.) Walla!
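Claims 2 and 3 can be tested numerically. The sketch below builds a small family $\mathcal F$ inside $\Omega$ (chosen greedily for the example, just so that no two of its vectors are orthogonal) and evaluates the polynomials mod $p$, for $p = 3$, $n = 12$:

```python
from itertools import product

# Numerical check of Claims 2 and 3 for p = 3, n = 12, on a small
# greedily chosen family F with no two orthogonal vectors.
p = 3
n = 4 * p
Omega = [v for v in product([-1, 1], repeat=n)
         if v[0] == 1 and v.count(1) % 2 == 0]

def ip(x, y):
    return sum(a * b for a, b in zip(x, y))

F = []
for v in Omega:                        # greedy: keep v if it is not
    if all(ip(v, w) != 0 for w in F):  # orthogonal to anything chosen so far
        F.append(v)

def P(y, x):  # P_y(x) = prod_{j=1}^{p-1} (<x,y> - j) over GF(p)
    value = 1
    for j in range(1, p):
        value = value * (ip(x, y) - j) % p
    return value

assert all(P(y, x) == 0 for x in F for y in F if x != y)  # Claim 2
assert all(P(y, y) == p - 1 for y in F)                   # Claim 3: -1 mod p
```

Note that $P_y(y) = p-1$, i.e., $-1$ modulo $p$, exactly as Wilson's theorem predicts.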
Remark: Some people prefer to simply write $P_y(x) = \langle x, y \rangle^{p-1} - 1$, which gives a different, less extravagant way to present the same thing. (And without the algebra-puzzle.) I still prefer the way we defined the polynomials because for other applications of the method, it generalizes more easily.
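The two presentations agree because of the polynomial identity $\prod_{j=1}^{p-1}(t - j) = t^{p-1} - 1$ over $GF(p)$: both sides vanish on all nonzero residues (Fermat's little theorem) and agree at $t = 0$ (Wilson's theorem). A direct check:

```python
# Check prod_{j=1}^{p-1} (t - j) = t^{p-1} - 1 mod p for all t, p = 7.
p = 7
for t in range(p):
    prod = 1
    for j in range(1, p):
        prod = prod * (t - j) % p
    assert prod == (pow(t, p - 1, p) - 1) % p
```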
The linear algebra (diagonal) trick
Claim 4: The polynomials $P_y$, $y \in \mathcal F$, are linearly independent.
Proof: Suppose that $\sum_{y \in \mathcal F} \lambda_y P_y = 0$. Substitute $x = z$ in this sum, for some $z \in \mathcal F$. By Claim 2 we obtain that the outcome is $\lambda_z P_z(z) = 0$. By Claim 3, $P_z(z) \ne 0$ and therefore $\lambda_z = 0$. Sababa!
Remark: It is worthwhile to remember a variation of this trick, the linear algebra upper-diagonal trick, where you conclude linear independence when you assume that there is some order on your set of $y$'s and that $P_y(z) = 0$ for $z < y$, while $P_y(y) \ne 0$.
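A toy illustration of the upper-diagonal variant (the polynomials $q_i$ here are made up for the example): take points $z_1 < z_2 < z_3 < z_4$ and $q_i(x) = (x - z_1)\cdots(x - z_{i-1})$, so that $q_i(z_j) = 0$ for $j < i$ while $q_i(z_i) \ne 0$. The evaluation matrix is then triangular with a nonzero diagonal, hence nonsingular, and the $q_i$ are linearly independent.

```python
# Triangular variant: q_i(z_j) = 0 for j < i and q_i(z_i) != 0 force
# the evaluation matrix to be triangular with a nonzero diagonal.
z = [1, 2, 3, 4]

def q(i, x):  # q_i(x) = (x - z[0]) * ... * (x - z[i-1])
    value = 1
    for j in range(i):
        value *= x - z[j]
    return value

M = [[q(i, x) for x in z] for i in range(len(z))]
assert all(M[i][j] == 0 for i in range(4) for j in range(4) if j < i)
assert all(M[i][i] != 0 for i in range(4))
```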
The multilinearization trick
We now replace the polynomials $P_y$ by new polynomials $\tilde P_y$ by applying the following rule:

Replace each even power $x_i^{2k}$ by 1 and each odd power $x_i^{2k+1}$ by $x_i$!
Claim 5: The new polynomials $\tilde P_y$ are square-free (multilinear), and they attain the same values as the $P_y$'s on vectors with $\pm 1$ coordinates.
Proof: This is because when we have a variable $x_i$ that attains only the values +1 and -1, the value of $x_i^{2k}$ is 1, and the value of $x_i^{2k+1}$ is the same as that of $x_i$. Walla!
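The rule is easy to mechanize; in the sketch below a polynomial is stored as a dictionary from exponent tuples to coefficients (the particular polynomial is made up for the example), and reducing every exponent mod 2 does not change its values on $\pm 1$ vectors:

```python
import math
from itertools import product

# Multilinearization: reduce every exponent mod 2 (x^2 -> 1, x^3 -> x).
poly = {(3, 2, 0): 5, (1, 1, 4): -2, (0, 2, 2): 7}
reduced = {}
for exps, c in poly.items():
    key = tuple(e % 2 for e in exps)
    reduced[key] = reduced.get(key, 0) + c

def evaluate(q, v):
    return sum(c * math.prod(vi ** e for vi, e in zip(v, exps))
               for exps, c in q.items())

# Same values on every vector with +1/-1 coordinates:
assert all(evaluate(poly, v) == evaluate(reduced, v)
           for v in product([-1, 1], repeat=3))
```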
Claim 6: The polynomials $\tilde P_y$, $y \in \mathcal F$, are linearly independent.
Proof: The proof of Claim 4 applies word for word, since by Claim 5 the values at the substituted vectors do not change. Walla!
The end of the proof:
By Claim 6, the size of $\mathcal F$ is at most the dimension of the vector space of square-free polynomials of degree at most $p-1$ in the variables $x_2, x_3, \dots, x_n$. (Since $x_1 = 1$ for every vector in $\Omega$, we may substitute $x_1 = 1$ and regard the $\tilde P_y$'s as polynomials in the remaining $n-1$ variables.) This dimension is the number of square-free monomials of degree at most $p-1$ in $n-1$ variables, which is $\sum_{i=0}^{p-1} \binom{n-1}{i}$.
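To get a feel for the numbers: for the smallest odd prime $p = 3$ (so $n = 12$) the bound is already tiny compared with the total number of vectors in the cube.

```python
from math import comb

# The dimension count at the end of the proof, for p = 3, n = 4p = 12.
p = 3
n = 4 * p
bound = sum(comb(n - 1, i) for i in range(p))
print(bound)            # 67 = 1 + 11 + 55
print(4 * bound, 2**n)  # the Theorem's bound, 268, versus 4096
```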