**Yaacov Levitzki**

The purpose of this post is to describe the Amitsur-Levitzki theorem. It is meant for people who are not necessarily mathematicians, but readers do need to know two things. The first is what matrices are: very briefly, matrices are rectangular arrays of numbers. The second is that two n by n square matrices A and B can be multiplied. We denote their product by A x B, and the product is again an n by n matrix. Unlike multiplication of numbers, matrix multiplication need not be commutative; in other words, A x B can be different from B x A. The Wikipedia article about matrices is a good source.

We can multiply more than two matrices. We can write A x B x C for the product of A, B and C. The order of the matrices is important, but the order in which we perform the multiplication is not. This is because multiplication of matrices is associative, that is

(A x B ) x C = A x (B x C).

Here is the Amitsur-Levitzki theorem for 2 x 2 matrices:

For every four 2 x 2 matrices A, B, C, and D

A x B x C x D – B x A x C x D – A x B x D x C + B x A x D x C – A x C x B x D + C x A x B x D + A x C x D x B – C x A x D x B + A x D x B x C – D x A x B x C – A x D x C x B + D x A x C x B + C x D x A x B – C x D x B x A – D x C x A x B + D x C x B x A – B x D x A x C + B x D x C x A + D x B x A x C – D x B x C x A + B x C x A x D – B x C x D x A – C x B x A x D + C x B x D x A = 0 .

In other words, we take the sum of the products of the matrices over all 24 possible orderings (permutations), with certain plus and minus signs, and lo and behold, we always get 0.
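The identity above is easy to check numerically. Here is a small Python sketch (the setup and names are my own, not from the post) that sums the 24 signed products of four random 2 x 2 matrices and verifies that the result is the zero matrix, up to floating-point error:

```python
# Check the 2x2 Amitsur-Levitzki identity: sum the signed products
# over all 24 orderings of four random 2x2 matrices; expect zero.
from itertools import permutations
import random

def sign(perm):
    """Sign of a permutation: +1 for an even number of inversions, -1 for odd."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

random.seed(0)
mats = [[[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
        for _ in range(4)]

total = [[0.0, 0.0], [0.0, 0.0]]
for perm in permutations(range(4)):
    prod = mats[perm[0]]
    for idx in perm[1:]:
        prod = matmul(prod, mats[idx])
    s = sign(perm)
    total = [[total[i][j] + s * prod[i][j] for j in range(2)]
             for i in range(2)]

# Every entry should vanish up to rounding error.
assert all(abs(total[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

Running this for any choice of four matrices (random or not) always leaves a matrix of entries on the order of machine precision.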

I will say more about it below, but first a few remarks. The Amitsur-Levitzki theorem deals with products of $2n$ matrices of size $n \times n$. It is very beautiful and important, and when it comes to mathematics, it doesn't get much better than that. It can be a nice theorem to explain to non-mathematicians, but in this case I have one particular non-mathematician in mind: Alex Levitzki, Yaacov Levitzki's son. I promised Alex, who is a famous HU biologist and chemist, that I would tell him about his father's theorem, so why not share it with others. Yaacov Levitzki was one of the founding members of my department in Jerusalem. As a young man he came to Göttingen intending to study chemistry, but attending a lecture by Emmy Noether converted him to mathematics.

**The Amitsur-Levitzki Theorem:** For every $2n$ matrices of size $n$ by $n$, denoted by $A_1, A_2, \dots, A_{2n}$, we have:

$\sum_{\sigma} \mathrm{sgn}(\sigma)\, A_{\sigma(1)} \times A_{\sigma(2)} \times \cdots \times A_{\sigma(2n)} = 0$,

where the sum is taken over all orderings (permutations) $\sigma$ of $1, 2, \dots, 2n$, and $\mathrm{sgn}(\sigma)$ denotes the sign of the ordering $\sigma$.

The sign of an ordering $\sigma$ of $1, 2, \dots, n$ is determined by the number of pairs of numbers which are inverted in the ordering (namely, they do not appear in their natural order). Such pairs are called **inversions**. The sign of $\sigma$ is +1 if there is an even number of inversions and -1 if there is an odd number of inversions. For example, consider orderings of 1, 2, 3, 4.

For the ordering 2,3,1,4 there are two inversions (the pairs {1,2} and {1,3}). The sign of this ordering is +1.

For the ordering 1,2,3,4 there are zero inversions. The sign of this ordering is +1.

For the ordering 4,1,2,3 there are three inversions (the pairs {1,4}, {2,4} and {3,4}). The sign of this ordering is -1.

**So, for example, if the ordering is 4,1,2,3 the term this ordering contributes is** $-A_4 \times A_1 \times A_2 \times A_3$.
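The inversion-counting rule is easy to turn into code. Here is a minimal sketch (the function name is my own) that checks the three worked examples above:

```python
# Sign of an ordering via inversion counting: +1 for an even number
# of inversions, -1 for an odd number.
def sign(ordering):
    inversions = sum(1 for i in range(len(ordering))
                       for j in range(i + 1, len(ordering))
                       if ordering[i] > ordering[j])
    return 1 if inversions % 2 == 0 else -1

assert sign((2, 3, 1, 4)) == 1   # two inversions: {1,2} and {1,3}
assert sign((1, 2, 3, 4)) == 1   # zero inversions
assert sign((4, 1, 2, 3)) == -1  # three inversions: {1,4}, {2,4}, {3,4}
```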

One thing to note is that for 1 by 1 matrices (that is, numbers) the Amitsur-Levitzki theorem is simply the commutative law **ab - ba = 0**.

The Amitsur-Levitzki theorem is the starting point of a rich theory of rings with polynomial identities. I remember several chats with Shimshon Amitsur (who was Levitzki's student) on this subject. Amitsur was very enthusiastic about a theorem of Procesi that relates all polynomial identities of matrices to the famous Cayley-Hamilton theorem that we learn in first-year linear algebra, which asserts that when you substitute an n by n matrix into its characteristic polynomial you always get zero.
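The Cayley-Hamilton statement can also be checked directly in the 2 x 2 case. A quick sketch (the numerical entries are my own example): for A = [[a, b], [c, d]] the characteristic polynomial is x² - (a + d)x + (ad - bc), and substituting A gives the zero matrix.

```python
# Verify Cayley-Hamilton for a concrete 2x2 integer matrix:
# A^2 - trace(A) * A + det(A) * I should be the zero matrix.
a, b, c, d = 2, 7, 1, -3
trace = a + d
det = a * d - b * c

A = [[a, b], [c, d]]
I = [[1, 0], [0, 1]]

# A^2, computed entrywise.
A2 = [[a * a + b * c, a * b + b * d],
      [c * a + d * c, c * b + d * d]]

result = [[A2[i][j] - trace * A[i][j] + det * I[i][j] for j in range(2)]
          for i in range(2)]
assert result == [[0, 0], [0, 0]]
```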

Finally, do not confuse Alex Levitzki with my friend and colleague (both at HU and Yale) Alex Lubotzky. Such confusion caused one of them to be charged for lunches of the other in our faculty club for many months.

Nice article. I’d never seen that elegant theorem.

I respectfully disagree ;-)

For instance, let $u$ and $v$ be column vectors of length $n$, and let $w^T$ be a row vector. Though it's certainly true that $(u \times w^T) \times v = u \times (w^T \times v)$, note that $u \times w^T$ is an $n \times n$ matrix, and therefore costs something like $n^2$ operations to compute, while $w^T \times v$ is a scalar and hence costs only $n$ or so operations. Order does matter for anyone who implements numerical algorithms ;-)
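The point about the two parenthesizations can be sketched in a few lines of Python (vector names and sizes are my own illustration): associativity guarantees the same answer, but one grouping builds an n x n outer product first.

```python
# Two ways to evaluate (u w^T) v = u (w^T v) with plain lists.
n = 500
u = [1.0] * n
v = [2.0] * n
wT = [3.0] * n

# Cheap grouping: w^T v is a scalar (~n operations), then scale u (~n more).
scalar = sum(wT[i] * v[i] for i in range(n))
cheap = [scalar * ui for ui in u]

# Expensive grouping: build the n x n outer product u w^T (~n^2 entries),
# then multiply it by v (~n^2 more operations).
outer = [[u[i] * wT[j] for j in range(n)] for i in range(n)]
expensive = [sum(outer[i][j] * v[j] for j in range(n)) for i in range(n)]

# Associativity: both groupings agree.
assert all(abs(cheap[i] - expensive[i]) < 1e-6 for i in range(n))
```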

On a more serious note: this is a beautiful theorem. I am embarrassed that I didn't know about it yet…

Dear John and Rod, thanks! I think the point regarding "non-associativity" when it comes to computational complexity and numerics is quite serious. A related question I like to ask students is how they would suggest computing a binomial coefficient $\binom{n}{k}$. The formula you choose and the ordering of the operations make quite a difference.
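To illustrate the binomial-coefficient point, here is a sketch of two orderings of the same computation (function names are my own): dividing factorials creates enormous intermediate values, while interleaving multiplications and divisions keeps every intermediate small and exactly an integer.

```python
from math import factorial

def binom_factorials(n, k):
    """n! / (k! (n-k)!) -- correct, but n! is astronomically large."""
    return factorial(n) // (factorial(k) * factorial(n - k))

def binom_incremental(n, k):
    """Multiply and divide alternately: C(n,k) = prod_{i=1..k} (n-k+i)/i.
    After step i the running value equals C(n-k+i, i), so every
    intermediate division is exact."""
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i
    return result

# Both orderings agree; C(52, 5) counts poker hands.
assert binom_incremental(52, 5) == binom_factorials(52, 5) == 2598960
```

In fixed-width arithmetic (unlike Python's big integers) the factorial version overflows long before the incremental one does.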

Nice eulo-hamiltonian traversal in a Caylake by a Swan.

In other words: there is a graph-theoretic proof of this result due to R. G. Swan which uses Eulerian tours. Surprisingly enough, R. G. Swan also gave the simplest proof of Rankin's theorem on Hamiltonian cycles in Cayley graphs.

great post!

Indeed; this beautiful theorem is stated in Bollobás's "Modern Graph Theory," Section I.5, where he gives the graph-theoretic proof. The first insight is that the expression is multilinear, so it suffices to prove it when each matrix equals $E_{ij}$ for some $i, j$. The expression is then identified with a weighted sum over Eulerian tours, as proaonuiq says.

Dear Proaonuiq and Qiaochu, thanks for your comments.

Several people asked me for a proof or a link to a proof. The original paper appeared in Proceedings of the American Mathematical Society, Vol. 1 (1950), pp. 449-463, and it is very readable. There is indeed a graph-theoretic proof by Swan, which is also presented in both of Bollobás's graph theory textbooks. There is an article, "Polynomial identities and the Cayley-Hamilton theorem" by Edward Formanek, The Mathematical Intelligencer 11 (1989), 37-39, which describes the related works of Procesi and Razmyslov. Rowen's BAMS review of a book by Formanek and Drensky http://www.ams.org/bull/2006-43-02/S0273-0979-06-01082-2/S0273-0979-06-01082-2.pdf contains a lot of interesting information.

There is also a proof of the Amitsur-Levitzki theorem by Bertram Kostant in the paper "A theorem of Frobenius, a theorem of Amitsur-Levitski and cohomology theory," J. Math. Mech. 7 (1958), 237–264, and a proof by Shmuel Rosset in Israel J. Math. 23 (1976), 187-188.

Some remarks about Swan's proof (Proceedings of the AMS, vol. 14, pp. 367-373), since I have just finished reading it myself. First, it is openly available online thanks to the AMS:

http://www.ams.org/journals/proc/1963-014-03/S0002-9939-1963-0149468-6/S0002-9939-1963-0149468-6.pdf

Secondly, the proof is incomplete, as was noticed in 1969. Here is Swan’s correction (Proceedings of the AMS, vol. 21, pp. 379-380):

http://www.ams.org/journals/proc/1969-021-02/S0002-9939-1969-0255439-7/S0002-9939-1969-0255439-7.pdf

With this correction the proof becomes indeed complete. It might not be the easiest, but it is the most elementary proof of Amitsur-Levitzki in the literature. Due to the way the proof is written up, it evokes a bit more skepticism than it actually deserves – the pictures don't always reflect the situation one really has in the respective cases, and one might suspect that this breaks the proof. Fortunately, it does not. For example, in Figure 3 we might have $A=C$, but that's no problem. Or we might have $e_i=e'$ for some $i$ in Figure 5, and that's again no problem. Or $e_2=e_7$ (for example) in Figure 7; again, this doesn't change anything in the proof. Also, Swan seems to make some arguments whose point I don't really understand. Why is it important that $A$ has order greater than $2$ in Case 2? And why can't we simply apply the pigeonhole principle in the correction of Case 3 (where we suppose that every edge meets $P$, but there are $2n+2$ edges, so at least two of these edges must have the same target and the same source by pigeonhole, and we are done by Remark 1 of §4)?

@Gil: I am wondering whether you could give some good sources on the skew-symmetric (and symmetric?) versions of Amitsur-Levitzki, where the matrices are assumed to be skew-symmetric (resp. symmetric) and, as a result, some less complicated polynomial identities (in the sense of fewer terms) are satisfied. These versions seem to be significantly less well known.