David Wilson and a cover of Shlomo’s recent book “Curvature in mathematics and physics”

A few weeks ago, in David Kazhdan’s *basic notion seminar*, Shlomo Sternberg gave a lovely presentation, **Kirchhoff and Wilson via Kozdron and Stroock**.

The lecture is based on the work presented in the very recent paper by Michael J. Kozdron, Larissa M. Richards, and Daniel W. Stroock: Determinants, their applications to Markov processes, and a random walk proof of Kirchhoff’s matrix tree theorem. Preprint, 2013. Available online at arXiv:1306.2059.

Here is the abstract:

Kirchhoff’s formula for the number of spanning trees in a connected graph is over 150 years old. For example, it says that if $\lambda_1, \lambda_2, \dots, \lambda_{n-1}$ are the nonzero eigenvalues of the Laplacian of a graph on $n$ vertices, then the number *k* of spanning trees is $k = \frac{1}{n}\lambda_1 \lambda_2 \cdots \lambda_{n-1}$. There are many proofs. An algorithm due to Wilson via loop-erased random walks produces such a tree, and Wilson’s theorem is that all spanning trees are produced by his algorithm with equal probability. Hence, after the fact, we know that Wilson’s algorithm produces any given tree with probability *1/k*. A proof due to Lawler, using the Green’s function, shows directly that Wilson’s algorithm has probability *1/k* of producing any given spanning tree, thus simultaneously proving Wilson’s theorem and Kirchhoff’s formula. Lawler’s proof has been considerably simplified by Kozdron and Stroock. I plan to explain their proof. The lecture will be completely self-contained, using only Cramer’s rule from linear algebra.

(Here are also Ron Rosenthal’s notes of the lecture.)

Here is some background.

## The matrix-tree theorem

The matrix tree theorem asserts that the number of rooted spanning trees of a connected graph *G* is the product of the non-zero eigenvalues of *L(G)*, the Laplacian of *G*.

Suppose that *G* has *n* vertices. The Laplacian of *G* is the *n × n* matrix whose *(i,i)*-entry is the degree of the *i*th vertex, and whose *(i,j)*-entry, for $i \ne j$, is 0 if the *i*th vertex is not adjacent to the *j*th vertex and −1 if they are adjacent. So *L(G) = D − A(G)*, where *A(G)* is the adjacency matrix of *G* and *D* is the diagonal matrix whose entries are the degrees of the vertices. An equivalent formulation of the matrix-tree theorem is that the number of spanning trees is the determinant of the matrix obtained from the Laplacian by deleting the *j*th row and *j*th column.
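As a quick sanity check (my illustration, not from the lecture), here is a small Python sketch that verifies both formulations of the matrix-tree theorem on the complete graph $K_4$, for which Cayley’s formula gives $4^{4-2} = 16$ spanning trees:

```python
import numpy as np

def laplacian(adj):
    """Laplacian L = D - A for an adjacency matrix A."""
    A = np.array(adj, dtype=float)
    return np.diag(A.sum(axis=1)) - A

# Complete graph K4: Cayley's formula gives 4^(4-2) = 16 spanning trees.
A = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
L = laplacian(A)

# Formulation 1: delete the j-th row and column (here j = 0) and take the determinant.
reduced = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
trees = round(np.linalg.det(reduced))
print(trees)  # 16

# Formulation 2: product of the nonzero eigenvalues, divided by n.
eig = np.linalg.eigvalsh(L)  # sorted ascending; eig[0] is the zero eigenvalue
print(round(float(np.prod(eig[1:])) / 4))  # 16
```

The eigenvalues of the Laplacian of $K_4$ are $0, 4, 4, 4$, so the second formula gives $4 \cdot 4 \cdot 4 / 4 = 16$, matching the reduced determinant.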

We considered a high dimensional generalization of the matrix tree theorem in these posts (I, II, III, IV).

## How to generate a random spanning tree for a graph *G*?

### Using the matrix-tree theorem

**Method A:** Start with an edge *e*, use the matrix-tree theorem to compute the probability *p* that *e* belongs to a random spanning tree of *G*, and take *e* with probability *p*. If *e* is taken, consider the contraction *G/e*; if *e* is not taken, consider the deletion *G − e*; and continue.

This is an efficient method to generate a random spanning tree according to the uniform probability distribution. You can extend it by assigning each edge a weight and choosing a tree with probability proportional to the product of its weights.
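The heart of Method A is the edge probability $p = \tau(G/e)/\tau(G)$, where $\tau$ counts spanning trees; since contraction creates parallel edges, one uses the weighted matrix-tree theorem. Here is a minimal Python sketch of that computation (the function names are mine):

```python
import numpy as np

def tree_count(W):
    """Weighted matrix-tree theorem: spanning-tree count of the graph with
    edge multiplicities W[i][j], via the reduced Laplacian determinant."""
    W = np.asarray(W, dtype=float)
    if len(W) == 1:
        return 1.0
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.det(L[1:, 1:])

def contract(W, u, v):
    """Contract the edge (u, v): merge v into u; parallel edges add up."""
    W = np.asarray(W, dtype=float).copy()
    W[u] += W[v]
    W[:, u] += W[:, v]
    W[u, u] = 0.0                       # no self-loops
    keep = [i for i in range(len(W)) if i != v]
    return W[np.ix_(keep, keep)]

def edge_probability(W, u, v):
    """Pr[(u, v) belongs to a uniform random spanning tree]."""
    return W[u][v] * tree_count(contract(W, u, v)) / tree_count(W)

# Triangle K3: 3 spanning trees, and each edge lies in 2 of them.
K3 = np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])
p = edge_probability(K3, 0, 1)
print(round(p, 4))  # 0.6667
```

With the probability in hand, Method A flips a *p*-biased coin for the edge and recurses on the contraction or the deletion accordingly.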

### Random weights and greedy

**Method B:** Assign each edge a random real number between 0 and 1 and choose the spanning tree which minimizes the sum of weights via the greedy algorithm.

This is a wonderful method but it leads to a different probability distribution on random spanning trees which is very interesting!
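Method B is just Kruskal’s greedy algorithm run on i.i.d. uniform weights. An illustrative sketch (my example, on $K_4$):

```python
import random

def kruskal_min_tree(edges, n):
    """Greedy (Kruskal): scan edges by increasing weight; keep an edge
    iff it joins two different components (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Method B on K4: i.i.d. uniform weights, then the minimum spanning tree.
random.seed(0)
n = 4
edges = [(random.random(), u, v) for u in range(n) for v in range(u + 1, n)]
tree = kruskal_min_tree(edges, n)
print(len(tree))  # 3 edges: a spanning tree of K4
```

The resulting distribution on trees (the minimum spanning tree of random weights) is not uniform, which is exactly the interesting point.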

### The Aldous-Broder random walk method

**Method C:** The Aldous-Broder theorem. Run a simple random walk from a vertex of the graph until it reaches all vertices, and take each edge that did not form a cycle with earlier taken edges. (Or, in other words, take every edge that reduced the number of connected components of the graph on the whole vertex set and the visited edges.)

Amazingly, this leads to a random uniform spanning tree. The next method is also very amazing and important for many applications.
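The Aldous-Broder walk can be sketched by keeping, for each vertex, the edge by which it was first entered (this is equivalent to the no-cycle rule above). A short Python sketch on an assumed small example graph, the 4-cycle:

```python
import random

def aldous_broder(adj, start=0):
    """Aldous-Broder: walk until every vertex is seen; for each vertex,
    keep the edge by which it was first entered."""
    n = len(adj)
    visited = {start}
    tree = []
    u = start
    while len(visited) < n:
        v = random.choice(adj[u])   # one step of the simple random walk
        if v not in visited:        # first entrance to v: keep the edge
            visited.add(v)
            tree.append((u, v))
        u = v
    return tree

# The 4-cycle C4 as an adjacency list (illustrative example graph).
random.seed(1)
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tree = aldous_broder(adj)
print(len(tree))  # 3
```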

### David Wilson’s algorithm

**Method D:** Wilson’s algorithm. Fix a vertex as a root. (Later the root will be a whole set of vertices, and a tree on them.) Start from an arbitrary vertex *u* not in the root and take a simple random walk until you reach the root. Next, erase all cycles from the path created by the random walk, so you are left with a simple path from *u* to the root. Add this path to the root and continue!
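Wilson’s algorithm can be sketched compactly with a standard trick: while walking from *u*, record only the most recent step taken out of each vertex; overwriting the old step performs the loop erasure implicitly. An illustrative Python sketch (the example graph and names are mine):

```python
import random

def wilson(adj, root=0):
    """Wilson's algorithm: attach loop-erased random walks to the growing tree."""
    n = len(adj)
    in_tree = {root}
    parent = {}
    for u in range(n):
        # Random walk from u, recording only the latest exit from each
        # vertex: overwriting next_step[] is the loop erasure.
        next_step = {}
        v = u
        while v not in in_tree:
            next_step[v] = random.choice(adj[v])
            v = next_step[v]
        # Retrace the loop-erased path and attach it to the tree.
        v = u
        while v not in in_tree:
            parent[v] = next_step[v]
            in_tree.add(v)
            v = next_step[v]
    return parent  # parent[v] = next vertex on the tree path toward the root

random.seed(2)
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
parents = wilson(adj)
print(sorted(parents))  # [1, 2, 3]: every non-root vertex got a parent
```

Wilson’s theorem says the tree produced this way is uniform over all spanning trees, regardless of the choice of root and of the order in which the starting vertices are processed.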

Here is a link to Wilson’s paper! Here is a nice presentation by Chatterji and Gulwani.

The proof of Wilson’s algorithm that we present is due to Greg Lawler and can be found in his paper “Loop-Erased Random Walk” in “Perplexing Problems in Probability: Festschrift in Honor of Harry Kesten” (Birkhäuser, 1999, pp. 197–217). Our original goal was an expository account of how Kirchhoff’s matrix tree theorem follows from Lawler’s proof of Wilson’s algorithm. While working on that write-up, however, we discovered that the proof techniques lead to an explicit computational formula for the stationary distribution of certain Markov chains and processes (Theorem 4.2 in the preprint). Here is a talk that Larissa Richards gave recently to undergrads about this proof of Kirchhoff’s theorem.

Thanks for the remark, Michael.

A very cute and short proof of Kirchhoff’s theorem uses induction: http://mathoverflow.net/a/73400/1737

Another interesting algorithm for generating random spanning trees is via chip-firing/the abelian sandpile model; see for instance this paper: http://arxiv.org/pdf/1107.1313v2.pdf.