I am sure that every reader of this blog has heard about Laci Babai’s quasi-polynomial algorithm for graph isomorphism and also about the recent drama around it: a mistake pointed out by Harald Helfgott; a new sub-exponential, but not quasi-polynomial, version of the algorithm that Laci found in a couple of days; and then, a week later, a new variant of the algorithm, again found by Laci, which is quasi-polynomial. You can read the announcement on Babai’s homepage, three excellent Quanta Magazine articles (I, II, III), and many blog posts all over the Internet.

Babai’s result is an off-scale scientific achievement, it is wonderful in many respects, and I truly admire and envy Laci for this amazing breakthrough. I also truly admire Harald for his superb job as a Bourbaki expositor.

Tel Aviv University: Sackler Distinguished Lectures in Pure Mathematics, Wednesday, January 18 (Poster. Sorry, too late; I heard it was very inspiring. Don’t miss the other talks!)

Tel Aviv University Combinatorics seminar: Sunday, Jan. 22, 10:00-11:00

Title: **Canonical partitioning and the emergence of the Johnson graphs: Combinatorial aspects of the Graph Isomorphism problem**

(The talk does not depend on Wednesday’s talk)

Hebrew University Colloquium, Sunday, Jan. 22, 16:00-17:00. Title: **Graph isomorphism and coherent configurations: The Split-or-Johnson routine**

Lecture room 2, Manchester building (Mathematics)

*Local versus global symmetry and the Graph Isomorphism problem I–III*

Lecture I: Monday, January 23, 2017 at 15:30

Lecture II: Tuesday, January 24, 2017 at 15:30

Lecture III: Thursday, January 26, 2017 at 15:30

All lectures will take place at Auditorium 232, Amado Mathematics Building, Technion (Website)

Pekeris lecture, Jan 29, 11:00-12:00: **Hidden irregularity versus hidden symmetry**

EBNER AUDITORIUM (webpage)


The purpose of this post is to (belatedly) formally announce that the project has ended, to give links to the individual posts and to briefly mention some advances and some thoughts about it.

The posts were

- Polymath10: The Erdos Rado Delta System Conjecture, Posted Nov 2, 2015. (138 comments)
- Polymath10, Post 2: Homological Approach, Posted Nov 10, 2015. (125 comments.)
- Polymath 10 Post 3: How are we doing?, Posted Dec 8, 2015. (103 comments.)
- Polymath10-post 4: Back to the drawing board?, Posted Jan 31, 2016. (11 comments.)
- Polymath 10 Emergency Post 5: The Erdos-Szemeredi Sunflower Conjecture is Now Proven. Posted May 17, 2016. (35 comments.)
- Polymath 10 post 6: The Erdos-Rado sunflower conjecture, and the Turan (4,3) problem: homological approaches, Posted on May 27, 2016. (5 comments.)

The problem was not solved and we did not come near a solution. The posts contain some summary of the discussions, a few results, and some proposals by the participants. Phillip Gibbs found a remarkable relation between the general case and the balanced case. Dömötör Palvolgyi shot down quite a few conjectures I made, and Ferdinand Ihringer presented results about some Erdos-Ko-Rado extensions we considered (in terms of upper bounds for sunflower-free families). Several participants made interesting proposals for attacking the problem.

I presented in the second post a detailed homological approach, and developed it further in the later threads with the help of Eran Nevo and a few others. Then, after a major ingredient was shot down, I revised it drastically in the last post.

Participants made several computer experiments with sunflower-free sets and random sunflower-free sets, and also regarding the homological/algebraic ideas.

The posts (and some comments) give useful links to the literature on the problem, and post 5 was devoted to a startling development which occurred separately: the solution of the Erdos-Szemeredi sunflower conjecture for sunflowers with three petals, following the cap set developments. (The Erdos-Szemeredi sunflower conjecture is weaker than the Erdos-Rado conjecture.)

A (too) strong version of the homological conjecture appeared in my 1983 Ph. D. thesis written in Hebrew. The typesetting used the Hebrew version of Troff.


Five years ago I wrote a post entitled Is Backgammon in P? It was based on conversations with Peter Bro Miltersen and Uri Zwick (shown together in the above picture) about the computational complexity of computing the values (and equilibrium points) of various stochastic games, and also on some things I learned from my game theory friends over the years about proving that values exist for some related games. A few weeks ago two former students of Peter, Rasmus Ibsen-Jensen and Kristoffer Arnsfelt Hansen visited Israel and I had a chance to chat with them and learn about some recent exciting advances.

Is there a polynomial time algorithm for chess? Well, if we consider the complexity of chess in terms of the board size then it is fair to think that the answer is “no”. But if we wish to consider the complexity in terms of the number of all possible positions then it is easy to go backward over all positions and determine the outcome of the game when we start with each given position.
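This backward pass over positions can be sketched in a few lines. The toy game graph below is purely hypothetical, a stand-in for the (astronomically larger) position graph of chess:

```python
from functools import lru_cache

# Toy acyclic game graph (hypothetical positions, not chess):
# `moves` maps each position to its successors, and terminal positions
# carry a payoff for the player whose turn it is there.
moves = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1"],
}
terminal_payoff = {"a1": -1, "a2": +1, "b1": 0}

@lru_cache(maxsize=None)
def value(pos):
    """Negamax backward induction: value of `pos` for the player to move."""
    if pos in terminal_payoff:
        return terminal_payoff[pos]
    # The mover picks the successor that is worst for the opponent.
    return max(-value(nxt) for nxt in moves[pos])

print(value("start"))  # 0: moving to "b" avoids handing the opponent a win at "a"
```

With memoization each position is processed once, so the running time is polynomial in the number of positions, which is exactly the point of the paragraph above.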

Now, **what about backgammon?** Like chess, backgammon is a game of complete information. The difference between backgammon and chess is the element of luck: at each position your possible moves are determined by a roll of two dice. This element of luck increases the computational difficulty of playing backgammon compared to chess. It is easy to see that optimal strategies in backgammon need not involve any randomness.

**Problem 1: **Is there a polynomial time algorithm to find the optimal strategy (and thus the value) of a stochastic zero-sum game with perfect information (like backgammon)?

This question (raised by Anne Condon in 1998) represents one of the most fundamental open problems in algorithmic game theory.

Heads-up poker is just a poker game with two players. To make it concrete you may think about heads-up Texas hold’em poker. This is not a game with complete information, but according to the minimax theorem it still has a value. The optimal strategies are mixed and involve randomness.

**Problem 2: **Is there a polynomial time algorithm to find the optimal strategy (and thus the value) of a stochastic zero-sum game with incomplete information? (like heads-up Texas hold’em poker).

It would be very nice to find even a sub-exponential algorithm for a stochastic zero-sum game with incomplete information like poker.

**Problem 2′: **Is there a subexponential-time algorithm to find the optimal strategy (and thus the value) of a stochastic zero-sum game with incomplete information?

For games with complete information like backgammon, a subexponential algorithm was found by Walter Ludwig and, in greater generality, by Sergei Vorobyov, Henrik Björklund, and Sven Sandberg. It is related to a subexponential simplex-type algorithm for linear programming called RANDOM-FACET, found in the early 90s by Matousek, Sharir, and Welzl and by myself.

Kristoffer Arnsfelt Hansen (see abstract below) presented a polynomial-time algorithm for two-person zero-sum stochastic games with a bounded number of states. (Earlier algorithms were exponential.) The paper is: Exact Algorithms for Solving Stochastic Games by Kristoffer Arnsfelt Hansen, Michal Koucky, Niels Lauritzen, Peter Bro Miltersen, and Elias P. Tsigaridas. Slides of the talk are linked here.

As for backgammon there are very good computer programs. (We talked about chess-playing computers in this guest post by Amir Ban and since that time Go-playing computers are also available.) The site Cepheus Poker Project and this science paper Heads-up limit hold’em poker is solved are good sources on major achievements by a group of researchers from Alberta regarding two players poker.

**Problem 3: **Is there a polynomial time algorithm to find a Nash equilibrium point (or another form of optimal strategy) of a stochastic *n*-player game with incomplete information (like Texas hold’em poker)? Here *n* is fixed and small.

I think that people are optimistic that the answer even to Problem 3 is yes. (There are hardness results for finding equilibrium points in matrix games, but the relevance to our case is not clear.) If we want an algorithm which optimally plays poker, it is not clear that finding a Nash equilibrium is the way to go.

**Problem 4:** Find an algorithm for playing Texas hold’em poker when there are more than two players.

When the objective is to maximize revenues against human players I expect that it will be possible to develop computer programs for playing poker better than humans.

**Problem 5:** How to play the game MEDIAN of the previous post?

**Matching pennies** is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match (both heads or both tails), then Even keeps both pennies, so wins one from Odd (+1 for Even, −1 for Odd). If the pennies do not match (one heads and one tails), Odd keeps both pennies, so receives one from Even (−1 for Even, +1 for Odd). (Source: Wikipedia.)
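As a tiny illustration (a sketch, not part of the post’s argument): the mixed strategy that plays heads and tails with probability 1/2 each guarantees Even an expected payoff of 0 against either pure reply, and hence against any mixed reply.

```python
# Payoff to Even in matching pennies: +1 if the pennies match, -1 otherwise.
def payoff_even(even_choice, odd_choice):
    return 1 if even_choice == odd_choice else -1

# Expected payoff to Even when she plays heads with probability p
# and Odd makes a fixed pure choice.
def expected_payoff(p, odd_choice):
    return p * payoff_even("H", odd_choice) + (1 - p) * payoff_even("T", odd_choice)

print([expected_payoff(0.5, c) for c in ("H", "T")])  # [0.0, 0.0]
```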

Variants of this game have been played since ancient times. In Hebrew, matching pennies is called ZUG O PERET (even or odd; זוג או פרט). It is played like this: there are two players, and each player in turn announces “even” or “odd”. Then the two players simultaneously show some number of fingers, and the announcing player wins if the sum of the fingers has the announced parity.

The big match is a drastic repeated version of matching pennies. The game is played between players Even and Odd. Each player has a penny and in each stage must secretly turn the penny to heads or tails, and the payoffs are the same as in matching pennies. If Even plays “heads” the game continues to the next stage. However, if Even plays “tails” (or tries for the “big match,” as it is called) then the payoff in that round is repeated for all future rounds: namely, if the pennies match, Even will get 1 for all future rounds, and if the pennies do not match, Even will pay 1 for all future rounds.

By playing heads with probability 1/2 and tails with probability 1/2, Odd can guarantee an expected payoff of 0. But what about Even? Can he also guarantee an expected payoff of 0? This was an open question for quite some time. The big match was introduced in 1957 by Dean Gillette who asked if the game has a value, namely if Even has a strategy to guarantee a payoff of 0.

**Problem 7:** Does the big match have a value?

Here is a blog post on the big match by Presh Talwalkar on his blog “Mind Your Decisions.” You can also read about the big match in this post on Kevin Bryan’s economics blog “A Fine Theorem.”

In 1968, David Blackwell and Thomas S. Ferguson settled Gillette’s question and proved that Even can guarantee a zero payoff, and thus the big match does in fact have a value. This was the first step toward showing that all zero-sum stochastic games have a value under the limiting average payoff, which was proven in 1981 by Mertens and Neyman.

Rasmus Ibsen-Jensen presented both positive and negative results on attaining the value for the big match with limited types of strategies and also on complexity issues regarding other stochastic games. Here are the slides for Rasmus’ talk (see full abstract below). Part of the talk is based on the paper The Big Match in Small Space by Kristoffer Arnsfelt Hansen, Rasmus Ibsen-Jensen, and Michal Koucky.

This is a remarkable story with very important results and open questions. Here is the Wikipedia article on stochastic games and this short paper by Eilon Solan. I see now that the post is becoming too long and I will have to talk about it in a different post.

**Problem 8** (informal): Does every stochastic game have ~~a value~~ an equilibrium?

Following a major step by Truman Bewley and Elon Kohlberg (1976), Jean-François Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a uniform value. Nicolas Vieille (2000) has shown that all two-person stochastic games with finite state and action spaces have a limiting-average equilibrium payoff. The big question is to extend Vieille’s result to games with many players.

Kristoffer, Rasmus and Abraham (Merale) Neyman.

Exact algorithms for solving stochastic games

Speaker: Kristoffer Arnsfelt Hansen, Aarhus University

==================================================

In this talk we consider two-player zero-sum stochastic games with finite state and action spaces from an algorithmic perspective. Prior to our work, algorithms for solving stochastic games relied either on generic reductions to decision procedures for the first-order theory of the reals or on value or strategy iteration. For all these algorithms, the complexity is at least exponential even when the number of positions is a constant and even when only a crude approximation is required.

We will present an exact algorithm for solving these games based on a simple recursive bisection pattern. The algorithm runs in polynomial time when the number of positions is constant, and our algorithms are the first algorithms with this property. While the algorithm is not based directly on real algebraic geometry, it depends heavily on results from the field.

Based on joint work with Michal Koucký, Niels Lauritzen, Peter Bro Miltersen, and Elias P. Tsigaridas, published at STOC’11.

Abstract: The talk will attempt to characterize good strategies for some special cases of stochastic games. For instance, the talk will argue that there might always be a good strategy with a certain property for all games in a special case of stochastic games or that no good strategy exists that has some property for some game. Concretely,

1) For the stochastic game the Big Match, no good strategy (for lim inf) exists that depends only on how long the game has been played and on a finite amount of extra memory (when the extra memory is updated deterministically).

2) For the Big Match there is a good strategy that uses only a single coin flip per round and exponentially less space than previously known good strategies.

3) Let x be the greatest reward in a stochastic game. The talk will next give a simple characterization of the states of value equal to x for which there exists either (a) an optimal strategy; (b) for each epsilon>0, a stationary epsilon-optimal strategy; or (c) for each epsilon>0, a finite-memory epsilon-optimal strategy (when the memory is updated deterministically). The characterization also gives the corresponding strategy.

4) The talk will then consider stochastic games for which there exist epsilon-optimal stationary strategies for all epsilon>0. It will argue that the smallest positive probability in a stationary epsilon-optimal strategy must be at least doubly exponentially small for some subclasses of stochastic games, while for other classes exponentially small probabilities suffice.

1) and 2) are based on “The Big Match in Small Space”, 3) is based on “The Value 1 Problem Under Finite-memory Strategies for Concurrent Mean-payoff Games”, and 4) is based on “Strategy Complexity of Concurrent Stochastic Games with Safety and Reachability Objectives” and “The complexity of ergodic mean-payoff games”. All papers can be found at http://Rasmus.Ibsen-Jensen.com


Ehud Friedgut reminded me of the game MEDIAN which I proposed many years ago.

There are three players and they play the game for eight rounds. In every round all players simultaneously say a number between 1 and 8. A player whose number is (strictly) between the other two gets a point. At the end of the game the winner is the player whose number of points is strictly between those of the others.
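For concreteness, here is a sketch of the scoring of a single round (with ties, no one is strictly in the middle, so no point is awarded):

```python
def round_points(a, b, c):
    """Points for one round of MEDIAN: the player whose number is
    strictly between the other two scores a point; ties score nothing."""
    points = [0, 0, 0]
    nums = [a, b, c]
    for i, x in enumerate(nums):
        others = nums[:i] + nums[i + 1:]
        if min(others) < x < max(others):
            points[i] = 1
    return points

print(round_points(3, 5, 7))  # [0, 1, 0]: 5 is strictly between 3 and 7
print(round_points(4, 4, 6))  # [0, 0, 0]: no number is strictly in between
```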


The institute was inaugurated in 1925 by a lecture of Edmund Landau, who later served as one of the first heads of the department. It has since developed into a defining and leading place in mathematics research, with world renowned research faculty working in diverse areas of up-to-date research.

Our graduate program gives students the chance to develop into researchers that shape mathematics of the future. The department offers a uniquely attractive environment to learn and work, with weekly seminars, frequent special lecture series on current topics in mathematics and scientific exchange with visiting researchers from around the world.

This is enriched further by the Israel Institute for Advanced Studies, situated at the Hebrew University, which organizes thematic years on state-of-the-art advances in science, and by close collaboration with the renowned departments of physics and of computer science and engineering. You can venture even further and visit nearby Tel Aviv University, the Technion, the Weizmann Institute, Bar-Ilan University, Ben-Gurion University, or the University of Haifa, all of which contribute to the active research environment and with which we enjoy frequent and close scientific exchange.

Accepted graduate students are expected to take up the standard course load in the department (12 credit points — each credit point is roughly 1 hour per week for a semester long course) — but are otherwise free to pursue their research.

The Hebrew University is a unique place that unites students and researchers of all faiths and origins to work together and advance science in a secular and inclusive atmosphere for the betterment of our world. It is consistently ranked among the top universities worldwide. Advanced courses for PhD students as well as all research seminars and the Colloquium lectures are typically given in English.

You can explore the unique cultural environment the city has to offer, uniting a rich past with a vibrant youth culture, allowing you to witness history as well as one of the many street concerts.

Our admissions procedure looks for students with a record of excellent academic achievements. We ask you to submit a CV, a brief outline of your research interests, and scans of official university transcripts (as PDF files), as well as the names of three possible advisors at our department. These names are not binding, but they help us get a feeling for what your goals are. You also need to arrange for two recommendation letters to be sent to us directly by the letter writers to math.gradschool@mail.huji.ac.il. To ensure we can properly access the recommendations, please make sure the subject of the email is “Recommendation letter for Last Name, First Name”. For the application material, please make sure the subject of the email is “Application Last Name, First Name”. As part of our admissions process, applicants who pass our initial screening will typically either be invited for an interview or be interviewed over Skype.

The deadline for receiving all application materials, including recommendation letters, is January 31. Applications submitted after this date will be considered on a case by case basis.


Monday 10-11:45 (Combinatorics seminar) **Adam Sheffer – Geometric Incidences and the Polynomial Method**

Location: Rothberg (CS) B220

On Monday afternoon we will have four talks at the library of Belgium house by

**13:15-14:00 Peter Pach, Progression-free sets. New: SLIDES**

**14:10-14:55 Shoham Letzter,**

**15:15-16:00 Jordan Ellenberg, **

**16:10- 16:55 Fedya Petrov, Group rings vs. polynomials. New: SLIDES**

and a** problem session** moderated by Jordan starting at 16:55. New: PROBLEMS.

On Tuesday we start at 9:30 and will have four talks at the library of Belgium house:

**9:30-10:15 Noga Alon, Combinatorial Nullstellensatz and its algorithmic aspects. New: SLIDES**

**10:35-11:20 Olga Holtz, A potpourri on power ideals, hyperplane arrangements, graphs, and zonotopes (NEW: SLIDES)**

( lunch)

**UPDATES—Changes**

**Wednesday 9:30-10:15, ** **Anurag Bishnoi, zeros of polynomials over a finite grid. NEW:SLIDE.**

**Thursday 11:00-12:00 Seva Lev, Avoiding 3AP with differences in Room 209 Mathematics.**

Further informal discussions and talks may continue on Wednesday/Thursday.

The Thursday 14:30 Colloquium by **Jordan Ellenberg **will be on **The cap set problem**.

I will update titles as they come along.


A new proof of Keevash’s theorem on the existence of designs was discovered by Stefan Glock, Daniela Kühn, Allan Lo, and Deryk Osthus! The proof is given in the paper The existence of designs via iterative absorption, and the paper also contains some new applications of the method of proof. This is great news! A second proof of a major difficult theorem is always very, very important and exciting. Keevash’s theorem gave a vast generalization of the problem of decomposing hypergraphs into complete subhypergraphs, and the new theorem is an even more general hypergraph decomposition theorem. Congratulations!

One of the important open problems about designs is the existence of q-analogs. The first example was given in 1987 by Simon Thomas. Michael Braun, Tuvi Etzion, Patric R. J. Östergård, Alexander Vardy, and Alfred Wasserman found remarkable new q-designs. See also this article: Researchers found mathematical structure that was thought not to exist. Congratulations! It is an interesting question whether the new existence methods apply to q-analogs (and perhaps in greater generality to all sorts of algebraic gadgets).

As part of a project with Nati Linial and Yuval Peled I was interested in finding a *k*-dimensional simplicial complex on *k(k+1)* vertices with a complete *(k-1)*-dimensional skeleton and vanishing rational homology, such that every *(k-1)*-face is included in the same number of *k*-faces. (This “same number” must be *k*.) Better still, I want all links of *i*-faces to be combinatorially the same. For *k*=2 the 6-vertex triangulation of the real projective plane is an example, but I did not have any other example. I asked about it on MathOverflow and GNiklasch identified a remarkable example for *k*=3. (And there are some hopes for *k*=4.) Actually, I need to devote a post to MathOverflow experiences. I got answers there to several problems that had intrigued me for decades.

One more thing: Daniela Kühn and Deryk Osthus have been involved in recent years (sometimes with coauthors) in knocking out some very important problems in graph theory and extremal combinatorics. Their ICM 2014 survey describes some of their work related to Hamiltonian cycles, including their solution of Kelly’s famous conjecture.


I am quite fond of (and a bit addicted to) Nate Silver’s site FiveThirtyEight. Silver’s models tell us the probability that Hillary Clinton will win the election. It is very difficult to understand how the model relates to reality. What does it even mean that Clinton’s chances to win are 81.5%? One thing we can do with more certainty is to compare the predictions of one model to those of another model.

Some data from Nate Silver. Comparing the chance of winning with the chance of winning the popular vote accounts for “aggregation of information”; the probability of a **recount** accounts for noise sensitivity. The computation of the winning probabilities themselves is also similar in spirit to the study of noise sensitivity/stability.

This data is a month old. Today, Silver gives a probability above 80% for a Clinton victory.

Given two candidates, “zero” and “one,” and a fixed p > 1/2, suppose that every voter votes for “one” with probability p and for “zero” with probability 1-p, and that these events are statistically independent. Asymptotically complete aggregation of information means that with high probability (for large populations) “one” will win.
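This aggregation is easy to check numerically; here is a minimal Monte Carlo sketch (the population sizes, p = 0.55, and trial count are arbitrary illustrations):

```python
import random

# Each of n independent voters votes "one" with probability p > 1/2;
# estimate the probability that "one" wins the majority vote.
def majority_wins(n, p, trials=2000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ones = sum(rng.random() < p for _ in range(n))
        wins += ones > n - ones
    return wins / trials

# As n grows, "one" wins with probability tending to 1.
for n in (11, 101, 1001):
    print(n, majority_wins(n, p=0.55))
```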

Aggregation of information for the majority rule was studied by Condorcet in what is known as Condorcet’s Jury Theorem. The US electoral rule, which is a two-tier majority with some weights, also aggregates information, but in a somewhat weaker way.

The data in Silver’s forecast allows one to estimate aggregation of information based on actual polls, which give different probabilities for voters in different states. This is reflected by the relation between the probability of winning and the probability of winning the popular vote. Silver’s data allows us to see, via this comparison, whether the simplistic models behave similarly to the models based on actual polls.

We talked about Condorcet’s Jury theorem in this 2009 post on social choice.

**Marie Jean Nicolas Caritat, marquis de Condorcet (1743-1794)**

Suppose that the voters vote at random and each voter votes for each candidate with probability 1/2 (again independently). One property that we ask of a voting method is that the outcome of the election be robust to noise of the following kind: flip each ballot with probability *t*, for small *t* > 0. “Noise stability” means that if *t* is small then the probability that such random errors in counting the votes change the identity of the winner is small as well. The majority rule is noise stable and so is the US election rule (but not as much).
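Noise stability of the majority rule can also be seen in a quick simulation (again a sketch; the electorate size and noise levels below are arbitrary):

```python
import random

# Voters vote uniformly at random; each ballot is then flipped with
# probability t. Estimate how often the flips change the majority winner.
def winner_flip_prob(n, t, trials=2000, seed=1):
    rng = random.Random(seed)
    changed = 0
    for _ in range(trials):
        votes = [rng.random() < 0.5 for _ in range(n)]
        noisy = [v != (rng.random() < t) for v in votes]  # flip w.p. t
        changed += (sum(votes) * 2 > n) != (sum(noisy) * 2 > n)
    return changed / trials

# For majority the flip probability shrinks with t (roughly like sqrt(t)).
for t in (0.1, 0.01, 0.001):
    print(t, winner_flip_prob(1001, t))
```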

How relevant is noise sensitivity/stability to actual elections? One way to deal with this question is to compare noise sensitivity based on the simple model for voting and for errors to noise sensitivity for the model based on actual polls. Most relevant is Silver’s probability of a “recount.”

Nate Silver computes the probability of victory for every candidate based on running many “noisy” simulations based on the outcomes of the polls. (The way different polls are analyzed and how all the polls are aggregated together to give a model for voters’ behavior is a separate interesting story.)

We talked about noise stability and elections in this 2009 post (and other places as well).

The Banzhaf power index is the probability that a voter is pivotal (namely, her vote can determine the outcome of the election), based on each voter voting with equal probability for each candidate. The Shapley-Shubik power index is the probability that a voter is pivotal under a different a priori distribution for the individual voters (under which the votes are positively correlated). Nate Silver computes certain power indices based on the distribution of votes in each state as described by his model. Of course, voters in swing states have more power. It could be interesting to compare the properties of the abstract power indices and the more realistic ones from FiveThirtyEight. For example, the Banzhaf power indices sum up to roughly the square root of the size of the population, while the Shapley-Shubik power indices sum up to one. It would be interesting to check the sum of pivotality probabilities under Silver’s model. (I’d guess that Silver’s model is closer to the Shapley-Shubik behavior.)
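Banzhaf pivot probabilities are easy to compute exactly for a small weighted majority game. The weights and quota below are made up purely for illustration and have nothing to do with Silver’s model:

```python
from itertools import product

# Exact Banzhaf pivotality in a weighted majority game: a voter is
# pivotal when flipping her vote changes the outcome, with all voters
# voting independently and uniformly at random.
def banzhaf(weights, quota):
    n = len(weights)
    pivots = [0] * n
    for profile in product([0, 1], repeat=n):  # all 2^n vote profiles
        total = sum(w for w, v in zip(weights, profile) if v)
        for i in range(n):
            flipped = total + (weights[i] if profile[i] == 0 else -weights[i])
            if (total >= quota) != (flipped >= quota):
                pivots[i] += 1
    return [p / 2 ** n for p in pivots]

# Toy "electoral" weights for four states; quota is a majority of 9 votes.
print(banzhaf([4, 2, 2, 1], quota=5))  # [0.75, 0.25, 0.25, 0.25]
```

Note that the pivot probabilities sum to 1.5 here, not to 1: unlike the Shapley-Shubik indices, Banzhaf indices are not normalized, which is the contrast drawn in the paragraph above.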

We talked about elections, coalition forming and power measures here, here and here.

In some earlier post we considered (but *did not* recommend) the HEX election rule. FiveThirtyEight provides a tool to represent the states of the US on a HEX board where sizes of states are proportional to the number of electoral votes.

According to the HEX rule one candidate wins by forming a continuous east-west path of winning states, and the other wins by blocking every such path or, equivalently, by forming a north-south path of winning states. The HEX rule is not “neutral” (symmetric under permuting the candidates).

If we ask for a winning north-south path for red and an east-west path for blue, then red wins. For an east-west blue path, much attention should be given to Arizona and Kansas.

If we ask for winning a north-south path for blue and an east-west path for red then blue wins and the Reds’ best shot would be to try to gain Oregon.

Now with the recent rise of the democratic party in the polls it seems possible that we will witness two disjoint blue north-south paths (with Georgia) as well as a blue east-west path. For a percolation-interested democratically-inclined observer (like me), this would be beautiful.

One way to consider the two basic properties of the majority rule as forms of stability to errors is as follows:

a) (Information aggregation, reformulated) If all voters vote for the better candidate and each ballot is then flipped with some probability *t* < 1/2, then with high probability, as the number of voters grows, the better candidate still wins.

We can also consider a weak form of information aggregation where *t* is a fixed small real number. One way to think about this property is to consider encoding a bit by a string of *n* identical copies; decoding using the majority rule then has good error-correction capabilities.

b) (Noise stability) If all voters vote at random (independently, with probability 1/2 for each candidate) and each ballot is then flipped with some small probability *t*, then with high probability (as *t* gets smaller) this will not change the winner.

The “anomaly of majority” refers to these two properties of the majority rule which in terms of the Fourier expansion of Boolean functions are in tension with each other.

It turns out that for a sequence of voting rules, information aggregation is equivalent to the property that the maximum Shapley-Shubik power of the players tends to zero. (This is a theorem I proved in 2002; the quantitative relations are weak and not optimal.) Noise stability implies a positive correlation with some weighted majority rule, and it is equivalent to an approximate low-degree Fourier representation. (These are results from 1999 by Benjamini, Schramm, and me.) Aggregation of information when there are two candidates implies a phenomenon called indeterminacy when there are many candidates.

The anomaly of majority is important for the understanding of why classical information and computation is possible in our noisy world.

Frank Wilczek, one of the greatest physicists of our time, wrote a 2015 paper about future physics in which he (among many other interesting things) predicts that quantum computers will be built! While somewhat unimpressed by the factoring of large integers, Wilczek is fascinated by the possibility that

A quantum mind could experience a superposition of “mutually contradictory” states

Now, imagine **quantum elections** where the desired outcome of the election is going to be a superposition of Hillary and Donald (or of Hillary’s and Donald’s states of mind, if you wish). For example, **|Hillary>** PLUS **|Donald>**.

Can we have a quantum voting procedure which has both a weak form of information aggregation and noise stability? The weak form of information aggregation amounts to the ability to correct a small fraction of random errors. Noise stability amounts to a decoding procedure based on low-degree polynomials. Such procedures are unavailable, and proving that they do not exist (or disproving it) is on my “to do” list.

The fact that no such quantum mechanisms are available appears to be important for the understanding of why robust quantum information and quantum computation is not possible in our noisy world!

Quantum election and a quantum Arrow’s theorem were considered in the post “Democrat plus Republican over the square-root of two” by

One last point. I learned about Nate Silver from my friend Greg Kuperberg, probably from his MathOverflow answer to a question about mathematics and social science. There, Greg wrote, referring to the 2008 elections: “The main person at this site, Nate Silver, has hit 50 home runs in the subject of American political polling.” Indeed, in the 2008 elections Silver correctly predicted the winner in each of the 50 states of the US. This is clearly impressive, but does it reflect Silver’s superior methodology? Or him being lucky? Or does it perhaps suggest some problems with the methodology? (Or some combination of all answers?)

One piece of information that I don’t have is the probabilities Silver assigned in each state in 2008. Of course, these probabilities are not independent, but based on them we can estimate the expected number of mistakes. (In the 2016 election the expected number of mistakes in state outcomes is today above five.) Also here, because of dependencies, the expected value accounts for some substantial small probability of many errors occurring simultaneously. Silver’s methodology allows one to estimate the actual distribution of “for how many states will the predicted winner lose?” (This estimate is not given on the site.)

Now, suppose that the number of such errors is systematically lower than the predicted number of errors. If this is not due to luck, it may suggest that the probabilities for individual states are tilted toward the middle. (It need not necessarily have bearing on the presidential probabilities.)

One mental experiment I am fond of asking people (usually before elections) is this: Suppose that just a minute before the votes are counted you can change the outcome of the election (say, the identity of the winner, or even the entire distribution of ballots) according to your own preferences. Let’s assume that this act will be completely secret. Nobody else will ever know it. Will you do it?

In 2008 we ran a post with a poll about it.

We can run a new poll specific to the US 2016 election.

I really like days of elections and their special atmosphere in Israel where I try never to miss them, and also in the US (I often visit the US on Novembers). I also believe in democracy as a value and as a tool. Often, I don’t like the results but usually I can feel happy for those who do like the results. (And by definition, in some sense, most people do like the outcomes.)

And here is a post about democracy in talmudic teachings.

Below the fold, my own opinion on the coming US election.

**The choice as I see it**


Live streaming for Avifest is available here. The program is here. Following the first two lectures I can attest that the technical quality of the broadcast is very good and the scientific quality of the lectures is superb. As this is posted, Dick Karp has started his lecture. Go for it!!


Ladies and gentlemen, A midrasha (school) in honor of Alex Lubotzky’s 60th birthday will take place from November 6 – November 11, 2016 at the Israel Institute for Advanced Studies, the Hebrew University of Jerusalem. Don’t miss the event! And read this:

“Groups have always played a central role in the different branches of Algebra, yet their importance goes far beyond the limits of algebraic research. One of the most significant examples of this is the work of Alex Lubotzky. Over the last 35 years, Alex has developed and applied group-theoretic methods to different areas of mathematics including combinatorics, computer science, geometry, and number theory. His research led the way in the study of expander graphs, p-adic Lie groups, profinite groups, subgroup growth, and many geometric counting problems. The 20th Midrasha, dedicated to Alex’s 60th birthday, will include lectures from leading mathematicians in these fields presenting their current work.”

My friendship with Alex goes back well over forty years; we shared exotic experiences on the Jordan River and the Amazon River, shared apartments at Yale, taught a course together five times, and more.
