Avi Wigderson gave a great CS colloquium talk at HUJI on Monday (a real auditorium talk with an audience of about 200 people). The title of the talk was
The Value of Errors in Proofs – a fascinating journey from Turing’s 1936 seminal R ≠ RE to the 2020 breakthrough of MIP* = RE
A large part of the talk was devoted to a series of conceptually new forms of proofs, compared to the ordinary notion of proof in mathematics, that came from theoretical computer science starting in the 1980s. They include zero-knowledge proofs (ZKP), interactive proofs (IP), multi-prover interactive proofs (MIP), probabilistically checkable proofs (PCP), and the very recent quantum multi-prover interactive proofs (MIP*). Of course, all these proofs are probabilistic: the prover convinces the verifier(s) that a mathematical statement is correct only with very high probability.
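To give a concrete taste of what an interactive proof looks like, here is a toy Python sketch of the classic graph non-isomorphism protocol (not taken from the talk; the function names and the tiny example graphs are my own illustration). The verifier secretly relabels one of the two graphs at random and challenges the prover to say which graph it came from; an all-powerful prover succeeds every round exactly when the graphs are non-isomorphic, and is caught with probability 1/2 per round otherwise.

```python
import itertools
import random

def graph(pairs):
    """Build an undirected edge set from a list of vertex pairs."""
    return frozenset(frozenset(e) for e in pairs)

def relabel(edges, perm):
    """Apply a vertex relabeling (perm[i] is the new name of vertex i)."""
    return frozenset(frozenset((perm[u], perm[v])) for u, v in edges)

def isomorphic(edges_a, edges_b, n):
    """Brute-force isomorphism check -- the computationally unbounded prover."""
    return any(relabel(edges_a, p) == edges_b
               for p in itertools.permutations(range(n)))

def run_protocol(g0, g1, n, rounds=20):
    """Return True iff the verifier accepts the claim 'g0 and g1 are
    NOT isomorphic' after the given number of challenge rounds."""
    for _ in range(rounds):
        b = random.randrange(2)                    # verifier's secret bit
        perm = tuple(random.sample(range(n), n))   # random relabeling
        challenge = relabel((g0, g1)[b], perm)
        # The honest prover answers with the graph the challenge matches.
        answer = 0 if isomorphic(g0, challenge, n) else 1
        if answer != b:
            return False                           # prover caught: reject
    return True

# Example: a 4-vertex path vs. a triangle plus an isolated vertex
# (these two graphs are not isomorphic, so the verifier always accepts).
path4 = graph([(0, 1), (1, 2), (2, 3)])
triangle = graph([(0, 1), (1, 2), (0, 2)])
```

If the two graphs *are* isomorphic, the challenge carries no information about the verifier's bit, so any prover is caught within a few rounds with overwhelming probability.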
One little disagreement Avi and I have is whether the probabilistic proofs of mathematical statements (from the 1970s) could be regarded as a major new type of mathematical proof coming from TCS. For example, in connection with the efficient primality tests of Solovay–Strassen and Rabin–Miller, Rabin’s paper stated the following theorem:
$2^{400}-593$ is a prime!
At that time, this theorem had only a probabilistic proof: you can be convinced that the statement is correct with very high probability, where the probability depends on some internal randomization. I remember hearing a lecture by Rabin about it in the mid-1970s where he was happy about this new notion of proof (2000 years after Euclid) for a mathematical theorem. (And I was also happy about it.)
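For readers who have not seen how such a probabilistic proof works, here is a minimal sketch of the Miller–Rabin test (my own illustration, not the code from Rabin's paper). Each random base either confirms a necessary condition for primality or exposes the number as composite; for a composite input, at least 3/4 of the bases expose it, so the error probability shrinks exponentially with the number of rounds.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin primality test: if n is composite, each random base
    exposes it with probability >= 3/4, so the test errs with
    probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # handle small factors directly
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # base a witnesses that n is composite
    return True
```

With 40 rounds, a "yes" answer leaves an error probability below $4^{-40}$, which is the sense in which one is "convinced with very high probability."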
OK, now for the disagreement: in my view, Rabin’s proof that $2^{400}-593$ is a prime (and similar results) is indeed a startling new notion of mathematical proof coming from TCS, one that belongs to this series of later discoveries and could even be regarded as one of its starting points.
According to Avi, Rabin’s proof that $2^{400}-593$ is a prime is, in fact, not a proof at all! The ordinary notion of mathematical proof is captured by the complexity class NP, and proofs are simply not relevant for statements that can be verified by efficient algorithms (whether deterministic or randomized).
This debate is 30% semantic and 10% a matter of historical assessment and giving proper credit, but I think it is 50% a serious question about the interface between insights from theoretical computer science and practical reality, and about the interpretation of results from TCS. In this case, it is about the meaning of proofs in mathematics and the practice of proving mathematical theorems.
What do you think?
(Of course, of even greater importance is the interface between insights from TCS and the practical reality of computation.)