Real Analysis Introductory Mini-courses at Simons Institute

The Real Analysis ‘Boot Camp’ included three excellent mini-courses.

Inapproximability of Constraint Satisfaction Problems (5 lectures)
Johan Håstad (KTH Royal Institute of Technology)

(Lecture I, Lecture II, Lecture III, Lecture IV, Lecture V)


Unlike at more traditional ‘boot camps’, Johan rewarded answers and questions with chocolates (these are, alas, unavailable to the video audience).

Starting from the PCP-theorem (which we will take as given) we show how to design and analyze efficient PCPs for NP-problems. We describe how an efficient PCP using small amounts of randomness can be turned into an inapproximability result for a maximum constraint satisfaction problem where each constraint corresponds to the acceptance criterion of the PCP. We then discuss how to design efficient PCPs with perfect completeness in some interesting cases like proving the hardness of approximating satisfiable instances of 3-Sat.
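For flavor, here is a small Python sketch (my own illustration, not code from the lectures) of the kind of constraint satisfaction problem such a PCP yields. In Håstad's classic three-query PCP the acceptance criterion is a parity check on three proof bits, so each check becomes a Max-3-LIN constraint: read three bits and accept iff their XOR equals a prescribed bit.

```python
import random

def random_max3lin_instance(n, m, seed=0):
    """Generate m parity constraints over n Boolean variables.
    Each constraint (a, b, c, rhs) demands x_a XOR x_b XOR x_c == rhs,
    mirroring the acceptance criterion of a 3-query parity-based PCP."""
    rng = random.Random(seed)
    return [(rng.randrange(n), rng.randrange(n), rng.randrange(n), rng.randrange(2))
            for _ in range(m)]

def fraction_satisfied(instance, assignment):
    """Fraction of constraints satisfied by a 0/1 assignment (list indexed by variable)."""
    sat = sum(1 for a, b, c, rhs in instance
              if assignment[a] ^ assignment[b] ^ assignment[c] == rhs)
    return sat / len(instance)
```

Håstad's theorem says that for this CSP it is NP-hard to distinguish instances where a (1 − ε)-fraction of the constraints can be satisfied from those where no assignment satisfies more than a (1/2 + ε)-fraction.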

We go on to discuss gadget construction and how to obtain optimal reductions between approximation problems. We present Chan’s result on how to take products of PCPs to get hardness for very sparse CSPs and give some preliminary new results using these predicates as a basis for a gadget reduction.

Finally we discuss approximation in a different measure, and in particular the following problem. Given a (2k+1)-CNF formula which admits an assignment that satisfies k literals in each clause, is it possible to efficiently find a standard satisfying assignment?
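The promise in this question is easy to state in code. The sketch below (my own illustration, using DIMACS-style signed literals: +i means x_i, −i means NOT x_i) checks whether an assignment satisfies at least k literals in every clause:

```python
def count_satisfied(clause, assignment):
    """Number of literals in the clause made true by the assignment.
    Literals are DIMACS-style nonzero ints: +i means x_i, -i means NOT x_i.
    The assignment maps variable index -> bool."""
    return sum(1 for lit in clause if assignment[abs(lit)] == (lit > 0))

def has_k_promise(cnf, assignment, k):
    """Does the assignment satisfy at least k literals in every clause?"""
    return all(count_satisfied(clause, assignment) >= k for clause in cnf)
```

For k = 1 this is ordinary satisfiability; the question above asks whether the stronger promise (k satisfied literals per clause of width 2k+1) makes an ordinary satisfying assignment efficiently findable.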

Analytic Methods for Supervised Learning (4 lectures)
Adam Klivans (University of Texas, Austin)

(Lecture I, Lecture II, Lecture III, Lecture IV) There is also an additional related lecture by Adam on Moment matching polynomials.

In this mini-course we will show how to use tools from analysis and probability (e.g., contraction, surface area and limit theorems) to develop efficient algorithms for supervised learning problems with respect to well-studied probability distributions (e.g., Gaussians). One area of focus will be understanding the minimal assumptions needed for convex relaxations of certain learning problems (thought to be hard in the worst-case) to become tractable.

Introduction to Analysis on the Discrete Cube (4 lectures)
Krzysztof Oleszkiewicz (University of Warsaw)

(Lecture I, Lecture II, Lecture III, Lecture IV) Here are the slides for the mini-course, which contain material for 1-2 additional lectures.

The basic notions and ideas of analysis on the discrete cube will be discussed, in an elementary and mostly self-contained exposition. These include the Walsh-Fourier expansion, random walk and its connection to the heat semigroup, hypercontractivity and related functional inequalities, influences, the invariance principle and its application to the Majority is Stablest problem. The mini-course will also contain some other applications and examples, as well as several open questions.
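Two of these basic notions, the Walsh-Fourier expansion and influences, are simple enough to compute by brute force on small cubes. A minimal Python sketch (my own illustration):

```python
from itertools import product

def fourier_coefficients(f, n):
    """Walsh-Fourier coefficients fhat(S) = E_x[ f(x) * prod_{i in S} x_i ]
    of f: {-1,1}^n -> R under the uniform measure, as {S: fhat(S)}."""
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for mask in range(2 ** n):
        S = tuple(i for i in range(n) if (mask >> i) & 1)
        total = 0
        for x in points:
            chi = 1
            for i in S:            # chi_S(x) = product of coordinates in S
                chi *= x[i]
            total += f(x) * chi
        coeffs[S] = total / len(points)
    return coeffs

def influence(f, n, i):
    """Influence of coordinate i: Pr_x[ f(x) != f(x with bit i flipped) ]."""
    points = list(product([-1, 1], repeat=n))
    flips = sum(1 for x in points
                if f(x) != f(x[:i] + (-x[i],) + x[i + 1:]))
    return flips / len(points)
```

For 3-bit majority this recovers the classical expansion Maj3 = (x1 + x2 + x3)/2 − x1·x2·x3/2, with each coordinate having influence 1/2.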