Real Analysis Introductory Mini-courses at Simons Institute

The Real Analysis ‘Boot Camp’ included three excellent mini-courses.

Inapproximability of Constraint Satisfaction Problems (5 lectures)
Johan Håstad (KTH Royal Institute of Technology)

(Lecture I, Lecture II, Lecture III, Lecture IV, Lecture V)


Unlike more traditional ‘boot camps’, Johan rewarded answers and questions with chocolates (these are, unfortunately, not available to the video audience).

Starting from the PCP theorem (which we will take as given), we show how to design and analyze efficient PCPs for NP problems. We describe how an efficient PCP using a small amount of randomness can be turned into an inapproximability result for a maximum constraint satisfaction problem where each constraint corresponds to the acceptance criterion of the PCP. We then discuss how to design efficient PCPs with perfect completeness in some interesting cases, such as proving the hardness of approximating satisfiable instances of 3-Sat.
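
To put the Max-3-Sat hardness in context (my own illustration, not part of the abstract): a uniformly random assignment satisfies a clause with three distinct literals with probability 7/8, and Håstad's theorem shows that efficiently guaranteeing an essentially better fraction is NP-hard, even on satisfiable instances. A minimal sketch of this random baseline:

```python
# Minimal illustration: the trivial randomized algorithm for Max-3-Sat
# satisfies a 7/8 fraction of the clauses in expectation.
import random

def random_3cnf(num_vars, num_clauses):
    """Each clause is a tuple of three literals over distinct variables; +v means v, -v means NOT v."""
    clauses = []
    for _ in range(num_clauses):
        variables = random.sample(range(1, num_vars + 1), 3)
        clauses.append(tuple(v if random.random() < 0.5 else -v for v in variables))
    return clauses

def fraction_satisfied(clauses, assignment):
    """Fraction of clauses with at least one true literal; assignment maps variable -> bool."""
    def lit_true(lit):
        return assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]
    return sum(any(lit_true(l) for l in clause) for clause in clauses) / len(clauses)

random.seed(0)
clauses = random_3cnf(num_vars=50, num_clauses=2000)
assignment = {v: random.random() < 0.5 for v in range(1, 51)}
print(fraction_satisfied(clauses, assignment))   # typically close to 7/8 = 0.875
```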

We go on to discuss gadget construction and how to obtain optimal reductions between approximation problems. We present Chan’s result on how to take products of PCPs to get hardness for very sparse CSPs and give some preliminary new results using these predicates as a basis for a gadget reduction.

Finally, we discuss approximation in a different measure, and in particular the following problem: given a (2k+1)-CNF formula which admits an assignment that satisfies k literals in each clause, is it possible to efficiently find a standard satisfying assignment?
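
To make the promise in this last question concrete, here is a small sketch (the helper names are mine, not from the lectures) that checks whether a given assignment satisfies at least k literals in every clause of a (2k+1)-CNF; the algorithmic question is whether, given that such an assignment exists, an ordinary satisfying assignment can be found in polynomial time.

```python
# Hypothetical helper, only to illustrate the promise: given a (2k+1)-CNF
# (each clause a tuple of 2k+1 literals) and an assignment, check that every
# clause has at least k true literals.
def satisfies_k_per_clause(clauses, assignment, k):
    """assignment maps variable -> bool; literal +v means v, -v means NOT v."""
    def lit_true(lit):
        return assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]
    return all(sum(lit_true(l) for l in clause) >= k for clause in clauses)

# Example with k = 1: a 3-CNF and an assignment with >= 1 true literal per clause,
# i.e. an ordinary satisfying assignment.
clauses = [(1, -2, 3), (-1, 2, 3), (1, 2, -3)]
assignment = {1: True, 2: True, 3: True}
print(satisfies_k_per_clause(clauses, assignment, k=1))  # True
```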

Analytic Methods for Supervised Learning (4 lectures)
Adam Klivans (University of Texas, Austin)

(Lecture I, Lecture II, Lecture III, Lecture IV); see also an additional related lecture by Adam on Moment matching polynomials.

In this mini-course we will show how to use tools from analysis and probability (e.g., contraction, surface area and limit theorems) to develop efficient algorithms for supervised learning problems with respect to well-studied probability distributions (e.g., Gaussians). One area of focus will be understanding the minimal assumptions needed for convex relaxations of certain learning problems (thought to be hard in the worst-case) to become tractable.
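
As one concrete instance of this theme (a minimal sketch under my own assumptions, not necessarily the algorithm covered in the lectures): fitting a low-degree polynomial to the labels by least squares, which is a convex problem, and thresholding the fit is a standard analytic approach to learning halfspaces under Gaussian marginals.

```python
# Sketch: learn a halfspace sign(w.x) under Gaussian marginals by fitting a
# low-degree polynomial to the labels with least squares and thresholding.
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree=2):
    """All monomials of total degree <= degree, including the constant 1."""
    n_samples, n_dims = X.shape
    cols = [np.ones(n_samples)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n_dims), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n, d = 5000, 5
w = rng.normal(size=d)
X = rng.normal(size=(n, d))              # Gaussian marginals
y = np.sign(X @ w)                       # labels from an unknown halfspace

Phi = poly_features(X, degree=2)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # convex (least-squares) fit

X_test = rng.normal(size=(n, d))
pred = np.sign(poly_features(X_test, degree=2) @ coef)
print("test accuracy:", np.mean(pred == np.sign(X_test @ w)))
```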

Introduction to Analysis on the Discrete Cube (4 lectures)
Krzysztof Oleszkiewicz (University of Warsaw)

(Lecture I, Lecture II, Lecture III, Lecture IV) Here are the slides for the lectures, which contain material for 1-2 additional lectures.

The basic notions and ideas of analysis on the discrete cube will be discussed, in an elementary and mostly self-contained exposition. These include the Walsh-Fourier expansion, random walk and its connection to the heat semigroup, hypercontractivity and related functional inequalities, influences, the invariance principle and its application to the Majority is Stablest problem. The mini-course will also contain some other applications and examples, as well as several open questions.
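
To make two of these basic notions concrete (my own sketch, not taken from the slides): the Walsh-Fourier coefficient of f on the cube {-1,1}^n at a set S is fhat(S) = E_x[f(x) · prod_{i in S} x_i], and the influence of coordinate i is Inf_i(f) = sum of fhat(S)^2 over sets S containing i.

```python
# Walsh-Fourier coefficients and influences of a Boolean function on {-1,1}^n,
# computed by brute force; illustrated on the 3-bit majority function.
import math
from itertools import product, combinations

def fourier_coefficients(f, n):
    cube = list(product([-1, 1], repeat=n))
    coeffs = {}
    for size in range(n + 1):
        for S in combinations(range(n), size):
            coeffs[S] = sum(f(x) * math.prod(x[i] for i in S) for x in cube) / len(cube)
    return coeffs

def influences(coeffs, n):
    return [sum(c * c for S, c in coeffs.items() if i in S) for i in range(n)]

maj3 = lambda x: 1 if sum(x) > 0 else -1            # majority of three +/-1 bits
coeffs = fourier_coefficients(maj3, 3)
print({S: c for S, c in coeffs.items() if c != 0})  # {(0,): 0.5, (1,): 0.5, (2,): 0.5, (0, 1, 2): -0.5}
print(influences(coeffs, 3))                        # [0.5, 0.5, 0.5]
```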

Ryan O’Donnell: Analysis of Boolean Functions

Ryan O’Donnell has begun writing a book about Fourier analysis of Boolean functions, and he is serializing it on a blog entitled Analysis of Boolean Functions. New sections appear on Mondays, Wednesdays, and Fridays.

Besides covering the basic theory, Ryan intends to describe applications in theoretical computer science and other areas of mathematics, including combinatorics, probability, social choice, and geometry.

Besides being a great place to learn this interesting material, actively participating in Ryan’s blog can make you a hero! Don’t miss this opportunity.

Each chapter of Ryan’s book ends with a “highlight” illustrating the use of Boolean analysis in problems where you might not necessarily expect it. In a post over at Computational Complexity, Ryan described some of these highlights in order to give a flavor of the contents:

  • Testing linearity (the Blum-Luby-Rubinfeld Theorem; a small sketch of the test appears after this list)
  • Arrow’s Theorem from Social Choice (and Kalai’s “approximate” version)
  • The Goldreich-Levin Algorithm from cryptography
  • Constant-depth circuits (Linial-Mansour-Nisan’s work)
  • Noise sensitivity of threshold functions (Peres’s Theorem)
  • Pseudorandomness for F_2-polynomials (Viola’s Theorem)
  • NP-hardness of approximately solving linear systems (Håstad’s Theorem)
  • Randomized query complexity of monotone graph properties
  • The (almost-)Polynomial Freiman-Ruzsa Theorem (i.e., Sanders’s Theorem)
  • The Kahn-Kalai-Linial Theorem on influences
  • The Gaussian Isoperimetric Inequality (Bobkov’s proof)
  • Sharp threshold phenomena (Friedgut and Bourgain’s theorems)
  • Majority Is Stablest Theorem
  • Unique Games-hardness from SDP gaps (work of Raghavendra and others)
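
For the first highlight, here is a minimal sketch of the Blum-Luby-Rubinfeld linearity test in its standard form (my own illustration): given query access to f: {0,1}^n -> {0,1}, pick x and y uniformly at random and accept iff f(x) XOR f(y) = f(x XOR y). Linear functions over F_2 always pass, while functions far from every linear function are rejected with noticeable probability.

```python
# Estimate the acceptance probability of the BLR linearity test on f: {0,1}^n -> {0,1}.
import random

def blr_test(f, n, trials=1000):
    accept = 0
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) == f(x ^ y):     # the single BLR check: f(x) + f(y) = f(x + y) over F_2
            accept += 1
    return accept / trials

n = 10
linear = lambda x: bin(x & 0b1011).count("1") % 2              # parity of a fixed subset of bits: linear over F_2
not_linear = lambda x: 1 if bin(x).count("1") > n // 2 else 0  # majority: far from linear

random.seed(0)
print(blr_test(linear, n))       # 1.0
print(blr_test(not_linear, n))   # noticeably below 1
```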