Thanks to Irit Dinur for telling me about the following:

Oded Goldreich’s recent choice is about the paper: Circuit Lower Bounds for Nondeterministic Quasi-Polytime: An Easy Witness Lemma for NP and NQP, by Cody Murray and Ryan Williams.

Ryan Williams proved seven years ago that ACC does not contain NEXP. The new paper shows that ACC does not contain NQP (nondeterministic quasi-polynomial time). A huge improvement!

(ACC stands for the class of Boolean functions that can be computed by bounded-depth polynomial-size circuits with Boolean gates and mod m gates for a fixed m, such as mod 6 gates.)

Oded’s next choice is a paper by Roei Tell pointing out a remarkable direct consequence of the Murray-Williams result for the hardness vs. randomness agenda. ~~If the goal is to show that **BPP=P** implies that **NP ≠ P**, then if earlier we had results 10% this strong, the consequence is perhaps 70% as strong.~~

**Remarks:** Above there are links to three posts (by Oded, Scott, and Dick and Ken) on Ryan’s older result. I wrote several posts on bounded-depth circuits of several kinds, here (with an interesting comment by Ryan!), here, here, and here.

Update: Here is a detailed blog post on GLL about the breakthrough: https://rjlipton.wordpress.com/2018/01/23/progress-on-the-frontier/


It is conjectured that the majority function does not belong to ACC. (So ACC would not include P, and not even NC.) I am curious what the consequences of such a statement would be for the Hardness vs Randomness program or for other questions in computational complexity.

These are of course nice results, but I strongly disagree with the statement about “almost” proving $P \neq NP$ in the title of Roei Tell’s paper. The actual conclusion is that NTIME[n^f(n)] is not in P/poly. However, by the non-deterministic time hierarchy theorem we know unconditionally that NTIME[n^f(n)] is not in NP, hence it is not in P. Given this, it is very strange to interpret the conclusion that “NTIME[n^f(n)] is not in P/poly” as “almost $P \neq NP$”.
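Schematically, the comparison drawn in this comment can be written out as follows (a rough summary, with $f$ a super-constant function as in the paper):

```latex
% Unconditionally, by the nondeterministic time hierarchy theorem:
\[
\mathrm{NP} \subsetneq \mathrm{NTIME}[n^{f(n)}],
\quad\text{hence}\quad
\mathrm{NTIME}[n^{f(n)}] \not\subseteq \mathrm{P}.
\]
% The conditional conclusion under discussion is the stronger, non-uniform bound:
\[
\mathrm{NTIME}[n^{f(n)}] \not\subseteq \mathrm{P/poly},
\]
% which by itself does not separate NP from P.
```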

Yes, I find the matter confusing, so I deleted my description.

Hi Bogdan, maybe you or some reader can clarify the connection to hardness vs randomness better. My understanding was that the basic implication in this theory, going back to Nisan and Wigderson, is of the form: NP not equal P (or a stronger hardness result) implies BPP=P (or a weaker derandomization result). (There were also some results in the opposite direction, but of a less convincing nature.) At least the title “Proving that prBPP=prP is as hard as “almost” proving that P neq NP” suggests that the new result goes in the opposite direction – namely, that proving a version of BPP=P implies proving a somewhat weak form of P not equal NP. (But it is hard to tell with all the negations.)

Hi Gil,

I think that it’s better to think of the connection between P=BPP and circuit lower bounds (rather than P != NP). Hardness-randomness (e.g., Nisan-Wigderson) implies that very strong circuit lower bounds will yield that P=BPP, and weak results in the other direction have been gradually developed in the last two decades. This new development is a strong result in the other direction: A version of P=BPP (and even much weaker derandomization results) imply very strong circuit lower bounds, so strong that they are very close quantitatively to bounds that will yield “P != NP”.
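The two directions described in this comment can be summarized side by side (a schematic sketch; the precise hypotheses in the classical hardness-to-randomness results and in the new paper are more delicate than this):

```latex
% Classical hardness-to-randomness direction (Nisan-Wigderson, Impagliazzo-Wigderson):
\[
\mathrm{E} \not\subseteq \mathrm{SIZE}\!\left(2^{\varepsilon n}\right)
\ \text{for some } \varepsilon > 0
\;\Longrightarrow\;
\mathrm{BPP} = \mathrm{P}.
\]
% The new randomness-to-hardness direction (via Murray-Williams):
\[
\mathrm{prBPP} = \mathrm{prP}
\;\Longrightarrow\;
\mathrm{NTIME}[n^{f(n)}] \not\subseteq \mathrm{P/poly}
\ \text{for some } f(n) = \omega(1).
\]
```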

As to the meaning of the latter (“very strong”) lower bounds, and whether they are actually “almost P != NP”, see my response to Bogdan above…

Roei Tell

Dear Roei, many thanks! My impression from the (very) early results (that Mike Saks told me about) in the other direction is that they were of the form [Derandomization implies a new reduction between two hardness results] more than [Derandomization implies hardness]. But it looks like matters have changed!!!

Hi Bogdan,

I actually agree with you that I should have been more careful with the title (and will upload a revision in a few days to clarify the exact issue that you are raising).

I think that one can interpret the lower bound “NTIME[n^f(n)] not in P/poly” in two ways. The first is as a predecessor of “NP not in P/poly”, as hinted by the title, and the second is as a predecessor of “DTIME[n^f(n)] not in P/poly”, as you suggest. Both seem valid to me, and I personally see no convincing reason that extending the lower bound in the latter way will be easier than extending it in the former way. (*)

Indeed it will be great to have any indication as to whether it’s “easier” to prove such an extended time-hierarchy theorem (for algorithms-vs-circuits) or to prove “NP not in P/poly”. Note that so far our circuit lower bounds for P/poly tend to crucially rely on non-determinism, and that we’ve been able to make progress in proving that derandomization implies (small!) NTIME lower bounds, but it’s still open whether derandomization implies lower bounds even against EXP.

Roei

(*) In particular, note that when trying to prove either statement (i.e., “DTIME[n^f(n)] not in P/poly” or “NP not in P/poly”) we have to handle two conflicting resources, which of course doesn’t happen with time-hierarchy…

As happens half of the time, I got the implication wrong. Will fix it soon. Update: the implication as I wrote it is opposite to my understanding of hardness vs randomness results but is consistent with the paper’s title. So I deleted my interpretation altogether. More update: Indeed the implication is from the randomness to hardness direction! I cannot tell whether BPP=P remains a realistic hope given that it implies (a weak form of) NP ≠ P. (A plausible interpretation would be that BPP=P is as hopeless to prove as NP ≠ P.) I remember that in the early hardness vs randomness days some people hoped to unconditionally prove BPP=P, but perhaps there is now stronger evidence against such hopes.
