Hitting Sets Give Two-Sided Derandomization of Small Space

By Kuan Cheng and William M. Hoza


Read the paper: ToC | ECCC | CCC proceedings

Abstract (for specialists)

A hitting set is a "one-sided" variant of a pseudorandom generator (PRG), naturally suited to derandomizing algorithms that have one-sided error. We study the problem of using a given hitting set to derandomize algorithms that have two-sided error, focusing on space-bounded algorithms. For our first result, we show that if there is a log-space hitting set for polynomial-width read-once branching programs (ROBPs), then not only does $\mathbf{L}=\mathbf{RL}$ hold, but $\mathbf{L}=\mathbf{BPL}$ as well. This answers a question raised by Hoza and Zuckerman (SICOMP 2020).
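
For reference, the standard definitions at play here are as follows: a generator $G \colon \{0,1\}^s \to \{0,1\}^n$ with seed length $s$ is an $\varepsilon$-PRG for a class $\mathcal{F}$ of programs if

$$\left| \mathbb{E}_{x \sim U_n}[f(x)] - \mathbb{E}_{y \sim U_s}[f(G(y))] \right| \le \varepsilon \quad \text{for every } f \in \mathcal{F},$$

whereas a set $H \subseteq \{0,1\}^n$ is an $\varepsilon$-hitting set for $\mathcal{F}$ if every $f \in \mathcal{F}$ with $\mathbb{E}[f] > \varepsilon$ accepts at least one $x \in H$. The image of any $\varepsilon$-PRG is an $\varepsilon$-hitting set, but a hitting set on its own only certifies acceptance in a one-sided way, which is what makes two-sided derandomization from hitting sets nontrivial.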

Next, we consider constant-width ROBPs. We show that if there are log-space hitting sets for constant-width ROBPs, then given black-box access to a constant-width ROBP $f$, it is possible to deterministically estimate $\mathbb{E}[f]$ to within $\pm \varepsilon$ in space $O(\log(n/\varepsilon))$. Unconditionally, we give a deterministic algorithm for this problem with space complexity $O(\log^2 n + \log(1/\varepsilon))$, slightly improving over previous work.
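
For intuition, here is a minimal Python sketch (not from the paper) of the baseline estimation procedure that a PRG enables, and that reductions of this kind try to recover from weaker objects: enumerate all seeds of a hypothetical generator `prg` and average the black-box program's outputs. If `prg` is an $\varepsilon$-PRG for a class containing $f$, the result is within $\pm \varepsilon$ of $\mathbb{E}[f]$; the code is written for clarity, not for the logarithmic space bound discussed above.

```python
from itertools import product

def estimate_expectation(f, prg, seed_length):
    """Deterministically estimate E[f] for a black-box 0/1 function f
    by averaging f over all outputs of a generator. Accurate to within
    +/- epsilon whenever prg is an epsilon-PRG for a class containing f.
    (Both f and prg are hypothetical stand-ins for illustration.)"""
    total = 0
    for seed in product((0, 1), repeat=seed_length):
        total += f(prg(seed))
    return total / 2 ** seed_length
```

With the trivial generator that outputs its own seed (and seed length equal to the input length $n$), this degenerates to brute-force enumeration of all $2^n$ inputs, which is accurate but far too slow; the point of a short seed is to make the loop feasible.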

Finally, we investigate the limits of this line of work. Perhaps the strongest reduction of this kind that one could hope for would say that for every explicit hitting set, there is an explicit PRG with similar parameters. In the setting of constant-width ROBPs over a large alphabet, we prove that establishing such a strong reduction is at least as difficult as constructing a good PRG outright. Quantitatively, we prove that if the strong reduction holds, then for every constant $\alpha > 0$, there is an explicit PRG for constant-width ROBPs with seed length $O(\log^{1+\alpha} n)$. Along the way, we unconditionally construct an improved hitting set for ROBPs over a large alphabet.

Not-so-abstract (for curious outsiders)

⚠️ This summary might gloss over some important details.

A "decision problem" is a problem where the answer is always "yes" or "no," e.g., the problem of determining whether a given number is prime. Suppose there's a randomized algorithm $A$ that solves some decision problem with high probability using a small amount of memory. Does that automatically mean there's a deterministic algorithm that solves the same problem using a similar amount of memory? In other words, is it always possible to "derandomize" any low-memory decision algorithm? Computer scientists think so, but we're not sure how to prove it.

If $A$ has the special feature that it never gives false positives, derandomization is potentially easier. Some of the most promising approaches treat $A$ as a "black box," meaning that the deterministic algorithm would just simulate $A$ in a few carefully crafted scenarios and then figure out the correct answer based on what $A$ outputs in those scenarios. In this paper, we prove that if it is possible to derandomize all low-memory decision algorithms that never give false positives in a black-box manner, then it is possible to derandomize all low-memory decision algorithms, whether they give false positives or not.
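
As a toy illustration of "simulating $A$ in a few carefully crafted scenarios" (this is the standard majority-vote template, not the paper's actual reduction, which has to start from a weaker one-sided guarantee), consider:

```python
def derandomized_decision(A, x, scenarios):
    """Decide input x deterministically: run the randomized algorithm
    A(x, r) on each string r in a fixed list of "scenarios" and output
    the majority answer. This recovers the correct answer whenever A is
    correct with probability well above 1/2 on truly random strings and
    the scenarios are crafted so that A cannot tell them apart from
    truly random strings."""
    yes_votes = sum(1 for r in scenarios if A(x, r))
    return yes_votes > len(scenarios) // 2
```

The hard part, and the subject of this paper, is producing such a list of scenarios deterministically in small space when all one is given is a guarantee about algorithms that never give false positives.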

We posted a manuscript online in February 2020; I presented the paper at CCC in July 2020; the journal version was published in the ToC special issue for CCC 2020 in September 2022. The exposition in the ToC version is improved in several minor ways compared to the CCC proceedings version and the ECCC version. The latter two versions are the same except for formatting.


Expository material:

[Video] My prerecorded presentation for CCC (July 2020). Here are the slides from that presentation. See also these other slides that I used for a Zoom presentation at the UIUC CS Theory Seminar (November 2022).

🎓 The main result of the paper is explained in my PhD dissertation (Section 3.3).

🔭 I wrote a survey that discusses the main result of the paper (Section 3).

👨‍🏫 The main result of the paper is covered in these lecture notes for a course taught by Avishay Tal (Fall 2021).

