\(\renewcommand{\epsilon}{\varepsilon}\) \(\renewcommand{\hat}{\widehat}\) \(\DeclareMathOperator*{\E}{\mathbb{E}}\)



Recent Progress on Derandomizing Space-Bounded Computation

Survey paper by William M. Hoza


Read the paper: ECCC

Abstract (for specialists)

Is randomness ever necessary for space-efficient computation? It is commonly conjectured that \(\mathsf{L} = \mathsf{BPL}\), meaning that halting decision algorithms can always be derandomized without increasing their space complexity by more than a constant factor. In the past few years (say, from 2017 to 2022), there has been some exciting progress toward proving this conjecture. Thanks to recent work, we have new pseudorandom generators (PRGs), new black-box derandomization algorithms (generalizations of PRGs), and new non-black-box derandomization algorithms. This article is a survey of these recent developments. We organize the underlying techniques into four overlapping themes:

  1. The iterated pseudorandom restrictions framework for designing PRGs, especially PRGs for functions computable by arbitrary-order read-once branching programs.
  2. The inverse Laplacian perspective on derandomizing \(\mathsf{BPL}\) and the related concept of local consistency.
  3. Error reduction procedures, including methods of designing low-error weighted pseudorandom generators (WPRGs).
  4. The continued use of spectral expander graphs in this domain via the derandomized square operation and the Impagliazzo-Nisan-Wigderson PRG (STOC 1994).

We give an overview of these ideas and their applications, and we discuss the challenges ahead.

Not-so-abstract (for curious outsiders)

⚠️ This summary might gloss over some important details.

A "randomized" algorithm tosses coins to make decisions, whereas a "deterministic" algorithm doesn't use any randomness. In practice, randomized algorithms are often more efficient than deterministic algorithms. However, all else being equal, deterministic algorithms are preferable, because we don't always have access to random bits. "Derandomization" is the art of converting randomized algorithms into deterministic algorithms without making them significantly worse in other respects.
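To make this concrete, here is a small, purely illustrative sketch (not from the paper; all function names are made up) of the most naive derandomization: run a toy randomized algorithm on every possible sequence of coin tosses and take the majority answer. The exhaustive version is much slower, but its extra memory is just a few counters, which hints at why derandomization is studied through the lens of space.

```python
import itertools
import random

def approx_majority_randomized(x, num_samples=5, rng=random):
    """Toy randomized algorithm: guess whether the bit list x is
    mostly 1s by inspecting a few randomly chosen positions."""
    hits = sum(x[rng.randrange(len(x))] for _ in range(num_samples))
    return 2 * hits > num_samples

def approx_majority_derandomized(x, num_samples=5):
    """Brute-force derandomization: enumerate *every* possible
    sequence of coin tosses and take the majority answer.
    Time blows up exponentially in the number of coins used,
    but the extra memory is only a pair of counters."""
    votes_yes = 0
    total = 0
    for coins in itertools.product(range(len(x)), repeat=num_samples):
        hits = sum(x[i] for i in coins)
        if 2 * hits > num_samples:
            votes_yes += 1
        total += 1
    return 2 * votes_yes > total
```

The deterministic version always returns the randomized algorithm's most likely answer, and it never needs to store more than two numbers at once. The research surveyed here asks how much of that exponential time overhead can be avoided while keeping the memory overhead small.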

For several decades now, theoretical computer scientists have been trying to prove that every algorithm can be derandomized without significantly increasing the amount of memory that the algorithm uses. This is an expository paper that summarizes the progress that researchers have made on this problem in the last few years.

I posted a manuscript online in August 2022.


Expository material:

Slides from my presentation for the Columbia Theory Seminar (March 2022). I used very similar slides for a presentation at Oberwolfach Meeting 2146 on Complexity Theory (November 2021). Note: The presentation predates the article.


What others think: