Recent Progress on Derandomizing Space-Bounded Computation

Survey paper by William M. Hoza


Read the paper: ECCC · BEATCS

Abstract (for specialists)

Is randomness ever necessary for space-efficient computation? It is commonly conjectured that $\mathsf{L} = \mathsf{BPL}$, meaning that halting decision algorithms can always be derandomized without increasing their space complexity by more than a constant factor. In the past few years (say, from 2017 to 2022), there has been some exciting progress toward proving this conjecture. Thanks to recent work, we have new pseudorandom generators (PRGs), new black-box derandomization algorithms (generalizations of PRGs), and new non-black-box derandomization algorithms. This article is a survey of these recent developments. We organize the underlying techniques into four overlapping themes:

  1. The iterated pseudorandom restrictions framework for designing PRGs, especially PRGs for functions computable by arbitrary-order read-once branching programs.
  2. The inverse Laplacian perspective on derandomizing $\mathsf{BPL}$ and the related concept of local consistency.
  3. Error reduction procedures, including methods of designing low-error weighted pseudorandom generators (WPRGs).
  4. The continued use of spectral expander graphs in this domain via the derandomized square operation and the Impagliazzo-Nisan-Wigderson PRG (STOC 1994).

We give an overview of these ideas and their applications, and we discuss the challenges ahead.
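To fix notation for the objects mentioned above (these are the standard definitions; the symbols below are my own choices rather than quotations from the survey): a PRG with error $\varepsilon$ for a class $\mathcal{F}$ of functions $f \colon \{0,1\}^n \to \{0,1\}$ is a map $G \colon \{0,1\}^s \to \{0,1\}^n$ such that, for every $f \in \mathcal{F}$,

$$\left| \mathop{\mathbb{E}}_{x \in \{0,1\}^s} \left[ f(G(x)) \right] - \mathop{\mathbb{E}}_{y \in \{0,1\}^n} \left[ f(y) \right] \right| \leq \varepsilon.$$

A WPRG is a pair $(G, \rho)$ with $\rho \colon \{0,1\}^s \to \mathbb{R}$ satisfying

$$\left| \mathop{\mathbb{E}}_{x \in \{0,1\}^s} \left[ \rho(x) \cdot f(G(x)) \right] - \mathop{\mathbb{E}}_{y \in \{0,1\}^n} \left[ f(y) \right] \right| \leq \varepsilon.$$

Because the weights $\rho(x)$ may be negative, WPRGs can achieve a much smaller error $\varepsilon$ than known PRG constructions with comparable seed length $s$, which is what makes them useful in the error reduction procedures of theme 3.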
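As a quick illustration of the inverse Laplacian perspective (theme 2), here is one standard way to set it up; the exact conventions in the survey may differ. Model a length-$n$, width-$w$ read-once branching program as a layered graph on $(n+1)w$ vertices, and let $W$ be its random-walk transition matrix, which is strictly block upper triangular and hence nilpotent ($W^{n+1} = 0$). The Laplacian $L = I - W$ is therefore invertible, with

$$L^{-1} = (I - W)^{-1} = I + W + W^2 + \cdots + W^n,$$

and the entry of $L^{-1}$ indexed by the start vertex and the accepting vertex in the final layer is exactly the program's acceptance probability on a uniformly random input. Derandomizing $\mathsf{BPL}$ thus amounts to deterministically approximating a single entry of an inverse Laplacian in small space.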

Not-so-abstract (for curious outsiders)

⚠️ This summary might gloss over some important details.

A "randomized" algorithm tosses coins to make decisions, whereas a "deterministic" algorithm doesn't use any randomness. In practice, randomized algorithms are often more efficient than deterministic algorithms. However, all else being equal, deterministic algorithms are preferable, because we don't always have access to random bits. "Derandomization" is the art of converting randomized algorithms into deterministic algorithms without making them significantly worse in other respects.

For several decades now, theoretical computer scientists have been trying to prove that every algorithm can be derandomized without significantly increasing the amount of memory that the algorithm uses. This is an expository paper that summarizes the progress that researchers have made on this problem in the last few years.

I posted a manuscript online in August 2022; the paper was published in the October 2022 issue of the Bulletin of the EATCS (in Michal Koucký's "Computational Complexity" column). The ECCC version and the BEATCS version are essentially the same except for formatting.


Expository material:

Video from my presentation at the Institute for Advanced Study Computer Science and Discrete Mathematics Seminar (November 2023). Here are the slides from that presentation. See also these slides from my earlier presentation for the University of Warwick Complexity Meetings (November 2022), which follow the article more closely. I used very similar slides for presentations at the University of Michigan Theory Seminar (November 2022), the Columbia Theory Seminar (March 2022), and Oberwolfach Meeting 2146 on Complexity Theory (November 2021). Note: The Columbia and Oberwolfach presentations predate the article.


What others think: