Harry Buhrman, Marten Folkertsma, Ian Mertz, Florian Speelman, Sergii Strelchuk, Sathya Subramanian, Quinten Tupker
TQC 2025 (to appear)
Space complexity is a key field of study in theoretical computer science. In the quantum setting there are clear motivations to understand the power of space-restricted computation, as qubits are an especially precious and limited resource. Recently, a new branch of space-bounded complexity called catalytic computing has shown that reusing space is a very powerful computational resource, especially for subroutines that incur little to no space overhead. While quantum catalysis in an information-theoretic context, and the power of “dirty” qubits for quantum computation, have been studied over the years, these models are generally not suitable for use in quantum space-bounded algorithms, as they either rely on specific catalytic states or destroy the memory being borrowed. We define the notion of catalytic computing in the quantum setting and show a number of initial results about the model. First, we show that quantum catalytic logspace can always be computed quantumly in polynomial time; the classical analogue of this is the largest open question in catalytic computing. This also allows quantum catalytic space to be defined in an equivalent way with respect to circuits instead of Turing machines. We also prove that quantum catalytic logspace can simulate log-depth threshold circuits, a class which is known to contain (and believed to strictly contain) quantum logspace, thus showcasing the power of quantum catalytic space. Finally, we show that both unitary quantum catalytic logspace and classical catalytic logspace can be simulated in the one-clean-qubit model.
|
Yaroslav Alekseev, Yuval Filmus, Ian Mertz, Alexander Smal, Antoine Vinciguerra
MFCS 2025 (to appear)
In a seminal work, Buhrman et al. (STOC 2014) defined the class CSPACE$(s, c)$ of problems solvable in space s with an additional catalytic tape of size c, which is a tape whose initial content must be restored at the end of the computation. They showed that uniform TC$^1$ circuits are computable in catalytic logspace, i.e., CL = CSPACE$(O(\log n), 2^{O(\log n)})$, thus giving strong evidence that catalytic space gives L strictly more power. Their study focuses on an arithmetic model called register programs, which has been a focal point of development since then. Understanding CL remains a major open problem, as TC$^1$ remains the most powerful class known to be contained in it. In this work, we study the power of catalytic space and register programs to compute circuits of larger depth. Using register programs, we show that for every $\epsilon > 0$, $$SAC^2 \subseteq CSPACE\left(O\left(\frac{\log^2 n}{\log \log n}\right), 2^{O(\log^{1+\epsilon} n)}\right)$$ This is an $O(\log \log n)$ factor improvement on the free space needed to compute SAC$^2$, which can be accomplished with near-polynomial catalytic space. We also exhibit non-trivial register programs for matrix powering, which is a further step towards showing e.g. NC$^2$ $\subseteq$ CL.
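To give a flavor of the register-program model mentioned above, here is a minimal Python sketch of a classic "transparent" gadget: using only reversible instructions of the form r += a*b and r += a over a ring, it adds x*y to one register while restoring a helper register to its arbitrary initial content. This is an illustrative textbook gadget, not the construction from the paper; the register names and the modulus are assumptions for the demo.

```python
# Transparent computation of x*y into register c1 over Z_p, using a
# catalytic helper c2 that starts with arbitrary junk and is restored.
# Each step is a reversible register-program instruction.
p = 101  # small prime modulus (assumption for the demo)

def transparent_and(x, y, c1, c2):
    """Return updated (c1, c2) after the 4-instruction gadget."""
    c2 = (c2 + x) % p       # c2 += x
    c1 = (c1 + c2 * y) % p  # c1 += c2*y   -> c1 gains (c2_init + x)*y
    c2 = (c2 - x) % p       # c2 -= x      -> helper restored
    c1 = (c1 - c2 * y) % p  # c1 -= c2*y   -> removes the c2_init*y term
    return c1, c2           # net effect: c1 += x*y, c2 unchanged

# The registers may start with arbitrary contents:
x, y = 7, 9
c1_init, c2_init = 42, 58
c1, c2 = transparent_and(x, y, c1_init, c2_init)
assert c2 == c2_init                # catalytic helper restored exactly
assert c1 == (c1_init + x * y) % p  # c1 gained x*y regardless of junk
```

The key point is that the program works no matter what the helper register initially holds, which is exactly the discipline a catalytic tape imposes.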
|
Aryan Agarwala, Ian Mertz
FOCS 2025 (to appear)
Matching is a central problem in theoretical computer science, with a large body of work spanning the last five decades. However, understanding matching in the time-space bounded setting remains a longstanding open question, even in the presence of additional resources such as randomness or non-determinism. In this work we study space-bounded machines with access to catalytic space, which is additional working memory that is full with arbitrary data that must be preserved at the end of the computation. Despite this heavy restriction, many recent works have shown the power of catalytic space, its utility in designing classical space-bounded algorithms, and surprising connections between catalytic computation and derandomization. Our main result is that bipartite maximum matching (MATCH) can be computed in catalytic logspace (CL) with a polynomial time bound (CLP). Moreover, we show that MATCH can be reduced to the lossy coding problem for NC circuits (LOSSY[NC]). This has consequences for matching, catalytic space, and derandomization:
• Matching: this is the first well-studied subclass of P which is known to compute MATCH, as well as the first algorithm simultaneously using sublinear free space and polynomial time with any additional resources. Thus, it gives a potential path to designing stronger space and time-space bounded algorithms.
• Catalytic space: this is the first new problem shown to be in CL since the model was defined, and one which is extremely central and well studied. Furthermore, it implies a strong barrier to showing CL lies anywhere in the NC hierarchy, and suggests to the contrary that CL is even more powerful than previously believed.
• Derandomization: we give the first class C beyond L for which we exhibit a natural problem in LOSSY[C] which is not known to be in C, as well as a full derandomization of the isolation lemma in CL in the context of MATCH. This also suggests a possible approach to derandomizing the famed RNC algorithm for MATCH.
Our proof combines a number of strengthened ideas from isolation-based algorithms for matching alongside the compress-or-random framework in catalytic computation.
|
Michal Koucký, Ian Mertz, Ted Pyne, Sasha Sami
FOCS 2025 (to appear)
A catalytic machine is a space-bounded Turing machine with additional access to a second, much larger work tape, with the caveat that this tape is full, and its contents must be preserved by the computation. Catalytic machines were defined by Buhrman et al. (STOC 2014), who, alongside many follow-up works, exhibited the power of catalytic space (CSPACE) and in particular catalytic logspace machines (CL) beyond that of traditional space-bounded machines. Several variants of CL have been proposed, including non-deterministic and co-non-deterministic catalytic computation by Buhrman et al. (STACS 2016) and randomized catalytic computation by Datta et al. (CSR 2020). These and other works proposed several questions, such as catalytic analogues of the theorems of Savitch and of Immerman and Szelepcsényi. Catalytic computation was recently derandomized by Cook et al. (STOC 2025), but only in certain parameter regimes. We settle almost all questions regarding non-deterministic and randomized catalytic computation, by giving an optimal reduction from catalytic space with additional resources to the corresponding non-catalytic space classes. With regards to non-determinism, our main result is that $$CL = CNL$$ and with regards to randomness, we show $$CL = CPrL$$ where CPrL denotes randomized catalytic logspace where the accepting probability can be arbitrarily close to 1/2. We also have a number of near-optimal partial results for non-deterministic and randomized catalytic computation with less catalytic space. In particular, we show catalytic versions of Savitch’s theorem, of Immerman–Szelepcsényi, and of the derandomization results of Nisan and of Saks and Zhou, all of which are unconditional and hold for all parameter settings. Our results build on the compress-or-compute framework of Cook et al. (STOC 2025). Despite proving broader and stronger results, our framework is simpler and more modular.
|
James Cook, Jiatu Li, Ian Mertz, Ted Pyne
STOC 2025
[slides]
In the catalytic logspace (CL) model of Buhrman et al. (STOC 2014), we are given a small work tape, and a larger catalytic tape that has an arbitrary initial configuration. We may edit this tape, but it must be exactly restored to its initial configuration at the completion of the computation. This model is of interest from a complexity-theoretic perspective as it gains surprising power over traditional space. However, many fundamental structural questions remain open. We substantially advance the understanding of the structure of CL, addressing several questions raised in prior work. Our main results are as follows.
1. We unconditionally derandomize catalytic logspace: CBPL = CL.
2. We show time and catalytic space bounds can be achieved separately if and only if they can be achieved simultaneously: any problem in CL $\cap$ P can be solved in polynomial-time-bounded CL.
3. We characterize deterministic catalytic space by the intersection of randomness and time: CL is equivalent to polytime-bounded, zero-error randomized CL.
Our results center around the compress–or–random framework. For the second result, we introduce a simple yet novel compress–or–compute algorithm which, for any catalytic tape, either compresses the tape or quickly and successfully computes the function at hand. For our first result, we further introduce a compress–or–compress–or–random algorithm that combines runtime compression with a second compress–or–random algorithm, building on recent work on distinguish-to-predict transformations and pseudorandom generators with small-space deterministic reconstruction.
|
Marten Folkertsma, Ian Mertz, Florian Speelman, Quinten Tupker
ITCS 2025
A catalytic machine is a model of computation where a traditional space-bounded machine is augmented with an additional, significantly larger, "catalytic" tape, which, while being available as a work tape, has the caveat of being initialized with an arbitrary string, which must be preserved at the end of the computation. Despite this restriction, catalytic machines have been shown to have surprising additional power; a logspace machine with a polynomial-length catalytic tape, known as catalytic logspace (CL), can compute problems which are believed to be impossible for L. A fundamental question about the model is whether the catalytic condition, of leaving the catalytic tape in its exact original configuration, is robust to minor deviations. This study was initiated by Gupta et al. (2024), who defined lossy catalytic logspace (LCL[e]) as a variant of CL where we allow up to e errors when resetting the catalytic tape. They showed that LCL[e] = CL for any e = O(1), which remains the frontier of our understanding. In this work we completely characterize lossy catalytic space (LCSPACE[s, c, e]) in terms of ordinary catalytic space (CSPACE[s, c]). We show that $$LCSPACE[s, c, e] = CSPACE[\Theta(s + e \log c), \Theta(c)]$$ In other words, allowing e errors on a catalytic tape of length c is equivalent, up to a constant stretch, to an errorless catalytic machine with an additional e log c bits of ordinary working memory. As a consequence, we show that for any e, LCL[e] = CL implies SPACE[e log n] $\subseteq$ ZPP, thus giving a barrier to any improvement beyond LCL[O(1)] = CL. We also extend all our results to every variant of catalytic space.
|
James Cook, Ian Mertz
STOC 2024
[video (STOC)] [video (IAS)] [video (Lisbon)] [slides] [coverage (Goldreich)]
The Tree Evaluation Problem (TreeEval) (Cook et al. 2009) is a central candidate for separating polynomial time (P) from logarithmic space (L) via composition. While space lower bounds of $\Omega(\log^2 n)$ are known for multiple restricted models, it was recently shown by Cook and Mertz (2020) that TreeEval can be solved in space $O(\log^2 n/\log \log n)$. Thus its status as a candidate hard problem for L remains a mystery. Our main result is to improve the space complexity of TreeEval to $O(\log n \cdot \log \log n)$, thus greatly strengthening the case that Tree Evaluation is in fact in L. We show two consequences of these results. First, we show that the KRW conjecture (Karchmer, Raz, and Wigderson 1995) implies L $\not\subseteq$ NC$^1$; this itself would have many implications, such as branching programs not being efficiently simulable by formulas. Our second consequence is to increase our understanding of amortized branching programs, also known as catalytic branching programs; we show that every function $f$ on $n$ bits can be computed by such a program of length poly$(n)$ and width $2^{O(n)}$.
|
James Cook, Ian Mertz
CCC 2022
[video (CCC)] [slides]
An $m$-catalytic branching program (Girard, Koucký, McKenzie 2015) is a set of $m$ distinct branching programs for $f$ which are permitted to share internal (i.e. non-source non-sink) nodes. While originally introduced as a non-uniform analogue to catalytic space, this also gives a natural notion of amortized non-uniform space complexity for $f$, namely the smallest value $|G|/m$ for an $m$-catalytic branching program $G$ for $f$ (Potechin 2017). Potechin (2017) showed that every function $f$ has amortized size $O(n)$, witnessed by an $m$-catalytic branching program where $m = 2^{2^n-1}$. We recreate this result by defining a catalytic algorithm for evaluating polynomials using a large amount of space but $O(n)$ time. This allows us to balance this with previously known algorithms which are efficient with respect to space at the cost of time (Cook, Mertz 2020, 2021). We show that for any $\epsilon \geq 2n^{-1}$, every function $f$ has an $m$-catalytic branching program of size $O_{\epsilon}(mn)$, where $m = 2^{2^{\epsilon n}}$. We similarly recreate an improved result due to Robere and Zuiddam (2021), and show that for $d \leq n$ and $\epsilon \geq 2d^{-1}$, the same result holds for $m = 2^{{n \choose \leq \epsilon d}}$ as long as $f$ is a degree-$d$ polynomial over $\mathbb{F}_2$. We also show that for certain classes of functions, $m$ can be reduced to $2^{poly(n)}$ while still maintaining linear or quasi-linear amortized size. In the other direction, we bound the necessary length, and by extension the amortized size, of any permutation branching program for an arbitrary function between $3n$ and $4n - 4$.
|
Shachar Lovett, Raghu Meka, Ian Mertz, Toniann Pitassi, Jiapeng Zhang
ITCS 2022
[video (ITCS)] [video (MIAO)] [slides]
Query-to-communication lifting theorems translate lower bounds on query complexity to lower bounds for the corresponding communication model. In this paper, we give a simplified proof of deterministic lifting (in both the tree-like and dag-like settings). Our proof uses elementary counting together with a novel connection to the sunflower lemma. In addition to a simplified proof, our approach opens up a new avenue of attack towards proving lifting theorems with improved gadget size—one of the main challenges in the area. Focusing on one of the most widely used gadgets—the index gadget—existing lifting techniques are known to require at least a quadratic gadget size. Our new approach combined with robust sunflower lemmas allows us to reduce the gadget size to near linear. We conjecture that it can be further improved to polylogarithmic, similar to the known bounds for the corresponding robust sunflower lemmas.
|
James Cook, Ian Mertz
ECCC, 2021
We show that the Tree Evaluation Problem with alphabet size k and height h can be solved by branching programs of size $k^{O(h/ \log h)} + 2^{O(h)}$. This answers a longstanding challenge of Cook et al. (2009) and gives the first general upper bound since the problem’s inception.
|
James Cook, Ian Mertz
STOC 2020
[video (STOC)] [video (IAS) (Cook)]
The study of branching programs for the Tree Evaluation Problem (TreeEval), introduced by S. Cook et al. (TOCT 2012), remains one of the most promising approaches to separating L from P. Given a label in $[k]$ at each leaf of a complete binary tree and an explicit function in $[k]^2 \rightarrow [k]$ for recursively computing the value of each internal node from its children, the problem is to compute the value at the root node. The problem is parameterized by the alphabet size $k$ and the height $h$ of the tree. A branching program implementing the straightforward recursive algorithm uses $\Theta((k + 1)^h)$ states, organized into $2^h - 1$ layers of width up to $k^h$. Until now no better deterministic algorithm was known. We present a series of three new algorithms solving TreeEval. They are inspired by the work of Buhrman et al. on catalytic space (STOC 2014), applied outside the catalytic-space setting. First we give a novel branching program with $2^{4h} poly(k)$ layers of width $2^{3k}$, which beats the straightforward algorithm when $h = \omega(k/ \log k)$. Next we give a branching program with $k^{2h} poly(k)$ layers of width $k^3$. This has total size comparable to the straightforward algorithm, but is implemented using the catalytic framework. Finally we interpolate between the two algorithms to give a branching program with $(O(\frac{k}{h}))^{2h} poly(k)$ layers of width $(O(\frac{k}{h}))^{\epsilon h}$ for any constant $\epsilon > 0$, which beats the straightforward algorithm for all $h \geq k^{1/2+poly(\epsilon)}$. These are the first deterministic branching programs to beat the straightforward algorithm, but more importantly this is the first non-trivial approach to proving deterministic upper bounds for TreeEval. We also contribute new machinery to the catalytic computing program, which may be of independent interest to some readers.
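For readers unfamiliar with TreeEval, the straightforward recursive algorithm described above can be sketched in a few lines of Python. This is a minimal illustration of the problem itself, with heap-order indexing and names chosen for the demo; it is not one of the paper's branching-program constructions.

```python
# Recursive evaluation of a complete binary tree of height h.
# Nodes are in heap order: root is 1, leaves are 2^(h-1) .. 2^h - 1.
# Leaves carry labels in [k]; each internal node v carries a function
# f_v : [k] x [k] -> [k] applied to the values of its two children.
def tree_eval(h, leaf_label, node_func, v=1):
    if v >= 2 ** (h - 1):  # v is a leaf
        return leaf_label[v]
    left = tree_eval(h, leaf_label, node_func, 2 * v)
    right = tree_eval(h, leaf_label, node_func, 2 * v + 1)
    return node_func[v](left, right)

# Tiny example: height 2, alphabet {0, 1}, root computes XOR.
leaves = {2: 1, 3: 1}
funcs = {1: lambda a, b: (a + b) % 2}
assert tree_eval(2, leaves, funcs) == 0  # 1 XOR 1 = 0
```

The recursion keeps one intermediate value per level of the tree; the paper's contribution is a fundamentally different, catalytic way of organizing this computation as a branching program.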
|
Mika Göös, Sajin Koroth, Ian Mertz, Toniann Pitassi
STOC 2020
[video (STOC)] [slides]
We show that Cutting Planes (CP) proofs are hard to find: given an unsatisfiable formula F, (1) it is NP-hard to find a CP refutation of F in time polynomial in the length of the shortest such refutation; and (2) unless Gap-Hitting-Set admits a nontrivial algorithm, one cannot find a tree-like CP refutation of F in time polynomial in the length of the shortest such refutation. The first result extends the recent breakthrough of Atserias and Müller (FOCS 2019) that established an analogous result for Resolution. Our proofs rely on two new lifting theorems: (1) dag-like lifting for gadgets with many output bits, and (2) tree-like lifting that simulates an r-round protocol with gadgets of query complexity O(log r).
|
Ian Mertz, Toniann Pitassi, Yuanhao Wei
ICALP 2019
[video (IAS)] [slides]
We obtain a streamlined proof of an important result by Alekhnovich and Razborov, showing that it is hard to automatize both tree-like and general Resolution. Under a different assumption than theirs, our simplified proof gives improved bounds: we show under ETH that these proof systems are not automatizable in time $n^{f(n)}$, whenever $f(n) = o(\log^{1/7-\epsilon} \log n)$ for any $\epsilon > 0$. Previously non-automatizability was only known for $f(n) = O(1)$. Our proof also extends fairly straightforwardly to prove similar hardness results for PCR and Res(r).
|
Eric Allender, Anna Gál, Ian Mertz
computational complexity (2016),
MFCS 2015
[slides (MFCS) (Allender)]
We consider the complexity class ACC$^1$ and related families of arithmetic circuits. We prove a variety of collapse results, showing several settings in which no loss of computational power results if the fan-in of gates is severely restricted, as well as presenting a natural class of arithmetic circuits in which no expressive power is lost by severely restricting the algebraic degree of the circuits. We draw attention to the strong connections that exist between ACC$^1$ and VP, via connections to the classes CC$^1$[m] for various m. These results tend to support a conjecture regarding the computational power of the complexity class VP over finite algebras, and they also highlight the significance of a class of arithmetic circuits that is in some sense dual to VP. In particular, these dual-VP classes provide new characterizations of ACC$^1$ and TC$^1$ in terms of circuits of semi-unbounded fan-in. As a corollary, we show that ACC$^i$ = CC$^i$ for all $i \geq 1$.
|