Abstract: Recently, a quantum algorithm called Decoded Quantum Interferometry (DQI) was introduced that achieves an apparent exponential speedup for the Optimal Polynomial Intersection (OPI) problem, which has previously been studied in the contexts of cryptography and error-correcting codes. However, this left open the question of how many logical gates and logical qubits would be needed to solve a classically intractable instance of OPI. Here, we develop optimized implementations of DQI that greatly reduce its resource requirements. We establish that DQI for OPI is the first known candidate for verifiable quantum advantage with optimal asymptotic speedup: solving instances with classical hardness $O(2^N)$ requires only $\widetilde{O}(N)$ quantum gates, matching the theoretical lower bound. To realize this, we overcome the primary bottleneck of reversible Reed-Solomon decoding by introducing novel quantum circuits for the Extended Euclidean Algorithm (EEA) that reduce the leading-order space complexity to the theoretical minimum of $2nb$ qubits. These improvements are broadly applicable, including to Shor's algorithm for the discrete logarithm. We analyze OPI over binary extension fields $\GF(2^b)$, assess hardness against new classical attacks, and identify resilient instances. Our resource estimates show that classically intractable OPI instances (requiring $>10^{23}$ classical trials) can be solved with approximately 5.72 million Toffoli gates. This is roughly $1000$ times fewer gates than required for factoring RSA-2048 and, remarkably, fewer than required by the leading interactive protocol for computational proof of quantumness, positioning DQI as a compelling candidate for practical, verifiable quantum advantage.
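As classical background for the reversible Reed-Solomon decoder discussed above (an illustrative sketch only, not the paper's quantum circuit), the Extended Euclidean Algorithm computes Bézout coefficients alongside the gcd; Reed-Solomon decoders run the same recurrence on polynomials, stopping once the remainder degree drops below a threshold:

```python
def extended_euclid(a, b):
    """Extended Euclidean Algorithm over the integers.

    Returns (g, s, t) with g = gcd(a, b) and g == s*a + t*b.
    """
    r0, r1 = a, b      # remainder sequence
    s0, s1 = 1, 0      # running coefficients of a
    t0, t1 = 0, 1      # running coefficients of b
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return r0, s0, t0

g, s, t = extended_euclid(240, 46)
print(g, s, t)  # gcd(240, 46) = 2 = (-9)*240 + 47*46
```

The quantum challenge the abstract addresses is making this iteration reversible without accumulating garbage registers at each division step.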
Unentanglement and Post-Measurement Branching in Quantum Interactive Proofs
Sabee Grewal (University of Texas at Austin);
William Kretschmer (University of Texas at Austin)
Abstract: We investigate two resources whose effects on quantum interactive proofs remain poorly understood: the promise of unentanglement, and the verifier’s ability to condition on an intermediate measurement, which we call post-measurement branching. We first show that unentanglement can dramatically increase computational power: three-round unentangled quantum interactive proofs equal NEXP, even if only the first message is quantum. By contrast, we prove that if the verifier uses no post-measurement branching, then the same type of unentangled proof system has at most the power of QAM. Finally, we investigate post-measurement branching in two-round quantum-classical proof systems. Unlike the equivalence between public-coin and private-coin classical interactive proofs, we give evidence of a separation in the quantum setting that arises from post-measurement branching.
On the Pure Quantum Polynomial Hierarchy and Quantified Hamiltonian Complexity
Sabee Grewal (University of Texas at Austin);
Dorian Rudolph (Paderborn University)
Abstract: We prove several new results concerning the pure quantum polynomial hierarchy pureQPH. First, we show that QMA(2) ⊆ pureQΣ_2, i.e., two unentangled existential provers can be simulated by competing existential and universal provers. We further prove that pureQΣ_2 ⊆ QΣ_3 ⊆ NEXP. Second, we give an error reduction result for pureQPH, and, as a consequence, prove that pureQPH = QPH. A key ingredient in this result is an improved dimension-independent disentangler. Finally, we initiate the study of quantified Hamiltonian complexity, the quantum analogue of quantified Boolean formulae. We prove that the quantified pure sparse Hamiltonian problem is pureQΣ_i-complete. By contrast, other natural variants (pure/local, mixed/local, and mixed/sparse) admit nontrivial containments but fail to be complete under known techniques. For example, we show that the ∃∀-mixed local Hamiltonian problem lies in NP^QMA ∩ coNP^QMA.
Classification and implementation of unitary-equivariant and permutation-invariant quantum channels
Elias Theil (University of Copenhagen);
Laura Mancinska (University of Copenhagen)
Abstract: Many quantum information tasks use inputs of the form $\rho^{\otimes m}$, which naturally induce permutation and unitary symmetries. We classify all channels that respect both symmetries, i.e., unitary-equivariant and permutation-invariant maps from $(\mathbb{C}^{d})^{\otimes m}$ to $(\mathbb{C}^{d})^{\otimes n}$, via their extremal points. Operationally, each extremal channel factors as \emph{unitary Schur sampling} $\rightarrow$ an \emph{irrep-level unitary-equivariant channel} $\rightarrow$ the \emph{adjoint unitary Schur sampling}. We give a streaming implementation ansatz that uses an efficient streaming implementation of unitary Schur sampling together with a resource-state primitive, and we apply it to state symmetrization, symmetric cloning, and purity amplification. In these applications we obtain polynomial-time algorithms with exponential memory improvements in $m,n$. Further, for symmetric cloning we present, to our knowledge, the first efficient (polynomial-time) algorithm with explicit memory and gate bounds.
Randomized measurements for multi-parameter quantum metrology
Sisi Zhou (Perimeter Institute);
Senrui Chen (Caltech)
Abstract: The optimal quantum measurements for estimating different unknown parameters in a parameterized quantum state are usually incompatible with each other. Traditional approaches to addressing this measurement incompatibility, such as the Holevo Cramér–Rao bound, face multiple obstacles to practical applicability: the optimal measurement strategies are usually state-dependent, difficult to implement, and require complex analyses to determine. Here we study randomized measurements as a new approach to multi-parameter quantum metrology. We show that quantum measurements on single copies of quantum states given by $3$-designs perform near-optimally when estimating an arbitrary number of parameters in pure states and, more generally, in approximately low-rank well-conditioned states, whose metrological information is largely concentrated in a low-dimensional subspace. Near-optimality is also shown when estimating the maximal number of parameters for three types of mixed states that are well-conditioned on their supports. Examples of fidelity estimation and Hamiltonian estimation are explicitly provided to demonstrate the power and limitations of randomized measurements in multi-parameter quantum metrology.
Will it glue? On short-depth designs beyond the unitary group
Lorenzo Grevink (CWI, QuSoft);
Jonas Haferkamp (Saarland University);
Markus Heinrich (University of Cologne);
Jonas Helsen (CWI, QuSoft);
Marcel Hinsche (Freie Universität Berlin);
Thomas Schuster (California Institute of Technology);
Zoltán Zimborás (University of Helsinki)
Abstract: We study the formation of short-depth designs beyond the unitary group. We provide a range of results on several groups of broad interest in quantum information science: the Clifford group, the orthogonal group, the unitary symplectic groups, and the matchgate group. For all of these groups, we prove that analogues of unitary designs cannot be generated by any circuit ensemble with light-cones that are smaller than the system size. This implies linear lower bounds on the circuit depth in one-dimensional systems. For the Clifford, orthogonal, and unitary symplectic group, we moreover show that commonly considered circuit ensembles cannot generate designs in sub-linear depth on any circuit architecture. We show this by exploiting observables in the higher-order commutants of each group, which allow one to distinguish any short-depth circuit from truly random. While these no-go results rule out short-depth designs over these subgroups, we prove that slightly weaker forms of randomness---including additive-error state designs and anti-concentration in sampling distributions---nevertheless emerge at logarithmic depths in many cases. Our results reveal that the onset of randomness in shallow quantum circuits is a widespread yet subtle phenomenon, dependent on the interplay between the group itself and the context of its application.
Magic and communication complexity
Uma Girish (Columbia University);
Alex May (Perimeter Institute for Theoretical Physics);
Natalie Parham (Columbia University);
Henry Yuen (Columbia University)
Abstract: We establish novel connections between magic in quantum circuits and communication complexity. In particular, we show that functions computable with low magic have low communication cost.
Our first result shows that the $\Dsim$ (deterministic simultaneous message passing) cost of a Boolean function $f$ is at most the number of single-qubit magic gates in a quantum circuit computing $f$ with any quantum advice state. If we allow mid-circuit measurements and adaptive circuits, we obtain an upper bound on the two-way communication complexity of $f$ in terms of the magic + measurement cost of the circuit for $f$.
As an application, we obtain magic-count lower bounds of $\Omega(n)$ for the $n$-qubit generalized Toffoli gate as well as the $n$-qubit quantum multiplexer.
Our second result gives a general method to transform $\Qent$ protocols (simultaneous quantum messages with shared entanglement) into $\Rent$ protocols (simultaneous classical messages with shared entanglement) which incurs only a polynomial blowup in the communication and entanglement complexity, provided the referee's action in the $\Qent$ protocol is implementable in constant $T$-depth. The resulting $\Rent$ protocols satisfy strong privacy constraints and are $\PSM^*$ protocols (private simultaneous message passing with shared entanglement), where the referee learns almost nothing about the inputs other than the function value. As an application, we demonstrate $n$-bit partial Boolean functions whose $\Rent$ complexity is $\mathrm{polylog}(n)$ and whose $\R$ (interactive randomized) complexity is $n^{\Omega(1)}$, establishing the first exponential separations between $\Rent$ and $\R$ for Boolean functions.
Rapid Mixing of Quantum Gibbs Samplers for Weakly-Interacting Quantum Systems
Štěpán Šmíd (Imperial College London);
Richard Meister (Imperial College London);
Mario Berta (RWTH Aachen University);
Roberto Bondesan (Imperial College London)
Abstract: Dissipative quantum algorithms for state preparation in many-body systems are increasingly recognised as promising candidates for achieving large quantum advantages in application-relevant tasks. Recent advances in algorithmic, detailed-balance Lindbladians enable the efficient simulation of open-system dynamics converging towards desired target states. However, the overall complexity of such schemes is governed by system-size-dependent mixing times. In this work, we analyse algorithmic Lindbladians for Gibbs state preparation and prove that they exhibit rapid mixing, i.e., convergence in time poly-logarithmic in the system size. We first establish this for non-interacting spin systems, free fermions, and free bosons, and then show that these rapid mixing results are stable under perturbations, covering weakly interacting qudits and perturbed non-hopping fermions. Further, we adapt the techniques from separable qudits to the fermionic setting and prove rapid mixing in the strongly-interacting regime of the Fermi-Hubbard model. Our results constitute the first efficient mixing bounds for non-commuting qudit models and bosonic systems at arbitrary temperatures. Compared to prior spectral-gap-based results for fermions, we achieve exponentially faster mixing, further featuring explicit constants on the maximal allowed interaction strength. This not only improves the overall polynomial runtime for quantum Gibbs state preparation, but also enhances robustness against noise. Our analysis relies on oscillator norm techniques from mathematical physics, where we introduce tailored variants adapted to specific Lindbladians, an innovation that we expect to significantly broaden the scope of these methods.
Efficient magic-state generation with quantum tricycle codes
Varun Menon (Harvard University);
J. Pablo Bonilla Ataides (Harvard University);
Rohan Mehta (Harvard University);
Andi Gu (Harvard University);
Daniel Bochen Tan (Harvard University);
Mikhail D. Lukin (Harvard University)
Abstract: The preparation of high-fidelity non-Clifford (magic) states is an essential subroutine for universal quantum computation, but imposes substantial space-time overhead. Magic state factories based on high rate and distance quantum low-density parity check (LDPC) codes equipped with transversal non-Clifford gates can potentially reduce these overheads significantly, by circumventing the need for multiple rounds of distillation and by producing a large number of magic states in a single code-block. As a step towards realizing efficient, fault-tolerant magic state production, we introduce a class of finite block-length quantum LDPC codes which we name tricycle codes, generalizing the well-known bicycle codes to three homological dimensions. These codes can support constant-depth physical circuits that implement logical $CCZ$ gates between three code blocks. To construct these constant-depth $CCZ$ circuits, we develop new analytical and numerical techniques that apply to a broad class of three-dimensional homological and balanced product codes. We further show that tricycle codes enable single-shot state-preparation and error correction, leading to a highly efficient magic-state generation protocol. Numerical simulations of specific codes confirm robust performance under circuit-level noise, demonstrating a high circuit-noise threshold of $>0.5\%$. With modest post-selection, certain tricycle codes of block-lengths of only $50-100$ qubits are shown to achieve logical error-rates of $6\times 10^{-10}$ or lower. Finally, we construct optimal depth syndrome extraction circuits for tricycle codes and present a protocol for implementing them efficiently on a reconfigurable neutral atom platform.
Quantum lower bounds for simulating fluid dynamics
Abtin Ameri (MIT);
Joseph Carolan (University of Maryland);
Andrew M. Childs (University of Maryland);
Hari Krovi (IBM Quantum)
Abstract: Developing quantum algorithms to simulate fluid dynamics has become an active area of research, as accelerating fluid simulations could have significant impact in industry and fundamental science. While many approaches have been proposed for simulating fluid dynamics on quantum computers, it is largely unclear whether these algorithms will provide any speedup over existing classical approaches. In this paper we give evidence that quantum computers cannot significantly outperform classical simulations of fluid dynamics in general. We study two models of fluids: the Korteweg-de Vries (KdV) equation, which models shallow water waves, and the incompressible Euler equations, which model ideal, inviscid fluids. We show that any quantum algorithm simulating the KdV equation or the Euler equations for time T requires Ω(T^2) and exp(Ω(T)) copies of the initial state in the worst case, respectively. These lower bounds hold for the task of preparing the final state, and similar bounds hold for history state preparation. We prove the lower bound for the KdV equation by investigating divergence of solitons. For the Euler equations, we show that instabilities can accelerate state discrimination.
Quantum computation with qubit-oscillator systems: Trading modes against energy
Lukas Brenner (Technical University of Munich);
Beatriz Dias (Technical University of Munich);
Robert König (Technical University of Munich)
Abstract: We propose new schemes for quantum computation with hybrid qubit-oscillator systems consisting of a certain number of bosonic modes coupled to a constant number of qubits by a Jaynes-Cummings Hamiltonian. We ask how much energy is required to
weakly simulate an~$n$-qubit quantum circuit (i.e., produce samples from its output distribution) by a unitary circuit
in this model. We find that efficient approximate weak simulation of an~$n$-qubit quantum circuit of polynomial size with inverse polynomial error is possible with
(I) a constant number of modes and an exponential amount of energy, or
(II) a sublinear (polynomial) number of modes and a subexponential amount of energy, or
(III) a linear number of modes and a polynomial amount of energy. Our construction encodes qubits into high-dimensional approximate Gottesman-Kitaev-Preskill (GKP) codes. It provides new insight into the trade-off between system size (i.e., number of modes) and the amount of energy required to perform quantum computation in the continuous-variable setting.
Reconquering Bell sampling on qudits: stabilizer learning and testing, quantum pseudorandomness bounds, and more
Jonathan Allcock (Tencent Quantum Laboratory);
Joao F. Doriguello (HUN-REN Alfréd Rényi Institute of Mathematics);
Gábor Ivanyos (HUN-REN Institute for Computer Science and Control);
Miklos Santha (National University of Singapore)
Abstract: Bell sampling is a simple yet powerful tool based on measuring two copies of a quantum state in the Bell basis, and has found applications in a plethora of problems related to stabiliser states and measures of magic. However, it was not known how to generalise the procedure from qubits to $d$-level systems -- qudits -- for all dimensions $d > 2$ in a useful way. Indeed, a prior work of the authors (arXiv'24) showed that the natural extension of Bell sampling to arbitrary dimensions fails to provide meaningful information about the quantum states being measured. In this paper, we overcome the difficulties encountered in previous works and develop a useful generalisation of Bell sampling to qudits of all dimensions $d\geq 2$. At the heart of our primitive is a new unitary, based on Lagrange's four-square theorem, that maps four copies of any stabiliser state $|\mathcal{S}\rangle$ to four copies of its complex conjugate $|\mathcal{S}^\ast\rangle$ (up to some Pauli operator), which may be of independent interest. We then demonstrate the utility of our new Bell sampling technique by lifting several known results from qubits to qudits for any $d\geq 2$ (which involves working with submodules instead of subspaces):
1. Learning an unknown stabiliser state $|\mathcal{S}\rangle\in(\mathbb{C}^d)^{\otimes n}$ in $O(n^3)$ time with $O(n)$ samples;
2. Solving the Hidden Stabiliser Group Problem (a stabiliser version of the State Hidden Subgroup Problem) in $\widetilde{O}(n^3/\varepsilon)$ time with $\widetilde{O}(n/\varepsilon)$ samples;
3. Testing whether $|\psi\rangle\in(\mathbb{C}^d)^{\otimes n}$ has stabiliser size (a generalisation of stabiliser dimension for submodules) at least $d^t$ or is $\varepsilon$-far from all such states in $\widetilde{O}(n^3/\varepsilon)$ time with $\widetilde{O}(n/\varepsilon)$ samples if $\varepsilon = O(d^{-2})$;
4. Testing whether $|\psi\rangle\in(\mathbb{C}^d)^{\otimes n}$ is Haar-random or the output of a Clifford circuit augmented with fewer than $n/2$ single-qudit non-Clifford gates in $O(n^3)$ time using $O(n)$ samples. As a corollary, we show that Clifford circuits with at most $n/2$ single-qudit non-Clifford gates cannot prepare pseudorandom states, an exponential improvement over previous works;
5. Testing whether $|\psi\rangle\in(\mathbb{C}^d)^{\otimes n}$ has stabiliser fidelity at least $1-\varepsilon_1$ or at most $1-\varepsilon_2$ with $O(d^2/\varepsilon_2)$ samples if $\varepsilon_1 = 0$ or $O(d^2/\varepsilon_2^2)$ samples if $\varepsilon_1 = O(d^{-2})$.
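For intuition on the Lagrange four-square structure invoked in the abstract above (a hedged illustration of the number-theoretic fact only, not the paper's conjugation unitary), every non-negative integer decomposes as a sum of four squares, which a small brute-force search exhibits:

```python
from itertools import product

def four_square(n):
    """Return a tuple (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n.

    Lagrange's four-square theorem guarantees such a tuple exists
    for every non-negative integer n, so the search always succeeds.
    """
    bound = int(n ** 0.5) + 1
    for a, b, c, d in product(range(bound), repeat=4):
        if a * a + b * b + c * c + d * d == n:
            return (a, b, c, d)

print(four_square(7))  # 7 = 1 + 1 + 1 + 4
```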
A complexity theory for non-local quantum computation
Andreas Bluhm (Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG);
Simon Höfer (Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG);
Alex May (Perimeter Institute for Theoretical Physics);
Mikka Stasiuk (Perimeter Institute for Theoretical Physics);
Philip Verduyn Lunel (Sorbonne Université, Paris);
Henry Yuen (Columbia University)
Abstract: Non-local quantum computation (NLQC) replaces a local interaction between two systems with a single round of communication and shared entanglement.
Despite many partial results, it is known that a characterization of entanglement cost in at least certain NLQC tasks would imply significant breakthroughs in complexity theory.
Here, we avoid these obstructions and take an indirect approach to understanding resource requirements in NLQC, which mimics the approach used by complexity theorists: we study the relative hardness of different NLQC tasks by identifying resource efficient reductions between them.
Most significantly, we prove that $f$-measure and $f$-route, the two best studied NLQC tasks, are in fact equivalent under $O(1)$ overhead reductions.
This result simplifies many existing proofs in the literature and extends several new properties to $f$-measure.
For instance, we obtain sub-exponential upper bounds on $f$-measure for all functions, and efficient protocols for functions in the complexity class $\mathsf{Mod}_k\mathsf{L}$.
Beyond this, we study a number of other examples of NLQC tasks and their relationships.
Lower bounds on non-local computation from controllable correlation
Richard Cleve (Institute for Quantum Computing, Waterloo, Ontario);
Alex May (Perimeter Institute for Theoretical Physics)
Abstract: Understanding entanglement cost in non-local quantum computation (NLQC) is relevant to complexity, cryptography, gravity, and other areas.
This entanglement cost is largely uncharacterized; previous lower-bound techniques apply only to narrowly defined cases, and proving lower bounds on even the simplest unitaries has remained open.
Here, we give two new lower bound techniques that can be evaluated for any unitary, and typically lead to non-trivial lower bounds.
Concretely, we give lower bounds on most of the commonly studied two-qubit quantum gates, including CNOT, DCNOT, $\sqrt{\SWAP}$, the XX interaction, Haar-random two-qubit gates, and many others, none of which previously had known lower bounds.
For the CNOT gate one of our techniques gives a tight lower bound, fully resolving its entanglement cost.
Our proof technique makes use of two new properties of unitaries that we introduce, called the \emph{controllable correlation} and \emph{controllable entanglement}.
The resulting lower bounds have parallel repetition properties, and apply in the noisy setting.
The lower bound from controllable correlation has an elementary proof and applies to most unitaries, but does not appear to be tight for any of the unitaries we study.
The lower bound from controllable entanglement is tight for CNOT but fails for generic unitaries.
Its proof is less elementary; it requires the consideration of the i.i.d. setting and application of Shannon theory results, with the characterization of finite block length Schumacher compression being a key tool.
Clifford testing: algorithms and lower bounds
Marcel Hinsche (Freie Universität Berlin);
Zongbo Bao (Centrum Wiskunde & Informatica (CWI) and QuSoft, Amsterdam);
Philippe van Dordrecht (Centrum Wiskunde & Informatica (CWI) and QuSoft, Amsterdam);
Jens Eisert (Freie Universität Berlin);
Jop Briët (Centrum Wiskunde & Informatica (CWI) and QuSoft, Amsterdam);
Jonas Helsen (Centrum Wiskunde & Informatica (CWI) and QuSoft, Amsterdam)
Abstract: We consider the problem of Clifford testing, which asks whether a black-box $n$-qubit unitary is a Clifford unitary or at least $\varepsilon$-far from every Clifford unitary. We give the first 4-query Clifford tester, which decides this problem with probability~$\mathrm{poly}(\varepsilon)$. This contrasts with the minimum of 6 copies required for the closely-related task of stabilizer testing.
We show that our tester is tolerant, by adapting techniques from tolerant stabilizer testing to our setting. In doing so, we settle in the positive a conjecture of Bu, Gu and Jaffe, by proving a polynomial inverse theorem for a non-commutative Gowers 3-uniformity norm. We also consider the restricted setting of single-copy access, where we give an $O(n)$-query Clifford tester that requires no auxiliary memory qubits or adaptivity. We complement this with a lower bound, proving that any such, potentially adaptive, single-copy algorithm needs at least $\Omega(n^{1/4})$ queries. To obtain our results, we leverage the structure of the commutant of the Clifford group, obtaining several technical statements that may be of independent interest.
Instance-Optimal Quantum State Certification with Entangled Measurements
Ryan O'Donnell (Carnegie Mellon University);
Chirag Wadhwa (University of Edinburgh)
Abstract: We consider the task of quantum state certification: given a description of a hypothesis state~$\sigma$ and multiple copies of an unknown state~$\rho$, a tester aims to determine whether the two states are equal or $\epsilon$-far in trace distance. It is known that~$\Theta(d/\epsilon^2)$ copies of~$\rho$ are necessary and sufficient for this task, assuming the tester can make entangled measurements over all copies [CHW07, OW15, BOW19]. However, these bounds are for a worst-case~$\sigma$, and it is not known what the optimal copy complexity is for this problem on an \emph{instance-by-instance} basis. While such instance-optimal bounds have previously been shown for quantum state certification when the tester is limited to measurements unentangled across copies [CLO22, CLHL22], they remained open when testers are unrestricted in the kind of measurements they can perform.
We address this open question by proving nearly instance-optimal bounds for quantum state certification when the tester can perform fully entangled measurements. Analogously to the unentangled setting, we show that the optimal copy complexity for certifying~$\sigma$ is given by the worst-case complexity times the fidelity between~$\sigma$ and the maximally mixed state.
We prove our lower bounds using a novel quantum analogue of the Ingster--Suslina method, which is likely to be of independent interest. This method also allows us to recover the~$\Omega(d/\epsilon^2)$ lower bound for mixedness testing [OW15], i.e., certification of the maximally mixed state, with a surprisingly simple proof.
On the complexity of unique quantum witnesses and quantum approximate counting
Anurag Anshu (Harvard University);
Jonas Haferkamp (Saarland University);
Yeongwoo Hwang (Harvard University);
Quynh T. Nguyen (Harvard University)
Abstract: We study the long-standing open question on the power of unique witnesses in quantum protocols, which asks if UniqueQMA, a variant of QMA whose accepting witness space is 1-dimensional, contains QMA under quantum reductions.
This work rules out any black-box reduction from QMA to UniqueQMA by showing a quantum oracle separation between BQP^UniqueQMA and QMA. This provides a contrast to the classical case, where the Valiant-Vazirani theorem shows a black-box randomized reduction from UniqueNP to NP, and suggests the need for studying the structure of the ground space of local Hamiltonians in distilling a potential unique witness. Via similar techniques, we show, relative to a quantum oracle, that QMA^QMA cannot decide quantum approximate counting, ruling out a quantum analogue of Stockmeyer’s algorithm in the black-box setting. Our results employ a subspace reflection oracle, previously considered in [AK07; AKKT20; SY23], but we introduce new tools which allow us to exploit the unique witness constraint. We also show a strong “polarization” behavior of QMA circuits, which could be of independent interest in studying quantum polynomial hierarchies.
We then ask a natural question: what structural properties of the local Hamiltonian problem can we exploit? We introduce a physically motivated candidate by showing that the ground energy of local Hamiltonians that satisfy a computational variant of the eigenstate thermalization hypothesis (ETH) can be estimated through a UniqueQMA protocol. Our protocol can be viewed as a quantum expander test in a low-energy subspace of the Hamiltonian and verifies a unique entangled state across two copies of the subspace. This allows us to conclude that if UniqueQMA is not equivalent to QMA, then QMA-hard Hamiltonians must violate ETH under adversarial perturbations (more accurately, further assuming the quantum PCP conjecture if ETH only applies to extensive energy subspaces). Under the same assumption, this also serves as evidence that chaotic local Hamiltonians, such as the SYK model, may be computationally simpler than general local Hamiltonians.
Quantum statistics in the minimal Bell scenario
Victor Barizien (CEA, University of Geneva);
Jean-Daniel Bancal (CEA)
Abstract: In any experimental setting, quantum physics provides the statistical distributions that the observed outcomes are expected to follow. The set formed by all these distributions contains the imprint of quantum theory and captures some of its core properties. So far, only partial explicit descriptions of this set have been found for Bell-type settings in which entangled states can be shared and measured by independent observers. Here we obtain a complete, explicit, and analytical description of the full set of quantum statistics in terms of its extremal points. This is made possible by finding all bipartite quantum states and pairs of binary measurements that can be self-tested, that is, reconstructed from empirical statistics only. Our description precisely reveals some of the extent and limitations of quantum theory.
Abstract: We consider a pair of causally independent processes, modelled as the tensor product of two channels, acting on a possibly correlated input to produce random outputs X and Y. We show that, assuming the processes produce a sufficient amount of randomness, one can extract uniform randomness from X and Y. This generalizes prior results, which assumed that X and Y are (conditionally) independent. Note that in contrast to the independence of quantum states, the independence of channels can be enforced through spacelike separation. As a consequence, our results allow for the generation of randomness under more practical and physically justifiable assumptions than previously possible. We illustrate this with the example of device-independent randomness amplification, where we can remove the constraint that the adversary only has access to classical side information about the source.
Near-optimal performance of square-root measurement for general score functions and quantum ensembles
Hemant Mishra (Indian Institute of Technology Dhanbad);
Ludovico Lami (Scuola Normale Superiore);
Mark Wilde (Cornell University)
Abstract: The Barnum-Knill theorem states that the optimal success probability in the multiple state discrimination task is not more than the square root of the success probability when the pretty good or square-root measurement is used for this task. An assumption of the theorem is that the underlying ensemble consists of finitely many quantum states over a finite-dimensional quantum system. Motivated in part by the fact that the success probability is not a relevant metric for continuous ensembles, in this paper we provide a generalization of the notion of pretty good measurement and the Barnum-Knill theorem for general quantum ensembles, including those described by a continuous parameter space and an infinite-dimensional Hilbert space. To achieve this, we also design a general metric of performance for quantum measurements that generalizes the success probability, namely, the expected gain of the measurement with respect to a positive score function. A notable consequence of the main result is that, in a Bayesian estimation task, the mean square error of the pretty good measurement does not exceed twice the optimal mean square error.
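As a finite-dimensional illustration of the square-root (pretty good) measurement discussed above (a sketch under simple assumptions: two equiprobable pure qubit states and a full-rank average state, not the paper's general construction), the POVM can be built explicitly with NumPy:

```python
import numpy as np

# Ensemble: two equiprobable pure states |0> and |+>.
psi0 = np.array([1.0, 0.0])
psi1 = np.array([1.0, 1.0]) / np.sqrt(2)
priors = [0.5, 0.5]
rhos = [np.outer(v, v) for v in (psi0, psi1)]

# Average state and its inverse square root (full rank here).
rho = sum(p * r for p, r in zip(priors, rhos))
w, v = np.linalg.eigh(rho)
inv_sqrt = v @ np.diag(1.0 / np.sqrt(w)) @ v.T

# Square-root measurement elements: M_i = rho^{-1/2} (p_i rho_i) rho^{-1/2}.
povm = [inv_sqrt @ (p * r) @ inv_sqrt for p, r in zip(priors, rhos)]
success = sum(p * np.trace(M @ r).real
              for p, M, r in zip(priors, povm, rhos))
print(success)  # ~0.8536, the Helstrom optimum for this pair
```

For two equiprobable pure states the square-root measurement is known to attain the optimal (Helstrom) success probability, so this toy case saturates the Barnum-Knill bound with room to spare.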
Fundamentals of quantum Boltzmann machine learning with visible and hidden units
Abstract: One of the primary applications of classical Boltzmann machines is generative modeling, wherein the goal is to tune the parameters of a model distribution so that it closely approximates a target distribution. Training relies on estimating the gradient of the relative entropy between the target and model distributions, a task that is well understood when the classical Boltzmann machine has both visible and hidden units. For some years now, generalizing this finding to quantum state learning with quantum Boltzmann machines that have both visible and hidden units has remained an open problem. In this paper, I derive an analytical expression for the gradient of the quantum relative entropy between a target quantum state and the reduced state of the visible units of a quantum Boltzmann machine. Crucially, this expression is amenable to estimation on a quantum computer, as it involves modular-flow-generated unitary rotations reminiscent of those appearing in my prior work on rotated Petz recovery maps. This leads to a quantum algorithm for gradient estimation in this setting. I then specialize the setting to quantum visible units and classical hidden units, and vice versa, and provide analytical expressions for the gradients, along with quantum algorithms for estimating them. Finally, I replace the quantum relative entropy objective function with the Petz--Tsallis relative entropy; here I develop an analytical expression for the gradient and sketch a quantum algorithm for estimating it, as an application of an independent derivation of a formula for the derivative of the matrix power function, which also involves modular-flow-generated unitary rotations. Ultimately, this paper marks progress in training quantum Boltzmann machines with visible and hidden units for generative modeling and quantum state learning.
Certifying and learning local quantum Hamiltonians
Andreas Bluhm (Univ. Grenoble Alpes, CNRS, Grenoble INP, LIG);
Matthias C. Caro (University of Warwick);
Francisco Escudero Gutiérrez (Centrum Wiskunde & Informatica, QuSoft);
Junseo Lee (Seoul National University);
Aadil Oufkir (University Mohammed VI Polytechnic);
Cambyse Rouzé (INRIA Saclay);
Myeongjin Shin (Korea Advanced Institute of Science and Technology)
Abstract: We study the problems of certifying and learning local quantum Hamiltonians and their associated Gibbs states. We first address Hamiltonian certification given real-time access to the dynamics of an unknown k-local Hamiltonian. Given oracle access to its time-evolution operator and a fully specified target Hamiltonian, the task is to decide whether the two Hamiltonians are identical or differ by at least a prescribed accuracy in normalized Frobenius norm, while minimizing the total evolution time. We introduce the first certification protocol that achieves optimal performance for all constant-locality Hamiltonians. For general n-qubit, k-local, traceless Hamiltonians, our algorithm succeeds with high probability using total evolution time that scales inversely with the target accuracy, and for constant locality this matches the fundamental lower bound, achieving Heisenberg-limit scaling. In contrast to prior approaches, our method requires neither inverse evolution nor controlled operations, and relies only on forward real-time dynamics.
We then turn to thermal states generated by local Hamiltonians. We develop algorithms for both learning and certifying Gibbs states that are fully sample-efficient in all relevant parameters. For polynomially bounded temperature, our methods achieve exponential improvements over general quantum state tomography. While the learning algorithm is inherently time-inefficient due to covering arguments, the certification algorithm is both sample- and time-efficient, resolving a previously open question on efficient Gibbs state testing.
Together, these results establish optimal or near-optimal complexity bounds for characterizing local quantum systems in both dynamical and thermal regimes.
A Sharp Computational Phase Transition for the Partition Function of the Transverse-Field Ising Model
Alistair Sinclair (UC Berkeley);
Thuy-Duong Vuong (UC San Diego)
Abstract: We study the problem of approximating the partition function of the transverse-field Ising model (TFIM), a widely studied quantum many-body model with important applications in quantum simulation and quantum annealing. Despite its fundamental importance, the algorithmic landscape for computing the TFIM partition function has remained poorly understood beyond restricted parameter regimes.
We provide a precise characterization of the temperature regimes in which efficient approximation is possible, establishing a sharp computational phase transition. Let $J$ denote the symmetric interaction matrix and $\Delta(J) = \lambda_{\max}(J)-\lambda_{\min}(J)$ be its spectral width. We show that for all inverse temperatures $\beta \in [0,1/\Delta(J)]$, there exists an efficient classical randomized algorithm that approximates the partition function $\tr(e^{-\beta H})$ to within an arbitrarily small multiplicative factor. We apply the standard Trotter decomposition to map the quantum model to a classical spin system, then leverage new techniques in Markov chain analysis to show an efficient algorithm that samples from and computes the partition function of the resulting distribution. This temperature threshold is tight: for $\beta > 1/\Delta(J)$, we show that approximating the partition function is NP-hard and thus is unlikely to admit an efficient classical or quantum algorithm.
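The quantities in the threshold above can be computed directly: the spectral width Delta(J) = lambda_max(J) - lambda_min(J) of the interaction matrix determines the tractability bound beta* = 1/Delta(J). The sketch below is a small illustrative NumPy example on an assumed ferromagnetic 4-cycle, not the paper's sampling algorithm.

```python
import numpy as np

def spectral_width(J):
    """Delta(J) = lambda_max - lambda_min for a real symmetric matrix J."""
    eigs = np.linalg.eigvalsh(J)  # eigenvalues in ascending order
    return eigs[-1] - eigs[0]

# Example: unit ferromagnetic couplings on a 4-cycle. The eigenvalues of
# this circulant matrix are 2*cos(2*pi*k/4), i.e. {2, 0, 0, -2}, so
# Delta(J) = 4 and efficient approximation holds for beta <= 1/4.
J = np.zeros((4, 4))
for i in range(4):
    J[i, (i + 1) % 4] = J[(i + 1) % 4, i] = 1.0

delta = spectral_width(J)
beta_star = 1.0 / delta
print(delta, beta_star)  # ~4.0 ~0.25
```

Above this beta*, by the abstract's hardness result, no efficient classical or quantum approximation is expected.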
Channel Coding and Quantum Channel Discrimination against Jammers: a Minimax Approach
Mario Berta (Institute for Quantum Information, RWTH Aachen University);
Michael Xuan Cao (Institute for Quantum Information, RWTH Aachen University);
Kun Fang (School of Data Science, The Chinese University of Hong Kong, Shenzhen);
Yongsheng Yao (Institute for Quantum Information, RWTH Aachen University)
Abstract: We study communication and discrimination over quantum channels with entanglement-enabled jammers. Using a minimax framework, we show universality reduces to worst-case optimization, yielding streamlined, dimension-independent characterizations of entanglement-assisted capacities and Stein-type error exponents for channel discrimination against quantum adversaries.
The Necessity of Extending Quantum Prior Beliefs
Mingxuan Liu (Centre for Quantum Technologies);
Ge Bai (The Hong Kong University of Science and Technology (Guangzhou));
Valerio Scarani (National University of Singapore)
Abstract: A mixed quantum state can be taken as describing the lack of knowledge about the true pure state of the system ("proper mixture"); or as arising from entanglement with another system that has been disregarded ("improper mixture"). We demonstrate that proper and improper mixtures, while indistinguishable for prediction, constitute distinct priors yielding inequivalent retrodictive updates. We introduce extended retrodiction to capture these latent correlations. This framework resolves the conflict in quantum smoothing, unifying the Guevara-Wiseman and Petz-Fuchs approaches as special cases of extended priors, and establishes their entropic relation.
Abstract: Quantum computing introduces many well-motivated problems rooted in physics, asking to compute information from input quantum states. Identifying the computational hardness of these problems yields potential applications with far-reaching impacts across both the realms of computer science and physics. However, these new problems do not neatly fit within the scope of existing complexity theory. The standard classes primarily cater to problems with classical inputs and outputs, leaving a gap in characterizing problems that take quantum states as inputs. For instance, breaking new quantum cryptographic primitives involves solving problems with quantum inputs; this significantly changes Impagliazzo’s five worlds, since the complexity classes central to Pessiland, Heuristica, and Algorithmica are grounded in problems with classical inputs and outputs. To bridge these knowledge gaps, we explore the complexity theory of quantum promise problems and its potential applications. Quantum promise problems are quantum-input decision problems that ask whether input quantum states satisfy specific properties.
We begin by establishing structural results for several fundamental quantum complexity classes: p/mBQP, p/mQ(C)MA, p/mQSZKhv, p/mQIP, p/mBQP/qpoly, p/mBQP/poly, and p/mPSPACE. This includes identifying complete problems, as well as proving containment and separation results among these classes. Here, p/mC denotes the corresponding quantum promise complexity class with pure (p) or mixed (m) quantum input states for any classical complexity class C. Surprisingly, our findings uncover relationships that diverge from their classical analogues: specifically, we show unconditionally that p/mQIP \neq p/mPSPACE and p/mBQP/qpoly \neq p/mBQP/poly. This starkly contrasts with the classical setting, where QIP = PSPACE and separations such as BQP/qpoly \neq BQP/poly are only known relative to oracles.
This new framework has numerous applications in quantum cryptography, particularly in the contexts of Microcrypt and unconditional cryptography [Qia24, MNY24]. For Microcrypt, we provide a better characterization of its primitives; for example, we show that OWSG and PRS can be broken by a p/mQCMA oracle, leading to a natural quantum analogue of Impagliazzo’s five worlds by substituting the classical complexity classes in Pessiland, Heuristica, and Algorithmica with mBQP and mQCMA. Moreover, we establish the relativization barrier for proving the existence of EFI, noting that no such barrier currently exists within traditional complexity theory. For unconditional cryptography, our framework is the first to capture the notion of unconditional computational hardness, resolving the open problem in [Qia24, MNY24] by constructing an unconditionally secure auxiliary-input quantum commitment scheme with computational binding and statistical hiding. Our framework also has other applications in quantum property testing and unitary synthesis.
The code distance of Floquet codes (Winner of the Best Paper Award!)
Abstract: For fault-tolerant quantum memory defined by periodic Pauli measurements, called Floquet codes, we prove that every correctable, undetectable spacetime error occurring during the steady stage is a product of (i) measurement operators inserted at the time of the measurement and (ii) pairs of identical Pauli operators sandwiching a measurement that commutes with the operator. We call such errors benign; they define a binary vector subspace of spacetime errors that properly generalizes the stabilizers of static Pauli stabilizer codes. Hence, the code distance of a Floquet code is the minimal weight of an undetectable spacetime Pauli error that is not benign. Our results apply more generally to families of dynamical codes for which every instantaneous stabilizer is inferred from measurements in a time interval of bounded length.
High-dimensional quantum Schur transforms and Quantum Fourier transform for the symmetric group
Carli Bruinsma (QuSoft and University of Amsterdam);
Adam Burchardt (QuSoft and CWI);
Jiani Fei (Stanford);
Dmitry Grinko (QuSoft and University of Amsterdam);
Martin Larocca (Los Alamos National Laboratory);
Maris Ozols (QuSoft and University of Amsterdam);
Sydney Timmerman (Stanford);
Vladyslav Visnevskyi (QuSoft, University of Amsterdam, and QMATH, University of Copenhagen)
Abstract: The quantum Schur transform has become a foundational quantum algorithm, yet even two decades after the seminal 2004 paper by Bacon, Chuang, and Harrow (BCH), some aspects of the transform remain insufficiently understood. Moreover, an alternative approach proposed by Krovi in 2018 was recently found to be incomplete. In this submission, we present a corrected version of Krovi's algorithm along with a detailed treatment of the high-dimensional version of the BCH Schur transform. This high-dimensional focus makes the two versions of the transform practical for regimes where the local dimension $d$ is much larger than the number of qudits $n$, with the corrected Krovi algorithm scaling as $\widetilde{O}(n^{7/2})$ in gate and depth complexity, and BCH as $\widetilde{O}(\min(n^5,nd^4))$.
Krovi's version of the Schur transform crucially relies on the quantum Fourier transform for the symmetric group. To that end, we revisit a quantum Fourier transform algorithm by Kawano and Sekigawa. After a careful analysis, we correct their count of elementary one- and two-qubit gates and circuit depth up from $\tilde{\mathcal{O}}(n^3)$ to $\tilde{\mathcal{O}}(n^{7/2})$. This stems from our observation that Kawano and Sekigawa's analysis treats certain complicated multi-qubit operations as elementary. We also correct a mistake in how they label the basis vectors of a certain Hilbert space, simplify their algorithm by removing an unnecessary gate, and expand significantly on the implementation details of the algorithm.
Our work addresses key gaps in the literature, strengthening the algorithmic foundations of a wide range of results that rely on Schur--Weyl duality and Quantum Fourier Transform over the symmetric group in quantum information theory and quantum computation.
Fermionic Insights into Measurement-Based Quantum Computation: Circle Graph States Are Not Universal Resources
Brent Harrison (Dartmouth College);
Vishnu Iyer (University of Texas at Austin);
Ojas Parekh (Sandia National Laboratories);
Kevin Thompson (Sandia National Laboratories);
Andrew Zhao (Sandia National Laboratories)
Abstract: Measurement-based quantum computation (MBQC) is a strong contender for realizing quantum computers. A critical question for MBQC is the identification of resource graph states that can enable universal quantum computation. Any such universal family must have unbounded entanglement width, which is known to be equivalent to the ability to produce any circle graph state from the states in the family using only local Clifford operations, local Pauli measurements, and classical communication. Yet, it was not previously known whether or not circle graph states themselves are a universal resource. We show that, in spite of their expressivity, circle graph states are not efficiently universal for MBQC (assuming BQP ≠ BPP). We prove this by articulating a precise graph-theoretic correspondence between circle graph states and a certain subset of fermionic Gaussian states. This is accomplished by synthesizing a variety of techniques that allow us to handle both stabilizer states and fermionic Gaussian states at the same time. As such, we anticipate that our developments may have broader applications beyond the domain of MBQC.
Characterization of permutation gates in the third level of the Clifford hierarchy
Zhiyang (Sunny) He (MIT);
Luke Robitaille (MIT);
Xinyu Tan (MIT)
Abstract: The Clifford hierarchy is a fundamental structure in quantum computation whose mathematical properties are not fully understood. In this work, we characterize permutation gates---unitaries which permute the $2^n$ basis states---in the third level of the hierarchy. We prove that any permutation gate in the third level must be a product of Toffoli gates in what we define as \emph{staircase form}, up to left and right multiplications by Clifford permutations. We then present necessary and sufficient conditions for a staircase form permutation gate to be in the third level of the Clifford hierarchy. As a corollary, we construct a family of non-semi-Clifford permutation gates $\{U_k\}_{k\geq 3}$ in staircase form such that each $U_k$ is in the third level but its inverse is \emph{not} in the $k$-th level.
Nearly optimal algorithms to learn sparse quantum Hamiltonians
Amira Abbas (Google Quantum AI);
Nunzia Cerrato (Scuola Normale Superiore);
Francisco Escudero Gutiérrez (Centrum Wiskunde & Informatica (CWI) and QuSoft);
Dmitry Grinko (University of Amsterdam and QuSoft);
Francesco Anna Mele (Scuola Normale Superiore);
Pulkit Sinha (Institute for Quantum Computing, University of Waterloo)
Abstract: We study the problem of learning Hamiltonians H that are s-sparse in the Pauli basis, given access to their time-evolution operators. Although Hamiltonian learning has been extensively investigated, two issues recur in much of the existing literature: the absence of lower bounds establishing optimality and the use of mathematically convenient but physically opaque error measures.
We address both challenges by introducing two physically motivated notions of distance between Hamiltonians and designing a nearly optimal algorithm with respect to one of these metrics. The first, the time-constrained distance, quantifies distinguishability through dynamical evolution up to a bounded time. The second, the temperature-constrained distance, captures distinguishability through thermal states at bounded inverse temperatures. We show that s-sparse Hamiltonians with bounded operator norm can be learned under both distances using only $O(s \log(1/\epsilon))$ experiments and $O(s^2/\epsilon)$ total evolution time. For the time-constrained distance, we further establish lower bounds of $\Omega((s/n) \log(1/\epsilon) + s)$ experiments and $\Omega(\sqrt{s}/\epsilon)$ total evolution time, demonstrating near-optimality in the number of experiments.
As an intermediate result, we obtain an algorithm that learns every Pauli coefficient of s-sparse Hamiltonians up to error $\epsilon$ in $O(s \log(1/\epsilon))$ experiments and $O(s/\epsilon)$ total evolution time, improving upon several recent results.
The source of this improvement is a new isolation technique, inspired by the Valiant-Vazirani theorem (STOC’85), which shows that NP is as easy as detecting unique solutions. This isolation technique allows us to query the time evolution of a single Pauli coefficient of a sparse Hamiltonian—even when the Pauli support of the Hamiltonian is unknown—ultimately enabling us to recover the Pauli support itself.
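To make the objects in this abstract concrete: an s-sparse Hamiltonian is a sum of s Pauli strings, and each Pauli coefficient can be read off via a normalized trace inner product, since distinct Pauli strings are trace-orthogonal. The sketch below is an illustrative NumPy example assuming full matrix access on an assumed 3-qubit, s = 2 instance; it is not the paper's query-efficient learning algorithm.

```python
import numpy as np

# Single-qubit Pauli matrices.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(label):
    """Kronecker product of single-qubit Paulis, e.g. 'XZI'."""
    M = np.array([[1.0 + 0j]])
    for ch in label:
        M = np.kron(M, PAULIS[ch])
    return M

def pauli_coefficient(H, label):
    """Recover c_P = tr(P H) / 2^n via the normalized trace inner product."""
    return np.trace(pauli_string(label) @ H).real / 2 ** len(label)

# A 3-qubit Hamiltonian that is s = 2 sparse in the Pauli basis.
H = 0.7 * pauli_string("XZI") - 0.3 * pauli_string("ZZY")
print(pauli_coefficient(H, "XZI"), pauli_coefficient(H, "ZZY"))  # ~0.7, ~-0.3
```

The point of the paper is that these coefficients, and the unknown Pauli support itself, can be recovered from time-evolution queries rather than from the exponentially large matrix used here.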
Post-Quantum Security of Block Cipher Constructions
Gorjan Alagic (University of Maryland/NIST);
Chen Bai (Virginia Tech);
Christian Majenz (Technical University of Denmark);
Kaiyan Shi (University of Maryland)
Abstract: Block ciphers are versatile cryptographic ingredients used in applications ranging from secure Internet communications to disk encryption. While the post-quantum security of public-key cryptography has received significant attention, the case of symmetric-key cryptography (and block ciphers in particular) remains largely unexplored. In this work, we lay the foundations for a theory of post-quantum security for block ciphers and associated constructions. Leveraging our new techniques, we provide the first post-quantum security proofs for the key-length extension scheme FX, the tweakable block ciphers LRW and XEX, and most block cipher encryption and authentication modes. Our techniques can be used for security proofs in both the plain model and the quantum ideal cipher model. Our work takes significant initial steps in establishing a rigorous understanding of the post-quantum security of practical symmetric-key cryptography.
The Black-Box Simulation Barrier Persists in a Fully Quantum World
Nai-Hui Chia (Rice University);
Kai-Min Chung (Academia Sinica);
Xiao Liang (The Chinese University of Hong Kong);
Jiahui Liu (Fujitsu Research of America)
Abstract: Zero-Knowledge (ZK) protocols have been a subject of intensive study due to their fundamental importance and versatility in modern cryptography. However, the inherently different nature of quantum information significantly alters the landscape, necessitating a re-examination of ZK designs.
A crucial aspect of ZK protocols is their round complexity, intricately linked to *simulation*, which forms the foundation of their formal definition and security proofs. In the *post-quantum* setting, where honest parties and their communication channels are all classical but the adversaries could be quantum, Chia, Chung, Liu, and Yamakawa [FOCS'21 & QIP'22] demonstrated the non-existence of constant-round *black-box-simulatable* ZK arguments (BBZK) for NP unless NP is in BQP. However, this problem remains widely open in the full-fledged quantum future that will eventually arrive, where all parties (including the honest ones) and their communication are naturally quantum.
Indeed, this problem is of interest to the broader theory of quantum computing. It has been an important theme to investigate how quantum power fundamentally alters traditional computational tasks, such as the *unconditional* security of Quantum Key Distribution and the incorporation of Oblivious Transfer in MiniQCrypt. Moreover, quantum communication has led to round compression for commitments and interactive arguments. Along this line, the above problem is of great significance in understanding whether quantum computing could also change the nature of ZK protocols in some fundamental manner.
We resolve this problem by proving that only languages in *BQP* admit constant-round *fully-quantum* BBZK. This result has significant implications. Firstly, it illuminates the nature of quantum zero-knowledge and provides valuable insights for designing future protocols in the quantum realm. Secondly, it relates ZK round complexity to the intriguing problem of BQP vs QMA, which is out of the reach of previous analogous impossibility results in the classical or post-quantum setting. Lastly, it justifies the need for the non-black-box simulation techniques or the relaxed security notions employed in existing constant-round fully-quantum BBZK protocols.
Abstract: Quantum access to arbitrary classical data encoded in unitary black-box oracles underlies interesting data-intensive quantum algorithms, such as machine learning or electronic structure simulation. The feasibility of these applications depends crucially on gate-efficient implementations of these oracles, which are commonly some reversible versions of the boolean circuit for a classical lookup table. We present a general parameterized architecture for quantum circuits implementing a lookup table that encompasses all prior work in realizing a continuum of optimal tradeoffs between qubits, non-Clifford gates, and error resilience, up to logarithmic factors. Our architecture assumes only local 2D connectivity, yet, with the appropriate parameters, recovers prior results, including poly-logarithmic error scaling. We also identify novel regimes, such as simultaneous sublinear scaling in all parameters. These results enable tailoring implementations of the commonly used lookup table primitive to any given quantum device with constrained resources.
Higher moment theory and learnability of bosonic states
Joseph T. Iosue (University of Maryland);
Yu-Xin Wang (University of Maryland);
Ishaun Datta (Stanford University);
Soumik Ghosh (University of Chicago);
Changhun Oh (Korea Advanced Institute of Science and Technology);
Bill Fefferman (University of Chicago);
Alexey V. Gorshkov (University of Maryland)
Abstract: We present a sample- and time-efficient algorithm to learn any bosonic Fock state acted upon by an arbitrary Gaussian unitary. As a special case, this algorithm efficiently learns states produced in Fock state BosonSampling, thus resolving an open question put forth by Aaronson and Grewal (Aaronson, Grewal 2023). We further study a hierarchy of classes of states beyond Gaussian states that are specified by a finite number of their higher moments. Using the higher moments, we find a full spectrum of invariants under Gaussian unitaries, thereby providing necessary conditions for two states to be related by an arbitrary (including active, e.g.~beyond linear optics) Gaussian unitary.
Entangling logical qubits without physical operations
Shayan Majidy (Harvard);
Jin Ming Koh (Harvard);
Anqi Gong (ETH);
Andrei C. Diaconu (Harvard);
Daniel Bochen Tan (Harvard);
Alexandra A. Geim (Harvard);
Michael J. Gullans (University of Maryland/NIST);
Norman Y. Yao (Harvard);
Mikhail D. Lukin (Harvard)
Abstract: Fault-tolerant logical entangling gates are essential for scalable quantum computing, but are limited by the error rates and overheads of physical two-qubit gates and measurements. To address this limitation, we introduce phantom codes---quantum error-correcting codes that realize entangling gates between all logical qubits in a codeblock purely through relabelling of physical qubits during compilation, yielding perfect fidelity with no spatial or temporal overhead. We present a systematic study of such codes. First, we identify phantom codes using complementary numerical and analytical approaches. We exhaustively enumerate all 2.71 x 10^{10} inequivalent CSS codes up to n=14 and identify additional instances up to n=21 via SAT-based methods. We then construct higher-distance phantom-code families using quantum Reed--Muller codes and the binarization of qudit codes. Across all identified codes, we characterize other supported fault-tolerant logical Clifford and non-Clifford operations. Second, through end-to-end noisy simulations with state preparation, full QEC cycles, and realistic physical error rates, we demonstrate scalable advantages of phantom codes over the surface code across multiple tasks. We observe a one- to two-order-of-magnitude reduction in logical infidelity at comparable qubit overhead for GHZ-state preparation and Trotterized many-body simulation tasks, given a modest preselection acceptance rate. Our work establishes phantom codes as a viable architectural route to fault-tolerant quantum computation with scalable benefits for workloads with dense local entangling structure, and introduces general tools for systematically exploring the broader landscape of quantum error-correcting codes.
Quantum simulation of chemistry via quantum fast multipole method
Dominic Berry (Macquarie University);
Kianna Wan (Stanford University);
Andrew Baczewski (Sandia National Laboratories);
Elliot Eklund (University of Sydney);
Arkin Tikku (University of Sydney);
Ryan Babbush (Google Quantum AI)
Abstract: Here we describe an approach for simulating quantum chemistry on quantum computers with significantly lower asymptotic complexity than prior work. The approach uses a real-space first-quantised representation of the molecular Hamiltonian which we propagate using high-order product formulae. Essential for this low complexity is the use of a technique similar to the fast multipole method for computing the Coulomb operator with O(eta) complexity for a simulation with eta particles. We show how to modify this algorithm so that it can be implemented on a quantum computer. We ultimately demonstrate an approach with t(eta^{4/3} N^{1/3} + eta^{1/3} N^{2/3})(eta Nt/epsilon)^o(1) gate complexity, where N is the number of grid points, epsilon is target precision, and t is the duration of time evolution. This is roughly a speedup by O(eta) over most prior algorithms. We provide lower complexity than all prior work for N<eta^7 (the regime of practical interest), with only first-quantised interaction-picture simulations providing better performance for N>eta^7. As with the classical fast multipole method, large numbers eta>10^3 would be needed to realise this advantage.
Power and limitations of distributed quantum state purification
Benchi Zhao (The University of Hong Kong);
Yu-Ao Chen (HKUST(GZ));
Xuanqiang Zhao (The University of Hong Kong);
Chengkai Zhu (HKUST(GZ));
Giulio Chiribella (The University of Hong Kong);
Xin Wang (HKUST(GZ))
Abstract: Quantum state purification protocols, which mitigate noise by converting multiple copies of noisy quantum states into fewer copies with a lower noise level, have applications in quantum communication and computation with imperfect devices. Here, we systematically study the task of state purification in distributed quantum systems, demanding that purification be achieved by local operations and classical communication (LOCC). We prove that, in the presence of depolarizing noise, no LOCC purification protocol starting from two copies can work blindly for all the states in three important sets: the set of all pure two-qubit states, the set of all two-qubit maximally entangled states, and the Bell basis. In stark contrast, we show that a targeted, single-state purification is always achievable in the presence of depolarizing noise, and we provide an explicit analytical LOCC protocol for every given two-qubit state. For arbitrary finite sets of pure states and arbitrary noise profiles, we develop an optimization-based algorithm that systematically designs LOCC purification protocols, and we demonstrate it through concrete examples. Overall, our results identify both fundamental limitations and practical noise reduction strategies for distributed quantum information processing.
Efficient quantum circuits for high-dimensional representations of SU(n) and Ramanujan quantum expanders
Vishnu Iyer (UT Austin);
Siddhartha Jain (UT Austin);
Stephen Jordan (Google Quantum AI);
Rolando Somma (Google Quantum AI)
Abstract: We present efficient quantum circuits that implement high-dimensional unitary irreducible representations (irreps) of SU(n), where n>=2 is constant. For dimension N and error eps, the number of quantum gates in our circuits is polynomial in log(N) and log(1/eps). Our construction relies on the Jordan-Schwinger representation, which allows us to realize irreps of SU(n) in the Hilbert space of n quantum harmonic oscillators. Together with a recent efficient quantum Hermite transform, which allows us to map the computational basis states to the eigenstates of the quantum harmonic oscillator, this allows us to implement these irreps efficiently. Our quantum circuits can be used to construct explicit Ramanujan quantum expanders, a longstanding open problem. They can also be used to fast-forward the evolution of certain quantum systems.
Quantum Search With Generalized Wildcards
Arjan Cornelissen (Simons Institute for the Theory of Computing);
Nikhil S. Mande (University of Liverpool);
Subhasree Patro (Technische Universiteit Eindhoven);
Nithish Raja (Technische Universiteit Eindhoven);
Swagato Sanyal (University of Sheffield)
Abstract: In the search with wildcards problem [Ambainis, Montanaro, Quantum Inf.~Comput.'14], one's goal is to learn an unknown bit-string x \in \{-1,1\}^n. An algorithm may, at unit cost, test equality of any subset of the hidden string with a string of its choice. Ambainis and Montanaro showed a quantum algorithm of cost O(\sqrt{n} \log n) and a near-matching lower bound of \Omega(\sqrt{n}). Belovs [Comput.~Comp.'15] subsequently showed a tight O(\sqrt{n}) upper bound.
We consider a natural generalization of this problem, parametrized by a subset \cal{Q} \subseteq 2^{[n]}, where an algorithm may test whether x_S = b for an arbitrary S \in \cal{Q} and b \in \{-1,1\}^S of its choice, at unit cost. We show near-tight bounds when \cal{Q} is any of the following collections: bounded-size sets, contiguous blocks, prefixes, and only the full set.
All of these results are derived using a framework that we develop. Using symmetries of the task at hand we show that the quantum query complexity of learning x is characterized, up to a constant factor, by an optimization program, which is succinctly described as follows: `maximize over all odd functions f : \{-1,1\}^n \to \mathbb{R} the ratio of the maximum value of f to the maximum (over T \in \cal{Q}) standard deviation of f on a subcube whose free variables are exactly T.'
To the best of our knowledge, ours is the first work to use the primal version of the negative-weight adversary bound (which is a maximization program typically used to show lower bounds) to show new quantum query upper bounds without explicitly resorting to SDP duality.
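The query model above can be made concrete with a toy classical baseline. The sketch below is a hypothetical Python interface (names and helper functions are our own, not from the paper) for the instantiation where Q is the collection of prefixes: each unit-cost query tests whether x_S = b for a chosen prefix S and guess b, and a simple classical learner recovers x in exactly n queries, one per bit. The bounds in the abstract concern the quantum analogue of this access model.

```python
# Hypothetical interface for one instantiation of the generalized
# wildcard model: Q = prefixes, and each query tests x_S = b at unit cost.

def make_prefix_oracle(x):
    """Return a prefix-equality oracle for the hidden string x, plus a counter."""
    counter = {"queries": 0}
    def query(b):
        # b guesses the first len(b) bits of x; the oracle only answers yes/no.
        counter["queries"] += 1
        return tuple(x[:len(b)]) == tuple(b)
    return query, counter

def learn_with_prefix_queries(query, n):
    """Classical baseline: recover x in exactly n queries, one per bit."""
    learned = []
    for _ in range(n):
        # Extend the known-correct prefix with a guess of +1; a 'no'
        # answer pins the next bit to -1.
        if query(tuple(learned) + (1,)):
            learned.append(1)
        else:
            learned.append(-1)
    return learned

x = [1, -1, -1, 1, -1]
query, counter = make_prefix_oracle(x)
print(learn_with_prefix_queries(query, len(x)) == x, counter["queries"])  # True 5
```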
Optimal Qubit Purification and Unitary Schur Sampling via Random SWAP Tests (Winner of the Best Student Paper Award!)
Shrigyan Brahmachari (Duke University);
Austin Hulse (Duke University);
Henry Pfister (Duke University);
Iman Marvian (Duke University)
Abstract: The goal of qubit purification is to combine multiple noisy copies of an unknown pure quantum state to obtain one or more copies that are closer to the pure state. We show that a simple protocol based solely on random SWAP tests achieves the same fidelity as the Schur transform, which is optimal. This protocol relies only on elementary two-qubit SWAP tests, which project a pair of qubits onto the singlet or triplet subspaces, to identify and isolate singlet pairs, and then proceeds with the remaining qubits. For a system of $n$ qubits, we show that after approximately $T \approx n \ln n$ random SWAP tests, a sharp transition occurs: the probability of detecting any new singlet decreases exponentially with $T$. Similarly, the fidelity of each remaining qubit approaches the optimal value given by the Schur transform, up to an error that is exponentially small in $T$. More broadly, this protocol achieves what is known as weak Schur sampling and unitary Schur sampling with error $\epsilon$, after only $2n \ln(n \epsilon^{-1})$ SWAP tests. That is, it provides a lossless method for extracting any information invariant under permutations of qubits, making it a powerful subroutine for tasks such as quantum state tomography and metrology.
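The elementary primitive in the protocol above can be checked numerically: a two-qubit SWAP test projects onto the singlet subspace with projector (I - SWAP)/2 and onto the triplet subspace with (I + SWAP)/2. The sketch below is a small NumPy illustration of that single primitive, not the full purification protocol; it verifies that two identical pure states never report "singlet", while orthogonal states do so with probability 1/2.

```python
import numpy as np

# Two-qubit SWAP operator in the computational basis.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
P_singlet = (np.eye(4) - SWAP) / 2  # antisymmetric (singlet) projector
P_triplet = (np.eye(4) + SWAP) / 2  # symmetric (triplet) projector

def singlet_probability(psi, phi):
    """Probability that a SWAP test on |psi>|phi> reports 'singlet'."""
    state = np.kron(psi, phi)
    return float(np.real(state.conj() @ P_singlet @ state))

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Identical pure states lie in the symmetric subspace: never singlet.
print(singlet_probability(zero, zero))  # 0.0
# Orthogonal states report singlet exactly half the time.
print(singlet_probability(zero, one))   # 0.5
```

In the protocol, pairs that test singlet are discarded; the abstract's result is that repeating this on random pairs achieves the optimal Schur-transform fidelity.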
Quantum Metrology with Constrained Ancillae
Qiushi Liu (Perimeter Institute for Theoretical Physics);
Yuxiang Yang (The University of Hong Kong)
Abstract: We present a systematic framework addressing the challenge of identifying optimal sequential strategies for noisy quantum metrology under resource constraints, with a focus on restricted ancillae. While achieving the optimal metrological precision generally requires quantum error correction, we derive rigorous sufficient conditions for attaining the Heisenberg limit using ancilla-free sequential strategies, either without control or with identical unitary controls, based on a spectral analysis of the quantum channel. Complementing this asymptotic analysis, we introduce an efficient tensor network algorithm for optimizing ancilla-constrained metrological strategies in the finite-query regime, adaptable to a wide variety of noise models and experimental control capabilities.
High-Performance qLDPC Codes with Efficient Layouts on Flying Qubits
Edwin Tham (IonQ Inc.);
Nicolas Delfosse (IonQ Inc.);
Min Ye (IonQ Inc.);
Arda Aydin;
John G. Gamble (IonQ Inc.);
Ilia Khait (IonQ Inc.)
Abstract: Quantum low-density parity-check (qLDPC) codes are a class of quantum error-correction (QEC) codes with low-weight parity checks that each require only a few two-qubit gates to implement. In recent years, qLDPC codes have gained popularity, as concrete code constructions have been found that outperform the surface code, and correspondingly performant practical decoders have been built. An outstanding challenge, however, remains: their Tanner graphs are not 2D-local, necessitating entangling gates between distant qubits on a 2D device. Trapped-ion and neutral-atom qubits can move qubits around when necessary – i.e. “flying qubits” – obviating the need for long-range gates.
Here we report on an explicit layout that leverages flying qubits and has very low overhead for many families of cyclic codes (including the most promising qLDPC instances found to date). Crucially, our layout eschews more complicated qubit permutations and instead favours the cyclic shift: a simple re-ordering of qubits along a loop that can be realized in depth 1 even on current-generation devices. This contrasts significantly with layouts on fixed qubits, which depend on a large number of long-range (and more error-prone) hardware couplers for long-distance gates.
We also report two competitive new sets of cyclic qLDPC codes that we constructed. The first is a set of Bivariate-Bicycle (BB) codes with lower-weight parity checks and higher minimum distance while maintaining the same length and encoding rate as comparable BB codes in the literature. Second, we constructed new Hypergraph Product (HGP) codes that significantly outperform previously state-of-the-art HGP instances optimized by machine-learning methods. Both sets of new codes are efficiently implementable with our cyclic layout, with syndrome circuits of fixed depth made up of alternating layers of parallel gates and only a very small number of cyclic shifts.
Combining competitive new qLDPC codes with a simple layout implementable on existing hardware, our work suggests a concrete and practical path towards a fault-tolerant quantum computer.
Quadratic tensors as a unification of Clifford, Gaussian, and free-fermion physics
Andreas Bauer (Massachusetts Institute of Technology);
Seth Lloyd (Massachusetts Institute of Technology)
Abstract: Certain families of quantum mechanical models can be described and solved efficiently on a classical computer, including qubit or qudit Clifford circuits and stabilizer codes, free-boson or free-fermion models, and certain rotor and GKP codes.
We show that all of these families can be described as instances of the same algebraic structure, namely quadratic functions over abelian groups, or more generally over (super) Hopf algebras.
Different kinds of degrees of freedom correspond to different "elementary" abelian groups or Hopf algebras:
$\mathbb Z_2$ for qubits, $\mathbb Z_d$ for qudits, $\mathbb R$ for continuous variables, both $\mathbb Z$ and $\mathbb R/\mathbb Z$ for rotors, and a super Hopf algebra $\mathcal F$ for fermionic modes.
Objects such as states, operators, superoperators, and projection-operator-valued measures are tensors.
For the solvable models above, these tensors are quadratic tensors based on quadratic functions.
Quadratic tensors with $n$ degrees of freedom are fully specified by only $O(n^2)$ coefficients.
Tensor networks of quadratic tensors can be contracted efficiently on the level of these coefficients, using an operation reminiscent of the Schur complement.
Our formalism naturally includes models with mixed degrees of freedom, such as qudits of different dimensions.
We also use quadratic functions to define generalized stabilizer codes and Clifford gates for arbitrary abelian groups.
Finally, we give a generalization from quadratic (or 2nd order) to $i$th order tensors, which are specified by $O(n^i)$ coefficients but cannot be contracted efficiently in general.
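The coefficient-level contraction via a Schur complement has a familiar Gaussian-integral analogue, which may help fix intuition. A hedged numpy sketch (my own toy example, not the paper's formalism): integrating a Gaussian with precision matrix $A$ over a subset of variables produces the Schur complement of the corresponding block, which is exactly the precision matrix of the marginal.

```python
import numpy as np

def schur_marginal_precision(A, keep, out):
    """Precision matrix of the Gaussian marginal over `keep`, obtained by
    taking the Schur complement of the integrated-out block `out`."""
    Akk = A[np.ix_(keep, keep)]
    Ako = A[np.ix_(keep, out)]
    Aok = A[np.ix_(out, keep)]
    Aoo = A[np.ix_(out, out)]
    return Akk - Ako @ np.linalg.solve(Aoo, Aok)

# Random positive-definite precision matrix on 5 variables.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)

keep, out = [0, 1], [2, 3, 4]
P = schur_marginal_precision(A, keep, out)

# Cross-check: the marginal covariance is the (keep, keep) block of A^{-1},
# so inverting that block must recover the Schur complement.
Sigma = np.linalg.inv(A)
assert np.allclose(np.linalg.inv(Sigma[np.ix_(keep, keep)]), P)
```

Contracting shared indices of two quadratic tensors composes in the same way, which is why only the $O(n^2)$ coefficients ever need to be manipulated.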
Abstract: We investigate the role of energy, i.e. average photon number, in the computational complexity of bosonic systems. We show three sets of results: (1. Energy growth rates) There exist bosonic gate sets which increase energy incredibly rapidly, attaining e.g. infinite energy in finite/constant time. We prove these high energies can make computing properties of bosonic computations, such as deciding whether a given computation will attain infinite energy, extremely difficult, and in fact formally undecidable. (2. Lower bounds on computational power) More energy "=" more computational power. For example, certain gate sets allow poly-time bosonic computations to simulate PTOWER, the set of deterministic computations whose runtime scales as a tower of exponentials with polynomial height. Even just exponential energy and O(1) modes suffice to simulate NP, which, importantly, is a setup similar to that of the recent bosonic factoring algorithm of [Brenner, Caha, Coiteux-Roy and Koenig (2024)]. For simpler gate sets, we show an energy hierarchy theorem. (3. Upper bounds on computational power) Bosonic computations with polynomial energy can be simulated in BQP, "physical" bosonic computations with arbitrary finite energy are decidable, and the gate set consisting of Gaussian gates and the cubic phase gate can be simulated in PP under an exponential bound on energy, improving upon the previous PSPACE upper bound. Finally, combining upper and lower bounds yields no-go theorems for a continuous-variable Solovay-Kitaev theorem for gate sets such as the Gaussian and cubic phase gates. Our results imply that, just like time and space, energy is a computational resource, and that theoretical models taking energy into account are needed for bosonic quantum computations.
Entanglement area law in interacting bosons: from Bose-Hubbard, $\phi^4$, and beyond
Donghoon Kim (RIKEN Center for Quantum Computing);
Tomotaka Kuwahara (RIKEN Center for Quantum Computing)
Abstract: The entanglement area law is a universal principle that characterizes quantum many-body phases and underpins tensor network algorithms. Traditionally, its validity has been limited to systems with short-range interactions and bounded local energy. Achieving a complete generalization that removes both of these constraints has been a longstanding goal in quantum many-body theory, especially for interacting boson systems where unbounded energy presents intrinsic difficulties. In this work, we rigorously prove the area law for one-dimensional interacting boson systems with long-range interactions, covering broad models including the Bose-Hubbard and $\phi^4$ classes. Furthermore, we establish an efficiency guarantee for Matrix-Product-State approximations of the ground states, offering a practical route to numerical simulation. One of our main technical contributions is a general method for Hilbert space dimension reduction, whose applicability extends to arbitrary spatial dimensions. These results address two major challenges simultaneously and provide important foundations for simulating long-range cold atomic systems.
Beyond Belief Propagation: Cluster-Corrected Tensor Network Contraction with Exponential Convergence
Siddhant Midha (Princeton University);
Yifan Frank Zhang (Princeton University)
Abstract: Tensor network contraction on arbitrary graphs is a fundamental computational challenge with applications ranging from quantum simulation to quantum error correction. Belief propagation (BP) offers a powerful and scalable approximation method for this task, yet its accuracy limitations remain poorly understood and systematic improvements have been lacking. In this work, we present a rigorous theoretical framework for BP in tensor networks that resolves these issues. By importing ideas from statistical mechanics, we construct a convergent cluster expansion that systematically corrects BP and yields rigorous error bounds.
This addresses two fundamental questions about the BP algorithm:
- It clarifies when BP approximates the ground truth well and provides a rigorous error bound.
- It gives a polynomial-time algorithm that improves BP to inverse-polynomial error.
Put together, our results lay the groundwork for a principled and extensible theory of BP-based tensor network contraction.
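To make the BP baseline concrete, here is a small self-contained sketch (my own toy example, not code from the paper) running belief propagation on a 3-site Ising chain, where the graph is a tree and BP marginals are provably exact; the paper's cluster expansion is what corrects BP beyond this tree-exact regime.

```python
import itertools
import numpy as np

# Toy Ising chain 0-1-2: pairwise factors exp(J*x*y) and local fields exp(h_i*x).
# On a tree-shaped graph, belief propagation marginals are exact.
J = 0.7
h = [0.3, -0.2, 0.1]
spins = [-1, 1]
edges = [(0, 1), (1, 2)]

def pair(x, y):
    return np.exp(J * x * y)

def local(i, x):
    return np.exp(h[i] * x)

def bp_marginals(n_iter=10):
    # msgs[(i, j)][b] = message from site i to site j, evaluated at x_j = spins[b]
    msgs = {d: np.ones(2) for u, v in edges for d in [(u, v), (v, u)]}
    for _ in range(n_iter):
        new = {}
        for (i, j) in msgs:
            out = np.zeros(2)
            for b, xj in enumerate(spins):
                for a, xi in enumerate(spins):
                    prod = local(i, xi)
                    for (k, l) in msgs:
                        if l == i and k != j:  # incoming messages, excluding j
                            prod *= msgs[(k, i)][a]
                    out[b] += prod * pair(xi, xj)
            new[(i, j)] = out / out.sum()
        msgs = new
    beliefs = []
    for i in range(3):
        bel = np.array([local(i, s) for s in spins])
        for (k, l) in msgs:
            if l == i:
                bel *= msgs[(k, i)]
        beliefs.append(bel / bel.sum())
    return beliefs

def exact_marginals():
    # Brute-force enumeration of all 2^3 spin configurations.
    w = {c: pair(c[0], c[1]) * pair(c[1], c[2])
            * np.prod([local(i, c[i]) for i in range(3)])
         for c in itertools.product(spins, repeat=3)}
    Z = sum(w.values())
    return [np.array([sum(v for c, v in w.items() if c[i] == s)
                      for s in spins]) / Z for i in range(3)]

for bp, ex in zip(bp_marginals(), exact_marginals()):
    assert np.allclose(bp, ex)  # BP is exact on a tree
```

On loopy graphs the same message iteration generally converges to a biased fixed point; the cluster corrections developed in the paper systematically remove that bias with rigorous error bounds.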
Simulating noisy IQP circuits under amplitude damping
Shravan Shravan (University of New Mexico);
Mohsin Raza (University of New Mexico);
Ariel Shlosberg (University of New Mexico)
Abstract: The classical simulation of noisy-intermediate scale quantum (NISQ) circuits has been a topic
of intense study over the past few years. The majority of results on efficient simulation assume
that the circuits undergo some variant of unital noise. For example, it has been shown that the
output distributions of random quantum circuits and arbitrary IQP circuits undergoing depolarizing
noise can be simulated in polynomial time with low error. However, it is currently unknown if such
results can be extended to circuits undergoing non-unital noise. In this work, we partially answer this question by providing a classical algorithm that simulates the output distributions of arbitrary IQP circuits of depth d = Ω(log(n)) undergoing amplitude damping noise with runtime O(d · poly(n/ϵ)).
Provable Speedups for Convex Optimization via Quantum Dynamics
Shouvanik Chakrabarti (JPMorganChase);
Dylan Herman (JPMorganChase);
Jacob Watkins (JPMorganChase);
Enrico Fontana (JPMorganChase);
Brandon Augustino (JPMorganChase);
Junhyung Lyle Kim (JPMorganChase);
Marco Pistoia (JPMorganChase)
Abstract: This work investigates the possibility of quantum speedups for continuous optimization through quantum Hamiltonian simulation. We establish the first rigorous query complexity bounds for unconstrained convex optimization via a fully-specified instance of digital quantum annealing, based on the non-adiabatic Quantum Hamiltonian Descent (QHD) framework. In the process, we derive the first rigorous resource estimates for digital quantum simulation of Schr\"odinger operators that depend only on input simulation parameters, given black-box evaluation access to a separable $G$-Lipschitz potential $b(t)f(x)$. We apply these simulation bounds to assess the complexity of optimization in the high-dimensional regime.
Our annealing schedule achieves \emph{arbitrarily fast} convergence rates in the evolution time, with computational time determined solely by the cost of discretization. We show that a $G$-Lipschitz convex function can be optimized to an error of $\epsilon$ with $\widetilde{\Ocal}(d^{1.5} G^2 R^2/\epsilon^2)$ queries, given a starting point that is Euclidean distance $R$ from optimal. Under reasonable assumptions about the query complexity of simulating general Schr\"odinger operators and choice of initial state, we show that $\widetilde{\Omega}(d/\epsilon^2)$ queries are necessary. As a result, QHD does not appear to offer improvements over classical zeroth order methods when $f$ is accessed via exact black-box evaluations.
However, we show that the QHD algorithm can tolerate $\widetilde{\Ocal}(\epsilon^3 /d^{1.5} G^2 R^2)$ noise in function evaluation and, as a result, provides a super-quadratic query advantage over the best existing noise-tolerant classical algorithms in the high-dimensional setting. We leverage this to design a quantum algorithm for stochastic convex optimization that offers a super-quadratic speedup over all known classical algorithms in this regime. The algorithm also outperforms existing zeroth-order quantum algorithms for noisy (with the same noise tolerance) and stochastic convex optimization in this setting. To our knowledge, these results represent the first rigorous quantum speedups for convex optimization obtained through a dynamical algorithm.
Spectral Small-Incremental Entangling: Breaking Quasi-Polynomial Complexity Barriers in Long-Range Interacting Systems
Tomotaka Kuwahara (RIKEN Center for Quantum Computing);
Yusuke Kimura (RIKEN Center for Quantum Computing);
Hugo Mackay (Harvard University);
Ayumi Ukai (RIKEN Center for Quantum Computing);
Carla Rubiliani (University of Tübingen);
Donghoon Kim (RIKEN Center for Quantum Computing);
Yosuke Mitsuhashi (RIKEN Center for Quantum Computing);
Hideaki Nishikawa (RIKEN Center for Quantum Computing);
Cheng Shang (RIKEN Center for Quantum Computing)
Abstract: How the detailed entanglement structure emerges from quantum dynamics remains a fundamental challenge, motivated by recent advances in quantum simulators and information processing. As a central milestone, the Small-Incremental-Entangling (SIE) theorem bounds the entanglement-entropy growth rate, but does not control the entanglement spectrum itself. In this work, we define the spectral-entangling strength, which quantifies how strongly an operator can reshape the distribution of Schmidt coefficients across a bipartition. We then prove a spectral SIE theorem: for R\'enyi index $\alpha \ge 1/2$, the growth rate of R\'enyi entanglement entropies admits a universal bound. Remarkably, our bound at $\alpha=1/2$ is both qualitatively and quantitatively optimal; below this threshold ($\alpha<1/2$), no universal speed limit on entanglement growth can exist. This result yields a sharp $1/s^2$ threshold in the tail of the ordered Schmidt coefficients (with $s$ the Schmidt index), enabling rigorous truncation-based error control and establishing a quantitative link between entanglement-spectrum structure and computational complexity. As a practical highlight, for one-dimensional power-law interactions $1/r^{\eta}$ with $\eta>2$, this implies matrix-product-state approximations with bond dimension polynomial in $(n/\varepsilon)$ for ground states, real-time evolved states, and Gibbs states, thereby closing the quasi-polynomial gap. By controlling R\'enyi entanglement, we further obtain a rigorous \emph{a priori} bound on truncation error for time-dependent DMRG/TEBD-type simulations. Overall, we extend the SIE paradigm from bounding entanglement entropies to constraining the entanglement spectrum itself.
Fine-Grained Complexity for Quantum Problems from Size-Preserving Circuit-to-Hamiltonian Constructions
Nai-Hui Chia (Department of Computer Science, Rice University);
Atsuya Hasegawa (Graduate School of Mathematics, Nagoya University);
Francois Le Gall (Graduate School of Mathematics, Nagoya University);
Yu-Ching Shen (Department of Computer Science, Rice University)
Abstract: The local Hamiltonian (LH) problem is the canonical $\mathsf{QMA}$-complete problem introduced by Kitaev. In this paper, we show its hardness in a very strong sense: we show that the 3-local Hamiltonian problem on $n$ qubits cannot be solved classically in time $O(2^{(1-\varepsilon)n})$ for any $\varepsilon>0$ under the Strong Exponential-Time Hypothesis (SETH), and cannot be solved quantumly in time $O(2^{(1-\varepsilon)n/2})$ for any $\varepsilon>0$ under the Quantum Strong Exponential-Time Hypothesis (QSETH). These lower bounds give evidence that the currently known classical and quantum algorithms for LH cannot be significantly improved.
Furthermore, we are able to demonstrate fine-grained complexity lower bounds for approximating the quantum partition function (QPF) with an arbitrary constant relative error. Approximating QPF with relative error is known to be equivalent to approximately counting the dimension of the solution subspace of $\mathsf{QMA}$ problems. We show the SETH and QSETH hardness to estimate QPF with constant relative error. We then provide a quantum algorithm that runs in $O(\sqrt{2^n})$ time for an arbitrary $1/\poly(n)$ relative error, matching our lower bounds and improving the state-of-the-art algorithm by Bravyi, Chowdhury, Gosset, and Wocjan (Nature Physics 2022) in the low-temperature regime.
To prove our fine-grained lower bounds, we introduce the first size-preserving circuit-to-Hamiltonian construction that encodes the computation of a $T$-time quantum circuit acting on $N$ qubits into a $(d+1)$-local Hamiltonian acting on $N+O(T^{1/d})$ qubits. This improves the standard construction based on the unary clock, which uses $N+O(T)$ qubits.
Constant-Overhead Entanglement Distillation via Scrambling
Abstract: High-fidelity quantum entanglement enables key quantum networking capabilities such as secure communication and distributed quantum computing, but long-distance entanglement distribution is limited by noise and loss. Entanglement distillation protocols address this problem by extracting high-fidelity Bell pairs from multiple noisy ones. The primary objective is minimizing the resource overhead: the number of noisy input pairs needed to distill each high-fidelity output pair. While protocols achieving optimal overhead are known in theory, they often require complex decoding operations that make practical implementation challenging. We circumvent this challenge by introducing protocols that use quantum scrambling --- the spreading of quantum information under chaotic dynamics --- through random Clifford operations. Based on this scrambling mechanism, our protocol maintains asymptotically \emph{constant} overhead, independent of the desired output error rate $\bar{\varepsilon}$, and can be implemented with shallow quantum circuits of depth $O(\poly \log \log \bar{\varepsilon}^{-1})$ and memory $O(\poly \log \bar{\varepsilon}^{-1})$. Our protocol remains effective even with noisy quantum gates. By incorporating error correction, our protocol achieves state-of-the-art performance: starting with pairs of 10\% initial infidelity, we require only 7 noisy inputs per output pair to distill a single Bell pair with infidelity $\bar{\varepsilon}=10^{-12}$, substantially outperforming existing schemes. We demonstrate the utility of our protocols for quantum repeater networks.
On the Complexity of Decoded Quantum Interferometry
Kunal Marwaha (University of Chicago);
Bill Fefferman (University of Chicago);
Alexandru Gheorghiu (IBM Quantum);
Vojtech Havlicek (IBM Quantum)
Abstract: We study the complexity of Decoded Quantum Interferometry (DQI), a recently proposed quantum algorithm for approximate optimization. We argue that DQI is hard to classically simulate, and that the hardness comes from locating an exponentially large hidden subset. This type of hardness is shared by Shor's algorithm, but the hidden subset here has no apparent group structure. We first prove that DQI can be simulated in a low level of the polynomial hierarchy, ruling out hardness arguments related to quantum supremacy. Instead, we show that DQI implements an existential coding theory bound based on the MacWilliams identity, and that it prepares a state within an obfuscated quantum harmonic oscillator. Both viewpoints require a coherent application of a discrete Hermite transform, which has no natural classical analog.
A Unified Approach to Quantum Key Leasing with a Classical Lessor
Fuyuki Kitagawa (NTT Social Informatics Laboratories, NTT Research Center for Theoretical Quantum Information);
Jiahui Liu (Fujitsu Research of America);
Shota Yamada (AIST);
Takashi Yamakawa (NTT Social Informatics Laboratories, NTT Research Center for Theoretical Quantum Information)
Abstract: Secure key leasing allows a cryptographic key to be leased as a quantum state in such a way that the key can later be revoked in a verifiable manner. In this work, we propose a modular framework for constructing secure key leasing with a classical lessor, where the lessor is entirely classical and, in particular, the quantum secret key can be both leased and revoked using only classical communication. Based on this framework, we obtain classical-lessor secure key leasing schemes for public-key encryption (PKE), pseudorandom functions (PRF), and digital signatures. We adapt the strong security notion known as security against verification key revealing attacks (VRA security), proposed by Kitagawa et al. (Eurocrypt 2025), to the classical-lessor setting, and we prove that all three of our schemes satisfy this notion under the learning with errors assumption. Our PKE scheme improves upon the previous construction by Goyal et al. (Eurocrypt 2025), and our PRF and digital signature schemes are respectively the first PRF and digital signature schemes with the classical-lessor secure key leasing property.
Along the way, we also construct a watermarking scheme and a dual-mode secure function evaluation scheme that satisfy certain useful properties, which may be of independent interest.
Quantum Merlin-Arthur with an Internally Separable Proof
Roozbeh Bassirian (University of Chicago);
Bill Fefferman (University of Chicago);
Itai Leigh (Tel Aviv University);
Kunal Marwaha (University of Chicago);
Pei Wu (Penn State University)
Abstract: While the role of entanglement in quantum proof systems has been extensively studied, the computational power of unentanglement remains poorly understood. Since entanglement admits many inequivalent multipartite structures, it is natural to ask how more fine-grained structural promises affect computational power.
In this work we investigate a mild promise: each proof is internally separable, meaning that after tracing out one register, a designated constant-size subsystem is separable from the rest—even though the overall proof may still be entangled across every bipartition. We prove a qualitative jump from one proof to two: with one internally separable proof, the resulting class is contained in $\EXP$ (even allowing an inverse-exponential completeness–soundness gap), whereas with two unentangled internally separable proofs, the class equals $\NEXP$ at constant gap. Notably, in the $\NEXP$ construction, the second proof is used solely to implement a SWAP-based purity test.
Tight and Robust Consecutive Measurement Theorems with Applications to Quantum Cryptography
Chen-Xun Weng (Nanjing University);
Minglong Qin (National University of Singapore);
Yanglin Hu (University of Hong Kong);
Marco Tomamichel (National University of Singapore)
Abstract: In many quantum information tasks, we encounter scenarios where information about two incompatible observables must be retrieved. A natural approach is to perform consecutive measurements, raising a key question: How does the information gained from the first measurement compare to that from both? The consecutive measurement theorem (CMT) provides a general relation between these quantities and has found applications in quantum cryptography. However, its previous formulations are often either too loose or too brittle to yield meaningful bounds. In this work, we first establish a tight CMT and apply it to achieve the best upper bounds on the quantum value of certain nonlocal games and their parallel repetitions to date. We then develop a robust CMT and explore a novel application of CMT to obtain a tighter no-go theorem for quantum oblivious transfer in some regime. These contributions strengthen the theoretical tools for analyzing quantum advantage and have concrete implications for nonlocal games and quantum cryptographic protocols.
Learning and certification of local time-dependent quantum dynamics and noise
Daniel Stilck França (University of Copenhagen);
Tim Moebus (University of Cambridge);
Albert Werner (University of Copenhagen);
Cambyse Rouzé (Inria)
Abstract: Hamiltonian learning protocols are quickly establishing themselves as valuable tools to benchmark and verify quantum computers and simulators. However, virtually no rigorous protocols exist to learn time-dependent Hamiltonians and Lindbladians, despite their widespread applications. In this work, we address this gap and show how to learn the time-dependent evolution of a locally interacting $n$-qubit system arranged on a graph $\mathsf{G}$ of effective dimension $D$ by resorting only to the preparation of product Pauli eigenstates, evolution by the time-dependent generator for given times and measurements in product Pauli bases. We assume that the time-dependent parameters are well-approximated by functions in a known space of dimension $m$ and for which we can efficiently perform stable interpolation, say by polynomial functions. Our protocol outputs an expansion in that basis that approximates the parameters up to $\epsilon$ in an interval. The protocol only requires $\widetilde{\cO}\big(\epsilon^{-2}\,\poly{m}\,\log(n\delta^{-1})\big)$ samples and $\poly{n,m}$ preprocessing and postprocessing to learn the parameters with probability of success $1-\delta$, making it highly scalable. Importantly, the scaling in the dimension $m$ is polynomial, whereas naive extensions of previous methods yield a dependency that is exponential in $m$.
Like previous protocols for the time-independent case, ours is mostly based on estimating time derivatives of expectation values of various observables through interpolation techniques. We then obtain well-conditioned linear equations that allow us to evaluate the value of the time-dependent function for a local generator. However, whereas in the time-independent case it sufficed to only consider derivatives at time $t=0$, here we need to evaluate them at finite times while still being able to relate the derivatives to parameters of the evolution. Thus, besides dealing with technical intricacies related to the time-dependent case, our main innovation is to show how to combine Lieb-Robinson bounds, process shadows and semidefinite programs to estimate the parameters of the evolution efficiently at constant times. Along the way, we extend state-of-the-art Lieb-Robinson bounds on general graphs to the time-dependent, dissipative setting, a result of independent interest. In addition, we show how our technique can be used to verify the outputs of time-dependent dynamics for polynomial times from access to short-time dynamics for cases of interest like linear adiabatic schedules.
As such, our protocol is a valuable tool to verify various state preparation procedures on quantum computers and simulators, such as adiabatic preparation, or to characterize time-dependent Markovian noise.
Transversal Dimension Jump for Product qLDPC Codes
Christine Li (Columbia University);
John Preskill (Caltech);
Qian Xu (Caltech)
Abstract: We introduce transversal dimension jump, a code-switching protocol for lifted product (LP) quantum low-density parity-check (qLDPC) codes across different chain-complex dimensions, enabling universal fault-tolerant quantum computation with low overhead. The construction leverages the product structure of LP codes to implement one-way transversal CNOTs between a 3D code and its 2D component codes, enabling teleportation-based switching with geometrically nonlocal gates. Combined with constant-depth CCZ gates in 3D LP codes and low-overhead transversal Clifford gates in 2D LP codes, this yields universal, high-rate quantum logical computation with high thresholds and low space-time costs. Beyond asymptotic schemes, we identify explicit 3D–2D LP code pairs supporting cup-product CCZ gates, including bivariate tricycle–bicycle families such as the [[81, 3, 5]]–[[54, 2, 6]] pair, where the 3D tricycle codes admit depth-2 CCZ, weight-6 stabilizers, and pseudo-thresholds ≳0.4%. As a byproduct, we show that the 3D codes enable highly efficient magic-state preparation: a single round of stabilizer measurements followed by depth-2 CCZ and postselection produces states with error $<10^{-9}$ and success probability ≈35%. Our results establish a native integration of qLDPC codes with complementary transversal gates—covering nearly all practically relevant families known so far—and open a broad design space for scalable, low-overhead universal quantum computation.
Powerful Primitives in the Bounded Quantum Storage Model
Mohammed Barhoush (University of Montreal);
Louis Salvail (University of Montreal)
Abstract: The bounded quantum storage model aims to achieve security against computationally unbounded adversaries that are restricted only with respect to their quantum memories. In this work, we provide information-theoretic secure constructions in this model for the following powerful primitives:
(1) CCA1-secure symmetric key encryption, message authentication codes, and one-time programs. These schemes require no quantum memory for the honest user, while they can be made secure against adversaries with arbitrarily large memories by increasing the transmission length sufficiently.
(2) CCA1-secure asymmetric key encryption, encryption tokens, signatures, signature tokens, and program broadcast. These schemes are secure against adversaries with roughly $e^{\sqrt{m}}$ quantum memory, where $m$ is the quantum memory required for the honest user.
All of the constructions additionally satisfy disappearing security, essentially preventing an adversary from storing and using a transmission later on.
Composable simultaneous purification: when all communication scenarios reduce to spatial correlations
Matilde Baroni (Sorbonne Université, LIP6);
Dominik Leichtle (University of Edinburgh, School of Informatics);
Ivan Šupić (Université Grenoble Alpes);
Damian Markham (Sorbonne Université, LIP6);
Marco Túlio Quintino (Sorbonne Université, LIP6)
Abstract: Bell non-locality is a powerful framework to distinguish classical, quantum, and post-quantum resources, which relies on non-communicating players. Under which restrictions can we obtain the same separations if we allow for communication? Non-signalling state assemblages, and the fact that they can always be simultaneously purified, turned out to be the key element in restricting the simplest bipartite communication scenario, prepare-and-measure, to the standard bipartite Bell scenario. Yet, many distinctive features of quantum theory are genuinely multipartite and cannot be reduced to two-party behaviour.
In this work we are interested in extending this simultaneous-purification-inspired result to all multipartite communication schemes. As a first step, we unify and extend the simultaneous purification result from states to instruments and super-instruments, which are composable structures, opening up the possibility of exploring more complex communication scenarios. Our main contribution is to establish that arbitrary compositions of non-signalling assemblages cannot escape the standard set of spatial quantum Bell correlations.
As a consequence, any interactive quantum realization of correlations outside of this set must involve at least one signalling assemblage of quantum operations, even when the resulting correlations are non-signalling.
Cloning is as Hard as Learning for Stabilizer States
Nikhil Bansal (University of Warwick);
Matthias C. Caro (University of Warwick);
Gaurav Mahajan (Yale University)
Abstract: The impossibility of simultaneously cloning non-orthogonal states lies at the foundations of quantum theory. Even when allowing for approximation errors, cloning an arbitrary unknown pure state requires as many initial copies as needed to fully learn the state. Rather than arbitrary unknown states, modern quantum learning theory often considers structured classes of states and exploits such structure to develop learning algorithms that outperform general-state tomography. This raises the question: How do the sample complexities of learning and cloning relate for such structured classes?
We answer this question for an important class of states. Namely, for $n$-qubit stabilizer states, we show that the optimal sample complexity of cloning is $\Theta(n)$. Thus, also for this structured class of states, cloning is as hard as learning. To prove these results, we use representation-theoretic tools in the recently proposed Abelian State Hidden Subgroup framework and a new structured version of the random purification channel to relate stabilizer state cloning to a variant of the sample amplification problem for probability distributions recently introduced in classical learning theory. This allows us to obtain our cloning lower bounds by proving new sample amplification lower bounds for classes of distributions with an underlying linear structure. Our results provide a more fine-grained perspective on No-Cloning theorems, opening up connections from foundations to quantum learning theory and quantum cryptography.
Positive maps and extendibility hierarchies from copositive matrices
Aabhas Gulati (Institut de Mathématiques, Université de Toulouse);
Ion Nechita (CNRS, Université de Toulouse);
Sang-Jun Park (Wuhan University)
Abstract: The characterization of positive, non-completely positive linear maps is a central problem in operator algebras and quantum information theory, where such maps serve as entanglement witnesses. This work introduces and systematically studies a new convex cone of pairwise copositive matrices, denoted $COPCP_n$. We establish that this cone is dual to the cone of pairwise completely positive matrices and, critically, provides a complete characterization for the positivity of the broad and physically relevant class of covariant maps. We provide a way to systematically lift matrices from the classical cone of copositive matrices, $COP_n$, to the new pairwise cone $COPCP_n$, thereby creating a powerful bridge between the well-studied theory of copositive forms and the structure of positive maps. We develop an analogous framework for decomposable maps, introducing the cone $PDEC_n$ of pairwise decomposable matrices. For several families of linear maps having diagonal unitary symmetry such as generalized Choi maps, we characterize membership in these cones using simple properties of the parameters of the maps.
As a primary application of this framework, we define a novel family of linear maps $\Phi_t^G$ parameterized by a graph $G$ and a real parameter $t$. We derive exact thresholds on $t$ that determine when these maps are positive, decomposable, or completely positive, linking these properties to fundamental graph-theoretic parameters. This construction yields vast new families of positive indecomposable maps, for which we provide explicit examples derived from infinite classes of graphs, most notably rank 3 strongly regular graphs such as Paley graphs.
On the dual side, we investigate the entanglement properties of large classes of symmetric states, such as the (mixture of) Dicke states. We prove that the sum-of-squares (SOS) hierarchies used in polynomial optimization to approximate the cone of copositive matrices correspond precisely to dual cones of witnesses for different levels of the PPT bosonic extendibility hierarchy. In the setting of the DPS hierarchy for separability, we construct a large family of boundary entanglement witnesses that are not certifiable by any level of the PPT bosonic extendibility hierarchy, answering a long standing open question from [DPS04]. Leveraging the duality, we also provide an explicit construction of bipartite (mixture of) Dicke states that are simultaneously entangled and $K_r$-PPT bosonic extendible for any desired hierarchy level $r \geq 2$ and local dimension $n \geq 5$.
Abstract: We present a simple algorithm that implements an arbitrary $n$-qubit unitary operator using a Clifford+T circuit with T-count $O(2^{4n/3} n^{2/3})$.
This improves upon the previous best known upper bound of $O(2^{3n/2} n)$, while the best known lower bound remains $\Omega(2^n)$.
Our construction is based on a recursive application of the cosine-sine decomposition, together with a generalization of the optimal diagonal unitary synthesis method by Gosset, Kothari, and Wu to multi-controlled $k$-qubit unitaries.
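As a back-of-the-envelope illustration (ours, not from the paper), the two stated upper bounds can be compared directly; their ratio simplifies to $2^{n/6} n^{1/3}$, so the new bound improves on the old one for every $n$, and the gap grows exponentially:

```python
def new_t_count(n):
    # Stated upper bound from this work: O(2^{4n/3} n^{2/3}), constants omitted
    return 2 ** (4 * n / 3) * n ** (2 / 3)

def old_t_count(n):
    # Previous best upper bound: O(2^{3n/2} n), constants omitted
    return 2 ** (3 * n / 2) * n

for n in (10, 20, 40):
    # The ratio simplifies to 2^{n/6} * n^{1/3}
    ratio = old_t_count(n) / new_t_count(n)
    print(f"n={n}: improvement factor ~ {ratio:.1f}")
```

Both remain far above the $\Omega(2^n)$ lower bound, so the asymptotic question stays open.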
Limitations of Decoded Quantum Interferometry for MaxCut
Abstract: Decoded Quantum Interferometry (DQI) is a framework for approximating special kinds of discrete optimization problems that relies on problem structure in a way that sets it apart from other classical or quantum approaches. We show that the instances of MaxCut on which DQI attains a nontrivial asymptotic approximation guarantee are solvable exactly in classical polynomial time. We include a streamlined exposition of DQI tailored for MaxCut that relies on elementary graph theory instead of coding theory to motivate and explain the algorithm.
Plugging Leaks in Fault-Tolerant Quantum Computation and Verification
Theodoros Kapourniotis (National Quantum Computing Centre, UK);
Dominik Leichtle (University of Edinburgh, School of Informatics);
Luka Music (Quandela);
Harold Ollivier (ENS, INRIA Paris)
Abstract: With the advent of quantum cloud computing, the security of delegated quantum computation has become of utmost importance. While multiple statistically secure blind verification schemes in the prepare-and-send model have been proposed, none of them achieves full quantum fault-tolerance, a prerequisite for useful verification on scalable quantum computers. In this paper, we present the first fault-tolerant blind verification scheme for universal quantum computations able to handle secret-dependent noise on the verifier's quantum device. Composable security of the proposed protocol is proven in the Abstract Cryptography framework.
Our main tools are two novel distillation protocols that turn secret-dependent noise into secret-independent noise. The first one is run by the verifier and acts on its noisy gates, while the second and more complex one is run entirely on the prover's device and acts on states provided by the verifier. Both are required to overcome the leakage induced by secret-dependent noise. We use these protocols to prepare states in the X-Y-plane whose noise is overwhelmingly secret-independent, which then allows us to verify with exponential confidence arbitrary fault-tolerant BQP computations.
Classically simulating noisy quantum circuits via exponential decay of conditional correlation
Yifan (Frank) Zhang (Princeton University);
Su-un Lee (University of Chicago);
Sarang Gopalakrishnan (Princeton University);
Soumik Ghosh (University of Chicago);
Changhun Oh (Korea Advanced Institute of Science and Technology (KAIST));
Kyungjoo Noh (AWS Center for Quantum Computing);
Bill Fefferman (University of Chicago);
Liang Jiang (University of Chicago)
Abstract: While quantum computing can accomplish tasks that are classically intractable, the presence of
noise may destroy this advantage in the absence of fault tolerance. In this work, we present a quasi-polynomial-time classical algorithm for simulating quantum circuits under local depolarization noise, thereby ruling out their quantum advantage in these settings. Our algorithm leverages a property called the approximate Markov property to sequentially sample from the measurement outcome distribution of noisy circuits. We establish the approximate Markov property in a broad range of circuits: (1) we prove that it holds for any circuit when the noise rate exceeds a constant threshold, and (2) we provide strong analytical and numerical evidence that it holds for random quantum circuits subject to any constant noise rate, including non-unital noise. These regimes include previously known classically simulable cases as well as new ones, such as shallow random circuits and random circuits under non-unital noise, where anticoncentration does not hold and prior algorithms fail. Taken together, our results significantly extend the boundary of classical simulability and suggest that noise generically enforces the approximate Markov property and classical simulability, thereby highlighting the limitations of noisy quantum circuits in demonstrating quantum advantage.
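The sequential-sampling pattern such an algorithm relies on can be caricatured in a few lines: when the outcome distribution is (approximately) Markov, each new bit needs to be conditioned only on a short window of previously sampled bits. A toy sketch under that assumption (the names and the toy conditional distribution are ours, not the paper's):

```python
import random

def sequential_sample(n, window, cond_prob, rng):
    """Sample n bits one at a time, conditioning each new bit only on the
    last `window` bits -- the structure an approximate Markov property makes
    available. cond_prob(i, context) returns Pr[bit i = 1 | recent context]."""
    bits = []
    for i in range(n):
        context = tuple(bits[-window:]) if window > 0 else ()
        p1 = cond_prob(i, context)
        bits.append(1 if rng.random() < p1 else 0)
    return bits

# Toy conditional: bias each bit toward agreeing with the previous one.
def toy_cond(i, context):
    if not context:
        return 0.5
    return 0.9 if context[-1] == 1 else 0.1

sample = sequential_sample(20, 3, toy_cond, random.Random(0))
print(sample)
```

The hard part, of course, is computing the conditional probabilities for an actual noisy circuit; the sampling loop itself is trivial once they are available.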
Complexity of Fermionic 2-SAT
Maarten Stroeks (Delft University of Technology);
Barbara M. Terhal (Delft University of Technology)
Abstract: We introduce the fermionic satisfiability problem, Fermionic k-SAT: this is the problem of deciding whether there is a fermionic state in the null-space of a collection of fermionic, parity-conserving, projectors on n fermionic modes, where each fermionic projector involves at most k fermionic modes. We prove that this problem can be solved efficiently classically for k = 2. In addition, we show that deciding whether there exists a satisfying assignment with a given fixed particle number parity can also be done efficiently classically for Fermionic 2-SAT: this problem is a quantum-fermionic extension of asking whether a classical 2-SAT problem has a solution with a given Hamming weight parity. We also prove that deciding whether there exists a satisfying assignment for particle-number-conserving Fermionic 2-SAT for some given particle number is NP-complete. Complementary to this, we show that Fermionic 9-SAT is QMA_1-hard.
Unclonable Cryptography in Linear Quantum Memory
Omri Shmueli (NTT Research);
Mark Zhandry (Stanford University)
Abstract: Quantum cryptography is a rapidly-developing area which leverages quantum information to accomplish classically-impossible tasks. In many of these protocols, quantum states are used as long-term cryptographic keys. Typically, this is to ensure the keys cannot be copied by an adversary, owing to the quantum no-cloning theorem. Unfortunately, due to quantum states' tendency to decohere, persistent quantum memory will likely be one of the most challenging resources for quantum computers. As such, it will be important to minimize persistent memory in quantum protocols.
In this work, we consider the case of one-shot signatures (OSS), and more general quantum signing tokens. These are important unclonable primitives, where quantum signing keys allow for signing a single message but not two. Naturally, these quantum signing keys would require storage in long-term quantum memory. Very recently, the first OSS was constructed in a classical oracle model and also in the standard model, but we observe that the quantum memory required for these protocols is quite large. In this work, we significantly decrease the quantum secret key size, in some cases achieving asymptotically optimal size. To do so, we develop novel techniques for proving the security of cryptosystems using coset states, which are one of the main tools used in unclonable cryptography.
End-to-end quantum algorithms for tensor problems
Enrico Fontana (JPMorganChase);
Sivaprasad Omanakuttan (JPMorganChase);
Junhyung Lyle Kim (JPMorganChase);
Joseph Sullivan (JPMorganChase);
Michael Perlin (JPMorganChase);
Ruslan Shaydulin (JPMorganChase);
Shouvanik Chakrabarti (JPMorganChase)
Abstract: We present a comprehensive end-to-end quantum algorithm for tensor problems, including tensor PCA and planted kXOR, that achieves potential superquadratic quantum speedups over classical methods. We build upon prior works by Hastings~(\textit{Quantum}, 2020) and Schmidhuber~\textit{et al.}~(\textit{Phys.~Rev.~X.}, 2025), and address key limitations by introducing a native qubit-based encoding for the Kikuchi method, enabling explicit quantum circuit constructions and non-asymptotic resource estimation. Our approach substantially reduces constant overheads through a novel guiding state preparation technique as well as circuit optimizations, reducing the threshold for a quantum advantage. We further extend the algorithmic framework to support recovery in sparse tensor PCA and tensor completion, and generalize detection to asymmetric tensors, demonstrating that the quantum advantage persists in these broader settings. Detailed resource estimates show that 900 logical qubits, $\sim 10^{15}$ gates and $\sim 10^{12}$ gate depth suffice for a problem that classically requires $\sim 10^{23}$ FLOPs. The gate count and depth for the same problem without the improvements presented in this paper would be at least $10^{19}$ and $10^{18}$ respectively. These advances position tensor problems as a candidate for quantum advantage whose resource requirements benefit significantly from algorithmic and compilation improvements; the magnitude of the improvements suggests that further enhancements are possible, which would make the algorithm viable for upcoming fault-tolerant quantum hardware.
Abstract: We construct a publicly-verifiable non-interactive zero-knowledge argument system for QMA with the following properties of interest.
- Transparent setup. Our protocol only requires a uniformly random string (URS) setup. The only prior publicly-verifiable NIZK for QMA (Bartusek and Malavolta, ITCS 2022) requires an *entire obfuscated program* as the common reference string.
- Extractability. Valid QMA witnesses can be extracted directly from our accepting proofs. That is, we obtain a publicly-verifiable non-interactive argument of *quantum knowledge*, which was previously only known in a privately-verifiable setting (Coladangelo, Vidick, and Zhang, CRYPTO 2020).
Our construction introduces a novel type of ZX QMA verifier with "strong completeness" and builds upon the coset state authentication scheme from (Bartusek, Brakerski, and Vaikuntanathan, STOC 2024) within the context of QMA verification. Along the way, we establish new properties of the authentication scheme.
The security of our construction rests on the heuristic use of a post-quantum indistinguishability obfuscator. Rather than rely on the full-fledged classical oracle model (i.e. ideal obfuscation), we isolate a particular game-based property of the obfuscator that suffices for our proof, which we dub the *evasive composability* heuristic.
As an additional contribution, we study a general method for replacing heuristic use of obfuscation with heuristic use of hash functions in the post-quantum setting. In particular, we establish security of the ideal obfuscation scheme of Jain, Lin, Luo, and Wichs (CRYPTO 2023) in the *quantum* pseudorandom oracle model (QPrO), which can be heuristically instantiated with a hash function. This gives us NIZK arguments of quantum knowledge for QMA in the QPrO, and additionally allows us to translate several quantum-cryptographic results that were only known in the classical oracle model to results in the QPrO.
Universal thermodynamic implementation of a process with a variable work cost
Abstract: The minimum amount of thermodynamic work required in order to implement a quantum computation or a quantum state transformation can be quantified using frameworks based on the resource theory of thermodynamics, deeply rooted in the works of Landauer and Bennett. For instance, the work we need to invest in order to implement n independent and identically distributed (i.i.d.) copies of a quantum channel is quantified by the thermodynamic capacity of the channel when we require the implementation's accuracy to be guaranteed in diamond norm over the n-system input. Recent work showed that work extraction can be implemented universally, meaning the same implementation works for a large class of input states, while achieving a variable work cost that is optimal for each individual i.i.d. input state. Here, we revisit some techniques leading to the derivation of the thermodynamic capacity, and leverage them to construct a thermodynamic implementation of n i.i.d. copies of any time-covariant quantum channel, up to some process decoherence that is necessary because the implementation reveals the amount of consumed work. The protocol uses so-called thermal operations and achieves the optimal per-input work cost for any i.i.d. input state; it relies on the conditional erasure protocol in our earlier work, adjusted to yield variable work. We discuss the effect of the work-cost decoherence. While it can significantly corrupt the correlations between the output state and any reference system, we show that for any time-covariant i.i.d. input state, the state on the output system faithfully reproduces that of the desired process to be implemented. As an immediate consequence of our results, we recover recent results for optimal work extraction from i.i.d. states up to the error scaling and implementation specifics, and propose an optimal preparation protocol for time-covariant i.i.d. states.
Dequantization Barriers for Guided Stoquastic Hamiltonians
Shrinidhi Teganahally Sridhara (Université de Bordeaux, CNRS, LaBRI, France);
Yassine Hamoudi (Université de Bordeaux, CNRS, LaBRI, France);
Yvan Le Borgne (Université de Bordeaux, CNRS, LaBRI, France)
Abstract: Stoquastic Hamiltonians form an important class of quantum Hamiltonians, with applications to combinatorial optimization, analog computation, and adiabatic algorithms. The absence of a sign problem makes stoquastic Hamiltonians particularly amenable to classical simulation and dequantization techniques. Many such approaches rely on the availability of a guiding state, that is, a state with non-negligible overlap with the true ground state. This raises a fundamental question: can a suitably chosen guiding state always suffice to dequantize the preparation of stoquastic ground states?
We answer this question in the negative by constructing a family of stoquastic Hamiltonians, represented as adjacency matrices of carefully designed graphs, for which classical algorithms cannot efficiently sample from the ground-state distribution -- even given the optimal guiding state. Our graphs are built from a certain type of high-girth spectral expanders, to which self-similar trees are attached. This builds on and extends prior work of Gilyén, Hastings, and Vazirani [Quantum 2021, STOC 2021], which ruled out dequantization for a specific stoquastic adiabatic path. We strengthen their result by ruling out any classical algorithm for guided ground-state preparation, while also providing a derandomized construction.
Space–Time Efficient Transversal Architectures for Large-Scale Quantum Computation
Abstract: We present a low-overhead architecture that supports the layout and resource estimation of large-scale fault-tolerant quantum algorithms. Utilizing recent advances in fault tolerance with transversal gate operations, this architecture achieves a run time speed-up on the order of the code distance d, which we find directly translates to run time improvements of large-scale quantum algorithms. Our architecture consists of functional building blocks of key algorithmic subroutines, including magic state factories, quantum arithmetic units, and quantum look-up tables. These building blocks are implemented using efficient transversal operations, and we design space-time-efficient versions of them that minimize interaction distance, thereby reducing atom move times and minimizing the volume for correlated decoding. We further propose models to estimate their logical error performance. We perform resource estimation for a large-scale implementation of Shor's factoring algorithm, one of the prototypical benchmarks for large-scale quantum algorithms, on dynamically reconfigurable neutral atom arrays, finding that 2048-bit RSA factoring can be executed with 19 million qubits in 5.6 days, for 1 ms QEC cycle times. This represents close to 50x speed-up of the run-time compared to existing estimates with similar assumptions, with no increase in space footprint, achieving a genuine reduction of the space-time volume required for error-corrected quantum computation, and bringing the runtime of large-scale algorithms on emerging platforms into a practical regime.
Hiding, Shuffling, and Cycle Finding: Quantum Algorithms on Edge Lists
Amin Shiraz Gilani (University of Maryland);
Daochen Wang (University of British Columbia);
Pei Wu (The Pennsylvania State University);
Xingyu Zhou (University of British Columbia)
Abstract: The edge list model is arguably the simplest input model for graphs, where the graph is specified by a list of its edges. In this model, we study the quantum query complexity of three variants of the triangle finding problem. The first asks whether there exists a triangle containing a target edge and raises general questions about the hiding of a problem's input among irrelevant data. The second asks whether there exists a triangle containing a target vertex and raises general questions about the shuffling of a problem's input. The third asks whether there exists a triangle; this problem bridges the $3$-distinctness and $3$-sum problems, which have been extensively studied by both cryptographers and complexity theorists. We provide tight or nearly tight results for these problems as well as some first answers to the general questions they raise.
Furthermore, given any graph with low maximum degree, such as a typical random sparse graph, we prove that the quantum query complexity of finding a length-$k$ cycle in its length-$m$ edge list is $m^{3/4-1/(2^{k+2}-4)\pm o(1)}$, which matches the best-known upper bound for the quantum query complexity of $k$-distinctness on length-$m$ inputs up to an $m^{o(1)}$ factor. We prove the lower bound by developing new techniques within Zhandry's recording query framework [CRYPTO '19] as generalized by Hamoudi and Magniez [ToCT '23]. These techniques extend the framework to treat any non-product distribution that results from conditioning a product distribution on the absence of rare events. We prove the upper bound by adapting Belovs's learning graph algorithm for $k$-distinctness [FOCS '12]. Finally, assuming a plausible conjecture concerning only cycle finding, we show that the lower bound can be lifted to an essentially tight lower bound on the quantum query complexity of $k$-distinctness, which is a long-standing open question.
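As a quick numerical sanity check (ours, not part of the paper), the exponent $3/4 - 1/(2^{k+2}-4)$ in the stated cycle-finding query complexity can be tabulated exactly; it increases toward $3/4$ from below as $k$ grows:

```python
from fractions import Fraction

def cycle_exponent(k):
    # Exponent of m in the stated query complexity m^{3/4 - 1/(2^{k+2} - 4)}
    return Fraction(3, 4) - Fraction(1, 2 ** (k + 2) - 4)

for k in range(3, 8):
    e = cycle_exponent(k)
    print(f"k={k}: exponent {e} = {float(e):.4f}")
```

For triangle finding (k=3) this gives exponent 3/4 - 1/28 = 5/7, and the gap to 3/4 halves (roughly) with each increment of k.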
The Power of Quantum Circuits in Sampling
Guy Blanc (Stanford University);
Caleb Koch (Stanford University);
Jane Lange (MIT);
Carmen Strassle (Stanford University);
Li-Yang Tan (Stanford University)
Abstract: We give new evidence that quantum circuits are substantially more powerful than classical circuits. We show, relative to a random oracle, that polynomial-size quantum circuits can sample distributions that subexponential-size classical circuits cannot approximate even to TV distance $1-o(1)$. Prior work of Aaronson and Arkhipov (2011) showed such a separation for the case of exact sampling (i.e.~TV distance $0$), but separations for approximate sampling were only known for uniform algorithms.
A key ingredient in our proof is a new hardness amplification lemma for the classical query complexity of the Yamakawa--Zhandry (2022) search problem. We show that the probability that any family of query algorithms collectively finds $k$ distinct solutions decays exponentially in $k$.
Topology for qLDPC: transversal non-Clifford gates and magic state fountain on homological product codes with constant rate and beyond the N^{1/3} distance barrier
Abstract: We develop a topological theory for fault-tolerant quantum computation in quantum low-density parity-check (qLDPC) codes. We show that there exist hidden simplicial or CW complex structures encoding the topological data for all qLDPC and CSS codes obtained from product construction by generalizing the Freedman-Hastings code-to-manifold mapping. This is achieved by building manifolds from the Tanner graphs of the skeleton classical or quantum codes, which further form a product manifold and an associated thickened product code defined on its triangulation. One can further deformation retract the manifold back to a CW complex which supports a non-topological code with minimal overhead suitable for near-term implementation. Both types of codes admit cohomology operations including cup product which can induce non-Clifford gates. When applying this mapping to a 3D hypergraph product code obtained from the product of 3 copies of good classical expander codes, we obtain non-Clifford logical CCZ gates via constant depth circuits on a code with constant stabilizer weight $w=O(1)$, constant rate $K=\Theta(N)$, and polynomial distance $D=\Omega(N^{1/3})$. When applied to logical CCZ on 3D homological product codes consisting of the product of a pair of good quantum and classical LDPC codes, we can further improve the distance to $D=\Omega(\sqrt{N})$ exceeding the $N^{1/3}$ distance barrier implied by the Bravyi-König bound for conventional topological codes with the aid of non-Euclidean geometries.
Our work suggests that it is feasible to apply native logical non-Clifford gates on qLDPC codes or directly inject high-fidelity magic states as resources (`magic state fountain') without the distillation process. For the homological product construction, the fountain can inject $\Theta(\sqrt{N})$ magic states in parallel in a single round.
Limitations of Noisy Geometrically Local Quantum Circuits
Jon Nelson (University of Maryland);
Joel Rajakumar (University of Maryland);
Michael J. Gullans (University of Maryland)
Abstract: It has been known for almost 30 years that quantum circuits with interspersed depolarizing noise converge to the uniform distribution at 𝜔(log n) depth, where n is the number of qubits, making them classically simulable. We show that under the realistic constraint of geometric locality, this bound is loose: these circuits become classically simulable at even shallower depths. While prior work in this regime considered quantum circuits with random gates/inputs or circuits with high levels of noise, we consider sampling from any quantum circuit and noise of any constant strength. First, we prove that the output distributions of noisy geometrically local quantum circuits can be approximately sampled from in quasipolynomial time, when their depth exceeds a fixed Θ(log n) critical threshold which depends on the noise strength. This scaling in n matches classical simulability results that were previously only known for noisy random quantum circuits (Aharonov et al., STOC 2023). We further conjecture that our bound is still loose and that a Θ(1)-depth threshold suffices for simulability due to a percolation effect. To support this, we provide analytical evidence together with a candidate efficient algorithm. Our results rely on new information-theoretic properties of the output states of noisy shallow quantum circuits, which may be of broad interest. On a fundamental level, we demonstrate that unitary quantum processes in constant dimensions are more fragile to noise than previously understood.
A robust and composable device-independent protocol for oblivious transfer using (fully) untrusted quantum devices in the bounded storage model
Rishabh Batra (CQT, NUS);
Sayantan Chakraborty (University of Montreal);
Rahul Jain (CQT, NUS);
Upendra Kapshikar (University of Ottawa)
Abstract: We present a robust and composable device-independent (DI) quantum protocol between two parties for oblivious transfer (OT) using Magic Square devices in the bounded storage model [DFR'07, DFSS08] in which the (honest and cheating) devices and parties have no long-term quantum memory. After a fixed constant (real-world) time interval, referred to as DELAY, the quantum states decohere completely. The adversary (cheating party), with full control over the devices, is allowed joint (non-IID) quantum operations on the devices, and there are no time and space complexity bounds placed on its powers.
The running time of the honest parties is polylog(λ) (where λ is the security parameter). Our protocol has a negligible (in λ) security error and can be implemented in the NISQ (Noisy Intermediate Scale Quantum) era. By robustness, we mean that our protocol is correct even when devices are slightly off (by a small constant) from their ideal specification. This is an important property since small manufacturing errors in the real-world devices are inevitable.
Our protocol is sequentially composable and, hence, can be used as a building block to construct larger protocols (including DI bit-commitment and DI secure multi-party computation) while still preserving correctness and security guarantees. None of the known DI protocols for OT in the literature are secure against arbitrary (non-IID) devices and provide simulator-based (composable) security. This was a major open question in device-independent two-party distrustful cryptography, which we resolved.
We prove a parallel repetition theorem for a certain class of entangled games with a hybrid (quantum-classical) strategy. This parallel repetition allows us to show min-entropy guarantees on certain random variables, which helps in proving the security of our protocol. The hybrid strategy helps to incorporate DELAY in our protocol. This parallel repetition theorem is a main technical contribution of our work. Since our games use hybrid strategies and the inputs to our games are not independent, we use a novel combination of ideas from previous works showing parallel repetition of classical games [Raz95, Hol07], quantum games [JPY14, JMS20, JK25], and anchored games [BVY17, JK21].
Although we present the security proof for protocols in the bounded storage model with no long-term quantum memory (after DELAY), we can extend our results, along the lines of [DFR'07], to incorporate linear (in the number of devices) long-term quantum memory.
Geometric optimization for quantum communication
Chengkai Zhu (HKUST(GZ));
Hongyu Mao (CUHK-Shenzhen);
Kun Fang (CUHK-Shenzhen);
Xin Wang (HKUST(GZ))
Abstract: Determining the ultimate limits of quantum communication, such as the quantum capacity of a channel and the distillable entanglement of a shared state, remains a central challenge in quantum information theory, primarily due to the phenomenon of superadditivity. This work develops Riemannian optimization methods to establish significantly tighter, computable two-sided bounds on these fundamental quantities.
For upper bounds, our method systematically searches for state and channel extensions that minimize known information-theoretic bounds. We achieve this by parameterizing the space of all possible extensions as a Stiefel manifold, enabling a universal search that overcomes the limitations of ad-hoc constructions. Combined with an improved upper bound on the one-way distillable entanglement based on a refined continuity bound on quantum conditional entropy, our approach yields new state-of-the-art upper bounds on the quantum capacity of the qubit depolarizing channel for large values of the depolarizing parameter, strictly improving the previously best-known bounds.
For lower bounds, we introduce Riemannian optimization methods to compute multi-shot coherent information. We establish lower bounds on the one-way distillable entanglement by parameterizing quantum instruments on the unitary manifold, and on the quantum capacity by parameterizing code states with a product of unitary manifolds. Numerical results for noisy entangled states and different channels demonstrate that our methods successfully unlock superadditive gains, improving previous results. Together, these findings establish Riemannian optimization as a principled and powerful tool for navigating the complex landscape of quantum communication limits. Furthermore, we prove that amortization does not enhance the channel coherent information, thereby closing a potential avenue for improving capacity lower bounds in general. This result can be of independent interest.
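A basic ingredient of such manifold parameterizations is a retraction that maps an arbitrary full-rank matrix back onto the Stiefel manifold of orthonormal frames after an optimization step. A minimal pure-Python sketch of this step via Gram-Schmidt (our own toy illustration, not the paper's implementation):

```python
def stiefel_retract(cols):
    """Orthonormalize a list of real column vectors (given as lists of
    floats), i.e., a QR-style retraction onto the Stiefel manifold."""
    ortho = []
    for v in cols:
        w = list(v)
        # Subtract projections onto the already-orthonormalized columns.
        for u in ortho:
            proj = sum(wi * ui for wi, ui in zip(w, u))
            w = [wi - proj * ui for wi, ui in zip(w, u)]
        norm = sum(wi * wi for wi in w) ** 0.5
        ortho.append([wi / norm for wi in w])
    return ortho

frame = stiefel_retract([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
# The returned columns are orthonormal: <u_i, u_j> = delta_ij.
```

In practice one would use a numerically stabler QR or polar retraction from a linear-algebra library; the point here is only the geometric operation being parameterized over.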
Classical Obfuscation of Quantum Circuits via Publicly-Verifiable QFHE
Abstract: A classical obfuscator for quantum circuits is a classical program that, given the classical description of a quantum circuit Q, outputs the classical description of a functionally equivalent quantum circuit Q' that hides as much as possible about Q. Previously, the only known feasibility result for classical obfuscation of quantum circuits (Bartusek and Malavolta, ITCS 2022) was limited to "null" security, which is only meaningful for circuits that always reject. On the other hand, if the obfuscator is allowed to compile the quantum circuit Q into a quantum state |Q'>, there exist feasibility results for obfuscating much more expressive classes of circuits: All pseudo-deterministic quantum circuits (Bartusek, Kitagawa, Nishimaki and Yamakawa, STOC 2023, Bartusek, Brakerski and Vaikuntanathan, STOC 2024), and even all unitaries (Huang and Tang, FOCS 2025).
We show that (relative to a classical oracle) there exists a classical obfuscator for all pseudo-deterministic quantum circuits. As our main technical step, we give the first construction of a compact quantum fully-homomorphic encryption (QFHE) scheme that supports public verification of (pseudo-deterministic) quantum evaluation, relative to a classical oracle.
To construct our QFHE scheme, we improve on an approach introduced by Bartusek, Kitagawa, Nishimaki and Yamakawa (STOC 2023), which previously required ciphertexts that are both quantum and non-compact due to its heavy use of quantum coset states and their publicly-verifiable properties. As part of our core technical contribution, we introduce new techniques for analyzing coset states that can be generated "on the fly", by proving new cryptographic properties of the one-shot signature scheme of Shmueli and Zhandry (CRYPTO 2025). Our techniques allow us to produce QFHE ciphertexts that are purely classical, compact, and publicly verifiable. This additionally yields the first classical verification of quantum computation protocol for BQP that simultaneously satisfies blindness and public verifiability.
Measuring gravitational lensing time delays with quantum information processing
Zhenning Liu (University of Maryland, College Park);
William DeRocco (University of Maryland, College Park & The Johns Hopkins University);
Shiming Gu (University of British Columbia);
Emil T. Khabiboulline (NIST & University of Maryland, College Park);
Soonwon Choi (MIT);
Andrew M. Childs (University of Maryland, College Park);
Anson Hook (University of Maryland, College Park);
Alexey V. Gorshkov (NIST & University of Maryland, College Park);
Daniel Gottesman (University of Maryland, College Park)
Abstract: The gravitational fields of astrophysical bodies bend the light around them, creating multiple paths along which light from a distant source can arrive at Earth. Measuring the difference in photon arrival time along these different paths provides a means of determining the mass of the lensing system, which is otherwise difficult to constrain. This is particularly challenging in the case of microlensing, where the images produced by lensing cannot be individually resolved; existing proposals for detecting time delays in microlensed systems are significantly constrained due to the need for large photon flux and the loss of signal coherence when the angular diameter of the light source becomes too large.
In this work, we propose a novel approach to measuring astrophysical time delays. Our method uses exponentially fewer photons than previous schemes, enabling observations that would otherwise be impossible. Our approach, which combines a quantum-inspired algorithm and quantum information processing technologies, saturates a provable lower bound on the number of photons required to find the time delay. Our scheme has multiple applications: we explore its use both in calibrating optical interferometric telescopes and in making direct mass measurements of ongoing microlensing events. To demonstrate the latter, we present a fiducial example of microlensed stellar flare sources in the Galactic Bulge. Though the number of photons produced by such events is small, we show that our photon-efficient scheme opens the possibility of directly measuring microlensing time delays using existing and near-future ground-based telescopes.
On the Complexity of the Circuit Width Problem
Zhengfeng Ji (Tsinghua University);
Yinchen Liu (Tsinghua University);
Zhe'ou Zhou (Tsinghua University)
Abstract: We study the circuit width problem introduced by Montanaro in the polynomial representation of quantum circuits over the gate set $\{H, Z, \mathrm{CZ}, \mathrm{CCZ}\}$. In this framework, a circuit corresponds to a low-degree polynomial over $\mathbb{F}_2$, and the circuit width $w(f)$ is the minimum number of qubits among circuits realizing a given polynomial $f$. This parameter governs the precision with which a quantum computer can approximate the gap of $f$, motivating the complexity of minimizing $w(f)$. We prove that deciding whether $w(f)\le k$ is NP-complete, and that approximating $w(f)$ within any factor better than $49/48-\epsilon$ is NP-hard. This inapproximability persists even for degree-2 polynomials, showing that the hardness is gate-set independent for common quadratic gate sets. On the algorithmic side, we give a nondeterministic polynomial-time search algorithm with witness size $O(k\log(n/k))$, yielding an XP algorithm by enumeration, and a fixed-parameter tractable algorithm running in time $k^{O(k)}\cdot n$. These results resolve Montanaro's open question and place circuit width firmly within classical complexity theory while providing efficient algorithms for small width.
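For context, the "gap" of a polynomial $f$ over $\mathbb{F}_2$ is the standard quantity $\mathrm{gap}(f) = |\{x : f(x)=0\}| - |\{x : f(x)=1\}|$, which the circuit's acceptance amplitude is proportional to. The brute-force sketch below is our own illustration of this definition, not code from the paper:

```python
from itertools import product

def gap(f, n):
    """Gap of a Boolean polynomial f over F_2 on n variables:
    (#inputs with f=0) - (#inputs with f=1).  In the H/Z/CZ/CCZ
    framework, the amplitude <0|C|0> of a circuit realizing f is
    proportional to this quantity."""
    g = 0
    for x in product((0, 1), repeat=n):
        g += 1 if f(*x) % 2 == 0 else -1
    return g

# Example: the degree-2 polynomial f(x1,x2,x3) = x1*x2 + x2*x3 over F_2.
f = lambda x1, x2, x3: x1 * x2 + x2 * x3
print(gap(f, 3))  # → 4
```

Since the amplitude scales with the number of qubits used, a smaller width $w(f)$ lets the quantum computer estimate $\mathrm{gap}(f)$ to higher precision, which is what makes minimizing $w(f)$ meaningful.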
Quantum Speedups for Sampling and Non-convex Optimization with Stochastic Oracles
Guneykan Ozgul (Pennsylvania State University);
Xiantao Li (Pennsylvania State University);
Mehrdad Mahdavi (Pennsylvania State University);
Chunhao Wang (Pennsylvania State University)
Abstract: We present quantum speedups for sampling from probability distributions of the form $\pi \propto e^{-f}$, where $f:\mathbb{R}^d \to \mathbb{R}$. We consider two oracle models: (i) a stochastic gradient oracle, where $f$ is in finite-sum form, i.e., $f(x)=\frac{1}{n}\sum_{i=1}^n f_i(x)$ and individual component gradients are accessible, and (ii) a stochastic zeroth-order oracle, where only noisy evaluations of $f$ are available.
Our main contribution is a general framework for quantumly accelerating classical stochastic sampling algorithms, such as Langevin Monte Carlo (LMC) and Hamiltonian Monte Carlo (HMC), by replacing stochastic gradient computations with variance-controlled quantum mean and gradient estimation subroutines. In contrast to prior quantum sampling approaches based on quantum walks, our methods do not require reversibility or exact gradient access, and preserve the structure of the underlying (possibly nonreversible) Markov chain.
In the stochastic gradient oracle model, we integrate unbiased quantum mean estimation with classical variance-reduction techniques, including stochastic variance-reduced gradients (SVRG) and control variates (CV). By jointly optimizing the target variance of quantum estimators and the frequency of full-gradient recomputation, we obtain provable improvements in gradient query complexity over the best known classical samplers. These results apply both to strongly log-concave and to non-logconcave distributions satisfying a log-Sobolev inequality, with convergence guarantees in Wasserstein distance and Kullback--Leibler divergence.
In the stochastic zeroth-order model, we develop new quantum gradient estimation procedures that are robust to noisy and potentially unbounded function evaluations. These estimators lead to improved evaluation complexity for quantum-accelerated LMC and HMC under standard smoothness assumptions.
Finally, we show that faster quantum sampling yields quantum speedups for optimization, including nonsmooth and approximately convex objectives. This recovers known quantum advantages for finite-sum optimization and establishes new improvements in the zeroth-order stochastic setting.
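The classical baseline being accelerated can be sketched in a few lines. Below is plain stochastic-gradient Langevin Monte Carlo for a toy 1-D finite-sum target; the component functions $f_i$, the seed, and all parameters are invented for illustration, and the minibatch gradient is the step the paper's framework would replace with a variance-controlled quantum mean estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: pi(x) ∝ exp(-f(x)) with f(x) = (1/n) * sum_i f_i(x),
# where f_i(x) = 0.5 * (x - a_i)^2, so pi is a 1-D Gaussian centered at mean(a).
a = rng.normal(size=100)   # invented component centers defining the f_i
n = len(a)

def stoch_grad(x, batch):
    """Unbiased minibatch estimate of grad f(x) = mean_i (x - a_i).
    This is the classical stochastic gradient oracle; the quantum
    speedup replaces this estimator, not the Markov chain itself."""
    return np.mean(x - a[batch])

def lmc(steps=5000, eta=0.1, batch_size=10):
    """Plain stochastic-gradient Langevin Monte Carlo (classical baseline)."""
    x, samples = 0.0, []
    for _ in range(steps):
        batch = rng.integers(0, n, size=batch_size)
        x = x - eta * stoch_grad(x, batch) + np.sqrt(2 * eta) * rng.normal()
        samples.append(x)
    return np.array(samples)

s = lmc()
# After burn-in, the empirical mean should be close to the target mean, mean(a).
print(np.mean(s[1000:]), np.mean(a))
```

Note that the chain structure (the Langevin update) is untouched; only the gradient estimate changes, which is why the approach preserves possibly nonreversible dynamics such as HMC.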
Limits on Quantum Information Processing from Non-Commutative Probability Theory
Ian George (National University of Singapore);
Marco Tomamichel (National University of Singapore)
Abstract: In classical information theory, the maximal correlation and $\chi^2$-contraction coefficient establish limits on distributed and sequential processing. Two distinct quantum maximal correlation coefficients have been proposed, but they do not extend all the classical results. Building on work of Petz, we use the family of non-commutative $L^2(p)$ spaces, which extend the data processing inequality for variance to quantum theory, to carry the classical results over to the quantum setting. We introduce families of quantum maximal correlation coefficients and identify quantum $\chi^2$-divergences as non-commutative generalizations of the variance of the likelihood ratio. We establish a family of maximal correlation coefficients that must all be ordered at the single-copy level for any number of copies of one state to be convertible into a single copy of a target state under local operations. We prove equivalent characterizations of perfect classical correlation extraction via local operations in quantum theory. We clarify the relationship between the maximal correlation and $\chi^2$-contraction coefficients by proving that they are the same operator norm evaluated on distinct maps. We then establish new conditions equivalent to saturation of the data processing inequality for $\chi^2$-divergences, which implies previous saturation results for the $\chi^2$ and sandwiched Rényi divergences. Finally, we show that the quantum maximal correlation and $\chi^2$-contraction coefficients are often efficiently computable, yielding a generic method for efficiently computing mixing times of time-homogeneous quantum Markov chains with a unique full-rank fixed point.
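For context, the classical (Hirschfeld-Gebelein-Rényi) maximal correlation that these quantum coefficients generalize is itself a singular value: the second-largest singular value of the normalized joint-distribution matrix. A minimal sketch of this standard fact (our own illustration, not from the paper):

```python
import numpy as np

def maximal_correlation(P):
    """Classical (Hirschfeld-Gebelein-Rényi) maximal correlation of a joint
    distribution P over X x Y: the second-largest singular value of
    Q[x,y] = P(x,y) / sqrt(P(x) * P(y)).  The largest singular value is
    always 1, attained by the constant functions."""
    px = P.sum(axis=1)
    py = P.sum(axis=0)
    Q = P / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(Q, compute_uv=False)  # sorted in decreasing order
    return s[1]

# Perfectly correlated bits: maximal correlation 1.
P_corr = np.array([[0.5, 0.0], [0.0, 0.5]])
# Independent bits: maximal correlation 0.
P_ind = np.array([[0.25, 0.25], [0.25, 0.25]])
print(maximal_correlation(P_corr), maximal_correlation(P_ind))
```

The quantum coefficients in the paper replace this matrix-of-probabilities picture with operators on non-commutative $L^2(p)$ spaces, which is also what makes them amenable to efficient computation.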
A Family of Information-Theoretic de Finetti Theorems for Constrained Optimization
Mario Berta (RWTH Aachen University);
Omar Fawzi (ENS Lyon);
Gereon Koßmann (RWTH Aachen University);
Martin Plavála (Leibniz University Hannover);
Julius A. Zeiss (RWTH Aachen University)
Abstract: We introduce the graph composition framework, a generalization of the st-connectivity framework
for constructing quantum algorithms. Our framework constructs algorithms that solve a connectivity problem on an undirected graph, where the availability of each edge is computed by a span program. The key novelty of our framework is that the construction allows for amortization of the span programs’ costs, while at the same time avoiding build-up of errors due to composition. We give generic time-efficient implementations of algorithms generated through the graph composition framework in the quantum read-only memory model, which is a weaker assumption than the more common quantum random-access model. Along the way, we also simplify the span program algorithm by converting it to a transducer, and remove the dependence of its analysis on the effective spectral gap lemma.
We use graph composition to unify existing quantum algorithmic frameworks. Surprisingly, we show that any randomized algorithm can be converted into an instance of the st-connectivity framework. Furthermore, we show that the st-connectivity framework subsumes the learning graph framework, and the weighted-decision-tree framework. We show that the graph composition framework subsumes part of the quantum divide-and-conquer framework, and that it is itself subsumed by the multidimensional quantum walk framework. Moreover, we show polynomial relations and separations between the optimal query complexities that can be achieved with several of these frameworks.
Finally, we apply our techniques to give improved algorithms for various string-search problems,
namely the Dyck-language recognition problem of depth 3, the 3-increasing subsequence problem, and the $\mathrm{OR}\circ\mathrm{pSEARCH}$ problem. We also simplify existing quantum algorithms for the space-efficient directed st-connectivity problem, the pattern matching problem, and the $\Sigma^* 2 0^* 2 \Sigma^*$-problem.
Optimizing fermionic Hamiltonians with classical interactions
Maarten Stroeks (Delft University of Technology);
Barbara M. Terhal (Delft University of Technology);
Yaroslav Herasymenko (Perimeter Institute for Theoretical Physics)
Abstract: We consider the optimization problem (ground energy search) for fermionic Hamiltonians with classical interactions. This QMA-hard problem is motivated by the Coulomb electron-electron interaction being diagonal in the position basis, a fundamental fact that underpins electronic-structure Hamiltonians in quantum chemistry and condensed matter. We prove that fermionic Gaussian states achieve an approximation ratio of at least 1/3 for such Hamiltonians, independent of sparsity. This shows that classical interactions are sufficient to prevent the vanishing Gaussian approximation ratio observed in SYK-type models. We also give efficient semi-definite programming algorithms for Gaussian approximations to several families of traceless and positive-semidefinite classically interacting Hamiltonians, with the ability to enforce a fixed particle number. The technical core of our results is the concept of a Gaussian blend, a construction for Gaussian states via mixtures of covariance matrices.
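The covariance-matrix picture behind a Gaussian blend can be illustrated numerically: a fermionic Gaussian state on $n$ modes is described by a real antisymmetric $2n \times 2n$ covariance matrix $\Gamma$ with spectral norm at most 1, and the covariance matrix is linear in the state, so a probabilistic mixture of Gaussian states has the convex combination of their covariance matrices. The sketch below (our own illustration with invented random matrices, not the paper's construction) checks this closure property:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_gaussian_cov(n):
    """A valid fermionic covariance matrix: real antisymmetric Gamma with
    spectral norm at most 1 (pure Gaussian states saturate Gamma Gamma^T = I)."""
    A = rng.normal(size=(2 * n, 2 * n))
    A = A - A.T                         # antisymmetrize
    return A / np.linalg.norm(A, 2)     # rescale so the spectral norm is 1

def blend(covs, weights):
    """Covariance matrix of a probabilistic mixture of Gaussian states:
    since Gamma is linear in the state, it is the convex combination of
    the individual covariance matrices."""
    return sum(w * G for w, G in zip(weights, covs))

G = blend([random_gaussian_cov(3), random_gaussian_cov(3)], [0.4, 0.6])
# The mixture is again antisymmetric with spectral norm at most 1, so it is
# the covariance matrix of a valid (generally mixed) fermionic state.
print(np.allclose(G, -G.T), np.linalg.norm(G, 2) <= 1 + 1e-12)
```

This closure under mixing is what makes optimizing over blends tractable: the set of achievable covariance matrices is convex, which is the structure the paper's semi-definite programming algorithms exploit.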