# The complexity of sampling from a weak quantum computer

**Abstract**: This is an exciting time for quantum computing. Over the next few years, intermediate-scale quantum computers with ~50-100 qubits are expected to become available. These computers are too small for error correction, and they may not capture the full power of a universal quantum computer. A major theoretical challenge is to understand the capabilities and limitations of these devices. To approach this challenge, quantum supremacy experiments have been proposed as a near-term milestone. The objective of quantum supremacy is to find computational problems that are feasible on a small-scale quantum computer but hard to simulate classically. Even though a quantum supremacy experiment may not have practical applications, achieving it would demonstrate that quantum computers can outperform classical ones. Among other proposals, two sampling-based quantum supremacy experiments have attracted particular attention in recent years:

(1) The first is based on sampling from the output of a random circuit applied to a square grid of qubits. The Google Quantum AI group is planning to implement this task on a processor composed of ~50-100 superconducting qubits. To argue that this sampling task is hard, building on previous works of Aaronson, Arkhipov, and others, they conjectured that the output distribution of a low-depth circuit (O(√n) depth, in particular) is anti-concentrated, meaning that it has nearly maximal entropy.

(2) The second proposal, known as Boson Sampling (Aaronson and Arkhipov '10), is based on linear-optical experiments. A baseline conjecture of Boson Sampling is that it is #P-hard to approximate, with high probability, the permanent of a Gaussian matrix whose entries have zero mean and unit variance.
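For reference, the anti-concentration property in proposal (1) is typically formalized as follows; this is a standard formulation, and the constants below are generic placeholders rather than values from the talk.

```latex
% Output distribution of a circuit C on the all-zeros input:
%   p_C(x) = |<x|C|0^n>|^2.
\[
  \Pr_{x \sim \{0,1\}^n}\!\left[\, p_C(x) \ge \frac{\alpha}{2^n} \,\right] \ge \beta
  \quad \text{for some constants } \alpha, \beta > 0,
\]
% Equivalently, the collision probability is within a constant
% factor of its uniform-distribution value:
\[
  \sum_{x \in \{0,1\}^n} p_C(x)^2 = O\!\left(2^{-n}\right),
\]
% so the output distribution has nearly maximal entropy.
```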

In this talk, I will first explain joint work with Aram Harrow (QIP 2019) which proves the anti-concentration conjecture mentioned above. This result also yields efficient ways to generate pseudo-random quantum states (known as t-designs), which have vast applications in quantum communication, algorithms, and cryptography. I will then explain how the permanent of Gaussian matrices can be approximated in quasi-polynomial time with high probability if, instead of zero mean, we consider a Gaussian matrix with nonzero but vanishing mean. To the best of our knowledge, this gives the first example of a natural counting problem that is #P-hard to compute exactly on average and #P-hard to approximate in the worst case, but becomes easy only when approximation and average case are combined. This result is based on joint work with Lior Eldar (FOCS 2018).
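For concreteness, the permanent appearing in these results can be computed exactly, though in exponential time, via Ryser's inclusion-exclusion formula. The sketch below is a minimal illustration of that standard formula applied to a Gaussian matrix as in the Boson Sampling conjecture; it is not the quasi-polynomial approximation algorithm from the talk.

```python
import itertools
import random

def permanent(A):
    """Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2)."""
    n = len(A)
    total = 0.0
    # Sum over all nonempty subsets S of columns:
    #   perm(A) = (-1)^n * sum_S (-1)^|S| * prod_i (sum_{j in S} A[i][j])
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# Sanity checks: perm([[1,2],[3,4]]) = 1*4 + 2*3 = 10; perm(I) = 1.
assert permanent([[1, 2], [3, 4]]) == 10
assert permanent([[1, 0], [0, 1]]) == 1

# A Gaussian matrix with i.i.d. zero-mean, unit-variance entries,
# as in the Boson Sampling hardness conjecture.
random.seed(0)
n = 6
G = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
print(permanent(G))
```

The exponential cost of this exact computation is exactly what makes the average-case hardness conjecture plausible; the result described above shows the problem becomes tractable only when approximation and a vanishing-mean average case are combined.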

Finally, I will introduce some intermediate-scale models of quantum computation based on exactly solvable models in mathematical physics and argue that, even though they are very simple in the sense of solvability, they still cannot be simulated on a classical computer, assuming plausible conjectures. I will also explain how these models can be simulated by another intermediate model known as the one-clean-qubit model. This result is based on joint work with Aaronson, Bouland, and Kuperberg (STOC 2017).

**Series:** Institute for Quantum Information (IQI) Weekly Seminar Series