The Computational Complexity of Linear Optics
Scott Aaronson (MIT)
Joint work with Alex Arkhipov
Mar 26, 2015
In 1994, something big happened in the foundations of computer science, whose meaning
is still debated today…
Why exactly was Shor’s algorithm important?
Boosters: Because it means we’ll build QCs!
Skeptics: Because it means we won’t build QCs!
Me: Even for reasons having nothing to do with building QCs!
Shor’s algorithm was a hardness result for one of the central computational problems
of modern science: QUANTUM SIMULATION
Shor’s Theorem: QUANTUM SIMULATION is not in probabilistic polynomial time, unless FACTORING is also
[Figure: Use of DoE supercomputers by area (from a talk by Alán Aspuru-Guzik)]
Today, a different kind of hardness result for simulating quantum mechanics
Advantages:
Based on more “generic” complexity assumptions than the hardness of FACTORING
Gives evidence that QCs have capabilities outside the polynomial hierarchy
Only involves linear optics (with single-photon Fock state inputs, and nonadaptive multimode photon-detection measurements)
Disadvantages:
Applies to relational problems (problems with many possible outputs) or sampling problems, not decision problems
Harder to convince a skeptic that your QC is indeed solving the relevant hard problem
Less relevant for the NSA
Bestiary of Complexity Classes
[Diagram: “bestiary” of complexity classes P, BPP, BQP, NP, PH, P^#P, with representative problems FACTORING, 3SAT, PERMANENT/COUNTING]
How complexity theorists say “such-and-such is damn unlikely”:
“If such-and-such is true, then PH collapses to a finite level”
Our Results
Suppose the output distribution of any linear-optics circuit can be efficiently sampled classically (e.g., by Monte Carlo). Then the polynomial hierarchy collapses (indeed P^#P = BPP^NP).
Indeed, even if such a distribution can be sampled by a classical computer with an oracle for the polynomial hierarchy, still the polynomial hierarchy collapses.
Suppose two plausible conjectures are true: the permanent of a Gaussian random matrix is (1) #P-hard to approximate, and (2) not too concentrated around 0. Then the output distribution of a linear-optics circuit can’t even be approximately sampled efficiently classically, unless the polynomial hierarchy collapses.
If our conjectures hold, then even a noisy linear-optics experiment can sample from a probability distribution that no classical computer can feasibly sample from
Related Work
Knill, Laflamme, Milburn 2001: Linear optics with adaptive measurements yields universal QC
Valiant 2002, Terhal-DiVincenzo 2002: Noninteracting fermions can be simulated in P
A. 2004: Quantum computers with postselection on unlikely measurement outcomes can solve hard counting problems (PostBQP=PP)
Shepherd, Bremner 2009: “Instantaneous quantum computing” can solve sampling problems that seem hard classically
Bremner, Jozsa, Shepherd 2010: Efficient simulation of instantaneous quantum computing would collapse PH
Particle Physics In One Slide
There are two basic types of particle in the universe…
Their transition amplitudes are given respectively by

BOSONS:    Per(A) = Σ_{σ∈S_n} ∏_{i=1}^n a_{i,σ(i)}

FERMIONS:  Det(A) = Σ_{σ∈S_n} sgn(σ) ∏_{i=1}^n a_{i,σ(i)}

All I can say is, the bosons got the harder job
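As a concrete aside (my own illustration, not from the talk): a minimal Python sketch contrasting the two formulas. The permanent is computed by brute-force summation over all n! permutations, which is only feasible for tiny n; the determinant comes from numpy's polynomial-time routine.

import itertools
import numpy as np

def permanent(A):
    # Per(A) = sum over permutations sigma of prod_i A[i, sigma(i)]  (brute force, tiny n only)
    n = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in itertools.permutations(range(n)))

A = np.random.randn(4, 4)
print("Per(A) =", permanent(A))       # #P-hard in general; no known polynomial-time algorithm
print("Det(A) =", np.linalg.det(A))   # polynomial time via Gaussian elimination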
Linear Optics for Dummies (or computer scientists)
n = # of identical photons, m = # of modes (for us, m > n)
Computational basis states have the form |S⟩ = |s1,…,sm⟩, where s1,…,sm are nonnegative integers such that s1+…+sm = n
Starting from a fixed initial state—say, |I⟩ = |1,…,1,0,…,0⟩—you get to choose any m×m mode-mixing unitary U
U induces a unitary φ(U) on n-photon states, defined by

⟨S|φ(U)|T⟩ = Per(U_{S,T}) / √(s1!⋯sm! · t1!⋯tm!)

Here U_{S,T} is an n×n matrix obtained by taking si copies of the ith row of U and tj copies of the jth column, for all i,j
Then you get to measure φ(U)|I⟩ in the computational basis
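A hypothetical Python sketch (mine, using only the notation above) of the amplitude formula: build U_{S,T} by repeating rows and columns of U according to S and T, then divide Per(U_{S,T}) by √(s1!⋯sm!·t1!⋯tm!). The brute-force permanent restricts it to a handful of photons.

import itertools
import numpy as np
from math import factorial, prod

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in itertools.permutations(range(n)))

def amplitude(U, S, T):
    # <S| phi(U) |T> = Per(U_{S,T}) / sqrt(s1!...sm! * t1!...tm!)
    rows = [i for i, s in enumerate(S) for _ in range(s)]   # s_i copies of the i-th row
    cols = [j for j, t in enumerate(T) for _ in range(t)]   # t_j copies of the j-th column
    U_ST = U[np.ix_(rows, cols)]
    norm = np.sqrt(prod(factorial(s) for s in S) * prod(factorial(t) for t in T))
    return permanent(U_ST) / norm

# Hong-Ou-Mandel check: two photons meeting at a 50/50 beamsplitter never exit in separate modes
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(amplitude(U, (1, 1), (1, 1)))   # ~0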
Upper Bounds on the Power of Linear Optics
Theorem (Feynman 1982, Abrams-Lloyd 1996): Linear-optics computation can be simulated in BQP
Proof Idea: Decompose the m×m unitary U into a product of O(m^2) elementary “linear-optics gates” (beamsplitters and phaseshifters), then simulate each gate using polylog(n) standard qubit gates
Theorem (Gurvits): There exist classical algorithms to approximate ⟨S|φ(U)|T⟩ to additive error ±ε in randomized poly(n,1/ε) time, and to compute the marginal distribution on photon numbers in k modes in n^O(k) time
Theorem (Bartlett-Sanders et al.): If the inputs are Gaussian states and the measurements are homodyne, then linear-optics computation can be simulated in P
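A rough sketch of the flavor of the Gurvits theorem above (my own code, assuming the standard Glynn/Gurvits-type estimator: for uniform x ∈ {±1}^n, the quantity (∏_i x_i)·∏_j(Σ_i x_i a_ij) has expectation exactly Per(A), so averaging samples gives an additive-error estimate). It is an illustration, not the exact algorithm from the talk.

import numpy as np

def permanent_estimate(A, num_samples=100_000, seed=0):
    # Average of prod(x) * prod(A^T x) over uniform x in {+1,-1}^n; the mean is exactly Per(A)
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=n)
        total += np.prod(x) * np.prod(A.T @ x)   # (A.T @ x)[j] = sum_i x_i * A[i, j]
    return total / num_samples

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # Per(A) = 1*4 + 2*3 = 10
print(permanent_estimate(A))             # ~10, up to additive sampling error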
By contrast, exactly sampling the distribution over all n photons is extremely hard! Here’s why …
Given any matrix A∈C^{n×n}, we can construct an m×m mode-mixing unitary U (where m = 2n) as follows:

U = ( A  B )
    ( C  D )

Suppose we start with |I⟩ = |1,…,1,0,…,0⟩ (one photon in each of the first n modes), apply φ(U), and measure.
Then the probability of observing |I⟩ again is

p := |⟨I|φ(U)|I⟩|^2 = |Per(A)|^2

Claim 1: p is #P-complete to estimate (up to a constant factor)
Idea: Valiant proved that the PERMANENT is #P-complete.
Can use a classical reduction to go from a multiplicative approximation of |Per(A)|^2 to Per(A) itself.
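A hedged numerical sketch (mine; the talk only shows the block structure above) of how an arbitrary A can sit inside a unitary: rescale A so its operator norm is at most 1, then complete it with the standard dilation U = [[A, √(I−AA†)], [√(I−A†A), −A†]]. The probability of observing |I⟩ again then involves the permanent of exactly this top-left block.

import numpy as np
from scipy.linalg import sqrtm

def embed_in_unitary(A):
    # Requires operator norm ||A|| <= 1; returns a 2n x 2n unitary with A as its top-left block
    n = A.shape[0]
    I = np.eye(n)
    B = sqrtm(I - A @ A.conj().T)
    C = sqrtm(I - A.conj().T @ A)
    return np.block([[A, B], [C, -A.conj().T]])

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = A / (1.01 * np.linalg.norm(A, 2))          # rescale so ||A|| < 1
U = embed_in_unitary(A)
print(np.allclose(U @ U.conj().T, np.eye(6)))  # True: U is unitary
print(np.allclose(U[:3, :3], A))               # True: A survives as the top-left submatrix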
Claim 2: Suppose we had a fast classical algorithm for linear-optics sampling. Then we could estimate p in BPP^NP
Idea: Let M be our classical sampling algorithm, and let r be its randomness. Use approximate counting to estimate

Pr_r[ M(r) outputs |I⟩ ]

Conclusion: Suppose we had a fast classical algorithm for linear-optics sampling. Then P^#P = BPP^NP.
High-Level Idea
Estimating a sum of exponentially many positive or negative numbers: #P-complete
Estimating a sum of exponentially many nonnegative numbers: Still hard, but known to be in BPP^NP ⊆ PH
If quantum mechanics could be efficiently simulated classically, then these two problems would become equivalent—thereby placing #P in PH, and collapsing PH
Extensions:
- Even simulation of QM in PH would imply P^#P = PH
- Can design a single hard linear-optics circuit for each n
- Can let the inputs be coherent rather than Fock states
So why aren’t we done?
Because real quantum experiments are subject to noise
Would an efficient classical algorithm that sampled from a noisy distribution—one that was only 1/poly(n)-close to the true distribution in variation distance—still collapse the polynomial hierarchy?
Difficulty in showing this: The sampler might adversarially neglect to output the one submatrix whose permanent we care about! So we’ll need to “smuggle” the PERMANENT instance we care about into a random submatrix
Main Result: Yes, assuming two plausible conjectures about random permanents (the “PGC” and the “PCC”)
The Permanent-of-Gaussians Conjecture (PGC)
There exist ε,δ ≥ 1/poly(n) for which the following problem is #P-hard: Given a Gaussian random matrix X drawn from N(0,1)_C^{n×n}, output a complex number z such that

|z − Per(X)| ≤ ε·|Per(X)|

with probability at least 1−δ over X.
We can prove the conjecture if ε=0 or δ=0! What makes it hard is the combination of average-case and approximation
The Permanent Concentration Conjecture (PCC)
For all polynomials q, there exists a polynomial p such that for all n,

Pr_{X ~ N(0,1)_C^{n×n}} [ |Per(X)|^2 < n!/p(n) ] < 1/q(n)
Empirically true!
Also, we can prove it with determinant in place of permanent
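To make the “Empirically true!” remark concrete, here is a small Monte-Carlo sketch of my own (illustrative threshold, brute-force permanent, so n must stay tiny): sample X ~ N(0,1)_C^{n×n} and check how often |Per(X)|^2 falls below n!/p(n) for a modest polynomial such as p(n) = n^2.

import itertools, math
import numpy as np

def permanent(A):
    n = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in itertools.permutations(range(n)))

n, trials = 5, 2000
rng = np.random.default_rng(0)
ratios = np.array([
    abs(permanent((rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2))) ** 2
    / math.factorial(n)
    for _ in range(trials)
])
# PCC-style event: |Per(X)|^2 < n!/p(n) with p(n) = n^2; concentration predicts this is rare
print("fraction below n!/n^2 :", np.mean(ratios < 1.0 / n**2))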
Main Result
Take a system of n identical photons with m=O(n^2) modes. Put each photon in a known mode, then apply a Haar-random m×m unitary transformation U.
Let D be the distribution that results from measuring the photons. Suppose there’s a fast classical algorithm that takes U as input, and samples any distribution even 1/poly(n)-close to D in variation distance. Then assuming the PGC and PCC, BPP^NP = P^#P and hence PH collapses
Idea: Given a Gaussian random matrix A, we’ll “smuggle” A into the unitary transition matrix U on m=O(n^2) modes—in such a way that ⟨S|φ(U)|I⟩ = Per(A), for some basis state |S⟩
Useful lemma we rely on: given a Haar-random m×m unitary matrix, an n×n submatrix looks approximately Gaussian
Then the classical sampler has “no way of knowing” which submatrix of U we care about—so even if it has 1/poly(n) error, with high probability it will return |S⟩ with probability ≈ |Per(A)|^2. Then, just like before, we can use approximate counting to estimate Pr[|S⟩] ≈ |Per(A)|^2 in BPP^NP
Assuming the PCC, the above lets us estimate Per(A) itself in BPP^NP
And assuming the PGC, estimating Per(A) is #P-hard
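A quick numerical illustration (mine) of the “submatrix of a Haar-random unitary looks Gaussian” lemma above: generate Haar-random unitaries by QR of a complex Ginibre matrix with the usual phase fix, take the top-left n×n block, scale by √m, and check that the entries match N(0,1)_C moments.

import numpy as np

def haar_unitary(m, rng):
    # Haar-random m x m unitary: QR of a complex Ginibre matrix, with phases fixed to make it exactly Haar
    Z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))        # rescale each column by the phase of the corresponding R entry

rng = np.random.default_rng(0)
n, m = 4, 400                          # m ~ n^2 or larger, as in the main result
samples = []
for _ in range(100):
    U = haar_unitary(m, rng)
    samples.append(np.sqrt(m) * U[:n, :n].ravel())   # scaled n x n submatrix entries
z = np.concatenate(samples)
print("mean  z    :", np.mean(z))                     # ~0
print("mean |z|^2 :", np.mean(np.abs(z) ** 2))        # ~1, as for N(0,1)_C entries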
Problem: Bosons like to pile on top of each other!
Call a basis state S=(s1,…,sm) good if every si is 0 or 1 (i.e.,
there are no collisions between photons), and bad otherwise
If bad basis states dominated, then our sampling algorithm might “work,” without ever having to solve a hard PERMANENT instance
Furthermore, the “bosonic birthday paradox” is even worse than the classical one!
Pr[ both particles land in the same box ] = 2/3,
rather than ½ as with classical particles
Fortunately, we show that with n bosons and m ≥ kn^2 modes, the probability of a collision is still at most (say) ½
Experimental Prospects
What would it take to implement the requisite experiment?
• Reliable phase-shifters and beamsplitters, to implement an arbitrary unitary on m photon modes
• Reliable single-photon sources
• Photodetector arrays that can reliably distinguish 0 vs. 1 photon
But crucially, no nonlinear optics or postselected measurements!
Our Proposal: Concentrate on (say) n=20 photons and m=400 modes, so that classical simulation is nontrivial but not impossible
Main Open Problems
Prove the Permanent-of-Gaussians Conjecture! (That approximating the permanent of an N(0,1) Gaussian random matrix is #P-complete)
Prove the Permanent Concentration Conjecture! (That Pr[ |Per(X)|^2 < n!/p(n) ] < 1/q(n))
Do our linear-optics experiment!
Are there other (e.g., qubit-based) quantum systems for which approximate classical simulation would collapse PH?
Can our linear-optics model solve classically-intractable decision problems?