BosonSampling
Scott Aaronson (MIT), ICMP 2015, Santiago, Chile
Based mostly on joint work with Alex Arkhipov
What This Talk Won’t Have
What It Will Have
[Figure: the standard complexity class inclusion diagram]
P: Efficiently Solvable Problems
NP: Efficiently Checkable Problems
PH: Constant Number of NP Quantifiers
P^#P: P with an Oracle for Counting Problems
Shor’s Theorem: QUANTUM SIMULATION has no efficient classical algorithm, unless FACTORING does also
The Extended Church-Turing Thesis (ECT): Everything feasibly computable in the physical world is feasibly computable by a (probabilistic) Turing machine
So the ECT is false … what more evidence could anyone want?
Building a QC able to factor large numbers is hard! After 20 years, no fundamental obstacle has been found, but who knows?
Can’t we “meet the physicists halfway,” and show computational hardness for quantum systems closer to what they actually work with now?
FACTORING might have a fast classical algorithm! At any rate, it’s an extremely “special” problem
Wouldn’t it be great to show that if quantum computers can be simulated classically, then (say) P=NP?
Our Starting Point
$$\operatorname{Per}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^{n}a_{i,\sigma(i)}\qquad\text{BOSONS}$$
$$\operatorname{Det}(A)=\sum_{\sigma\in S_n}(-1)^{\operatorname{sgn}(\sigma)}\prod_{i=1}^{n}a_{i,\sigma(i)}\qquad\text{FERMIONS}$$
Det: in P. Per: #P-complete [Valiant]. All I can say is, the bosons got the harder job
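Since the permanent reappears throughout the talk, here is a minimal sketch (ours, not from the slides) of computing it numerically via Ryser’s inclusion-exclusion formula; the running time is exponential, as one expects for a #P-complete quantity:

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Ryser's formula: Per(A) = (-1)^n * sum over nonempty column
    subsets S of (-1)^|S| * prod_i (sum of row i restricted to S).
    Runs in O(2^n * n^2) time; works for real or complex A."""
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total
```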
Can We Use Bosons to Calculate the Permanent?
So if n-boson amplitudes correspond to permanents…
That sounds way too good to be true; it would let us solve NP-complete problems and more using QC!
Explanation: Amplitudes aren’t directly observable. To get a reasonable estimate of Per(A), you might need to repeat the experiment exponentially many times
Basic Result: Suppose there were a polynomial-time classical randomized algorithm that took as input a description of a noninteracting-boson experiment, and that output a sample from the correct final distribution over n-boson states. Then P^#P = BPP^NP and the polynomial hierarchy collapses.
Motivation: Compared to (say) Shor’s algorithm, we get “stronger” evidence that a “weaker” system can do interesting quantum computations
Related Work
Valiant 2001, Terhal-DiVincenzo 2002, “folklore”: A QC built of noninteracting fermions can be efficiently simulated by a classical computer
Knill, Laflamme, Milburn 2001: Noninteracting bosons plus adaptive measurements yield universal QC
Jerrum-Sinclair-Vigoda 2001: Fast classical randomized algorithm to approximate Per(A) for nonnegative A
Bremner-Jozsa-Shepherd 2011 (independent of us): Analogous hardness results for simulating “commuting Hamiltonian” quantum computers
The Quantum Optics Model
A rudimentary subset of quantum computing, involving only non-interacting bosons, and not based on qubits
Classical counterpart: Galton’s Board, on display at many science museums
Using only pegs and non-interacting balls, you probably can’t build a universal computer, but you can do some interesting computations, like generating the binomial distribution!
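As a toy illustration (our own sketch, not from the talk), one can simulate a Galton board and watch the binomial distribution emerge:

```python
import numpy as np

# Galton board: each ball hits d pegs, deflecting left (0) or right (1)
# with probability 1/2; its final bin is the number of right-deflections.
rng = np.random.default_rng(0)
d, balls = 10, 100_000
deflections = rng.integers(0, 2, size=(balls, d))
bins = deflections.sum(axis=1)
counts = np.bincount(bins, minlength=d + 1)
print(counts / balls)  # approximates Binomial(d, 1/2): C(d,k) / 2^d
```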
The Quantum Version
Let’s replace the balls by identical single photons, and the pegs by beamsplitters
Then we see strange things like the Hong-Ou-Mandel dip: the two photons are now correlated, even though they never interacted!
Explanation involves destructive interference of amplitudes: the final amplitude of non-collision is
$$\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}=0$$
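In the permanent language introduced below, the same cancellation falls out of a two-line computation; this sketch of ours reuses the `permanent` function defined above:

```python
import numpy as np

# 50/50 beamsplitter unitary.
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Bosons: amplitude for |1,1> -> |1,1> (photons exiting separately)
# is Per(U) = 1/2 - 1/2 = 0: the Hong-Ou-Mandel dip.
print(permanent(U))

# Fermions: the analogous amplitude is Det(U) = -1, so fermions
# always exit separately (Pauli exclusion).
print(np.linalg.det(U))
```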
Getting Formal
The basis states have the form $|S\rangle=|s_1,\ldots,s_m\rangle$, where $s_i$ is the number of photons in the $i$th “mode”
We’ll never create or destroy photons. So $s_1+\cdots+s_m=n$ is constant.
Initial state: $|I\rangle=|1,\ldots,1,0,\ldots,0\rangle$
For us, $m=n^{O(1)}$
You get to apply any $m\times m$ unitary matrix $U$, say, using a collection of 2-mode beamsplitters
In general, there are $M=\binom{m+n-1}{n}$ ways to distribute $n$ identical photons into $m$ modes. $U$ induces an $M\times M$ unitary $\varphi(U)$ on the $n$-photon states as follows:
$$\langle S|\varphi(U)|T\rangle=\frac{\operatorname{Per}(U_{S,T})}{\sqrt{s_1!\cdots s_m!\,t_1!\cdots t_m!}}$$
Here $U_{S,T}$ is an $n\times n$ submatrix of $U$ (possibly with repeated rows and columns), obtained by taking $s_i$ copies of the $i$th row of $U$ and $t_j$ copies of the $j$th column, for all $i,j$
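A direct transcription of this definition (again reusing the `permanent` sketch from above; the function names are ours):

```python
import numpy as np
from math import factorial

def submatrix(U, S, T):
    """U_{S,T}: take s_i copies of row i and t_j copies of column j of U."""
    rows = [i for i, s in enumerate(S) for _ in range(s)]
    cols = [j for j, t in enumerate(T) for _ in range(t)]
    return U[np.ix_(rows, cols)]

def amplitude(U, S, T):
    """<S|phi(U)|T> = Per(U_{S,T}) / sqrt(s_1!...s_m! t_1!...t_m!)."""
    norm = 1.0
    for occ in list(S) + list(T):
        norm *= factorial(occ)
    return permanent(submatrix(U, S, T)) / np.sqrt(norm)
```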
Beautiful Alternate Perspective
The “state” of our computer, at any time, is a degree-$n$ polynomial over the variables $x=(x_1,\ldots,x_m)$ ($n\ll m$)
Initial state: $p(x):=x_1\cdots x_n$
We can apply any $m\times m$ unitary transformation $U$ to $x$, to obtain a new degree-$n$ polynomial
$$p'(x):=p(Ux)=\sum_{S=(s_1,\ldots,s_m)}\alpha_S\,x_1^{s_1}\cdots x_m^{s_m}$$
Then on “measuring,” we see the monomial $x_1^{s_1}\cdots x_m^{s_m}$ with probability $|\alpha_S|^2\,s_1!\cdots s_m!$
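A small symbolic check of this picture for two photons at a 50/50 beamsplitter (our own sketch, using sympy):

```python
import sympy as sp

# n = 2 photons in m = 2 modes: initial state p(x) = x1*x2.
x1, x2 = sp.symbols('x1 x2')
U = sp.Matrix([[1, 1], [1, -1]]) / sp.sqrt(2)
y = U * sp.Matrix([x1, x2])       # substitute x -> Ux

p_new = sp.expand(y[0] * y[1])
print(p_new)                      # x1**2/2 - x2**2/2

# The x1*x2 coefficient vanishes (Hong-Ou-Mandel again), while the
# monomial x1^2 has alpha = 1/2, i.e. probability |1/2|^2 * 2! = 1/2.
```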
OK, so why is it hard to sample the distribution over photon numbers classically?
Given any matrix $A\in\mathbb{C}^{n\times n}$, we can construct an $m\times m$ unitary $U$ (where $m\le 2n$) as follows:
$$U=\begin{pmatrix}\varepsilon A & B\\ C & D\end{pmatrix}$$
Suppose we start with $|I\rangle=|1,\ldots,1,0,\ldots,0\rangle$ (one photon in each of the first $n$ modes), apply $U$, and measure. Then the probability of observing $|I\rangle$ again is
$$p:=|\langle I|\varphi(U)|I\rangle|^2=\varepsilon^{2n}\,|\operatorname{Per}(A)|^2$$
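One concrete way to fill in the missing blocks (a sketch under our own choice of construction, the Halmos unitary dilation, with $\varepsilon$ chosen so that $\|\varepsilon A\|\le 1$; it reuses `permanent` and `amplitude` from above):

```python
import numpy as np
from scipy.linalg import sqrtm

def dilate(A):
    """Halmos dilation: for ||A|| <= 1, embed A as the top-left block of
    the 2n x 2n unitary U = [[A, (I-AA*)^{1/2}], [(I-A*A)^{1/2}, -A*]]."""
    n = A.shape[0]
    I = np.eye(n)
    B = sqrtm(I - A @ A.conj().T)
    C = sqrtm(I - A.conj().T @ A)
    return np.block([[A, B], [C, -A.conj().T]])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
eps = 0.99 / np.linalg.norm(A, 2)     # scale so ||eps * A|| < 1
U = dilate(eps * A)

I3 = [1, 1, 1, 0, 0, 0]               # |I>: one photon in each of first n modes
p = abs(amplitude(U, I3, I3)) ** 2
print(p, eps ** 6 * abs(permanent(A)) ** 2)   # agree: eps^{2n} |Per(A)|^2
```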
Claim 1: $p$ is #P-complete to estimate (up to a constant factor)
Idea: Valiant proved that the PERMANENT is #P-complete. Can use a classical reduction to go from a multiplicative approximation of $|\operatorname{Per}(A)|^2$ to $\operatorname{Per}(A)$ itself.
Claim 2: Suppose we had a fast classical algorithm for boson sampling. Then we could estimate $p$ in BPP^NP
Idea: Let $M$ be our classical sampling algorithm, and let $r$ be its randomness. Use approximate counting to estimate
$$\Pr_r\left[M(r)\text{ outputs }|I\rangle\right]$$
Conclusion: Suppose we had a fast classical algorithm for boson sampling. Then P^#P = BPP^NP.
The Elephant in the Room
The previous result hinged on the difficulty of estimating a single, exponentially-small probability $p$. But what about noise and error?
The “right” question: can a classical computer efficiently sample a distribution with $1/n^{O(1)}$ variation distance from the boson distribution?
Our Main Result: Suppose it can. Then there’s a BPP^NP algorithm to estimate $|\operatorname{Per}(A)|^2$, with high probability over a Gaussian matrix $A\sim\mathcal{N}(0,1)_{\mathbb{C}}^{n\times n}$
Our Main Conjecture
Estimating $|\operatorname{Per}(A)|^2$, with high probability over i.i.d. Gaussian $A$, is a #P-hard problem
What makes the Gaussian ensemble special? Theorem: It arises by considering sufficiently small submatrices of Haar-random unitary matrices.
If this conjecture holds, then even a noisy n-photon experiment could falsify the Extended Church-Turing Thesis, assuming P^#P ≠ BPP^NP!
Much of our work is devoted to giving evidence for this conjecture
“Easier” problem: Just show that, if $A$ is an i.i.d. Gaussian matrix, then $|\operatorname{Per}(A)|^2$ is approximately a lognormal random variable (as numerics suggest), and not so concentrated around 0 as to preclude its being hard to estimate
Can prove this for the determinant in place of the permanent. For the permanent, the best known anti-concentration results [Tao-Vu] are not yet strong enough for us
Can calculate $\mathrm{E}[|\operatorname{Per}(A)|^2]=n!$ and $\mathrm{E}[|\operatorname{Per}(A)|^4]=(n+1)(n!)^2$, but these moments are not strong enough to imply an anti-concentration result
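A quick numerical experiment consistent with these moments (our own sketch; small $n$ only, since the `permanent` routine above is exponential-time):

```python
import numpy as np
from math import factorial

# Sample |Per(A)|^2 for A ~ N(0,1)_C^{n x n} (i.i.d. standard complex
# Gaussians) and compare the empirical mean with E[|Per(A)|^2] = n!.
rng = np.random.default_rng(0)
n, trials = 5, 2000
vals = []
for _ in range(trials):
    A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    vals.append(abs(permanent(A)) ** 2)
vals = np.array(vals)

print(vals.mean(), factorial(n))   # empirical mean vs. n! = 120
print(np.std(np.log(vals)))        # spread of log|Per|^2: lognormal-ish?
```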
BosonSampling Experiments
# of experiments > # of photons!
In 2012, groups in Brisbane, Oxford, Rome, and Vienna reported the first 3-photon BosonSampling experiments, confirming that the amplitudes were given by 3×3 permanents
Goal (in our view): Scale to 10-30 photons. Don’t want to scale much beyond that, both because (1) you probably can’t without fault-tolerance, and (2) a classical computer probably couldn’t even verify the results!
Challenges for Scaling Up:
- Reliable single-photon sources (optical multiplexing?)
- Minimizing losses
- Getting high probability of n-photon coincidence
Scattershot BosonSampling
Exciting recent idea, proposed by Steve Kolthammer and others, for sampling a hard distribution even with highly unreliable (but heralded) photon sources, like SPDCs
The idea: Say you have 100 sources, of which only 10 (on average) generate a photon. Then just detect which sources succeed, and use those to define your BosonSampling instance! The complexity analysis turns out to go through essentially without change
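A toy version of the selection step (the 100-sources/10-photons numbers come from the slide; everything else is our own sketch):

```python
import numpy as np

# 100 heralded sources, each firing with probability ~0.1. The heralds
# tell us exactly which sources produced a photon; those modes then
# define the input state |I> of the BosonSampling instance.
rng = np.random.default_rng(0)
num_sources, p_fire = 100, 0.1
fired = rng.random(num_sources) < p_fire
input_modes = np.flatnonzero(fired)
print(f"{fired.sum()} photons, input modes: {input_modes}")
```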
Using Quantum Optics to Prove that the Permanent is #P-Complete
[A., Proc. Roy. Soc. 2011]
Valiant showed that the permanent is #P-complete—but his proof required strange, custom-made gadgets
We gave a new, arguably more transparent proof by combining three facts:
(1) n-photon amplitudes correspond to $n\times n$ permanents
(2) Postselected quantum optics can simulate universal quantum computation [Knill-Laflamme-Milburn 2001]
(3) Quantum computations can encode #P-complete quantities in their amplitudes
Can BosonSampling Solve Non-Sampling Problems?
(Could it even have cryptographic applications?)
Idea: What if we could “smuggle” a matrix $A$ with huge permanent, as a submatrix of a larger unitary matrix $U$? Finding $A$ could be hard classically, but shooting photons into an interferometer network would easily reveal it
Pessimistic Conjecture: If $U$ is unitary and $|\operatorname{Per}(U)|\ge 1/n^{O(1)}$, then $U$ is “close” to a permuted diagonal matrix, so it “sticks out like a sore thumb”
A.-Nguyen, Israel J. Math 2014: Proof of a weaker version of the pessimistic conjecture, using inverse Littlewood-Offord theory
BosonSampling with Lost Photons
Suppose we have $n+k$ photons in the initial state, but $k$ are randomly lost. Then the probability of each output has the form
$$\sum_{S\subseteq\{1,\ldots,n+k\},\ |S|=n}\left|\operatorname{Per}(A_S)\right|^2,\qquad A\sim\mathcal{N}(0,1)_{\mathbb{C}}^{n\times(n+k)}$$
What can we say about these quantities? Are they also (plausibly) #P-hard to approximate? Work in progress with Daniel Brod
Summary
Intuition suggests that not merely quantum computers, but many natural quantum systems, should be intractable to simulate on classical computers, because of the exponentiality of the wavefunction
BosonSampling provides a clear example of how we can formalize this intuition—or at least, base it on “standard” conjectures in theoretical computer science.
It’s also brought QC theory into closer contact with experiment. And it’s highlighted the remarkable connection between bosons and the matrix permanent. Future progress may depend on solving hard open problems about the permanent
Bonus: Rise and Fall of “Complexity”
But how to quantify? One simpleminded measure: apparent complexity.
The Kolmogorov complexity (estimated, say, by GZIP file size) of a coarse-grained (de-noised) description of our thermodynamic mixing process. Does it rise and then fall?
Sean Carroll’s example:
The Coffee Automaton
A., Carroll, Mohan, Ouellette, Werness 2015: A probabilistic $n\times n$ reversible system that starts half “coffee” and half “cream.” At each time step, we randomly “shear” half the coffee cup horizontally or vertically (assuming a toroidal cup)
We prove that the apparent complexity of this image has a rising-falling pattern, with a maximum of at least $\sim n^{1/6}$
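A simplified toy of this setup (our own illustrative variant, not the paper’s exact automaton), with gzip size of a coarse-grained grid standing in for apparent complexity:

```python
import zlib
import numpy as np

# Toy coffee automaton: an n x n toroidal cup, bottom half coffee (1),
# top half cream (0). Each step shears a random half of the cup.
rng = np.random.default_rng(0)
n = 64
cup = np.zeros((n, n), dtype=np.uint8)
cup[n // 2:, :] = 1

def apparent_complexity(grid, block=4):
    """Gzip size of a coarse-grained (de-noised) description of the grid."""
    m = len(grid) // block
    coarse = grid.reshape(m, block, m, block).mean(axis=(1, 3)) > 0.5
    return len(zlib.compress(coarse.astype(np.uint8).tobytes()))

for t in range(401):
    half = slice(0, n // 2) if rng.integers(2) else slice(n // 2, n)
    sign = 1 if rng.integers(2) else -1
    if rng.integers(2):                          # horizontal shear
        for i in range(half.start, half.stop):
            cup[i, :] = np.roll(cup[i, :], sign * i)
    else:                                        # vertical shear
        for j in range(half.start, half.stop):
            cup[:, j] = np.roll(cup[:, j], sign * j)
    if t % 50 == 0:
        print(t, apparent_complexity(cup))       # trace: rise, then fall?
```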