Hybrid quantum-classical algorithms and quantum error mitigation

Suguru Endo,1, ∗ Zhenyu Cai,2 Simon C. Benjamin,2 and Xiao Yuan3, †

1 NTT Secure Platform Laboratories, NTT Corporation, Musashino 180-8585, Japan
2 Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH, United Kingdom
3 Stanford Institute for Theoretical Physics, Stanford University, Stanford, California 94305, USA
Quantum computers can exploit a Hilbert space whose dimension increases exponentially with the number of qubits. In experiment, quantum supremacy has recently been achieved by the Google team by using a noisy intermediate-scale quantum (NISQ) device with over 50 qubits. However, the question of what can be implemented on NISQ devices is still not fully explored, and discovering useful tasks for such devices is a topic of considerable interest. Hybrid quantum-classical algorithms are regarded as well-suited for execution on NISQ devices by combining quantum computers with classical computers, and are expected to be the first useful applications for quantum computing. Meanwhile, mitigation of errors on quantum processors is also crucial for obtaining reliable results. In this article, we review the basic results for hybrid quantum-classical algorithms and quantum error mitigation techniques. Since quantum computing with NISQ devices is an actively developing field, we expect this review to be a useful basis for future studies.
CONTENTS
I. Introduction 2

II. Basic variational quantum algorithms 2
   A. Variational quantum eigensolver 3
   B. Real and imaginary time evolution quantum simulator 5

III. Variational quantum optimisation 7
   A. Quantum approximate optimisation algorithm 7
   B. Variational algorithms for machine learning 8
      1. Quantum circuit learning 8
      2. Data-driven quantum circuit learning 9
      3. Quantum generative adversarial networks 9
      4. Quantum autoencoder for quantum data compression 10
      5. Variational quantum state eigensolver 11
   C. Variational algorithm for linear algebra 11
   D. Excited state-search variational algorithms 12
      1. Overlap-based method 12
      2. Quantum subspace expansion 13
      3. Contraction VQE methods 13
      4. Calculation of Green’s function 14
   E. Variational circuit recompilation 14
   F. Variational-state quantum metrology 15
   G. Variational quantum algorithms for quantum error correction 16
      1. Variational circuit compiler for quantum error correction 16
      2. Variational quantum error corrector (QVECTOR) 17
   H. Dissipative-system variational quantum eigensolver 17
∗ [email protected]
† [email protected]
   I. Other applications 18

IV. Variational quantum simulation 18
   A. Variational quantum simulation algorithm for density matrix 18
      1. Variational real time simulation for open quantum system dynamics 18
      2. Variational imaginary time simulation for a density matrix 18
   B. Variational quantum simulation algorithms for general processes 19
      1. Generalised time evolution 19
      2. Matrix multiplication and linear equations 19
      3. Open system dynamics 19
   C. Gibbs state preparation 20
   D. Variational quantum simulation algorithm for Green’s function 20
   E. Other applications 21

V. Quantum error mitigation 21
   A. Extrapolation 21
      1. Richardson extrapolation 21
      2. Exponential extrapolation 22
      3. Methods to boost physical errors 23
      4. Mitigation of algorithmic errors 24
   B. Least square fitting for several noise parameters 24
   C. Quasi-probability method 24
   D. Quantum subspace expansion 26
   E. Symmetry verification 27
   F. Individual error reduction 27
   G. Measurement error mitigation 28
   H. Learning-based quantum error mitigation 28
      1. Quantum error mitigation via Clifford Data Regression 28
      2. Learning-based quasi-probability method 29
   I. Stochastic error mitigation 30
   J. Combination of error mitigation techniques 30
arXiv:2011.01382v1 [quant-ph] 2 Nov 2020
      1. Symmetry verification with error extrapolation 31
      2. Quasi-probability method with error extrapolation 31
      3. Symmetry verification with quasi-probability method 31
      4. Combining quasi-probability, symmetry verification and error extrapolation 32
VI. Conclusion 32
Acknowledgments 32
A. Derivation of Eq. (11) for variational quantum simulation 33

B. Hadamard test and quantum circuits for variational quantum simulation 33
C. SWAP test and Destructive SWAP test 34
D. Methodologies for optimisation 34
   1. Local cost function 34
   2. Hamiltonian morphing optimisation 35
E. Subspace expansion 35
References 36
I. INTRODUCTION
As the size of the Hilbert space of a quantum system increases exponentially with respect to the system size, general quantum systems are, in principle, hard to simulate on a classical computer. For example, systems manipulating tens to hundreds of qubits have been believed to be classically intractable, and they have been proposed for demonstrating quantum advantages over classical supercomputers in the so-called task of ‘quantum supremacy’ [1]. In October 2019, Google announced that they had successfully demonstrated quantum supremacy with a high-fidelity 53-qubit device, named Sycamore [2]. The dimension of the computational state-space is as large as 2^53 ≈ 9.0 × 10^15. To sample one output of a quantum circuit on the 53 qubits, it is estimated that a classical supercomputer would need 10,000 years, whilst Sycamore only took 200 seconds. Although recent efforts have significantly reduced the classical simulation cost [3], classical simulation of a general quantum circuit will certainly become an intractable task as we increase the gate fidelity, the gate depth, or the number of qubits.
While the tasks considered in quantum supremacy are generally mathematically abstract problems, ultimately the field must progress to demonstrate true quantum advantage, i.e., to solve a problem of practical value with superior efficiency using a quantum device. Current quantum hardware only incorporates a small number (tens) of qubits with a non-negligible gate error rate, making it insufficient for implementing conventional quantum algorithms such as Shor’s factoring algorithm [4], the phase estimation algorithm [5], and Hamiltonian simulation algorithms [6]. These generally require one to accurately control millions of qubits when taking account of fault-tolerance [7].
Before realising a universal fault-tolerant quantum computer, a more feasible scenario for current and near-term quantum computing is the so-called noisy intermediate-scale quantum (NISQ) regime [8], where we control tens to thousands of noisy qubits with gate errors that may be on the order of 10^{-3} or lower. Although NISQ computers are not universal, we may exploit them to solve certain computational tasks, such as chemistry simulation, significantly faster than classical computers, via a combination of quantum and classical computers [9–14]. Intuitively, because a large portion of the computational burden is processed on the classical computer, fully coherent deep quantum circuits may not be required. As both quantum and classical computers are used, such simulation methods are called hybrid quantum-classical algorithms. In addition, to compensate for computation errors, quantum error mitigation techniques can be applied via post-processing of the experimental data. Since quantum error mitigation does not necessitate the encoding of qubits that full error correction requires, it contributes to a huge saving of qubits, which is vital for NISQ simulation.
In this review paper, we aim to summarise the most basic ideas of hybrid quantum-classical algorithms and quantum error mitigation techniques. In Sec. II, we introduce the basic algorithms — the variational quantum eigensolver and variational quantum simulation — for finding a ground state or simulating the dynamical evolution of a many-body Hamiltonian. In Sec. III, we show how the variational quantum eigensolver algorithm can be extended to general optimisation problems, including machine learning problems, linear algebra problems, excited energy spectra, etc. [15–22]. Meanwhile, we show in Sec. IV that the variational quantum simulation algorithm may be extended as well to open systems, general processes, thermal states, and calculating Green’s functions. Finally, in Sec. V, we show several error mitigation methods for suppressing errors in NISQ computing. This review does not cover the application of NISQ computers to solving specific physics problems; we refer to McArdle et al. [23] and Cao et al. [24] for reviews of applications in quantum computational chemistry, and to Bauer et al. [25] for quantum materials.
II. BASIC VARIATIONAL QUANTUM ALGORITHMS
Since NISQ devices can only apply a relatively shallow circuit on a limited number of qubits, conventional quantum algorithms may not be implemented on NISQ devices. Here we consider hybrid quantum-classical algorithms tailored to NISQ computing. Because the algorithms generally use parametrised quantum circuits and variationally update the parameters, they are also called variational quantum algorithms (VQAs).
For implementing VQAs [9, 11, 12], we first consider the parametrised trial wave function

|ϕ(~θ)⟩ = U(~θ) |ϕ_ref⟩, (1)

where U(~θ) = U_N(θ_N) · · · U_k(θ_k) · · · U_1(θ_1) generally consists of single- and two-qubit gates, ~θ = (θ_1, θ_2, . . . , θ_N)^T is a vector of independent real parameters, and |ϕ_ref⟩ is the initial state. Typically, when we have an N_q-qubit quantum processor, we can choose |ϕ_ref⟩ = |0⟩^{⊗N_q} or any initial state from a classical computation. Here, we refer to |ϕ(~θ)⟩ as the ansatz state and U(~θ) as the ansatz circuit. A variational quantum algorithm typically works by preparing the trial state, measuring the state, and updating the parameters according to a classical algorithm acting on the measurement results. To circumvent the accumulation of physical errors, we generally assume that the ansatz U(~θ) is implemented with a shallow quantum circuit. The schematic figure of VQAs is shown in Fig. 1.
Although there exist a large number of VQAs, they can generally be classified into two categories: variational quantum optimisation (VQO) and variational quantum simulation (VQS). VQO involves optimising parameters under a cost function. For example, when we minimise the energy of the state, i.e., the expectation value of the given Hamiltonian as a cost function, the cost function after optimisation approximates the ground state energy. The corresponding state also approximates the ground state. This is the so-called variational quantum eigensolver [9, 12], and other VQO algorithms can be similarly designed by properly changing the cost function to other metrics. While variational quantum optimisation aims to optimise a static target cost function, VQS aims to simulate a dynamical process, such as the Schrödinger time evolution of a quantum state [14, 26]. VQS algorithms can also be applied to optimising a static cost function, as in variational imaginary time simulation, or to studying general many-body physics problems. The distinction between variational quantum optimisation and variational quantum simulation is not absolute, and algorithms for problems in one category may be adapted for those in the other. Before showing how specific VQO or VQS algorithms work for specific tasks, in this section we first illustrate the most basic VQO algorithm, a variational quantum eigensolver for finding the ground state energy, and the most basic VQS algorithms for simulating real and imaginary time evolution.

FIG. 1. Schematic of variational quantum algorithms. The ansatz state |ψ(~θ)⟩ is generated via a short-depth parametrised quantum circuit and measured to extract classical data. The measurement results are fed to a classical computer to update the parameters.
A. Variational quantum eigensolver
The variational quantum eigensolver (VQE) is a hybrid quantum-classical algorithm for computing the ground state and the ground state energy of a Hamiltonian H of interest. In the seminal work by Peruzzo et al. [9], the VQE algorithm was theoretically proposed and experimentally demonstrated for finding the ground state energy of the HeH+ molecule using a two-qubit photonic quantum processor. We note that the conventional approach for finding eigenstates of a Hamiltonian with a universal quantum computer is adiabatic state preparation followed by quantum phase estimation (QPE) [27]. We refer to McArdle et al. [23] and Cao et al. [24] for reviews.
The VQE algorithm relies on the Rayleigh-Ritz variational principle, i.e., for any parametrised quantum state |ϕ(~θ)⟩, we have

min_~θ ⟨ϕ(~θ)| H |ϕ(~θ)⟩ ≥ E_G, (2)

where E_G is the ground state energy of the Hamiltonian H and the minimisation is over all parameters ~θ [12]. As we will explain shortly, we can efficiently calculate ⟨ϕ(~θ)| H |ϕ(~θ)⟩ with quantum processors. Therefore, by optimising the parameters ~θ via a classical computer, with E(~θ) = ⟨ϕ(~θ)| H |ϕ(~θ)⟩ as the cost function to be minimised, we can approximate the ground state energy and the ground state.
Now, we explain how to measure E(~θ) = ⟨ϕ(~θ)| H |ϕ(~θ)⟩ for a Hamiltonian H. As the Pauli operators and products of them, {I, X, Y, Z}^{⊗N_q}, form a complete basis for operators, any Hamiltonian can be expanded as

H = ∑_α f_α P_α,  P_α ∈ {I, X, Y, Z}^{⊗N_q}, (3)

where the f_α are real coefficients and {X, Y, Z} are Pauli operators. While an arbitrary N_q-qubit operator may have exponentially many terms in the expansion, Hamiltonians in reality are generally sparse, so that the expansion only has a number of terms polynomial in N_q. For example, the Hamiltonian of the 1D Ising model is

H = h ∑_i Z_i Z_{i+1} + λ ∑_i X_i, (4)
where the number of terms is linear in the number of qubits. This also holds true for other systems. For example, to simulate the electronic structure of molecules, we consider the fermionic Hamiltonian

H_f = ∑_{ij} t_{ij} a†_i a_j + ∑_{ijkl} u_{ijkl} a†_i a†_k a_l a_j, (5)
where a†_j (a_j) denotes the creation (annihilation) operator for a fermion in the j-th orbital, and t_{ij} and u_{ijkl} are the one- and two-electron interactions, which can be efficiently calculated by integrating the basis-set wave functions [28, 29]. The Hamiltonian consists of a polynomial number of terms, each a product of fermionic operators. They can be further mapped to qubit operators via encoding methods such as the Jordan-Wigner, parity, and Bravyi-Kitaev encodings [27, 30, 31]. For example, the Jordan-Wigner transformation is defined as

a†_i → I^{⊗(i−1)} ⊗ σ⁻ ⊗ Z^{⊗(N−i)},
a_i → I^{⊗(i−1)} ⊗ σ⁺ ⊗ Z^{⊗(N−i)}, (6)

where σ± = (X ∓ iY)/2. We can therefore map the fermionic Hamiltonian to the form of Eq. (3) with a polynomial number of Pauli operators.
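As a sanity check, the Jordan-Wigner operators of Eq. (6) can be built as dense matrices for a handful of orbitals and tested against the fermionic anticommutation relations. A minimal NumPy sketch (the function names are ours, not from the paper):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
# sigma^+- = (X -+ iY)/2, as defined below Eq. (6)
s_plus = (X - 1j * Y) / 2
s_minus = (X + 1j * Y) / 2

def kron_all(ops):
    return reduce(np.kron, ops)

def a_dag(i, n):
    """Jordan-Wigner image of a_i^dagger on n orbitals, Eq. (6)."""
    return kron_all([I2] * (i - 1) + [s_minus] + [Z] * (n - i))

def a_(i, n):
    """Jordan-Wigner image of a_i on n orbitals, Eq. (6)."""
    return kron_all([I2] * (i - 1) + [s_plus] + [Z] * (n - i))

# Verify {a_i, a_j^dagger} = delta_ij * I for a small register
n = 3
for i in range(1, n + 1):
    for j in range(1, n + 1):
        anti = a_(i, n) @ a_dag(j, n) + a_dag(j, n) @ a_(i, n)
        expected = np.eye(2**n) if i == j else np.zeros((2**n, 2**n))
        assert np.allclose(anti, expected)
print("Jordan-Wigner operators satisfy {a_i, a_j^dag} = delta_ij I")
```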
By assuming a Pauli decomposition of the Hamiltonian, H = ∑_α f_α P_α, the cost function E(~θ) becomes

E(~θ) = ∑_α f_α ⟨ϕ(~θ)| P_α |ϕ(~θ)⟩, (7)

where each term ⟨ϕ(~θ)| P_α |ϕ(~θ)⟩ is evaluated by calculating the expectation value of the Pauli operator P_α. Note that the measurements of the P_α can be fully parallelised by using many quantum processors.
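As an illustration of Eq. (7), the weighted sum of Pauli expectation values can be evaluated classically for a small statevector; the coefficients below are made up for the example, not taken from any molecule:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy decomposition H = sum_a f_a P_a (illustrative coefficients)
terms = [(0.5, [Z, I2]), (0.5, [I2, Z]), (0.25, [X, X])]

def expectation(state, paulis):
    P = reduce(np.kron, paulis)
    return (state.conj() @ P @ state).real

def energy(state):
    # E = sum_a f_a <psi|P_a|psi>, Eq. (7); on hardware each term would be
    # estimated by repeatedly measuring in the eigenbasis of P_a.
    return sum(f * expectation(state, paulis) for f, paulis in terms)

psi = np.zeros(4, dtype=complex); psi[0] = 1.0       # |00>
H = sum(f * reduce(np.kron, p) for f, p in terms)    # full matrix, for checking
assert np.isclose(energy(psi), (psi.conj() @ H @ psi).real)
print(energy(psi))  # <00|H|00> = 0.5 + 0.5 + 0 = 1.0
```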
After we have obtained E(~θ), we update the parameters by using a classical computer to minimise the cost function. As an example, the gradient descent method updates the parameters as

~θ^(n+1) = ~θ^(n) − a ∇E(~θ^(n)), (8)

where ~θ^(n) and ~θ^(n+1) denote the parameters at the n-th and (n+1)-th steps, respectively, a > 0 is a parameter determining the step size, and ∇E(~θ^(n)) is the gradient of the cost function at ~θ^(n). The gradient descent method deterministically decreases the cost function to a local minimum. The concept of the VQE is illustrated in Fig. 2. In practice, it is important to choose a fast and accurate optimisation method in order to reach the global minimum or a feasible solution in a reasonable time. The optimisation method should also be robust to physical noise and shot noise in the quantum hardware. In addition to gradient-based methods, another type of optimisation method is direct search of the cost function. While the gradient may be more sensitive to physical noise – it typically vanishes exponentially in the number of qubits [32] – direct search is believed to be more robust to physical noise, which may necessitate fewer repetitions [33]. We refer to McArdle et al. [23] for a review of classical optimisation algorithms.
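The update rule of Eq. (8) can be sketched on a one-parameter toy cost function; here the gradient is taken by central finite differences, whereas on hardware one would estimate it from measurements (e.g. with a parameter-shift-type rule):

```python
import numpy as np

def E(theta):
    # Toy one-parameter cost: for |phi(theta)> = Ry(theta)|0> and H = Z,
    # E(theta) = <phi|Z|phi> = cos(theta).
    return np.cos(theta[0])

def grad(cost, theta, eps=1e-6):
    # Central finite-difference estimate of the gradient in Eq. (8)
    g = np.zeros_like(theta)
    for k in range(len(theta)):
        d = np.zeros_like(theta)
        d[k] = eps
        g[k] = (cost(theta + d) - cost(theta - d)) / (2 * eps)
    return g

theta = np.array([0.3])
a = 0.2                                   # step size a in Eq. (8)
for _ in range(200):
    theta = theta - a * grad(E, theta)    # Eq. (8)
print(round(E(theta), 4))  # -1.0, the minimum of cos(theta) at theta = pi
```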
Whether the VQE algorithm works also depends on the choice of the ansatz. To have an efficient quantum simulation algorithm, we need to use a suitable ansatz for the problem. If the ansatz state cannot express the solution, e.g., when the solution is a highly entangled state but the ansatz can only generate weakly entangled states, it cannot find the correct solution. In the literature, several different types of ansätze have been proposed for different purposes. For example, the unitary coupled cluster ansatz is known as a suitable physically inspired ansatz for electronic structure problems in chemistry [9, 34, 35]. However, the unitary coupled cluster ansatz generally necessitates a complicated quantum circuit with gates applied on multiple qubits, where each multi-qubit gate can be decomposed as a sequence of two-qubit gates, and the number of multi-qubit gates is quadratic in the number of qubits and the number of electrons. Since the unitary coupled cluster ansatz involves many general two-qubit gates, it may be hard to implement on noisy quantum devices with short coherence times and limited connectivity. This problem might be circumvented by leveraging so-called “hardware efficient ansätze”. This family of ansätze might be more experimentally feasible [9, 10, 13], because they are constructed from realisable demands on connectivity and gate operations that correspond to real quantum devices. However, a hardware efficient ansatz does not reflect the details of
FIG. 2. Schematic of the variational quantum eigensolver. The expectation values of the Pauli operators P_α are measured for the ansatz state, and the expectation value of the Hamiltonian is computed as a cost function on the classical computer. The cost function is sent to a classical optimiser to update the parameters.
the simulated quantum system, and it has been shown that exponentially vanishing gradients (so-called barren plateaus) are liable to occur for randomly initialised parameters [36]. Several other methods have been proposed to circumvent the vanishing gradient problem [37–40]. We refer to McArdle et al. [23] for a more detailed discussion of ansatz construction.
The disadvantage of the VQE method is that the correctness of the solution relies on the heuristic choice of the ansatz, and the optimisation may be caught in a local rather than the global minimum. Furthermore, the total number of measurements scales as O(ε^{-2}) for reaching a precision ε due to shot noise [9, 12], which is quadratically worse than the conventional QPE algorithm on a universal quantum computer.

The VQE algorithm has been experimentally demonstrated by several groups [9, 10, 41–46]. To date, the hydrogen chain has been simulated with a 12-qubit superconducting system [47].
B. Real and imaginary time evolution quantum simulator
Now we introduce the basic algorithms for variational quantum simulation (VQS), in particular, for simulating real [14] and imaginary [48] time evolution. The real time evolution of a quantum system can be described via the Schrödinger equation as

d|ψ(t)⟩/dt = −iH |ψ(t)⟩, (9)

where H is the Hamiltonian and |ψ(t)⟩ is the time-dependent state. The conventional approach for simulating the evolution is to realise the time evolution e^{−iHt} as a unitary circuit; the state at time t is obtained by applying the unitary to the initial state [6, 49–51]. The circuit depth generally increases polynomially with respect to the evolution time t. Instead, variational quantum simulation algorithms assume that the quantum state |ψ(t)⟩ is represented by an ansatz quantum circuit, |ϕ(~θ(t))⟩ = U(~θ(t)) |ϕ_ref⟩, and the Schrödinger time evolution of the state |ψ(t)⟩ is mapped to the evolution of the parameters ~θ(t).
Different variational principles can be used to obtain different evolution equations for the parameters. The three most conventional variational principles are the Dirac and Frenkel variational principle [52, 53], McLachlan’s variational principle [54], and the time-dependent variational principle [55, 56]. The Dirac and Frenkel variational principle is not suitable for variational quantum simulation because the equation for the parameters may involve complex solutions, which contradicts the requirement that the parameters be real. Although the other two variational principles both give real solutions, it has been shown that the time-dependent variational principle can be more unstable and cannot be applied to the evolution of density matrices or to imaginary time evolution. In contrast, McLachlan’s principle generally produces stable solutions and is also applicable to all the other scenarios beyond real time simulation. We refer the reader to Yuan et al. [26] for a detailed study of the three variational principles.
Now, we focus on McLachlan’s variational principle [54] and show how to derive the evolution of the parameters that effectively simulates the time evolution. McLachlan’s principle requires one to minimise the distance between the ideal evolution and the evolution induced by the parametrised trial state,

δ‖(∂/∂t + iH) |ϕ(~θ(t))⟩‖ = 0, (10)

where ‖|ϕ⟩‖ = √⟨ϕ|ϕ⟩ is the norm of a state |ϕ⟩ and δ is the variation over the derivatives of the parameters θ̇_j. With real parameters ~θ, the solution gives the evolution of the parameters

∑_j M_{k,j} θ̇_j = V_k, (11)

with coefficients

M_{k,j} = Re( (∂⟨ϕ(~θ(t))|/∂θ_k) (∂|ϕ(~θ(t))⟩/∂θ_j) ),
V_k = Im( (∂⟨ϕ(~θ(t))|/∂θ_k) H |ϕ(~θ(t))⟩ ), (12)

where Re(·) and Im(·) denote the real and imaginary parts, respectively. We refer to Appendix A for the derivation.
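Classically, Eq. (11) is just a small linear system for the parameter velocities θ̇. Since the measured M can be singular or ill-conditioned in practice, a least-squares (pseudo-inverse) solve is a common choice. A sketch with illustrative numbers (not from a real circuit):

```python
import numpy as np

# Example values of the M matrix and V vector of Eq. (12)
# (the numbers here are illustrative only)
M = np.array([[1.0, 0.2],
              [0.2, 0.5]])
V = np.array([0.3, -0.1])

# Solve sum_j M_kj * theta_dot_j = V_k, Eq. (11), by least squares;
# lstsq also handles a singular M gracefully via the pseudo-inverse.
theta_dot, *_ = np.linalg.lstsq(M, V, rcond=None)
assert np.allclose(M @ theta_dot, V)
print(theta_dot)
```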
Next, we describe the variational imaginary time simulation algorithm. The normalised Wick-rotated Schrödinger equation [57] is obtained by replacing t in Eq. (9) with τ = it,

d|ψ(τ)⟩/dτ = −(H − ⟨H⟩) |ψ(τ)⟩, (13)

where ⟨H⟩ = ⟨ψ(τ)|H|ψ(τ)⟩ is included to preserve the norm of the state |ψ(τ)⟩. Notably, imaginary time evolution can be leveraged for preparing a Gibbs state and for discovering the ground state of quantum systems [26, 48]. Following the same procedure as for real time evolution, we first apply McLachlan’s principle,

δ‖(∂/∂τ + H − ⟨H⟩) |ϕ(~θ(τ))⟩‖ = 0, (14)

which then gives the evolution of the parameters:

∑_j M_{k,j} θ̇_j = C_k, (15)

with M defined in Eq. (12) and C defined by

C_k = −Re( ⟨ϕ(~θ(τ))| H (∂|ϕ(~θ(τ))⟩/∂θ_k) ) = −(1/2) ∂E(~θ)/∂θ_k, (16)
FIG. 3. Quantum circuits for computing (a) Re(e^{iθ} ⟨ϕ_ref| U†_{k,i} U_{j,q} |ϕ_ref⟩) and (b) Re(e^{iθ} ⟨ϕ_ref| U†_{k,i} σ_j U |ϕ_ref⟩) [14, 26, 48]. (Circuit diagrams not reproduced: in both circuits an ancilla prepared in (|0⟩ + e^{iθ}|1⟩)/√2 controls the insertion of σ_{k,i}, and of σ_{j,q} in (a) or σ_j in (b), between the ansatz gates U_1, . . . , U_N acting on |ϕ_ref⟩, followed by a Hadamard gate and measurement on the ancilla.)
with E(~θ) = ⟨ϕ(~θ(τ))| H |ϕ(~θ(τ))⟩. Note that the C vector is related to the gradient of the energy, implying that variational imaginary time simulation can be regarded as a generalisation of the gradient descent method. It has been numerically found that the variational imaginary time simulation algorithm might be less sensitive to local minima than simple gradient descent methods [48]. In addition, when imaginary time evolution does reach a minimum that is not the ground state, it tends to be an excited eigenstate of the Hamiltonian, which can thus be exploited for finding general eigen-spectra [21]. It has recently been observed that an equivalent formulation of the variational imaginary time algorithm can be obtained by exploiting the quantum natural gradient approach [58–60]. Notice that since the M matrix has to be measured, variational imaginary time evolution needs more measurements than the conventional gradient descent method. However, for increasing system size and simulation time, the number of measurements required for the M matrix is asymptotically negligible [61].
FIG. 4. Schematic of the variational quantum simulation algorithm. The elements of the M matrix and the V (C) vector are measured for the ansatz state. The results are sent to the classical computer, which solves M~̇θ = V (C) to update the parameters; the updated parameters are then fed back to the quantum processor.
Given the current parameters ~θ, we now show how to efficiently measure the M, V, and C terms with quantum circuits. Suppose |ϕ(~θ(t))⟩ = U_N(θ_N) · · · U_k(θ_k) · · · U_1(θ_1) |ϕ_ref⟩, with the derivative of each unitary U_k(θ_k) expressed as

∂U_k(θ_k)/∂θ_k = ∑_i g_{k,i} U_k σ_{k,i}, (17)

where the σ_{k,i} are unitary operators and the g_{k,i} are complex coefficients. For instance, assuming U_k(θ_k) = e^{−iθ_k X}, we have ∂U_k(θ_k)/∂θ_k = −i U_k X and hence g_{k,i} = −i and σ_{k,i} = X. Thus, the derivative of the ansatz state is

∂|ϕ(~θ(t))⟩/∂θ_k = ∑_i g_{k,i} U_{k,i} |ϕ_ref⟩, (18)

where we have defined

U_{k,i} = U_N U_{N−1} · · · U_{k+1} U_k σ_{k,i} U_{k−1} · · · U_2 U_1. (19)

Now each M_{k,j} can be written as

M_{k,j} = ∑_{i,q} Re( g*_{k,i} g_{j,q} ⟨ϕ_ref| U†_{k,i} U_{j,q} |ϕ_ref⟩ ). (20)
Supposing the Hamiltonian is decomposed as H = ∑_α f_α σ_α, with real coefficients f_α and Pauli operators σ_α, we have V_k and C_k as

V_k = −∑_{i,α} Re( i g*_{k,i} f_α ⟨ϕ_ref| U†_{k,i} σ_α U |ϕ_ref⟩ ),
C_k = −∑_{i,α} Re( g*_{k,i} f_α ⟨ϕ_ref| U†_{k,i} σ_α U |ϕ_ref⟩ ). (21)

Note that each term constituting M, V, and C can be written in the form

a Re( e^{iθ} ⟨ϕ_ref| V |ϕ_ref⟩ ), (22)

where a, θ ∈ R depend on the coefficients g_{k,i} and f_α, and V is either U†_{k,i} U_{j,q} or U†_{k,i} σ_α U. We can then calculate M, V, and C with the quantum circuits shown in
Fig. 3. We refer to Appendix B for a detailed explanation of the construction of the quantum circuits. Notice that a comprehensive analysis of the sampling cost for M, V, and C has been given by van Straaten and Koczor [61]. We now summarise the variational algorithm for simulating real (imaginary) time evolution.
Variational quantum simulation algorithm
Input: Hamiltonian H and initial state |ψ(0)⟩.
Output: |ψ(T)⟩ under real (imaginary) time evolution.
Step 1: Determine the ansatz |ϕ(~θ)⟩ and the initial parameters ~θ(0) for the initial state |ψ(0)⟩, and set t = 0.
Step 2: Use a quantum computer to compute the M matrix and the V (C) vector.
Step 3: Use a classical computer to solve Eq. (11) (Eq. (15)) to obtain ~̇θ(t).
Step 4: Set ~θ(t + δt) = ~θ(t) + δt ~̇θ(t) and t = t + δt.
Step 5: Repeat steps 2 to 4 until t = T.
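The steps above can be sketched end-to-end for a single-qubit toy model, with the M and V of Eq. (12) evaluated from the statevector rather than from the circuits of Fig. 3 (the Hamiltonian H = X and the ansatz below are our own choices for illustration):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = X                                    # toy single-qubit Hamiltonian
zero = np.array([1, 0], dtype=complex)   # |0>

def ansatz(theta):
    # |phi(theta)> = exp(-i theta X/2)|0> = cos(theta/2)|0> - i sin(theta/2)|1>
    return (np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X) @ zero

def dansatz(theta, eps=1e-6):
    # numerical derivative of the ansatz state w.r.t. theta
    return (ansatz(theta + eps) - ansatz(theta - eps)) / (2 * eps)

# Euler integration of M theta_dot = V (Steps 2-5 above)
theta, t, dt, T = 0.0, 0.0, 0.001, 1.0
while t < T:
    d = dansatz(theta)
    M = np.real(d.conj() @ d)                    # 1x1 "M matrix", Eq. (12)
    V = np.imag(d.conj() @ H @ ansatz(theta))    # "V vector", Eq. (12)
    theta += dt * (V / M)                        # Steps 3-4
    t += dt

# Compare with the exact evolution e^{-iHT}|0> = cos(T)|0> - i sin(T)|1>
exact = (np.cos(T) * I2 - 1j * np.sin(T) * X) @ zero
print(abs(np.vdot(exact, ansatz(theta))))  # fidelity, close to 1
```

For this ansatz the exact dynamics is reachable (θ(t) = 2t), so the variational trajectory tracks the true evolution up to Euler and finite-difference error.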
The schematic figure is also shown in Fig. 4. We can also simulate time-dependent Hamiltonian evolution by using the time-dependent Hamiltonian at each step. The accuracy of the simulation can be computed at each step from the distance between the evolution of the ansatz state and the ideal evolution [26]. In the case of real time simulation, we have

‖(∂/∂t + iH) |ϕ(~θ(t))⟩‖^2 = ∑_{k,j} M_{k,j} θ̇_k θ̇_j − 2 ∑_k V_k θ̇_k + ⟨H^2⟩, (23)

which is a function of M, V, and ⟨H^2⟩. Similar arguments also hold for imaginary time evolution. Note that a variational real time simulation algorithm has been demonstrated in an experiment using four superconducting qubits [62]. In this experiment, adiabatic quantum computing was simulated and used for discovering eigenstates of an Ising Hamiltonian.
III. VARIATIONAL QUANTUM OPTIMISATION
In this section, we illustrate several examples of variational optimisation algorithms for different problems. The key idea is to construct a Hamiltonian or a cost function such that the solution of the problem corresponds to the ground state or the minimum of the cost function.
A. Quantum approximate optimisation algorithm
The quantum approximate optimisation algorithm (QAOA) [13] was initially proposed for solving classical optimisation problems. The algorithm works by mapping the classical problem to a Hamiltonian H_P such that its ground state corresponds to the solution. Since we try to solve a classical optimisation problem, the Hamiltonian H_P is diagonal in the computational basis, i.e., H_P = ∑_α f_α P_α with P_α ∈ {I, Z}^{⊗N_q}.
As an example, we consider the Boolean satisfiability problem, which aims to find an assignment of Boolean variables such that all given clauses in a propositional formula are true. The j-th Boolean variable, denoted x_j, takes the value either 1 or 0, corresponding to true and false, respectively. The Boolean satisfiability problem consists of x_i ∨ x_j, x_i ∧ x_j, and x̄ operations: x_i ∨ x_j evaluates to 1 when either of x_i or x_j is 1, x_i ∧ x_j is the product of x_i and x_j, and the x̄ operation flips the value of x. An example of a Boolean satisfiability problem is (x_1 ∨ x_2) ∧ (x̄_1 ∨ x_2) ∧ (x̄_1 ∨ x̄_2), and the assignment that makes all the clauses true (with value 1) is x_1 = 0 and x_2 = 1.
The general form of this problem (with clauses involving three or more Booleans) is NP-hard and has a wide range of applications in computer science and cryptography [63]. To map the Boolean satisfiability problem to a Hamiltonian, we first obtain a Hamiltonian whose ground state is the solution of each clause. For example, the Hamiltonian for the first clause (x_1 ∨ x_2) is

H_1 = (1/4)(I − Z_1)(I − Z_2). (24)

After obtaining the Hamiltonian H_k for each clause, the Hamiltonian for all clauses is expressed as H_P = ∑_k H_k.
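This mapping can be sketched for the example formula above by building each clause penalty as a projector onto the single assignment that violates the clause, using the qubit-to-variable convention implied by Eq. (24) (|0⟩ encodes x = 1):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def proj(var_is_zero):
    # (I - Z)/2 projects onto x = 0, (I + Z)/2 onto x = 1,
    # with the convention of Eq. (24) that |0> encodes x = 1.
    return (I2 - Z) / 2 if var_is_zero else (I2 + Z) / 2

def clause_penalty(literals, n):
    # literals: list of (qubit index, negated?). An OR clause is violated
    # only when every literal is false, so penalise that one assignment.
    ops = [I2] * n
    for idx, negated in literals:
        # literal x_i is false when x_i = 0; literal ~x_i is false when x_i = 1
        ops[idx] = proj(var_is_zero=not negated)
    return reduce(np.kron, ops)

# (x1 v x2) ^ (~x1 v x2) ^ (~x1 v ~x2); qubit 0 <-> x1, qubit 1 <-> x2
n = 2
clauses = [[(0, False), (1, False)], [(0, True), (1, False)], [(0, True), (1, True)]]
HP = sum(clause_penalty(c, n) for c in clauses)

diag = np.diag(HP)
ground = int(np.argmin(diag))          # index of the ground (zero-penalty) state
b0, b1 = (ground >> 1) & 1, ground & 1
print("x1 =", 1 - b0, " x2 =", 1 - b1)  # x1 = 0, x2 = 1, as in the text
```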
Since the solution corresponds to the ground state of H_P, we can try to solve the problem by using a quantum algorithm to search for the ground state. The variational approach to solving such problems was first studied by Farhi et al. [13], who introduced the ansatz

U(~θ_1, ~θ_2) = ∏_{k=1}^{D} e^{−iθ_1^{(k)} H_X} e^{−iθ_2^{(k)} H_P}, (25)

with H_X = ∑_{j=1}^{N_q} X_j, D being the number of repetitions of the ansatz quantum circuit, ~θ_1 = (θ_1^{(1)}, θ_1^{(2)}, . . . ), and ~θ_2 = (θ_2^{(1)}, θ_2^{(2)}, . . . ). With the initial state |−, −, . . .⟩, where |−⟩ = (|0⟩ − |1⟩)/√2, we obtain the ansatz state |ϕ(~θ_1, ~θ_2)⟩ = U(~θ_1, ~θ_2) |−, −, . . .⟩. Directly optimising the parameters might be challenging for a fixed Hamiltonian, so Farhi et al. [13] also suggested gradually changing the Hamiltonian in each step of the optimisation as

H(t) = (1 − t/T) H_X + (t/T) H_P, (26)

with H(0) = H_X and H(T) = H_P. Since |−, −, . . .⟩ is the ground state of H(0) = H_X and the solution is the ground state of H(T) = H_P, the optimisation emulates the process of adiabatic state preparation as D → ∞. With sufficiently large D, and adaptive optimisation of the parameters at each optimisation step t, the algorithm may still be able to find the ground state solution.
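The ansatz of Eq. (25) is easy to sketch with statevectors for two qubits; the diagonal H_P below is an illustrative stand-in for a clause Hamiltonian, and the parameters are random rather than optimised:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

Nq, D = 2, 3
HP_diag = np.array([1.0, 1.0, 0.0, 1.0])   # illustrative diagonal H_P (ground state: index 2)

def U_X(theta):
    # e^{-i theta H_X} with H_X = sum_j X_j factorises into single-qubit rotations
    u1 = np.cos(theta) * I2 - 1j * np.sin(theta) * X
    return reduce(np.kron, [u1] * Nq)

def qaoa_state(theta1, theta2):
    # Eq. (25) applied to |-,-,...>, the ground state of H_X
    minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
    psi = reduce(np.kron, [minus] * Nq)
    for k in range(D):
        psi = np.exp(-1j * theta2[k] * HP_diag) * psi   # e^{-i theta_2^(k) H_P}
        psi = U_X(theta1[k]) @ psi                      # e^{-i theta_1^(k) H_X}
    return psi

rng = np.random.default_rng(0)
psi = qaoa_state(rng.uniform(0, np.pi, D), rng.uniform(0, np.pi, D))
cost = np.real(psi.conj() @ (HP_diag * psi))   # <phi|H_P|phi>, to be minimised
print(round(cost, 3))
```

In a full QAOA run, a classical optimiser would tune ~θ_1, ~θ_2 to minimise this cost, concentrating amplitude on the ground state of H_P.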
A recent thorough study of the performance of the QAOA on MaxCut problems can be found in Zhou et al. [64]. Utilising nonadiabatic mechanisms, a heuristic strategy was proposed to learn the parameters exponentially faster than the conventional approach. Meanwhile, the QAOA has been implemented with 40 trapped-ion qubits [65].
B. Variational algorithms for machine learning
Now we introduce the application of variational quantum algorithms to machine learning. In general, machine learning provides a universal approach to learning the pattern of given data and to predicting or reproducing new data. For example, in supervised learning, the given data are described by {(~x_1, y_1), (~x_2, y_2), . . . , (~x_{N_D}, y_{N_D})}, with N_D being the number of data points. We then try to construct a model y = f(~x) so that it predicts the output y_new = f(~x_new) for any new data ~x_new. Supposing the model y = f(~x, ~θ) has parameters ~θ, the training process tunes the parameters to minimise a cost function, such as

C_ML(~θ) = ∑_k |y_k − f(~x_k, ~θ)|^2. (27)
When the cost function C_ML(~θ) is minimised to a small value, it indicates that the given data are well modelled. In practice, there are different ways to choose the model. For example, a linear regression model is

f(~x, ~θ) = ~w · ~x + b, (28)

with real parameters ~w and b, and ~θ = (~w, b). In deep learning, i.e., a multi-layer neural network, the model is described as

f(~x, ~θ) = g_{N_d} ◦ u_{N_d} ◦ · · · ◦ u_2 ◦ g_1 ◦ u_1(~x),
u_k(~x) = W_k ~x + ~b_k, (29)

where N_d is the depth of the neural network, g_k is a nonlinear activation function, and W_k and ~b_k are a parametrised matrix and vector, respectively. To circumvent overfitting, we can add the norm of the parameters to the cost function to restrict the degrees of freedom of the parameters, which is called regularisation.
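For concreteness, the cost of Eq. (27) with the linear model of Eq. (28) can be minimised by plain gradient descent on synthetic data (the target weights below are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))                   # 50 data points, 2 features
y = X @ np.array([1.5, -0.7]) + 0.3            # synthetic targets, w = (1.5, -0.7), b = 0.3

def cost(theta, lam=0.0):
    w, b = theta[:2], theta[2]
    residual = y - (X @ w + b)                 # y_k - f(x_k, theta), Eqs. (27)-(28)
    return np.sum(residual**2) + lam * np.sum(w**2)   # optional L2 regularisation term

# Plain gradient descent on C_ML (here with lam = 0)
theta = np.zeros(3)
for _ in range(500):
    w, b = theta[:2], theta[2]
    r = (X @ w + b) - y
    grad = np.concatenate([2 * X.T @ r, [2 * np.sum(r)]])
    theta -= 0.005 * grad
print(np.round(theta, 2))   # recovers approximately [1.5, -0.7, 0.3]
```

The regularisation term `lam * np.sum(w**2)` plays the role described above: it penalises large parameters to limit overfitting.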
Whether machine learning works or not is highly dependent on the choice of the model. While quantum states can efficiently represent multipartite correlations that admit no efficient classical representation, quantum machine learning protocols have been proposed involving quantum neural networks that consist of variational quantum circuits. Consequently, we can leverage a quantum neural network to dramatically enhance the representability of the model [15, 16]. In addition, as the ansatz circuit is a unitary operator, the norm of the quantum state is necessarily unity, and this constraint may lead to a natural regularisation of the parameters that avoids overfitting [15]. The schematic figures for classical and quantum neural networks are shown in Fig. 5.

FIG. 5. Comparison between (a) classical neural networks and (b) quantum neural networks used for supervised learning. Figure (b) shows the quantum circuit proposed in quantum circuit learning [15].
Note that there are quantum machine learning algorithms [66–69] that are based on universal quantum computing without variational quantum circuits. While these algorithms are proven to have exponential speedups over classical algorithms under certain conditions, they generally require a deep quantum circuit and are not suitable for NISQ devices.
In this section, we illustrate five examples of quantum machine learning algorithms. The first two algorithms are introduced for learning classical data to solve a classical problem; the latter three are introduced for learning quantum data.
1. Quantum circuit learning
The quantum circuit learning (QCL) algorithm implements supervised learning with a variational quantum circuit instead of a classical neural network [15]. In QCL, a state is prepared as |ϕ(~x, ~θ)〉 = U(~θ)U(~x) |ϕ_ref〉, and the output {f(~x_k, ~θ)} is generated by measuring the state in a properly chosen basis. The quantum circuit can naturally introduce nonlinearity of the model f. Suppose the data are described as {x_k, y_k} and the initial state encoded with the information of x is

ρ_in(x) = (1/2^{N_q}) ⊗_{k=1}^{N_q} [I + x X_k + √(1 − x²) Z_k],  (30)
which can be generated by applying the rotational-Y gate R_y(sin⁻¹ x), with x ∈ [−1, 1], to each qubit initialised in |0〉. This state involves higher-order terms of x up to the N_q-th order, where terms such as x√(1 − x²) may enhance the learning process. Note that this argument can be naturally generalised to higher dimensional data. In addition, nonlinearity can be introduced via the measurement process and classical post-processing of the measurement outcomes. As we have discussed above, two potential benefits of QCL are the representability of multipartite correlations and the unitarity of the quantum circuit for circumventing overfitting. Whether QCL can outperform classical neural network based machine learning still needs further study.
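A minimal sketch of the data encoding of Eq. (30): applying R_y(sin⁻¹ x) to each qubit in |0〉 yields 〈X_k〉 = x and 〈Z_k〉 = √(1 − x²). The value of x and the three-qubit size below are our own illustrative choices:

```python
import numpy as np

# QCL data encoding of Eq. (30): R_y(arcsin x) on |0> per qubit.
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

x = 0.6
psi1 = ry(np.arcsin(x)) @ np.array([1.0, 0.0])   # single-qubit encoded state

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
ex = psi1 @ X @ psi1                             # <X> = x
ez = psi1 @ Z @ psi1                             # <Z> = sqrt(1 - x^2)

# Three-qubit product state; its density matrix is exactly Eq. (30).
psi = psi1
for _ in range(2):
    psi = np.kron(psi, psi1)
rho_in = np.outer(psi, psi)                      # rho_in(x), trace 1
```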
2. Data-driven quantum circuit learning
Data-driven quantum circuit learning (DDQCL) implements a generative model by using a variational quantum circuit [16]. We briefly summarise the classical generative model. Suppose the data are described by {~x_1, ~x_2, . . . , ~x_{N_D}}, where N_D is the number of data points, and we assume ~x has a binary representation, i.e., ~x = {x_1, x_2, . . . , x_{N_E}} with x_j ∈ {−1, 1} and N_E being the length of the binary string. We can assume the data are generated from an unknown probability distribution p_D(~x), and the task of a generative model is to construct a model p_M(~x|~θ) to approximate the probability distribution p_D(~x). The joint probability that the data are generated from p_M(~x|~θ) is

L(~θ) = ∏_{n=1}^{N_D} p_M(~x_n|~θ),  (31)
which is the so-called likelihood function. By maximising L(~θ), we can obtain the optimised model probability distribution for the data. Alternatively, we can adopt the cost function

C_DD(~θ) = −(1/N_D) ∑_{n=1}^{N_D} log[max(ε, p_M(~x_n|~θ))],  (32)

where a small value ε > 0 is introduced to avoid singularities of the cost function. For a classical generative model, an example model p_M(~x|~θ) could be generated via a Boltzmann machine on a classical computer.
For DDQCL, the model p_M(~x|~θ) is obtained from a quantum computer, for example, via the projection probability

p_Q(~x|~θ) = |〈~x|ϕ(~θ)〉|²,  (33)

where |ϕ(~θ)〉 is an ansatz state generated by the variational quantum circuit, |~x〉 = |x_1, x_2, . . . , x_{N_E}〉 is the computational basis, and the probability is defined according to the Born rule. The quantum generative model is also referred to as the Born machine [70]. Since quantum states can efficiently represent complex multipartite correlations and the model can be efficiently sampled by measuring the prepared state, DDQCL may be able to represent generative models that are classically challenging. In experiment, DDQCL has been applied to successfully learn Greenberger-Horne-Zeilinger (GHZ) states and coherent thermal states by using a 4-qubit trapped ion device [16]. The schematic figure for sampling p_Q(~x|~θ) is shown in Fig. 6.

FIG. 6. A quantum circuit for sampling p_Q(~x|~θ).
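A minimal sketch of the Born-machine cost of Eqs. (32)-(33): Born-rule probabilities are read off a statevector and scored on a handful of bitstring samples. The random "ansatz" state, the toy data, and the clipping value are all our own choices:

```python
import numpy as np

# DDQCL cost: p_Q(x|theta) from Eq. (33), plugged into Eq. (32).
rng = np.random.default_rng(1)
amps = rng.normal(size=8) + 1j * rng.normal(size=8)
psi = amps / np.linalg.norm(amps)        # stand-in for the ansatz state
p_q = np.abs(psi) ** 2                   # Born-rule model probabilities

data = [0b000, 0b101, 0b101, 0b111]      # toy bitstring samples x_n
eps = 1e-8                               # clipping value epsilon in Eq. (32)
cost = -np.mean([np.log(max(eps, p_q[x])) for x in data])
```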
3. Quantum generative adversarial networks
Quantum generative adversarial networks (QuGANs) [17, 71] are a quantum analogue of generative adversarial networks (GANs). Conventional GANs are composed of three parts: true data, a generator, and a discriminator, as shown in Fig. 7(a). The generator competes with the discriminator: the former tries to produce fake data while the latter tries to determine whether the input data are true or fake. By optimising both the generator and the discriminator, the generator can learn the distribution of the true data, until the discriminator cannot tell the difference between the true data and the fake data from the generator. GANs are generalised to quantum computing by replacing each part with a quantum system [17]. By using a quantum circuit as the generator, we can represent an N_q-dimensional Hilbert space with log N_q qubits and compute sparse and low-rank matrices with O(poly(log N_q)) steps. Suppose the true data are described by an ensemble of quantum states as a density matrix ρ_true; the discriminator then implements a quantum measurement. The schematic figure for quantum GANs is shown in Fig. 7(b).
Suppose the fake density matrix from the generator is produced from a parametrised quantum circuit as ρ_G(~θ_G) with parameters ~θ_G, which aims to learn the true density matrix ρ_true. By using ancillary qubits, we can express density matrices with pure states; we refer to Dallaire-Demers and Killoran [71] for detailed ansatz constructions. The discriminator implements a parametrised positive-operator valued measure {P^t(~θ_D), P^f(~θ_D)}, with parameters ~θ_D and P^t(~θ_D) + P^f(~θ_D) = I, to distinguish between ρ_true and ρ_G(~θ_G).
FIG. 7. Schematic diagrams for (a) classical GANs and (b) quantum GANs. In classical GANs, the generator and the discriminator consist of classical neural networks, while quantum neural networks are used in quantum GANs.
Assuming that ρ_true and ρ_G(~θ_G) are sent to the discriminator randomly with equal probability, the probability that the discriminator fails is

P_fail = (1/2)(Tr[ρ_G(~θ_G)P^t(~θ_D)] + Tr[ρ_true P^f(~θ_D)]) = (1/2)(C_GA(~θ_D, ~θ_G) + 1),  (34)

with C_GA(~θ_D, ~θ_G) = Tr[(ρ_G(~θ_G) − ρ_true)P^t(~θ_D)] being proportional to the trace distance between ρ_G(~θ_G) and ρ_true when optimised over P^t(~θ_D). With fixed parameters ~θ_G for the generator, we optimise the discriminator parameters ~θ_D to minimise the failure probability P_fail, or equivalently C_GA(~θ_D, ~θ_G). With the optimised discriminator, we fix ~θ_D and in turn optimise the generator parameters ~θ_G to maximise the failure probability. By repeating this process, the parameters ~θ_D and ~θ_G arrive at an equilibrium with ρ_G(~θ_G) = ρ_true, indicating a successful learning of the data.
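A minimal sketch of the discriminator step in Eq. (34): for fixed states, the measurement operator P^t (0 ≤ P^t ≤ I) that minimises C_GA projects onto the negative eigenspace of ρ_G − ρ_true, and the minimised C_GA equals minus the trace distance. The two single-qubit density matrices are our own toy choices:

```python
import numpy as np

# Optimal discriminator for C_GA = Tr[(rho_G - rho_true) P^t].
rho_true = np.array([[1.0, 0.0], [0.0, 0.0]])            # |0><0|
rho_g = np.array([[0.5, 0.0], [0.0, 0.5]])               # maximally mixed
delta = rho_g - rho_true

vals, vecs = np.linalg.eigh(delta)
neg = vecs[:, vals < 0]
p_t = neg @ neg.conj().T                                 # optimal P^t
c_ga = np.trace(delta @ p_t).real                        # minimised C_GA
trace_dist = 0.5 * np.sum(np.abs(vals))                  # trace distance
p_fail = 0.5 * (c_ga + 1)                                # failure probability
```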
4. Quantum autoencoder for quantum data compression
The classical autoencoder is used for compressing classical data, as shown in Fig. 8(a). For an input set of training data {~x_1, ~x_2, . . . , ~x_{N_D}}, the autoencoder E first encodes it to {~x′_1, ~x′_2, . . . , ~x′_{N_D}}, with each ~x′_i being a smaller vector than ~x_i. A decoder D is then applied to transform the data to {~x′′_1, ~x′′_2, . . . , ~x′′_{N_D}}, with each ~x′′_i having the same size as ~x_i. The task of an autoencoder is to compress the input data into a smaller size, with the requirement that the input data can be recovered from the compressed data with ∑_k ‖~x′′_k − ~x_k‖² ≤ ε, where ε is a desired accuracy.
The quantum autoencoder implements a similar task for compressing quantum states [18, 72]. Here, we consider the scheme proposed in Romero et al. [18]. Consider the case with input states of n + m qubits being compressed to states of n qubits. Suppose the input states are an ensemble {p_k, |φ_k〉_{AB}} with probabilities p_k and unknown states |φ_k〉_{AB}. Here the subsystems A and B consist of n and m qubits, respectively. With a compression circuit U(~θ) and a post-selection measurement (for example in the computational basis) on the m qubits of system B, as shown in Fig. 8(b), each input state |φ_k〉_{AB} of n + m qubits is mapped to a state ρ^comp_k of n qubits. The inverse process then decodes the compressed state and maps each ρ^comp_k to an output state ρ^out_k. The average fidelity between the inputs and outputs is

C^(1)_AE(~θ) = ∑_k p_k F(|φ_k〉, ρ^out_k(~θ)),  (35)

which serves as a cost function and can be computed via the SWAP test or the destructive SWAP test circuit; refer to Appendix C for details. When C^(1)_AE(~θ) is maximised to a desired accuracy, it indicates a successful compression of the input state ensemble. Meanwhile, successful compression and decoding, i.e., C^(1)_AE = 1, are achieved if and only if

U(~θ) |φ_k〉_{AB} = |φ^comp_k〉_A ⊗ |0̄〉_B, ∀k,  (36)

where |φ^comp_k〉_A corresponds to the compressed state, and |0̄〉_B is some fixed reference state. Thus we can also adopt
FIG. 8. Schematic figures for (a) the classical autoencoder and (b) the quantum autoencoder. The encoding operation E compresses the data, and the decoding operation D decodes the data back to the original dimension.
a simpler cost function

C^(2)_AE(~θ) = ∑_k p_k Tr[U(~θ) |φ_k〉〈φ_k|_{AB} U(~θ)† (I_A ⊗ |0̄〉〈0̄|_B)],  (37)

which can be computed as the probability of projecting onto I_A ⊗ |0̄〉〈0̄|_B.
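A minimal sketch of the cost of Eq. (37) for n = m = 1: it is the probability of finding qubit B in the reference state |0〉 after the compression circuit. For illustration we take U = I and a two-state toy ensemble of our own choosing, one member perfectly compressible and one not:

```python
import numpy as np

# C_AE^(2) as a projection probability onto I_A (x) |0><0|_B.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Ensemble: |+>_A|0>_B (compressible) and |0>_A|1>_B (not), each p_k = 1/2.
states = [np.kron(plus, zero), np.kron(zero, one)]
probs = [0.5, 0.5]

proj = np.kron(np.eye(2), np.outer(zero, zero))   # I_A (x) |0><0|_B
cost = sum(p * (psi.conj() @ proj @ psi).real for p, psi in zip(probs, states))
```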
5. Variational quantum state eigensolver
Analysing the eigenvalues and eigenvectors of the covariance matrix of data is crucial for extracting its important features. Such a process is called principal component analysis (PCA), which has been widely used in data science and machine learning. The covariance matrix of the data could be uploaded onto a quantum computer [73–75]. Here we show how to use the variational quantum state eigensolver (VQSE) algorithm to diagonalise an input density matrix ρ as

ρ = ∑_j λ_j |λ_j〉〈λ_j|, 〈λ_i|λ_j〉 = δ_{i,j}.  (38)

We focus on the VQSE algorithm introduced in Cerezo et al. [38], which only requires a single copy of the state; other schemes requiring two copies of the state can be found in LaRose et al. [37] and Bravo-Prieto et al. [76]. Without loss of generality, we assume the eigenvalues are in descending order, i.e., λ_1 ≥ λ_2 ≥ · · · ≥ λ_f, where f = rank(ρ).
We first apply a parametrised quantum circuit U(~θ) to the input density matrix as ρ(~θ) = U(~θ)ρU†(~θ). Then we define a cost function

C_DI(~θ) = Tr[ρ(~θ)H],  (39)

with H = ∑_j E_j |~x_j〉〈~x_j| being a Hamiltonian that is diagonal in the computational basis {|~x_j〉}, with non-degenerate eigenvalues E_1 < E_2 < · · · < E_{2^{N_q}}. Then we have

C_DI(~θ) = ~E · ~q(~θ),  (40)

with ~E = (E_1, E_2, . . . , E_{2^{N_q}}), ~q(~θ) = (q_1, q_2, . . . , q_{2^{N_q}}), and q_j = Tr[ρ(~θ) |~x_j〉〈~x_j|]. Since ~q(~θ) is obtained from measuring ρ(~θ) in the computational basis, the vector ~q(~θ) can be obtained by applying a doubly stochastic matrix to the eigenvalue vector ~λ. Hence ~λ majorises the measurement probabilities ~q(~θ), i.e., ~λ ≻ ~q(~θ). That is, with ~λ = (λ_1, λ_2, . . . , λ_{2^{N_q}}) and ~q(~θ) = (q_1, q_2, . . . , q_{2^{N_q}}) in descending order, we have ∑_{i=1}^k λ_i ≥ ∑_{i=1}^k q_i, ∀k ∈ {1, 2, . . . , 2^{N_q}}. Here, we set λ_k = 0 for k > f. Since the dot product with the ascending-ordered vector ~E is a Schur-concave function, satisfying f(~a) ≤ f(~b) for all ~b ≺ ~a, we have

C_DI(~θ) = ~E · ~q(~θ) ≥ ~E · ~λ.  (41)

The equality holds when ~q(~θ) = ~λ, i.e., the diagonalisation is achieved.
Therefore, after minimising the cost function C_DI(~θ) to obtain the optimal parameters ~θ_opt, the state is rotated to ρ(~θ_opt) = ∑_j λ_j |~x_j〉〈~x_j| with λ_1 ≥ λ_2 ≥ · · · ≥ λ_f. By measuring ρ(~θ_opt) in the computational basis, we thus obtain |~x_j〉 with probability determined by the eigenvalue λ_j, and the corresponding eigenvector of ρ is U†(~θ_opt)|~x_j〉.
Meanwhile, a time-dependent cost function which combines local and global cost functions is used to make the best of their benefits [38]; see Appendix D for an explanation of local and global cost functions. One can also find applications of VQSE in quantum error mitigation [14, 77, 78] and entanglement spectroscopy [79, 80].
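A numerical check of the majorisation bound of Eq. (41): for any unitary rotation, C_DI = ~E · ~q is lower-bounded by ~E · ~λ, with equality only when ρ(~θ) is diagonal in the computational basis. The random two-qubit density matrix, unitary, and energies below are our own choices:

```python
import numpy as np

# Verify E.q >= E.lambda for a random rotation of a random density matrix.
rng = np.random.default_rng(2)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = a @ a.conj().T
rho /= np.trace(rho).real                        # random density matrix
lam = np.sort(np.linalg.eigvalsh(rho))[::-1]     # eigenvalues, descending

energies = np.array([0.0, 1.0, 2.0, 3.0])        # non-degenerate E_1 < ... < E_4
u, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
q = np.diag(u @ rho @ u.conj().T).real           # measurement probabilities

c_di = energies @ q                              # cost of Eq. (40)
bound = energies @ lam                           # lower bound of Eq. (41)
```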
C. Variational algorithm for linear algebra
Variational quantum algorithms can be used for matrix-vector multiplication and for solving linear systems of equations. The task of matrix-vector multiplication is to obtain |v_M〉 = M|v_0〉/‖M|v_0〉‖, where M is a sparse matrix, |v_0〉 is a given state vector, and ‖|ψ〉‖ = √(〈ψ|ψ〉). Meanwhile, the linear-systems task is to solve the linear equation M|v_{M⁻¹}〉 = |v_0〉, i.e., to obtain |v_{M⁻¹}〉 = M⁻¹|v_0〉. There exist several variational quantum algorithms for implementing these tasks [19, 20, 81, 82]. Herein, we illustrate the methods introduced in Bravo-Prieto et al. [19] and Xu et al. [20].
We first consider matrix-vector multiplication as proposed by Xu et al. [20], where the solution |v_M〉 corresponds to the ground state of the Hamiltonian

H_M = I − M|v_0〉〈v_0|M†/‖M|v_0〉‖².  (42)
When M is a sparse matrix and the circuit for preparing |v_0〉 is known, the expectation value E_M = 〈ϕ(~θ)|H_M|ϕ(~θ)〉 can be efficiently evaluated by using the Hadamard test or the SWAP test circuit. Thus, by minimising the expectation value E_M, we can find the ground state |v_M〉. Because the ground state has a known energy of 0, we can further verify whether the discovered state is the ground state or not, unlike conventional VQE where the ground state energy is generally unknown. The matrix-vector multiplication algorithm can be applied for implementing Hamiltonian simulation. Suppose we want to simulate a time evolution operator U = exp(−iHt) via Trotterisation, U ≈ ∏ exp(−iHδt). By setting M = 1 − iHδt, we can approximate each exp(−iHδt) and hence the whole evolution as U = M^{N_S} + O(t²/N_S), where N_S = t/δt. Therefore, by sequentially applying matrix-vector multiplication, we can simulate the time evolution of quantum systems.
Here we also consider the algorithms for solving linear equations, independently proposed by Xu et al. [20] and Bravo-Prieto et al. [76]. The solution is mapped to the
ground state of the Hamiltonian

H_{M⁻¹} = M†(I − |v_0〉〈v_0|)M,  (43)

which also has the smallest eigenvalue 0. This Hamiltonian was first proposed by Subaşı et al. [83], who applied adiabatic algorithms to find the ground state with a universal quantum computer. With the variational method, solving for the ground state is similar to the matrix-vector multiplication case. Furthermore, Xu et al. [20] used Hamiltonian morphing optimisation for avoiding local minima, and Bravo-Prieto et al. [76] used a local cost function for circumventing the barren plateau issue and ran a simulation with up to 50 qubits. Refer to Appendix D for these optimisation methods.
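A minimal numerical sketch of Eq. (42): the normalised product M|v_0〉/‖M|v_0〉‖ is the unique zero-energy ground state of H_M, so a variational minimiser that reaches energy 0 has found the solution. The matrix M and vector |v_0〉 below are arbitrary toy examples:

```python
import numpy as np

# Build H_M of Eq. (42) and check its ground state equals M|v0>/||M|v0>||.
m = np.array([[1.0, 2.0], [0.0, 1.0]])
v0 = np.array([1.0, 1.0]) / np.sqrt(2)

mv = m @ v0
v_m = mv / np.linalg.norm(mv)                    # target state |v_M>
h_m = np.eye(2) - np.outer(mv, mv.conj()) / (np.linalg.norm(mv) ** 2)

vals, vecs = np.linalg.eigh(h_m)
ground = vecs[:, np.argmin(vals)]                # exact minimiser (stand-in for VQE)
overlap = abs(ground.conj() @ v_m)               # = 1 up to a global phase
```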
D. Excited state-search variational algorithms
Next we show how to find excited states and excited energy spectra of a Hamiltonian. Calculating excited energy spectra is important for studying many-body quantum physics problems. For example, it can be used to study chemical reaction dynamics, which is important for creating new drugs and new methodologies for mass production of beneficial materials [29, 84]. In addition, evaluation of excited states enables us to calculate photodissociation rates and absorption bands, which are essential for designing solar cells and investigating their dynamics [85, 86]. There exist several VQAs for evaluating excited states and excited energy spectra [21, 22, 87–94]. These algorithms can be used as subroutines for other applications, e.g., Green's functions [95, 96], non-adiabatic couplings and Berry's phase [97], and simulating real time evolution [98]. In this section, we review three VQAs for calculating excited state energies: the overlap-based method, the subspace expansion method, and the contraction VQE method. We also review the application to calculating Green's function.
1. Overlap-based method
The overlap-based method first uses the VQE to find the ground state and then sequentially obtains excited states by penalising the previously obtained eigenstates [21, 22]. Suppose that the ground state |G̃〉 of the given Hamiltonian H is obtained from either the conventional VQE or variational imaginary time simulation. Now, suppose we replace the Hamiltonian H with a new Hamiltonian

H′ = H + α |G̃〉〈G̃|.  (44)

Here, α is a positive number chosen to be sufficiently large compared to the energy gap between the ground state and the first excited state of the Hamiltonian. Then the first excited state |E_1〉 of the original Hamiltonian H becomes the ground state of the new Hamiltonian H′. Therefore, with the new Hamiltonian H′, we can obtain the first excited state |Ẽ_1〉 of H with the VQE or variational imaginary time simulation on H′.
To realise the VQE or imaginary time evolution of H′, we need to measure the energy E′(~θ) = 〈ϕ(~θ)|H′|ϕ(~θ)〉 of the trial state |ϕ(~θ)〉 as

E′(~θ) = 〈ϕ(~θ)|H|ϕ(~θ)〉 + α 〈ϕ(~θ)|G̃〉〈G̃|ϕ(~θ)〉.  (45)
The first term can be computed in the same way as in the conventional VQE method. The second term is the overlap between |ϕ(~θ)〉 and |G̃〉, which can be evaluated with the SWAP test circuit or the destructive SWAP test circuit [99, 100]. These quantum circuits necessitate two copies of the state; note that the destructive SWAP test circuit only requires a shallow-depth circuit. We leave a detailed explanation of the (destructive) SWAP test circuit to Appendix C. Alternatively, we can compute the overlap term without using two copies of the state, at the price of a doubled depth of the quantum circuit [22]. Denoting |G̃〉 = U(~θ_G)|ϕ_ref〉, the overlap term can be written as

〈ϕ(~θ)|G̃〉〈G̃|ϕ(~θ)〉 = |〈ϕ_ref|U†(~θ_G)U(~θ)|ϕ_ref〉|².  (46)

This can be evaluated by applying U(~θ) and U†(~θ_G) to the reference state |ϕ_ref〉, and measuring the probability of observing |ϕ_ref〉.
After finding the first excited state of H, we can further find the second excited state by replacing the Hamiltonian H with

H′′ = H + α(|G̃〉〈G̃| + |Ẽ_1〉〈Ẽ_1|).  (47)

Then the second excited state of H becomes the ground state of H′′, which can be similarly solved via the VQE or variational imaginary time simulation. It is not hard to see that this procedure can be repeated to sequentially discover other low energy eigenstates.
In the overlap-based method, an error occurs when the discovered state |G̃〉 is not the exact ground state, for example when it is a superposition of the ground state and excited states. Therefore, when the VQE method gets trapped in a local minimum, the overlap-based method may fail to work. Note that this problem is severe: if the ground state is calculated incorrectly, all the subsequently calculated excited states are incorrect. Interestingly, it was numerically observed that this problem is less severe for variational imaginary time evolution [21]. This is because when variational imaginary time evolution fails to find the ground state, it may instead converge to an eigenstate of the Hamiltonian, based on the definition of imaginary time evolution [21, 48]. By penalising the discovered excited state, we can still construct a new Hamiltonian to find another low energy state.
2. Quantum subspace expansion
The quantum subspace expansion solves a generalised eigenvalue problem for the given Hamiltonian in an expanded subspace around an approximated ground state; the obtained eigenstates and eigenenergies correspond to those of the Hamiltonian [87]. Note that the obtained spectrum is error-mitigated because the excited states are approximated as linear combinations of states in the expanded subspace. Refer to Sec. V D for a detailed explanation of the error mitigation effect of the quantum subspace expansion.
Let |G̃〉 be an approximation of the true ground state obtained from either the VQE or variational imaginary time simulation. Suppose we approximate an eigenstate of the Hamiltonian as

|ψ_eig(~c)〉 ≈ ∑_m c_m |ψ_m〉,  (48)

where |ψ_0〉 = |G̃〉, the |ψ_m〉 (m ≥ 1) are states in the expanded subspace, ~c = (c_0, c_1, c_2, . . . )^T, and 〈ψ_eig(~c)|ψ_eig(~c)〉 = 1. For fermionic systems, we can choose |ψ_m〉 = a†_i a_j |G̃〉, m = (i, j), with a†_j and a_j being the creation and annihilation operators. For spin systems, we can set |ψ_m〉 = P_m |G̃〉 with P_m ∈ {I, X, Y, Z}^{⊗N_q}. As the number of P_m increases, the subspace expands, which potentially improves the accuracy of the subspace expansion method. Denoting E(~c, ~c*) = 〈ψ_eig|H|ψ_eig〉, the state |ψ_eig(~c)〉 is an eigenstate of H when E(~c, ~c*) corresponds to a local minimum satisfying

δ[E(~c, ~c*) − E 〈ψ_eig(~c)|ψ_eig(~c)〉] = 0.  (49)

Here δE(~c, ~c*) = ∑_i δc_i ∂E(~c)/∂c_i + c.c., and E is the Lagrange multiplier for the constraint 〈ψ_eig(~c)|ψ_eig(~c)〉 = 1. A solution of this equation results in

H̃~c = E S̃~c.  (50)

Here H̃ and S̃ are defined by

H̃_{αβ} = 〈ψ_α|H|ψ_β〉, S̃_{αβ} = 〈ψ_α|ψ_β〉,  (51)

and E corresponds to the energy of the eigenstate. Both H̃ and S̃ can be efficiently measured. For example, with |ψ_m〉 = P_m |G̃〉 we have H̃_{αβ} = 〈G̃|P_α H P_β|G̃〉 and S̃_{αβ} = 〈G̃|P_α P_β|G̃〉, which can be respectively obtained by measuring the expectation values of the operators P_α H P_β and P_α P_β for the approximated ground state |G̃〉. By solving the generalised eigenvalue problem of Eq. (50), we can obtain the eigenspectrum of the Hamiltonian in the considered subspace. We refer to Appendix E for the derivation.
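A minimal sketch of Eqs. (48)-(51) for one qubit: an imperfect "VQE" ground state of H = X is expanded with the Pauli subspace {I, Z}|G̃〉, and solving the generalised eigenproblem H̃~c = E S̃~c recovers the exact spectrum {−1, +1}. The Hamiltonian, the imperfect angle, and the choice of Pauli expansion are ours:

```python
import numpy as np
from scipy.linalg import eigh

# Quantum subspace expansion for H = X with subspace {|G~>, Z|G~>}.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

a = np.pi / 4 + 0.2                               # imperfect VQE angle
g = np.array([np.cos(a), -np.sin(a)])             # approximate ground state
basis = [g, Z @ g]                                # |psi_0>, |psi_1>

h_mat = np.array([[u.conj() @ X @ v for v in basis] for u in basis])
s_mat = np.array([[u.conj() @ v for v in basis] for u in basis])
energies = eigh(h_mat, s_mat, eigvals_only=True)  # solves H~ c = E S~ c
```

Because the two subspace vectors happen to span the full single-qubit space here, the subspace spectrum is exact; in general it is only an approximation.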
3. Contraction VQE methods
The contraction VQE methods first discover the lowest energy subspace and then construct eigenstates in that subspace. Here we introduce two contraction VQE methods: subspace-search VQE (SSVQE) [89] and the multistate contracted variant of VQE (MC-VQE) [90].
The procedure of SSVQE is as follows. We first prepare a set of orthogonal states {|φ_i〉}_{i=0}^k (〈φ_j|φ_i〉 = δ_{i,j}), such as states in the computational basis. Then we minimise the cost function

C^(1)_CO(~θ) = ∑_{j=0}^k 〈φ_j|U†(~θ)HU(~θ)|φ_j〉  (52)

to constrain the subspace {U(~θ*)|φ_i〉}_{i=0}^k to the lowest k + 1 eigenstates, where ~θ* is the parameter set after the optimisation and we define the 0th excited state |E_0〉 as the ground state. At this stage, U(~θ*)|φ_i〉 is generally a superposition of the eigenstates {|E_i〉}_{i=0}^k of H. To project U(~θ*)|φ_s〉 (s ∈ {0, . . . , k}) onto the k-th excited state |E_k〉, we maximise

C^(2)_CO(~φ) = 〈φ_s|V†(~φ)U†(~θ*)HU(~θ*)V(~φ)|φ_s〉.  (53)

By finding the optimal parameters ~φ*, we can approximate the k-th excited state |E_k〉 by U(~θ*)V(~φ*)|φ_s〉. In practice, we can start with k = 0 to find the ground state and then increase the value of k to find excited states.
The MC-VQE method also first projects onto the lowest energy subspace in the same way as SSVQE. Different from SSVQE, MC-VQE assumes the excited states can be expanded in the lowest energy subspace as

|E〉 = ∑_{i=0}^k c_i U(~θ*)|φ_i〉,  (54)

and the goal is to find the coefficient vector ~c = (c_0, c_1, . . . , c_k). This task is similar to the subspace expansion method mentioned above. Now, ~c can be computed by solving the following eigenvalue problem

H̃~c = E~c,  (55)

where the matrix H̃ is

H̃_{α,β} = 〈φ_α|U†(~θ*)HU(~θ*)|φ_β〉,  (56)

and E is the corresponding excited state energy. We can regard Eq. (55) as a special case of Eq. (50) with S = I, because the states in the subspace are mutually orthogonal. The diagonal terms of the matrix H̃ can be evaluated by the same procedure as in the conventional VQE. The off-diagonal terms of H̃ can be obtained by calculating expectation values of H with the states |±_{α,β}〉 = (|φ_β〉 ± |φ_α〉)/√2 and linearly combining them as

2H̃_{α,β} = 〈+_{α,β}|U†(~θ*)HU(~θ*)|+_{α,β}〉 − 〈−_{α,β}|U†(~θ*)HU(~θ*)|−_{α,β}〉.  (57)
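A minimal numerical sketch of Eq. (57): for a real-valued case, the off-diagonal matrix element H̃_{0,1} is recovered from the two expectation values in the superposition states. The toy Hamiltonian and the random orthogonal matrix standing in for the optimised circuit U(~θ*) are our own choices:

```python
import numpy as np

# Recover an off-diagonal element of H~ via the +/- superposition states.
rng = np.random.default_rng(3)
h = np.diag([0.0, 1.0, 2.0, 4.0])                 # toy Hamiltonian
u, _ = np.linalg.qr(rng.normal(size=(4, 4)))      # stand-in for U(theta*)

phi0 = np.eye(4)[0]                                # |phi_0>
phi1 = np.eye(4)[1]                                # |phi_1>
h_rot = u.T @ h @ u                                # U^dag H U (real case)

plus = (phi1 + phi0) / np.sqrt(2)
minus = (phi1 - phi0) / np.sqrt(2)
off = 0.5 * (plus @ h_rot @ plus - minus @ h_rot @ minus)   # Eq. (57)
direct = phi0 @ h_rot @ phi1                       # <phi_0|U^dag H U|phi_1>
```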
A potential problem of the contraction VQE methods is that the energy landscape of C^(1)_CO(~θ) may be more complicated than that of the conventional VQE. This is because the conventional VQE only aims to find the correct ground state, while the contraction VQE methods need to find the correct unitary for all the k + 1 orthogonal states. The benefit of these methods is that the optimisation of U(~θ) is averaged over multiple states, so the excited states can be calculated with equal accuracy.
4. Calculation of Green’s function
Now we show the application of the VQE algorithms to the calculation of Green's function [95, 96], which plays a crucial role in investigating many-body physics such as high-T_c superconductivity [101], topological insulators [102], and magnetic materials [103]. The definition of the retarded Green's function at zero temperature is

G^(R)_{αβ}(t) = −iΘ(t) 〈G| a_α(t)a†_β(0) + a†_β(0)a_α(t) |G〉.  (58)

Here, Θ(t) is the Heaviside step function, a^(†)_{α(β)} is the annihilation (creation) fermionic operator for the fermionic mode α(β), a_α(t) = e^{iHt} a_α e^{−iHt}, and |G〉 is the ground state. Here, we show how to calculate Green's function with the algorithms for finding excited states. Note that another approach is to use the variational quantum simulation algorithms of Section IV.
For simplicity, we consider Green's function in momentum space for identical spins. We thus have α = β = (k, ↑) and denote Green's function as G^(R)_k(t). Green's function has another expression, called the Lehmann representation,

G_k(t) = −i ∑_m e^{i(E_G−E_m)t} |〈E_m|a†_k|G〉|²,  (59)

where |E_m〉 and E_m are the eigenstates and eigenenergies of the Hamiltonian. Thus, if the transition amplitudes |〈E_m|a†_k|G〉|² can be calculated, we can obtain Green's function.
In the literature, Endo et al. [95] used the contraction VQE method and Rungger et al. [96] employed the overlap method to calculate the transition amplitudes and hence Green's function. The algorithm using the overlap method [96] was originally employed for computing Green's function in the specific Jordan-Wigner encoding, and its generalisation to general operators was further studied in Ibe et al. [104]. Here we review the algorithm based on the MC-VQE method [95]. By using the expression for the m-th excited state in Eq. (54), we have

|E_m〉 ≈ ∑_{i=0}^k c^(m)_i U(~θ*)|φ_i〉,  (60)

and hence

〈E_m|a†_k|E_n〉 = ∑_{ij} c^(m)*_i c^(n)_j 〈φ_i|U†(~θ*)a†_k U(~θ*)|φ_j〉.  (61)

Denoting a†_k = A_k + iB_k, |φ^±_{ij}〉 = U(~θ*)(|φ_i〉 ± |φ_j〉)/√2, and |φ^{i±}_{ij}〉 = U(~θ*)(|φ_i〉 ± i|φ_j〉)/√2, where A_k and B_k are Hermitian operators, we have

Re[〈φ_i|U†(~θ*)A_k U(~θ*)|φ_j〉] = 〈φ^+_{ij}|A_k|φ^+_{ij}〉 − 〈φ^−_{ij}|A_k|φ^−_{ij}〉,
Im[〈φ_i|U†(~θ*)A_k U(~θ*)|φ_j〉] = 〈φ^{i+}_{ij}|A_k|φ^{i+}_{ij}〉 − 〈φ^{i−}_{ij}|A_k|φ^{i−}_{ij}〉,  (62)

and similarly for B_k. Thus we can calculate 〈φ_i|U†(~θ*)a†_k U(~θ*)|φ_j〉 and hence 〈E_m|a†_k|E_n〉 by measuring the expectation values of A_k and B_k for the states |φ^±_{ij}〉 and |φ^{i±}_{ij}〉.
Note that based on the calculation of Green's function, Rungger et al. [96] implemented a dynamical mean-field theory (DMFT) calculation on a two-site DMFT model by using a superconducting system and a trapped ion system. Also, in the work by Ibe et al. [104], the accuracy of the calculation of the transition amplitudes is compared in detail for the overlap method and the contraction VQE methods.
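A minimal numerical sketch of the Lehmann representation of Eq. (59): for a toy two-level Hamiltonian and a toy transition operator (both our own stand-ins for H and a†_k), the eigenstate sum agrees with the Heisenberg-picture correlator −i〈G|a(t)a†(0)|G〉:

```python
import numpy as np

# Lehmann sum of Eq. (59) versus the direct Heisenberg-picture expression.
h = np.array([[0.0, 0.3], [0.3, 1.0]])            # toy Hamiltonian
a_dag = np.array([[0.0, 0.0], [1.0, 0.0]])        # toy transition operator

vals, vecs = np.linalg.eigh(h)
g = vecs[:, 0]                                     # ground state, energy E_G
t = 0.7

# Sum over eigenstates |E_m>, Eq. (59).
g_lehmann = -1j * sum(
    np.exp(1j * (vals[0] - vals[m]) * t) * abs(vecs[:, m].conj() @ a_dag @ g) ** 2
    for m in range(2)
)

# Direct form: -i <G| e^{iHt} a e^{-iHt} a_dag |G>, with a = a_dag^dagger.
u_t = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.T       # e^{-iHt}
g_direct = -1j * (g.conj() @ u_t.conj().T @ a_dag.conj().T @ u_t @ a_dag @ g)
```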
E. Variational circuit recompilation
Circuit recompilation aims to approximate a given quantum circuit with one that is compatible with practical experimental hardware. Concerning noisy gates or a restricted set of realisable gates, the compiler runs to reduce the circuit noise or the implementation cost, as shown in Fig. 9. For example, an arbitrary two-qubit unitary is generally not directly supported on practical quantum hardware, and we need to compile the unitary into a sequence of realisable gates. Since a naive decomposition of the unitary may induce many unnecessary gates on hardware with specific topological structures, finding efficient and simple decompositions of quantum circuits is vital for near-term quantum computing.

FIG. 9. Schematic figure for circuit recompilation algorithms. A variational quantum circuit with hardware constraints tries to approximate the target quantum circuit.
Denote the target unitary as U_T and the variational circuit as U(~θ), which consists of gates compatible with the hardware. Circuit recompilation tunes the parameters ~θ so that U_T ≈ U(~θ). We can mathematically define a metric of the distance between U_T and U(~θ) via the average gate infidelity (AGI) [105, 106]

C_I(~θ) = 1 − ∫dϕ |〈ϕ|U†_T U(~θ)|ϕ〉|²,  (63)

where the pure states ϕ are randomly chosen according to the Haar measure. By construction, the AGI indicates how different two unitary gates are on average for randomly sampled pure states, and it vanishes when the circuit is perfectly recompiled. The AGI method has been applied to compiling a high-fidelity CNOT gate with a cross-resonance gate suffering from crosstalk and single-qubit operations. Furthermore, a high-fidelity four-qubit syndrome extraction circuit was recompiled to be achievable with simultaneous cross-resonance drives under crosstalk [106].
Another related cost function [105] is

C_HS(~θ) = 1 − (1/d²)|Tr(U†_T U(~θ))|²,  (64)

which can be efficiently calculated with the 'Hilbert-Schmidt test' circuit [105] by using two copies of the state. In particular, we have

(1/d²)|Tr(U†_T U(~θ))|² = |〈Ψ⁺|U_T ⊗ U*(~θ)|Ψ⁺〉|²,  (65)

where |Ψ⁺〉 = (1/√d) ∑_i |i〉 ⊗ |i〉 is the maximally entangled state, with d being the dimension of the system. Then, |Tr(U†_T U(~θ))|²/d² can be evaluated by applying U_T ⊗ U*(~θ) to |Ψ⁺〉 and measuring the probability of observing |Ψ⁺〉. Note that C_HS(~θ) and the AGI are related [107, 108] as

C_HS(~θ) = ((d + 1)/d) C_I(~θ).  (66)
Thus C_HS(~θ) can be used as an alternative cost function for circuit recompilation. Note that C_I(~θ) and C_HS(~θ) generally vanish exponentially with increasing system size. This problem is solved by using a weighted average of this global cost function and the corresponding local cost function (see Appendix D for an explanation of global and local costs). In this way, simple 9-qubit unitaries were successfully recompiled on a noiseless simulator and on the Rigetti and IBM experimental processors.
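As a numerical check of the relation (66), the AGI of Eq. (63) can be estimated by Monte Carlo sampling of Haar-random states and compared against C_HS of Eq. (64). The target and trial unitaries below are arbitrary single-qubit examples of our own (d = 2):

```python
import numpy as np

# Estimate C_I by Haar sampling and compare with C_HS: ratio ~ (d+1)/d.
rng = np.random.default_rng(4)
d = 2
u_t = np.eye(2)                                                # target: identity
th = 0.3
u = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

m = u_t.conj().T @ u
c_hs = 1 - abs(np.trace(m)) ** 2 / d**2                        # Eq. (64)

fids = []
for _ in range(50000):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)                                     # Haar-random state
    fids.append(abs(v.conj() @ m @ v) ** 2)
c_i = 1 - np.mean(fids)                                        # Monte Carlo AGI

ratio = c_hs / c_i                                             # ~ (d+1)/d = 1.5
```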
Meanwhile, circuit recompilation can be implemented for specific input states, which may reduce the optimisation complexity [105, 109]. The goal is to find a recompiled circuit U(~θ) so that U_T|ψ_in〉 ≈ U(~θ)|ψ_in〉 holds for the given state |ψ_in〉 and target unitary U_T. When |ψ_in〉 is a product state, we can easily define a Hamiltonian H_in whose ground state is |ψ_in〉. For example, for |ψ_in〉 = |00 . . . 0〉, we can set H_in = −∑_j Z_j, where Z_j is the Pauli Z operator acting on the j-th qubit. Then, by minimising the expectation value of H_in for the trial state |ϕ(~θ)〉 = U†(~θ)U_T|ψ_in〉, we can obtain U(~θ) such that U†(~θ)U_T|ψ_in〉 ≈ |ψ_in〉 and hence U_T|ψ_in〉 ≈ U(~θ)|ψ_in〉. We can leverage the conventional VQE method or variational imaginary time simulation for the optimisation, and the fidelity of the recompiled state is lower-bounded as

F_R ≥ 1 − δ/(E_1 − E_0),  (67)

where δ = 〈ϕ(~θ)|H_in|ϕ(~θ)〉 − E_0 is the deviation of the cost function from the ground state energy, and E_0 and E_1 are the ground state and first excited state energies. Note that E_0 = −N_q and E_1 = 2 − N_q when the unitary acts on N_q qubits. This algorithm successfully recompiled a unitary operator on a 7-qubit system into another quantum circuit with a different topology, dramatically reducing the number of two-qubit gates from 144 to 72, while the number of single-qubit gates increased from 42 to 77 [109].
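A minimal numerical sketch of the bound of Eq. (67) for N_q = 2, where H_in = −(Z_1 + Z_2) gives E_0 = −2 and E_1 = 0. The slightly imperfect trial state (a small leakage amplitude out of |00〉) is our own illustration:

```python
import numpy as np

# Fidelity bound F_R >= 1 - delta/(E_1 - E_0) for H_in = -(Z_1 + Z_2).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)
h_in = -(np.kron(Z, I2) + np.kron(I2, Z))         # ground state |00>, E_0 = -2

eps = 0.1                                          # small leakage amplitude
phi = np.zeros(4)
phi[0], phi[1] = np.sqrt(1 - eps**2), eps          # imperfect recompiled state

e0, e1 = -2.0, 0.0
delta = (phi @ h_in @ phi) - e0                    # cost deviation
bound = 1 - delta / (e1 - e0)                      # Eq. (67)
fidelity = abs(phi[0]) ** 2                        # true overlap with |00>
```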
F. Variational-state quantum metrology
Quantum metrology aims to discover the optimal setup for probing a parameter with the minimal statistical shot noise [110–112]. The basic setup of quantum metrology is as follows: firstly, we prepare the initial probe state |ψp〉 and evolve it under the Hamiltonian H(ω), with ω being the target parameter. After time t, the probe state can be described as |ψp(ω, t)〉, which is measured and analysed to extract information about ω. Typically, ω can be a magnetic field, for example, with Hamiltonian H(ω) = ω∑j σz^(j), where σz^(j) is the Pauli Z operator on the j-th qubit, and the measurement is a Ramsey-type measurement. When separable states are used as probes, the statistical error of the parameter behaves as δω ∝ 1/√Nq, with Nq being the number of qubits. This scaling is called the standard quantum limit (SQL) [112]. Notably, the scaling can be improved by using entangled states, such as GHZ states, symmetric Dicke states, and squeezed states. In the absence of noise, or for specific types of noise in the evolution under the Hamiltonian, the optimal strategy has been revealed. For example, with no environmental noise, the optimal probe state has been proved to be the GHZ state, which achieves the Heisenberg limit δω ∝ 1/Nq [111–113]. However, in the presence of general types of noise, analytical arguments about the optimal strategy are usually very hard.
Variational-state quantum metrology uses a quantum computer to find the optimal quantum state for quantum metrology with noisy hardware. Different proposals have been studied with either general variational quantum circuits [114] or specific experimental setups [115], e.g., optical tweezer arrays of neutral atoms [116, 117]. In general, suppose the initial probe state is created on a quantum device as |ϕp(~θ)〉. By evolving the state under the Hamiltonian H(ω), followed
by a proper measurement, we can define a cost function to reflect the metrology performance. In particular, we can either use the quantum Fisher information (QFI) [114] or the spin squeezing parameter [115, 118]. Here, we focus on the QFI, which characterises the minimum uncertainty of the estimated parameter as

δω ≥ 1/√(Ns FQ[ρP(ω, t)]), (68)

where FQ[ρP(ω, t)] is the QFI of the noisy output state ρP(ω, t) after the evolution time t, and Ns is the number of samples. Denoting by T = Ns t the effective total time of all Ns samples, we can define a cost function as

CME(~θ, t) = (T/t) FQ[ρP(ω, t, ~θ)]. (69)
For fixed T, we aim to optimise ~θ and t to maximise CME(~θ, t). We show the schematic of this method in Fig. 10. Although the QFI of mixed states cannot be directly evaluated in an efficient way, the classical Fisher information (CFI) defined for a fixed measurement basis lower bounds the QFI and can be computed efficiently. We note that the CFI is equivalent to the QFI when the measurement basis is optimal, so the QFI can in principle be obtained by optimising the measurement.
With variational-state quantum metrology, a highly asymmetric state has been discovered for a 9-qubit system that outperforms previous results [114]. Interestingly, even though the Hamiltonian and the noise model are symmetric under the permutation of qubits, there is a symmetry breaking in the optimal solution. Note that, unlike conventional analytical approaches employed in quantum metrology, we do not have to know the noise model of the quantum device to obtain the optimised state. Recently, this algorithm was generalised to multi-parameter estimation [119].
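For noiseless unitary encoding the QFI in Eq. (68) reduces to FQ = 4t²(〈G²〉 − 〈G〉²) with generator G = ∑j σz^(j). The following classical sketch (an illustrative toy, not the variational procedure of [114]) recovers the SQL scaling for a separable probe and the Heisenberg scaling for a GHZ probe:

```python
import numpy as np
from functools import reduce

def qfi_pure(probe, n_qubits, t):
    """QFI of a pure probe under H(omega) = omega * sum_j Z_j for time t.
    For noiseless unitary encoding, F_Q = 4 t^2 Var(G) with G = sum_j Z_j."""
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    G = sum(reduce(np.kron, [Z if j == k else I for j in range(n_qubits)])
            for k in range(n_qubits))
    mean = np.real(probe.conj() @ G @ probe)
    mean_sq = np.real(probe.conj() @ (G @ G) @ probe)
    return 4 * t**2 * (mean_sq - mean**2)

n, t = 4, 1.0
plus = np.ones(2**n) / np.sqrt(2**n)                    # separable probe |+>^n
ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 1/np.sqrt(2)   # GHZ probe

assert np.isclose(qfi_pure(plus, n, t), 4 * n)          # SQL: F_Q grows like N_q
assert np.isclose(qfi_pure(ghz, n, t), 4 * n**2)        # Heisenberg: F_Q grows like N_q^2
```

Plugging these values into Eq. (68) gives δω ∝ 1/√Nq and δω ∝ 1/Nq, respectively.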
FIG. 10. Schematic figure for variational quantum-state metrology. After the probe state is prepared by the variational quantum circuit, it evolves under the Hamiltonian H(ω) with environmental noise, and is measured to evaluate the cost function.
G. Variational quantum algorithms for quantum error correction
Quantum error correction (QEC) makes use of a larger number of physical qubits to encode logical qubits so as to protect them against physical errors. In general, the conventional formulation of QEC does not take into account the experimental implementation of the code. However, in the NISQ era, the set of possible operations and the qubit topology are restricted for each physical hardware platform. Therefore, hardware-friendly implementation of QEC, tailored to actual experiments, is crucial for near-term quantum computers [120–122]. Here, we illustrate two examples [121, 122] for realising QEC on NISQ computers.
1. Variational circuit compiler for quantum error correction
This variational circuit compiler is for automatically discovering the optimal quantum circuit satisfying user-specified requirements for a given QEC code [121]. Typically, such requirements come from hardware properties, such as the available gate set, limited topology, and achievable error rate. A more concrete example is the two-qubit gate implementation, e.g., superconducting qubits employing the CNOT gate and ion trap systems using the Mølmer-Sørensen gate. We prepare an ansatz unitary U(~θ), or a process E(~θ), that takes account of gate noise and reflects the requirements. The compiler is to optimise the parameters ~θ so that the ansatz emulates the encoded target state |ψ0〉L for a given QEC code.
The essential point of the compiler is to design a Hamiltonian whose ground state is the target state. Then the target state can be obtained via the conventional VQE or variational imaginary time simulation algorithm. Generally, the code space is defined by a set of commuting Pauli operators, the so-called stabiliser generators. For example, when we consider the three-qubit code, the logical state |ψ〉L = α |000〉 + β |111〉 is an eigenstate of Z1Z2 and Z2Z3 with eigenvalue 1. Suppose the target logical state |ψ0〉L is determined by the stabiliser generator set {Gk} with Gk |ψ0〉L = |ψ0〉L. For determining a particular logical qubit state, we can choose an additional logical operator, ML = |ψ0〉L〈ψ0|L − |ψ⊥0〉L〈ψ⊥0|L, which can be decomposed as a linear combination of the logical I, X, Y, Z operators. Then the logical state |ψ0〉L is the unique ground state of the Hamiltonian

HL = −∑k ak Gk − a0 ML (70)

with energy E0 = −(∑k ak + a0), where ak, a0 > 0.
When using the variational algorithm to minimise the average energy, an approximation of the encoding circuit is found. Suppose the energy of the optimally discovered state is Edis; the fidelity between the discovered state
and the target logical state is lower bounded as
f ≥ 1− (Edis − E0)/a, (71)
where a = min{ak, a0}. This inequality ensures that if Edis is sufficiently close to E0, the discovered encoding circuit approximates the target QEC code well. The algorithm has been numerically tested for the five- and seven-qubit codes with different available gate sets, for both noise-free and noisy circuits [121].
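For the three-qubit code the Hamiltonian of Eq. (70) is small enough to diagonalise directly. The sketch below (an illustrative check with ak = a0 = 1 and an arbitrarily chosen logical state, our own choices) confirms that |ψ0〉L is the unique ground state with E0 = −(∑k ak + a0):

```python
import numpy as np
from functools import reduce

Z, I = np.diag([1.0, -1.0]), np.eye(2)
kron = lambda ops: reduce(np.kron, ops)

# Stabiliser generators of the three-qubit code: Z1 Z2 and Z2 Z3.
G1 = kron([Z, Z, I])
G2 = kron([I, Z, Z])

# Target logical state |psi_0>_L = alpha|000> + beta|111> and its orthogonal partner.
alpha, beta = 0.6, 0.8
psi0 = np.zeros(8); psi0[0], psi0[7] = alpha, beta
psi_perp = np.zeros(8); psi_perp[0], psi_perp[7] = beta, -alpha
ML = np.outer(psi0, psi0) - np.outer(psi_perp, psi_perp)

a1 = a2 = a0 = 1.0
HL = -a1 * G1 - a2 * G2 - a0 * ML                 # Eq. (70)

evals, evecs = np.linalg.eigh(HL)                 # ascending eigenvalues
assert np.isclose(evals[0], -(a1 + a2 + a0))      # E0 = -(sum_k a_k + a_0)
assert np.isclose(abs(evecs[:, 0] @ psi0), 1.0)   # ground state is |psi_0>_L
```

The gap above the ground state is set by min{ak, a0}, which is exactly the constant a appearing in the fidelity bound of Eq. (71).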
2. Variational quantum error corrector (QVECTOR)
The variational quantum error corrector (QVECTOR) is for discovering a device-tailored quantum error correcting code [122]. Different from the compiler for preparing a target logical state [121], QVECTOR aims to discover the optimal encoding circuit that preserves the quantum state under noise. As shown in Fig. 11, the circuit US is used for preparing the to-be-encoded k-qubit state |ϕ〉, V(~θ1) and V†(~θ1) are the noisy encoding and decoding circuits on n ≥ k qubits, and W(~θ2) consists of noisy gates for state recovery, which operate on n + r qubits. This quantum circuit corresponds to creating an [n, k] quantum code with r additional qubits. The parameters ~θ1 and ~θ2 are optimised to maximise the average code fidelity,
CQV(~θ1, ~θ2) = ∫ψ∈C 〈ψ| ER(~θ1, ~θ2)(|ψ〉〈ψ|) |ψ〉 dψ, (72)

where ER(~θ1, ~θ2) denotes the composition of the encoding, recovery, and decoding operations V(~θ1), W(~θ2), and V†(~θ1), and the integration is calculated following the Haar distribution with |ψ〉 = US |0〉⊗k. In practice, we can set US to be a unitary 2-design for efficiently calculating the average fidelity [123]. In numerical simulation, QVECTOR learned the three-qubit code [124] under phase damping noise, resulting in a six times longer T2 than conventional methods. In the presence of amplitude and phase damping noise, QVECTOR learned a quantum code outperforming the five-qubit stabiliser code.
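The Haar average in Eq. (72) can be estimated by sampling. As a stand-in for the noisy process ER (whose form depends on the device), the sketch below uses a single-qubit depolarising channel, an assumption made purely for illustration; for this channel the integrand is constant and the average fidelity is (1 − p) + p/d:

```python
import numpy as np

# Monte-Carlo estimate of the Haar-averaged fidelity, Eq. (72), for a toy
# channel E(rho) = (1-p) rho + p I/d (single-qubit depolarising noise).
rng = np.random.default_rng(2)
p, d = 0.3, 2

def haar_state(d):
    """Normalised complex Gaussian vector = Haar-random pure state."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

fids = []
for _ in range(200):
    psi = haar_state(d)
    rho = np.outer(psi, psi.conj())
    out = (1 - p) * rho + p * np.eye(d) / d      # channel output
    fids.append(np.real(psi.conj() @ out @ psi)) # <psi| E(|psi><psi|) |psi>

assert np.isclose(np.mean(fids), (1 - p) + p / d)
```

Replacing the exhaustive sampling by a unitary 2-design for US, as in [123], gives the same average with a finite number of circuits.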
FIG. 11. Schematic figure of the variational circuit for QVECTOR.
H. Dissipative-system variational quantum eigensolver
The dissipative-system variational quantum eigensolver (dVQE) is for obtaining the non-equilibrium steady state (NESS) of an open quantum system. In practice, quantum systems inevitably interact with their environments, and quantum states decohere owing to the noise. Therefore, the ability to simulate open quantum systems is indispensable for studying practical quantum phenomena. Notably, investigating the NESS of open quantum systems is very important, e.g., in revealing the transport mechanism in nano-scale devices such as single-atom junctions [125].
Time-independent Markovian open quantum dynamics can be described by the Lindblad master equation,

dρ/dt = −i[H, ρ] + L(ρ), (73)

where ρ is the system state, H is the Hamiltonian, L(ρ) = ∑k (2LkρL†k − L†kLkρ − ρL†kLk), and Lk is a Lindblad operator. Denoting L′(ρ) = −i[H, ρ] + L(ρ), the non-equilibrium steady state corresponds to the state with L′(ρ) = 0. To find the steady state, we first vectorise the state ρ = ∑nm ρnm |n〉〈m| to

|ρ〉〉 = (1/N) ∑nm ρnm |n〉 |m〉, (74)

where N is a normalisation factor. With the vectorised L′, the steady state satisfies L′|ρness〉〉 = 0 and hence we have

〈〈ρness|L′†L′|ρness〉〉 = 0, (75)
where

L′ = −i(H ⊗ I − I ⊗ HT) + Dv,
Dv = ∑k (Lk ⊗ L∗k − (1/2) L†kLk ⊗ I − (1/2) I ⊗ LTk L∗k). (76)

Here A∗ denotes the complex conjugate of the operator A.
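For a single qubit, Eq. (76) can be written out explicitly and the NESS extracted classically. The sketch below is our own toy example, assuming H = (ω/2)Z and a single amplitude-damping operator L = √γ σ−; it builds the vectorised generator and recovers the steady state |0〉〈0| as the null vector of L′:

```python
import numpy as np

# Toy model (illustrative assumptions): H = (omega/2) Z, one Lindblad
# operator L = sqrt(gamma) sigma_minus (amplitude damping).
omega, gamma = 1.0, 0.5
H = 0.5 * omega * np.diag([1.0, -1.0])
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

# Vectorisation convention of Eq. (74): A rho B -> (A kron B^T) |rho>>.
Dv = (np.kron(L, L.conj())
      - 0.5 * np.kron(L.conj().T @ L, I)
      - 0.5 * np.kron(I, (L.conj().T @ L).T))
Lp = -1j * (np.kron(H, I) - np.kron(I, H.T)) + Dv      # Eq. (76)

# The NESS spans the kernel of L'; for amplitude damping it is |0><0|.
evals, evecs = np.linalg.eig(Lp)
ness = evecs[:, np.argmin(np.abs(evals))].reshape(2, 2)
ness = ness / np.trace(ness)
assert np.allclose(ness, np.diag([1.0, 0.0]), atol=1e-8)
```

On hardware the same kernel condition is imposed variationally by driving the cost 〈〈ρ|L′†L′|ρ〉〉 of Eq. (75) to zero.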
As |ρ(~θ)〉〉 is a vector, we can express this vector representation of the quantum state using a pure trial state on a variational quantum circuit, imposing proper constraints so that the corresponding density matrix ρ(~θ) is physical, i.e., positive semi-definiteness and Hermiticity are satisfied. The trace of the state needs to be unity, but this condition is accounted for when the expectation value of a given observable is measured. We can prepare the vectorised physical state on a variational quantum circuit as

|ρ(~θ)〉〉 = U(~θ1) ⊗ U∗(~θ1) |ρd(~θ2)〉〉,
|ρd(~θ2)〉〉 = (1/N) ∑j αj(~θ2) |j〉s ⊗ |j〉a, (77)
where ~θ ≡ {~θ1, ~θ2}, αj(~θ2) > 0 with ∑j αj(~θ2) = 1, |j〉 is in the computational basis, and the corresponding density matrix is ρ(~θ) = ∑j αj U(~θ1) |j〉s〈j|s U†(~θ1). Here, N = √(∑j α2j) ensures the normalisation. The subscripts s and a denote the system and ancilla, respectively. We refer to Yoshioka et al. [126] for the detailed ansatz construction. By preparing a trial state |ρ(~θ)〉〉 and minimising the cost function CNE(~θ) = 〈〈ρ(~θ)|L′†L′|ρ(~θ)〉〉, we can obtain an approximation of the NESS. After the optimal parameters ~θ(op)1 and ~θ(op)2 are found, we can measure the expectation value of any given observable O for ρ(~θ) by randomly generating the state U(~θ(op)1) |j〉 with probability αj(~θ(op)2), and averaging the measurement outcomes. This method was demonstrated for an 8-qubit dissipative Ising model with a 16-qubit classical simulation.
I. Other applications
There are other applications, such as VQAs for nonlinear problems [127], fidelity estimation [128], factoring [129], singular value decomposition [76, 130], quantum foundations [131], circuit QED simulation [132], and Gibbs state preparation [130, 133, 134].
IV. VARIATIONAL QUANTUM SIMULATION
In this section, we review the variational quantum simulation algorithms for simulating the dynamical evolution of quantum systems, and their applications in linear algebra tasks, Gibbs state preparation, and the evaluation of Green's functions.
A. Variational quantum simulation algorithm for density matrices
We first show how to generalise the simulation algorithms for real and imaginary time evolution from pure states to mixed states [26]. The main idea is again to consider a parametrised representation of mixed states and map the dynamics to the evolution of the parameters. As we are considering mixed states, only McLachlan's variational principle applies, which leads to an evolution of the parameters determined by the density matrix. Although conventional quantum computers operate on pure states, we can also represent mixed states by their purifications using ancilla qubits.
1. Variational real time simulation for open quantum system dynamics
In practice, a quantum system interacts with its environment, so open quantum system simulation algorithms are useful for investigating practical quantum phenomena. Here we aim to simulate the real time evolution of open quantum systems described by the Lindblad master equation dρ/dt = L(ρ), with L(ρ) being a super-operator on the state as in Eq. (73). By parametrising the state as ρ(~θ(t)) with real parameters, and applying McLachlan's variational principle δ‖dρ(~θ(t))/dt − L(ρ)‖ = 0, we can obtain an equation for the parameters similar to the one for closed systems in Eq. (11),

∑j Mk,j θ̇j = Vk, (78)
where

Mk,j = Tr[(∂ρ(~θ(t))/∂θk)† ∂ρ(~θ(t))/∂θj],
Vk = Tr[(∂ρ(~θ(t))/∂θk)† L(ρ)]. (79)
Note that the evaluation of M and V can be reduced to the computation of terms of the form c · Re(Tr[e^{iϕ} ρ1ρ2]), with ρ1 and ρ2 being two quantum states, and c, ϕ ∈ R [26]. By encoding the mixed state via a purified pure state, each such term can be evaluated via the SWAP test circuit. Note that, in order to simulate an open system of Nq qubits, we first need 2Nq qubits to represent its purification. We also need two copies of the purification for evaluating M and V, so in total we need 4Nq qubits to simulate open quantum system dynamics on Nq qubits. We review below an alternative algorithm that simulates the stochastic Schrödinger equation, which enables the simulation of Nq-qubit open systems on Nq-qubit quantum hardware.
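The terms c · Re(Tr[e^{iϕ}ρ1ρ2]) are accessible because of the standard identity behind the SWAP test, Tr[(ρ1 ⊗ ρ2) SWAP] = Tr[ρ1ρ2], which the following few lines of numpy confirm for random density matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density(d):
    """Random density matrix: A A^dag normalised to unit trace."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

d = 2
rho1, rho2 = random_density(d), random_density(d)
# SWAP operator on two d-dimensional systems: SWAP |i,j> = |j,i>.
SWAP = np.eye(d * d).reshape(d, d, d, d).transpose(1, 0, 2, 3).reshape(d * d, d * d)
lhs = np.trace(np.kron(rho1, rho2) @ SWAP)
assert np.isclose(lhs, np.trace(rho1 @ rho2))
```

Measuring the SWAP operator on two copies of the purified state therefore yields exactly the overlap terms needed for M and V.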
2. Variational imaginary time simulation for a density matrix
The variational quantum simulation algorithm can also be applied to simulate the imaginary time evolution of density matrices [26], which is defined as

ρ(τ) = e^{−Hτ} ρ0 e^{−Hτ} / Tr[e^{−Hτ} ρ0 e^{−Hτ}], (80)

where ρ0 is the initial state. The time derivative equation for ρ(τ) is

dρ(τ)/dτ = −{H, ρ(τ)} + 2〈H〉ρ(τ), (81)

where {A, B} = AB + BA. By applying McLachlan's variational principle as δ‖dρ/dτ + {H, ρ(τ)} − 2〈H〉ρ(τ)‖ = 0, we obtain the evolution of the parameters as

∑j Mk,j θ̇j = Wk, (82)
where

Wk = −Tr[(∂ρ(~θ(τ))/∂θk)† {H, ρ(~θ(τ))}]. (83)

We note that Wk can be computed similarly to Vk.
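Equation (81) follows by differentiating Eq. (80); a quick finite-difference check with a randomly chosen Hermitian H (a classical sanity test, not part of the variational algorithm) confirms the derivative term by term:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                      # random Hermitian "Hamiltonian"
rho0 = np.eye(4) / 4                   # maximally mixed initial state

def rho_tau(tau):
    """Normalised imaginary-time evolved state of Eq. (80)."""
    w, U = np.linalg.eigh(H)
    E = U @ np.diag(np.exp(-w * tau)) @ U.T
    r = E @ rho0 @ E
    return r / np.trace(r)

tau, eps = 0.7, 1e-5
rho = rho_tau(tau)
drho = (rho_tau(tau + eps) - rho_tau(tau - eps)) / (2 * eps)  # central difference
rhs = -(H @ rho + rho @ H) + 2 * np.trace(H @ rho) * rho      # Eq. (81)
assert np.allclose(drho, rhs, atol=1e-6)
```

The 2〈H〉ρ term comes from differentiating the normalising trace in Eq. (80), which is why it is absent for unnormalised evolution.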
B. Variational quantum simulation algorithms for general processes
In this section, we review the variational quantum simulation algorithms for general processes, including the generalised time evolution, its application in solving linear algebra tasks, and the simulation of open system dynamics.
1. Generalised time evolution
Apart from real and imaginary time evolution, we consider the generalised time evolution defined as

A(t) d|u(t)〉/dt = |du(t)〉, (84)

where |du(t)〉 = ∑k Bk(t) |u′k(t)〉, and A(t) and Bk(t) are general (possibly non-Hermitian) sparse matrices that may be efficiently decomposed as sums of Pauli operators, and each |u′k(t)〉 can be either |u(t)〉 or any fixed state. The quantum states |u(t)〉 and |u′k(t)〉 can be unnormalised, and we assume a parametrisation of the states as |u(~θ)〉 = α(~θ1) |ψ(~θ2)〉, where ~θ = {~θ1, ~θ2}, α(~θ1) is a classical coefficient, and |ψ(~θ2)〉 is a quantum state generated on a quantum computer. By using McLachlan's variational principle,

δ‖A(t) d|u(~θ(t))〉/dt − ∑k Bk(t) |u′k(t)〉‖ = 0, (85)

we obtain an evolution of the parameters similar to Eq. (11),

∑j M̄k,j θ̇j = V̄k. (86)
Each matrix element M̄k,j or V̄k can be expanded as a sum of terms that can be efficiently measured via a quantum circuit. We refer to Endo et al. [135] for details.
Real time evolution corresponds to A(t) = I and |du(t)〉 = −iH |u(t)〉, and imaginary time evolution corresponds to A(t) = I and |du(t)〉 = −(H − 〈u(t)|H|u(t)〉) |u(t)〉 for a Hamiltonian H. Therefore, the generalised time evolution unifies real and imaginary time evolution. In addition, the generalised time evolution describes general first-order differential equations with non-Hermitian Hamiltonians, which may have applications in non-Hermitian quantum mechanics. In the following, we show its application in solving linear algebra tasks and simulating the stochastic Schrödinger equation.
2. Matrix multiplication and linear equations.
Now, we explain how the algorithm for generalised time evolution can be applied to realise matrix multiplication and to solve linear systems of equations [135]. This is an alternative to the methods for linear algebra discussed in Sec. III C [20, 76]. For a sparse matrix M and an (unnormalised) state vector |u0〉, we aim to obtain

|uM〉 = M |u0〉, |uM−1〉 = M−1 |u0〉, (87)

for matrix multiplication and for solving linear systems of equations, respectively. For matrix multiplication, by setting |uM(t)〉 = C(t) |u0〉 with C(t) = (t/T) M + (1 − t/T) I, we have |uM(0)〉 = |u0〉 being the given vector and |uM(T)〉 = M |u0〉 = |uM〉 being the solution. The time dependent state |uM(t)〉 follows a generalised time evolution

d|uM(t)〉/dt = D |u0〉 (88)

with D = (M − I)/T, which corresponds to the case with A(t) = I and |du(t)〉 = D |u0〉 in Eq. (84). For solving linear systems of equations, by considering C(t) |uM−1(t)〉 = |u0〉 with C(t) = (t/T) M + (1 − t/T) I, we have |uM−1(0)〉 = |u0〉 being the given vector and |uM−1(T)〉 = M−1 |u0〉 = |uM−1〉 being the solution. The state |uM−1(t)〉 follows the evolution equation

C(t) d|uM−1(t)〉/dt = −D |uM−1(t)〉, (89)

which corresponds to the generalised time evolution of Eq. (84) with A(t) = C(t) and |du(t)〉 = −D |u(t)〉. Therefore, the ability to efficiently simulate generalised time evolution enables us to solve these two linear algebra problems.
3. Open system dynamics
Now we show how to simulate open quantum system dynamics with the variational algorithm for generalised time evolution [135]. Instead of directly simulating the Lindblad master equation defined in Eq. (73), we consider its alternative representation via the stochastic Schrödinger equation, where the whole evolution is a mixture of pure state trajectories [136],

d|ψc(t)〉 = (−iH − (1/2)∑k (L†kLk − 〈L†kLk〉)) |ψc(t)〉 dt + ∑k [(Lk |ψc(t)〉 / ‖Lk |ψc(t)〉‖ − |ψc(t)〉) dNk]. (90)

Here |ψc(t)〉 is the state of each trajectory, d|ψc(t)〉 = |ψc(t + dt)〉 − |ψc(t)〉, and dNk is a random variable which takes either 0 or 1 and satisfies dNk dNk′ = δkk′ dNk and E[dNk] = 〈ψc(t)|L†kLk|ψc(t)〉 dt. This implies that
the state |ψc〉 jumps to Lk |ψc〉 / ‖Lk |ψc〉‖ with probability E[dNk] when the state evolves from time t to t + dt. Each trajectory can be regarded as a continuous evolution of the state under continuous measurement with operators {O0 = I − ∑k L†kLk dt, Ok = L†kLk dt}. When the measurement outcome corresponds to O0, the state evolves under the generalised time evolution of Eq. (84) with A = I and

|du(t)〉 = [−iH − (1/2)∑k (L†kLk − 〈L†kLk〉)] |u(t)〉. (91)
This process can