
arXiv:1105.1700v1 [physics.data-an] 9 May 2011

Principal axes for stochastic dynamics

V.V. Vasconcelos,1 F. Raischel,2 M. Haase,3 J. Peinke,4 M. Wächter,4 P.G. Lind,1,2 and D. Kleinhans5,6

1 Physics Department, Faculty of Sciences, University of Lisbon, 1649-003 Lisbon, Portugal
2 Center for Theoretical and Computational Physics, University of Lisbon, Av. Prof. Gama Pinto 2, 1649-003 Lisbon, Portugal
3 Institute for High Performance Computing, University of Stuttgart, Nobelstr. 19, DE-70569 Stuttgart, Germany
4 Institute of Physics, University of Oldenburg, DE-26111 Oldenburg, Germany
5 Institute for Marine Ecology, University of Gothenburg, Box 461, SE-405 30 Göteborg, Sweden
6 Institute of Theoretical Physics, University of Münster, DE-48149 Münster, Germany

We introduce a general procedure for directly ascertaining how many independent stochastic sources exist in a complex system modeled through a set of coupled Langevin equations of arbitrary dimension. The procedure is based on the computation of the eigenvalues and the corresponding eigenvectors of local diffusion matrices. We demonstrate our algorithm by applying it to two examples of systems showing Hopf-bifurcation. We argue that computing the eigenvectors associated to the eigenvalues of the diffusion matrix at local mesh points in the phase space enables one to define vector fields of stochastic eigendirections. In particular, the eigenvector associated to the lowest eigenvalue defines the path of minimum stochastic forcing in phase space, and a transform to a new coordinate system aligned with the eigenvectors can increase the predictability of the system.

PACS numbers: 02.50.Ga, 02.50.Ey, 89.65.Gh, 92.70.Gt
Keywords: Stochastic Systems, Langevin equation, Predictability

I. INTRODUCTION

When dealing with measurements on complex systems it is typically difficult to find the optimal set of variables to describe their evolution, which provides the best opportunities for understanding and predicting the system's behavior.

Recently a framework for analyzing measurements on stochastic systems was introduced [1, 2]. This framework is based on the assumption that the measured properties evolve according to a deterministic drift and stochastic fluctuations due to the interactions with the internal degrees of freedom or the environment. These fluctuations are globally ruled by some specific stochastic forcing inherent to the signal evolution itself, and therefore cannot be separated from the measured variables. Whether they are due to quantum uncertainties or a limited knowledge of the state of the system, they are fundamental for a complete description of a complex system. Several works in this scope have already shown the advantage of this approach, ranging from the description of turbulent flows [1] and climate indices [3] to the evolution of stock markets [4] and oil prices [5], just to mention some. In the course of time several improvements to the method were proposed concerning its robustness with respect to finite sampling effects and measurement noise [6-11].

However, a natural question arises when dealing simultaneously with several variables: is it possible to find non-trivial functions of measured properties for which the stochastic fluctuation can be neglected? This would mean that it should be possible to decrease the number of stochastic variables needed to describe the system.

In this paper we will follow this question up in detail and show that, by accessing the eigenvalues of the diffusion matrices of the measured properties, it is possible to derive a path in phase space along which the deterministic contribution is enhanced. As a direct application, our procedure allows for the determination of the number of independent sources of stochastic forcing in a system described by an arbitrarily large number of properties.

A pictorial example is as follows. Consider a professional archer trying to aim at a target in a succession of shots. If the archer holds the bow without any support, one expects that the set of trials is distributed around the center according to a radially symmetric Gaussian. Hence deviations have the same amplitudes in all directions. However, if the archer lies on the floor, the vertical direction will be more confined than the horizontal. In this case we expect a smaller variance of deviations in the vertical direction than in the horizontal one. If the archer lies on an inclined plane, the most confined and least confined directions will have a certain slope related to the plane inclination. Regarding the variance in space as a particular representation of the diffusion matrix, we can now think of more complex systems where at each point in phase space fluctuations can be defined through their local diffusion matrices. The study of their eigenvalues and eigenvectors is the scope of the present paper.

We start in Sec. II by defining our system of variables mathematically and by describing the standard approach to estimate its dynamics from measured data as introduced in Refs. [1, 2], where a particular emphasis will be given to the diffusive terms. Then the general eigenvalue problem is introduced and the role of the local eigenvalues of the diffusion matrices is discussed, which establishes the framework for the present work. We show that at each point of phase space the eigenvectors of the diffusion matrix are tangent to new coordinate lines, one of them corresponding to the lowest eigenvalue, thus indicating the direction towards which fluctuations are, at least partially, suppressed. In Sec. III we demonstrate our approach on two examples of Hopf-bifurcation systems with stochastic forcing. Section IV closes the paper with discussion and conclusions.


II. DEFINING STOCHASTIC EIGENDIRECTIONS

We consider an N-dimensional Langevin process X = (X_1(t), ..., X_N(t)) whose probability density functions (PDFs) f(X, t) evolve according to the Fokker-Planck equation (FPE) [12, 13]

\frac{\partial f(\mathbf{X},t)}{\partial t} = -\sum_{i=1}^{N} \frac{\partial}{\partial x_i}\left[ D^{(1)}_i(\mathbf{X})\, f(\mathbf{X},t) \right] + \sum_{i=1}^{N}\sum_{j=1}^{N} \frac{\partial^2}{\partial x_i \partial x_j}\left[ D^{(2)}_{ij}(\mathbf{X})\, f(\mathbf{X},t) \right] .   (1)

The functions D^{(1)}_i and D^{(2)}_{ij} are called the Kramers-Moyal or the drift and diffusion coefficients and are defined as

D^{(k)}(\mathbf{X}) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\, \frac{M^{(k)}(\mathbf{X},\Delta t)}{k!} ,   (2)

where M^{(k)} are the first and second conditional moments (k = 1, 2). Here we assume that the underlying process is stationary and therefore both drift and diffusion coefficients do not explicitly depend on the time t. The conditional moments can be directly derived from the measured data as [2, 7]:

M^{(1)}_i(\mathbf{X},\Delta t) = \langle Y_i(t+\Delta t) - Y_i(t) \,|\, \mathbf{Y}(t)=\mathbf{X} \rangle
M^{(2)}_{ij}(\mathbf{X},\Delta t) = \langle (Y_i(t+\Delta t) - Y_i(t))(Y_j(t+\Delta t) - Y_j(t)) \,|\, \mathbf{Y}(t)=\mathbf{X} \rangle   (3)

Here Y(t) = (Y_1(t), ..., Y_N(t)) denotes the N-dimensional vector of measured variables at time t, and ⟨·|Y(t) = X⟩ symbolizes a conditional averaging over the entire measurement period, where only measurements with Y(t) = X are taken into account [27]. D^{(1)} is the drift vector and D^{(2)} the diffusion matrix.

Associated with the FPE (1) is a system of N coupled Itô-Langevin equations, which can be written as [12, 13]

\frac{d\mathbf{X}}{dt} = \mathbf{h}(\mathbf{X}) + \mathbf{G}(\mathbf{X})\,\boldsymbol{\Gamma}(t) .   (4)

Here Γ(t) is a set of N normally distributed random variables fulfilling

\langle \Gamma_i(t) \rangle = 0 , \qquad \langle \Gamma_i(t)\,\Gamma_j(t') \rangle = 2\,\delta_{ij}\,\delta(t - t') ,   (5)

that drives the stochastic evolution of X. The vectors h and the matrices G = {g_ij} for all i, j = 1, ..., N are connected to the local drift and diffusion functions through

D^{(1)}_i(\mathbf{X}) = h_i(\mathbf{X})   and   (6)
D^{(2)}_{ij}(\mathbf{X}) = \sum_{k=1}^{N} g_{ik}(\mathbf{X})\, g_{jk}(\mathbf{X}) .   (7)

While the FPE describes the evolution of the joint distribution of the N variables statistically, the system of Langevin equations in (4) models individual stochastic trajectories of the system. In Eq. (4) the term h(X) contains the deterministic part of the macroscopic dynamics, while the locally linear functions G(X) account for the amplitudes of the stochastic forces mirroring the different sources of fluctuations due to all sorts of microscopic interactions within the system. Here we should point out that we consider stationary processes, otherwise ensemble averages have to be taken [2].

The conditional moments in Eq. (3) are computed directly from data time series and are typically linear functions of Δt for sufficiently small Δt [2, 7]. Then, through the limit in Eq. (2), both D^{(1)}_i and D^{(2)}_{ij} are determined.
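As an illustration of how the conditional moments can be evaluated on a phase-space mesh, the following sketch (our illustration, not the authors' code; the function and variable names are hypothetical) bins a two-dimensional time series and averages the increments within each bin, replacing the limit in Eq. (2) by a single finite Δt, in the spirit of the binning approaches mentioned in [27].

```python
import numpy as np

def km_coefficients(Y, dt, bins=30):
    """Conditional-moment estimate of drift D1 and diffusion D2 on a regular
    (bins x bins) mesh, following Eqs. (2)-(3) for N = 2 at a single finite
    time increment dt (a minimal sketch; Y has shape (T, 2))."""
    dY = Y[1:] - Y[:-1]                                        # increments Y(t+dt) - Y(t)
    edges = [np.linspace(Y[:, d].min(), Y[:, d].max(), bins + 1) for d in range(2)]
    idx = [np.clip(np.digitize(Y[:-1, d], edges[d]) - 1, 0, bins - 1) for d in range(2)]
    D1 = np.zeros((bins, bins, 2))
    D2 = np.zeros((bins, bins, 2, 2))
    cnt = np.zeros((bins, bins))
    for n in range(len(dY)):                                   # conditional average per bin
        i, j = idx[0][n], idx[1][n]
        D1[i, j] += dY[n] / dt                                 # M^(1) / dt
        D2[i, j] += np.outer(dY[n], dY[n]) / (2.0 * dt)        # M^(2) / (2 dt)
        cnt[i, j] += 1
    cnt = np.maximum(cnt, 1)
    return D1 / cnt[:, :, None], D2 / cnt[:, :, None, None]
```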

The N × N matrix G cannot be uniquely determined from the symmetric diffusion matrix D^{(2)} for N ≥ 2, because the number of unknown elements in G exceeds the number of known elements in D^{(2)}, leading to N^2 - \tfrac{1}{2}N(N+1) = \tfrac{1}{2}N(N-1) free parameters. However, a simple method to obtain G from D^{(2)} is the following. Due to its symmetry and positive semi-definiteness the diffusion matrix D^{(2)} has only real, non-negative eigenvalues λ_i (i = 1, ..., N). Therefore an orthogonal transformation U can be found that diagonalizes D^{(2)}, i.e. U^T D^{(2)} U = diag(λ_1, ..., λ_N). Taking the positive root of the eigenvalues and transforming back we arrive at g_{ij} = (\sqrt{D^{(2)}})_{ij}, where the symbolic notation \sqrt{D^{(2)}} = U\, diag(\sqrt{\lambda_1}, ..., \sqrt{\lambda_N})\, U^T is used for simplicity. General forms of G can be constructed by multiplication of \sqrt{D^{(2)}} with arbitrary orthogonal matrices. For our considerations the simple version is sufficient [12].
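A minimal sketch of this construction of G from D^{(2)} (assuming NumPy; the function name is ours) uses the eigendecomposition of the symmetric matrix and takes the positive root of its eigenvalues:

```python
import numpy as np

def g_from_d2(D2):
    """One admissible choice of G from a symmetric, positive semi-definite
    diffusion matrix D2: the principal square root G = U diag(sqrt(lambda)) U^T,
    as described above (a sketch)."""
    lam, U = np.linalg.eigh(D2)                  # real eigenvalues, orthonormal U
    lam = np.clip(lam, 0.0, None)                # guard against tiny negative round-off
    return U @ np.diag(np.sqrt(lam)) @ U.T

# any G' = G R with R orthogonal reproduces the same D2 = G' G'^T
```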

FIG. 1: Stream plot of the system in Eq. (8) with no stochasticity (D^{(2)} ≡ 0), i.e. k_1 = k_2 = 0, where the arrows formally represent the drift vector of synthetic data extracted from direct simulation of Eq. (8). The solid line represents a partial trajectory. Trajectories diverge from the unstable fixed point r = 0 and converge to the limit cycle r = 1.


FIG. 2: Principal axes of the diffusion matrix. Field of (a) largest and (b) smallest eigenvectors for the diffusion matrix corresponding to Eq. (8), together with (c) the eigenvalues λ as a function of r and (inset) a close-up around λ ∼ 0 (see text). The largest eigenvalue is associated with the radial direction while the smallest one corresponds to the angular one. Here k_1 = 0.5, k_2 = 0.05, α = 0.7475 ≈ ⟨r²⟩, t_0 = 0, r_0 = 1.01 and θ_0 = 0. The analysis was performed from 10^7 data points extracted by integration of Eq. (8) with an integration step of Δt = 10^{-4}.

Since h and G are functions of the variables X and are numerically determined on an n_1 × ... × n_N mesh of points in phase space, one can always define at each mesh point the N eigenvalues and corresponding eigenvectors of the matrix G(X). This analysis provides information about the stochastic forcing acting on the system and was already applied to a two-dimensional sub-critical bifurcation [15] and to the analysis of human movements [16].

In general the eigenvalues indicate the amplitude of the stochastic force and the corresponding eigenvector indicates the direction toward which such force acts. Even more interesting features, however, can be extracted from the eigenvalues and eigenvectors.

To each eigenvector of the diffusion matrix we can associate one independent source of stochastic forcing Γ_i. In this scope, the eigenvectors can be regarded as defining principal axes for stochastic dynamics. For instance, the vector field aligned at each mesh point to the eigenvector associated to the smallest eigenvalue of the matrix G defines the paths in phase space along which the fluctuations are minimal. Furthermore, if the corresponding eigenvalues are very small compared to all the other ones at the respective mesh points, the corresponding stochastic forces can be neglected and the system has only N−1 independent stochastic forces. In this case the problem can be reduced by one stochastic variable through an appropriate transformation of variables, since the eigenvectors in one coordinate system are the same as in another one (see Append. A).

FIG. 3: Time series of (a) x(t) = r(t) cos θ(t) and (b) y(t) = r(t) sin θ(t), where r and θ are taken from the integration of Eq. (8) for the same conditions as in Fig. 2.

These are the central ideas of our study, which we next apply to an analytical example, namely the Hopf-bifurcation.

III. THE HOPF-BIFURCATION SYSTEM WITH STOCHASTIC FORCING: AN ANALYTICAL EXAMPLE

In what follows we consider the dynamical system

\frac{dr}{dt} = r(1 - r^2) + k_1 r \Gamma_1   (8a)
\frac{d\theta}{dt} = \alpha - r^2 + k_2 \Gamma_2   (8b)

describing the evolution of the radial and azimuthal coordinates of a particle moving in two-dimensional space, with α, k_1 and k_2 being constants. Γ_{1,2} are the stochastic forces, which are Gaussian distributed and δ-correlated. For k_1 = k_2 = 0 the system for α > 0 has an unstable fixed point at r = 0 and a stable limit cycle at r = 1, as sketched in Fig. 1. Equation (8b) uses α − r² instead of the usual 1 + r² [17] in order to be able to avoid the systematic increase of θ in time. For this reason we chose α = ⟨r²⟩.
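For illustration, a possible Euler-Maruyama integration of Eq. (8) could look as follows (a sketch under the noise convention of Eq. (5), with the parameter values quoted in the caption of Fig. 2; the function name and defaults are our choices, and the default number of steps is smaller than the 10^7 points used in the paper):

```python
import numpy as np

def simulate_hopf(n_steps=10**6, dt=1e-4, k1=0.5, k2=0.05, alpha=0.7475,
                  r0=1.01, theta0=0.0, seed=0):
    """Euler-Maruyama integration of Eq. (8).  With the convention of Eq. (5),
    <Gamma_i Gamma_j> = 2 delta_ij delta(t-t'), so the Wiener increments carry
    a factor sqrt(2 dt)."""
    rng = np.random.default_rng(seed)
    r, theta = np.empty(n_steps), np.empty(n_steps)
    r[0], theta[0] = r0, theta0
    for n in range(n_steps - 1):
        xi1, xi2 = rng.standard_normal(2)
        r[n+1] = r[n] + r[n]*(1.0 - r[n]**2)*dt + k1*r[n]*np.sqrt(2.0*dt)*xi1
        theta[n+1] = theta[n] + (alpha - r[n]**2)*dt + k2*np.sqrt(2.0*dt)*xi2
    return r, theta

# Cartesian series for the subsequent analysis: x = r*cos(theta), y = r*sin(theta)
```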

Introducing non-zero stochastic terms k_1, k_2 ≠ 0 for independent stochastic forces Γ_1 and Γ_2, one has diagonal diffusion matrices with two eigenvalues, (k_1 r)² and k_2², at each point (r, θ), corresponding to the radial and angular eigendirections respectively.

Since the eigenvectors do not change when changing the coordinate system (see Append. A), "mixing" the radial and angular coordinates by e.g. changing to Cartesian coordinates should yield the same principal axes. We therefore now repeat our analysis for the transformed variables x = r cos θ and y = r sin θ.

FIG. 4: Drift and diffusion functions for the system in Eq. (8) for the same conditions as in Fig. 2, obtained from analysis of the time series x = r cos θ and y = r sin θ. For each drift and diffusion function the bottom surface reflects the numerical results, while the corresponding fitted surface on top is shifted upwards for clarity: (a) D^{(2)}_{11}(x, y), (b) D^{(2)}_{12}(x, y), (c) D^{(2)}_{22}(x, y), (d) D^{(1)}_1(x, y), (e) D^{(1)}_2(x, y). The bar plots compare the coefficients that fit each function (dark bars) with the analytic, true ones (white bars) (see Tab. I).

Using Itô's formulation for the transformation from polar to Cartesian coordinates [18], the Langevin system can be defined through the drift and diffusion coefficients in the transformed coordinates, \bar{h}_i(x, y), \bar{g}_{ij}(x, y) (see Append. A):

\bar{h}_i = \left( \frac{h_1}{r} - g_{22}^2 \right) Y_i + (-1)^i\, h_2 \left( Y_1 \delta_{2i} + Y_2 \delta_{1i} \right)   (9a)
\bar{g}_{ij} = \frac{g_{1j}}{r} \left( Y_1 \delta_{i1} + Y_2 \delta_{i2} \right) + (-1)^i\, g_{2j} \left( Y_1 \delta_{2i} + Y_2 \delta_{1i} \right)   (9b)

with i, j = 1, 2 and (Y_1, Y_2) = (x, y). Thus, from G in the polar coordinate system, Eq. (9b) yields \bar{G}, for which the diffusion matrix \bar{D}^{(2)} = \bar{G}\bar{G}^T reads

\bar{D}^{(2)} = \begin{bmatrix} k_2^2 y^2 + k_1^2 x^2 & -k_2^2 x y + k_1^2 x y \\ -k_2^2 x y + k_1^2 x y & k_2^2 x^2 + k_1^2 y^2 \end{bmatrix} .   (10)

The eigenvalues and corresponding eigenvectors are λ_1 = k_1^2 (x^2 + y^2) with \mathbf{v}_1 = (x/\sqrt{x^2+y^2},\, y/\sqrt{x^2+y^2}) and λ_2 = k_2^2 (x^2 + y^2) with \mathbf{v}_2 = (-y/\sqrt{x^2+y^2},\, x/\sqrt{x^2+y^2}). In contrast to the eigendirections, the eigenvalues depend on the Jacobian of our transformation, since the nonlinear transformation from polar to Cartesian coordinates changes the metric.
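These expressions can be spot-checked numerically; the snippet below (our illustration, with an arbitrary test point and the noise amplitudes of Fig. 2) diagonalizes the matrix of Eq. (10) and compares the result with the radial and tangential eigendirections quoted above:

```python
import numpy as np

k1, k2, x, y = 0.5, 0.05, 0.3, 0.8          # noise levels of Fig. 2, arbitrary test point
r2 = x**2 + y**2
D2 = np.array([[k2**2*y**2 + k1**2*x**2, (k1**2 - k2**2)*x*y],
               [(k1**2 - k2**2)*x*y,     k2**2*x**2 + k1**2*y**2]])   # Eq. (10)
lam, vec = np.linalg.eigh(D2)               # eigenvalues in ascending order
assert np.allclose(lam, [k2**2*r2, k1**2*r2])                          # angular, radial
assert np.allclose(np.abs(vec[:, 1]), np.array([x, y])/np.sqrt(r2))    # radial direction
assert np.allclose(np.abs(vec[:, 0]), np.array([y, x])/np.sqrt(r2))    # tangential direction
```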


Kramers-Moyal coefficients for the basis functions

                      1        x        x²       x³       y        y²       y³       xy       x²y      xy²
D^(1)_1            −0.009    1.00     0.077    −1.01     0.820   −0.012    1.26     0.014    1.23     1.24
D^(1)_2            −0.034    0.761    0.133    −1.09     0.99     0.036   −1.22     0.101   −1.00    −1.21
D^(2)_11            0.005   −0.008    0.262    ∼0        0.001    0.013    ∼0       ∼0       ∼0       ∼0
D^(2)_12 = D^(2)_21  ∼0     −0.003    0.014    ∼0       −0.004   −0.011    ∼0       0.255    ∼0       ∼0
D^(2)_22            0.005    ∼0      −0.01     ∼0       −0.006    0.255    ∼0       ∼0       ∼0       ∼0

TABLE I: Coefficients of quadratic forms describing the surfaces in Fig. 4.

According to Append. A the eigenvalues in polar coordinates are \bar{\lambda}_i = \lambda_i / s_i^2 for i = 1, 2, with s_1^2 = (\partial x/\partial r)^2 + (\partial y/\partial r)^2 = 1 and s_2^2 = (\partial x/\partial \theta)^2 + (\partial y/\partial \theta)^2 = r^2. The eigenvalues of our system in Cartesian coordinates are shown as solid and dashed lines in Fig. 2, and a full derivation of the eigenvalues in different coordinate systems is given in Append. A for the general N-dimensional case.

Notice that, if we determine the eigenvalues in a Cartesian system, the metric is Euclidean, i.e. the eigenvalues, measured in the same units in both the x- and y-direction, directly characterize the diffusion in the principal directions. Nonlinear transformations of coordinates, such as the one from Cartesian to polar coordinates, however, generally change the metric. In such a system the direction of the maximal eigenvalue is not necessarily the direction with the highest diffusion. For example, in the Hopf-bifurcation in Eqs. (8), the maximal eigenvalue for r < k_2/k_1 is in the azimuthal direction, but the maximal diffusion is still in the radial direction.

Regarding x and y as measured variables, they define our Cartesian system and therefore λ_1 is our maximal eigenvalue and λ_2 is associated with the eigendirection showing minimal stochastic fluctuation.

We proceed to verify this result by numerically analyzing the time series x and y plotted in Fig. 3, following the procedure described in the previous section. The procedure is as follows. First, we compute the time series according to Eq. (8) in the coordinates r and θ. Then, we transform this time series to Cartesian coordinates x, y. In the new coordinates we calculate the drift vectors D^{(1)} and diffusion matrices D^{(2)}. Finally, having the matrices D^{(2)} at each mesh point, one easily computes the eigenvalues and eigenvectors, as shown in Fig. 2. The analytical results given by Eq. (10) are reproduced nicely: the stochastic contribution has two eigendirections, one in the radial direction and another in the angular one; the larger eigenvalue is associated with the radial direction. The deviations observed for the minimum eigenvalue occur due to differences in the quality of statistics in different regions of phase space (value of r): in particular, regions in phase space at intermediate values of r are less frequently realized in the data series than others, resulting in a reduced accuracy of the estimated eigenvalues in these regions.
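The last step of this procedure amounts to a pointwise eigendecomposition of the estimated diffusion matrices; a minimal sketch (assuming the mesh of matrices produced, for instance, by the hypothetical km_coefficients sketch given earlier) could read:

```python
import numpy as np

def principal_stochastic_axes(D2):
    """Eigendecomposition of the local diffusion matrices on the mesh
    (D2 has shape (nx, ny, 2, 2)); returns the eigenvalues in ascending order
    and the corresponding eigenvector fields.  The first eigenvector field then
    spans the directions of minimal stochastic forcing."""
    lam, vec = np.linalg.eigh(D2)        # batched over the leading mesh axes
    return lam, vec                      # lam[..., 0] <= lam[..., 1]

# e.g. with D1, D2 from km_coefficients(...):
# lam, vec = principal_stochastic_axes(D2)
# v_min = vec[..., :, 0]   # eigenvector of the smallest eigenvalue at each mesh point
```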

Figures 4a-c show the components of the diffusion matrix, namely D^{(2)}_{11}, D^{(2)}_{12} and D^{(2)}_{22}, as functions of both x and y. Notice that the diffusion matrix is symmetric (see Eq. (3)) and therefore D^{(2)}_{21} = D^{(2)}_{12}. The surfaces obtained numerically are fitted to a full quadratic polynomial in x and y through a least-squares procedure. The coefficients of these polynomial fits are also shown in Figs. 4a-c. The results of Figs. 4a-c comply with the analytical expression in Eq. (10). A similar analysis is done for the drift vector, i.e. for the functions D^{(1)}_1 and D^{(1)}_2, shown in Figs. 4d-e. Table I lists the values obtained for the coefficients of the surface fits in Fig. 4.

It is worth stressing that the aforementioned procedure can likewise be applied to measured multi-dimensional time series in cases where no additional information on the underlying dynamics is available. In this case the corresponding fields of eigenvectors indicate the directions with largest and smallest stochastic contributions. Using this information, a transform of variables to a coordinate system aligned with the direction of the smallest stochastic contribution can be applied, which in the present case would be the tangential direction. In the representation of these new coordinates, the stochastic contribution will then be reduced.

In the previous case, the two eigenvectors for diffusion were introduced tangentially and perpendicularly to the limit cycle. If the principal axes for the stochastic contribution are not aligned with the trajectories of the deterministic part of the system, one might suspect an interplay between the stochastic contribution and the drift of the system.

In order to demonstrate the wider applicability of our method, we next consider such a case, with the major axis aligned with the y-direction. We therefore investigate the system

\frac{dr}{dt} = r(1 - r^2) + K_1 r \cos\theta\, \Gamma_1 + K_2 r \sin\theta\, \Gamma_2   (11a)
\frac{d\theta}{dt} = (\alpha - r^2) - K_1 \sin\theta\, \Gamma_1 + K_2 \cos\theta\, \Gamma_2 .   (11b)

In Cartesian coordinates the dynamics are described by

\frac{dx}{dt} = h_1 + K_1 r \Gamma_1   (12a)
\frac{dy}{dt} = h_2 + K_2 r \Gamma_2 ,   (12b)

with

h_1 = x\left(1 - (x^2+y^2)\right) - y\left(\alpha - (x^2+y^2)\right) + \frac{x}{x^2+y^2}\left( (K_1^2 - 2K_2^2)\, y^2 - K_2^2\, x^2 \right)   (13a)
h_2 = y\left(1 - (x^2+y^2)\right) + x\left(\alpha - (x^2+y^2)\right) + \frac{y}{x^2+y^2}\left( (K_2^2 - 2K_1^2)\, x^2 - K_1^2\, y^2 \right) .   (13b)

FIG. 5: Hopf-system with strong stochastic contribution along the y-direction. Drift functions are (a) D^{(1)}_1(x, y) and (b) D^{(1)}_2(x, y). (c) Probability density function of (x, y), showing the most visited regions in phase space. Diffusion functions are (d) D^{(2)}_{11}(x, y), (e) D^{(2)}_{12}(x, y) = D^{(2)}_{21}(x, y) and (f) D^{(2)}_{22}(x, y). Results were obtained by analyzing the time series x = r cos θ and y = r sin θ according to Eq. (11), with the same upward shifting for clarity as in Fig. 4. The probability density function in (c) explains the deviations observed for the diffusion functions (see text).

Here we chose K_1 = 0.05 and K_2 = 0.5. In other words, the impact of stochastic fluctuations is large in the y-direction and small along the x-axis. Notice that the determinant of the diffusion matrix is not preserved, since the Jacobian of the transformation is not the identity matrix (see Append. A).

As already done in the previous example, we integrate system (11) numerically and analyze the resulting data series of x and y values. As can be seen from Fig. 5, the drift functions h_1 and h_2 are properly derived (see Figs. 5a and 5b), as well as the terms of the diffusion matrices D^{(2)} (Figs. 5d-f). The deviations observed for the small D^{(2)}_{11} at particular regions of phase space (Fig. 5d) are due to the lack of accurate statistics at those regions (see the PDF in Fig. 5c).

Interestingly, in both cases presented above, our analysis estimates the diffusion matrix better than the corresponding drift vector, contrary to what is known from other situations [2]. The poor quality of the drift estimate is probably due to the small time increment used for the estimate, which spoils particularly the result for the drift. See Ref. [19] for details.

As can be seen in Figs. 6a and 6b, the eigendirections are properly derived, as well as the two eigenvalues, plotted in Figs. 6c and 6d as functions of r and θ, respectively.

With these two examples we have first illustrated that our numerical approach succeeds in estimating the drift and diffusion functions of a two-dimensional stochastic system. A straightforward computation of the eigenvectors and eigenvalues of the diffusion matrices then enables us to ascertain a set of principal stochastic directions. We would like to note that the method described here bears some resemblance to the well-known principal component analysis [20]. In contrast to principal component analysis performed on the distribution of the measured data, the method we propose focuses on the stochastic fluctuations in time. The properties of these fluctuations in general depend on the position X, resulting in vector fields of the principal axes of the stochastic contributions to the dynamics of the system under consideration. The two methods seem however related and should provide complementary insights when applied to specific sets of empirical data. Such matters are beyond the scope of this paper and will be addressed elsewhere.

We also emphasize that the minimal eigenvalue of the diffusion matrix does not necessarily point towards the direction in which the system as a whole can be expected to evolve. It rather reflects the direction of minimal stochastic forcing only. For the prediction of the system's evolution, in general, time propagators need to be taken into account, which involve both the stochastic (diffusion) and the deterministic (drift) parts of the dynamics [2]. Nevertheless, being able to ascertain the directions in phase space where fluctuations are weaker enables one to choose a transformation such that part of the new variables have small stochastic terms, reducing the number of variables which are affected by stochastic forces. Further, it is also of physical interest to see in which variables noise is acting, for example to which variable a thermal bath is physically connected, and it may be of technical interest to better detect a noise source, for example in an electric circuit [21].

FIG. 6: Results of the analysis of the diffusion matrices plotted in Fig. 5, obtained from a system with an increased level of dynamical noise on the y component (second example). The individual panels depict the vector fields of eigenvectors for (a) the minimum and (b) the maximum eigenvalue, normalized to the respective eigenvalues, and the dependence of the eigenvalues on both (c) r and (d) θ.

IV. DISCUSSION AND CONCLUSIONS

In this paper we introduce the concept of eigendirections for the stochastic dynamics in systems of arbitrary dimension. The procedure builds on the modeling of complex systems by means of drift and diffusion functions that specify a system of coupled Langevin equations. Estimates for these functions can be obtained directly from measurements on the systems, without prior knowledge of their dynamics, following the approach described in Refs. [1, 2].

In Langevin systems the stochastic forcing is composed of independent Gaussian δ-correlated stochastic fluctuations. The way these fluctuations affect the system depends both on the diffusion matrix and on the coordinate system chosen.

Whereas in former publications much attention has been paid to imperfections of the Langevin process, like measurement noise or correlated noise and finite-size sampling [22-25], here we introduced a method which allows us to determine the eigendirections along which each stochastic force acts. Each direction of forcing is defined through the eigenvector, with the corresponding eigenvalue accounting for its amplitude. The set of eigenvectors does not depend on the choice of the coordinate system and is therefore characteristic of the system. Further, for the particular case where a number k of eigenvalues are negligible in comparison with the others at each mesh point, the number of stochastic variables can be reduced.


Even in cases where the number of stochastic variables cannot be reduced, the eigenvector associated with the lowest eigenvalue at each point in phase space indicates the path with minimal stochastic forcing. In any case a transform to new coordinates in the directions of minimum stochastic forcing can be performed, increasing the relative amplitude of the deterministic components and the predictability of the corresponding variable.

The method was successfully applied to a two-dimensional system exhibiting a Hopf-bifurcation. For the Hopf-bifurcation the diffusion matrix is better estimated through our analysis than the corresponding drift vector, contrary to what is known in many other situations [2].

Compared to previous methods for minimization of stochasticity introduced in Refs. [3, 26], our approach has the advantage of not requiring a parametrized Ansatz for quantifying the respective stochastic contributions to the system. Moreover, it should be applicable even in the case where the data sets are contaminated with measurement noise [6].

Acknowledgements

The authors thank Bernd Lehle for useful discussions. VVV, FR (SFRH/BPD/65427/2009) and PGL (Ciência 2007) thank Fundação para a Ciência e a Tecnologia (FCT) for financial support. All authors thank DAAD and FCT for financial support through the bilateral cooperation DREBM/DAAD/03/2009.

Appendix A: The diffusion matrix in an arbitrary coordinate system

Consider a transformation of variables X = {X_i} → \bar{X} = {\bar{X}_i} with i = 1, ..., N, which is given by a two-times continuously differentiable deterministic vector function F,

\bar{\mathbf{X}} = \mathbf{F}(\mathbf{X}, t) .   (A1)

Using Itô's formula [12, 13], a truncated form of the Itô-Taylor expansion, the Langevin equations for the new variables take the form

d\bar{X}_i = \frac{\partial F_i}{\partial t}\, dt + \sum_{k=1}^{N} \frac{\partial F_i}{\partial X_k}\, dX_k + \frac{1}{2} \sum_{k=1}^{N} \sum_{l=1}^{N} dX_k\, \frac{\partial^2 F_i}{\partial X_k \partial X_l}\, dX_l + \ldots .   (A2)

Eq. (4) can be rewritten as

dX_i = D^{(1)}_i(\mathbf{X})\, dt + \sum_{j=1}^{N} g_{ij}(\mathbf{X})\, \Gamma_j(t)\, \sqrt{dt} .   (A3)

Inserting this expression into Eq. (A2), retaining all terms in the expansion up to order dt while neglecting terms of order (dt)^{3/2} or higher, and taking advantage of the statistics of the stochastic forcing, Eq. (5), one obtains for the third expression on the r.h.s.

\frac{1}{2} \sum_{k=1}^{N} \sum_{l=1}^{N} \left( D^{(1)}_k dt + \sum_{j=1}^{N} g_{kj} \Gamma_j \sqrt{dt} \right) \frac{\partial^2 F_i}{\partial X_k \partial X_l} \left( D^{(1)}_l dt + \sum_{m=1}^{N} g_{lm} \Gamma_m \sqrt{dt} \right) = \frac{1}{2} \sum_{k=1}^{N} \sum_{l=1}^{N} \sum_{j,m=1}^{N} 2\, g_{kj}\, g_{lm}\, \frac{\partial^2 F_i}{\partial X_k \partial X_l}\, \delta_{jm}\, dt   (A4)

Thus, the transformed Langevin equation has the form

d\bar{X}_i = \frac{\partial F_i}{\partial t}\, dt + \sum_{k=1}^{N} \frac{\partial F_i}{\partial X_k} \left( D^{(1)}_k dt + \sum_{j=1}^{N} g_{kj} \Gamma_j(t) \sqrt{dt} \right) + \sum_{k=1}^{N} \sum_{l=1}^{N} \sum_{j=1}^{N} g_{lj}\, g_{kj}\, \frac{\partial^2 F_i}{\partial X_k \partial X_l}\, dt + \mathcal{O}(dt)^{3/2}   (A5)

Restricting to stationary processes (\partial F_i/\partial t = 0) and using the notation

\frac{d\bar{X}_i}{dt} = \bar{D}^{(1)}_i(\bar{\mathbf{X}}) + \sum_{j=1}^{N} \bar{g}_{ij}(\bar{\mathbf{X}})\, \Gamma_j(t) ,   (A6)

the transformed drift function reads

\bar{D}^{(1)}_i = \sum_{k=1}^{N} \left( D^{(1)}_k \frac{\partial F_i}{\partial X_k} + \sum_{l=1}^{N} \sum_{j=1}^{N} g_{lj}\, g_{kj}\, \frac{\partial^2 F_i}{\partial X_k \partial X_l} \right) .   (A7)

The components of the matrix G transform as

\bar{g}_{ij} = \sum_{k=1}^{N} g_{kj}\, \frac{\partial F_i}{\partial X_k} \quad \Leftrightarrow \quad \bar{\mathbf{G}} = \mathbf{J}\mathbf{G} ,   (A8)

where J is the Jacobian of our transformation from the coordinates X_i (vector basis e_i) to \bar{X}_i (vector basis \bar{e}_i).

From elementary algebra it is known that a vector v can be written in both coordinate systems:

\mathbf{v} = \sum_{j} \bar{v}_j\, \bar{\mathbf{e}}_j = \sum_{j} \bar{v}_j \sum_{k} J_{jk}\, \mathbf{e}_k = \sum_{k} \Big( \sum_{j} J_{jk}\, \bar{v}_j \Big) \mathbf{e}_k = \sum_{k} \Big( \sum_{j} J^{T}_{kj}\, \bar{v}_j \Big) \mathbf{e}_k \equiv \sum_{k} v_k\, \mathbf{e}_k .   (A9)

Thus, the matrix \bar{U} incorporating as columns the eigenvectors \bar{u}_k of the matrix \bar{G}, with coordinates in the basis \bar{e}_i, can be written as U = J^T \bar{U}, with U = [u_1\ u_2\ \ldots\ u_N] fulfilling U^{-1} = U^T. Thus, from Eq. (A8) one obtains

\bar{U}^{T} \bar{G}\bar{G}^{T} \bar{U} = U^{T} G G^{T} U .   (A10)

So, the eigenvectors of \bar{D}^{(2)} and D^{(2)} are the same, apart from the basis in which they are considered, and in the original (Cartesian) coordinate system X the transformation to principal axes is given by Eq. (A10), i.e.

U^{T} D^{(2)} U = \mathrm{diag}[\lambda_1\ \lambda_2\ \ldots\ \lambda_N] .   (A11)

At regular points the transformation in Eq. (A1) can be inverted,

\mathbf{X} = \mathbf{F}^{-1}(\bar{\mathbf{X}}) =: \mathbf{f}(\bar{\mathbf{X}}) .   (A12)

By definition f(\bar{X}) is chosen such that the normalized eigenvectors are given by

\mathbf{u}_k = \frac{1}{s_k}\, \frac{\partial \mathbf{f}}{\partial \bar{X}_k} \quad \text{with} \quad s_k = \left\| \frac{\partial \mathbf{f}}{\partial \bar{X}_k} \right\| .   (A13)

The Jacobian of f(\bar{X}), the inverse of J, can then be written as

(J^{-1})_{ik}(\bar{\mathbf{X}}) = \frac{\partial f_i}{\partial \bar{X}_k} ,   (A14)

or in a more convenient form

J^{-1} = [\mathbf{u}_1\ \mathbf{u}_2\ \ldots\ \mathbf{u}_N]\, \mathrm{diag}[s_1\ s_2\ \ldots\ s_N] =: U s .   (A15)

The diagonal matrix s describes the metric of the new system. The Jacobian of the transformation in Eq. (A1) is then given by the inverse relation

\mathbf{J} = s^{-1} U^{T} .   (A16)

Introducing this relation into Eq. (A8) yields

\bar{\mathbf{G}} = \mathbf{J}\mathbf{G} = s^{-1} U^{T} \mathbf{G} ,   (A17)

i.e. the transformed diffusion matrix reads

\bar{D}^{(2)} = \bar{\mathbf{G}}\bar{\mathbf{G}}^{T} = s^{-1} U^{T} \mathbf{G}\mathbf{G}^{T} U s^{-1} .   (A18)

Inserting Eqs. (A11) and (A15) finally leads to the diagonal matrix

\bar{D}^{(2)} = \mathrm{diag}\left[ \frac{\lambda_1}{s_1^2}\ \ \frac{\lambda_2}{s_2^2}\ \ldots\ \frac{\lambda_N}{s_N^2} \right] .   (A19)
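For the polar/Cartesian pair used in Sec. III, Eqs. (A8) and (A19) can be spot-checked numerically; the following sketch (our illustration, with an arbitrary test point and the noise amplitudes of the first example) builds the transformed matrix \bar{G} = JG, diagonalizes the resulting diffusion matrix and rescales the eigenvalues by the metric factors s_1 = 1 and s_2 = r:

```python
import numpy as np

k1, k2, r, theta = 0.5, 0.05, 1.3, 0.7       # first-example noise levels, arbitrary point
# G in polar coordinates (r, theta) for Eq. (8): independent radial/angular forcing
G = np.diag([k1 * r, k2])
# Jacobian J of the map (r, theta) -> (x, y) = (r cos(theta), r sin(theta))
J = np.array([[np.cos(theta), -r*np.sin(theta)],
              [np.sin(theta),  r*np.cos(theta)]])
Gc = J @ G                                   # Eq. (A8): G in Cartesian coordinates
D2c = Gc @ Gc.T                              # Cartesian diffusion matrix
lam_cart = np.linalg.eigvalsh(D2c)           # ascending: angular (k2^2 r^2), radial (k1^2 r^2)
s2 = np.array([r**2, 1.0])                   # metric factors in the matching order (k2 << k1)
lam_back = lam_cart / s2                     # Eq. (A19): lambda_i / s_i^2
assert np.allclose(np.sort(lam_back), np.sort(np.array([k1**2 * r**2, k2**2])))
```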

FIG. 7: Interchanging between coordinate systems. Arrows indicate the eigenvectors of the diffusion matrix (see Eq. (A19)) at each point of the mesh from which drift and diffusion coefficients are derived. Here the two-dimensional case is illustrated. Extension to an arbitrary number of variables is straightforward.

In practice, we have no access to the transformation in Eq. (A1). Instead we have a grid (numerical) representation in the X coordinate system and search for a transformation of the form of Eq. (A1) such that one of the new variables, say \bar{X}_1, is defined by the field of eigenvectors u_1 corresponding to the maximal eigenvalue λ_1. Each of the other orthogonal directions is then consequently defined by one of the other (orthogonal) eigenvectors u_j with j = 2, ..., N. Figure 7 illustrates this for the two-dimensional case.

To obtain the coordinates of each point P in this grid in the new coordinates \bar{X}, one considers the transformation (A1) together with the inverted transformation (A12) and solves the system of PDEs in (A13). The problem lies in the coupling introduced by the factors s_k and in the additional complication of finding the correct boundary conditions, which turns the solution of the PDEs into a complicated problem, even numerically, although in 2D the s_k can be neglected as we are only interested in finding the directions ∂f_i/∂\bar{X}_k. In most cases, however, it should be possible to guess a suitable transform from visual inspection of the fields of eigenvectors. From one point to the next in the mesh, the eigenvectors are sorted according to continuity arguments. Further, depending on the obtained fields of eigenvectors, a scaled polar form or hyperbolic coordinates may be considered, where the parameters of said transform can then be fitted to the vector fields. Such particular cases depend on the specific data set at hand and will be addressed elsewhere.


[1] R. Friedrich and J. Peinke, Phys. Rev. Lett. 78, 863 (1997).
[2] R. Friedrich, J. Peinke and M.R.R. Tabar, Complexity in the view of stochastic processes, in Springer Encyclopedia of Complexity and Systems Science (Springer, Berlin, 2008).
[3] P.G. Lind, A. Mora, J.A.C. Gallas and M. Haase, Phys. Rev. E 72, 056706 (2005).
[4] R. Friedrich, J. Peinke and Ch. Renner, Phys. Rev. Lett. 84, 5224 (2000).
[5] F. Ghasemi, M. Sahimi, J. Peinke, R. Friedrich, G.R. Jafari and M.M.R. Tabar, Phys. Rev. E 75, 060102 (2007).
[6] F. Böttcher, J. Peinke, D. Kleinhans, R. Friedrich, P.G. Lind and M. Haase, Phys. Rev. Lett. 97, 090603 (2006).
[7] P.G. Lind, M. Haase, F. Böttcher, J. Peinke, D. Kleinhans and R. Friedrich, Phys. Rev. E 81, 041125 (2010).
[8] J. Carvalho, F. Raischel, M. Haase and P.G. Lind, J. Phys. Conf. Ser. 285, 012007 (2011).
[9] D. Kleinhans, R. Friedrich and A. Nawroth, Phys. Lett. A 346, 42-46 (2005).
[10] G. Gottschall, M. Wächter and J. Peinke, New J. Phys. 10(8), 083034 (2008).
[11] S.J. Lade, Phys. Lett. A 373, 3705-3709 (2009).
[12] H. Risken, The Fokker-Planck Equation (Springer, Heidelberg, 1984).
[13] C.W. Gardiner, Handbook of Stochastic Methods (Springer, Germany, 1997).
[14] D. Lamouroux and K. Lehnertz, Phys. Lett. A 373, 3507-3512 (2009).
[15] J. Gradisek, R. Friedrich, E. Govekar and I. Grabec, Meccanica 38, 33 (2003).
[16] A.M. van Mourik, A. Daffertshofer and P.J. Beek, Biological Cybernetics 94, 233 (2006).
[17] J. Argyris, G. Faust, M. Haase and R. Friedrich, Die Erforschung des Chaos (Springer, Berlin, 2010).
[18] N.G. van Kampen, Stochastic Processes in Physics and Chemistry (North Holland, 1992).
[19] D. Kleinhans and R. Friedrich, in Wind Energy: Proceedings of the Euromech Colloquium, Eds. J. Peinke, P. Schaumann and S. Barth (Springer, Berlin, 2007), pp. 129-133.
[20] H. Kantz and T. Schreiber, Nonlinear Time Series Analysis (Cambridge Univ. Press, Cambridge, 1997).
[21] R. Friedrich, S. Siegert, J. Peinke, St. Lück, M. Siefert, M. Lindemann, J. Raethjen, G. Deuschl and G. Pfister, Phys. Lett. A 271, 217-222 (2000).
[22] M. Ragwitz and H. Kantz, Phys. Rev. Lett. 87, 254501 (2001).
[23] R. Friedrich, C. Renner, M. Siefert and J. Peinke, Phys. Rev. Lett. 89, 149401 (2002).
[24] D. Kleinhans and R. Friedrich, Phys. Lett. A 368, 80 (2007).
[25] C. Anteneodo and R. Riera, Phys. Rev. E 80, 031103 (2009).
[26] P.G. Lind, A. Mora, M. Haase and J.A.C. Gallas, Int. J. Bif. Chaos 17(10), 3461-3466 (2007).
[27] In practice binning or kernel-based approaches with a certain threshold are applied in order to evaluate the condition Y(t) = X. See e.g. Ref. [14] for details.