Dynamical Systems and Linear Algebra
January 10, 2006
Fritz Colonius
Institut fur Mathematik
Universitat Augsburg
Augsburg, Germany
Wolfgang Kliemann
Department of Mathematics
Iowa State University
Ames, IA 50011
Linear algebra plays a key role in the theory of dynamical systems, and concepts from
dynamical systems allow the study, characterization and generalization of many objects in linear
algebra, such as similarity of matrices, eigenvalues, and (generalized) eigenspaces. The most basic
form of this interplay can be seen in the fact that a matrix A gives rise to a continuous time dynamical system via
the linear ordinary differential equation ẋ = Ax, or to a discrete time dynamical system via the iteration
x_{n+1} = Ax_n. The properties of the solutions are intimately related to the properties of the matrix
A. Matrices also define nonlinear systems on smooth manifolds, such as the sphere Sd−1 in Rd,
the Grassmann manifolds, or on classical (matrix) Lie groups. Again, the behavior of such systems
is closely related to matrices and their properties. And the behavior of nonlinear systems, e.g. of
differential equations ẏ = f(y) in R^d with a fixed point y0 ∈ R^d, can be described locally around y0
via the linear differential equation ẋ = D_y f(y0)x.
Since A.M. Lyapunov’s thesis in 1892 it has been an intriguing problem how to construct an
appropriate linear algebra for time varying systems. Note that, e.g., for stability of the solutions of
ẋ = A(t)x it is not sufficient that for all t ∈ R the matrices A(t) have only eigenvalues with negative
real part (see [Hah67], Chapter 62). Of course, Floquet theory (see [Flo83]) gives an elegant solution
for the periodic case, but it is not immediately clear how to build a linear algebra around Lyapunov’s
‘order numbers’ (now called Lyapunov exponents). The multiplicative ergodic theorem of Oseledets
[Ose68] resolves the issue for measurable linear systems with stationary time dependencies, and the
Morse spectrum together with Selgrade’s theorem [Sel75] clarifies the situation for continuous linear
systems with chain transitive time dependencies.
This section provides a first introduction to the interplay between linear algebra and anal-
ysis/topology in continuous time. Subsection 1 recalls facts about d-dimensional linear differential
equations ẋ = Ax, emphasizing eigenvalues and (generalized) eigenspaces. Subsection 2 studies
solutions in Euclidean space R^d from the point of view of topological equivalence and conjugacy
with related characterizations of the matrix A. Subsection 3 presents, in a fairly general set-up,
the concepts of chain recurrence and Morse decompositions for dynamical systems. These ideas
are then applied in Subsection 4 to nonlinear systems on Grassmannian and flag manifolds induced
by a single matrix A, with emphasis on characterizations of the matrix A from this point of view.
Subsection 5 introduces linear skew product flows as a way to model time varying linear systems
ẋ = A(t)x with, e.g., periodic, measurable ergodic, and continuous chain transitive time dependencies.
The following Subsections 6, 7, and 8 develop generalizations of (real parts of) eigenvalues and
eigenspaces as a starting point for a linear algebra for classes of time varying linear systems, namely
periodic, random, and robust systems. (For the corresponding generalization of the imaginary parts
of eigenvalues see, e.g., [Arn98] for the measurable ergodic case and [CFJ06] for the continuous, chain
transitive case.) Subsection 9 introduces some basic ideas to study genuinely nonlinear systems via
linearization, emphasizing invariant manifolds and Grobman-Hartman type results that compare
nonlinear behavior locally to the behavior of associated linear systems.
Notation: In this section the set of d × d real matrices is denoted by gl(d, R) rather than R^{d×d}.
1 Linear Differential Equations
Linear differential equations can be solved explicitly if one knows the eigenvalues and a basis of
eigenvectors (and generalized eigenvectors, if necessary). The key idea is that of the Jordan form
of a matrix. The real parts of the eigenvalues determine the exponential behavior of the solutions,
described by the Lyapunov exponents and the corresponding Lyapunov subspaces.
For information on matrix functions, including the matrix exponential, see §3.1. For information
on the Jordan canonical form, see §2.1. Systems of first order linear differential equations are also
discussed in §12.1.
Definitions
For a matrix A ∈ gl(d, R) the exponential e^A ∈ GL(d, R) is defined by e^A = I + ∑_{n=1}^{∞} (1/n!) A^n,
where I ∈ gl(d, R) is the identity matrix.
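As a numerical sanity check of this series definition (a sketch, not part of the text; the matrix below is an arbitrary illustration), one can compare partial sums of the series against scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative matrix (an arbitrary choice for this sketch).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def exp_series(A, terms=30):
    # Partial sums of e^A = I + sum_{n>=1} A^n / n!
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n          # now term = A^n / n!
        result = result + term
    return result

series_val = exp_series(A)
library_val = expm(A)                # scipy's matrix exponential
err = np.max(np.abs(series_val - library_val))
```

The series converges for every matrix, so a moderate number of terms already matches the library value to machine precision here.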
A linear differential equation (with constant coefficients) is given by a matrix A ∈ gl(d, R) via
ẋ(t) = Ax(t), where ẋ denotes differentiation with respect to t. Any function x : R −→ R^d such
that ẋ(t) = Ax(t) for all t ∈ R is called a solution of ẋ = Ax.
The initial value problem for a linear differential equation ẋ = Ax consists in finding, for a given
initial value x0 ∈ R^d, a solution x(·,x0) that satisfies x(0,x0) = x0.
The distinct (complex) eigenvalues of A ∈ gl(d, R) will be denoted µ1, . . . , µr. (For definitions
and more information about eigenvalues, eigenvectors, and eigenspaces, see §1.4.3. For information
about generalized eigenspaces, see §2.1.) The real version of the generalized eigenspace is denoted
by E(A, µk) ⊂ Rd or simply Ek for k = 1, ..., r ≤ d.
The real Jordan form of a matrix A ∈ gl(d, R) is denoted by J^R_A. Note that for any matrix A
there is a matrix T ∈ GL(d, R) such that A = T^{-1} J^R_A T.
Let x(·,x0) be a solution of the linear differential equation ẋ = Ax. Its Lyapunov exponent for
x0 ≠ 0 is defined as λ(x0) = lim sup_{t→∞} (1/t) log ‖x(t,x0)‖, where log denotes the natural logarithm and
‖ · ‖ is any norm in R^d.
Let µk = λk + iνk, k = 1, ..., r, be the distinct eigenvalues of A ∈ gl(d, R). We order the distinct
real parts of the eigenvalues as λ1 < ... < λl, 1 ≤ l ≤ r ≤ d, and define the Lyapunov space of λj
as L(λj) = ⊕ Ek, where the direct sum is taken over all generalized real eigenspaces associated to
eigenvalues with real part equal to λj. Note that ⊕_{j=1}^{l} L(λj) = R^d.
The stable, center, and unstable subspaces associated with the matrix A ∈ gl(d, R) are defined
as L− = ⊕{L(λj) : λj < 0}, L0 = ⊕{L(λj) : λj = 0}, and L+ = ⊕{L(λj) : λj > 0}, respectively.
The zero solution x(t, 0) ≡ 0 is called exponentially stable if there exist a neighborhood U(0)
and constants a, b > 0 such that ‖x(t,x0)‖ ≤ a‖x0‖e^{−bt} for all t ≥ 0 and x0 ∈ U(0).
Facts
Literature: [Ama90], [HSD04].
1. For each A ∈ gl(d, R) the solutions of ẋ = Ax form a d-dimensional vector space sol(A) ⊂
C∞(R, R^d) over R, where C∞(R, R^d) = {f : R −→ R^d, f is infinitely often differentiable}.
Note that the solutions of ẋ = Ax are even real analytic.
2. For each initial value problem given by A ∈ gl(d, R) and x0 ∈ R^d, the solution x(·,x0) is
unique and given by x(t,x0) = e^{At}x0.
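Fact 2 is easy to verify numerically: the closed-form solution e^{At}x0 can be compared with an independent numerical integration of ẋ = Ax (a sketch; the matrix, initial value, and time horizon are arbitrary illustrations):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Illustrative data (arbitrary choices for this sketch).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t1 = 2.0

# Closed-form solution x(t1, x0) = e^{A t1} x0.
closed_form = expm(A * t1) @ x0

# Independent check: numerically integrate x' = Ax from 0 to t1.
sol = solve_ivp(lambda t, x: A @ x, (0.0, t1), x0, rtol=1e-10, atol=1e-12)
numeric = sol.y[:, -1]
err = np.max(np.abs(closed_form - numeric))
```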
3. Let v1, ..., vd ∈ R^d be a basis of R^d; then the functions x(·,v1), ..., x(·,vd) form a basis of the
solution space sol(A). The matrix function X(·) := [x(·,v1), ..., x(·,vd)] is called a fundamental
matrix of ẋ = Ax, and it satisfies Ẋ(t) = AX(t).
4. Let A ∈ gl(d, R) with distinct eigenvalues µ1, ..., µr ∈ C and corresponding multiplicities nk =
α(µk), k = 1, ..., r. If Ek are the corresponding generalized real eigenspaces, then dim Ek = nk
and ⊕_{k=1}^{r} Ek = R^d, i.e. every matrix has a set of generalized real eigenvectors that form a
basis of R^d.
5. If A = T^{-1} J^R_A T, then e^{At} = T^{-1} e^{J^R_A t} T, i.e. for the computation of exponentials of matrices it
is sufficient to know the exponentials of Jordan form matrices.
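A minimal numerical illustration of Fact 5, using a diagonalizable matrix so that the (real Jordan form) J is diagonal (T and J below are arbitrary choices for the sketch):

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary invertible T and diagonal J for this illustration.
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])
J = np.diag([-1.0, 2.0])
A = np.linalg.inv(T) @ J @ T               # A = T^{-1} J T

t = 0.7
lhs = expm(A * t)                          # e^{At}
rhs = np.linalg.inv(T) @ expm(J * t) @ T   # T^{-1} e^{Jt} T
err = np.max(np.abs(lhs - rhs))
```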
6. Let v1, ..., vd be a basis of generalized real eigenvectors of A. If x0 = ∑_{i=1}^{d} αi vi, then
x(t,x0) = ∑_{i=1}^{d} αi x(t,vi) for all t ∈ R. This reduces the computation of solutions of ẋ = Ax
to the computation of solutions for Jordan blocks; see the examples below or [HSD04, Chapter
5] for a discussion of this topic.
7. Each generalized real eigenspace Ek is invariant for the linear differential equation ẋ = Ax,
i.e. for x0 ∈ Ek it holds that x(t,x0) ∈ Ek for all t ∈ R.
8. The Lyapunov exponent λ(x0) of a solution x(·,x0) (with x0 ≠ 0) satisfies
λ(x0) = lim_{t→±∞} (1/t) log ‖x(t,x0)‖ = λj if and only if x0 ∈ L(λj). Hence, associated to a matrix
A ∈ gl(d, R) are exactly l Lyapunov exponents, the distinct real parts of the eigenvalues of A.
9. The following are equivalent:
(a) The zero solution x(t, 0) ≡ 0 of the differential equation ẋ = Ax is asymptotically stable.
(b) The zero solution is exponentially stable.
(c) All Lyapunov exponents are negative.
(d) L− = R^d.
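A numerical illustration of the equivalences in Fact 9 (a sketch; the matrix is an arbitrary choice with negative eigenvalue real parts):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative matrix with eigenvalues -1 and -2 (chosen for this sketch);
# the off-diagonal 5.0 produces a transient before the decay sets in.
A = np.array([[-1.0, 5.0],
              [0.0, -2.0]])

# (c): all Lyapunov exponents, i.e. eigenvalue real parts, are negative.
max_real_part = max(np.linalg.eigvals(A).real)

# (a)/(b): the operator norm of e^{At} decays along increasing times.
norms = [np.linalg.norm(expm(A * t), 2) for t in (0.0, 5.0, 10.0, 20.0)]
```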
Examples
1. Let A = diag(a1, ..., ad) be a diagonal matrix; then the solution of the linear differential
equation ẋ = Ax with initial value x0 ∈ R^d is given by x(t,x0) = e^{At}x0 = diag(e^{a1 t}, ..., e^{ad t}) x0.
2. Let e1 = (1, 0, ..., 0)^T, ..., ed = (0, 0, ..., 1)^T be the standard basis of R^d; then x(·, e1), ..., x(·, ed)
is a basis of the solution space sol(A).
3. Let A = diag(a1, ..., ad) be a diagonal matrix. Then the standard basis e1, ..., ed of Rd
consists of eigenvectors of A.
4. Let A ∈ gl(d, R) be diagonalizable, i.e. there exist a transformation matrix T ∈ GL(d, R) and
a diagonal matrix D ∈ gl(d, R) with A = T^{-1}DT; then the solution of the linear differential
equation ẋ = Ax with initial value x0 ∈ R^d is given by x(t,x0) = T^{-1}e^{Dt}Tx0, where e^{Dt} is
given in Example 1.
5. Let B = [λ, −ν; ν, λ] be the real Jordan block associated with a complex eigenvalue µ = λ + iν
of the matrix A ∈ gl(d, R). Let y0 ∈ E(A, µ), the real eigenspace of µ. Then the solution
y(t,y0) of ẏ = By is given by y(t,y0) = e^{λt} [cos νt, −sin νt; sin νt, cos νt] y0. According to Fact 6 this
is also the E(A, µ)-component of the solutions of ẋ = J^R_A x.
6. Let B be a Jordan block of dimension n associated with the real eigenvalue µ of a matrix
A ∈ gl(d, R), i.e.

B = [µ, 1, 0, ..., 0; 0, µ, 1, ..., 0; ...; 0, ..., 0, µ, 1; 0, ..., 0, µ].

Then e^{Bt} = e^{µt} N(t), where N(t) is the upper triangular matrix with entries t^k/k! on the
k-th superdiagonal:

N(t) = [1, t, t²/2!, ..., t^{n−1}/(n−1)!; 0, 1, t, ..., t^{n−2}/(n−2)!; ...; 0, ..., 1, t; 0, ..., 0, 1].

In other words, for y0 = [y1, ..., yn]^T ∈ E(A, µ) the j-th component of the solution of ẏ = By
reads y_j(t,y0) = e^{µt} ∑_{k=j}^{n} (t^{k−j}/(k−j)!) y_k. According to Fact 6 this is also the E(A, µ)-component
of e^{J^R_A t}.
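The formula for e^{Bt} on a Jordan block can be checked numerically (a sketch; the eigenvalue µ and block size n are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

mu, n = -1.0, 4                                   # illustrative eigenvalue and block size
B = mu * np.eye(n) + np.diag(np.ones(n - 1), 1)   # Jordan block: µ diagonal, 1 superdiagonal
t = 0.8

# e^{Bt} = e^{µt} * [t^{j-i}/(j-i)!] on and above the diagonal.
closed_form = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        closed_form[i, j] = t ** (j - i) / factorial(j - i)
closed_form *= np.exp(mu * t)

err = np.max(np.abs(expm(B * t) - closed_form))
```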
7. Let B be a real Jordan block of dimension n = 2m associated with the complex eigenvalue
µ = λ + iν of a matrix A ∈ gl(d, R), i.e., with D = [λ, −ν; ν, λ] and the 2 × 2 identity matrix I,

B = [D, I, 0, ..., 0; 0, D, I, ..., 0; ...; 0, ..., 0, D, I; 0, ..., 0, D].

Then e^{Bt} is the block upper triangular matrix

e^{Bt} = e^{λt} [R(t), tR(t), (t²/2!)R(t), ..., (t^{m−1}/(m−1)!)R(t); ...; 0, ..., R(t), tR(t); 0, ..., 0, R(t)],

where R(t) = [cos νt, −sin νt; sin νt, cos νt]. In other words, for y0 = [y1, z1, ..., ym, zm]^T ∈ E(A, µ) the
j-th components, j = 1, ..., m, of the solution of ẏ = By read

y_j(t,y0) = e^{λt} ∑_{k=j}^{m} (t^{k−j}/(k−j)!) (y_k cos νt − z_k sin νt),
z_j(t,y0) = e^{λt} ∑_{k=j}^{m} (t^{k−j}/(k−j)!) (z_k cos νt + y_k sin νt).

According to Fact 6 this is also the E(A, µ)-component of e^{J^R_A t}.
8. Using these examples and Facts 5 and 6 it is possible to compute explicitly the solutions to
any linear differential equation in R^d.
9. Recall that for any matrix A there is a matrix T ∈ GL(d, R) such that A = T^{-1} J^R_A T, where J^R_A
is the real Jordan canonical form of A. The exponential behavior of the solutions of ẋ = Ax
can be read off from the diagonal elements of J^R_A.
2 Linear Dynamical Systems in R^d
The solutions of a linear differential equation ẋ = Ax, where A ∈ gl(d, R), define a (continuous
time) dynamical system, or linear flow, in R^d. The standard concepts for comparison of dynamical
systems are equivalences and conjugacies that map trajectories into trajectories. For linear flows in
R^d these concepts lead to two different classifications of matrices, depending on the smoothness of
the conjugacy or equivalence.
Definitions
The real square matrix A is hyperbolic if it has no eigenvalues on the imaginary axis.
A continuous dynamical system over the ‘time set’ R with state space M , a complete metric
space, is defined as a map Φ : R × M −→ M with the properties
(i) Φ(0, x) = x for all x ∈ M ,
(ii) Φ(s + t, x) = Φ(s, Φ(t, x)) for all s, t ∈ R and all x ∈ M ,
(iii) Φ is continuous (in both variables).
The map Φ is also called a (continuous) flow.
For each x ∈ M the set {Φ(t, x), t ∈ R} is called the orbit (or trajectory) of the system through
x.
For each t ∈ R the time-t map is defined as ϕt = Φ(t, ·) : M −→ M. Using time-t maps,
the properties (i) and (ii) above can be restated as (i)' ϕ0 = id, the identity map on M, and (ii)'
ϕs+t = ϕs ∘ ϕt for all s, t ∈ R.
A fixed point (or equilibrium) of a dynamical system Φ is a point x ∈ M with the property
Φ(t, x) = x for all t ∈ R.
An orbit {Φ(t, x), t ∈ R} of a dynamical system Φ is called periodic if there exists t ∈ R, t > 0,
such that Φ(t + s, x) = Φ(s, x) for all s ∈ R. The infimum of the positive t ∈ R with this property
is called the period of the orbit. Note that an orbit of period 0 is a fixed point.
Denote by Ck(X, Y ) (k ≥ 0) the set of k-times differentiable functions between Ck-manifolds X
and Y , with C0 denoting continuous.
Let Φ, Ψ : R × M −→ M be two continuous dynamical systems of class Ck (k ≥ 0), i.e. for k ≥ 1
the state space M is at least a Ck-manifold and Φ, Ψ are Ck-maps. The flows Φ and Ψ are:
(i) Ck−equivalent (k ≥ 1) if there exists a (local) Ck diffeomorphism h : M → M such
that h takes orbits of Φ onto orbits of Ψ, preserving the orientation (but not necessarily
parametrization by time), i.e.,
(a) for each x ∈ M there is a strictly increasing and continuous parametrization map τx : R
→ R such that h(Φ(t, x)) = Ψ(τx(t), h(x)) or, equivalently,
(b) for all x ∈ M and δ > 0 there exists ε > 0 such that for all t ∈ (0, δ), h(Φ(t, x)) =
Ψ(t′, h(x)) for some t′ ∈ (0, ε).
(ii) Ck−conjugate (k ≥ 1) if there exists a (local) Ck diffeomorphism h : M → M such that
h(Φ(t, x)) = Ψ(t, h(x)) for all x ∈ M and t ∈ R.
Similarly, the flows Φ and Ψ are C0-equivalent if there exists a (local) homeomorphism h :
M → M satisfying the properties of (i) above, and they are C0-conjugate if there exists a (local)
homeomorphism h : M → M satisfying the properties of (ii) above. Often, C0-equivalence is
called topological equivalence, and C0-conjugacy is called topological conjugacy or simply
conjugacy.
Warning: While this terminology is standard in dynamical systems, the terms conjugate and equiv-
alent are used differently in linear algebra. Conjugacy as used here is related to matrix similarity (cf.
Fact 6), not to matrix conjugacy, and equivalence as used here is not related to matrix equivalence.
Facts
Literature: [HSD04], [Rob98].
1. If the flows Φ and Ψ are Ck−conjugate, then they are Ck−equivalent.
2. Each time-t map ϕt has an inverse (ϕt)^{-1} = ϕ−t, and ϕt : M −→ M is a homeomorphism, i.e.
a continuous bijective map with continuous inverse.
3. Denote the set of time-t maps again by Φ = {ϕt, t ∈ R}. A dynamical system is a group in
the sense that (Φ, ∘), with ∘ denoting composition of maps, satisfies the group axioms, and
ϕ : (R, +) −→ (Φ, ∘), defined by ϕ(t) = ϕt, is a group homomorphism.
4. Let M be a C∞-differentiable manifold and X a C∞-vector field on M such that the differential
equation ẋ = X(x) has unique solutions x(t, x0) for all x0 ∈ M and all t ∈ R, with x(0, x0) =
x0. Then Φ(t, x0) = x(t, x0) defines a dynamical system Φ : R × M −→ M.
5. A point x0 ∈ M is a fixed point of the dynamical system Φ associated with a differential
equation ẋ = X(x) as above if and only if X(x0) = 0.
6. For two linear flows Φ (associated with ẋ = Ax) and Ψ (associated with ẋ = Bx) in R^d, the
following are equivalent:
• Φ and Ψ are C^k-conjugate for k ≥ 1,
• Φ and Ψ are linearly conjugate, i.e., the conjugacy map h is a linear operator in GL(R^d),
• A and B are similar, i.e., A = TBT^{-1} for some T ∈ GL(d, R).
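The equivalence in Fact 6 can be illustrated numerically: if A = TBT^{-1}, then h(x) = Tx intertwines the two flows, i.e. e^{At}Tx0 = Te^{Bt}x0 for all t and x0 (a sketch with arbitrary B and T):

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary matrix B and invertible T for this illustration.
B = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
T = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A = T @ B @ np.linalg.inv(T)          # A = T B T^{-1}, so A and B are similar

# h(x) = Tx conjugates the flows: e^{At} T x0 = T e^{Bt} x0.
t = 1.4
x0 = np.array([1.0, -2.0])
lhs = expm(A * t) @ (T @ x0)
rhs = T @ (expm(B * t) @ x0)
err = np.max(np.abs(lhs - rhs))
```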
7. Each of the statements in Fact 6 implies that A and B have the same eigenvalue structure and
(up to a linear transformation) the same generalized real eigenspace structure. In particular,
the C^k-conjugacy classes are exactly the real Jordan canonical form equivalence classes in
gl(d, R).
8. For two linear flows Φ (associated with ẋ = Ax) and Ψ (associated with ẋ = Bx) in R^d, the
following are equivalent:
• Φ and Ψ are C^k-equivalent for k ≥ 1,
• Φ and Ψ are linearly equivalent, i.e., the equivalence map h is a linear map in GL(R^d),
• A = αTBT^{-1} for some positive real number α and some T ∈ GL(d, R).
9. Each of the statements in Fact 8 implies that A and B have the same real Jordan structure and
their eigenvalues differ by a positive constant factor. Hence the C^k-equivalence classes are the
real Jordan canonical form equivalence classes modulo a positive constant.
10. The set of hyperbolic matrices is open and dense in gl(d, R). A matrix A is hyperbolic if and
only if it is structurally stable in gl(d, R), i.e., there exists a neighborhood U ⊂ gl(d, R) of A
such that all B ∈ U are topologically equivalent to A.
11. If A and B are hyperbolic, then the associated linear flows Φ and Ψ in Rd are C0−equivalent
(and C0−conjugate) if and only if the dimensions of the stable subspaces (and hence the
dimensions of the unstable subspaces) of A and B agree.
Examples
1. Linear differential equations: For A ∈ gl(d, R) the solutions of ẋ = Ax form a continuous
dynamical system with time set R and state space M = R^d: here Φ : R × R^d −→ R^d is defined
by Φ(t,x0) = x(t,x0) = e^{At}x0.
2. Fixed points of linear differential equations: A point x0 ∈ R^d is a fixed point of the dynamical
system Φ associated with the linear differential equation ẋ = Ax if and only if x0 ∈ ker A, the
kernel of A.
3. Periodic orbits of linear differential equations: The orbit {Φ(t,x0) := x(t,x0), t ∈ R} is periodic
with period t > 0 if and only if x0 is in the real eigenspace of a non-zero complex eigenvalue with
zero real part.
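Example 3 can be illustrated numerically with the rotation matrix A = [0, −ω; ω, 0], whose eigenvalues ±iω lie on the imaginary axis; every orbit returns after time 2π/ω (ω below is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

omega = 2.0                             # eigenvalues of A are ±iω (zero real part)
A = np.array([[0.0, -omega],
              [omega, 0.0]])

# Every nonzero orbit is periodic with period 2π/ω, so e^{A·period} = I.
period = 2.0 * np.pi / omega
err = np.max(np.abs(expm(A * period) - np.eye(2)))
```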
4. For each matrix A ∈ gl(d, R) its associated linear flow in R^d is C^k-conjugate (and hence
C^k-equivalent) for all k ≥ 0 to the dynamical system associated with the Jordan form J^R_A.
3 Chain Recurrence and Morse Decompositions of Dynamical Systems
A matrix A ∈ gl(d, R), and hence a linear differential equation ẋ = Ax, maps subspaces of R^d into
subspaces of R^d. Therefore the matrix A also defines dynamical systems on spaces of subspaces,
such as the Grassmann and the flag manifolds. These are nonlinear systems, but they can be
studied via linear algebra; vice versa, the behavior of these systems allows for the investigation
of certain properties of the matrix A. The key topological concepts for the analysis of systems on
compact spaces, like the Grassmann and flag manifolds, are chain recurrence, Morse decompositions,
and attractor-repeller decompositions. This subsection concentrates on the first two approaches; the
connection to attractor-repeller decompositions can be found, e.g., in [CK00, Appendix B2].
Definitions
Let Φ : R × M −→ M be a dynamical system. For a subset N ⊂ M the α-limit set is defined as
α(N) = {y ∈ M : there exist sequences xn in N and tn → −∞ in R with lim_{n→∞} Φ(tn, xn) = y},
and similarly the ω-limit set of N is defined as
ω(N) = {y ∈ M : there exist sequences xn in N and tn → ∞ in R with lim_{n→∞} Φ(tn, xn) = y}.
For a flow Φ on a complete metric space M and ε, T > 0, an (ε, T)-chain from x ∈ M to y ∈ M is
given by
n ∈ N, x0 = x, x1, ..., xn = y ∈ M, and T0, ..., Tn−1 > T
with
d(Φ(Ti, xi), xi+1) < ε for i = 0, ..., n − 1,
where d is the metric on M.
A set K ⊂ M is chain transitive if for all x, y ∈ K and all ε, T > 0 there is an (ε, T)-chain from
x to y.
The chain recurrent set CR is the set of all points that are chain reachable from themselves, i.e.
CR = {x ∈ M : for all ε, T > 0 there is an (ε, T)-chain from x to x}.
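These definitions are easy to experiment with for a concrete flow. The sketch below (an illustration, not from the text) builds and verifies a one-step (ε, T)-chain for the rotation flow Φ(t, x) = x + t (mod 2π) on the circle, which is chain transitive:

```python
import numpy as np

# Rotation flow on the circle [0, 2π): Φ(t, x) = x + t (mod 2π).
def flow(t, x):
    return (x + t) % (2 * np.pi)

def circle_dist(a, b):
    d = abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

def is_chain(points, times, eps, T):
    # points = [x0, ..., xn], times = [T0, ..., T_{n-1}]:
    # all jump distances must be < eps and all jump times > T.
    jumps_ok = all(circle_dist(flow(Ti, xi), xnext) < eps
                   for xi, Ti, xnext in zip(points, times, points[1:]))
    return jumps_ok and all(Ti > T for Ti in times)

# The flow visits every point, so a one-step chain from x to y always exists:
x, y, eps, T = 1.0, 4.0, 0.01, 10.0
T0 = T + ((y - x - T) % (2 * np.pi))   # T0 > T with Φ(T0, x) = y exactly
chain_found = is_chain([x, y], [T0], eps, T)
```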
A maximal (with respect to set inclusion) chain transitive set is called a chain recurrent component.
Every chain recurrent component is a connected component of the chain recurrent set CR.
For a flow Φ on a complete metric space M , a compact subset K ⊂ M is called isolated invariant,
if it is invariant and there exists a neighborhood N of K, i.e., a set N with K ⊂ intN , such that
Φ(t, x) ∈ N for all t ∈ R implies x ∈ K.
A Morse decomposition of a flow Φ on a complete metric space M is a finite collection {Mi, i = 1, ..., l}
of nonvoid, pairwise disjoint, and isolated compact invariant sets such that
(i) for all x ∈ M one has ω(x), α(x) ⊂ ⋃_{i=1}^{l} Mi; and
(ii) suppose there are Mj0, Mj1, ..., Mjn and x1, ..., xn ∈ M \ ⋃_{i=1}^{l} Mi with α(xi) ⊂ Mj_{i−1}
and ω(xi) ⊂ Mj_i for i = 1, ..., n; then Mj0 ≠ Mjn.
The elements of a Morse decomposition are called Morse sets.
A Morse decomposition {Mi, i = 1, ..., l} is finer than another decomposition {Nj, j = 1, ..., n} if
for all Mi there exists an index j ∈ {1, ..., n} such that Mi ⊂ Nj.
Facts
Literature: [Rob98], [CK00], [ACK05].
1. For a Morse decomposition {Mi, i = 1, ..., l} the relation Mi ≺ Mj, given by α(x) ⊂ Mi and
ω(x) ⊂ Mj for some x ∈ M \ ⋃_{i=1}^{l} Mi, induces an order.
2. Let Φ, Ψ : R × M −→ M be two dynamical systems on a state space M and let h : M → M
be a topological equivalence for Φ and Ψ. Then
(i) the point p ∈ M is a fixed point of Φ if and only if h(p) is a fixed point of Ψ;
(ii) the orbit Φ(·, p) is closed if and only if Ψ(·, h(p)) is closed;
(iii) if K ⊂ M is an α- (or ω-) limit set of Φ from p ∈ M, then h[K] is an α- (or ω-) limit set
of Ψ from h(p) ∈ M.
(iv) Given, in addition, two dynamical systems Θ1,2 : R × N −→ N . If h : M → M is a
topological conjugacy for the flows Φ and Ψ on M , and g : N → N is a topological
conjugacy for Θ1 and Θ2 on N , then the product flows Φ × Θ1 and Ψ × Θ2 on M × N
are topologically conjugate via h× g : M × N −→ M ×N . This result is, in general, not
true for topological equivalence.
3. Topological equivalences (and conjugacies) on a compact metric space M map chain transitive