Dynamical Systems and Chaos

Grégory Faye
School of Mathematics, University of Minnesota, 206 Church Street S.E., Minneapolis, MN 55455, USA
[email protected]

December 3, 2012

Abstract

These are some notes for the one-semester course Math 5535 Dynamical Systems and Chaos given at the University of Minnesota during Fall 2012, with an emphasis on the study of continuous and discrete dynamical systems of dimension one and two. An ambitious list of topics to be covered includes phase portraits, fixed points, stability, bifurcations, limit sets, periodic orbits, Poincaré maps and chaotic attractors. Many ideas from analysis and topology will be introduced along the way. The course will follow some chapters of the textbook of Hirsch, Smale and Devaney [1], but not necessarily in the same order. Other interesting references, complementary to the course, are listed at the end of these notes [2, 3, 4, 5].

Keywords: Continuous and discrete dynamical systems; Flow; Equilibrium; Phase portrait; Stability; Bifurcation; Periodic orbit; Chaos; Cantor set.

Contents

1 Dynamical systems: definitions and first results 3
  1.1 Introduction: some motivating examples 3
    1.1.1 Continuous time 3
    1.1.2 Discrete time 3
  1.2 Ordinary differential equations 3
  1.3 Flow 5
  1.4 Existence, uniqueness and regularity of solutions 5
    1.4.1 Theorems 5
    1.4.2 Examples 11
1 Dynamical systems: definitions and first results
1.1 Introduction: some motivating examples
1.1.1 Continuous time
System: the Rössler attractor

ẋ = −y − z
ẏ = x + ay
ż = b + z(x − c)
1.1.2 Discrete time
System: the logistic map xn+1 = rxn(1− xn).
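For concreteness, the logistic map can be iterated in a few lines of Python (the function name and parameter values below are our own illustrative choices, not from the notes):

```python
# Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n).
def logistic_orbit(r, x0, n):
    """Return the first n iterates of the logistic map starting at x0."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

# For r = 2.5 the orbit settles onto the attracting fixed point 1 - 1/r = 0.6;
# for r close to 4 the same loop wanders chaotically over [0, 1].
settled = logistic_orbit(2.5, 0.2, 200)[-1]
```

For r between 3 and 4 the same loop exhibits period doubling and eventually chaos, which is the theme of later chapters.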
1.2 Ordinary differential equations
We consider the system of autonomous differential equations of the form
Ẋ = F(X) (1.1)

where F : U ⊂ Rn → Rn, U an open set.
Figure 1: Rössler attractor in three-dimensional Euclidean space. The initial condition is
(0.1, 0.2, 0.3) and a = b = 0.2, c = 5.7.
Figure 2: Plot of the logistic map.
Remark 1.1. We may also use the notation X′ or dX/dt for Ẋ. In each case, this stands for the derivative with respect to the time variable t.
Definition 1.1. A solution of (1.1) is a function X : J → Rn, defined on some interval J ⊂ R, such that, for all t ∈ J, Ẋ(t) = F(X(t)).

Geometrically, X(t) is a curve in Rn whose tangent vector Ẋ(t) exists for all t ∈ J and equals F(X(t)). The map F : U → Rn defines a vector field on U.
Definition 1.2. An initial condition for a solution X : J → Rn is a specification of the form
X(t0) = X0 where t0 ∈ J and X0 ∈ U .
One challenge in the theory of differential equations is to solve any Cauchy problem; that is, to determine the solution of the system (1.1) that satisfies the initial condition X(t0) = X0. When it is convenient we will assume that t0 = 0.
1.3 Flow
Let X0 ∈ U be an initial condition and X(t) a solution of (1.1) such that X(0) = X0. We define the map φt : U → Rn that takes X0 to the solution at time t, that is, φt(X0) = X(t). We can then define the one-parameter family of maps (φt)t∈R. It is reasonable to expect that:

1. φ : R × U → Rn, where φ(t, X0) = φt(X0), is smooth,

2. φ0 is the identity function: φ0(X0) = X0,

3. φt has an inverse φ−t,

4. φt ◦ φs = φt+s for all t, s ∈ R.
When all these properties are met we say that (φt)t∈R is a flow for system (1.1). System
(1.1) is then called a smooth dynamical system.
1.4 Existence, uniqueness and regularity of solutions
1.4.1 Theorems
Theorem 1.1 (Existence and uniqueness of solutions). Let U be an open set of Rn. Consider
the Cauchy problem
Ẋ = F(X), X(t0) = X0 (1.2)
where X0 ∈ U . Suppose that F : U → Rn is C1. Then there exists an a > 0 and a unique
solution X :]t0 − a; t0 + a[→ Rn of (1.2).
Proof. For the proof of the theorem, we set t0 = 0 in order to simplify notation. We begin with some definitions from functional analysis.
Let X ∈ Rn with the notation X = (x1, . . . , xn). We consider a C1 vector field F : Rn → Rn
that we write
F(X) = F(x1, . . . , xn) = (F1(x1, . . . , xn), . . . , Fn(x1, . . . , xn)).
The Jacobian matrix of F at a given point X ∈ Rn is the following matrix of partial
derivatives:
DFX = (∂Fi/∂xj (X))1≤i,j≤n. (1.3)
Definition 1.3 (Lipschitz function). Let O ⊂ Rn be an open set. A function F : O → Rn
is said to be Lipschitz on O if there exists a constant K, called the Lipschitz constant, such
that
‖F (X)− F (Y )‖ ≤ K‖X − Y ‖, ∀X, Y ∈ O.
We say that F is locally Lipschitz if each point in O has a neighborhood O′ in O such that the restriction of F to O′ is Lipschitz.
Lemma 1.1. We have the following properties:
1. A set K ⊂ Rn is compact if and only if it is closed and bounded.
2. A continuous function F : K → Rn, with K compact, is bounded on K and attains its
maximum on K.
3. If F : O → Rn is C1, then F is locally Lipschitz.
Proof. We only give a proof of the last point. Suppose that F : O → Rn is C1 and X0 ∈ O. Let ε > 0 be small enough that the closed ball Oε of radius ε centered at X0 (which is then compact) is contained in O. Let K be an upper bound of ‖DFX‖ on Oε. As the set Oε is convex, for all X, Y ∈ Oε and s ∈ [0, 1] we have Y + sU = Y + s(X − Y ) ∈ Oε, where U = X − Y. Let ψ(s) = F(Y + sU). We
have
ψ′(s) = DFY+sUU.
Therefore:

F(X) − F(Y ) = ψ(1) − ψ(0) = ∫_0^1 DF(Y+sU) U ds.
Thus we have
‖F (X)− F (Y )‖ ≤ K‖X − Y ‖.
Existence:
In order to prove the theorem we will use the integral form of (1.2). Suppose that J is an open interval containing zero. A solution X(t) of (1.2) satisfies

X(t) = X0 + ∫_0^t F(X(s)) ds. (1.4)

Conversely, if X is a solution of (1.4) then X is automatically a solution of (1.2). To prove the existence of solutions, we will use the integral form.
We have the following assumptions:
1. Oρ is a closed ball of radius ρ > 0 centered at X0.
2. F is Lipschitz on Oρ with constant K.
3. F is bounded by M on Oρ.
4. We set 0 < a < min{ρ/M, 1/K} and J = [−a, a].
We construct a sequence of functions Uk recursively, using Picard iteration, that will converge uniformly to a solution of (1.4). Let
U0(t) = X0 ∈ Oρ.
For t ∈ J define
U1(t) = X0 + ∫_0^t F(U0(s)) ds = X0 + tF(X0).
Since |t| ≤ a and ‖F (X0)‖ ≤M it follows that
‖U1(t)−X0‖ ≤ aM < ρ
so that U1(t) ∈ Oρ for all t ∈ J . By induction, assume that Uk(t) has been defined and that
‖Uk(t)−X0‖ < ρ for all t ∈ J . Then let
Uk+1(t) = X0 + ∫_0^t F(Uk(s)) ds.
It is straightforward to see that
‖Uk+1(t)−X0‖ ≤ aM < ρ
so that Uk+1(t) ∈ Oρ for all t ∈ J and we can continue the sequence. Furthermore, we can
easily prove by induction that there exists a constant L ≥ 0 such that for all k ≥ 0 we have
‖Uk+1(t) − Uk(t)‖ ≤ (aK)^k L.
As a < 1/K, aK < 1 and given any ε > 0 there exists N large enough so that for any
p ≥ q ≥ N we have
‖Up(t) − Uq(t)‖ ≤ ∑_{k=N}^{∞} ‖Uk+1(t) − Uk(t)‖ ≤ ∑_{k=N}^{∞} (aK)^k L ≤ ε.
Then the sequence (Uk(t))k∈N of continuous functions converges uniformly to a continuous function X : J → Oρ that satisfies equation (1.4).
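The Picard scheme above is also effective numerically. The sketch below (a toy discretization of ours, applied to the scalar problem ẋ = x, x(0) = 1, whose exact solution is e^t) represents each iterate Uk on a fixed time grid and evaluates the integral with the trapezoidal rule:

```python
import numpy as np

def picard(F, x0, t_max, n_iter, n_grid=1000):
    """Approximate the solution of x' = F(x), x(0) = x0 on [0, t_max] by
    Picard iteration: U_{k+1}(t) = x0 + integral_0^t F(U_k(s)) ds."""
    t = np.linspace(0.0, t_max, n_grid)
    u = np.full_like(t, x0)                  # U_0(t) = x0
    for _ in range(n_iter):
        g = F(u)                             # integrand F(U_k(s)) on the grid
        # cumulative trapezoidal integral of g from 0 to each grid point t
        steps = 0.5 * (g[1:] + g[:-1]) * np.diff(t)
        u = x0 + np.concatenate(([0.0], np.cumsum(steps)))   # U_{k+1}
    return t, u

# Toy problem x' = x, x(0) = 1; the iterates converge to e^t on [0, 1].
t, u = picard(lambda x: x, x0=1.0, t_max=1.0, n_iter=30)
```

Each iteration reproduces one more term of the Taylor series of e^t, mirroring the geometric convergence estimate ‖Uk+1 − Uk‖ ≤ (aK)^k L in the proof.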
Uniqueness:
Suppose that we have X, Y : J → O two solutions of (1.4) satisfying X(0) = Y (0) = X0.
As X, Y are continuous on the bounded interval J = [−a, a] let
Q = max_{t∈J} ‖X(t) − Y(t)‖.
This maximum is attained at some point t1 ∈ J . Then
Q = ‖X(t1) − Y(t1)‖ = ‖∫_0^{t1} X′(s) − Y′(s) ds‖ ≤ ∫_0^{t1} ‖F(X(s)) − F(Y(s))‖ ds.

Then Q ≤ aKQ, which is impossible unless Q = 0. Therefore X(t) = Y(t) for all t ∈ J.
Remark 1.2. In Theorem 1.1 the assumption that F is C1 is stronger than necessary: the same result holds with F only locally Lipschitz.
Lemma 1.2 (On the uniqueness). Let X1 and X2 be two solutions of (1.2) defined on the
open intervals J1 and J2 respectively with X1(t0) = X2(t0) = X0. Then
• X1 and X2 are equal on J = J1 ∩ J2.
• There exists a solution Y defined on J1∪J2 which coincides with X1 and X2 on J1 and
J2 respectively.
Proof. Consider the subset

F = {t ∈ J | X1(t) = X2(t)}.

As X1 and X2 are continuous, F is closed, and it is nonempty since t0 ∈ F. Let s ∈ F; then X1 and X2 are solutions of Ẋ = F(X) with the same initial condition X1(s) = X2(s). From Theorem 1.1, X1 and X2 coincide on a small neighborhood of s. Then F is open. As F is nonempty, open and closed, and J is an interval, J = F.
Lemma 1.3 (Maximal solutions). There always exists a solution X̂ of (1.2), said to be maximal, defined on an interval Ĵ (also called the maximal interval) with the following property: if s ∈ Ĵ and if Y is a solution with initial condition Y(s) = X̂(s) defined on an interval J, then J ⊂ Ĵ and, for all t ∈ J, Y(t) = X̂(t).
We can then always assume that we have a unique solution defined on a maximal time
interval. There is, of course, no guarantee that a solution X(t) can be defined for all time,
no matter how nice F is, as we will see in the examples.
Lemma 1.4 (Extending solutions). Let U be an open set of Rn and let F : U → Rn be C1. Let X(t) be a solution of (1.1) defined on the interval J = ]α, β[ ⊂ R. Suppose that there exists a sequence (tn)n∈N converging to β such that

lim_{n→∞} X(tn) = X0 ∈ U.

Then there exists ε > 0 such that X can be extended to a solution defined on ]α, β + ε[.
Proof. From Theorem 1.1, there exist a neighborhood U of X0 and ε > 0 such that for all (s0, Z0) ∈ ]β − ε, β + ε[ × U there exists a solution Z with Z(s0) = Z0 defined on ]β − ε, β + ε[. We apply this result to (s0, Z0) = (tn, X(tn)) for n large enough that |tn − β| ≤ ε/2. In particular, Z is defined on ]β − ε/2, β + ε/2[. As X is also a solution with initial condition (tn, X(tn)), we deduce from the uniqueness part of the theorem that X can be extended to ]α, β + ε/2[.
Corollary 1.1 (Exit from any compact set). Let U be an open set of Rn and let F : U → Rn be C1. Let X(t) be a solution of (1.1) defined on a maximal open interval J = ]α, β[ ⊂ R with β < ∞. Then given any closed and bounded set K ⊂ U (K is then compact), there is some t ∈ ]α, β[ with X(t) ∉ K.
Proof. Suppose that X(t) ∈ K for all t ∈ ]α, β[. Since F is continuous on the compact set K, there exists M > 0 such that ‖F(X)‖ ≤ M for all X ∈ K. Let γ ∈ ]α, β[. For t0 < t1 in J we have

‖X(t0) − X(t1)‖ ≤ (t1 − t0)M

and X is uniformly continuous on J. Hence we can define

X(β) = lim_{t→β} X(t).
Then X is continuous on [γ, β] and differentiable, as

X(t) = X(γ) + ∫_γ^t F(X(s)) ds, ∀t ∈ [γ, β]

and X′(β) = F(X(β)). Then X is a solution on [γ, β]. Since there must be a solution on an interval [β, δ[ for some δ > β, we can extend X to the interval ]α, δ[, which contradicts the fact that X is a maximal solution on J.
Interpretation: this corollary says that if a solution X(t) cannot be extended to a larger time interval, then this solution leaves any compact set in U.
Lemma 1.5 (Gronwall's inequality). Let u : [0, α] → R be continuous and nonnegative.
Suppose C ≥ 0 and K ≥ 0 are such that
u(t) ≤ C + K ∫_0^t u(s) ds
for all t ∈ [0, α]. Then, for all t in this interval,
u(t) ≤ CeKt.
Proof. Define the function U(t) = e^{−Kt}(C + K ∫_0^t u(s) ds). U is differentiable for all t ∈ [0, α] and we have

U′(t) = −KU(t) + Ke^{−Kt}u(t) ≤ −KU(t) + KU(t) = 0,

since e^{−Kt}u(t) ≤ U(t) by hypothesis. Hence

U(t) ≤ U(0) = C,

and therefore

e^{−Kt}u(t) ≤ e^{−Kt}(C + K ∫_0^t u(s) ds) = U(t) ≤ C.
This gives the result.
Theorem 1.2. Consider the system (1.1) where F : U → Rn is C1 with Lipschitz constant K. Suppose that X(t) and Y(t) are solutions of Ẋ = F(X) which remain in U and are defined on the closed interval [t0, t1]. Then, writing X0 = X(t0) and Y0 = Y(t0), we have

‖Y(t) − X(t)‖ ≤ ‖Y0 − X0‖ exp(K(t − t0))
for all t ∈ [t0, t1].
Proof. Let u(t) = ‖Y (t)−X(t)‖. It is straightforward to see that for all t ∈ [t0, t1]
u(t) ≤ u(t0) + K ∫_{t0}^t u(s) ds.
By applying Gronwall’s inequality we obtain
u(t) ≤ u(t0) exp(K(t− t0)).
Corollary 1.2 (Continuous dependence on initial conditions). Let φt(X0) be the flow of the
Cauchy problem (1.2) where F is C1. Then φ is a continuous function of X0.
Theorem 1.3 (Continuous dependence on parameters). Let Ẋ = Fλ(X) be a system of differential equations such that Fλ is C1 in both X and λ. Then the flow of this system depends continuously on λ as well.
Theorem 1.4 (Smoothness of the flow). Consider system (1.2) with F C1. Then the flow φt(X0) of this system is a C1 function; that is, ∂φ/∂t and ∂φ/∂X exist and are continuous in t and X.

Note that we have

∂φ/∂t (t, X0) = F(φ(t, X0))

and

∂φ/∂X (t, X0) = Dφt(X0),

where Dφt is the Jacobian of the function X ↦ φt(X).
1.4.2 Examples
• ẋ = x² with x(0) = x0:

In order to solve this simple equation we separate the x variable to the left side and obtain

ẋ/x² = 1.

Recalling that ẋ = dx/dt, we can use a formal approach and write

dx/x² = dt.

Then we can integrate both sides:

∫ dx/x² = ∫ dt,

which gives

−1/x(t) + 1/x0 = t

and

x(t) = x0/(1 − t x0).

If x0 = 0, then the only solution is x(t) = 0, defined on R. If x0 ≠ 0, we first remark that x(t) is not defined for all t ∈ R; indeed x(t) → ±∞ as t → 1/x0. In the case x0 > 0, the maximal time interval is J = ]−∞, 1/x0[, which recovers the result stated in Corollary 1.1, and J = ]1/x0, +∞[ for x0 < 0. It is also easy to verify the group property of the flow.
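The finite-time blow-up is visible numerically as well. The sketch below (a crude forward-Euler integration; the step size and blow-up threshold are our own choices) estimates the blow-up time for x0 = 2 and lands close to the exact value 1/x0 = 0.5:

```python
def euler_blowup(x0, dt=1e-4, threshold=1e8):
    """Integrate x' = x^2 by forward Euler until x exceeds a large
    threshold; return the (approximate) blow-up time."""
    t, x = 0.0, x0
    while x < threshold:
        x += dt * x * x
        t += dt
    return t

# The exact solution x(t) = x0 / (1 - t*x0) blows up at t = 1/x0,
# so for x0 = 2 the numerical estimate should be close to 0.5.
t_blow = euler_blowup(2.0)
```

Forward Euler slightly underestimates the growth at each step, so the numerical blow-up time overshoots 1/x0 by an amount that shrinks with the step size.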
• ẋ = x^{1/3} with x(0) = 0:

One obvious solution is x1(t) = 0 for all t. On the other hand, we can also apply separation of variables:

dx/x^{1/3} = dt

and integrate:

(3/2) x(t)^{2/3} = t ⇒ x(t) = (2t/3)^{3/2}.

We can then define a family of solutions of the form

x(t, t0) = 0 for t ≤ t0, and x(t, t0) = (2(t − t0)/3)^{3/2} for t ≥ t0,

for all t0 ≥ 0. Thus the continuity of the map F in Theorem 1.1 is not sufficient to guarantee uniqueness of solutions (x ↦ x^{1/3} is continuous but not differentiable at x = 0).
2 Fixed points
Throughout this section, we consider smooth dynamical systems and to avoid any compli-
cations we suppose that F is at least C2 in (1.1).
2.1 Definition
Definition 2.1 (Fixed point or equilibrium point). A point X∗ is called a fixed point provided
that F (X∗) = 0.
The solution starting at a fixed point has zero velocity, so it just stays there: φ(t, X∗) = X∗ for all t. Traditionally, such a point was called an equilibrium point because the forces were in equilibrium and the mass did not move. Note that the origin is always a fixed point for a linear system of the form Ẋ = AX where A ∈ Mn(R), and it is the only fixed point of a linear system unless 0 is an eigenvalue of A.
Definition 2.2 (Invariant manifolds). For a fixed point X∗, the stable manifold Ws(X∗) is the set of all points which tend to the fixed point as t goes to plus infinity:

Ws(X∗) = {X0 ∈ Rn : φt(X0) → X∗ as t → +∞}. (2.1)

If Ws(X∗) is an open set, then Ws(X∗) is called the basin of attraction of X∗. In the same way, the unstable manifold Wu(X∗) is the set of all points which tend to the fixed point as t goes to minus infinity:

Wu(X∗) = {X0 ∈ Rn : φt(X0) → X∗ as t → −∞}. (2.2)
2.2 Stability of fixed points
We now proceed to give several different ways of designating that a fixed point is stable.
Definition 2.3. A fixed point is said to be stable if nearby solutions stay nearby for all
future time. More precisely, X∗ is stable if for every neighborhood O of X∗ in Rn there is
a neighborhood O1 ⊂ O such that every solution X(t) with X(0) = X0 ∈ O1 is defined and
remains in O for all t > 0.
The same definition can be phrased with epsilons: X∗ is stable if for any ε > 0 there is a δ > 0 such that ‖X∗ − X0‖ < δ implies ‖φt(X0) − X∗‖ < ε for all t > 0.
Definition 2.4. A fixed point X∗ that is not stable is called unstable. This means there is a neighborhood O of X∗ such that for every neighborhood O1 ⊂ O of X∗, there is at least one solution X(t) starting at X(0) = X0 ∈ O1 that does not lie entirely in O for all t > 0.
A different form of stability is asymptotic stability, which is stronger than the notion stated above.
Definition 2.5. A fixed point X∗ is asymptotically stable if O1 can be chosen above so that,
in addition to the properties of stability, we have φt(X0)→ X∗ as t→ +∞ for all X0 ∈ O1.
An asymptotically stable fixed point is also called a sink. Sometimes, we will use the word
attracting to mean asymptotically stable.
Remark 2.1. We will see in the next subsection an example of a stable fixed point which is not asymptotically stable.
Definition 2.6. A fixed point is called repelling or a source, provided that it is asymptotically
stable backward in time.
2.3 Linear systems
2.3.1 Linear algebra
Eigenvalues and eigenvectors:
Definition 2.7. A vector V is said to be an eigenvector of A ∈ Mn(R) associated to the eigenvalue λ if V is a nonzero solution of the system (A − λIn)V = 0.
Proposition 2.1. Let A ∈ Mn(R). Then there is a change of coordinates T such that

T⁻¹AT = diag(B1, . . . , Bk),

where each of the Bj's is a square matrix (and all the other entries are zero) of one of the following two forms:

(i) a block with λ on the diagonal and 1 on the superdiagonal,

Bj = (λ 1; λ 1; ⋱ ⋱; λ 1; λ);

(ii) a block with the 2 × 2 matrix C2 on the diagonal and I2 on the superdiagonal,

Bj = (C2 I2; C2 I2; ⋱ ⋱; C2 I2; C2),

where

C2 = (α β; −β α), I2 = (1 0; 0 1),

and where α, β, λ ∈ R with β ≠ 0. The special cases where Bj = (λ) or Bj = C2 are, of course, allowed.
Exponential of a matrix:

Let A ∈ Mn(R). We equip Mn(R) with the norm (recall that all norms on Mn(R) are equivalent):

‖A‖ = sup {‖AX‖ | ‖X‖ ≤ 1}.
We immediately have the properties
‖AX‖ ≤ ‖A‖‖X‖, ‖AB‖ ≤ ‖A‖‖B‖.
Definition 2.8 (Exponential of a matrix). Let A ∈ Mn(R); the exponential of A is defined through

exp(A) = ∑_{k=0}^{+∞} A^k/k!. (2.3)

This series converges uniformly for A in any bounded domain of Mn(R).
Proposition 2.2 (Algebraic properties). We have
exp(P−1AP ) = P−1 exp(A)P,
exp(−A) = exp(A)−1.
If A and B commute: AB = BA, then
exp(A+B) = exp(A) exp(B) = exp(B) exp(A).
Proposition 2.3. If V ∈ Rn is an eigenvector of A associated to the eigenvalue λ, then V is an eigenvector of exp(A) associated to the eigenvalue e^λ.
Proposition 2.4. The map t ↦ exp(tA) from R to Mn(R) is differentiable and

d/dt exp(tA) = A exp(tA) = exp(tA)A.
2.3.2 Linear differential systems
Form of the solutions:
Theorem 2.1. Consider the linear differential equation Ẋ = AX with A ∈ Mn(R) and initial value X(0) = X0. Then the unique solution is

X(t) = exp(tA)X0.
Examples
1. Compute exp(tA) when A is the n × n matrix of the form

A = (λ 1; λ 1; ⋱ ⋱; λ 1; λ) = λIn + J.

J is a nilpotent matrix: Jⁿ = 0 and Jⁿ⁻¹ ≠ 0. We first remark that In (the identity matrix) commutes with J, so that

exp(tA) = exp(tλIn) exp(tJ) = e^{λt} exp(tJ).

In order to compute exp(tJ), we use the definition of the exponential of a matrix:

exp(tJ) = ∑_{k=0}^{+∞} (tJ)^k/k! = ∑_{k=0}^{n−1} (tJ)^k/k!,

which is a finite sum because of the nilpotency of J. It is now a straightforward computation to see that exp(tJ) is upper triangular with k-th superdiagonal equal to t^k/k!:

exp(tJ) = (1 t t²/2 · · · t^{n−1}/(n−1)!; 1 t ⋱ ⋮; ⋱ ⋱ t²/2; 1 t; 1).

Then we have

exp(tA) = e^{λt} exp(tJ).
2. Compute exp(tA) when A is the 2 × 2 matrix of the form

A = (α β; −β α) = αI2 + βJ with J = (0 1; −1 0).

We first remark that J enjoys the property J² = −I2 and that I2 commutes with J. Once again, we apply the definition of the exponential of a matrix:

exp(tβJ) = ∑_{k=0}^{+∞} (tβJ)^k/k! = I2 + tβJ − (tβ)²/2 I2 − (tβ)³/3! J + (tβ)⁴/4! I2 + · · ·

= (∑_{k=0}^{+∞} (−1)^k (tβ)^{2k}/(2k)!) I2 + (∑_{k=0}^{+∞} (−1)^k (tβ)^{2k+1}/(2k+1)!) J

= cos(βt) I2 + sin(βt) J.

Finally we have

exp(tA) = (e^{αt} cos(βt)  e^{αt} sin(βt); −e^{αt} sin(βt)  e^{αt} cos(βt)).
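The closed form just derived can be checked numerically. The sketch below assumes NumPy and SciPy are available; the parameter values α = 0.3, β = 1.7, t = 2 are arbitrary choices of ours:

```python
import numpy as np
from scipy.linalg import expm

alpha, beta, t = 0.3, 1.7, 2.0
A = np.array([[alpha, beta],
              [-beta, alpha]])

numeric = expm(t * A)                      # matrix exponential via SciPy
closed_form = np.exp(alpha * t) * np.array(
    [[np.cos(beta * t),  np.sin(beta * t)],
     [-np.sin(beta * t), np.cos(beta * t)]])

# Both computations agree to roughly machine precision.
err = np.max(np.abs(numeric - closed_form))
```

By Theorem 2.1, either matrix applied to an initial vector X0 gives the solution of Ẋ = AX at time t.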
3. Consider the following differential system:

Ẋ = (1 2 −1; 0 3 −2; 0 2 −2) X = AX.

The matrix A has three distinct real eigenvalues with corresponding eigenvectors.
Let Es, Eu and Ec be the L-invariant subspaces of Rn corresponding to the above splitting of σ. We have

Rn = Es ⊕ Eu ⊕ Ec.

Let πs, πu and πc be the projections from Rn onto Es, Eu and Ec, and set Ls = L|Es, Lu = L|Eu and Lc = L|Ec. The asymptotic behavior of solutions of the linearized system U̇ = LU is summarized as follows:
• if U0 ∈ Es, then ‖φ(t, U0)‖ = ‖e^{tLs}U0‖ tends exponentially to 0 as t → +∞ and to ∞ as t → −∞,

• if U0 ∈ Eu, then ‖φ(t, U0)‖ = ‖e^{tLu}U0‖ tends exponentially to ∞ as t → +∞ and to 0 as t → −∞,

• all bounded solutions, in particular all steady and periodic ones, lie in Ec.
We have seen that, provided that σ0 = ∅, there exists a relation between the solutions of the nonlinear system Ẋ = F(X) and those of U̇ = LU in a neighborhood of X∗. This was the Hartman-Grobman theorem and its differentiable form, Belickii's theorem. When σ0 ≠ ∅, it is difficult to establish such a relationship in general. One might rather ask the following question:

Question: Does Ẋ = F(X) possess a manifold having properties similar to those Ec has for U̇ = LU?
The answer is yes!
Theorem 2.8 (Center manifold theorem). There exists a map

Ψ ∈ Ck(Ec, Es ⊕ Eu), Ψ(0) = 0, DΨ(0) = 0,

and a neighborhood U of X = 0 in Rn such that the manifold

Wc = {x + Ψ(x) | x ∈ Ec}

has the following properties.
• Wc is locally invariant with respect to Ẋ = F(X). More precisely, if Z0 ∈ Wc ∩ U and φ(t, Z0) ∈ U for t ∈ I, then φ(t, Z0) ∈ Wc for t ∈ I, where I is an interval containing t = 0.
• If σs and σu are non void, then Wc contains all solutions of Ẋ = F(X) staying in U for all t ∈ R. That is, if Z0 ∈ U and φ(t, Z0) ∈ U for all t ∈ R, then Z0 ∈ Wc.

Moreover, if σu is void, then one has:

• Wc is locally attractive. More precisely, all solutions of Ẋ = F(X) staying in U for all t > 0 tend exponentially to some solution of Ẋ = F(X) on Wc.
Wc is a Ck-manifold of Rn parametrized by x ∈ Ec; hence Wc has the same dimension as Ec. Wc passes through X = 0 and is tangent to Ec at X = 0. We say that Wc is a center manifold of Ẋ = F(X) at X = 0.
Remark 2.4. Wc is not unique.
Reduced equation. If Z0 ∈ Wc ∩ U, then φ(t, Z0) ∈ Wc for t close to 0. Defining

x0 = πcZ0, x(t) = πcφ(t, Z0),

we write

φ(t, Z0) = x(t) + Ψ(x(t)).

Using Ẋ = F(X) we obtain the following characterizations.

• x(t) satisfies the nonlinear differential equation

ẋ = πc F(x + Ψ(x)). (2.9)

• Ψ satisfies

DΨ(x)(πc F(x + Ψ(x))) = (πs + πu) F(x + Ψ(x)). (2.10)

Equation (2.9) is called the reduced equation. Equation (2.10) is a quasilinear partial differential equation of order 1 for Ψ. There exist techniques to compute an approximation of Ψ (Taylor expansions of F and Ψ around X = 0).
Example. Consider the dynamical system

ẋ = xy
ẏ = −y − x²

where (x, y) ∈ R². We have

L = DF(0) = (0 0; 0 −1).

Ec is the x-axis and Es is the y-axis. There exists a center manifold

Wc = {x + Ψ(x) | x ∈ R}

where Ψ : R → R satisfies

Ψ′(x)(xΨ(x)) = −Ψ(x) − x², Ψ(0) = Ψ′(0) = 0.

Setting Ψ(x) = c2x² + c3x³ + o(x³) we get −c2x² − x² = 0 and −c3x³ = 0, so c2 = −1 and c3 = 0. Consequently

Ψ(x) = −x² + o(x³).

The reduced equation is

dx/dt = xΨ(x) = −x³ + o(x⁴).
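The quadratic approximation Ψ(x) ≈ −x² can be checked by direct numerical integration. The sketch below (the step size, time horizon and initial condition are arbitrary choices of ours) integrates the system with a hand-rolled RK4 step and verifies that the orbit is attracted to the curve y ≈ −x²:

```python
# Integrate x' = x*y, y' = -y - x^2 and check that the orbit is attracted
# to the center manifold y = Psi(x), with Psi(x) ~ -x^2 near the origin.
def rk4_step(f, state, h):
    """One classical Runge-Kutta 4 step for a system given as a list."""
    k1 = f(state)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)])
    k4 = f([s + h * k for s, k in zip(state, k3)])
    return [s + h / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

f = lambda s: [s[0] * s[1], -s[1] - s[0] ** 2]

state = [0.3, 0.5]              # start well off the center manifold
for _ in range(5000):           # integrate up to t = 50 with h = 0.01
    state = rk4_step(f, state, 0.01)
x, y = state
# After the fast transient in y, the orbit tracks y close to -x^2 while
# x decays slowly according to the reduced equation dx/dt ~ -x^3.
```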
Figure 9: One-dimensional center manifold.
2.6 Phase portraits using energy and other test functions
For the moment, we have seen how dynamical systems could be analyzed using nullclines and
linearization at fixed points in order to draw their phase portrait. In the previous section,
we have introduced an abstract and theoretical result (the center manifold theorem) in order to handle systems with non-hyperbolic fixed points. We continue in this direction by analyzing systems through "energy functions".
2.6.1 An introductory example: prey-predator system
In this section we introduce the Lotka-Volterra equations for prey-predator system. The
Lotka-Volterra predator-prey model was initially proposed by Alfred J. Lotka ”in the theory
of autocatalytic chemical reactions” in 1910. Vito Volterra, who made a statistical analysis
of fish catches in the Adriatic, after the first world war, independently investigated the
equations in 1926. The simplest system of equations of this type is given by

ẋ = x(a − by)
ẏ = y(−c + dx),

with all parameters a, b, c and d positive. Here x is the prey population and y is the predator population. Without any predators around, the prey population grows exponentially at rate a. Without any prey, the predator population decays at rate c. These equations have fixed points (0, 0) and (c/d, a/b). We first remark that (0, 0) is a saddle point. The eigenvalues of the Jacobian matrix at the fixed point (c/d, a/b) are purely imaginary, given by ±i√(ac). Thus, (c/d, a/b) is a center. For this system, there is a way
to show that the orbits close when they go around the fixed point. Notice first that the
quantity:
K(x, y) = f(x) + g(y) = dx− c ln(x) + by − a ln(y)
is a conserved quantity; that is, (d/dt) K(x(t), y(t)) = 0 along trajectories. K can be considered as a function of x and y on the plane (a surface). Each level set of K is a trajectory of the dynamical system. So if we show that K(x, y) has level sets which are closed curves, then we will have proved that the system has periodic solutions.
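That K is conserved can be verified numerically. The sketch below (the parameter values a = b = c = d = 1 and the initial point are illustrative choices of ours) integrates the system with a classical RK4 step and monitors K along the orbit:

```python
import math

a, b, c, d = 1.0, 1.0, 1.0, 1.0      # illustrative parameter values

def field(x, y):
    """Right-hand side of the Lotka-Volterra system."""
    return x * (a - b * y), y * (-c + d * x)

def K(x, y):
    """Conserved quantity K(x, y) = d*x - c*ln(x) + b*y - a*ln(y)."""
    return d * x - c * math.log(x) + b * y - a * math.log(y)

def rk4(x, y, h):
    """One RK4 step of size h."""
    k1 = field(x, y)
    k2 = field(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = field(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = field(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 1.5, 0.8
K0 = K(x, y)
for _ in range(10000):               # integrate to t = 10 with h = 0.001
    x, y = rk4(x, y, 0.001)
drift = abs(K(x, y) - K0)            # stays tiny: K is conserved
```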
The graph of the function f is concave up and has a unique minimum at x = c/d. Similarly, the graph of g is concave up and has a unique minimum at y = a/b. Thus the minimum value of K(x, y) is K0 = f(c/d) + g(a/b). The level curve f(x) + g(y) = K0 is just the fixed point (c/d, a/b). If C is a constant with C < K0, then the level curve K(x, y) = C is empty. Finally, consider C > K0 and let C′ = C − K0 > 0. For a given x, we need to find the y that satisfy

g(y) − g(a/b) = C′ − (f(x) − f(c/d)).

For x with f(x) − f(c/d) > C′, the right-hand side is negative and there are no solutions for y. There are two values of x, x1 and x2, for which f(x) − f(c/d) = C′. For these xj, the right-hand side is zero and the only solution is y = a/b. For x1 < x < x2, the right-hand side is positive and there are two values of y which satisfy the equation. One of these values of y will be less than a/b and the other will be greater. Therefore, the level curve for C > K0 is a closed curve that surrounds the fixed point.
There is another way to prove the existence of periodic solutions which does not require the
study of the level curves of K. We decompose the first quadrant into 4 different regions:
• RI = {(x, y) ∈ R2 | 0 < x < c/d & 0 < y < a/b}
• RII = {(x, y) ∈ R2 | c/d < x & 0 < y < a/b}
• RIII = {(x, y) ∈ R2 | c/d < x & a/b < y}
• RIV = {(x, y) ∈ R2 | 0 < x < c/d & a/b < y}.
Let (x0, y0) be in the first region RI. Then there exists t1 > 0 such that M(t) = (x(t), y(t)) enters region RII. Indeed, if M stayed in region RI for all time, then x and y would be bounded and monotone (ẋ > 0 and ẏ < 0 in RI), hence convergent to limits x∞ and y∞; then ẋ and ẏ would also converge, and the limit would have to be a fixed point with x∞ ≥ x0 > 0 and y∞ < a/b. But this is impossible, as there exist only two fixed points.
In fact we can show the following result.
Lemma 2.1. If (x(t), y(t)) is a maximal solution, then it is bounded.

Proof. There exist two constants A > 0 and B > 0 such that

∀x > A, c ln(x) < dx/2 and ∀y > B, a ln(y) < by/2.

Then for all (x, y) outside the compact set [0, A] × [0, B], we have

K(x, y) > (dx + by)/2.
Then, if (x0, y0) is the initial condition, we have

0 < x(t) < max{A, (2/d) K(x0, y0)} and 0 < y(t) < max{B, (2/b) K(x0, y0)}.
Remark 2.5. This lemma shows that the solutions of the Lotka-Volterra system are defined for all time t > 0.
With this lemma we can prove that there exists t2 > t1 such that M(t) enters region RIII. Continuing, we can define times t5 > t4 > t3 > t2 > t1 such that x(t1) = x(t5) = c/d. As these are points of the same trajectory, we have

K(x(t1), y(t1)) = K(x(t5), y(t5)).

Then g(y(t1)) = g(y(t5)) with y(t1) < a/b and y(t5) < a/b. As g is injective on (0, a/b), we have y(t1) = y(t5) and the solution is periodic.
2.6.2 Undamped forces
In this subsection, we analyze systems that model the motion of a particle of mass m. We assume the motion is determined by forces f(x) that depend only on the position and not on the velocity (there is no damping or friction). Since the mass times the acceleration equals the force acting on the particle, the differential equation determining the motion is mẍ = f(x). This equation can be written as a system of first-order differential equations by setting ẋ = y:

ẋ = y
ẏ = (1/m) f(x).
We want to find a quantity which is conserved along the trajectories. In this case, we define

E(x, ẋ) = (1/2) m ẋ² + V(x)

where V(x) is the potential energy:

V(x) = −∫_{x0}^{x} f(y) dy.

Then E(x, ẋ), the total energy, is constant along solutions: (d/dt) E(x, ẋ) = 0.
The pendulum: Consider the system of equations for the pendulum:

ẋ = y
ẏ = −sin(x).
The potential function is V(x) = 1 − cos(x). It has minima at x = 0 and at the multiples of 2π, and maxima at x = ±π and the other odd integer multiples
Figure 10: Plot of the surface (x, y, E(x, y)).
of π. The local maxima and minima of the potential energy V give the fixed points of the
system. We have already studied this problem and showed that (±π, 0) are saddles and (0, 0)
is a center, when restricted to [−π, π]. So far we have been able to determine the stable and unstable manifolds of the two saddles. We now present a method which allows us to draw the full phase portrait by understanding the level sets of the energy E(x, y) = (1/2)y² + V(x), which can be thought of as a surface in R³ (see Figure 10). We recall that for a fixed constant c, the level set of the energy E(x, y) at level c is defined by

E−1(c) = {(x, y) ∈ R² | E(x, y) = c}.
First, we notice that V(0) = 0 and V(±π) = 2. Therefore, we need to understand the level sets of the energy for c with 0 < c < 2, c = 2 and c > 2. Indeed, if c < 0, then E−1(c) = ∅. Note that for c = 2 we have

y² = 2(cos(x) + 1),

which gives two curves h±(x) = ±√(2(cos(x) + 1)); these correspond to the stable and unstable manifolds that we have already computed. They form heteroclinic connections between the
two fixed points (−π, 0) and (π, 0). For 0 < c < 2, the level set E−1(c) in −π < x < π is a closed curve surrounding (0, 0). Thus (0, 0) is surrounded by periodic orbits and it is a nonlinear center. For c > 2, for each x there are exactly two values of y with (x, y) ∈ E−1(c). Thus E−1(c) is the union of two curves, one with y > 0 and one with y < 0. These two trajectories are called rotary solutions. See Figure 11 for a description of the phase portrait.
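These level-set computations are easy to check numerically. In the sketch below (step size and initial condition are our own choices) we verify that the separatrix curves y = ±√(2(cos x + 1)) sit exactly at energy c = 2, and that the energy is conserved along a librating orbit with 0 < c < 2:

```python
import math

def E(x, y):
    """Total energy of the pendulum: E = y^2/2 + 1 - cos(x)."""
    return 0.5 * y * y + 1.0 - math.cos(x)

# Points on the separatrix y = sqrt(2*(cos x + 1)) all have energy 2.
xs = [i * 0.1 for i in range(-30, 31)]
sep_energies = [E(x, math.sqrt(2.0 * (math.cos(x) + 1.0))) for x in xs]

def rk4(x, y, h):
    """One RK4 step for x' = y, y' = -sin(x)."""
    f = lambda x, y: (y, -math.sin(x))
    k1 = f(x, y)
    k2 = f(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = f(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# A librating orbit: initial energy e0 = 1 - cos(1), which lies in (0, 2).
x, y = 1.0, 0.0
e0 = E(x, y)
for _ in range(20000):               # integrate to t = 20 with h = 0.001
    x, y = rk4(x, y, 0.001)
drift = abs(E(x, y) - e0)            # energy is conserved along the orbit
```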
Figure 11: Phase portrait of the undamped pendulum.
2.6.3 Liapunov functions and stability
As we have already seen, determining the stability of a fixed point is straightforward if it is hyperbolic. When this is not the case (a center, for example), this determination becomes more problematic. Let L : O → R be a differentiable function defined on an open set O containing a fixed point X∗ of the system Ẋ = F(X). We consider the function

L̇(X) = DL_X(F(X)).

We can also write L̇(X) as

L̇(X) = (d/dt)|_{t=0} L ◦ φt(X);

consequently, if L̇(X) is negative, then L decreases along the solution curve through X.
Theorem 2.9 (Liapunov stability). Let X∗ be an equilibrium point of Ẋ = F(X). Let L : O → R be a differentiable function defined on an open set O containing X∗. Suppose that:

(i) L(X∗) = 0 and L(X) > 0 if X ≠ X∗,

(ii) L̇ ≤ 0 in O∖{X∗}.

Then X∗ is stable. Furthermore, if L also satisfies

(iii) L̇ < 0 in O∖{X∗},

then X∗ is asymptotically stable.
Definition 2.16. A function L satisfying (i) and (ii) is called a Liapunov function for X∗.
If (iii) also holds, we call L a strict Liapunov function.
Examples
1. Prey-predator system. If we set

L(x, y) = K(x, y) − K(c/d, a/b),

then L(c/d, a/b) = 0 and L(x, y) > 0 for all (x, y) ≠ (c/d, a/b). Furthermore, we have already seen that L̇ = 0 everywhere. Then (c/d, a/b) is stable.
Figure 12: (a) Lyapunov stability versus (b) strict Lyapunov stability.
2. Undamped pendulum. If we set

L(x, y) = E(x, y) = (1/2)y² + 1 − cos(x),

then L(0, 0) = 0 and L(x, y) > 0 in a punctured neighborhood of (x, y) = (0, 0). Furthermore, we have already seen that L̇ = 0 everywhere. Then (0, 0) is stable.
3. Consider the following system:

ẋ = (εx + 2y)(z + 1)
ẏ = (−x + εy)(z + 1)
ż = −z³

with ε ∈ R. There exists a unique fixed point (0, 0, 0). The linearization of the system at (0, 0, 0) is

(ε 2 0; −1 ε 0; 0 0 0).

The eigenvalues are 0 and ε ± i√2. So the origin is unstable if ε > 0. When ε ≤ 0, the origin is not hyperbolic and we cannot conclude from the linearization. The idea is to search for a Liapunov function for (0, 0, 0) of the form

L(x, y, z) = (1/2)(ax² + by² + cz²)

with a, b, c to be determined. For such an L we have

L̇ = axẋ + byẏ + czż
  = ax(εx + 2y)(z + 1) + by(−x + εy)(z + 1) − cz⁴
  = ε(ax² + by²)(z + 1) + (2a − b)xy(z + 1) − cz⁴.
We set a = 1, b = 2 and c = 1, so that if ε = 0, we have L̇ = −z⁴ ≤ 0, and the origin
is stable. If ε < 0, then we find that
L̇ = ε(x² + 2y²)(z + 1) − z⁴,
so that L̇ < 0 in the region O given by z > −1 (minus the origin). We conclude that
the origin is asymptotically stable in this case. This is our first example of a bifurcation:
at ε = 0, the origin changes stability (transcritical bifurcation).
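As a numerical sanity check, the following sketch (my own illustration, not from the notes; the value ε = −0.1, the step size and the initial condition are arbitrary choices) integrates the system with a Runge-Kutta scheme and verifies that L(x, y, z) = (1/2)(x² + 2y² + z²) decreases along the trajectory and that the solution decays toward the origin:

```python
# Sketch (not from the notes): check numerically that L = (x^2 + 2y^2 + z^2)/2
# decreases along solutions for the hypothetical choice eps = -0.1.

def vector_field(X, eps):
    x, y, z = X
    return [(eps * x + 2 * y) * (z + 1),
            (-x + eps * y) * (z + 1),
            -z ** 3]

def rk4_step(X, h, eps):
    def shift(P, K, c):
        return [p + c * k for p, k in zip(P, K)]
    k1 = vector_field(X, eps)
    k2 = vector_field(shift(X, k1, h / 2), eps)
    k3 = vector_field(shift(X, k2, h / 2), eps)
    k4 = vector_field(shift(X, k3, h), eps)
    return [X[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

def L(X):
    x, y, z = X
    return 0.5 * (X[0] ** 2 + 2 * X[1] ** 2 + X[2] ** 2)

X = [0.5, 0.5, 0.5]            # arbitrary initial condition with z > -1
values = [L(X)]
for _ in range(2000):          # integrate up to t = 20 with h = 0.01
    X = rk4_step(X, 0.01, -0.1)
    values.append(L(X))

assert all(b <= a + 1e-9 for a, b in zip(values, values[1:]))  # L decreases
assert values[-1] < 0.1 * values[0]                            # orbit decays
```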
2.6.4 Gradient systems
We now turn to a particular type of system for which Liapunov functions arise naturally.
Definition 2.17 (Gradient system). A gradient system on Rn is a system of the form
X′ = −grad V(X)    (2.11)
where V : Rn → R is a C∞ function and
grad V = (∂V/∂x1, · · · , ∂V/∂xn).
Proposition 2.8. The function V is a Liapunov function for the system (2.11). Moreover,
V̇(X) = 0 if and only if X is a fixed point.
Proof. By the chain rule we have
V̇(X) = (d/dt)V(X(t)) = DVX(t)(X′(t)) = grad V(X(t)) · (−grad V(X(t))),
then
V̇(X) = −‖grad V(X(t))‖² ≤ 0.
In particular V̇(X) = 0 if and only if grad V(X) = 0.
Corollary 2.1. If X∗ is an isolated minimum of V , then X∗ is asymptotically stable.
To understand a gradient flow geometrically, we look at the level surfaces of the function
V : Rn → R. These are the subsets V⁻¹(c) with c ∈ R. A point X ∈ V⁻¹(c) is a regular point if
grad V(X) ≠ 0; near such a point, V⁻¹(c) looks like a surface of dimension n − 1 (a curve for n = 2). If
all the points in V⁻¹(c) are regular points, then we call c a regular value of V. Furthermore,
grad V(X) is perpendicular to every tangent vector to the level set V⁻¹(c) at X, provided
that c is a regular value of V.
Remark 2.6. The critical points of V are the equilibrium points of the system.
Example: the nonlinear pendulum.
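For concreteness, here is a small sketch (my own example; the double-well potential V(x, y) = (x² − 1)² + y² is a hypothetical choice, not from the notes) showing V decreasing along an Euler-discretized gradient flow and the trajectory settling at a minimum of V:

```python
# Sketch: gradient system X' = -grad V(X) for the hypothetical potential
# V(x, y) = (x^2 - 1)^2 + y^2, whose minima are (1, 0) and (-1, 0).

def V(x, y):
    return (x ** 2 - 1) ** 2 + y ** 2

def grad_V(x, y):
    return 4 * x * (x ** 2 - 1), 2 * y

x, y, h = 0.3, 0.8, 0.005
history = [V(x, y)]
for _ in range(5000):
    gx, gy = grad_V(x, y)
    x, y = x - h * gx, y - h * gy   # explicit Euler step of X' = -grad V
    history.append(V(x, y))

# V plays the role of a Liapunov function: it decreases along the solution,
# and the trajectory converges to a critical point of V (here a minimum).
assert all(b <= a + 1e-12 for a, b in zip(history, history[1:]))
assert abs(abs(x) - 1.0) < 1e-3 and abs(y) < 1e-3
```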
2.7 Introduction to bifurcations of dimension 1
Definition 2.18. In dynamical systems, a bifurcation occurs when a small smooth change
made to the parameter values (the bifurcation parameters) of a system causes a sudden “qual-
itative” or topological change in its behaviour. Generally, at a bifurcation, the local stability
properties of equilibria, periodic orbits or other invariant sets change.
In this section, we consider scalar differential equations of the form
du/dt = f(u, µ).    (2.12)
Here the unknown u is a real-valued function of the time t, and the vector field f is real-
valued depending, besides u, upon a parameter µ. The parameter µ is the bifurcation
parameter. We suppose that equation (2.12) is well-defined and satisfies the hypotheses
of the Cauchy-Lipschitz theorem, such that for each initial condition there exists a unique
solution of equation (2.12). Furthermore we assume that the vector field is of class Ck, k ≥ 2,
in a neighborhood of (0, 0) satisfying:
f(0, 0) = 0,    ∂f/∂u(0, 0) = 0.    (2.13)
The first condition shows that u = 0 is an equilibrium of equation (2.12) at µ = 0. We are
interested in local bifurcations that occur in the neighborhood of this equilibrium when we
vary the parameter µ. The second condition is a necessary, but not sufficient, condition for
the appearance of local bifurcations at µ = 0.
Remark 2.7. Suppose that the second condition is not satisfied: ∂f/∂u(0, 0) ≠ 0. A direct
application of the implicit function theorem shows that the equation f(u, µ) = 0 possesses a
unique solution u = u(µ) in a neighborhood of 0, for small enough µ. In particular u = 0
is the only equilibrium of equation (2.12) in a neighborhood of 0 when µ = 0, and the same
property holds for µ small enough. Furthermore, the dynamics of (2.12) in a neighborhood of
0 is qualitatively the same for all sufficiently small values of the parameter µ: no bifurcation
occurs for small values of µ.
2.7.1 Saddle-node bifurcation
Theorem 2.10 (Saddle-node bifurcation). Assume that the vector field f is of class Ck, k ≥ 2, in a neighborhood of (0, 0) and satisfies:
∂f/∂µ(0, 0) =: a ≠ 0,    ∂²f/∂u²(0, 0) =: 2b ≠ 0.    (2.14)
The following properties hold in a neighborhood of 0 in R for small enough µ:
(i) if ab < 0 (resp. ab > 0) the differential equation has no equilibria for µ < 0 (resp. for
µ > 0),
35
(ii) if ab < 0 (resp. ab > 0) the differential equation possesses two equilibria u±(ε), ε = √|µ|, for µ > 0 (resp. µ < 0), with opposite stabilities. Furthermore, the map ε →
u±(ε) is of class Ck−2 in a neighborhood of 0 in R, and u±(ε) = O(ε).
Then for equation (2.12), a saddle-node bifurcation occurs at µ = 0.
A direct consequence of conditions (2.14) is that f has the expansion:
f(u, µ) = aµ + bu² + o(|µ| + u²) as (u, µ) → (0, 0).
Figure 13: Saddle-node bifurcation: bifurcation diagrams, in the (µ, u)-plane, of the trun-
cated equation (2.15) for different values of a and b. The solid lines represent branches of
stable equilibria, the dashed lines branches of unstable equilibria, and the arrows indicate
the sense of increasing time t.
Exercise 2.1. Consider the truncated equation
du/dt = aµ + bu².    (2.15)
Plot bifurcation diagrams in the (u, µ)-plane of this truncated equation for different values of
a and b.
Proof. Since a ≠ 0, we apply the implicit function theorem, which implies the existence of a
unique solution µ = g(u), for u close to 0, of the equation f(u, µ) = 0, where g is of class Ck, k ≥ 2, in a neighborhood of the origin with g(0) = 0. Its Taylor expansion is given by
µ = −(b/a)u² + o(u²).
Consequently, if abµ > 0 equation (2.12) has no equilibria, one equilibrium u = 0 if µ = 0,
and a pair of equilibria u±(µ) = ±√(−aµ/b) + o(√|µ|) if abµ < 0. Finally, in the case
abµ < 0, we have:
∂f/∂u(u±(µ), µ) = 2bu±(µ) + o(√|µ|),
so the equilibrium u−(µ) is attracting (asymptotically stable) when b > 0 and repelling
(unstable) when b < 0, whereas the equilibrium u+(µ) has the opposite stability properties.
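The saddle-node scenario is easy to observe numerically. The sketch below (an illustration of mine, with the hypothetical choice a = 1, b = −1, so that the truncated equation (2.15) reads du/dt = µ − u²) checks that for µ > 0 the equilibrium u₊ = √µ attracts nearby solutions while u₋ = −√µ repels them:

```python
# Sketch: du/dt = mu - u^2 (hypothetical case a = 1, b = -1, so ab < 0).
import math

def flow(u0, mu, h=0.001, steps=20000):
    """Integrate du/dt = mu - u^2 by explicit Euler and return u(t_final)."""
    u = u0
    for _ in range(steps):
        u += h * (mu - u ** 2)
    return u

mu = 0.25
u_plus = math.sqrt(mu)                     # stable equilibrium u_+ = 0.5

# solutions starting above u_- = -0.5 converge to u_+
assert abs(flow(1.0, mu) - u_plus) < 1e-4
assert abs(flow(-0.49, mu) - u_plus) < 1e-4

# just below the unstable equilibrium u_-, the solution moves away
assert flow(-0.6, mu, steps=200) < -0.6
```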
Figure 14: Extended phase portrait, in the (t, u)-plane, of the truncated equation (2.15) for
b > 0 and (a) aµ > 0, (b) µ = 0, (c) aµ < 0. Phase portraits at a saddle node bifurcation.
2.7.2 Pitchfork bifurcation
Theorem 2.11 (Pitchfork bifurcation). Assume that the vector field f is of class Ck, k ≥ 3,
in a neighborhood of (0, 0), that it satisfies conditions (2.13), and that it is odd with respect
to u:
f(−u, µ) = −f(u, µ).    (2.16)
Furthermore assume that:
∂²f/∂µ∂u(0, 0) =: a ≠ 0,    ∂³f/∂u³(0, 0) =: 6b ≠ 0.    (2.17)
The following properties hold in a neighborhood of 0 in R for small enough µ:
(i) if ab < 0 (resp. ab > 0) the differential equation has one trivial equilibrium u = 0 for
µ < 0 (resp. for µ > 0). This equilibrium is stable when b < 0 and unstable when
b > 0.
(ii) if ab < 0 (resp. ab > 0) the differential equation possesses the trivial equilibrium u = 0
and two nontrivial equilibria u±(ε), ε = √|µ|, for µ > 0 (resp. µ < 0), which are
symmetric: u+(ε) = −u−(ε). The map ε → u±(ε) is of class Ck−3 in a neighborhood of
0 in R, and u±(ε) = O(ε). The nontrivial equilibria are stable when b < 0 and unstable
when b > 0, whereas the trivial equilibrium has the opposite stability.
Then for equation (2.12), a pitchfork bifurcation occurs at µ = 0.
A direct consequence of conditions (2.13), (2.16) and (2.17) is that f has the Taylor expansion:
f(u, µ) = aµu + bu³ + o(|u|(|µ| + u²)) as (u, µ) → (0, 0).
Exercise. Consider the truncated equation du/dt = aµu + bu³.
• Plot bifurcation diagrams in the (u, µ)-plane of this truncated equation for different
values of a and b.
• Prove the theorem.
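As a quick numerical illustration (my own sketch, with the hypothetical coefficients a = 1, b = −1, i.e. the odd truncation du/dt = µu − u³): for µ < 0 only the stable trivial equilibrium survives, while for µ > 0 the origin is unstable and solutions reach the branches ±√µ.

```python
# Sketch of the pitchfork: du/dt = mu*u - u^3 (odd in u, as in (2.16)).
import math

def flow(u0, mu, h=0.001, steps=40000):
    u = u0
    for _ in range(steps):
        u += h * (mu * u - u ** 3)   # explicit Euler step
    return u

# mu < 0: u = 0 is the only equilibrium and it is stable
assert abs(flow(0.8, -0.5)) < 1e-6

# mu > 0: u = 0 is unstable; orbits select the branch on their side
mu = 0.25
assert abs(flow(0.1, mu) - math.sqrt(mu)) < 1e-4
assert abs(flow(-0.1, mu) + math.sqrt(mu)) < 1e-4
```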
Further readings on bifurcation can be found in the book of Kuznetsov [8].
3 Periodic orbits
In the previous chapter, we concentrated our efforts on the study of fixed points of systems
of differential equations. There exist, of course, other types of interesting solutions. Among
them are periodic solutions, or closed orbits, which we have already encountered in some examples:
the Lotka-Volterra system and the nonlinear pendulum. This chapter is dedicated to the study
of these solutions and will allow us to establish important connections between continuous
and discrete dynamical systems.
Figure 16: Periodic orbits for continuous and discrete systems.
3.1 Definitions
Definition 3.1 (Periodic point and periodic orbit). Let X(t) = φt(X0) be the solution of the
differential equation X′ = F(X) with initial condition X0. We define a periodic point of period T
to be a point X0 such that φT(X0) = X0 but φt(X0) ≠ X0 for all 0 < t < T. If X0 is
a periodic point, then the set γ = {φt(X0) | 0 ≤ t ≤ T} is called a
periodic orbit or closed orbit.
Remark 3.1. γ is called a periodic orbit because the flow is periodic in time: φt+T(X0) = φt(X0)
for all t. γ is called a closed orbit because the orbit closes up on itself after time T and
the whole orbit {φt(X0) | −∞ < t < +∞} is a closed set.
Periodic orbits in the plane can either be contained in a whole band of periodic orbits,
as for the pendulum equation, or be isolated in the sense that nearby orbits are not
periodic. The latter case is called a limit cycle.
Definition 3.2 (Limit cycle). A limit cycle is an isolated periodic orbit for a system of
differential equations in the plane.
Figure 17: Example of a limit cycle.
Definition 3.3 (Limit sets). A point Y ∈ Rn is an ω-limit point of a trajectory φt(X0)
provided that there exists a sequence of times tm going to infinity such that lim_{m→∞} φtm(X0) = Y.
This condition means that the orbit φt(X0) keeps coming back near Y infinitely often as t
goes to infinity. The set of all ω-limit points of X0 is called the ω-limit set and is denoted
ω(X0).
Similarly, a point Y ∈ Rn is an α-limit point of a trajectory φt(X0) provided that there exists
a sequence of times tm going to −∞ such that lim_{m→∞} φtm(X0) = Y. The set of all α-limit
points of X0 is called the α-limit set and is denoted α(X0).
Examples: If X0 is a fixed point, then ω(X0) = α(X0) = {X0}. If X0 is a periodic point
then ω(X0) = α(X0) = γ where γ is the periodic orbit.
Proposition 3.1. Assume that φt(X0) is a trajectory for all t ∈ R. Then the following
properties are true:
(i) ω(X0) and α(X0) are invariant sets: if Y ∈ ω(X0) (resp. α(X0)), then the orbit φt(Y )
is in ω(X0) (resp. α(X0)) for all t ∈ R.
(ii) ω(X0) and α(X0) are closed sets.
(iii) The limit set depends only on the trajectory and not on a particular point, so ω(X0) =
ω(φt(X0)) and α(X0) = α(φt(X0)) for all t ∈ R.
Just like a fixed point, a periodic orbit can have different types of stability.
Definition 3.4 (Orbitally stable). A periodic orbit γ = {φt(X0) | 0 ≤ t ≤ T} is called
orbitally stable provided that the following condition holds:
• given any ε > 0, there is a δ > 0 such that, if X0 is an initial condition within a
distance δ of γ, then φt(X0) is within a distance ε of γ for all t ≥ 0.
Definition 3.5 (Orbitally asymptotically stable). A periodic orbit γ = {φt(X0) | 0 ≤ t ≤ T} is called orbitally asymptotically stable provided that it is orbitally stable and that the
following condition also holds:
• there is a δ1 > 0 such that, for any initial condition X0 within a distance δ1 of γ, the
distance between φt(X0) and γ goes to zero as t goes to infinity (that is, ω(X0) = γ).
Example: consider the system of differential equations
x′ = y + x(1 − x² − y²)
y′ = −x + y(1 − x² − y²).
This example can be easily understood by using polar coordinates. If r² = x² + y², differentiating each side with respect to t we have:
rr′ = xx′ + yy′ = xy + x²(1 − r²) − xy + y²(1 − r²) = r²(1 − r²),
or
r′ = r(1 − r²).
Similarly, the angle θ satisfies tan θ = y/x, so again differentiating with respect to t we get
θ′/cos²(θ) = (xy′ − yx′)/x² = (−x² + xy(1 − r²) − y² − xy(1 − r²))/x² = −r²/x²,
and since cos²(θ) = x²/r², this gives
θ′ = −1.
Then the two equations are decoupled. From this, we deduce that the origin r = 0 is a fixed
point of the system and is repelling. The differential equation for r also has r = 1 as a fixed
point, and it is attracting. So the planar differential equation has an attracting periodic
orbit of radius one (which is a limit cycle!), and the origin is a repelling point.
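A quick numerical check of this phase portrait (a sketch of mine; step size, time horizon and initial conditions are arbitrary choices): integrating the planar system from one point inside and one point outside the unit circle, both trajectories end up at distance ≈ 1 from the origin.

```python
# Sketch: Euler integration of x' = y + x(1 - r^2), y' = -x + y(1 - r^2).
import math

def step(x, y, h):
    s = 1 - (x * x + y * y)          # the common factor 1 - r^2
    return x + h * (y + x * s), y + h * (-x + y * s)

final_radii = []
for x0, y0 in [(0.1, 0.0), (2.0, 1.0)]:   # inside and outside the cycle
    x, y = x0, y0
    for _ in range(40000):                # integrate up to t = 40
        x, y = step(x, y, 0.001)
    final_radii.append(math.hypot(x, y))

# both orbits are attracted to the limit cycle r = 1
assert all(abs(r - 1.0) < 1e-3 for r in final_radii)
```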
For all X0 = (x0, y0) ≠ (0, 0), ω(X0) = γ = {(x, y) ∈ R2 | x² + y² = 1}. If ‖X0‖ < 1, then
α(X0) = {(0, 0)}. If ‖X0‖ > 1, then α(X0) = ∅.
From now on, we restrict ourselves to planar dynamical systems.
3.2 Poincaré-Bendixson theorem for planar systems
Definition 3.6. A set A is called positively invariant provided that whenever X0 ∈ A, then
φt(X0) ∈ A for all t > 0. See figure 18.
Figure 18: Positively invariant set.
Theorem 3.1 (Poincaré-Bendixson). Consider a differential equation X′ = F(X) on R2.
1. Assume that F is defined on all of R2 and that a forward orbit (φt(X0))t≥0 is bounded.
Then ω(X0) either contains a fixed point or is a periodic orbit.
2. Assume that A is a closed and bounded subset of R2 that is positively invariant for
the differential equation. Assume also that F is defined at all points of A and has no
fixed point in A. Then, given any X0 ∈ A, the orbit φt(X0) is either periodic or tends
toward a periodic orbit as t goes to ∞, and ω(X0) equals this periodic orbit.
Corollary 3.1. A compact set K that is positively or negatively invariant contains either a
limit cycle or an equilibrium point.
Corollary 3.2. Let γ be a closed orbit and let U be the open region in the interior of γ. Then
U contains either an equilibrium point or a limit cycle.
Let γ be a closed orbit that forms the boundary of an open set U. Then U contains an equilibrium
point.
Corollary 3.3. If L is a strict Liapunov function for a planar system, then there are no
limit cycles.
Example: consider the differential system
x′ = y
y′ = −x + y(4 − x² − 4y²).
We use a bounding function (like a Liapunov function) L(x, y) = (1/2)(x² + 4y²). The time
derivative is L̇ = y²(4 − x² − 4y²), so that
L̇ ≥ 0 if 2L(x, y) = x² + 4y² = 1,
L̇ ≤ 0 if 2L(x, y) = x² + 4y² = 4.
These inequalities imply that the annulus
A = {(x, y) ∈ R2 | 1/2 ≤ L(x, y) ≤ 2}
is positively invariant. The only fixed point of the system is at the origin, which is not in A. The
Poincaré-Bendixson theorem implies that there is a periodic orbit in A.
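The trapping argument can be checked pointwise; the sketch below (my own verification code, not from the notes) samples both boundary ellipses of A and confirms the sign of L̇ = y²(4 − x² − 4y²):

```python
# Sketch: sign of Ldot on the two boundary ellipses of the annulus A.
import math

def Ldot(x, y):
    return y * y * (4 - x * x - 4 * y * y)

for k in range(360):
    t = 2 * math.pi * k / 360
    # inner boundary x^2 + 4y^2 = 1: Ldot = 3 y^2 >= 0,
    # so orbits cannot leave A through the inner ellipse
    assert Ldot(math.cos(t), 0.5 * math.sin(t)) >= -1e-12
    # outer boundary x^2 + 4y^2 = 4: Ldot <= 0,
    # so orbits cannot leave A through the outer ellipse
    assert Ldot(2 * math.cos(t), math.sin(t)) <= 1e-12
```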
3.3 Local sections
Definition 3.7. Suppose that F(X0) ≠ 0, that is, X0 is not a fixed point of the system X′ =
F(X). The transverse line at X0, denoted by ℓ(X0), is the straight line through X0 which is
perpendicular to the vector F(X0) based at X0.
Remark 3.2. Since F is continuous, the vector field is not tangent to ℓ(X0), at least in
some open interval of ℓ(X0) surrounding X0. We call such an open subinterval containing
X0 a local section S at X0. For all X ∈ S, F(X) ≠ 0.
If S is a local section, the solution through a point Z0 may reach X0 ∈ S at a certain time t0.
We show that, in a certain local sense, this time of first arrival at S is a continuous function
of Z0.
Proposition 3.2. Let S be a local section at X0 and suppose φt0(Z0) = X0. Let W be a
neighborhood of Z0. Then there exists an open set U ⊂ W containing Z0 and a differentiable
function τ : U → R such that τ(Z0) = t0 and φτ(X)(X) ∈ S for each X ∈ U .
3.4 Stability of periodic orbits and the Poincaré map
Definition 3.8 (Poincaré map). Given a periodic orbit γ, let X0 ∈ γ and let S be a local section
at X0. We define the Poincaré map as the first return map on S. This is the function
P : U ⊂ S → S, with U a neighborhood of X0, such that
P(X) = φτ(X)(X)
where τ(X) is the smallest positive time for which φτ(X)(X) ∈ S.
Remark 3.3. We always have P(X0) = X0. Close to a periodic orbit, we have transformed the
study of the continuous dynamical system X′ = F(X) into the study of the iteration map
Xn+1 = P(Xn), with Xn+1 = P^{n+1}(X0), where P is the Poincaré map. We can then use the stability
theorem for fixed points of discrete systems.
Theorem 3.2 (Stability of periodic orbits). Let X′ = F(X) be a planar system and suppose
that X0 lies on a closed orbit γ. Let P be a Poincare map defined on a neighborhood of X0
in some local section. If |P ′(X0)| < 1, then γ is orbitally asymptotically stable.
Figure 19: Poincaré map P.
Remark 3.4. All these results can be generalized to higher dimensions.
Example: let us consider again the system
x′ = y + x(1 − x² − y²)
y′ = −x + y(1 − x² − y²)
and prove that γ = {(x, y) ∈ R2 | x² + y² = 1} is orbitally asymptotically stable. We need to construct
the Poincaré map. We consider the first-return trajectories from the half line {(x, 0), x > 0}
to itself. In polar coordinates, this amounts to following solutions from θ = 0 to θ = −2π.
After a separation of variables, we have:
2 ∫_{r0}^{r(t)} dr/(r(1 − r²)) = ∫_0^t 2 dt
⇒ ln(r(t)²/(1 − r(t)²)) − ln(r0²/(1 − r0²)) = 2t
⇒ r(t) = (1 + e^{−2t}(r0^{−2} − 1))^{−1/2}.
And the solution for θ is
θ(t) = θ0 − t.
So it takes a length of time 2π to go once around the origin from θ0 = 0 to θ(2π) = −2π.
So, evaluating r(t) at t = 2π gives the radius after one revolution in terms of the original radius
r0 as:
r1 = r(2π) = P(r0) = (1 + e^{−4π}(r0^{−2} − 1))^{−1/2}.
From this, it is straightforward to see that P(1) = 1 and P(r0) ≠ r0 for all other initial values
(with P(r0) → 0 as r0 → 0). The derivative of the Poincaré map P is
P′(r) = e^{−4π}/(r² + e^{−4π}(1 − r²))^{3/2} = e^{−4π} P(r)³/r³ > 0.
Then P′(1) = e^{−4π} < 1 and γ is orbitally asymptotically stable.
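Iterating the explicit return map shows how fast orbits settle on γ; the sketch below (my own check, not from the notes) also estimates P′(1) by a finite difference and compares it with e^{−4π}:

```python
# Sketch: iterate P(r) = (1 + e^{-4 pi}(r^{-2} - 1))^{-1/2} and check P'(1).
import math

def P(r):
    return (1 + math.exp(-4 * math.pi) * (r ** -2 - 1)) ** -0.5

r = 0.2
for _ in range(3):
    r = P(r)                     # three returns to the section
assert abs(r - 1.0) < 1e-10      # essentially on the limit cycle already

eps = 1e-6                       # finite-difference estimate of P'(1)
deriv = (P(1 + eps) - P(1 - eps)) / (2 * eps)
assert abs(deriv - math.exp(-4 * math.pi)) < 1e-8
```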
4 Discrete dynamical systems
4.1 One-dimensional iteration maps
Figure 20: Staircase diagrams for stable fixed points.
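The staircase diagrams of Figure 20 can be generated mechanically; the sketch below (my own illustration, with a hypothetical contraction g(x) = 0.5x + 0.2 whose fixed point is x∗ = 0.4) records the cobweb segments produced by the iteration:

```python
# Sketch: the cobweb (staircase) construction for a one-dimensional map.

def cobweb(g, x0, n):
    """Return the polyline of the staircase diagram for n iterations of g."""
    points, x = [(x0, x0)], x0
    for _ in range(n):
        y = g(x)
        points.append((x, y))   # vertical segment up/down to the graph of g
        points.append((y, y))   # horizontal segment back to the diagonal
        x = y
    return points

g = lambda x: 0.5 * x + 0.2     # hypothetical contraction, fixed point 0.4
path = cobweb(g, 0.0, 25)

# the staircase converges onto the stable fixed point x* = 0.4
assert abs(path[-1][0] - 0.4) < 1e-6
```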
4.2 Linearized stability of fixed points
How do all the notions and results on stability translate to discrete dynamical systems?
First of all, let’s define a discrete dynamical system as
Xn+1 = F (Xn) (4.1)
where F : Rn → Rn is, at least, continuous. Given an initial condition X0 ∈ Rn, we naturally
have Xn+1 = F^{n+1}(X0). This allows us to define the flow of (4.1) as the one-parameter family
(F^n)n∈N. If F is invertible, then F^{−n} exists and the flow is defined for n ∈ Z.
A fixed point, or equilibrium, is a point invariant under the flow, so that we have the
following definition.
Definition 4.1. A point X∗ is a fixed point for (4.1) if X∗ = F (X∗).
We can also define local and global invariant stable and unstable manifolds. Let U be a
neighborhood of the fixed point X∗.
Ws_loc(X∗) = {X0 ∈ U : F^n(X0) → X∗ as n → +∞, and F^n(X0) ∈ U for all n ∈ N}    (4.2)
Wu_loc(X∗) = {X0 ∈ U : F^{−n}(X0) → X∗ as n → +∞, and F^{−n}(X0) ∈ U for all n ∈ N}    (4.3)
and
Ws(X∗) = ⋃_{n∈N} F^{−n}(Ws_loc(X∗))    (4.4)
Wu(X∗) = ⋃_{n∈N} F^n(Wu_loc(X∗)).    (4.5)
Remark 4.1. Contrary to the continuous case, invariant manifolds are made of discrete
points. See figures 21 and 22.
Figure 21: Invariant manifold of a saddle fixed point on the plane with positive multipliers.
Figure 22: Invariant manifold of a saddle fixed point on the plane with negative multipliers.
Definition 4.2. A fixed point X∗ for a map F is said to be stable if for all r > 0 there
exists a δ > 0 such that if X is in the open ball B(X∗, δ) of radius δ and center X∗, then
F^j(X) ∈ B(X∗, r) for all j ≥ 0.
A fixed point X∗ for a map F is said to be unstable if it is not stable.
A fixed point X∗ for a map F is called attracting (or asymptotically stable, or a sink) provided
that X∗ is stable and there is a δ1 > 0 such that if X ∈ B(X∗, δ1), then ‖F^j(X) − X∗‖ → 0
as j → +∞.
A fixed point X∗ for a map F is called repelling (or a source) provided that there is r1 > 0
such that, if X ≠ X∗ is in B(X∗, r1), then there exists j such that ‖F^j(X) − X∗‖ ≥ r1.
Theorem 4.1 (Linearized stability of fixed points). Consider a discrete dynamical system
Xn+1 = F(Xn) where F is C². Let λ1, · · · , λn be the eigenvalues of DFX∗.
1. If all the eigenvalues λj of DFX∗ have |λj| < 1, then X∗ is attracting.
2. If one eigenvalue λj0 of DFX∗ has |λj0| > 1, then X∗ is unstable.
3. If all the eigenvalues λj of DFX∗ have |λj| > 1, then X∗ is repelling.
Definition 4.3. A fixed point of (4.1) is called hyperbolic provided that all the eigenvalues
λj of DFX∗ satisfy |λj| ≠ 1.
Figure 23: Stable fixed points of one-dimensional systems.
Definition 4.4 (Periodic point). A point X∗ is a period-N point for F provided that F^N(X∗) =
X∗ but F^j(X∗) ≠ X∗ for 0 < j < N.
Definition 4.5. A period-N point X∗ for a map F is said to be stable if for all r > 0 there
exists a δ > 0 such that if X is in the open ball B(X∗, δ) of radius δ and center X∗, then
F^j(X) ∈ B(F^j(X∗), r) for all j ≥ 0.
A period-N point X∗ for a map F is said to be unstable if it is not stable.
A period-N point X∗ for a map F is called attracting (or asymptotically stable, or a sink)
provided that X∗ is stable and there is a δ1 > 0 such that if X ∈ B(X∗, δ1), then
‖F^j(X) − F^j(X∗)‖ → 0 as j → +∞.
A period-N point X∗ for a map F is called repelling (or a source) provided that there is r1 > 0
such that, if X ≠ X∗ is in B(X∗, r1), then there exists j such that ‖F^j(X) − F^j(X∗)‖ ≥ r1.
Theorem 4.2 (Linearized stability of period-N points). Consider a discrete dynamical system
Xn+1 = F(Xn) where F is C², with X∗ a period-N point. Let λ1, · · · , λn be the eigenvalues
of D(FN)X∗.
1. If all the eigenvalues λj of D(FN)X∗ have |λj| < 1, then X∗ is attracting.
2. If one eigenvalue λj0 of D(FN)X∗ has |λj0| > 1, then X∗ is unstable.
3. If all the eigenvalues λj of D(FN)X∗ have |λj| > 1, then X∗ is repelling.
Theorem 4.3. Theorems 2.5 and 2.7 are directly transferable to the discrete case.
Example: F(x) = −x³.
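For this example a short computation (my own sketch) confirms the picture: F′(0) = 0, so the origin is attracting by Theorem 4.1, while {1, −1} is a period-2 orbit of F, repelling since (F ∘ F)(x) = x⁹ has derivative 9 at ±1.

```python
# Sketch for F(x) = -x^3: the origin attracts, {1, -1} is a period-2 orbit.

def F(x):
    return -x ** 3

x = 0.9
for _ in range(10):
    x = F(x)                      # |x_{n+1}| = |x_n|^3: very fast decay
assert abs(x) < 1e-2              # orbits starting in (-1, 1) tend to 0

assert F(1.0) == -1.0 and F(F(1.0)) == 1.0    # period-2 orbit {1, -1}
assert F(F(0.5)) == 0.5 ** 9                  # (F o F)(x) = x^9
```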
4.3 Bifurcation of one-dimensional iteration maps
Consider a discrete dynamical system depending on a parameter:
xn+1 = f(xn, α), xn ∈ RN , α ∈ R
where f is smooth with respect to both x and α. There exist only three ways in which the
hyperbolicity condition of a fixed point can be violated: either a simple positive multiplier
approaches the unit circle and we have µ1 = 1 (see figure 24(a)), or a simple negative multiplier
approaches the unit circle and we have µ1 = −1 (see figure 24(b)), or a pair of simple complex
multipliers reaches the unit circle and we have µ1,2 = e^{±iθ0}, 0 < θ0 < π (see figure 24(c)).
Figure 24: Codimension 1 critical cases.
Definition 4.6. The bifurcation corresponding to the presence of µ1,2 = e^{±iθ0}, 0 < θ0 < π, is
called a Neimark-Sacker bifurcation. Notice that for the Neimark-Sacker bifurcation we need
N ≥ 2.
4.3.1 Saddle-node bifurcation
Consider the following one-dimensional dynamical system depending on one-parameter
x→ α + x+ x2 = f(x, α) = fα(x). (4.6)
The map fα is invertible for |α| small in a neighborhood of the origin. The system has at
α = 0 a nonhyperbolic fixed point x∗ = 0 with µ = fx(0, 0) = 1. The behavior of the system
near x = 0 for small |α| is shown in figure 25.
For α < 0 there are two fixed points in the system: x1,2(α) = ±√−α, the left of which is
stable, while the right one is unstable. For α > 0 there are no fixed points in the system.
While α crosses zero from negative to positive values, the two fixed points (stable and
unstable) “collide”, forming at α = 0 a fixed point with µ = 1, and disappear. This is a fold
(tangent) bifurcation in the discrete-time dynamical system.
There is, as usual, another way of presenting this bifurcation: plotting a bifurcation diagram
in the direct product of the phase and parameter spaces, namely, in the (x, α)-plane. The
Figure 25: Fold bifurcation.
Figure 26: Fixed point manifold.
fixed-point manifold x− f(x, α) = 0 is simply the parabola α = −x2 (see figure 26). Fixing
some α, we can easily determine the number of fixed points in the system for this parameter
value. At (x, α) = (0, 0) a map projecting the fixed-point manifold onto the α-axis has a
singularity of the fold type.
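The fixed-point count and the multipliers can be read off directly from α + x² = 0; a small sketch (my own code, not from the text):

```python
# Sketch: fixed points of f_alpha(x) = alpha + x + x^2 solve alpha + x^2 = 0.
import math

def fixed_points(alpha):
    if alpha > 0:
        return []                          # no fixed points past the fold
    if alpha == 0:
        return [0.0]                       # the nonhyperbolic point x = 0
    r = math.sqrt(-alpha)
    return [-r, r]                         # x_{1,2}(alpha) = -+ sqrt(-alpha)

def multiplier(x):
    return 1 + 2 * x                       # f_x(x, alpha) at a fixed point

assert fixed_points(0.1) == []
assert fixed_points(0.0) == [0.0]
x1, x2 = fixed_points(-0.09)
assert abs(multiplier(x1)) < 1 < abs(multiplier(x2))  # left stable, right not
```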
Lemma 4.1. The system
x→ α + x+ x2 +O(x3)
is locally topologically equivalent near the origin to the system
x→ α + x+ x2.
Theorem 4.4. Suppose that a one-dimensional system
xn+1 = f(xn, α), xn ∈ R, α ∈ R
with smooth f , has at α = 0 the fixed point x∗ = 0 with multiplier µ = fx(0, 0) = 1. Assume that
the following conditions are satisfied:
1. fxx(0, 0) ≠ 0,
2. fα(0, 0) ≠ 0.
Then there are smooth invertible coordinate and parameter changes transforming the system
into
η → β + η ± η² + O(η³).
Proof. Expand f in a Taylor series with respect to x at x = 0:
f(x, α) = f0(α) + f1(α)x+ f2(α)x2 +O(x3).
Two conditions are satisfied: f0(0) = f(0, 0) = 0 (fixed-point condition) and f1(0) =
fx(0, 0) = 1 (fold bifurcation condition). Since f1(0) = 1, we may write
f(x, α) = f0(α) + (1 + g(α))x+ f2(α)x2 +O(x3)
where g(α) is smooth and g(0) = 0.
We introduce a new variable (coordinate shift):
ξ = x + δ,
where δ = δ(α) is to be defined suitably. Writing x̃ = f(x, α) for the image of x and ξ̃ = x̃ + δ
for the image of ξ, this transformation yields
ξ̃ = f(ξ − δ, α) + δ.
Therefore
ξ̃ = (f0(α) − g(α)δ + f2(α)δ² + O(δ³)) + ξ + (g(α) − 2f2(α)δ + O(δ²))ξ
+ (f2(α) + O(δ))ξ² + O(ξ³).
Assume that f2(0) = (1/2)fxx(0, 0) ≠ 0. Then there is a smooth function δ(α) that annihilates
the parameter-dependent linear term in the above map for all sufficiently small |α|. Indeed,
the condition for that term to vanish can be written as