Page 1: Dynamic Optimization

Institut für Theoretische Volkswirtschaftslehre, Makroökonomik

Prof. Dr. Thomas Steger, Advanced Macroeconomics | Lecture | SS 2014

Dynamic Optimization

Dynamic Optimization (June 05, 2014)

No-Ponzi-Game condition
Method of Lagrange Multipliers
Dynamic Programming
Control Theory

Page 2: Dynamic Optimization


The No-Ponzi-Game condition

Finite life:

    a(T) ≥ 0,  i.e. a(T) < 0 excluded

Everyone must repay his/her debt, i.e. leave the scene without debt at the terminal point in time. The No-Ponzi-Game condition (NPGC) represents an equilibrium constraint that is imposed on every agent.

Infinite life:

    lim_{t→∞} e^{−rt} a(t) ≥ 0,  i.e. lim_{t→∞} e^{−rt} a(t) < 0 excluded

Assume Mr. Ponzi (and his dynasty) wishes to increase consumption today by x €. Consumption expenditures are financed by borrowing money. Debt repayment as well as interest payments are financed by increasing indebtedness further. Debt then evolves according to

    t     | 0   | 1        | 2         | …
    debt  | x € | x(1+r) € | x(1+r)² € | …

    discrete time:   d(t) = (1+r)^t x €
    continuous time: d(t) = e^{rt} x €

Noting that d(t) = −a(t), the above NPGC may be stated as

    lim_{t→∞} e^{−rt} d(t) ≤ 0,  i.e. lim_{t→∞} e^{−rt} d(t) > 0 excluded

If Mr. Ponzi increases consumption by x €, financed by employing his innovative financing scheme, debt evolves according to d(t) = e^{rt} x €, so that the present value of debt would remain positive, which is excluded:

    lim_{t→∞} e^{−rt} e^{rt} x € = x € > 0

Sidebar: Charles Ponzi (photo: 1910) became known in the 1920s as a swindler for his money-making scheme. He promised clients huge profits by buying assets in other countries and redeeming them at face value in the US. In reality, Ponzi was paying early investors using the investments of later investors. This type of scheme is now known as a "Ponzi scheme". (Wikipedia, June 3rd 2013)
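The arithmetic of the last step can be checked directly; a minimal sketch in Python (the interest rate r and the amount x below are illustrative assumptions, not values from the slides):

```python
import math

r = 0.05    # interest rate (illustrative assumption)
x = 100.0   # extra consumption today, in euros (illustrative assumption)

for t in [0, 10, 50]:
    debt = math.exp(r * t) * x       # d(t) = e^{rt} x: debt grows without bound
    pv = math.exp(-r * t) * debt     # present value e^{-rt} d(t)
    # the present value never falls: it stays at x > 0, which the NPGC excludes
    assert abs(pv - x) < 1e-9
```

However long Mr. Ponzi rolls the debt over, discounting exactly offsets compounding, so the present value of the debt remains x > 0.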

Page 3: Dynamic Optimization


The Method of Lagrange Multipliers (1)

Consider the problem of maximizing an intertemporal objective function extending over three periods:

    U = Σ_{t=0}^{2} β^t u(c_t) = u(c₀) + β u(c₁) + β² u(c₂)        (*)

    s.t.  x_{t+1} = f(x_t, c_t)  with x₀ given    (real economy)      (**)
          x₂ ≥ 0                                  (NPGC, financial economy)

where ct denotes the control variable, xt the state variable, and 0<β<1 is the discount factor.

The problem is to maximize the objective function (*) with respect to c₀, c₁, and c₂ subject to the constraint (**).

This problem can easily be solved by the Method of Lagrange Multipliers, which requires forming the Lagrangian function. (We focus on interior solutions.)

    ℒ = u(c₀) + β u(c₁) + β² u(c₂) − β λ₁ [x₁ − f(x₀, c₀)] − β² λ₂ [x₂ − f(x₁, c₁)]


where we treat x₀ as given and the variables c₀, c₁, c₂, x₁ and x₂ as unknowns.

Page 4: Dynamic Optimization


The Method of Lagrange Multipliers (2)

Differentiating ℒ w.r.t. the unknowns yields

    ∂ℒ/∂c₂ = β² u′(c₂) = 0

    ∂ℒ/∂c₁ = β u′(c₁) + β² λ₂ ∂f(x₁, c₁)/∂c₁ = 0

    ∂ℒ/∂c₀ = u′(c₀) + β λ₁ ∂f(x₀, c₀)/∂c₀ = 0

    ∂ℒ/∂x₂ = −β² λ₂ = 0

    ∂ℒ/∂x₁ = −β λ₁ + β² λ₂ ∂f(x₁, c₁)/∂x₁ = 0

1 2 1 21 1 1

0 0

Together with x_{t+1} = f(x_t, c_t), this system of equations defines c₀, c₁, c₂, x₁, x₂ as well as λ₁ and λ₂.

The FOC u′(c₂)=0, assuming a concave utility function with u′(c)>0 for all c, has the following interpretation. It basically says that consumption should be increased as far as possible.

That is, the entire wealth is to be consumed in the terminal period. This implication can be made explicit by taking boundary conditions into account.
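The logic of these FOCs can be illustrated numerically. A minimal sketch, assuming log utility u(c) = ln c, a linear technology f(x, c) = R(x − c), and illustrative parameter values (none of these specifics are from the slides). With the terminal constraint binding, all remaining wealth is consumed in the last period, and the FOCs reduce to the Euler equation u′(c_t) = βR u′(c_{t+1}):

```python
import math

# Illustrative assumptions (not from the slides)
beta, R, x0 = 0.95, 1.04, 1.0

# With u(c) = ln c the Euler equation gives c_{t+1} = beta * R * c_t, and the
# lifetime budget constraint c0 + c1/R + c2/R^2 = x0 pins down the level:
c0 = x0 / (1 + beta + beta**2)
c1 = beta * R * c0
c2 = beta * R * c1

# the state evolves as x_{t+1} = R * (x_t - c_t); terminal wealth is fully consumed
x1 = R * (x0 - c0)
x2 = R * (x1 - c1)
assert abs(x2 - c2) < 1e-12          # nothing is left after consuming c2

def U(a, b, c):
    return math.log(a) + beta * math.log(b) + beta**2 * math.log(c)

# any feasible reshuffling of consumption across periods lowers utility
eps, base = 1e-3, U(c0, c1, c2)
assert U(c0 + eps, c1 - R * eps, c2) < base   # shift resources from t=1 to t=0
assert U(c0 - eps, c1 + R * eps, c2) < base   # and the other way round
```

The perturbation checks mirror the FOC logic: at the optimum, first-order gains and losses from shifting consumption across periods cancel exactly, so any finite shift reduces utility.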

Page 5: Dynamic Optimization


The Method of Lagrange Multipliers (3)

Consider now the following stochastic intertemporal optimization problem:

    max_{{c_t}} E₀ Σ_{t=0}^{∞} β^t u(c_t)

    s.t.  x_{t+1} = f(x_t, c_t) + ε_{t+1},  x₀ given    (real economy)
          lim_{t→∞} x_t/(1+r)^t ≥ 0                     (NPGC, financial economy)

E₀ denotes the expected value given information at t=0, and ε_t is an i.i.d. random variable with E(ε_t) = 0 and V(ε_t) = σ_ε².

No-Ponzi Game condition (NPGC) says that a Ponzi-type financing scheme of consumption (increase consumption today by running into debt and finance repayment and interest payments permanently by further increasing indebtedness) is excluded.

At t=0 the agent decides on c₀, taking x₀ as given. The unknowns at this stage are c₀ and x₁. The associated Lagrangian function reads

    ℒ := E₀ { Σ_{t=0}^{∞} β^t u(c_t) − Σ_{t=0}^{∞} β^t λ_t [x_{t+1} − f(x_t, c_t) − ε_{t+1}] }

This formulation is in line with the RBC examples (Heer and Maussner, 2004, p. 37). It leads to a FOC of the form u′(c_t) = λ_t (this formulation is quite usual). If consumption today reduces the capital stock today (consumption at the beginning of the period), this formulation is appropriate.

Remark: The definition of ℒ differs from Chow, where it reads


    ℒ := E₀ { Σ_{t=0}^{∞} β^t u(c_t) − Σ_{t=0}^{∞} β^{t+1} λ_{t+1} [x_{t+1} − f(x_t, c_t) − ε_{t+1}] }

This formulation is in line with Chow (Chapter 2). It leads to a FOC of the form u′(c_t) = λ_{t+1} (this formulation is less usual). If consumption today reduces the capital stock tomorrow (consumption at the end of the period), this formulation is appropriate.

Page 6: Dynamic Optimization


The Method of Lagrange Multipliers (4)

For convenience, the Lagrangian for the problem at t=0 is restated:

    ℒ := E₀ { Σ_{t=0}^{∞} β^t u(c_t) − Σ_{t=0}^{∞} β^t λ_t [x_{t+1} − f(x_t, c_t) − ε_{t+1}] }

Let us write out the relevant parts of the Lagrangian function (i.e. those including c₀ and x₁):

    ℒ = E₀ { u(c₀) + β u(c₁) + … − λ₀ [x₁ − f(x₀, c₀) − ε₁] − β λ₁ [x₂ − f(x₁, c₁) − ε₂] − … }

The FOCs ∂ℒ/∂c₀ = 0 and ∂ℒ/∂x₁ = 0 read as follows:

    ∂ℒ/∂c₀ = E₀ { u′(c₀) + λ₀ ∂f(x₀, c₀)/∂c₀ } = 0

    ∂ℒ/∂x₁ = E₀ { −λ₀ + β λ₁ ∂f(x₁, c₁)/∂x₁ } = 0

Page 7: Dynamic Optimization


The Method of Lagrange Multipliers (5)

At t=1 the agent decides on c₁, taking x₁ as given. The unknowns at this stage are c₁ and x₂. The associated Lagrangian function reads

    ℒ₁ := E₁ { Σ_{t=1}^{∞} β^{t−1} u(c_t) − Σ_{t=1}^{∞} β^{t−1} λ_t [x_{t+1} − f(x_t, c_t) − ε_{t+1}] }

Let us write out the relevant parts of the Lagrangian function (i.e. those including c₁ and x₂):

    ℒ₁ = E₁ { u(c₁) + β u(c₂) + … − λ₁ [x₂ − f(x₁, c₁) − ε₂] − β λ₂ [x₃ − f(x₂, c₂) − ε₃] − … }

The FOCs ∂ℒ₁/∂c₁ = 0 and ∂ℒ₁/∂x₂ = 0 read as follows:

    ∂ℒ₁/∂c₁ = E₁ { u′(c₁) + λ₁ ∂f(x₁, c₁)/∂c₁ } = 0   →(generalizing)→   E_t { u′(c_t) + λ_t ∂f(x_t, c_t)/∂c_t } = 0

    ∂ℒ₁/∂x₂ = E₁ { −λ₁ + β λ₂ ∂f(x₂, c₂)/∂x₂ } = 0   →(generalizing)→   E_t { −λ_t + β λ_{t+1} ∂f(x_{t+1}, c_{t+1})/∂x_{t+1} } = 0

Solution strategy: {c₀, c₁, c₂, ...} were chosen sequentially, given the information x_t at time t (closed-loop policy), rather than choosing {c₀, c₁, c₂, ...} all at once at t=0 (open-loop policy).

The dynamic constraint x_{t+1} = f(x_t, c_t) + ε_{t+1} and the TVC lim_{t→∞} β^t E_t(λ_t x_t) = 0 belong to the set of FOCs.

Because xt is in the information set when ct is to be determined, the expectations are Et and not E0.

Page 8: Dynamic Optimization


Dynamic Programming: a simple example (1)

Consider an individual living for T periods. The intertemporal utility function is given by

    U(c₀, ..., c_T) = Σ_{t=0}^{T} β^t u(c_t)

The intertemporal budget constraint reads a_{t+1} = (a_t − c_t) R_t with R_t = 1 + r_t.

The optimal consumption plan {c₀, c₁,..., cT} can be determined by backward induction. At T-1 the individual has wealth aT-1 and the maximum utility may be described as follows

    V_{T−1}(a_{T−1}) := max_{0 ≤ c_{T−1} ≤ a_{T−1}} { u(c_{T−1}) + β u[(a_{T−1} − c_{T−1}) R_{T−1}] }        (*)

The value function V_{T−1}(a_{T−1}) is an indirect utility function, i.e. it gives the maximum attainable utility given wealth a_{T−1}.

The first-order condition for the problem on the RHS is

    u′(c_{T−1}) = β u′[(a_{T−1} − c_{T−1}) R_{T−1}] R_{T−1}   ⇒   c_{T−1} = c_{T−1}(a_{T−1})

Example: u(c) = ln c and β = 1/(1+ρ).

Substituting cT-1=cT-1(aT-1) back into equ. (*) gives VT-1=VT-1(aT-1).

In the example, the FOC becomes

    1/c_{T−1} = β R_{T−1} / [(a_{T−1} − c_{T−1}) R_{T−1}]   ⇒   c_{T−1} = a_{T−1}/(1+β) = [(1+ρ)/(2+ρ)] a_{T−1}

Page 9: Dynamic Optimization


Dynamic Programming: a simple example (2)

At T−2 the consumer's maximization problem may be expressed as

    V_{T−2}(a_{T−2}) = max_{c_{T−2}} { u(c_{T−2}) + β V_{T−1}[(a_{T−2} − c_{T−2}) R_{T−2}] }

This is just like equ. (*), but with "second-period" utility replaced by indirect utility V_{T−1}(a_{T−1}).

This equation is often referred to as the Bellman equation.

The first-order condition for the problem on the RHS is

    u′(c_{T−2}) = β V′_{T−1}[(a_{T−2} − c_{T−2}) R_{T−2}] R_{T−2}   ⇒   c_{T−2} = c_{T−2}(a_{T−2})

Substituting cT-2=cT-2(aT-2) back into the Bellman equation gives VT-2=VT-2(aT-2).

Hence, the entire sequence Vt=Vt(at) for all t∈{0,...,T} can be traced out.

Once we know Vt=Vt(at), optimal ct for all t∈{0,...,T} follows from the first-order conditions

    u′(c_t) = β V′_{t+1}(a_{t+1}) R_t


Finally, the time path at for all t∈{0,...,T} results from at+1=(at-ct)Rt with a₀ given.

Page 10: Dynamic Optimization


Dynamic Programming: a simple example (2a)

Consider an agent with u(c) = ln c living for 10 periods who is endowed with initial wealth a₀ = 1. The subsequent figure displays the time paths for wealth a_t and consumption c_t, assuming different constellations with regard to R and β.
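The figure itself is not reproduced here, but the underlying computation is easy to sketch. With u(c) = ln c the value function has the form V_t(a) = A_t + B_t ln a, so the backward induction of the previous slides collapses to a scalar recursion B_t = 1 + βB_{t+1} with B_T = 1, and c_t = a_t/B_t. The values of R and β below are illustrative guesses (the slides vary them across panels):

```python
# Backward induction for the T-period log-utility example.
# The slides use T = 10 and a0 = 1; R and beta are illustrative assumptions.
T, a0, R, beta = 10, 1.0, 1.05, 0.95

# With u(c) = ln c, guess V_t(a) = A_t + B_t * ln(a). The Bellman FOC
# 1/c = beta * B_{t+1} / (a - c) gives c_t = a_t / B_t with
# B_t = 1 + beta * B_{t+1} and B_T = 1 (consume everything in the last period).
B = [0.0] * (T + 1)
B[T] = 1.0
for t in range(T - 1, -1, -1):        # backward induction from T-1 down to 0
    B[t] = 1.0 + beta * B[t + 1]

# roll the wealth and consumption paths forward from a0
a = [a0] + [0.0] * T
c = [0.0] * (T + 1)
for t in range(T + 1):
    c[t] = a[t] / B[t]
    if t < T:
        a[t + 1] = (a[t] - c[t]) * R  # budget constraint a_{t+1} = (a_t - c_t) R
```

At t = T − 1 this reproduces the closed form c_{T−1} = a_{T−1}/(1+β) derived earlier, and along the whole path the Euler equation c_{t+1} = βR c_t holds.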


Page 11: Dynamic Optimization


Dynamic Programming: a more general treatment (1)

The setup

Consider an infinitely-lived, representative agent with intertemporal welfare given by (0 < β < 1)

    U_t = Σ_{τ=t}^{∞} β^{τ−t} u(c_τ)

The agent is assumed to solve the following problem:

    max_{{c_τ}} U_t   s.t.   x_{t+1} = f(x_t, c_t),   x_t ≥ 0

Notice that the state variable xt cannot be controlled directly but is under indirect control by choosing ct.

An optimal program is a sequence {ct, ct+1,...} which solves the above stated problem. The value of an optimal program is denoted by

    V(x_t) = max_{{c_τ}} U_t   s.t.   x_{t+1} = f(x_t, c_t)

The value function V(xt) shows the maximum level of intertemporal welfare Ut given the amount of xt.

Page 12: Dynamic Optimization


Dynamic Programming: a more general treatment (2)

Step #1: Bellman equation and first-order conditions

Since the objective function U_t is additively separable, it may be written as U_t = u(c_t) + β U_{t+1}, and

hence the maximization problem may be stated as

    V(x_t) = max_{c_t} { u(c_t) + β V(x_{t+1}) }

This is the Bellman equation. The problem with potentially infinitely many control variables was broken down into many problems with one control variable.


Notice that, compared to U_t = u(c_t) + β U_{t+1}, U_{t+1} is replaced by V(x_{t+1}). This says that we assume that the optimal program from tomorrow onwards is solved, and we worry about the maximization problem for today only.

The first-order condition for the problem on the RHS of Bellman equation, noting xt+1=f(xt,ct), is

The benefit of an increase in c_t consists in higher utility today, which is reflected here by marginal utility u′(c_t).

    u′(c_t) + β V′(x_{t+1}) f_c = 0        (**)

The cost consists in lower overall utility - the value function V(.) - tomorrow. The reduction in overall utility amounts to the change in xt+1, i.e. the derivative fc, times the marginal value of xt+1, i.e. V′(xt+1). As the disadvantage arises only tomorrow, this is discounted at the rate β.


Notice that this FOC, together with xt+1=f(xt,ct), implicitly defines ct = ct(xt). As we know very little about the properties of V(.) at this stage, however, we need to go through two further steps in order to eliminate V(.) from this FOC and obtain a condition that uses only functions of which properties like signs of first and second derivatives are known.
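Alternatively, the Bellman equation can be attacked directly on a grid, iterating V ← max{u(c) + βV(x′)} until convergence (value function iteration). A minimal sketch, assuming u(c) = ln c and the transition x_{t+1} = A x_t^α − c_t; these functional forms, the parameter values, and the grid are illustrative assumptions, not part of the slides:

```python
import math

# Illustrative assumptions (not from the slides): u(c) = ln c and the
# transition x' = A * x**alpha - c, for which the exact policy is known.
A, alpha, beta = 1.0, 0.3, 0.95

n = 120                                    # grid for the state variable x
grid = [0.02 + i * (0.5 - 0.02) / (n - 1) for i in range(n)]

V = [0.0] * n                              # initial guess V0 = 0
for _ in range(500):                       # apply the Bellman operator repeatedly
    V_new = []
    for x in grid:
        xa = A * x ** alpha                # resources available at state x
        best = -1e18
        for j, x_next in enumerate(grid):  # choose tomorrow's state on the grid
            c = xa - x_next                # consumption implied by that choice
            if c > 0:
                best = max(best, math.log(c) + beta * V[j])
        V_new.append(best)
    diff = max(abs(v1 - v0) for v1, v0 in zip(V_new, V))
    V = V_new
    if diff < 1e-8:                        # sup-norm convergence (contraction)
        break

# read off the (approximate) policy c(x) implied by the converged V
policy = []
for x in grid:
    xa = A * x ** alpha
    best, c_best = -1e18, 0.0
    for j, x_next in enumerate(grid):
        c = xa - x_next
        if c > 0:
            v = math.log(c) + beta * V[j]
            if v > best:
                best, c_best = v, c
    policy.append(c_best)
```

For this particular specification the exact policy is known, c(x) = (1 − αβ)A x^α, so the grid solution can be checked against it; with the grid above the two agree to roughly the grid spacing. This brute-force route sidesteps the analytical elimination of V(.) pursued in the next two steps.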

Page 13: Dynamic Optimization


Dynamic Programming: a more general treatment (3)

Step #2: Evolution of the costate variable V′(x_t)

At first we set up the maximized Bellman equation. This is obtained by replacing the control variable in the Bellman equation by the optimal level of the control variable according to the FOC, i.e. c_t = c_t(x_t):

    V(x_t) = u(c_t(x_t)) + β V[f(x_t, c_t(x_t))]

The derivative of V(x_t) w.r.t. x_t gives

    V′(x_t) = u′(c_t(x_t)) dc_t(x_t)/dx_t + β V′[f(x_t, c_t(x_t))] [f_x + f_c dc_t(x_t)/dx_t]

Inserting the first-order condition u′(c_t) = −β V′(x_{t+1}) f_c gives

    V′(x_t) = −β V′(x_{t+1}) f_c dc_t(x_t)/dx_t + β V′(x_{t+1}) [f_x + f_c dc_t(x_t)/dx_t]   ⇒   V′(x_t) = β V′(x_{t+1}) f_x        (***)

1 1

This equation is a difference equation in the costate variable V′(xt). The costate variable is also called the shadow price of the state variable xt.

V′(x_t) says how much an additional unit of x_t is valued: as V(x_t) gives the value of optimal behavior between t and the end of the planning horizon, V′(x_t) says by how much V(x_t) changes when x_t is changed marginally.

Equ. (***) describes how the shadow price of the state variable changes over time when the agent behaves optimally.

Page 14: Dynamic Optimization


Dynamic Programming: a more general treatment (4)

Step #3: Euler equation

The two conditions derived above are

    u′(c_t) + β V′(x_{t+1}) f_c = 0        (**)
    V′(x_t) = β V′(x_{t+1}) f_x            (***)

Noting (**) one may write

    V′(x_{t+1}) = −u′(c_t)/(β f_c)   and   V′(x_t) = −u′(c_{t−1})/(β f_c)

Inserting into equ. (***) gives

    −u′(c_{t−1})/(β f_c) = β [−u′(c_t)/(β f_c)] f_x   ⇒   u′(c_{t−1}) = β f_x u′(c_t)

This is the famous Euler equation:

    u′(c_{t−1}) = β f_{x_t} u′(c_t)

It represents a difference equation (DE) in marginal utility, which can be transformed into a DE in c_t.

If, for instance, the utility function is of the usual CIES shape, then the Euler equation implies

    (c_t/c_{t−1})^{−σ} β f_{x_t} = 1

The dynamic evolution is then determined by this equation together with x_{t+1} = f(x_t, c_t). Boundary conditions read: x₀ given and a terminal condition c_∞ = c̄.
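For completeness, the CIES step can be spelled out. With u(c) = c^{1−σ}/(1−σ) (a standard specification; the slides only name the CIES case), marginal utility is u′(c) = c^{−σ}, and the Euler equation becomes:

```latex
u'(c_{t-1}) = \beta f_{x_t} u'(c_t)
\;\Longleftrightarrow\;
c_{t-1}^{-\sigma} = \beta f_{x_t}\, c_t^{-\sigma}
\;\Longleftrightarrow\;
\left(\frac{c_t}{c_{t-1}}\right)^{-\sigma} \beta f_{x_t} = 1
\;\Longleftrightarrow\;
\frac{c_t}{c_{t-1}} = \left(\beta f_{x_t}\right)^{1/\sigma}.
```

Consumption growth is therefore governed by the gap between the marginal return f_{x_t} and the rate of time preference embedded in β.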

Page 15: Dynamic Optimization


Control Theory: Maximum Principle (1)

The dynamic problem under consideration can be stated as follows

    max_{{u(t)}} J = ∫_{t₀}^{T} I(x, u, t) dt + F(x_T, T)

    s.t.  ẋ = f(x, u, t)
          x(t₀) = x₀;   x(T) = x_T;   u(t) ∈ U

I(.), F(.), and f(.) are continuously differentiable functions. x is an n-dimensional vector of state variables and u is an r-dimensional vector of control variables (n ⋛ r). The control trajectory u(t) ∀ t ∈ [t₀, T] must belong to a given control set U and must be a piecewise continuous function of time.

The maximum principle can be considered as an extension of the method of Lagrange multipliersto dynamic optimization problems. To keep the exposition simple, it is assumed that the control trajectory u(t) is unconstrained.


Page 16: Dynamic Optimization


Control Theory: Maximum Principle (2)

Recall that the Method of Lagrange Multipliers requires us to first introduce Lagrange multipliers and then to set up the Lagrangian function. Subsequently, the saddle point of this function (maximizing w.r.t. the choice variables and minimizing w.r.t. the Lagrange multipliers) is determined. At first, we set up a (row) vector of so-called costate variables, which are the dynamic equivalent of the Lagrange multipliers:

    λ(t) = (λ₁(t), λ₂(t), ..., λ_n(t))

Next we set up the Lagrangian function:

    L = ∫_{t₀}^{T} I(x, u, t) dt + F(x_T, T) + ∫_{t₀}^{T} λ [f(x, u, t) − ẋ] dt
      = ∫_{t₀}^{T} { I(x, u, t) + λ [f(x, u, t) − ẋ] } dt + F(x_T, T)        (*)

By analogy to the static case, a saddle point of the Lagrangian would yield the solution. Here, however, the saddle point is in the space of functions, where (u*(t),λ*(t)) represent a saddle point if


    L[u(t), λ*(t)] ≤ L[u*(t), λ*(t)] ≤ L[u*(t), λ(t)]

Page 17: Dynamic Optimization


Control Theory: Maximum Principle (3)

We now consider the necessary conditions for a saddle point of (*). A change in the costate variable trajectory from λ(t) to λ(t) + Δλ(t), where Δλ(t) is any continuous function of time, would change the Lagrangian by

    ΔL = ∫_{t₀}^{T} Δλ [f(x*, u*, t) − ẋ*] dt

Hence, one set of first-order necessary conditions, resulting from ΔL = 0, reads as follows:

    ẋ* = f(x*, u*, t)

To develop the remaining necessary first-order conditions, we first rewrite (*) as follows

    L = ∫_{t₀}^{T} { I(x, u, t) + λ f(x, u, t) } dt − ∫_{t₀}^{T} λ ẋ dt + F(x_T, T)

Integration by parts (∫_a^b f g′ dx = f g |_a^b − ∫_a^b f′ g dx) gives

    ∫_{t₀}^{T} λ ẋ dt = λ x |_{t₀}^{T} − ∫_{t₀}^{T} λ̇ x dt

The preceding equation may then be expressed as

    L = ∫_{t₀}^{T} { I(x, u, t) + λ f(x, u, t) } dt − λ x |_{t₀}^{T} + ∫_{t₀}^{T} λ̇ x dt + F(x_T, T)

      = ∫_{t₀}^{T} { I(x, u, t) + λ f(x, u, t) + λ̇ x } dt − [λ(T) x(T) − λ(t₀) x(t₀)] + F(x_T, T)        (**)

Page 18: Dynamic Optimization


Control Theory: Maximum Principle (4)

We now define the following function, labeled the Hamiltonian function:

    H(x, u, λ, t) := I(x, u, t) + λ f(x, u, t)

Equ. (**) may then be written as

    L = ∫_{t₀}^{T} { H(x, u, λ, t) + λ̇ x } dt − [λ(T) x(T) − λ(t₀) x(t₀)] + F(x_T, T)

Now consider the effect of a change in the control trajectory from u(t) to u(t)+Δu(t), associated by a corresponding change in the state trajectory from x(t) to x(t)+Δx(t), which yields

    ΔL = ∫_{t₀}^{T} [ (∂H/∂u) Δu + (∂H/∂x + λ̇) Δx ] dt + [∂F/∂x_T − λ(T)] Δx_T

For a maximum it is necessary that ΔL=0 implying that

    ∂H/∂u = 0;   λ̇ = −∂H/∂x;   λ(T) = ∂F/∂x_T   ∀ t ∈ [t₀, T]

Page 19: Dynamic Optimization


Control Theory: Maximum Principle (5)

Some comments

To check for a maximum, sufficiency conditions must be considered. In this context the following propositions are helpful: provided that the Hamiltonian is jointly concave in the control and the state variable (Mangasarian sufficiency conditions), or provided that the maximized Hamiltonian is concave in the state variable (Arrow sufficiency conditions), the necessary conditions are also sufficient (Kamien and Schwartz, 1981, part II, sections 3 and 15).

Interpretation of the costate variables λ: ∂J*/∂x(t₀) = λ*(t₀). That is, the (initial) costate variables give the change in the optimal value of the objective functional due to changes in the corresponding initial state variables. This is analogous to the interpretation of the static Lagrange multipliers.

The (current-value) Hamiltonian can be viewed as net national product in utility terms (Solow, 2000, p. 127). This can be seen more clearly by writing the Hamiltonian as H = u(C) + λ·I(C), where I denotes investment and I(C) indicates that investment depends negatively on consumption. Note that the shadow price λ has the dimension "utility per unit of numeraire good". Hence, maximization of H w.r.t. C requires the following first-order necessary condition to hold: ∂H/∂C = 0.

The costate equation, λ̇ = ρλ − ∂H/∂K, can be viewed as an arbitrage condition or Fisher equation (Solow, 2001, p. 160). For this purpose rewrite the costate equation as λ̇ + ∂H/∂K = ρλ. The LHS shows the (marginal) benefit that can be obtained by devoting one unit of output to investment: this comprises an increase in "income in utility terms", ∂H/∂K, plus λ̇, accounting for a change in the value of capital. The RHS gives the opportunity costs of devoting one unit of output to investment rather than consumption. Notice that λ̇ = ρλ − ∂H/∂K implies λ(0) = ∫₀^∞ (∂H/∂K) e^{−ρt} dt (provided that appropriate boundary conditions hold).
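These comments can be made concrete with a standard Ramsey-type problem (a common textbook case, not taken from the slides): max ∫₀^∞ u(C) e^{−ρt} dt subject to K̇ = F(K) − C, so that investment is I(C) = F(K) − C. The current-value Hamiltonian and its conditions are:

```latex
H = u(C) + \lambda\,[F(K) - C],
\qquad
\frac{\partial H}{\partial C} = u'(C) - \lambda = 0,
\qquad
\dot{\lambda} = \rho\lambda - \frac{\partial H}{\partial K}
             = \rho\lambda - \lambda F'(K).
```

Substituting λ = u′(C) into the costate equation yields u″(C)Ċ = u′(C)[ρ − F′(K)], the continuous-time counterpart of the Euler equation derived earlier by dynamic programming.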

Page 20: Dynamic Optimization


The Method of Lagrange Multipliers (5a): unfinished

Consider the investment problem of a firm:

    max_{{I_t, L_t}} Σ_{t=0}^{∞} (1/(1+r))^t [ A K_t^α L_t^{1−α} − w_t L_t − I_t (1 + (θ/(1+η)) (I_t/K_t)^η) ]

The FOCs read

    ∂ℒ/∂L_t = (1−α) A K_t^α L_t^{−α} − w_t = 0

    ∂ℒ/∂I_t = −1 − θ (I_t/K_t)^η + q_t = 0   ⇒   I_t/K_t = [(q_t − 1)/θ]^{1/η}
