Jan 01, 2016
1

Linear Programming Problem

min f = c_1·x_1 + c_2·x_2 + … + c_n·x_n
s.t.  a_11·x_1 + a_12·x_2 + … + a_1n·x_n ≤ b_1
      a_21·x_1 + a_22·x_2 + … + a_2n·x_n ≤ b_2
      …
      a_m1·x_1 + a_m2·x_2 + … + a_mn·x_n ≤ b_m
bounds  l_i ≤ x_i ≤ u_i,  i = 1, …, n

2
which can be written in the form:-

min f(x) = c^T·x
s.t.  A·x ≤ b
      l ≤ x ≤ u
Assuming a feasible solution exists, it will occur at a corner of the feasible region, that is, at an intersection of constraints holding with equality. The simplex linear programming algorithm systematically searches the corners of the feasible region in order to locate the optimum.
3
Example: Consider the problem:

max f(x) = 8.1·x_1 + 10.8·x_2
s.t.  0.8·x_1 + 0.44·x_2 ≤ 24,000   (a)
      0.05·x_1 + 0.1·x_2 ≤ 2,000    (b)
      0.1·x_1 + 0.36·x_2 ≤ 6,000    (c)
      x_1 ≥ 0,  x_2 ≥ 0
4
Plotting contours of f(x) and the constraints produces:

[Figure: contours of f(x), f increasing towards the upper right, with constraints (a), (b) and (c) plotted over 0 ≤ x_1 ≤ 4×10^4, 0 ≤ x_2 ≤ 3×10^4; the solution is marked at the intersection of (a) and (b)]
5
The maximum occurs at the intersection of (a) and (b):-

0.8·x_1 + 0.44·x_2 = 24,000
0.05·x_1 + 0.1·x_2 = 2,000

giving x_1 = 26,207, x_2 = 6,897 and f = 8.1·x_1 + 10.8·x_2 = 286,760.

At the other intersections (corners) of the feasible region:

intersection     x                  f
x_1 = 0, (c)     (0, 16,667)        180,000
(b), (c)         (15,000, 12,500)   256,500
(a), (b)         (26,207, 6,897)    286,760   (max)
(a), x_2 = 0     (30,000, 0)        243,000
6
Solution using MATLAB Optimisation Toolbox Routine LP

f=[-8.1,-10.8];
A=[0.8 0.44;0.05 0.1;0.1 0.36];
b=[24000;2000;6000];
vlb=[0;0];
vub=[40000;30000];
[x,lambda,how]=lp(f,A,b,vlb,vub);
disp('solution x ='),disp(x)
disp('f ='),disp(-f*x)
disp('constraint values ='),disp(A*x-b)
disp('Lagrangian multipliers'),disp(lambda)
disp(how)

solution x =
   1.0e+004 *
    2.6207
    0.6897
f =
   2.8676e+005
constraint values =
    0.0000
    0.0000
 -896.5517
Lagrangian multipliers
    4.6552
   87.5172
         0
         0
         0
         0
         0
ok
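The `lp` routine belongs to an old release of the Optimisation Toolbox. As a cross-check, the same problem can be set up in Python with SciPy's `linprog` (an illustrative sketch, not part of the original listing; `linprog` minimises, so the profit coefficients are negated exactly as in the MATLAB code):

```python
from scipy.optimize import linprog

c = [-8.1, -10.8]                            # negated profit coefficients
A = [[0.8, 0.44], [0.05, 0.1], [0.1, 0.36]]  # constraint rows (a), (b), (c)
b = [24000, 2000, 6000]
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 40000), (0, 30000)])

print(res.x)     # approx. [26206.9, 6896.6]
print(-res.fun)  # approx. 286758.6, i.e. the slide's 286,760 after rounding
```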
DEMO
7
GENETIC ALGORITHMS
Refs:
Goldberg, D.E.: ‘Genetic Algorithms in Search, Optimization and Machine Learning’ (Addison-Wesley, 1989)
Michalewicz, Z.: ‘Genetic Algorithms + Data Structures = Evolution Programs’ (Springer-Verlag, 1992)
8
Genetic Algorithms are search algorithms based on the mechanics of natural selection and natural genetics. They start with a group of knowledge structures, usually coded into binary strings (chromosomes). These structures are evaluated within some environment and the strength (fitness) of a structure is defined. The fitness of each chromosome is calculated and a new set of chromosomes is then formulated by random selection and reproduction. Each chromosome is selected with a probability determined by its fitness and, hence, chromosomes with higher fitness values will tend to survive while those with lower fitness values will tend to become extinct.
9
The selected chromosomes then undergo certain genetic operations such as crossover, where chromosomes are paired and randomly exchange information, and mutation, where individual chromosomes are altered. The resulting chromosomes are re-evaluated and the process is repeated until no further improvement in overall fitness is achieved. In addition, there is often a mechanism to preserve the current best chromosome (elitism).
“Survival of the fittest”
10
Genetic Algorithm Flow Diagram

[Diagram: Initial Population and Coding → Selection ("survival of the fittest") → Crossover (mating) → Mutation → back to Selection, with Elitism preserving the best chromosome]
11
Components of a Genetic Algorithm (GA)
• a genetic representation
• a way to create an initial population of potential solutions
• an evaluation function rating solutions in terms of their “fitness”
• genetic operators that alter the composition of children during reproduction
• values of various parameters (population size, probabilities of applying genetic operators, etc)
12
Differences from Conventional Optimisation
• GAs work with a coding of the parameter set, not the parameters themselves
• GAs search from a population of points, not a single point
• GAs use probabilistic transition rules, not deterministic rules
• GAs have the capability of finding a global optimum within a set of local optima
13
Initial Population and Coding

Consider the problem:

max f(x),  x = (x_1, …, x_n)

where, without loss of generality, we assume that f is always positive (achieved by adding a positive constant if necessary). Also assume:

a_i ≤ x_i ≤ b_i,  i = 1, 2, …, n.

Suppose we wish to represent x_i to d decimal places. That is, each range needs to be cut into (b_i − a_i)·10^d equal sizes. Let m_i be the smallest integer such that

(b_i − a_i)·10^d ≤ 2^(m_i) − 1.

Then x_i can be coded as a binary string of length m_i. Also, to interpret the string, we use:

x_i = a_i + decimal('binary string')·(b_i − a_i)/(2^(m_i) − 1)
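Both formulas can be checked with a short sketch (illustrative Python; the helper names `chromosome_length` and `decode` are ours, not from the notes):

```python
def chromosome_length(a, b, d):
    """Smallest m with (b - a) * 10**d <= 2**m - 1."""
    m = 1
    while (b - a) * 10**d > 2**m - 1:
        m += 1
    return m

def decode(bits, a, b):
    """Interpret a binary string as a value in [a, b]."""
    m = len(bits)
    return a + int(bits, 2) * (b - a) / (2**m - 1)

# The worked example later in the notes uses [a, b] = [-1, 2] to d = 2 places:
print(chromosome_length(-1, 2, 2))           # 9
print(round(decode('011101101', -1, 2), 2))  # 0.39
```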
14
Each chromosome (population member) is represented by a binary string of length:

m = Σ_{i=1}^{n} m_i

where the first m_1 bits map x_1 into a value from the range [a_1, b_1], the next group of m_2 bits map x_2 into a value from the range [a_2, b_2], etc.; the last m_n bits map x_n into a value from the range [a_n, b_n].

To initialise a population, we need to decide upon the number of chromosomes (pop_size). We then initialise the bit patterns, often randomly, to provide an initial set of potential solutions.
15
Selection (roulette wheel principle)
We mathematically construct a ‘roulette wheel’ with slots sized according to fitness values. Spinning this wheel will then select a new population according to these fitness values with the chromosomes with the highest fitness having the greatest chance of selection. The procedure is:
16
1) Calculate the fitness value eval(v_i) for each chromosome v_i (i = 1, …, pop_size).

2) Find the total fitness of the population:

F = Σ_{i=1}^{pop_size} eval(v_i)

3) Calculate the probability of selection, p_i, for each chromosome v_i (i = 1, …, pop_size):

p_i = eval(v_i)/F

4) Calculate a cumulative probability q_i for each chromosome v_i (i = 1, …, pop_size):

q_i = Σ_{j=1}^{i} p_j
17
The selection process is based on spinning the roulette wheel pop_size times; each time we select a single chromosome for a new population as follows:

1) Generate a random number r in the range [0,1].

2) If r < q_1, select the first chromosome v_1; otherwise select the i-th chromosome v_i (2 ≤ i ≤ pop_size) such that q_{i−1} < r ≤ q_i.

Note that some chromosomes may be selected more than once: the best chromosomes get more copies and the worst die off: "survival of the fittest". All the chromosomes selected then replace the previous set to obtain a new population.
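Steps 1) to 4) and the wheel-spinning rule can be sketched together (illustrative Python; the ten fitness values are just an example population, and `roulette_select` is our own helper name):

```python
import random

def roulette_select(fitness, rng):
    """Spin the wheel once: return the index of the selected chromosome."""
    F = sum(fitness)                 # total fitness
    q = []                           # cumulative probabilities q_i
    total = 0.0
    for f in fitness:
        total += f / F               # p_i = eval(v_i)/F
        q.append(total)
    r = rng.random()                 # random number in [0,1]
    for i, qi in enumerate(q):
        if r <= qi:
            return i
    return len(fitness) - 1          # guard against rounding error

rng = random.Random(0)
fitness = [2.89, 3.63, 4.44, 4.04, 2.45, 2.48, 2.51, 3.53, 3.05, 3.06]
new_pop = [roulette_select(fitness, rng) for _ in range(10)]
print(new_pop)   # fitter indices tend to appear more often
```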
18
example:

[Figure: roulette wheel divided into 12 segments p_1, …, p_12, each segment's area proportional to p_i, i = 1, …, 12]
19
Crossover

We choose a parameter value p_c as the probability of crossover. Then the expected number of chromosomes to undergo the crossover operation will be p_c·pop_size. We proceed as follows (for each chromosome in the new population):-

1) Generate a random number r from the range [0,1].

2) If r < p_c, then select the given chromosome for crossover,

ensuring that an even number is selected. Now we mate the selected chromosomes randomly:-
20
For each pair of chromosomes we generate a random number pos from the range [1, m−1], where m is the number of bits in each chromosome. The number pos indicates the position of the crossing point. Two chromosomes:

(b_1 b_2 … b_pos b_{pos+1} … b_m)
(c_1 c_2 … c_pos c_{pos+1} … c_m)

are replaced by a pair of their offspring (children):

(b_1 b_2 … b_pos c_{pos+1} … c_m)
(c_1 c_2 … c_pos b_{pos+1} … b_m)
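One-point crossover is a two-line operation on bit strings (an illustrative sketch with two example 9-bit strings; `one_point_crossover` is our own helper name):

```python
def one_point_crossover(parent1, parent2, pos):
    """Swap the tails after bit position pos (1-based, 1 <= pos <= m-1)."""
    child1 = parent1[:pos] + parent2[pos:]
    child2 = parent2[:pos] + parent1[pos:]
    return child1, child2

# Two example 9-bit chromosomes, crossing over at bit 4:
print(one_point_crossover('000001110', '001001101', 4))
# ('000001101', '001001110')
```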
21
Mutation

We choose a parameter value p_m as the probability of mutation. Mutation is performed on a bit-by-bit basis, giving the expected number of mutated bits as p_m·m·pop_size. Every bit, in all chromosomes in the whole population, has an equal chance to undergo mutation, that is, change from a 0 to a 1 or vice versa. The procedure is:

For each chromosome in the current population, and for each bit within the chromosome:-

1) Generate a random number r from the range [0,1].

2) If r < p_m, mutate the bit.
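The bit-by-bit procedure above can be sketched as (illustrative Python; `mutate` is our own helper name):

```python
import random

def mutate(chromosome, pm, rng):
    """Flip each bit independently with probability pm."""
    return ''.join(
        ('1' if b == '0' else '0') if rng.random() < pm else b
        for b in chromosome
    )

rng = random.Random(1)
print(mutate('101011110', 0.04, rng))  # usually unchanged, occasionally one bit flips
```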
22
Elitism
It is usual to have a means for ensuring that the best value in a population is not lost in the selection process. One way is to store the best value before selection and, after selection, replace the poorest value with this stored best value.
23
Example

max f(x) = x·sin(10πx) + 1.0,  −1 ≤ x ≤ 2

[Figure: plot of f(x) over −1 ≤ x ≤ 2, with the global max marked near x = 1.85]
24
Let us work to a precision of two decimal places. Then the chromosome length m must satisfy:

(b − a)·10^2 = (2 − (−1))·10^2 = 300 ≤ 2^m − 1,  giving m = 9

Also let pop_size = 10, p_c = 0.25, p_m = 0.04.

To ensure that a positive fitness value is always achieved we will work on val = f(x) + 2.
25
Consider that the initial population has been randomly selected as follows (giving also the corresponding values of x, val, probabilities and accumulated probabilities):

      population            x      val    p     q
v1    0 1 1 1 0 1 1 0 1    0.39   2.89   0.09  0.09
v2    0 0 0 1 1 1 0 1 0   -0.66   3.63   0.11  0.20
v3*   1 1 0 1 0 0 0 1 0    1.45   4.44   0.14  0.34
v4    1 0 1 0 1 1 1 1 0    1.05   4.04   0.13  0.47
v5    0 0 1 0 0 1 1 0 1   -0.55   2.45   0.08  0.55
v6    0 0 0 1 0 0 1 1 0   -0.78   2.48   0.08  0.63
v7    0 0 0 0 0 1 1 1 0   -0.92   2.51   0.08  0.71
v8    0 0 0 1 1 1 0 0 0   -0.67   3.53   0.11  0.82
v9    0 1 1 1 1 1 1 1 1    0.50   3.05   0.09  0.91
v10   1 0 0 1 1 0 0 1 1    0.80   3.06   0.09  1.00

* fittest member of the population

Note for v1:

dec(v1) = dec(011101101) = 2^7 + 2^6 + 2^5 + 2^3 + 2^2 + 2^0 = 237

x = −1 + 237·(2 − (−1))/(2^9 − 1) = −1 + 237·3/511 = 0.39

val = f(0.39) + 2 = 2.89

F = Σ val = 32.08,  p_1 = 2.89/32.08 = 0.09
26
Selection

Assume 10 random numbers, range [0,1], have been obtained as follows:-

0.47  0.61  0.72  0.03  0.18  0.69  0.83  0.68  0.54  0.83

These will select:

r:        0.47  0.61  0.72  0.03  0.18  0.69  0.83  0.68  0.54  0.83
selects:  v4    v6    v8    v1    v2    v7    v9    v7    v5    v9

giving the new population:
27
Note that the best chromosome v3 in the original population has not been selected and would be destroyed unless elitism is applied.
Population before selection    selected as    Population after selection
v1    0 1 1 1 0 1 1 0 1        4              v1    1 0 1 0 1 1 1 1 0
v2    0 0 0 1 1 1 0 1 0        5              v2    0 0 0 1 0 0 1 1 0
v3    1 1 0 1 0 0 0 1 0        -              v3    0 0 0 1 1 1 0 0 0
v4    1 0 1 0 1 1 1 1 0        1              v4    0 1 1 1 0 1 1 0 1
v5    0 0 1 0 0 1 1 0 1        9              v5    0 0 0 1 1 1 0 1 0
v6    0 0 0 1 0 0 1 1 0        2              v6    0 0 0 0 0 1 1 1 0
v7    0 0 0 0 0 1 1 1 0        6,8            v7    0 1 1 1 1 1 1 1 1
v8    0 0 0 1 1 1 0 0 0        3              v8    0 0 0 0 0 1 1 1 0
v9    0 1 1 1 1 1 1 1 1        7,10           v9    0 0 1 0 0 1 1 0 1
v10   1 0 0 1 1 0 0 1 1        -              v10   0 1 1 1 1 1 1 1 1
28
Crossover (p_c = 0.25)

Assume the 10 random numbers:-

 1     2     3     4     5     6     7     8     9     10
0.07  0.94  0.57  0.36  0.31  0.14  0.60  0.07  0.07  1.00

These will select v1, v6, v8, v9 for crossover.

Now assume 2 more random numbers in the range [1,8] are obtained:- 7.20 and 3.35, giving crossing points at bits 8 and 4.
29
Mating v1 and v6, crossing over at bit 8:-

v1 = 1 0 1 0 1 1 1 1 | 0
v6 = 0 0 0 0 0 1 1 1 | 0

no change (the exchanged tails are identical)

Mating v8 and v9, crossing over at bit 4:-

v8 = 0 0 0 0 | 0 1 1 1 0
v9 = 0 0 1 0 | 0 1 1 0 1

produces

v8 = 0 0 0 0 0 1 1 0 1
v9 = 0 0 1 0 0 1 1 1 0
giving the new population:-
30
population before crossover      population after crossover
v1    1 0 1 0 1 1 1 1 0          1 0 1 0 1 1 1 1 0
v2    0 0 0 1 0 0 1 1 0          0 0 0 1 0 0 1 1 0
v3    0 0 0 1 1 1 0 0 0          0 0 0 1 1 1 0 0 0
v4    0 1 1 1 0 1 1 0 1          0 1 1 1 0 1 1 0 1
v5    0 0 0 1 1 1 0 1 0          0 0 0 1 1 1 0 1 0
v6    0 0 0 0 0 1 1 1 0          0 0 0 0 0 1 1 1 0
v7    0 1 1 1 1 1 1 1 1          0 1 1 1 1 1 1 1 1
v8    0 0 0 0 0 1 1 1 0          0 0 0 0 0 1 1 0 1
v9    0 0 1 0 0 1 1 0 1          0 0 1 0 0 1 1 1 0
v10   0 1 1 1 1 1 1 1 1          0 1 1 1 1 1 1 1 1

bits selected for mutation: bit 2 of v2, bit 8 of v9
31
mutation (p_m = 0.04)

Suppose a random number generator selects bit 2 of v2 and bit 8 of v9 to mutate, resulting in:-

population after mutation         x      val
v1    1 0 1 0 1 1 1 1 0          1.05   4.04
v2    0 1 0 1 0 0 1 1 0         -0.02   3.02
v3    0 0 0 1 1 1 0 0 0         -0.67   3.53
v4    0 1 1 1 0 1 1 0 1          0.39   2.89
v5    0 0 0 1 1 1 0 1 0         -0.66   3.63
v6    0 0 0 0 0 1 1 1 0         -0.92   2.51
v7    0 1 1 1 1 1 1 1 1          0.50   3.05
v8**  0 0 0 0 0 1 1 0 1         -0.92   2.37
v9    0 0 1 0 0 1 1 0 0         -0.55   2.45
v10   0 1 1 1 1 1 1 1 1          0.50   3.05

Total fitness F = 30.54

** weakest
32
Elitism

So far the iteration has resulted in a decrease in overall fitness (from 32.08 to 30.54). However, if we now apply elitism we replace v8 in the current population by v3 from the original population, to produce:

population after elitism          x      val
v1    1 0 1 0 1 1 1 1 0          1.05   4.04
v2    0 1 0 1 0 0 1 1 0         -0.02   3.02
v3    0 0 0 1 1 1 0 0 0         -0.67   3.53
v4    0 1 1 1 0 1 1 0 1          0.39   2.89
v5    0 0 0 1 1 1 0 1 0         -0.66   3.63
v6    0 0 0 0 0 1 1 1 0         -0.92   2.51
v7    0 1 1 1 1 1 1 1 1          0.50   3.05
v8    1 1 0 1 0 0 0 1 0          1.45   4.44
v9    0 0 1 0 0 1 1 0 0         -0.55   2.45
v10   0 1 1 1 1 1 1 1 1          0.50   3.05

Total fitness F = 32.61
33
resulting now in an increase of overall fitness (from 32.08 to 32.61) at the end of the iteration.
The GA would now start again by computing a new roulette wheel and repeating selection, crossover, mutation and elitism; repeating this procedure for a pre-selected number of iterations.
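Putting selection, crossover, mutation and elitism together, the whole cycle for this example can be sketched end-to-end (illustrative Python, not the MATLAB program used for the slides; the function name `ga` and its defaults are our own choices, following the example's parameters):

```python
import math, random

def decode(bits, a=-1.0, b=2.0):
    return a + int(bits, 2) * (b - a) / (2**len(bits) - 1)

def fitness(bits):
    x = decode(bits)
    return x * math.sin(10 * math.pi * x) + 1.0 + 2.0  # val = f(x) + 2 > 0

def ga(pop_size=30, m=9, pc=0.25, pm=0.04, iters=100, seed=0):
    rng = random.Random(seed)
    pop = [''.join(rng.choice('01') for _ in range(m)) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(iters):
        # selection: roulette wheel, probabilities proportional to fitness
        F = sum(fitness(v) for v in pop)
        pop = rng.choices(pop, weights=[fitness(v) / F for v in pop], k=pop_size)
        # crossover: pick chromosomes with probability pc, mate pairs
        chosen = [i for i in range(pop_size) if rng.random() < pc]
        if len(chosen) % 2:
            chosen.pop()
        for i, j in zip(chosen[::2], chosen[1::2]):
            pos = rng.randint(1, m - 1)
            pop[i], pop[j] = pop[i][:pos] + pop[j][pos:], pop[j][:pos] + pop[i][pos:]
        # mutation: flip each bit with probability pm
        pop = [''.join(('1' if b == '0' else '0') if rng.random() < pm else b
                       for b in v) for v in pop]
        # elitism: re-insert the stored best over the current worst
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        pop[worst] = best
        best = max(pop, key=fitness)
    return decode(best), fitness(best)

x, val = ga()
print(x, val)   # best decoded x and its fitness val
```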
34
Final results from a MATLAB GA program using parameters:

pop_size = 30, m = 22, p_c = 0.25, p_m = 0.01

[Figure: four panels at iteration 40: f(x) with the final population clustered at the global maximum; best and average fitness values against iteration; the distribution of x values; the chromosomes]
35
Tabulated results:

  x       val        x       val        x       val
1.8500  4.8500     1.8503  4.8502     1.8500  4.8500
1.8496  4.8495     1.8500  4.8500     1.8500  4.8500
0.3503  2.6497     1.8504  4.8502     1.8269  4.3663
1.8504  4.8502     1.8503  4.8502     1.8500  4.8500
1.8265  4.3520     1.8503  4.8502     1.8386  4.7222
1.8500  4.8500     1.8496  4.8495     1.8500  4.8500
1.8503  4.8502     1.8504  4.8502     1.8500  4.8500
1.8500  4.8500     1.8503  4.8502     1.8500  4.8500
1.8496  4.8495     1.8496  4.8495     1.8503  4.8502
1.8500  4.8500     1.8500  4.8500     1.8968  3.1880

The optimum val = 4.8502 at x = 1.8504.

Hence, remembering that val(x) = f(x) + 2:

max f(x) = x·sin(10πx) + 1.0 = 2.8502  at  x = 1.8504
36
DEMO
37
ON-LINE OPTIMISATION - INTEGRATED SYSTEM
OPTIMISATION AND PARAMETER ESTIMATION (ISOPE)
An important application of numerical optimisation is the determination and maintenance of optimal steady-state operation of industrial processes, achieved through selection of regulatory controller set-point values. Often, the optimisation criterion is chosen in terms of maximising profit, minimising costs, achieving a desired quality of product, minimising energy usage etc. The scheme is of a two-layer hierarchical structure:-
38
[Diagram: two-layer hierarchy. OPTIMISATION (based on steady-state model) sends set points to REGULATORY CONTROL (e.g. PID controllers), which sends control signals to the INDUSTRIAL PROCESS; measurements of the process outputs are fed back]

Note that the steady-state values of the outputs are determined by the controller set-points assuming, of course, that the regulatory controllers maintain stability.
39
The set points are calculated by solving an optimisation problem, usually based on the optimisation of a performance criterion (index) subject to a steady-state mathematical model of the industrial process. Note that it is not practical to adjust the set points directly using a ‘trial and error’ technique because of process uncertainty and non-repeatability of measurements of the outputs. Inevitably, the steady-state model will be an approximation of the real industrial process, the approximation being both in structure and parameters. We call this the model-reality difference problem.
40
ISOPE Principle

ROP - Real Optimisation Problem
• Complex
• Intractable

MOP - Model Based Optimisation Problem
• Simplified (e.g. Linear-Quadratic)
• Tractable

Can we find the correct solution of ROP by iterating on MOP in an appropriate way?

YES

By applying Integrated System Optimisation and Parameter Estimation - ISOPE
41
Iterative Optimisation and Parameter Estimation
In order to cope with model-reality differences, parameter estimation can be used giving the following standard two-step approach:-
42
1. Apply current set point values and, once transients have died away, take measurements of the real process outputs. Use these measurements to estimate the steady-state model parameters corresponding to these set point values. This is the parameter estimation step.
2. Solve the optimisation problem of determining the extremum of the performance index subject to the steady-state model with current parameter values. This is the optimisation step and the solution will provide new values of the controller set points.
The method is applied iteratively, repeating steps 1 and 2 until convergence is achieved.
43
Standard Two-Step Approach

[Diagram:
MODEL BASED OPTIMISATION:  min_c J(c, y)  s.t.  y = f(c, α)
    ↓ c (set points)                        ↑ α
REGULATORY CONTROL → REAL PROCESS:  y* = f*(c)
    ↓ y* (measurements)
PARAMETER ESTIMATION:  y(c, α) = y*(c)]
44
Example

min_c J = (y − 2)^2 + c^2

where  y = c + α   (model)
       y* = 2c + 1  (reality)

real solution:

J = (y* − 2)^2 + c^2 = (2c − 1)^2 + c^2

At a min, dJ/dc = 4(2c − 1) + 2c = 10c − 4 = 0, giving c = 0.4

and y* = 1.8, J* = 0.2.

Now consider the two-step approach:-

parameter estimation

y(c, α) = y*(c),  i.e.  c + α = 2c + 1,  giving  α = c + 1
45
optimisation

min_c J = (y − 2)^2 + c^2   s.t.  y = c + α, with α given

J(c) = (c + α − 2)^2 + c^2,  dJ/dc = 2(c + α − 2) + 2c = 0 for a min, when c = 1 − 0.5α

Hence, at iteration k:

α_k = c_k + 1
c_{k+1} = 1 − 0.5·α_k

i.e.  c_{k+1} = 1 − 0.5(c_k + 1) = 0.5 − 0.5·c_k

This first-order difference equation will converge (i.e. is stable) since |−0.5| < 1, giving

c_∞ = 0.5 − 0.5·c_∞  →  c_∞ = 0.3333

and y* = 1.6667, J* = 0.2222, α_∞ = 1.3333.
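The iteration c_{k+1} = 0.5 − 0.5·c_k can be checked numerically (an illustrative sketch of the standard two-step loop for this example; `standard_two_step` is our own name):

```python
# Standard two-step iteration for the example:
#   model y = c + a, reality y* = 2c + 1, J = (y - 2)**2 + c**2
def standard_two_step(c=0.0, iters=50):
    for _ in range(iters):
        a = c + 1.0           # parameter estimation: c + a = 2c + 1
        c = 1.0 - 0.5 * a     # optimisation: dJ/dc = 2(c + a - 2) + 2c = 0
    return c

c = standard_two_step()
y_real = 2 * c + 1
J_real = (y_real - 2)**2 + c**2
print(c, y_real, J_real)   # approx. 0.3333, 1.6667, 0.2222 (not the optimum 0.4, 1.8, 0.2)
```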
46
HENCE, THE STANDARD TWO STEP APPROACH DOES NOT CONVERGE TO THE CORRECT
SOLUTION!!!
[Figure: J* against c over 0 ≤ c ≤ 0.8, iterates 1-5 converging to the final solution]

Standard Two-Step Final Solution: c = 0.3333, J* = 0.2222
47
Integrated Approach
The standard two-step approach fails, in general, to converge to the correct solution because it does not properly take account of the interaction between the parameter estimation problem and the system optimisation problem.
Initially, we use an equality v = c to decouple the set points used in the estimation problem from those in the optimisation problem. We then consider an equivalent integrated problem:-
48
min_{c,v} J(c, y)            (model based optimisation)
s.t.  y = f(c, α)
      v = c                  (decoupling)
      y(v, α) = f(v, α) = y*(v)    (parameter estimation)

This is clearly equivalent to the real optimisation problem ROP:

min_c J(c, y*)
s.t.  y* = f*(c)
49
If we also write the model based optimisation problem as (by eliminating y in J(c, y)):

min_c F(c, α),  where  F(c, α) = J(c, f(c, α))

giving the equivalent problem:-

min_c F(c, α)
s.t.  v = c
      y(v, α) = y*(v)
50
Form the Lagrangian:

L(c, v, α, λ, μ) = F(c, α) + λ^T(v − c) + μ^T(y(v, α) − y*(v))

with associated optimality conditions:

∂L/∂c = ∂F/∂c − λ^T = 0                          (1)

∂L/∂v = λ^T + μ^T(∂y/∂v − ∂y*/∂v) = 0            (2)

∂L/∂α = ∂F/∂α + μ^T(∂y/∂α) = 0                   (3)

together with:-

v = c,  y(v, α) = y*(v)
51
Condition (1) gives rise to the modified optimisation problem:

min_c F(c, α) − λ^T·c

which is the same as:-

min_c J(c, y) − λ^T·c
s.t.  y = f(c, α)

and the modifier is given from (2) and (3):-

λ^T = −(∂F/∂α)(∂y/∂α)^-1(∂y*/∂v − ∂y/∂v)
52
Modified Two-Step Approach

[Diagram:
MODIFIER:  λ^T = −(∂F/∂α)(∂y/∂α)^-1(∂y*/∂v − ∂y/∂v),  where F(c, α) = J(c, f(c, α))
    ↓ λ
MODEL BASED OPTIMISATION:  min_c J(c, y) − λ^T·c  s.t.  y = f(c, α)
    ↓ c (set points)
REGULATORY CONTROL → REAL PROCESS:  y* = f*(c)
    ↓ y* (measurements)
PARAMETER ESTIMATION:  y(c, α) = y*(c)  →  α]
53
Modified two-step algorithm

The starting point of iteration k is an estimated set point vector v_k.

Step 1: parameter estimation
Apply the current set points v_k and measure the corresponding real process outputs y*_k. Also, compute the model outputs y_k = f(v_k, α) and determine the model parameters α_k such that y_k = y*_k.

Step 2: modified optimisation
(i) compute the modifier vector

λ_k^T = −(∂F(v_k, α_k)/∂α)(∂y(v_k, α_k)/∂α)^-1(∂y*(v_k)/∂v − ∂y(v_k, α_k)/∂v)
54
(ii) solve the modified optimisation problem

min_c J(c, y) − λ_k^T·c
s.t.  y = f(c, α_k)

to produce a new estimated set point vector c_k.

(iii) update the set points (relaxation):

v_{k+1} = v_k + K(c_k − v_k)

where the matrix K, usually diagonal, is chosen to regulate stability. (Note: if K = I, then v_{k+1} = c_k.)

We then repeat from Step 1 until convergence is achieved.
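Steps 1 and 2 can be sketched for the scalar example used earlier (model y = c + α, reality y* = 2c + 1), with a scalar gain g in place of K; an illustrative sketch, with `isope` our own name:

```python
# Modified two-step (ISOPE) iteration for the scalar example:
#   model y = c + a, reality y* = 2c + 1, J = (y - 2)**2 + c**2
def isope(v=0.0, g=0.4, iters=50):
    for _ in range(iters):
        a = v + 1.0                      # Step 1: estimation, c + a = 2c + 1 at c = v
        lam = -2.0 * (v + a - 2.0)       # modifier: -(dF/da)(dy/da)^-1(dy*/dv - dy/dv)
        c = 1.0 - 0.5 * a + 0.25 * lam   # modified optimisation: 2(c+a-2) + 2c - lam = 0
        v = v + g * (c - v)              # Step 2(iii): relaxation with gain g
    return v

v = isope()
print(v, 2 * v + 1)   # converges to the correct real optimum v = 0.4, y* = 1.8
```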
55
Acquisition of derivatives

The computation of the modifier λ requires the derivatives:-

∂F/∂α, ∂y/∂α, ∂y/∂v   [model based: not usually a problem]

∂y*/∂v   [depends on the real measurements: quite a problem]
56
Some methods for obtaining ∂y*/∂v:

(i) applying perturbations to the set points and computing the derivatives by finite differences;

(ii) using a dynamic model to estimate the derivatives (recommended method);

(iii) estimating the derivative matrix using Broyden's method (recommended method).
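Method (i) can be sketched in a few lines (illustrative Python; the "real process" here is the toy reality y* = 2v + 1 from the worked example, whereas in practice each evaluation is a plant experiment):

```python
def real_process(v):
    """Stand-in for steady-state plant measurements: y* = 2v + 1."""
    return 2.0 * v + 1.0

def dystar_dv(v, h=1e-3):
    """Central finite-difference estimate of dy*/dv at set point v."""
    return (real_process(v + h) - real_process(v - h)) / (2.0 * h)

print(dystar_dv(0.4))   # approx. 2.0
```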
57
Note:- Digital filtering can be used with effect to smooth the computed values of λ.

Note:- When λ = 0 we arrive back at the standard two-step approach. From the expression for λ we see that the standard two-step approach will only achieve the correct solution when the model structure is chosen such that:

∂y/∂c = ∂y*/∂c

a condition rarely achieved in practice.
58
Example: Consider the same example as used previously:

min_c J = (y − 2)^2 + c^2

where  y = c + α   (model)
       y* = 2c + 1  (reality)

where the real solution is:- c = 0.4, y* = 1.8, J* = 0.2.

parameter estimation
This is unchanged. Hence: α = v + 1

modifier

λ^T = −(∂F/∂α)(∂y/∂α)^-1(∂y*/∂v − ∂y/∂v)

where ∂y/∂α = 1, ∂y*/∂v = 2, ∂y/∂v = 1, and

F(v, α) = J(v, f(v, α)) = (v + α − 2)^2 + v^2,  ∂F/∂α = 2(v + α − 2)

Hence: λ = −2(v + α − 2)·(1)·(2 − 1) = −2(2v − 1),  using α = v + 1
59
modified optimisation

min_c (y − 2)^2 + c^2 − λc
s.t.  y = c + α,  with α, λ given

i.e.  F_m(c) = (c + α − 2)^2 + c^2 − λc

dF_m/dc = 2(c + α − 2) + 2c − λ = 0

giving  c = 1 − 0.5α + 0.25λ
60
At iteration k (with K = 1):

α_k = v_k + 1
λ_k = −2(2v_k − 1)
c_k = 1 − 0.5·α_k + 0.25·λ_k
v_{k+1} = c_k

i.e.  v_{k+1} = 1 − 0.5(v_k + 1) − 0.5(2v_k − 1) = 1 − 1.5·v_k

IF this difference equation converges, the result will be:-

v_∞ = 1 − 1.5·v_∞  →  v_∞ = 0.4   (Note: c_∞ = v_∞)
61
which is the correct solution.

However, it does not converge, since |−1.5| > 1 (the eigenvalue is outside the unit circle).

Hence, it is necessary to apply relaxation in the algorithm to produce the iterative scheme:-

α_k = v_k + 1
λ_k = −2(2v_k − 1)
c_k = 1 − 0.5·α_k + 0.25·λ_k
v_{k+1} = v_k + g(c_k − v_k) = (1 − g)·v_k + g·c_k

where g is a gain parameter (g > 0).
62
Then:-

v_{k+1} = (1 − g)·v_k + g(1 − 1.5·v_k) = (1 − 2.5g)·v_k + g

This will converge provided

|1 − 2.5g| < 1,  i.e.  0 < g < 0.8

(Hence, typically use g = 0.4.)

Then: v_∞ = (1 − 2.5g)·v_∞ + g  →  v_∞ = 0.4

Then: α_∞ = 1.4, c_∞ = 0.4, λ_∞ = 0.4

and: y* = 1.8, J* = 0.2
i.e. THE CORRECT REAL SOLUTION IS OBTAINED
63
ISOPE (THE MODIFIED TWO-STEP APPROACH) ACHIEVES THE CORRECT STEADY STATE REAL PROCESS OPTIMUM IN SPITE OF MODEL-REALITY DIFFERENCES
64

[Figure: J* against v over −2 ≤ v ≤ 2, iterates 1-6 diverging: Modified Two-Step, g = 1 (divergence)]
65

[Figure: J* against v over 0 ≤ v ≤ 0.8, iterates 1-5 converging to the final solution: Modified Two-Step, g = 0.2, final solution v = 0.3999, J* = 0.2]
66
[Figure: J* against v over 0 ≤ v ≤ 0.8, iterates 1-2: Modified Two-Step, g = 0.4, final solution v = 0.4, J* = 0.2]

When g = 0.4 convergence is achieved in a single iteration. This is because the eigenvalue is zero: |1 − 2.5g| = 0.
67
Example:

min_{c_1,c_2} J = (y_1 − 2)^2 + (y_2 − 3)^2 + c_1^2 + c_2^2

s.t.  y_1* = 2c_1 + 1
      y_2* = 3c_1 + 2c_2        (reality)

      y_1 = α_1·c_1 + 0.1
      y_2 = c_1 + α_2·c_2 + 5.5  (model)
68

[Figure: contours of J in the (v_1, v_2) plane over 0 ≤ v_1 ≤ 0.8, −0.2 ≤ v_2 ≤ 0.5, showing the standard two-step solution (J = 0.6563) and the modified two-step solution (J = 0.2353)]
69
DEMO