EE699 (06790)
ADAPTIVE NEUROFUZZY CONTROL
Instructor: Dr. YuMing Zhang
223 CRMS Building
Phone: 257-6262 Ext. 223
245-4518 (Home)
Email: [email protected]
Adaptive control and neurofuzzy control are two advanced methods for controlling time-varying
and non-linear processes. This course will begin with adaptive control of linear systems.
Nonlinear systems and related control issues will then be briefly reviewed. Neural networks and
fuzzy models will be described as general structures for approximating non-linear functions and
dynamic processes. Based on a comparison of the two methods, the neurofuzzy model will be
proposed as a promising technology for the control and adaptive control of nonlinear processes.
This course will emphasize basic concepts, design procedures, and practical examples. The
assignments include two design projects: adaptive control of a linear system, and neurofuzzy
modeling and adaptive control of a nonlinear system. A presentation on a selected subject is
required.
TR 03:30 PM-04:45 PM Funkhouser 313
ADAPTIVE NEUROFUZZY CONTROL
Introduction (1-2): actually 3, including 2 for the example
Adaptive Control of Linear Systems (3-5)
Identification of Linear Models (2-3)
Project 1
Control of Nonlinear Systems (1-2)
Neural and Fuzzy Control (1-2)
Neural and Fuzzy Modeling (4-6)
Project 2: Modeling
Adaptive Neurofuzzy Control Design (7-9) & Project 2: Control
Design Examples (2)
Presentation (2)
Final Examination (1)
Projects:
1. Adaptive control of a linear system
2. Neurofuzzy modeling and control of a non-linear system
CHAPTER I: INTRODUCTION
Primary References:
Y. M. Zhang and R. Kovacevic, "Neurofuzzy model-based control of weld fusion zone
geometry," IEEE Transactions on Fuzzy Systems, 6(3): 389-401, 1998.
R. Kovacevic and Y. M. Zhang, "Neurofuzzy model-based weld fusion state estimation," IEEE
Control Systems, 17(2): 30-42, 1997.
1. Linear Systems
Classical control, linear control (LQG, optimal control)
Model mismatch between the process and the nominal model.
Reasons:
- Substantial range of physical conditions, modeling error (the actual model is fixed but differs
from the nominal model) → robust control or adaptive control
- Time-varying model: varying physical conditions → robust control or adaptive control
Adaptive control: identify the real parameters of the model to minimize the mismatch
Robust control: allow for the mismatch
2. Non-linear Systems
Lack of unified models, a variety of models and design methods
Unified model structure for non-linear systems: neural network models and fuzzy models
Comparison:
Modeling: Disadvantage: large number of parameters
Advantages: adequate accuracy, simplicity
Control: Disadvantage: performance evaluation
Advantage: unified methods
Neural network vs. fuzzy methods
Modeling:
Neural networks: large number of parameters, but automated algorithms
Fuzzy models: moderate number of parameters, lack of automated algorithms
Control design:
Neural networks: large number of parameters
Fuzzy models: moderate number of parameters, time-consuming
Neurofuzzy Control
Compared with fuzzy logic: automated identification algorithm, easier design
Compared with neural networks: fewer parameters, faster adaptation
3. Adaptive Non-Linear Control
Acceptable convergence speed (number of parameters), general model
4. Example: Neurofuzzy Control of Arc Welding Process
CHAPTER 2: ADAPTIVE CONTROL
Primary Reference:
D. W. Clarke, “Self-tuning control,” in The Control Handbook edited by W. S. Levine. IEEE
Press, 1996.
1. Introduction
Most control theory assumes (1) a time-invariant, known (nominal) model, and
(2) no difference between the nominal and actual models.
Problems: initial model uncertainties (a difference between the nominal and actual model);
the actual model varies during the process.
Examples:
Solutions
Robust fixed controller
Adaptive controller (self-tuning controller)
For unknown but constant dynamics, identify the model during an initial period (auto-tuning or
self-tuning).
For a time-varying system, identify and update the model all the time (adaptive control).
Structure of self-tuning control system
2. Simple Methods
Industrial processes: $G(s) = \frac{K e^{-T_d s}}{1 + sT}$, parameters: $T, K, T_d$

Step input: $u(t) = U_0$ for $t \ge 0$; $u(t) = 0$ for $t < 0$
Step response: $y(t) = K U_0 (1 - e^{-(t - T_d)/T})$ for $t \ge T_d$
Identify parameters from the step response
[Figure: two step-response plots (amplitude 0-5 vs. time, over 0-6 s and 0-30 s) for
first-order-plus-dead-time processes with different parameters, illustrating how $T_d$, $K$,
and $T$ are read off the step response.]
$T \frac{dy}{dt} + y = K u(t)$

Integrating from $t_1$ to $t_2$:

$T \int_{t_1}^{t_2} \frac{dy}{dt} dt + \int_{t_1}^{t_2} y(t) dt = K \int_{t_1}^{t_2} u(t) dt$

$T [y(t_2) - y(t_1)] + \int_{t_1}^{t_2} y(t) dt = K \int_{t_1}^{t_2} u(t) dt$

Let $t_1 = jh$, $t_2 = kh$ ($h$: sampling period). Denote

$a_1 = y(t_2) - y(t_1)$
$a_2 = \int_{t_1}^{t_2} y(t) dt \approx h \sum_{i=j}^{k} y(ih)$
$b = \int_{t_1}^{t_2} u(t) dt \approx h \sum_{i=j}^{k} u(ih)$

Then

$a_1 T + a_2 = K b$

Over several intervals $l = 1, 2, \ldots$: $a_1(l) T + a_2(l) = K b(l)$, from which $T$ and $K$ are solved.
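As a quick illustration, the sketch below (Python with NumPy; the sampling period, true parameters, and choice of intervals are illustrative assumptions, not values from the notes) simulates a step response and recovers $T$ and $K$ from the integrated equation by least squares:

```python
import numpy as np

# Minimal sketch of the area method above: simulate T*dy/dt + y = K*u under a
# step input, then solve a1(l)*T + a2(l) = K*b(l) over several intervals.
h = 0.05                       # sampling period (assumed)
K_true, T_true = 5.0, 2.0      # "unknown" process parameters (assumed)
t = np.arange(0, 10, h)
u = np.ones_like(t)            # unit step input
y = np.zeros_like(t)
for i in range(1, len(t)):     # forward-Euler simulation of the process
    y[i] = y[i-1] + h / T_true * (K_true * u[i-1] - y[i-1])

rows, rhs = [], []
for j, k in [(0, 40), (0, 80), (0, 120), (20, 100), (40, 160)]:
    a1 = y[k] - y[j]                  # y(t2) - y(t1)
    a2 = h * np.sum(y[j:k])           # approximate integral of y
    b = h * np.sum(u[j:k])            # approximate integral of u
    rows.append([a1, -b])             # a1*T - b*K = -a2
    rhs.append(-a2)

(T_hat, K_hat), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(f"T = {T_hat:.3f}, K = {K_hat:.3f}")   # should approach T_true, K_true
```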
Control of a plant with unknown gain
- Plant: $y(t) = K u(t-1)$
- Set point: $w(t)$
- Control problem: at instant $t$, given the known $y(t), y(t-1), \ldots, u(t-1), u(t-2), \ldots$,
determine $u(t)$ such that $y(t+1)$ approaches $w(t+1)$.
-Controller:
$y(t+1) - y(t) = K u(t) - K u(t-1)$
$\Delta y(t+1) = K \Delta u(t)$ (where $\Delta y(t+1) = y(t+1) - y(t)$, $\Delta u(t) = u(t) - u(t-1)$)
$e(t) = w(t+1) - y(t)$
Desired: $\Delta y(t+1) = e(t)$
$K \Delta u(t) = e(t)$
$u(t) = u(t-1) + e(t)/K$
With the estimated gain $\hat{K}$: $u(t) = u(t-1) + e(t)/\hat{K}$
Control Algorithm
-On-line Identification
At instant $t-1$: the estimate of the gain is $\hat{K}(t-1)$
Predicted output: $\hat{y}(t|t-1) = \hat{K}(t-1) u(t-1)$
At instant $t$: $y(t)$ becomes available
The prediction error of $\hat{K}(t-1)$: $\varepsilon(t) = y(t) - \hat{y}(t|t-1)$
In order to eliminate the prediction error,
$\hat{K}(t) u(t-1) = \hat{K}(t-1) u(t-1) + \varepsilon(t)$
$\hat{K}(t) = \hat{K}(t-1) + \varepsilon(t)/u(t-1)$
On-line estimator: $\hat{K}(t) = \hat{K}(t-1) + \varepsilon(t)/u(t-1)$
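A minimal simulation of this controller/estimator pair (Python; the true gain, set point, and initial estimate are assumed values, not from the notes):

```python
# Adaptive control of the unknown-gain plant y(t) = K*u(t-1).
K = 2.5                  # true (unknown) plant gain, assumed
K_hat = 1.0              # initial gain estimate, assumed
w = 1.0                  # constant set point, assumed
u_prev, y = 0.0, 0.0

for t in range(1, 11):
    y = K * u_prev                      # plant: y(t) = K u(t-1)
    if u_prev != 0.0:
        eps = y - K_hat * u_prev        # prediction error
        K_hat = K_hat + eps / u_prev    # on-line estimator
    e = w - y                           # tracking error e(t) = w(t+1) - y(t)
    u = u_prev + e / K_hat              # incremental control law
    print(f"t={t}: y={y:.4f}, K_hat={K_hat:.4f}")
    u_prev = u
```

With a constant set point, the estimate locks on after the first informative control move and the output reaches $w$ at the following step.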
3. Plant Model
Model Structure, Parameterization, and Parameter Set
First-order system: $M(s) = \frac{b_0}{a_1 s + a_0}$, $\theta = \{a_0, a_1, b_0\}$
Second-order system: $M(s) = \frac{b_1 s + b_0}{a_2 s^2 + a_1 s + a_0}$, $\theta = \{a_0, a_1, a_2, b_0, b_1\}$
Uniqueness of Parameterization and Parameter Set
First-order system: $M(s) = \frac{b_0}{s + a_0}$, $\theta = \{a_0, b_0\}$
or $M(s) = \frac{K_1}{1 + a_1 s}$, $\theta = \{K_1, a_1\}$
Second-order system: $M(s) = \frac{b_1 s + b_0}{s^2 + a_1 s + a_0}$, $\theta = \{a_0, a_1, b_0, b_1\}$
or $M(s) = \frac{K_1 (1 + b_1 s)}{1 + a_1 s + a_2 s^2}$, $\theta = \{K_1, a_1, a_2, b_1\}$
1
Selection of Model Structure
Criteria: - Sufficiency
- Uniqueness
- Simplicity, Realization, Robustness
Linear System: a general model structure
Continuous time: $y(t) = \frac{B(s)}{A(s)} u(t - T_d) + d(t)$
Dead time $T_d$: mass transport, approximation of complex dynamics
Disturbance $d(t)$:
measurement noise, unmodeled dynamics, nonlinear effects, disturbance (load)
On-line identification:
Faster → faster tracking of the changed dynamics, less robust to noise
(more easily affected by noise)
Slower → slower tracking of the changed dynamics, more robust to noise
Pulse Response
Discrete time: $y(t) = \sum_{i=1}^{\infty} h_i u(t-i)$ (for an open-loop stable system)
Why not $y(t) = \sum_{i=0}^{\infty} h_i u(t-i)$? How to handle a dead time?
Truncation: $y(t) = \sum_{i=1}^{N} h_i u(t-i)$
Advantage: simplicity in algorithm design and computation
Disadvantage: large number of parameters

$y(t) = a y(t-1) + b u(t-1) \Rightarrow y(t) = \sum_{i=1}^{\infty} b a^{i-1} u(t-i)$

1% truncation: $a = 0.5$ → $i = 7$; $a = 0.9$ → $i = 44$
DARMA (deterministic autoregressive and moving average) difference equation
$y(t) + a_1 y(t-1) + a_2 y(t-2) + \ldots + a_{na} y(t-na) = b_1 u(t-1) + b_2 u(t-2) + \ldots + b_{nb} u(t-nb)$

$y(t) = -\sum_{j=1}^{na} a_j y(t-j) + \sum_{j=1}^{nb} b_j u(t-j)$

Backward-shift operator $z^{-1}$:
$z^{-1} y(j) = y(j-1)$, $z^{-n} y(j) = y(j-n)$
$z^{-1} u(j) = u(j-1)$, $z^{-n} u(j) = u(j-n)$

$A(z^{-1}) y(t) = B(z^{-1}) u(t)$

$y(t) = \frac{B(z^{-1})}{A(z^{-1})} u(t)$
Disturbance Modeling: zero-mean disturbance
Additive disturbance: $y(t) = \frac{B(z^{-1})}{A(z^{-1})} u(t) + d(t)$
Modeling of the disturbance: $d(t) = \frac{C(z^{-1})}{D(z^{-1})} e(t)$ (stationary random sequence)
Uncorrelated random sequence $e(t)$ (white noise):
$E\{e(t)\} = 0$
$E\{e(t_1) e(t_2)\} = \sigma_e^2$ if $t_1 = t_2$
$E\{e(t_1) e(t_2)\} = 0$ if $t_1 \ne t_2$
i.e., $E\{e(t) e(t-j)\} = 0$ for $j \ne 0$ ($j$: positive or negative integer)
Random Sequence: partially predictable
Uncorrelated Random Sequence: unpredictable
CARMA (controlled autoregressive and moving average) difference equation
$A(z^{-1}) y(t) = B(z^{-1}) u(t) + C(z^{-1}) e(t)$
$y(t) = -\sum_{j=1}^{na} a_j y(t-j) + \sum_{j=1}^{nb} b_j u(t-j) + \sum_{j=0}^{nc} c_j e(t-j)$, with $c_0 = 1$
Disturbance Modeling: non zero-mean disturbance
$d(t) = c + \frac{C(z^{-1})}{D(z^{-1})} e(t)$ ($c$: unknown constant or slowly changing)

$\Delta d(t) = \frac{\Delta C(z^{-1})}{D(z^{-1})} e(t)$ (difference operator $\Delta = 1 - z^{-1}$)

CARIMA model: $A(z^{-1}) y(t) = B(z^{-1}) u(t) + \frac{C(z^{-1})}{\Delta} e(t)$
4A. Least Squares Method
Model:
$y(t) + a_1 y(t-1) + \ldots + a_{na} y(t-na) = b_1 u(t-1) + \ldots + b_{nb} u(t-nb) + e(t)$

$y(t) = -\sum_{j=1}^{na} a_j y(t-j) + \sum_{j=1}^{nb} b_j u(t-j) + e(t) = x^T(t) \theta + e(t)$

$x(t) = (-y(t-1), -y(t-2), \ldots, -y(t-na), u(t-1), u(t-2), \ldots, u(t-nb))^T$
$\theta = (a_1, a_2, \ldots, a_{na}, b_1, b_2, \ldots, b_{nb})^T$

$e(t) = y(t) - x^T(t) \theta$
$\hat{e}(t) = y(t) - x^T(t) \hat{\theta}$

For $t = t_0, t_0+1, \ldots, t_0+N-1$ ($t_0 \ge \max(na, nb) + 1$):

$e(t_0) = y(t_0) - x^T(t_0) \theta$
$e(t_0+1) = y(t_0+1) - x^T(t_0+1) \theta$
.......
$e(t_0+N-1) = y(t_0+N-1) - x^T(t_0+N-1) \theta$

$E = Y - X \theta$
$E = (e(t_0), e(t_0+1), \ldots, e(t_0+N-1))^T$
$Y = (y(t_0), y(t_0+1), \ldots, y(t_0+N-1))^T$
$X = (x(t_0), x(t_0+1), \ldots, x(t_0+N-1))^T$, an $N \times (na+nb)$ matrix with rows $x^T(t_0), x^T(t_0+1), \ldots, x^T(t_0+N-1)$

Cost function: $J(\theta) = \sum_{j=0}^{N-1} e^2(t_0+j) = E^T E = (Y - X\theta)^T (Y - X\theta)$

Criterion for determining the optimal estimate: $\hat{\theta}^* = \arg\min_{\theta \in R^{na+nb}} J(\theta)$

$\frac{dJ}{d\theta} = -2 X^T Y + 2 X^T X \theta$

$X^T X \hat{\theta}^* = X^T Y$

$\hat{\theta}^* = (X^T X)^{-1} X^T Y$
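The batch estimate can be exercised directly; the sketch below (Python, NumPy; the model orders $na = nb = 1$, true parameters, and noise level are illustrative assumptions) builds $X$ and $Y$ from simulated data and solves the normal equations:

```python
import numpy as np

# Batch least squares theta* = (X^T X)^{-1} X^T Y for a first-order DARMA model
# y(t) + a1*y(t-1) = b1*u(t-1) + e(t).
rng = np.random.default_rng(0)
a1, b1 = -0.8, 0.5                 # true parameters (assumed)
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a1 * y[t-1] + b1 * u[t-1] + 0.05 * rng.standard_normal()

# regressor x(t) = (-y(t-1), u(t-1))^T, theta = (a1, b1)^T
X = np.column_stack([-y[:-1], u[:-1]])
Y = y[1:]
theta = np.linalg.solve(X.T @ X, X.T @ Y)   # normal equations
print("estimates of (a1, b1):", theta)       # should approach (-0.8, 0.5)
```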
4. Recursive Prediction Error Estimators
Recursive Estimators: why
Principle
$\hat{y}(t|t-1) = x^T(t) \hat{\theta}(t-1)$
$y(t) = x^T(t) \theta + e(t)$
Prediction error: $\varepsilon(t) = y(t) - \hat{y}(t|t-1) = x^T(t)(\theta - \hat{\theta}(t-1)) + e(t)$
$e(t)$: zero-mean and unpredictable
$\varepsilon(t) \to x^T(t)(\theta - \hat{\theta}(t-1))$
$x(t) \varepsilon(t) = x(t) x^T(t) (\theta - \hat{\theta}(t-1))$
$\theta = \hat{\theta}(t-1) + (x(t) x^T(t))^{-1} x(t) \varepsilon(t)$
(This is for illustration; $\det(x(t) x^T(t)) = 0$.)
$\hat{\theta}(t) = \hat{\theta}(t-1) + (x(t) x^T(t))^{-1} x(t) \varepsilon(t)$
Recursive estimator: $\hat{\theta}(t) = \hat{\theta}(t-1) + a(t) M(t) x(t) \varepsilon(t)$
$a(t)$: large/small → trade-off between estimation speed and noise sensitivity
A Recursive Estimator
- Cost function: $J(t) = (\theta - \hat{\theta}(0))^T S(0) (\theta - \hat{\theta}(0)) + \sum_{i=1}^{t} (y(i) - x^T(i)\theta)^2$
Function of the first term:
The role of the first term ~ time
- Recursive form:
$S(t) = S(t-1) + x(t) x^T(t)$
$\hat{\theta}(t) = \hat{\theta}(t-1) + S^{-1}(t) x(t) \varepsilon(t)$
Initials: $S(0)$, $\hat{\theta}(0)$
Effects of $S(0)$, $\hat{\theta}(0)$: $S(t)\hat{\theta}(t) = S(0)\hat{\theta}(0) + [S(t) - S(0)]\hat{\theta}^*(t)$
- Standard recursive form:
Gain vector: $k(t) = \frac{P(t-1) x(t)}{1 + x^T(t) P(t-1) x(t)}$
Parameter update: $\hat{\theta}(t) = \hat{\theta}(t-1) + k(t) \varepsilon(t)$
Covariance update: $P(t) = [I - k(t) x^T(t)] P(t-1)$
Initials: $P(0) = \sigma^2 I$ ($\sigma^2$ large) and $\hat{\theta}(0)$
Forgetting Factor
Why? Filter effect
Solution (continuous time): $\int_0^t e^{-(t-\tau)/T_f} (y(\tau) - x^T(\tau)\theta(t))^2 d\tau$ instead of $\int_0^t (y(\tau) - x^T(\tau)\theta(t))^2 d\tau$

Discrete time: $\sum_{i=1}^{t} \lambda^{t-i} (y(i) - x^T(i)\theta(t))^2$ instead of $\sum_{i=1}^{t} (y(i) - x^T(i)\theta(t))^2$
Recursive Equations:
Gain vector: $k(t) = \frac{P(t-1) x(t)}{\lambda + x^T(t) P(t-1) x(t)}$
Parameter update: $\hat{\theta}(t) = \hat{\theta}(t-1) + k(t) \varepsilon(t)$
Covariance update: $P(t) = [I - k(t) x^T(t)] P(t-1) / \lambda$
Forgetting factor: $0 < \lambda \le 1$
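A compact implementation of these three update equations (Python; the forgetting factor, initial covariance, simulated model, and the mid-run parameter jump are illustrative assumptions):

```python
import numpy as np

def rls_step(theta, P, x, y, lam=0.98):
    """One recursive least-squares update for y(t) = x(t)^T theta + e(t)."""
    eps = y - x @ theta                                 # prediction error
    k = P @ x / (lam + x @ P @ x)                       # gain vector
    theta = theta + k * eps                             # parameter update
    P = (np.eye(len(x)) - np.outer(k, x)) @ P / lam     # covariance update
    return theta, P

rng = np.random.default_rng(1)
theta_hat = np.zeros(2)
P = 1e4 * np.eye(2)                  # P(0) = sigma^2 * I with sigma^2 large
y_prev = 0.0
for t in range(300):
    u = rng.standard_normal()
    a = -0.9 if t < 150 else -0.5    # parameter jump for the estimator to track
    y = -a * y_prev + 0.5 * u + 0.02 * rng.standard_normal()
    x = np.array([-y_prev, u])       # regressor (-y(t-1), u(t-1))
    theta_hat, P = rls_step(theta_hat, P, x, y)
    y_prev = y
print("final estimate of (a, b):", theta_hat)   # should approach (-0.5, 0.5)
```

The large $P(0)$ makes early updates aggressive, and $\lambda < 1$ keeps the gain from collapsing so the estimator can follow the parameter jump.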
5. Predictive Models
Consider $y(t) = 0.9 y(t-1) + e(t)$

$y(t) = \frac{1}{1 - 0.9 z^{-1}} e(t) = \sum_{j=0}^{\infty} (0.9)^j z^{-j} e(t) = e(t) + 0.9 e(t-1) + 0.9^2 e(t-2) + 0.9^3 e(t-3) + \ldots$

- $k$-step-ahead prediction:

Model: $y(t+k) = e(t+k) + 0.9 e(t+k-1) + \ldots + 0.9^{k-1} e(t+1) + 0.9^k e(t) + 0.9^{k+1} e(t-1) + \ldots$

Prediction:
$\hat{y}(t+k|t) = 0.9^k e(t) + 0.9^{k+1} e(t-1) + 0.9^{k+2} e(t-2) + \ldots$
$= 0.9^k \{e(t) + 0.9 e(t-1) + 0.9^2 e(t-2) + \ldots\}$
$= \frac{0.9^k}{1 - 0.9 z^{-1}} e(t) = 0.9^k y(t)$

Prediction error:
$\tilde{y}(t+k) = y(t+k) - \hat{y}(t+k|t) = \sum_{j=0}^{k-1} 0.9^j e(t+k-j)$

Variance of the prediction error: $\sigma_e^2 \sum_{j=0}^{k-1} 0.9^{2j} = \sigma_e^2 \frac{1 - 0.9^{2k}}{1 - 0.9^2}$

Variance of $y$: $\frac{\sigma_e^2}{1 - 0.9^2}$

Variance of prediction error / variance of $y$ = $1 - 0.9^{2k}$
MA model: $y(t) = N(z^{-1}) e(t)$

$N(z^{-1}) = 1 + n_1 z^{-1} + \ldots + n_{k-1} z^{-(k-1)} + n_k z^{-k} + n_{k+1} z^{-(k+1)} + \ldots$
$= \{1 + n_1 z^{-1} + \ldots + n_{k-1} z^{-(k-1)}\} + z^{-k} \{n_k + n_{k+1} z^{-1} + \ldots\}$
$= N_k(z^{-1}) + z^{-k} N_k^*(z^{-1})$

Model: $y(t+k) = N(z^{-1}) e(t+k) = N_k(z^{-1}) e(t+k) + N_k^*(z^{-1}) e(t)$

$k$-step-ahead prediction: $\hat{y}(t+k|t) = N_k^*(z^{-1}) e(t) = \frac{N_k^*(z^{-1})}{N(z^{-1})} y(t)$
ARMA model: $y(t) = \frac{C(z^{-1})}{A(z^{-1})} e(t)$

$\frac{C(z^{-1})}{A(z^{-1})} = E(z^{-1}) + z^{-k} \frac{F(z^{-1})}{A(z^{-1})} = N_k(z^{-1}) + z^{-k} N_k^*(z^{-1})$

$E(z^{-1}) = N_k(z^{-1}) = e_0 + e_1 z^{-1} + \ldots + e_{k-1} z^{-(k-1)}$
$F(z^{-1}) = f_0 + f_1 z^{-1} + \ldots$

Diophantine identity: $C(z^{-1}) = A(z^{-1}) E(z^{-1}) + z^{-k} F(z^{-1})$
Degrees: $C$: $nc$; $A$: $na$; $E$: $ne = k - 1$; $F$: $nf = \max(na - 1, nc - k)$

$k$-step-ahead prediction:
$\hat{y}(t+k|t) = \frac{F(z^{-1})}{A(z^{-1})} e(t) = \frac{F(z^{-1})}{C(z^{-1})} y(t)$

Prediction error:
$\tilde{y}(t+k|t) = y(t+k) - \hat{y}(t+k|t) = E(z^{-1}) e(t+k)$
Example:
$A(z^{-1}) = 1 - 0.9 z^{-1}$
$C(z^{-1}) = 1 + 0.7 z^{-1}$
$k = 2$
$na = 1$, $nc = 1$, $ne = 1$, $nf = 0$

Diophantine identity:
$1 + 0.7 z^{-1} = (1 - 0.9 z^{-1})(e_0 + e_1 z^{-1}) + z^{-2} f_0$
$1 + 0.7 z^{-1} = e_0 + (e_1 - 0.9 e_0) z^{-1} + (f_0 - 0.9 e_1) z^{-2}$

Solution: $e_0 = 1$, $e_1 = 0.9 + 0.7 = 1.6$, $f_0 = 0.9 e_1 = 1.44$

Two-step-ahead prediction:
$\hat{y}(t+2|t) = \frac{F(z^{-1})}{C(z^{-1})} y(t) = \frac{1.44}{1 + 0.7 z^{-1}} y(t)$
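The identity can be checked numerically with one polynomial convolution (Python, NumPy):

```python
import numpy as np

# Check of C = A*E + z^{-2}*F for A = 1 - 0.9 z^{-1}, C = 1 + 0.7 z^{-1}, k = 2.
A = np.array([1.0, -0.9])     # coefficients of A(z^{-1})
E = np.array([1.0, 1.6])      # e0 + e1 z^{-1}
F = np.array([1.44])          # f0
AE = np.convolve(A, E)        # polynomial product A*E
AE[2] += F[0]                 # add the z^{-2} * F term
print(AE)                     # -> [1.0, 0.7, 0.0], i.e., C with a vanishing z^{-2} term
```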
6. Minimum-Variance (MV) Control
Model: $A(z^{-1}) y(t) = B(z^{-1}) u(t-k) + C(z^{-1}) e(t)$
Set point: $y_0 = 0$

Prediction equation:
$y(t+k) = \frac{B(z^{-1})}{A(z^{-1})} u(t) + \frac{C(z^{-1})}{A(z^{-1})} e(t+k)$

Diophantine identity: $E A + z^{-k} F = C$

$E A y(t+k) = C y(t+k) - z^{-k} F y(t+k) = E B u(t) + E C e(t+k)$

$y(t+k) = \frac{F}{C} y(t) + \frac{EB}{C} u(t) + E e(t+k)$

Prediction: $\hat{y}(t+k|t) = \frac{F}{C} y(t) + \frac{EB}{C} u(t)$

Prediction error: $\tilde{y}(t+k|t) = E(z^{-1}) e(t+k)$

MV control: $u(t) = -\frac{F(z^{-1})}{E(z^{-1}) B(z^{-1})} y(t)$

Potential problem: nonminimum-phase system

Example: $(1 - 0.9 z^{-1}) y(t) = 0.5 u(t-2) + (1 + 0.7 z^{-1}) e(t)$
$k = 2$, $E(z^{-1}) = 1 + 1.6 z^{-1}$, $F = 1.44$, $B = 0.5$

MV controller: $u(t) = -\frac{1.44/0.5}{1 + 1.6 z^{-1}} y(t)$
$u(t) = -1.6 u(t-1) - 2.88 y(t)$
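The sketch below (Python, NumPy; the noise level $\sigma$ is an assumed value) simulates this loop and compares the closed-loop output variance with the theoretical minimum $(1 + 1.6^2)\sigma^2$ implied by the prediction error $E(z^{-1})e(t+2)$:

```python
import numpy as np

# Simulation of the MV controller u(t) = -1.6 u(t-1) - 2.88 y(t) on the plant
# (1 - 0.9 z^{-1}) y(t) = 0.5 u(t-2) + (1 + 0.7 z^{-1}) e(t).
rng = np.random.default_rng(2)
sigma = 0.1                                   # noise standard deviation (assumed)
N = 2000
y, u = np.zeros(N), np.zeros(N)
e = sigma * rng.standard_normal(N)
for t in range(2, N):
    y[t] = 0.9*y[t-1] + 0.5*u[t-2] + e[t] + 0.7*e[t-1]   # plant
    u[t] = -1.6*u[t-1] - 2.88*y[t]                        # MV control law
print("closed-loop output variance:", np.var(y[100:]))
print("theoretical minimum:", (1 + 1.6**2) * sigma**2)    # var of E(z^-1) e
```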
7. Minimum-Variance Self-Tuning
Direct adaptive control: identify the controller model
Indirect adaptive control: identify the process model → design the controller
Indirect adaptive control: closed-loop identification
Direct Adaptive MV:
MV: $u(t) = -\frac{F(z^{-1})}{E(z^{-1}) B(z^{-1})} y(t)$, i.e., $F(z^{-1}) y(t) + G(z^{-1}) u(t) = 0$

$G(z^{-1}) = E(z^{-1}) B(z^{-1}) = g_0 + g_1 z^{-1} + \ldots + g_{k+nb-1} z^{-(k+nb-1)}$
$F(z^{-1}) = f_0 + f_1 z^{-1} + \ldots + f_{nf} z^{-nf}$ ($nf = \max(na - 1, nc - k)$)

Adaptive MV: $\hat{F}(z^{-1}) y(t) + \hat{G}(z^{-1}) u(t) = 0$

Direct estimation of $F$ and $G$:
$\hat{y}(t+k|t) = \frac{F}{C} y(t) + \frac{G}{C} u(t)$
$C(z^{-1}) \hat{y}(t+k|t) = F(z^{-1}) y(t) + G(z^{-1}) u(t)$
$C(z^{-1}) \hat{y}(t|t-k) = F(z^{-1}) y(t-k) + G(z^{-1}) u(t-k)$
$\hat{y}(t|t-k) = F(z^{-1}) y(t-k) + G(z^{-1}) u(t-k) - \sum_j c_j \hat{y}(t-j|t-k-j)$

$y(t) = F(z^{-1}) y(t-k) + G(z^{-1}) u(t-k) + E(z^{-1}) e(t) = x^T(t) \theta + \varepsilon(t)$

→ LS
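A minimal direct self-tuning loop for the earlier example plant (Python; the initial estimates and the projection keeping $\hat{g}_0$ positive are illustrative choices, and the plant parameters are assumed unknown to the controller):

```python
import numpy as np

# Direct adaptive MV control: fit y(t) = f0*y(t-2) + g0*u(t-2) + g1*u(t-3) + E e(t)
# by RLS and zero the prediction with u(t) = -(f0*y(t) + g1*u(t-1)) / g0.
rng = np.random.default_rng(3)
N, sigma = 5000, 0.1
theta = np.array([1.0, 1.0, 0.0])    # initial guesses for (f0, g0, g1)
P = 1e3 * np.eye(3)
y, u = np.zeros(N), np.zeros(N)
e = sigma * rng.standard_normal(N)
for t in range(3, N):
    y[t] = 0.9*y[t-1] + 0.5*u[t-2] + e[t] + 0.7*e[t-1]   # plant (unknown)
    x = np.array([y[t-2], u[t-2], u[t-3]])               # regressor
    k = P @ x / (1 + x @ P @ x)                          # RLS gain
    theta = theta + k * (y[t] - x @ theta)               # parameter update
    P = (np.eye(3) - np.outer(k, x)) @ P                 # covariance update
    f0, g0, g1 = theta
    g0 = max(g0, 0.1)            # crude projection: keep the leading gain positive
    u[t] = -(f0*y[t] + g1*u[t-1]) / g0                   # adaptive MV law
print("estimates (f0, g0, g1):", theta)
# should approach values giving the MV law, about (1.44, 0.5, 0.8) up to a common scale
```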
8. Pole-Placement (PP) Self-Tuning
9. Long-Range Predictive Control
Problems of MV:
(1) Non-minimum phase: $u(t) = -\frac{F(z^{-1})}{E(z^{-1}) B(z^{-1})} y(t)$ inverts $B(z^{-1})$, and is unstable when $B$ has zeros outside the unit circle
(2) Nominal delay < actual delay
Cause: control of the output at a single instant
Long-Range Predictive Control
Simultaneous control of the $y(t+j)$'s
Principle:
Future output = Free response + Forced response
Free response: function of known data
Forced response: function of control actions to be determined.
$A(z^{-1}) y(t) = B(z^{-1}) u(t)$

$y(t) = -\sum_{j=1}^{na} a_j y(t-j) + \sum_{j=1}^{nb} b_j u(t-j)$

Free response ($u(t+j) = 0$, $j = 0, 1, 2, \ldots$):
$p(t+1) = -\sum_{j=1}^{na} a_j y(t+1-j) + \sum_{j=2}^{nb} b_j u(t+1-j)$
$p(t+2) = -a_1 p(t+1) - \sum_{j=2}^{na} a_j y(t+2-j) + \sum_{j=3}^{nb} b_j u(t+2-j)$
......
$p(t+i) = -a_1 p(t+i-1) - a_2 p(t+i-2) - \ldots$ (for $i > nb$)

Prediction:
$y(t+1) = s_1 u(t) + p(t+1)$
$y(t+2) = s_1 u(t+1) + s_2 u(t) + p(t+2)$
.......
$y(t+i) = s_1 u(t+i-1) + s_2 u(t+i-2) + \ldots + s_i u(t) + p(t+i)$
($s_i$: step-response coefficients)
Simultaneous control of $y(t+1), y(t+2), \ldots, y(t+N)$
$Y = (y(t+1), y(t+2), \ldots, y(t+N))^T$
$U = (u(t), u(t+1), \ldots, u(t+N-1))^T$
$P = (p(t+1), p(t+2), \ldots, p(t+N))^T$
$W = (w(t+1), w(t+2), \ldots, w(t+N))^T$
$E = (e(t+1), e(t+2), \ldots, e(t+N))^T = (w(t+1) - y(t+1), w(t+2) - y(t+2), \ldots, w(t+N) - y(t+N))^T$

$G$: the lower-triangular matrix of step-response coefficients, with row $i$ equal to $(s_i, s_{i-1}, \ldots, s_1, 0, \ldots, 0)$

$Y = G U + P$

$\min_{U \in R^N} E^T E$:

$U = (G^T G)^{-1} G^T (W - P)$

Problems: excessive control actions, delay systems
Solutions: a smaller number of free control actions:
$u(t), u(t+1), \ldots, u(t+NU-1)$, $NU < N$
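A small numerical sketch (Python, NumPy) of the batch law $U = (G^T G)^{-1} G^T (W - P)$; the first-order model, the horizon, and the current output are assumed values, not from the notes:

```python
import numpy as np

# Long-range predictive control for the illustrative model
# y(t) = 0.9 y(t-1) + 0.5 u(t-1), so s_1 = 0.5 and s_{i+1} = 0.9 s_i + 0.5.
N = 5
s = np.zeros(N)                      # step-response coefficients s_1..s_N
s[0] = 0.5
for i in range(1, N):
    s[i] = 0.9 * s[i-1] + 0.5

G = np.zeros((N, N))                 # lower-triangular step-response matrix
for i in range(N):
    G[i, :i+1] = s[i::-1]            # row i: (s_{i+1}, s_i, ..., s_1)

y_t = 0.2                            # current output (assumed)
P_free = 0.9 * y_t * 0.9 ** np.arange(N)   # free response: p(t+i) = 0.9^i * 0.9 y(t)

W = np.ones(N)                       # set-point trajectory
U = np.linalg.solve(G.T @ G, G.T @ (W - P_free))
print("control sequence u(t..t+N-1):", U)
print("predicted outputs:", G @ U + P_free)   # should equal W
```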
ADAPTIVE CONTROL SYSTEM DESIGN
EE 699 Project I
Consider the following process:

$(1 + \alpha_1 z^{-1})(1 + \alpha_2 z^{-1}) y(t) = b_3 u(t-3) + b_4 u(t-4) + e(t)$

The parameters of the process are time-varying:

$\alpha_1 = 0.3 + 0.0002 t$
$\alpha_2 = 0.9 - 0.0002 t$
$b_3 = 1 + 0.0005 t$
$b_4 = 0.5 - 0.0002 t$
$e \sim N(0, 0.1)$
$(0 \le t \le 1000)$

Design an adaptive system to control $y(t)$ $(0 \le t \le 1000)$ for the set point $y_0 = 1$.
Report Requirements:
(1) Method selection
(2) System Design
(3) Program
(4) Simulation Results
(5) Results Analysis
(6) Conclusions
Report Due: Nov. 22, 1998
CHAPTER 3 FUZZY LOGIC SYSTEMS
Primary Reference: J. M. Mendel, "Fuzzy Logic Systems for Engineering: A Tutorial,"
Proceedings of the IEEE, 83(3): 345-377, 1995.
I. INTRODUCTION
A. Problem Knowledge
Objective Knowledge (mathematical models)
Subjective knowledge: linguistic information,
difficult to quantify using traditional mathematics
Importance of Subjective Knowledge: idea development, high level
decision making and overall design
Coordination of the two forms of knowledge:
- Model-based approach: objective information → mathematical models;
subjective information: linguistic statements → rules → FL-based quantification
- Model-free approach: numerical data → rules, combined with linguistic information
B. Purpose of the Chapter
Basic Parts for synthesis of FLS
FLS: numbers to numbers mapping: fuzzifier, defuzzifier
(inputs: numbers, output: numbers, mechanism: fuzzy logic)
C. What is a Fuzzy Logic System
Input-output characteristic: nonlinear mapping of an input vector into a scalar output
Mechanism: linguistic statement based IF-THEN inference or its mathematical variants
D. Potential of FLS's
E. Rationale for FL in Engineering
Lotfi Zadeh, 1965: imprecisely defined "classes" play an important role in human thinking
(fuzzy logic)
Lotfi Zadeh, 1973: Principle of Incompatibility
(engineering application)
F. Fuzzy Concepts in Engineering: examples
G. Fuzzy Logic System: A High-Level Introduction
Crisp inputs to crisp outputs mapping: y=f(x)
Four Components: Fuzzifier, rules, inference engine, defuzzifier
Rules (Collection of IF-THEN statements):
provided by experts or extracted from numerical data
Understanding of (1) linguistic variables ~ numerical values
(2) Quantification of linguistic variables: terms
(3) Logical connections: "or" "and"
(4) Implications: "IF A Then B"
(5) Combination of rules
Fuzzifier: crisp numbers → fuzzy sets that will be used to activate rules
Inference engine: maps fuzzy sets into fuzzy sets based on the rules
Defuzzifier: fuzzy sets → crisp output
II. SHORT PRIMER ON FUZZY SETS
A. Crisp Sets
- Crisp set A in a universe of discourse U:
Defined by: listing all of its members, or
specifying a condition by which $x \in A$
Notation: $A = \{x \mid x$ meets some condition$\}$
Membership function $\mu_A(x)$:
$\mu_A(x) = 1$ if $x \in A$
$\mu_A(x) = 0$ if $x \notin A$
Equivalence: set $A$ ⇔ membership function $\mu_A(x)$
Example 1: Cars: color, domestic/foreign, cylinders
B. Fuzzy Sets
Membership function $\mu_A(x) \in [0, 1]$: a measure of the degree of similarity
Example 1 (contd.): domestic/foreign
an element can reside in more than one fuzzy set, with
different degrees of similarity (membership)
Representation of fuzzy set
- $F = \{(x, \mu_F(x)) \mid x \in U\}$ (pairs of elements and membership values)
- $F = \int_U \mu_F(x)/x$ (continuous universe $U$), or
$F = \sum_U \mu_F(x)/x$ (discrete universe)
Example 2: F = integers close to 10
F= 0.1/7+0.5/8+0.8/9+1/10+0.8/11+0.5/12+0.1/13
(Elements with zero $\mu_F(x)$; subjectiveness of $\mu_F(x)$; symmetry)
C. Linguistic Variables
Linguistic variables: variables whose values are not given by numbers but by words or
sentences
u: name of a (linguistic) variable
x: numerical value of a (linguistic) variable, $x \in U$
(often interchangeable with u when u is a single letter)
Set of Terms T(u): linguistic values of a (linguistic) variable
Specification of terms: fuzzy sets (names of the terms and membership functions)
Example 3: Pressure
- Name of the variable: pressure
- Terms: T(pressure) = {weak, low, okay, strong, high}
- Universe of discourse U = [100 psi, 2300 psi]
- Weak: below 200 psi; low: close to 700 psi; okay: close to 1050 psi;
strong: close to 1500 psi; high: above 2200 psi
Linguistic descriptions → membership functions
D. Membership Functions
$\mu_F(x)$
Examples
Number of membership functions (terms) → resolution vs. computational complexity
Overlap (glass can be partially full and partially empty at the same time)
E. Some Terminology
The support of a fuzzy set
Crossover point
Fuzzy singleton: a fuzzy set whose support is a single point with unity membership function.
F. Set Theoretic Operations
F1. Crisp Sets
A and B: subsets of U
Union of $A$ and $B$: $A \cup B$
$\mu_{A \cup B}(x) = 1$ if $x \in A$ or $x \in B$; $0$ if $x \notin A$ and $x \notin B$
Intersection of $A$ and $B$: $A \cap B$
$\mu_{A \cap B}(x) = 1$ if $x \in A$ and $x \in B$; $0$ if $x \notin A$ or $x \notin B$
Complement of $A$: $\bar{A}$
$\mu_{\bar{A}}(x) = 1$ if $x \notin A$; $0$ if $x \in A$
$\mu_{A \cup B}(x) = \max[\mu_A(x), \mu_B(x)]$
$\mu_{A \cap B}(x) = \min[\mu_A(x), \mu_B(x)]$
$\mu_{\bar{A}}(x) = 1 - \mu_A(x)$

Union and intersection: commutative, associative, and distributive

De Morgan's laws: $\overline{A \cup B} = \bar{A} \cap \bar{B}$, $\overline{A \cap B} = \bar{A} \cup \bar{B}$

The two fundamental (Aristotelian) laws of crisp set theory:
- Law of Contradiction: $A \cap \bar{A} = \emptyset$
- Law of Excluded Middle: $A \cup \bar{A} = U$
F2. Fuzzy Sets
Fuzzy set $A$: $\mu_A(x)$
Fuzzy set $B$: $\mu_B(x)$
Operations on fuzzy sets:
$\mu_{A \cup B}(x) = \max[\mu_A(x), \mu_B(x)]$
$\mu_{A \cap B}(x) = \min[\mu_A(x), \mu_B(x)]$
$\mu_{\bar{A}}(x) = 1 - \mu_A(x)$
Law of Contradiction? $A \cap \bar{A} \ne \emptyset$!
Law of Excluded Middle? $A \cup \bar{A} \ne U$!
Multiple definitions:
- Fuzzy union: maximum, or algebraic sum $\mu_{A \cup B}(x) = \mu_A(x) + \mu_B(x) - \mu_A(x)\mu_B(x)$
Fuzzy intersection: minimum, or algebraic product $\mu_{A \cap B}(x) = \mu_A(x)\mu_B(x)$
- Fuzzy union: t-conorm (s-norm)
Fuzzy intersection: t-norm
- Fuzzy union: t-conorm (s-norm)
Fuzzy intersection: t-norm
Examples:
t-conorm
Bounded sum: $\mu_{A \cup B} = \min(1, \mu_A + \mu_B)$
Drastic sum:
$\mu_{A \cup B} = \mu_A$ if $\mu_B = 0$
$\mu_{A \cup B} = \mu_B$ if $\mu_A = 0$
$\mu_{A \cup B} = 1$ if $\mu_A > 0$ and $\mu_B > 0$
t-norm
Bounded product: $\mu_{A \cap B} = \max(0, \mu_A + \mu_B - 1)$
Drastic product:
$\mu_{A \cap B} = \mu_A$ if $\mu_B = 1$
$\mu_{A \cap B} = \mu_B$ if $\mu_A = 1$
$\mu_{A \cap B} = 0$ if $\mu_A < 1$ and $\mu_B < 1$

Generalization of De Morgan's laws:
$s[\mu_A(x), \mu_B(x)] = c\{t[c(\mu_A(x)), c(\mu_B(x))]\}$
$t[\mu_A(x), \mu_B(x)] = c\{s[c(\mu_A(x)), c(\mu_B(x))]\}$
($c$: complement, $c(\mu) = 1 - \mu$)
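These operator definitions translate directly into code; a small sketch (Python; the function names are illustrative) with a De Morgan check for the max/min pair:

```python
# Pointwise fuzzy set operations on membership grades in [0, 1].
def s_max(a, b):        return max(a, b)              # max t-conorm
def s_algebraic(a, b):  return a + b - a * b          # algebraic sum
def s_bounded(a, b):    return min(1.0, a + b)        # bounded sum
def s_drastic(a, b):    return a if b == 0 else b if a == 0 else 1.0

def t_min(a, b):        return min(a, b)              # min t-norm
def t_algebraic(a, b):  return a * b                  # algebraic product
def t_bounded(a, b):    return max(0.0, a + b - 1.0)  # bounded product
def t_drastic(a, b):    return a if b == 1 else b if a == 1 else 0.0

def complement(a):      return 1.0 - a

# De Morgan check: s(a, b) = c(t(c(a), c(b))) for the max/min pair
a, b = 0.3, 0.6
print(s_max(a, b), complement(t_min(complement(a), complement(b))))  # both 0.6
```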
III. SHORT PRIMER ON FUZZY LOGIC
A. Crisp Logic
Rules: a form of propositions
Proposition: an ordinary statement involving terms which have been defined
Example: IF the damping ratio is low, THEN the system's impulse response oscillates a long
time before it dies.
Proposition: true, false
Logical reasoning: the process of combining given propositions into other propositions, ....
Combination:
- Conjunction p q (simultaneous truth)
- Disjunction p q (truth of either or both)
- Implication p q (IF-THEN rule). Antecedent, consequent
- Operation of Negation ~ p
- Equivalence Relation p q (both true or false)
Truth Table
The fundamental axioms of traditional propositional logic:
- Every proposition is either true or false
- The expression given by defined terms are propositions
- The true table for conjunction, disjunction, implication, negation, and equivalence
Tautology: a proposition formed by combining other propositions (p, q, r,...) which is true
regardless of the truth or falsehood of p, q, r,...
Example: $(p \to q) \leftrightarrow \sim[p \wedge (\sim q)]$
$(p \to q) \leftrightarrow (\sim p) \vee q$
Membership function for $p \to q$:
$\mu_{p \to q}(x, y) = 1 - \min[\mu_p(x), 1 - \mu_q(y)]$
$\mu_{p \to q}(x, y) = \max[1 - \mu_p(x), \mu_q(y)]$
$\mu_{p \to q}(x, y) = 1 - \mu_p(x)(1 - \mu_q(y))$
$\mu_{p \to q}(x, y) = \min[1, 1 - \mu_p(x) + \mu_q(y)]$
Inference Rules:
- Modus Ponens: Premise 1: "x is A"; Premise 2: "IF x is A THEN y is B"
Consequence: "y is B" ( )A B
( ( )) )p p q q
- Modus Tollens: Premise 1: "y is not B"; Premise 2: "IF x is A THEN y is B"
Consequence: "x is A"
( ( )) )q p q p
B. Fuzzy Logic
Membership function of the IF-THEN statement "IF u is A, THEN v is B" ($u \in U$, $v \in V$):
$\mu_{A \to B}(x, y)$: truth degree of the implication relation between $x$ and $y$
B1. Crisp Logic → Fuzzy Logic?
From crisp logic:
$\mu_{A \to B}(x, y) = 1 - \min[\mu_A(x), 1 - \mu_B(y)]$
$\mu_{A \to B}(x, y) = \max[1 - \mu_A(x), \mu_B(y)]$
$\mu_{A \to B}(x, y) = 1 - \mu_A(x)(1 - \mu_B(y))$
Do they make sense in fuzzy logic?
Generalized Modus Ponens- Premise 1: "u is A*"; Premise 2: "IF u is A THEN v is B"
Consequence: "v is B*"
Example: "IF a man is short, THEN he will make a very
good professional basketball player"
A: short man, B: not a very good player
- "This man is under 5 feet tall" A*: man under 5 feet tall
- "He will make a poor professional basketball player" B*: poor player
Crisp logic: $(A^* \wedge (A \to B)) \to B^*$ (composition of relations)

$\mu_{B^*}(y) = \sup_{x \in U} [\mu_{A^*}(x) \star \mu_{A \to B}(x, y)]$

Examine $\mu_{B^*}(y) = \sup_{x \in U} [\mu_{A^*}(x) \star \mu_{A \to B}(x, y)]$ using a $\mu_{A \to B}(x, y)$ borrowed from crisp logic
and the singleton fuzzifier $\mu_{A^*}(x') = 1$ & $\mu_{A^*}(x) = 0$ for $x \ne x'$:

$\mu_{B^*}(y) = \sup_{x \in U} [\mu_{A^*}(x) \star \mu_{A \to B}(x, y)]$
$= \mu_{A \to B}(x', y)$
$= \min[1, 1 - \mu_A(x') + \mu_B(y)]$

If $x'$ is such that $\mu_A(x') = 1$: $\mu_{B^*}(y) = \min[1, \mu_B(y)] = \mu_B(y)$
If $x'$ is such that $\mu_A(x') = 0$: $\mu_{B^*}(y) = \min[1, 1 + \mu_B(y)] = 1$
B2. Engineering Implications of Fuzzy Logic
Minimum implication: $\mu_{A \to B}(x, y) = \min[\mu_A(x), \mu_B(y)]$
Product implication: $\mu_{A \to B}(x, y) = \mu_A(x) \mu_B(y)$
Disagreement with propositional logic
IV. FUZZINESS AND OTHER MODELS
V. FUZZY LOGIC SYSTEMS
A. Rules
$R^{(l)}$: IF $u_1$ is $F_1^l$ and $u_2$ is $F_2^l$ and ... $u_p$ is $F_p^l$, THEN $v$ is $G^l$
$l = 1, 2, \ldots, M$
$F_i^l$'s: fuzzy sets in $U_i \subset R$
$G^l$: fuzzy set in $V \subset R$
$u = \mathrm{col}(u_1, u_2, \ldots, u_p) \in U_1 \times U_2 \times \ldots \times U_p$
Multiple Antecedents
Example 18: Ball on beam
Objective: to drive the ball to the origin and maintain it there
Control variable: $u = \frac{d^2\theta}{dt^2}$
Nonlinear system; states: $r$, $\frac{dr}{dt}$, $\theta$, $\frac{d\theta}{dt}$
Rules:
$R^{(1)}$: IF $r$ is positive and $dr/dt$ is near zero and $\theta$ is positive and $d\theta/dt$ is near zero, THEN $u$ is negative
$R^{(2)}$: IF $r$ is negative and $dr/dt$ is near zero and $\theta$ is negative and $d\theta/dt$ is near zero, THEN $u$ is positive
$R^{(3)}$: IF $r$ is positive and $dr/dt$ is near zero and $\theta$ is negative and $d\theta/dt$ is near zero, THEN $u$ is positive big
$R^{(4)}$: IF $r$ is negative and $dr/dt$ is near zero and $\theta$ is positive and $d\theta/dt$ is near zero, THEN $u$ is negative big
Example 19: Truck Backing Up Problem
Objective: $x = 10$, $\phi = 90°$ ($x \in [0, 20]$, $\phi \in [-90°, 270°]$)
Control variable: $\theta \in [-40°, 40°]$
Rules: relational matrix (fuzzy associative memory)
Membership functions:
Example 20: A nonlinear dynamical system
Rough knowledge (qualitative information):
Nonlinearity $f(\cdot)$: a function of $y(k)$ and $y(k-1)$
$f(\cdot)$ is close to zero when $y(k)$ is close to zero or $-4$
$f(\cdot)$ is close to zero when $y(k-1)$ is close to zero
Rules:
Example 21: Time Series x(k), k=1, 2, …
Problem: given $x(k-n+1), x(k-n+2), \ldots, x(k)$, predict $x(k+1)$
Given: x(1), x(2),…, x(D)
D-n training pairs:
$x^{(1)}$: $[x(1), x(2), \ldots, x(n); x(n+1)]$
$x^{(2)}$: $[x(2), x(3), \ldots, x(n+1); x(n+2)]$
.........
$x^{(D-n)}$: $[x(D-n), x(D-n+1), \ldots, x(D-1); x(D)]$
$n$ antecedents in each rule: $u_1, u_2, \ldots, u_n$
D-n rules
Extract rules from numerical data:
First method: data establish the fuzzy sets (identify or optimize the parameters in the
membership functions for these fuzzy sets) in the antecedents and the
consequents (first)
Second method: prespecify fuzzy sets in the antecedents and the consequents and then
associate the data with these fuzzy sets
Second method:
Establish domain intervals for all input and output variables: $[X^-, X^+]$
Divide each domain interval into a prespecified number of overlapping regions
Label and assign a membership function to each region
Generate fuzzy rules from the data: consider data pair $x^{(j)}$
- Determine the degrees (membership values) of each element of $x^{(j)}$ in all possible fuzzy sets
- Select the fuzzy set corresponding to the maximum degree for each element
- Obtain a rule from the combination of the selected fuzzy sets for the data pair $x^{(j)}$
D-n rules
Conflicting rules: same antecedents, different consequents
Solution: select the rule with the maximum degree in the group
Degree of a rule:
$D(R^{(j)}) = \mu_{X_1}(x_1^{(j)}) \mu_{X_2}(x_2^{(j)}) \cdots \mu_{X_n}(x_n^{(j)}) \mu_Y(y^{(j)})$
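A minimal sketch of this rule-generation step for a single data pair (Python; the domain, the three triangular regions, and the data pair are illustrative assumptions):

```python
# Wang-Mendel style rule extraction: pick the maximum-membership fuzzy set per
# variable and score the resulting rule by the product of those degrees.
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# three overlapping regions on [0, 10]: S(mall), M(edium), L(arge) (assumed)
sets = {"S": (-5, 0, 5), "M": (0, 5, 10), "L": (5, 10, 15)}

def best_set(x):
    label = max(sets, key=lambda s: tri(x, *sets[s]))
    return label, tri(x, *sets[label])

x1, x2, y = 2.0, 6.5, 8.0                       # one data pair (x1, x2; y), assumed
(l1, d1), (l2, d2), (ly, dy) = best_set(x1), best_set(x2), best_set(y)
degree = d1 * d2 * dy                           # D(rule) = mu(x1)*mu(x2)*mu(y)
print(f"Rule: IF x1 is {l1} and x2 is {l2} THEN y is {ly} (degree {degree:.3f})")
```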
Nonobvious Rules:
B. Fuzzy Inference Engine
Uses fuzzy logic principles to combine fuzzy IF-THEN rules from the fuzzy
rule base into a mapping from fuzzy input sets to fuzzy output sets.
$R^{(l)}$: IF $u_1$ is $F_1^l$ and $u_2$ is $F_2^l$ and ... $u_p$ is $F_p^l$, THEN $v$ is $G^l$

Input sets: defined on $U = U_1 \times U_2 \times \ldots \times U_p$
Output set: defined on $V$

$A^l = F_1^l \times F_2^l \times \ldots \times F_p^l$, $B^l = G^l$

$R^{(l)}$: $A^l \to B^l$

$\mu_{R^{(l)}}(x, y) = \mu_{A^l \to B^l}(x, y) = \mu_{F_1^l}(x_1) \star \mu_{F_2^l}(x_2) \star \ldots \star \mu_{F_p^l}(x_p) \star \mu_{G^l}(y)$

Input to $R^{(l)}$: fuzzy set $A_x$, the output of the fuzzifier
$\mu_{A_x}(x) = \mu_{X_1}(x_1) \star \mu_{X_2}(x_2) \star \ldots \star \mu_{X_p}(x_p)$
$X_k$'s: fuzzy sets describing the inputs

$R^{(l)}$ determines a fuzzy set $B^l = A_x \circ R^{(l)}$:
$\mu_{B^l}(y) = \mu_{A_x \circ R^{(l)}}(y) = \sup_{x \in U} [\mu_{A_x}(x) \star \mu_{R^{(l)}}(x, y)]$

Combining Rules:
Final fuzzy set: $B = A_x \circ [R^{(1)}, R^{(2)}, \ldots, R^{(M)}] = \bigcup_{l=1}^{M} A_x \circ R^{(l)} = \bigcup_{l=1}^{M} B^l$
Using a t-conorm: $B = B^1 \oplus B^2 \oplus \ldots \oplus B^M$
Additive combiner: weights
Example 22: Truck backing up
$\phi(t) = 140°$, $x(t) = 6$
$\mu_i(\phi(t)) \ne 0$ for $i = B_1, B_2$
$\mu_i(x(t)) \ne 0$ for $i = S_1, S_2$
(all other membership values are 0)
C. Fuzzification
Maps a crisp point $x = \mathrm{col}(x_1, x_2, \ldots, x_n) \in U$ into a fuzzy set $A^*$ defined in $U$

Singleton fuzzifier: $\mu_{A^*}(x) = 1$ if $x = x'$; $\mu_{A^*}(x) = 0$ if $x \ne x'$

$\mu_{B^l}(y) = \mu_{A_x \circ R^{(l)}}(y) = \sup_{x \in U} [\mu_{A_x}(x) \star \mu_{R^{(l)}}(x, y)] = \mu_{A^l \to B^l}(x', y)$

Nonsingleton fuzzifier: $\mu_{A^*}(x') = 1$ at $x = x'$, and
$\mu_{A^*}(x)$ decreases as $\|x - x'\|$ increases
$\mu_{B^l}(y) = \mu_{A_x \circ R^{(l)}}(y) = \sup_{x \in U} [\mu_{A_x}(x) \star \mu_{R^{(l)}}(x, y)]$
$= \sup_{x \in U} [\mu_{X_1}(x_1) \star \ldots \star \mu_{X_p}(x_p) \star \mu_{F_1^l}(x_1) \star \ldots \star \mu_{F_p^l}(x_p) \star \mu_{G^l}(y)]$
$= \mu_{G^l}(y) \star \sup_{x \in U} \{[\mu_{X_1}(x_1) \star \mu_{F_1^l}(x_1)] \star [\mu_{X_2}(x_2) \star \mu_{F_2^l}(x_2)] \star \ldots \star [\mu_{X_p}(x_p) \star \mu_{F_p^l}(x_p)]\}$

Example 23: t-norm: product; membership functions: Gaussian

$k$-th input fuzzy set: $\mu_{X_k}(x_k) = \exp\{-\frac{1}{2} [(x_k - m_{X_k})/\sigma_{X_k}]^2\}$
$k$-th antecedent fuzzy set: $\mu_{F_k^l}(x_k) = \exp\{-\frac{1}{2} [(x_k - m_{F_k^l})/\sigma_{F_k^l}]^2\}$

$\mu_{Q_k^l}(x_k) = \mu_{X_k}(x_k) \mu_{F_k^l}(x_k)$

maximized at $x_{k,\max} = (\sigma_{X_k}^2 m_{F_k^l} + \sigma_{F_k^l}^2 m_{X_k}) / (\sigma_{X_k}^2 + \sigma_{F_k^l}^2)$

With $m_{X_k} = x_k'$:
$x_{k,\max} = (\sigma_{X_k}^2 m_{F_k^l} + \sigma_{F_k^l}^2 x_k') / (\sigma_{X_k}^2 + \sigma_{F_k^l}^2)$

Fuzzifier: prefilter

$\mu_{B^l}(y) = \mu_{G^l}(y) \prod_{k=1}^{p} \mu_{Q_k^l}(x_{k,\max})$

$\sigma_{X_k}^2 \to 0$: zero uncertainty of the input → $x_{k,\max} \to x_k'$
D. Defuzzifier
1) Maximum Defuzzifier
2) Mean of Maximum Defuzzifier
3) Centroid Defuzzifier
4) Height Defuzzifier
5) Modified Defuzzifier
E. Possibilities
F. Formulas for Specific FLS's: Fuzzy Basis Functions
Geometric Interpretation
y f ( )x : for specific choices of fuzzifier, membership functions,
composition, inference and defuzzifier
Example 24: singleton fuzzifier, height defuzzification
max-product composition, product inference,
$f_s(x) = \sum_{l=1}^{M} \bar{y}^l [\prod_{i=1}^{p} \mu_{F_i^l}(x_i)] \,/\, \sum_{l=1}^{M} [\prod_{i=1}^{p} \mu_{F_i^l}(x_i)]$

where $\bar{y}^l$ is the point at which $\mu_{G^l}$ attains its maximum:
$\mu_{B^l}(\bar{y}^l) = \mu_{A \to B^l}(x', \bar{y}^l) = [\prod_{i=1}^{p} \mu_{F_i^l}(x_i')] \mu_{G^l}(\bar{y}^l)$
$\mu_{G^l}(\bar{y}^l) = \max_y \mu_{G^l}(y)$

max-min composition, minimum inference:
$f_s(x) = \sum_{l=1}^{M} \bar{y}^l \min_{i=1,\ldots,p} \{\mu_{F_i^l}(x_i)\} \,/\, \sum_{l=1}^{M} \min_{i=1,\ldots,p} \{\mu_{F_i^l}(x_i)\}$
Example 25: nonsingleton fuzzifier, height defuzzification
max-product composition, product inference,
Gaussian membership functions for $\mu_{X_k}(x_k)$, $\mu_{F_k^l}(x_k)$, and $\mu_{G^l}(y)$ ($\max_y \mu_{G^l}(y) = 1$):

$\mu_{B^l}(\bar{y}^l) = \prod_{k=1}^{p} \mu_{Q_k^l}(x_{k,\max})$

$f_{ns}(x) = \sum_{l=1}^{M} \bar{y}^l [\prod_{k=1}^{p} \mu_{Q_k^l}(x_{k,\max})] \,/\, \sum_{l=1}^{M} [\prod_{k=1}^{p} \mu_{Q_k^l}(x_{k,\max})]$
Fuzzy basis functions
$f(x) = \sum_{l=1}^{M} \bar{y}^l \phi_l(x)$

FBF $\phi_l(x)$ ($l = 1, \ldots, M$):

$\phi_l(x) = \prod_{k=1}^{p} \mu_{F_k^l}(x_k) \,/\, \sum_{j=1}^{M} [\prod_{k=1}^{p} \mu_{F_k^j}(x_k)]$ (singleton)
$\phi_l(x) = \prod_{k=1}^{p} \mu_{Q_k^l}(x_{k,\max}) \,/\, \sum_{j=1}^{M} [\prod_{k=1}^{p} \mu_{Q_k^j}(x_{k,\max})]$ (nonsingleton, Gaussian)
FBFs: depend on fuzzifier, membership functions,
composition, inference, defuzzifier, and number of the rules
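For concreteness, a minimal singleton-fuzzifier FBF expansion (Python, NumPy; the two-rule, one-input system and all of its parameters are illustrative assumptions):

```python
import numpy as np

# Singleton fuzzifier, product inference, Gaussian antecedents, height
# defuzzification: f(x) = sum_l ybar_l * phi_l(x).
centers = np.array([[0.0], [4.0]])   # m_{F^l} for M = 2 rules, p = 1 input (assumed)
sigmas = np.array([[1.5], [1.5]])    # sigma_{F^l} (assumed)
y_bar = np.array([-1.0, 1.0])        # centers of the consequent sets G^l (assumed)

def fls(x):
    # firing degree of rule l: product over inputs of Gaussian memberships
    mu = np.exp(-0.5 * ((x - centers) / sigmas) ** 2).prod(axis=1)
    phi = mu / mu.sum()              # fuzzy basis functions phi_l(x)
    return phi @ y_bar

for x in [0.0, 2.0, 4.0]:
    print(x, fls(np.array([x])))     # output interpolates from -1 toward +1
```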
Combining rules from numerical data and expert linguistic knowledge
$f(x) = \sum_{j=1}^{M} \bar{y}^j \phi_j(x) = \sum_{i=1}^{M_N} \bar{y}_N^i \phi_{N,i}(x) + \sum_{k=1}^{M_L} \bar{y}_L^k \phi_{L,k}(x)$, $M = M_N + M_L$

FBFs from the numerical data:
$\phi_{N,i}(x) = \prod_{s=1}^{p} \mu_{F_s^i}(x_s) \,/\, \sum_{j=1}^{M} [\prod_{s=1}^{p} \mu_{F_s^j}(x_s)]$, $i = 1, \ldots, M_N$

FBFs from the linguistic information:
$\phi_{L,k}(x) = \prod_{s=1}^{p} \mu_{F_s^k}(x_s) \,/\, \sum_{j=1}^{M} [\prod_{s=1}^{p} \mu_{F_s^j}(x_s)]$, $k = 1, \ldots, M_L$
VI. DESIGNING FUZZY LOGIC SYSTEMS
Linguistic rules
Numerical data
Tune the parameters in the FLS
Training data
$x^{(1)}: y^{(1)}$
$x^{(2)}: y^{(2)}$
......
$x^{(N)}: y^{(N)}$

Parameter set $\theta$: $y = f(x, \theta)$

Minimize the amplitude of $y^{(i)} - f(x^{(i)}, \theta)$

Non-linear optimization of the cost function:
$\min_\theta J(\theta) = \sum_{i=1}^{N} [y^{(i)} - f(x^{(i)}, \theta)]^2$
CHAPTER 4 Neuro-Fuzzy Modeling and Control
Primary Reference: J.-S. R. Jang and C.-T. Sun, "Neuro-fuzzy modeling and control,"
Proceedings of the IEEE, 83(3): 378-406, 1995.
I. INTRODUCTION
II. FUZZY SETS, FUZZY RULES, FUZZY REASONING, AND FUZZY MODELS
III. ADAPTIVE NETWORKS
H. Architecture
Feedforward adaptive network & Recurrent adaptive network
Fixed nodes & Adaptive nodes
Layered representation & topological-ordering representation (no links from node $i$ to node $j$ for $i \ge j$)
Example 3: An adaptive network with a single linear node
$x_3 = f_3(x_1, x_2; a_1, a_2, a_3) = a_1 x_1 + a_2 x_2 + a_3$
Example 4: A building block for the perceptron or the back-propagation neural network
$x_3 = f_3(x_1, x_2; a_1, a_2, a_3) = a_1 x_1 + a_2 x_2 + a_3$
$x_4 = f_4(x_3) = 1$ if $x_3 \ge 0$; $0$ if $x_3 < 0$

Linear classifier
Building block of the classical perceptron
Step function: discontinuous gradient
Sigmoid function: continuous gradient
$x_4 = f_4(x_3) = \frac{1}{1 + e^{-x_3}}$
Composition of f3 and f4 : building block for the back-propagation neural networks
Example 5: A back-propagation neural network
$x_7 = \frac{1}{1 + \exp[-(w_{4,7} x_4 + w_{5,7} x_5 + w_{6,7} x_6 - t_7)]}$
I. Back-Propagation Learning Rule
Recursively obtain the gradient vector: derivatives of the error with respect to parameters
Back-propagation learning rule: gradient vector is calculated in the direction opposite to the
flow of the output of each node
Layer $l$ ($l = 0, 1, \ldots, L$); $l = 0$: input layer
Node $i$ ($i = 1, 2, \ldots, N(l)$)
Output of node $i$ in layer $l$: $x_{l,i}$
Function of node $i$ in layer $l$: $f_{l,i}$
No jumping links:
$x_{l,i} = f_{l,i}(x_{l-1,1}, \ldots, x_{l-1,N(l-1)}, \alpha, \beta, \gamma, \ldots)$
Measured outputs of the network: $d_1, d_2, \ldots, d_{N(L)}$
Calculated outputs of the network: $x_{L,1}, x_{L,2}, \ldots, x_{L,N(L)}$
Entries of the training data set (sample size): $P$
Using entry $p$ ($p = 1, \ldots, P$) generates the error
$E_p = \sum_{k=1}^{N(L)} (d_k - x_{L,k})^2$
Cost function for training: $E = \sum_{p=1}^{P} E_p$
Ordered derivative:
$\varepsilon_{l,i} = \frac{\partial^+ E_p}{\partial x_{l,i}}$
The derivative of $E_p$ with respect to $x_{l,i}$, taking both
direct and indirect paths into consideration.
Example 6: Ordinary partial derivative $\frac{\partial E_p}{\partial x_{l,i}}$ vs. the ordered derivative $\varepsilon_{l,i} = \frac{\partial^+ E_p}{\partial x_{l,i}}$:

$y = f(x)$
$z = g(x, y)$

Ordinary partial derivative: $\frac{\partial z}{\partial x} = \frac{\partial g(x, y)}{\partial x}$

Ordered derivative: $\frac{\partial^+ z}{\partial x} = \frac{\partial g(x, f(x))}{\partial x} = \frac{\partial g(x, y)}{\partial x}\Big|_{y = f(x)} + \frac{\partial g(x, y)}{\partial y}\Big|_{y = f(x)} \frac{df(x)}{dx}$
$y = f(x) = x^2$
$z = g(x, y) = 5x + 2y$

$\frac{\partial z}{\partial x} = \frac{\partial g(x, y)}{\partial x} = 5$

$\frac{\partial^+ z}{\partial x} = 5 + 2 \cdot 2x = 5 + 4x$
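A quick numerical check of this example (Python; finite differences at an assumed point $x = 1.5$):

```python
# Ordinary vs. ordered derivative for y = x^2 and z = 5x + 2y.
def z(x, y):
    return 5 * x + 2 * y

x, h = 1.5, 1e-6
partial = (z(x + h, x**2) - z(x, x**2)) / h          # y held fixed -> 5
ordered = (z(x + h, (x + h)**2) - z(x, x**2)) / h    # y = x^2 follows x -> 5 + 4x
print(partial, ordered)                              # ~5.0 and ~11.0
```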
Back-Propagation Equation:
Output layer ($l = L$):
$\varepsilon_{L,i} = \frac{\partial^+ E_p}{\partial x_{L,i}} = \frac{\partial E_p}{\partial x_{L,i}}$

Inner layers ($0 \le l \le L - 1$):
$\varepsilon_{l,i} = \frac{\partial^+ E_p}{\partial x_{l,i}} = \sum_{m=1}^{N(l+1)} \varepsilon_{l+1,m} \frac{\partial f_{l+1,m}}{\partial x_{l,i}}$

For a parameter $\alpha$ of node $(l, i)$:
$\frac{\partial^+ E_p}{\partial \alpha} = \frac{\partial^+ E_p}{\partial x_{l,i}} \frac{\partial f_{l,i}}{\partial \alpha}$
or, if $\alpha$ is shared,
$\frac{\partial^+ E_p}{\partial \alpha} = \sum_{x^* \in S} \frac{\partial^+ E_p}{\partial x^*} \frac{\partial f^*}{\partial \alpha}$ ($S$: the set of nodes containing $\alpha$ as a parameter)

The derivative of the overall error measure $E = \sum_{p=1}^{P} E_p$ will be
$\frac{\partial^+ E}{\partial \alpha} = \sum_{p=1}^{P} \frac{\partial^+ E_p}{\partial \alpha}$
Update formula: $\Delta \alpha = -\eta \frac{\partial^+ E}{\partial \alpha}$
$\eta$: learning rate, which can be determined by $\eta = \kappa / \sqrt{\sum_\alpha (\partial E / \partial \alpha)^2}$
$\kappa$: step size (changing $\kappa$ changes the speed of convergence)
Off-line learning & On-line learning
Recurrent network: transform into an equivalent feedforward
network by using “unfolding of time” technique
J. Hybrid Learning Rule: Combining BP and LSE
Off-Line Learning
On-Line Learning
Different Ways of Combining GD and LSE
K. Neural Networks as Special Cases of Adaptive Networks
D1. Back Propagation Neural Networks (BPNN's)
Node function: composition of weighted sum and a nonlinear
function (activation function or transfer function)
Activation function: differentiable sigmoidal or hyper-tangent type function
which approximates the step function
Four types of activation functions
Step function: $f(x) = 1$ if $x \ge 0$; $0$ if $x < 0$
Sigmoidal function: $f(x) = \frac{1}{1 + e^{-x}}$
Hyper-tangent function: $f(x) = \frac{1 - e^{-x}}{1 + e^{-x}}$
Identity function: $f(x) = x$
Example: a three-input node
Inputs: $x_1, x_2, x_3$
Output of the node: $x_4$
Weighted sum: $\bar{x}_4 = \sum_{i=1}^{3} w_{i,4} x_i - t_4$
Sigmoid function: $x_4 = \frac{1}{1 + e^{-\bar{x}_4}}$
$w_{i,4}$: weights
$t_4$: threshold
Example: two-layer BPNN with 3 inputs and 2 outputs
D2. The Radial Basis Function Networks (RBFN's)
Radial basis function approximation: local receptive fields
Example: An RBFN with five receptive field units
Activation level of the $i$-th receptive field:
$w_i = R_i(x) = R_i(\|x - c_i\| / \sigma_i)$, $i = 1, 2, \ldots, H$
Gaussian function: $R_i(x) = \exp(-\frac{\|x - c_i\|^2}{\sigma_i^2})$
or
Logistic function: $R_i(x) = \frac{1}{1 + \exp\{\|x - c_i\|^2 / \sigma_i^2\}}$
Maximized at the center $x = c_i$

Final output:
$f(x) = \sum_{i=1}^{H} f_i w_i = \sum_{i=1}^{H} f_i R_i(x)$
or
$f(x) = \frac{\sum_{i=1}^{H} f_i w_i}{\sum_{i=1}^{H} w_i} = \frac{\sum_{i=1}^{H} f_i R_i(x)}{\sum_{i=1}^{H} R_i(x)}$

Parameters: $c_i, \sigma_i, f_i$; nonlinear: $c_i, \sigma_i$; linear: $f_i$

Identification:
$c_i$'s: clustering techniques
$\sigma_i$'s: heuristics
then, for the linear parameters $f_i = a_i x + b_i$:
least squares method
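A minimal RBFN evaluation (Python, NumPy; $H$, the centers, widths, and output weights are illustrative values, not from the notes):

```python
import numpy as np

# RBFN with Gaussian receptive fields w_i = R_i(x) and a weighted
# (optionally normalized) output.
H = 5
c = np.linspace(0.0, 4.0, H)                 # receptive field centers c_i (assumed)
sigma = np.full(H, 0.8)                      # widths sigma_i, heuristic (assumed)
f = np.array([0.0, 1.0, 0.5, -0.5, 1.5])     # output weights f_i (assumed)

def rbfn(x, normalized=True):
    w = np.exp(-((x - c) ** 2) / sigma ** 2)     # activation of each field
    return (f @ w / w.sum()) if normalized else f @ w

print(rbfn(1.3), rbfn(1.3, normalized=False))
```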
IV. ANFIS: ADAPTIVE NEURO-FUZZY INFERENCE SYSTEMS
A. ANFIS Architecture
Example: a two-input ($x$ and $y$), one-output ($z$) ANFIS
Rule 1: IF $x$ is $A_1$ and $y$ is $B_1$, then $f_1 = p_1 x + q_1 y + r_1$
Rule 2: IF $x$ is $A_2$ and $y$ is $B_2$, then $f_2 = p_2 x + q_2 y + r_2$
ANFIS architecture
Layer 1: adaptive nodes
$O_{1,i} = \mu_{A_i}(x)$, $i = 1, 2$
$O_{1,i} = \mu_{B_{i-2}}(y)$, $i = 3, 4$
$\mu_{A_i}(x)$ and $\mu_{B_i}(y)$: any appropriate parameterized membership functions, e.g.,
$\mu_{A_i}(x) = \frac{1}{1 + [(\frac{x - c_i}{a_i})^2]^{b_i}}$
$\{a_i, b_i, c_i\}$: premise parameters

Layer 2: fixed nodes with the function of multiplication:
$O_{2,i} = w_i = \mu_{A_i}(x) \mu_{B_i}(y)$, $i = 1, 2$ (firing strength of a rule)

Layer 3: fixed nodes with the function of normalization:
$O_{3,i} = \bar{w}_i = \frac{w_i}{w_1 + w_2}$, $i = 1, 2$ (normalized firing strength)

Layer 4: adaptive nodes:
$O_{4,i} = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i)$
$\{p_i, q_i, r_i\}$: consequent parameters

Layer 5: a fixed node with the function of summation:
$O_{5,1}$ = overall output = $\sum_i \bar{w}_i f_i$
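A minimal forward pass through these five layers for the two-rule system (Python; the bell and consequent parameter values are illustrative assumptions):

```python
import numpy as np

def bell(x, a, b, c):
    """Generalized bell membership function: 1 / (1 + (((x-c)/a)^2)^b)."""
    return 1.0 / (1.0 + (((x - c) / a) ** 2) ** b)

# premise parameters {a_i, b_i, c_i} for A1, A2 (on x) and B1, B2 (on y), assumed
A = [(1.0, 2.0, 0.0), (1.0, 2.0, 2.0)]
B = [(1.0, 2.0, 0.0), (1.0, 2.0, 2.0)]
# consequent parameters {p_i, q_i, r_i}, assumed
pqr = np.array([[1.0, 0.5, 0.0],
                [2.0, 1.0, 1.0]])

def anfis(x, y):
    # layers 1-2: memberships and firing strengths w_i
    w = np.array([bell(x, *A[i]) * bell(y, *B[i]) for i in range(2)])
    wbar = w / w.sum()                   # layer 3: normalized firing strengths
    f = pqr @ np.array([x, y, 1.0])      # rule outputs f_i = p_i x + q_i y + r_i
    return wbar @ f                      # layers 4-5: weighted sum

print(anfis(0.5, 1.0))
```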
Example: A two-input first-order Sugeno fuzzy model with nine rules
B. Hybrid Learning Algorithm
When the premise parameters are fixed:
$f = \frac{w_1}{w_1 + w_2} f_1 + \frac{w_2}{w_1 + w_2} f_2 = \bar{w}_1 f_1 + \bar{w}_2 f_2$
$= (\bar{w}_1 x) p_1 + (\bar{w}_1 y) q_1 + \bar{w}_1 r_1 + (\bar{w}_2 x) p_2 + (\bar{w}_2 y) q_2 + \bar{w}_2 r_2$
→ a linear function of the consequent parameters
Hybrid learning scheme
C. Application to Chaotic Time Series Prediction
Example: Mackey-Glass differential delay equation
$\dot{x}(t) = \frac{0.2 x(t - \tau)}{1 + x^{10}(t - \tau)} - 0.1 x(t)$

Prediction problem:
$x(t - (D-1)\Delta), \ldots, x(t - \Delta), x(t)$ → (predict) $x(t + P)$
$D = 4$, $\Delta = P = 6$, 1000 data pairs
$x(t-18), x(t-12), x(t-6), x(t)$ → $x(t+6)$
500 pairs for training, 500 for verification
Input partition: 2; rules: 16; number of parameters: 104
(premise: 24, consequent: 80)
Prediction results:
no significant difference in prediction error between the training and validation data
Reasons for excellence
V. NEURO-FUZZY CONTROL
Dynamic model: $\dot{x} = f(x(t), u(t))$
Desired trajectory: $x_d(t)$
Control law: $u(t) = g(x(t))$
Discrete system:
Dynamic model: $x(k+1) = f(x(k), u(k))$
Desired trajectory: $x_d(k)$
Control law: $u(k) = g(x(k))$
A. Mimicking Another Working Controller
Skilled human operators
Nonlinear approximation ability
Refining the membership functions
B. Inverse Control
Minimizing the control error
C. Specialized Learning
Minimizing the output error: needs the model of the process
D. Back-Propagation Through Time and Real Time Recurrent Learning
Principle
Computation and implementation: off-line → on-line
L. Feedback Linearization and Sliding Control
M. Gain Scheduling
Sugeno fuzzy controller
If pole is short, then $f_1 = k_{11} \theta + k_{12} \dot{\theta} + k_{13} z + k_{14} \dot{z}$
If pole is medium, then $f_2 = k_{21} \theta + k_{22} \dot{\theta} + k_{23} z + k_{24} \dot{z}$
If pole is long, then $f_3 = k_{31} \theta + k_{32} \dot{\theta} + k_{33} z + k_{34} \dot{z}$
Operating points → linear controllers → fuzzy control rules
G. Analytic Design
Project 2: Neuro-Fuzzy Non-Linear Control System Design
1. Given Process
The process is described by the following fuzzy model.
A. Rules:
Rule 1: IF $u(k-1)$ is VERY SMALL, then $y_1(k) = a_1 u(k-1) + b_1 u(k-2)$
Rule 2: IF $u(k-1)$ is SMALL, then $y_2(k) = a_2 u(k-1) + b_2 u(k-2)$
Rule 3: IF $u(k-1)$ is MEDIUM, then $y_3(k) = a_3 u(k-1) + b_3 u(k-2)$
Rule 4: IF $u(k-1)$ is LARGE, then $y_4(k) = a_4 u(k-1) + b_4 u(k-2)$
Rule 5: IF $u(k-1)$ is VERY LARGE, then $y_5(k) = a_5 u(k-1) + b_5 u(k-2)$
where $u$ is the input, $y_i$ $(i = 1, 2, \ldots, 5)$ is the output from Rule $i$, and $a_1 = 0.5$, $a_2 = 0.4$,
$a_3 = 0.3$, $a_4 = 0.2$, $a_5 = 0.1$, $b_1 = 1$, $b_2 = 0.9$, $b_3 = 0.8$, $b_4 = 0.7$, and $b_5 = 0.6$ are the
consequent parameters.
B. Membership functions:
$\mu_{VerySmall}(u) = \exp(-(\frac{u - 0}{0.5})^2)$
$\mu_{Small}(u) = \exp(-(\frac{u - 1}{0.5})^2)$
$\mu_{Medium}(u) = \exp(-(\frac{u - 2}{0.5})^2)$
$\mu_{Large}(u) = \exp(-(\frac{u - 3}{0.5})^2)$
$\mu_{VeryLarge}(u) = \exp(-(\frac{u - 4}{0.5})^2)$
C. System output
$y = \frac{\sum_{i=1}^{5} w_i y_i}{\sum_{j=1}^{5} w_j} = \sum_{i=1}^{5} \bar{w}_i y_i$
where $w_i$ $(i = 1, 2, 3, 4, 5)$ is the firing strength of Rule $i$.
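When testing a controller for this project, the plant itself can be simulated directly from the rules and membership functions above; a minimal sketch (Python, NumPy; the probe input values are arbitrary):

```python
import numpy as np

# Simulation of the given fuzzy plant: Gaussian memberships of u(k-1) give the
# firing strengths w_i, combined with rule outputs y_i(k) = a_i u(k-1) + b_i u(k-2).
a = np.array([0.5, 0.4, 0.3, 0.2, 0.1])
b = np.array([1.0, 0.9, 0.8, 0.7, 0.6])
centers = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # VERY SMALL ... VERY LARGE

def plant(u1, u2):
    """Output y(k) for u1 = u(k-1), u2 = u(k-2)."""
    w = np.exp(-((u1 - centers) / 0.5) ** 2)    # firing strengths w_i
    y_rules = a * u1 + b * u2                   # rule outputs y_i(k)
    return (w / w.sum()) @ y_rules              # normalized combination

print(plant(1.0, 1.2))                          # probe values (assumed)
```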
2. The desired trajectory of the system output is:
$y_0(k) = 0.5$ $(0 \le k < 400)$
$y_0(k) = 1$ $(400 \le k < 700)$
$y_0(k) = 1.5$ $(700 \le k \le 1000)$
3. Assume that the consequent parameters $a_j$'s and $b_j$'s are unknown. Design an adaptive
control system for the given system to achieve the desired trajectory of the output under the
constraint $0 \le u \le 4$. (An off-line identification procedure may be used to obtain initial values
of the premise parameters.)
Report Requirements:
(1) Method selection
(2) System Design
(3) Program
(4) Simulation Results
(5) Results Analysis
(6) Conclusions
Due: 12/14/98
EE 699 Final Examination
Fall 1998
Name:
Grade:
1. What are the major elements of a fuzzy logic system? What are their functions? (20%)
2. Describe an approach which can be used to extract fuzzy rules from numerical data. (20%)
3. The system is described by a Takagi-Sugeno fuzzy model with the following rules and
membership functions.
Rules:
Rule 1: IF $u(k-1)$ is VERY SMALL, then $y_1(k) = a_1 u(k-1) + b_1 u(k-2)$
Rule 2: IF $u(k-1)$ is SMALL, then $y_2(k) = a_2 u(k-1) + b_2 u(k-2)$
Rule 3: IF $u(k-1)$ is MEDIUM, then $y_3(k) = a_3 u(k-1) + b_3 u(k-2)$
Rule 4: IF $u(k-1)$ is LARGE, then $y_4(k) = a_4 u(k-1) + b_4 u(k-2)$
Rule 5: IF $u(k-1)$ is VERY LARGE, then $y_5(k) = a_5 u(k-1) + b_5 u(k-2)$
where $u$ is the input and $y_i$ $(i = 1, 2, \ldots, 5)$ is the output from Rule $i$.
Membership functions:
$\mu_{VerySmall}(u) = \exp(-(\frac{u - 0}{0.5})^2)$
$\mu_{Small}(u) = \exp(-(\frac{u - 1}{0.5})^2)$
$\mu_{Medium}(u) = \exp(-(\frac{u - 2}{0.5})^2)$
$\mu_{Large}(u) = \exp(-(\frac{u - 3}{0.5})^2)$
$\mu_{VeryLarge}(u) = \exp(-(\frac{u - 4}{0.5})^2)$
If the output of the system is given by
$y = \sum_{i=1}^{5} w_i y_i$,
explain the role of $w_i$ $(i = 1, 2, 3, 4, 5)$ and give a way to determine $w_i$ $(i = 1, 2, 3, 4, 5)$.
(20%)
4. The system is
$y_k = a y_{k-1} + b u_{k-1} + \varepsilon_k$
where
a and b : parameters of the system
$y_k$ and $u_k$: output and input at instant $k$, and
$\varepsilon_k$: system noise at instant $k$, $E(\varepsilon_k) = 0$ $(\forall k)$, and $E(\varepsilon_k \varepsilon_j) = 0$ $(k \ne j)$.
Given data pairs $\{u_k, y_k\}$ $(k = 1, 2, \ldots, N)$, determine the least squares estimates of the
parameters $a$ and $b$. (20%)
5. The system is
$y_k = a_1 y_{k-1} + a_2 y_{k-2} + b_3 u_{k-3} + b_4 u_{k-4} + \varepsilon_k$
where $y_k$: output at instant $k$,
$u_k$: input at instant $k$,
$\varepsilon_k$: noise at instant $k$, $E(\varepsilon_k) = 0$ $(\forall k)$, and $E(\varepsilon_k \varepsilon_j) = 0$ $(k \ne j)$,
$a_1, a_2, b_3, b_4$: parameters of the system.
At instant $t$, $\{u_k, y_k\}$ $(k = 1, 2, \ldots, t-1)$ and $y_t$ are known. Give an equation which predicts
$y_{t+10}$. (20%)