1 Introduction to Fuzzy Logic Control
Aug 14, 2015
Outline
- General Definition
- Applications
- Operations
- Rules
- Fuzzy Logic Toolbox
- FIS Editor
- Tipping Problem: Fuzzy Approach
- Defining Inputs & Outputs
- Defining MFs
- Defining Fuzzy Rules
General Definition
Fuzzy logic was introduced in 1965 by Lotfi Zadeh (UC Berkeley).
- It is a superset of conventional (Boolean) logic, extended to handle the concept of partial truth.
- The central notion of fuzzy systems is that truth values (in fuzzy logic) or membership values (in fuzzy sets) are indicated by a value in the range [0.0, 1.0], with 0.0 representing absolute falseness and 1.0 representing absolute truth.
- It deals with real-world vagueness.
Applications
- Expert systems
- Control units (e.g., the bullet train between Tokyo and Osaka)
- Video cameras
- Automatic transmissions
Operations
For membership values A and B, the basic fuzzy operations are:
A ∧ B = min(A, B)    A ∨ B = max(A, B)    ¬A = 1 − A
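With membership degrees in [0.0, 1.0], the standard (Zadeh) operators evaluate A ∧ B as a minimum, A ∨ B as a maximum, and ¬A as a complement. A minimal sketch in plain Python (Python is used here for illustration only; the slides use the MATLAB toolbox):

```python
# Standard (Zadeh) fuzzy operators on membership degrees in [0.0, 1.0].
def fuzzy_and(a, b):   # A AND B
    return min(a, b)

def fuzzy_or(a, b):    # A OR B
    return max(a, b)

def fuzzy_not(a):      # NOT A
    return 1.0 - a

# Example with partially true inputs
a, b = 0.7, 0.4
print(fuzzy_and(a, b))  # 0.4
print(fuzzy_or(a, b))   # 0.7
print(fuzzy_not(a))     # ~0.3
```

Note that these operators reduce to ordinary Boolean logic when the degrees are exactly 0.0 or 1.0.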
Controller Structure
- Fuzzification: scales and maps input variables to fuzzy sets
- Inference mechanism: approximate reasoning; deduces the control action
- Defuzzification: converts fuzzy output values to control signals
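The first and last stages can be made concrete with a small sketch: a triangular membership function for fuzzification, and centroid (center-of-gravity) defuzzification to recover a crisp value. The function shape and sample grid are illustrative assumptions, not taken from the slides:

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def centroid(xs, mus):
    """Centroid defuzzification over a sampled universe of discourse."""
    num = sum(x * m for x, m in zip(xs, mus))
    den = sum(mus)
    return num / den if den else 0.0

# Sample an output universe, fuzzify it, then defuzzify to a crisp value
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
mus = [trimf(x, 0.0, 2.0, 4.0) for x in xs]
print(centroid(xs, mus))  # 2.0, the peak of the symmetric triangle
```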
MATLAB Fuzzy Logic Toolbox
The MATLAB Fuzzy Logic Toolbox facilitates the development of fuzzy-logic systems using:
- graphical user interface (GUI) tools
- command-line functionality
The tool can be used for building:
- Fuzzy Expert Systems
- Adaptive Neuro-Fuzzy Inference Systems (ANFIS)
Graphical User Interface (GUI) Tools
There are five primary GUI tools for building, editing, and observing fuzzy inference systems in the Fuzzy Logic Toolbox:
- Fuzzy Inference System (FIS) Editor
- Membership Function Editor
- Rule Editor
- Rule Viewer
- Surface Viewer
MATLAB: Fuzzy Logic Toolbox
Fuzzy Inference System
There are two types of inference system:
- Mamdani inference method
- Sugeno inference method
Mamdani's fuzzy inference method is the most common methodology.
FIS Editor: Mamdani's Inference System
Fuzzy Logic Examples using Matlab
To control the speed of a motor, we change the input voltage. When a set point is defined, if for some reason the motor runs faster, we need to slow it down by reducing the input voltage. If the motor slows below the set point, the input voltage must be increased so that the motor speed reaches the set point.
Input/Output
Let the input status words be:
- Too slow
- Just right
- Too fast
Let the output action words be:
- Less voltage (slow down)
- No change
- More voltage (speed up)
FIS Editor: Adding Input / Output
Membership Function Editor
Input Membership Function
Output Membership Function
Membership Functions
Rules
Define the rule base:
1) If the motor is running too slow, then more voltage.
2) If the motor speed is about right, then no change.
3) If the motor speed is too fast, then less voltage.
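The three rules for the motor can be exercised end to end in a few lines. This sketch uses a weighted-average (Sugeno-style singleton) combination rather than the toolbox's Mamdani pipeline, and the membership-function shapes and the normalized error variable are illustrative assumptions:

```python
def trimf(x, a, b, c):
    """Triangular membership function: feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def control(speed_error):
    """speed_error = (speed - setpoint) / setpoint, roughly in [-1, 1]."""
    # Fuzzify the input (illustrative membership functions)
    too_slow   = trimf(speed_error, -2.0, -1.0, 0.0)
    just_right = trimf(speed_error, -1.0,  0.0, 1.0)
    too_fast   = trimf(speed_error,  0.0,  1.0, 2.0)
    # Each rule fires with the strength of its antecedent; the consequent
    # here is a crisp voltage change (Sugeno-style singleton)
    rules = [(too_slow,   +1.0),   # too slow   -> more voltage
             (just_right,  0.0),   # just right -> no change
             (too_fast,   -1.0)]   # too fast   -> less voltage
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(control(-0.5))  # 0.5: speed below the set point -> raise voltage
```

A negative error (motor below the set point) yields a positive voltage change, matching rule 1; the output fades smoothly to zero as the speed approaches the set point.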
Membership Function Editor: Adding Rules
Rule Base
Rule Viewer
Surface Viewer
Save the file as "one.fis". Now type in the command window to get the result:

>> fis = readfis('one');
>> out = evalfis(2437.4, fis)
out =
    2.376
Sugeno-Type Fuzzy Inference
The Takagi-Sugeno-Kang method of fuzzy inference is similar to the Mamdani method in many respects:
- Fuzzifying the inputs and applying the fuzzy operator are exactly the same.
- The main difference between Mamdani and Sugeno is that Sugeno output membership functions are either linear or constant.
FIS Editor: Sugeno inference system
Add Input/Output Variables
Define Input/Output Variables
Add Input MF
Define Input MF
Add Output MF
Define Output MF
Add Rules
Define Rule Base
View Rules
Rules Viewer
Surface Viewer
Advantages of the Sugeno Method
- It is a more compact and computationally efficient representation than a Mamdani system.
- It works well with linear techniques (e.g., PID control).
- It works well with optimization and adaptive techniques.
- It has guaranteed continuity of the output surface.
- It is well suited to mathematical analysis.
Advantages of the Mamdani Method
- It is intuitive.
- It has widespread acceptance.
- It is well suited to human input.
Support Vector Machine & Its Applications
Overview
- Introduction to Support Vector Machines (SVM)
- Properties of SVM
- Applications
  - Gene Expression Data Classification
  - Text Categorization (if time permits)
- Discussion
Support Vector Machine (SVM)
The fundamental principle of classification using the SVM is to separate the two categories of patterns:
- Map the data x into a higher-dimensional feature space via a nonlinear mapping.
- Linear classification (regression) in the high-dimensional space is equivalent to nonlinear classification (regression) in the low-dimensional space.
Linear Classifiers
(Figure: points labeled "+1" and "−1", separated by a line.)
f(x, w, b) = sign(w·x + b)
w·x + b > 0 on one side of the boundary; w·x + b < 0 on the other.
How would you classify this data?
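The decision rule f(x, w, b) = sign(w·x + b) is a one-liner. A sketch in plain Python, with toy weights chosen purely for illustration:

```python
def sign(v):
    # Convention: treat points exactly on the boundary as +1
    return 1 if v >= 0 else -1

def predict(x, w, b):
    """f(x, w, b) = sign(w . x + b)"""
    return sign(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy boundary x1 - x2 = 0: points with w . x + b > 0 get label +1
w, b = [1.0, -1.0], 0.0
print(predict([2.0, 1.0], w, b))  # 1
print(predict([1.0, 2.0], w, b))  # -1
```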
Linear Classifiers
f(x, w, b) = sign(w·x + b)
Any of these would be fine...
...but which is best?
Linear Classifiers
f(x, w, b) = sign(w·x + b)
How would you classify this data? (This boundary misclassifies a point into the +1 class.)

Classifier Margin
f(x, w, b) = sign(w·x + b)
Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.
Maximum Margin
f(x, w, b) = sign(w·x + b)
The maximum margin linear classifier is the linear classifier with the maximum margin.
This is the simplest kind of SVM (called an LSVM).
Linear SVM
Support vectors are the datapoints that the margin pushes up against.
1. Maximizing the margin is good according to intuition and PAC theory.
2. It implies that only support vectors are important; other training examples are ignorable.
3. Empirically it works very, very well.
Linear SVM Mathematically
What we know:
- w · x⁺ + b = +1
- w · x⁻ + b = −1
- therefore w · (x⁺ − x⁻) = 2
Margin width: M = (x⁺ − x⁻) · w/‖w‖ = 2/‖w‖
Linear SVM Mathematically
Goal:
1) Correctly classify all training data:
   w·xi + b ≥ +1 if yi = +1
   w·xi + b ≤ −1 if yi = −1
   i.e., yi(w·xi + b) ≥ 1 for all i
2) Maximize the margin M = 2/‖w‖, which is the same as minimizing Φ(w) = ½ wᵀw.
We can formulate a quadratic optimization problem and solve for w and b:
Minimize ½ wᵀw subject to yi(w·xi + b) ≥ 1 for all i.
Solving the Optimization Problem
Need to optimize a quadratic function subject to linear constraints.
Quadratic optimization problems are a well‐known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them.
The solution involves constructing a dual problem in which a Lagrange multiplier αi is associated with every constraint in the primal problem:

Primal: find w and b such that Φ(w) = ½ wᵀw is minimized, and for all {(xi, yi)}: yi(wᵀxi + b) ≥ 1

Dual: find α1…αN such that Q(α) = Σαi − ½ ΣΣ αiαjyiyjxiᵀxj is maximized and
(1) Σαiyi = 0
(2) αi ≥ 0 for all αi
The Optimization Problem Solution
The solution has the form:
w = Σαiyixi    b = yk − wᵀxk for any xk such that αk ≠ 0
Each non-zero αi indicates that the corresponding xi is a support vector. The classifying function then has the form:
f(x) = Σαiyixiᵀx + b
Notice that it relies on an inner product between the test point x and the support vectors xi; we will return to this later. Also keep in mind that solving the optimization problem involved computing the inner products xiᵀxj between all pairs of training points.
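The formulas w = Σαiyixi and b = yk − wᵀxk can be checked on a toy 1-D dataset whose dual solution is known in closed form. The values αi = 0.5 are an assumption for this symmetric dataset; they can be verified against the constraints Σαiyi = 0 and yi(w·xi + b) = 1:

```python
# Toy 1-D dataset: x = -1 labeled -1, x = +1 labeled +1
X = [-1.0, 1.0]
y = [-1, 1]
alpha = [0.5, 0.5]   # dual solution for this symmetric dataset

# w = sum_i alpha_i y_i x_i
w = sum(a * yi * xi for a, yi, xi in zip(alpha, y, X))
# b = y_k - w x_k for any support vector x_k (alpha_k != 0)
b = y[0] - w * X[0]

def f(x):
    """f(x) = sum_i alpha_i y_i x_i x + b"""
    return sum(a * yi * xi * x for a, yi, xi in zip(alpha, y, X)) + b

print(w, b)   # 1.0 0.0 -> margin width 2/|w| = 2
print(f(0.7)) # positive, so the point is classified +1
```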
Dataset with Noise
Hard margin: so far we require all data points to be classified correctly (no training error).
What if the training set is noisy?
Solution 1: use very powerful kernels. But this risks OVERFITTING!
Soft Margin Classification
Slack variables ξi can be added to allow misclassification of difficult or noisy examples (the figure marks slack variables such as ξ2, ξ7, ξ11).
What should our quadratic optimization criterion be?
Minimize ½ wᵀw + C Σk=1..R ξk
Hard Margin vs. Soft Margin
The old formulation:
Find w and b such that Φ(w) = ½ wᵀw is minimized, and for all {(xi, yi)}: yi(wᵀxi + b) ≥ 1
The new formulation, incorporating slack variables:
Find w and b such that Φ(w) = ½ wᵀw + C Σξi is minimized, and for all {(xi, yi)}: yi(wᵀxi + b) ≥ 1 − ξi and ξi ≥ 0 for all i
The parameter C can be viewed as a way to control overfitting.
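The new objective can be evaluated directly once the slacks are written as hinge losses, ξi = max(0, 1 − yi(wᵀxi + b)), which is the optimal slack for fixed w and b. A sketch with toy data chosen for illustration:

```python
def soft_margin_objective(w, b, C, X, y):
    """Phi(w) = 1/2 w^T w + C * sum_i xi_i,
    with xi_i = max(0, 1 - y_i (w^T x_i + b)) (hinge loss)."""
    wtw = sum(wd * wd for wd in w)
    slack = sum(max(0.0, 1.0 - yi * (sum(wd * xd for wd, xd in zip(w, xi)) + b))
                for xi, yi in zip(X, y))
    return 0.5 * wtw + C * slack

# Two points outside the margin (zero slack) and one inside it (slack 0.5)
X, y = [[2.0], [-2.0], [0.5]], [1, -1, 1]
print(soft_margin_objective([1.0], 0.0, 1.0, X, y))   # 0.5 + 1.0 * 0.5 = 1.0
print(soft_margin_objective([1.0], 0.0, 10.0, X, y))  # larger C penalizes slack more
```

Raising C weights the slack term more heavily, which is exactly why C controls overfitting.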
Linear SVMs: Overview
- The classifier is a separating hyperplane.
- The most "important" training points are the support vectors; they define the hyperplane.
- Quadratic optimization algorithms can identify which training points xi are support vectors with non-zero Lagrange multipliers αi.
- Both in the dual formulation of the problem and in the solution, training points appear only inside dot products:
Find α1…αN such that Q(α) = Σαi − ½ ΣΣ αiαjyiyjxiᵀxj is maximized and
(1) Σαiyi = 0
(2) 0 ≤ αi ≤ C for all αi
f(x) = Σαiyixiᵀx + b
Non-linear SVMs
Datasets that are linearly separable with some noise work out great. But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space?
(Figure: 1-D data on the x axis that is not linearly separable becomes separable when mapped to (x, x²).)
Non-linear SVMs: Feature spaces
General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:
Φ: x→ φ(x)
The "Kernel Trick"
The linear classifier relies on the dot product between vectors: K(xi, xj) = xiᵀxj.
If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes K(xi, xj) = φ(xi)ᵀφ(xj).
A kernel function is a function that corresponds to an inner product in some expanded feature space.
Example: 2-dimensional vectors x = [x1 x2]; let K(xi, xj) = (1 + xiᵀxj)².
We need to show that K(xi, xj) = φ(xi)ᵀφ(xj):
K(xi, xj) = (1 + xiᵀxj)²
= 1 + xi1²xj1² + 2 xi1xj1xi2xj2 + xi2²xj2² + 2 xi1xj1 + 2 xi2xj2
= [1, xi1², √2 xi1xi2, xi2², √2 xi1, √2 xi2]ᵀ [1, xj1², √2 xj1xj2, xj2², √2 xj1, √2 xj2]
= φ(xi)ᵀφ(xj), where φ(x) = [1, x1², √2 x1x2, x2², √2 x1, √2 x2]
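The expansion above can be verified numerically: the kernel value (1 + xᵀz)² and the explicit dot product φ(x)ᵀφ(z) agree for any pair of 2-D vectors. The test vectors here are arbitrary:

```python
import math

def poly_kernel(x, z):
    """K(x, z) = (1 + x^T z)^2 for 2-D vectors."""
    return (1.0 + x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit feature map: [1, x1^2, sqrt(2) x1 x2, x2^2, sqrt(2) x1, sqrt(2) x2]."""
    r2 = math.sqrt(2.0)
    return [1.0, x[0] ** 2, r2 * x[0] * x[1], x[1] ** 2, r2 * x[0], r2 * x[1]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, z = [1.0, 2.0], [3.0, -1.0]
print(poly_kernel(x, z))    # 4.0
print(dot(phi(x), phi(z)))  # same value, up to floating point
```

The kernel computes this 6-dimensional inner product without ever forming φ(x) explicitly, which is the entire point of the trick.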
What Functions Are Kernels?
For some functions K(xi, xj), checking that K(xi, xj) = φ(xi)ᵀφ(xj) can be cumbersome.
Mercer's theorem: every positive semi-definite symmetric function is a kernel.
Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:

K = | K(x1,x1) K(x1,x2) K(x1,x3) … K(x1,xN) |
    | K(x2,x1) K(x2,x2) K(x2,x3) … K(x2,xN) |
    | …        …        …        …  …        |
    | K(xN,x1) K(xN,x2) K(xN,x3) … K(xN,xN) |
Examples of Kernel Functions
- Linear: K(xi, xj) = xiᵀxj
- Polynomial of power p: K(xi, xj) = (1 + xiᵀxj)^p
- Gaussian (radial-basis function network): K(xi, xj) = exp(−‖xi − xj‖² / (2σ²))
- Sigmoid: K(xi, xj) = tanh(β0 xiᵀxj + β1)
Non-linear SVMs Mathematically
Dual problem formulation:
Find α1…αN such that Q(α) = Σαi − ½ ΣΣ αiαjyiyjK(xi, xj) is maximized and
(1) Σαiyi = 0
(2) αi ≥ 0 for all αi
The solution is: f(x) = ΣαiyiK(xi, x) + b
Optimization techniques for finding the αi's remain the same!
Nonlinear SVM: Overview
- The SVM locates a separating hyperplane in the feature space and classifies points in that space.
- It does not need to represent the space explicitly; it simply defines a kernel function.
- The kernel function plays the role of the dot product in the feature space.
Properties of SVM
- Flexibility in choosing a similarity function
- Sparseness of the solution when dealing with large data sets: only support vectors are used to specify the separating hyperplane
- Ability to handle large feature spaces: complexity does not depend on the dimensionality of the feature space
- Overfitting can be controlled by the soft margin approach
- Nice math property: a simple convex optimization problem which is guaranteed to converge to a single global solution
- Feature selection
SVM Applications
SVM has been used successfully in many real-world problems:
- text (and hypertext) categorization
- image classification
- bioinformatics (protein classification, cancer classification)
- hand-written character recognition
Weakness of SVM
It is sensitive to noise: a relatively small number of mislabeled examples can dramatically decrease performance.
It only considers two classes. How do we do multi-class classification with SVM? Answer:
1) With output arity m, learn m SVMs:
   - SVM 1 learns "Output == 1" vs. "Output != 1"
   - SVM 2 learns "Output == 2" vs. "Output != 2"
   - …
   - SVM m learns "Output == m" vs. "Output != m"
2) To predict the output for a new input, predict with each SVM and find out which one puts the prediction furthest into the positive region.
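Step 2 is an argmax over the m decision values. A sketch with hypothetical linear decision functions standing in for the m trained SVMs:

```python
def one_vs_rest_predict(x, classifiers):
    """classifiers: list of m decision functions f_i. The winning class is
    the one whose f_i pushes x furthest into its positive region."""
    scores = [f(x) for f in classifiers]
    return max(range(len(scores)), key=lambda i: scores[i])

# Hypothetical decision functions f_i(x) = w_i . x + b_i for 1-D inputs
classifiers = [
    lambda x: x[0] - 1.0,        # class 0: "x0 is large"
    lambda x: -x[0] - 1.0,       # class 1: "x0 is small"
    lambda x: 1.0 - abs(x[0]),   # class 2: "x0 is near zero"
]
print(one_vs_rest_predict([3.0], classifiers))  # 0
print(one_vs_rest_predict([0.1], classifiers))  # 2
```

Even when every score is negative (no SVM claims the point outright), the argmax still picks the least-negative one, which is the behavior described in step 2.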
Some Issues
Choice of kernel:
- A Gaussian or polynomial kernel is the default; if ineffective, more elaborate kernels are needed.
- Domain experts can give assistance in formulating appropriate similarity measures.
Choice of kernel parameters:
- e.g., σ in the Gaussian kernel, roughly the distance between the closest points with different classifications.
Optimization criterion, hard margin vs. soft margin:
- typically a lengthy series of experiments in which various parameters are tested.
Wind Power Forecasting (WPF)
WPF is a technique which provides information on how much wind power can be expected at a given point in time. It matters because of the increasing penetration of wind power into the electric power grid: good short-term forecasting will ensure grid stability and a favorable trading performance on the electricity markets.
ε-SVM
The objective function of the ε-SVM is based on an ε-insensitive loss function.
Structure of SVM
Data Resolution
The resolution of the dataset is 10 minutes. Each data point represents the average wind speed and power within one hour. The data values between two adjacent samples are assumed to change linearly, that is:

x(t) = xi + (xi+1 − xi) · t / dti,  0 ≤ t ≤ dti

where dti is the time interval between xi and xi+1.
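The linear-interpolation formula translates directly into code; the sample values below are made up for illustration:

```python
def interp(x_i, x_next, dt_i, t):
    """x(t) = x_i + (x_{i+1} - x_i) * t / dt_i, valid for 0 <= t <= dt_i."""
    assert 0.0 <= t <= dt_i
    return x_i + (x_next - x_i) * t / dt_i

# Two adjacent 10-minute samples of wind speed: 6.0 m/s then 8.0 m/s
print(interp(6.0, 8.0, 10.0, 5.0))  # 7.0: halfway between the samples
```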
Data Value
The average value of the data within Ts can be calculated as

x̄j(t) = (1/Ts) ∫ from t to t+Ts of xj(t) dt

where Ts = 60 minutes is used in very short-term forecasting (less than 6 hours) and Ts = 2 hours is used for short-term forecasting.
Fixed-Step Prediction Scheme
With a prediction horizon of h steps, fixed-step forecasting means only the value of the next sample is predicted using the historical data:

ŷ(t + h) = f(yt, yt−1, …, yt−d)

where f is a nonlinear function generated by the SVM. yt+h is predicted with the data up to yt (the red blocks); yt+h−1 is predicted with the data up to yt−1 (the green blocks).
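The fixed-step scheme is a sliding window of d + 1 lagged values fed to a model. In this sketch a toy averaging model stands in for the trained SVM, and the history values are made up:

```python
def fixed_step_forecast(history, f, d):
    """y_hat(t + h) = f(y_t, y_{t-1}, ..., y_{t-d}): predict one value
    ahead from the last d + 1 samples. f is the trained model (an SVM
    in the slides; any callable here)."""
    window = history[-(d + 1):]        # y_{t-d} ... y_t
    return f(list(reversed(window)))   # f(y_t, y_{t-1}, ..., y_{t-d})

# Placeholder model instead of a trained SVM: a simple average of the lags
def toy_model(lags):
    return sum(lags) / len(lags)

history = [5.0, 6.0, 7.0, 8.0]
print(fixed_step_forecast(history, toy_model, d=2))  # (8 + 7 + 6) / 3 = 7.0
```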
Wind speed normalization
Autocorrelations of the wind speed samples
SVM model and the RBF model
1h-ahead wind power prediction using the SVM model.
CONCLUSIONS
- The SVM has been successfully applied to problems of pattern classification, particularly the classification of two different categories of patterns.
- The SVM model is well suited for very short-term and short-term WPF.
- It provides a powerful tool for enhancing WPF accuracy.