Page 1: Kak Neural Network

Mehdi Soufifar: Soufifar@ce.sharif.edu
Mehdi Hoseini: Me_hosseini@ce.sharif.edu
Amir Hosein Ahmadi: A_H_Ahmadi@yahoo.com

Page 2: Corner Classification approach

[Figure: the four corners of the unit square, axes running from 0 to 1.]

Corners for the XOR function: the function value is 1 at corners (0,1) and (1,0), and 0 at corners (0,0) and (1,1).

Page 3: Corner Classification approach…

Map n-dimensional binary vectors (input) into m-dimensional binary vectors (output). The mapping function f is

$$Y_i = f(X_i)$$

Using backpropagation for this does not guarantee convergence.

Truth table for the XOR mapping:

X1  X2 | Y
 0   0 | 0
 0   1 | 1
 1   0 | 1
 1   1 | 0

Page 4: Introduction

Feedback (Hopfield with delta learning) and feedforward (backpropagation) networks learn patterns slowly: the network must adjust the weights of the links between input and output until it obtains the correct response to the training patterns.

But biological learning is not a single process: some forms are very quick and others relatively slow. Short-term biological memory, in particular, works very quickly, so slow neural network models are not plausible candidates for it.

Page 5: Training feedforward NNs [1]

Kak proposed CC1 and CC2 in January 1993. Example: the Exclusive-OR mapping.


Page 7: CC1 as an example

Initialize all weights to zero. If the result is correct, do nothing. If the result is 1 and the supervisor says 0, subtract the input vector x from the weight vector. If the result is 0 and the supervisor says 1, add the input vector x to the weight vector. A minimal sketch of this rule follows below.

[Figure: network for the XOR example, with input layer (X1, X2), hidden layer of corner neurons, and an output layer acting as an OR gate.]
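The update rule above is perceptron-style training applied to one corner (hidden) neuron at a time. A minimal Python sketch, assuming a bias bit appended to each input (the function name and epoch cap are illustrative, not from the slides); run on the corner (0,1) it reproduces the weight sequence tabulated on page 8:

```python
import numpy as np

def train_cc1(X, y, max_epochs=100):
    """CC1 training for one corner neuron.

    X : (k, n+1) binary input vectors, each with a trailing bias bit 1.
    y : (k,) desired 0/1 outputs for this corner.
    """
    w = np.zeros(X.shape[1], dtype=int)      # initialize all weights to zero
    for _ in range(max_epochs):
        changed = False
        for x, target in zip(X, y):
            out = 1 if w @ x > 0 else 0
            if out == 1 and target == 0:     # says yes, supervisor says no
                w -= x
                changed = True
            elif out == 0 and target == 1:   # says no, supervisor says yes
                w += x
                changed = True
        if not changed:                      # every sample correct: converged
            break
    return w

# XOR corner (0,1): desired output 1 only for that input
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])  # bias appended
y = np.array([0, 1, 0, 0])
print(train_cc1(X, y))   # -> [-1  1  0], matching the table on page 8
```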

Page 8: CC1… Result on the first output corner:

samples  | W1  W2  W3
Init, 1  |  0   0   0
2        |  0   1   1
3        | -1   1   0

Page 9: CC1… Result on the second output corner:

samples    | W1  W2  W3
Init, 1, 2 |  0   0   0
3          |  1   0   1
4, 1, 2    |  0  -1   0
3          |  1  -1   1
4, 1, 2    |  0  -2   0
3, 4       |  1  -2   1
1          |  1  -2   0

Page 10: CC1 Algorithm Notations

The mapping is $Y_i = f(X_i)$, where $X_i$ and $Y_i$ are n- and m-dimensional binary vectors, $i = 1, \ldots, k$ (k = number of vectors).

The weight of a vector is the number of 1 elements in it; the weight of $X_i$ is $s_i$.

If the k output vectors are written out as an array, the columns may be viewed as a sequence of m k-dimensional vectors $W^j = (y_{1j}\, y_{2j} \cdots y_{kj})^{T}$, whose weight is $w_j$:

$$\begin{pmatrix}
y_{11} & \cdots & y_{1j} & \cdots & y_{1m}\\
y_{21} & \cdots & y_{2j} & \cdots & y_{2m}\\
\vdots &        & \vdots &        & \vdots\\
y_{k1} & \cdots & y_{kj} & \cdots & y_{km}
\end{pmatrix}
\qquad W^1 \ \cdots \ W^j \ \cdots \ W^m$$

Page 11: CC1 Algorithm…

Start with a random initial weight vector. If the neuron says no when it should say yes, add the input vector to the weight vector. If the neuron says yes when it should say no, subtract the input vector from the weight vector. Do nothing otherwise.

Note that a main problem is: what is the number of neurons in the hidden layer?

Page 12: Number of hidden neurons

Consider that each $W^j$ is a k-dimensional binary vector, so $0 \le w_j \le k$.

The number of hidden neurons can be reduced by the number of duplicated corner neurons: corners shared by several output columns need only one hidden neuron each.

Page 13: Number of hidden neurons…

Theorem: the number of hidden neurons required to realize the mapping $Y_i = f(X_i)$, $i = 1, 2, \ldots, k$, is equal to

$$\sum_{j=1}^{m} w_j$$

reduced by the number of duplicated corners. And since every corner that must be classified is one of the k input vectors, we can say: the number of hidden neurons required to realize the mapping is at most k. For XOR, for example, m = 1 and $w_1 = 2$, so two hidden neurons suffice ($\le k = 4$).

Page 14: Real application problems [1]

Comparison of training results:

Algorithm (on the XOR problem) | Number of iterations
BP                             | 6,587 [1]
CC (CC1)                       | 8 [1]

Page 15: Proof of convergence [1]

We establish that the classification algorithm converges if there is a weight vector $W^{*}$ such that $W^{*} \cdot X > 0$ for the corner that needs to be classified, and $W^{*} \cdot X < 0$ otherwise.

$W_t$ is the weight vector at the t-th iteration, and $\theta$ is the angle between $W^{*}$ and $W_t$. If the neuron says no when it must say yes:

$$W_{t+1} = W_t + X$$

Page 16: Proof of convergence…

The numerator of the cosine becomes

$$W^{*} \cdot W_{t+1} = W^{*} \cdot W_t + W^{*} \cdot X.$$

Since $W^{*}$ produces the correct result, we know that $W^{*} \cdot X > 0$, and because the vectors are integer-valued, $W^{*} \cdot X \ge 1$. And so:

$$W^{*} \cdot W_{t+1} \ge W^{*} \cdot W_t + 1.$$

We get the same inequality for the other type of misclassification ($W_{t+1} = W_t - X$, where $W^{*} \cdot X < 0$).

Page 17: Proof of convergence…

Repeating this process for t iterations produces (taking $W_0 = 0$):

$$W^{*} \cdot W_t \ge t. \tag{1}$$

For the cosine's denominator ($\|W_t\|$): if the neuron says no we have $W_{t-1} \cdot X \le 0$, and then

$$\|W_t\|^2 = \|W_{t-1} + X\|^2 = \|W_{t-1}\|^2 + 2\,W_{t-1} \cdot X + \|X\|^2 \le \|W_{t-1}\|^2 + \|X\|^2.$$

The same result is obtained for the other type of misclassification ($W_{t+1} = W_t - X$).

Page 18: Proof of convergence…

Repeated substitution produces $\|W_t\|^2 \le t\,\|X\|^2$. Since X is an n-dimensional binary vector, $\|X\|^2 \le n$, and we have:

$$\|W_t\|^2 \le t\,n, \qquad \|W_t\| \le \sqrt{t\,n}. \tag{2}$$

Page 19: Proof of convergence…

From (1) and (2) we can say:

$$\cos\theta = \frac{W^{*} \cdot W_t}{\|W^{*}\|\,\|W_t\|} \ge \frac{t}{\|W^{*}\|\,\sqrt{t\,n}} = \frac{1}{\|W^{*}\|}\sqrt{\frac{t}{n}}.$$

Since $\cos\theta \le 1$, the number of corrections is bounded, $t \le n\,\|W^{*}\|^2$, so the algorithm converges in a finite number of steps.

Page 20: Types of memory

Long-term: in AI, networks such as BP and RBF.

Short-term: learns instantaneously with good generalization.

Page 21: Current network characteristics

What is the problem with BP and RBF?
- They require iterative training.
- They take a long time to learn.
- Sometimes they do not converge.

Result:
- They are not applicable in real-time applications.
- They can never model short-term, instantaneously learned memory (the most significant aspect of biological working memory).

Page 22: CC2 algorithm

In this algorithm the weights are prescribed directly: $w_i = +1$ if $x_i = 1$ and $w_i = -1$ if $x_i = 0$, with an extra (bias) weight of $-(s - 1)$, where s is the weight (number of 1s) of the input vector. The value $-(s - 1)$ implies that the threshold of the hidden neuron separating this sequence is s, so the neuron fires only at its own corner.

Example: the result of CC2 on the last example (the XOR corners):

corner (0, 1): weights (-1, 1), bias W3 = -(s-1) = -(1-1) = 0
corner (1, 0): weights (1, -1), bias W3 = 0

A sketch of this prescription follows below.
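Since CC2 assigns the weights by inspection, training is a one-pass assignment. A minimal Python sketch of the prescription above (the function name is illustrative):

```python
import numpy as np

def cc2_hidden_weights(x):
    """Prescribe CC2 hidden-neuron weights for one binary training vector x.

    Input weights: +1 where x_i = 1, -1 where x_i = 0.
    Bias weight: -(s - 1), with s the number of 1s in x, so the
    neuron's net input is positive only at its own corner.
    """
    x = np.asarray(x)
    w = np.where(x == 1, 1, -1)
    s = int(x.sum())
    return np.append(w, -(s - 1))   # last entry multiplies the bias input 1

# The two XOR corners that map to 1:
print(cc2_hidden_weights([0, 1]))  # -> [-1  1  0]
print(cc2_hidden_weights([1, 0]))  # -> [ 1 -1  0]
```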

Page 23: Real application problems

Comparison of training results:

Algorithm (on the XOR problem) | Number of iterations
BP                             | 6,587 [1]
CC (CC1)                       | 8 [1]
CC (CC2)                       | 1 [1]

Page 24: CC2's generalization… [3]

The hidden neurons' bias weight becomes $r - s + 1$, where r is the radius of the generalized region. If no generalization is needed, then r = 0. For function mapping where the input vectors are equally distributed into the 0 and the 1 classes:

$$r = \frac{n}{2}$$

Page 25: About the choice of h [3]

Consider a 2-dimensional problem, where inputs equal to 0 receive the weight h instead of -1. The function of the hidden node can be expressed by the separating line

$$w_1 x_1 + w_2 x_2 + w_3 = 0,$$

where $w_3 = r - s + 1$ is the bias weight.

Page 26: About the choice of h [3]

Assume that the input pattern being classified is (0 1); then x2 = 1. Also, w1 = h, w2 = 1, and s = 1. The equation of the dividing line represented by the hidden node now becomes

$$h\,x_1 + x_2 + r = 0.$$

Page 27: About the choice of h…

[Figure: dividing line for h = -1 and r = 0, plotted on axes from -4 to 4.]

Page 28: About the choice of h…

[Figure: dividing line for h = -0.8 and r = 0, plotted on axes from -4 to 4.]

Page 29: About the choice of h…

[Figure: dividing line for h = -1 and r = 1.]

Page 30: CC4 [6]

The CC4 network maps an input binary vector X to an output binary vector Y. The layers are fully connected, and all neurons are binary neurons with the binary step activation function

$$f(x) = \begin{cases} 1 & x > 0 \\ 0 & \text{otherwise.} \end{cases}$$

The number of hidden neurons is equal to the number of training samples, with each hidden neuron representing one training sample.

Page 31: CC4 training [6]

Let $w_{ij}$ ($i = 1, \ldots, N$; $j = 1, \ldots, H$) be the weight of the connection from input neuron i to hidden neuron j, and let $x_{ij}$ be the input to the i-th input neuron when the j-th training sample is presented to the network. Then the weights are assigned as follows:

$$w_{ij} = \begin{cases} 1 & x_{ij} = 1 \\ -1 & x_{ij} = 0 \\ r - s_j + 1 & i = N \text{ (the bias input, always 1),} \end{cases}$$

where $s_j$ is the number of 1s in the j-th training vector and r is the radius of generalization.

Page 32: CC4 training [6]…

Let $u_{jk}$ ($j = 1, \ldots, H$; $k = 1, \ldots, M$) be the weight of the connection from the j-th hidden neuron to the k-th output neuron, and let $y_{jk}$ be the desired output of the k-th output neuron for the j-th training sample. The values of $u_{jk}$ are determined by the following equation:

$$u_{jk} = \begin{cases} 1 & y_{jk} = 1 \\ -1 & y_{jk} = 0. \end{cases}$$

A sketch of the whole prescription follows below.
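Putting the two prescriptions together gives the complete, instantaneous CC4 training step. A minimal Python sketch (class and helper names are illustrative, not from [6]); with r = 0 it reproduces the XOR mapping used earlier:

```python
import numpy as np

def step(v):
    """Binary step activation: 1 if the net input is positive, else 0."""
    return (v > 0).astype(int)

class CC4:
    """One hidden neuron per training sample; weights assigned by inspection.

    X : (H, N) binary inputs whose last component is the bias bit 1.
    Y : (H, M) binary desired outputs.
    r : radius of generalization.
    """
    def __init__(self, X, Y, r=0):
        X, Y = np.asarray(X), np.asarray(Y)
        s = X[:, :-1].sum(axis=1)            # number of 1s, excluding the bias bit
        self.W = np.where(X == 1, 1, -1)     # +1 for 1-inputs, -1 for 0-inputs
        self.W[:, -1] = r - s + 1            # bias weight r - s_j + 1
        self.U = np.where(Y == 1, 1, -1)     # output weights from desired outputs

    def predict(self, X):
        h = step(np.asarray(X) @ self.W.T)   # hidden binary activations
        return step(h @ self.U)              # output binary activations

# XOR with the bias bit appended (r = 0: no generalization)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
Y = np.array([[0], [1], [1], [0]])
net = CC4(X, Y, r=0)
print(net.predict(X).ravel())   # -> [0 1 1 0]
```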

Page 33: Sample of CC4

Consider a 16-by-16 area containing a spiral pattern of 256 binary (black and white) pixels, as in Figure 1. We want to train the system with one exemplar sample, as in Figure 2, in which a total of 75 points are used for training.

[Figure 1: the full spiral pattern. Figure 2: the 75 training points.]

Page 34: Sample of CC4…

We can code 16 integer numbers with 4 binary bits. Therefore, for a location (x, y) in the 16-by-16 area, we use 4 bits for x, 4 bits for y, and 1 extra bit (always equal to 1) for the bias: 9 inputs in total. A sketch of this encoding follows below.
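A small illustrative encoder for this scheme, assuming coordinates indexed 0 to 15 (the slides do not fix the index origin):

```python
def encode_point(x, y):
    """Encode a grid location (x, y), each in 0..15, as the 9-bit CC4 input:
    4 bits for x, 4 bits for y, and a trailing bias bit that is always 1."""
    bits = lambda v: [int(b) for b in format(v, '04b')]
    return bits(x) + bits(y) + [1]

print(encode_point(5, 6))   # -> [0, 1, 0, 1, 0, 1, 1, 0, 1]
```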

Page 35: Sample of CC4…

[Figure: the hidden neuron for the training point (5,6): the 9 encoded input bits, input weights +1 for 1-bits and -1 for 0-bits, and bias weight r - s + 1 = r - 3 + 1 = r - 2.]

Page 36: Sample of CC4 results…

Number of points classified/misclassified in the spiral pattern.

[Figure panels: original spiral; training sample; output for r = 1; r = 2; r = 3; r = 4.]

Page 37: FC motivation

Disadvantages of the CC algorithms:
- Input and output must be discrete.
- Input is best presented in a unary code, which increases the number of input neurons considerably.
- The degree of generalization is the same for all nodes.

Page 38: Kak Neural Network Mehdi Soufifar: Soufifar@ce.sharif.edu Mehdi Hoseini: Me_hosseini@ce.sharif.edu Amir hosein Ahmadi: A_H_Ahmadi@yahoo.com.

38

In reality this degree vary from node to node We need to work on real data

An interative version of the CC algorithm that does provide a varying degree of generalization has been devised .

Problem :It is not instantaneous

Problem

Page 39: Fast classification network

What is FC? A generalization of the CC network that can operate on real data directly and learns instantaneously. It reduces to CC when the data are binary and the amount of generalization is fixed.

Page 40: Input

X = (x1, x2, …, xk), F(X) → Y

- All xi and Y are real-valued.
- k is determined by the nature of the problem.

What to do: define the input and output weights, and define the radius of generalization.

Page 41: Input

Index | Input          | Output
1     | x1, x2, x3, x4 | Y1, Y2
2     | x1, x2, x3, x4 | Y1, Y2
…     | …              | …

Page 42: FC network structure

[Figure: the FC network, with an input layer, one hidden neuron per training sample, and an output layer.]

Page 43: The hidden neurons

Each hidden neuron stores one training input and, when a new input is presented, outputs its distance to that stored vector; together these distances form the distance vector h.

The rule base

M=the number of hi that equal to 0

• value of k is typically a small fraction of the size of the training set.

• Membership grades are normalized,

Rule 1: IF m = 1, THEN assign μi using single-nearest-neighbor (1NN)

Rule 2: IF m = 0, THEN assign μi using k-nearest-neighbor (kNN) heuristic.

Page 45: 1NN heuristic

Used when exactly one element of the distance vector h is 0: the corresponding hidden neuron receives membership grade 1 and all others receive 0.

Page 46: kNN heuristic

Based on the k nearest neighbors: the k hidden neurons with the smallest distances receive triangular membership grades (decreasing linearly with distance), normalized to sum to 1.

Page 47: Training of the FC network

Training involves two separate steps:

Step 1: the input and output weights are prescribed simply by inspection of the training input/output pairs.

Step 2: the radius of generalization for each hidden neuron is determined as half the distance to its nearest neighboring training sample:

$$r_i = \tfrac{1}{2}\, d_{\min,i}$$

A sketch of the whole network follows below.
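A compact Python sketch of the FC pieces described on pages 43 to 47: the distance hidden layer, the radius $r_i = d_{\min,i}/2$, the 1NN/kNN rule base, and the output dot product. The triangular kNN grades shown are one plausible normalized scheme consistent with the worked example on page 54, not necessarily the exact form in [8]; all names and the sample values are illustrative:

```python
import numpy as np

class FC:
    """Sketch of an FC network: instantaneous training, fuzzy generalization."""
    def __init__(self, X, Y, k=3):
        self.X = np.asarray(X, float)    # stored training inputs (hidden layer)
        self.Y = np.asarray(Y, float)    # stored training outputs (output weights)
        self.k = k
        D = np.linalg.norm(self.X[:, None] - self.X[None, :], axis=2)
        np.fill_diagonal(D, np.inf)      # ignore each sample's distance to itself
        self.r = D.min(axis=1) / 2       # radius of generalization r_i = d_min_i / 2

    def predict(self, x):
        h = np.linalg.norm(self.X - np.asarray(x, float), axis=1)  # distance vector
        mu = np.zeros(len(h))
        if np.any(h == 0):               # rule 1: exact match -> 1NN grade
            mu[np.argmin(h)] = 1.0
        else:                            # rule 2: triangular grades on k nearest
            nn = np.argsort(h)[:self.k]
            mu[nn] = 1.0 - h[nn] / h[nn].sum()
            mu /= mu.sum()               # normalize: grades sum to 1
        return mu @ self.Y               # dot product with the output weights

# Hypothetical 1-D samples chosen so the radii come out 2.5, 2.5, 5 as on page 53
X = [[0.0], [5.0], [15.0]]
Y = [7.0, 4.0, 9.0]
net = FC(X, Y, k=3)
print(net.r)                # -> [2.5 2.5 5. ]
print(net.predict([2.0]))   # fuzzy-interpolated output
```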

Page 48: Radius of generalization

[Figure: soft generalization together with interpolation, versus hard generalization with separated decision regions.]

Page 49: Generalization by fuzzy membership

The output neuron then computes the dot product between the output weight vector and the membership grade vector: $y = \sum_j u_j \mu_j$.

Page 50: Other considerations

Other membership functions, e.g. the quadratic function known as the S function.

Page 51: Other considerations

Other distance metrics, e.g. the city-block distance.

Result: the performance of the network is not seriously affected by the choice of distance metric or membership function.

Page 52: Hidden neurons

As in CC4, the number of hidden neurons equals the number of training samples the network is required to learn. Note: the training samples are exemplars.

Page 53: Kak Neural Network Mehdi Soufifar: Soufifar@ce.sharif.edu Mehdi Hoseini: Me_hosseini@ce.sharif.edu Amir hosein Ahmadi: A_H_Ahmadi@yahoo.com.

53

}Example

d23 = 11.27

r1=2.5r2=2.5r3=5

Page 54: Example…

For the same three samples (d23 = 11.27; r1 = 2.5, r2 = 2.5, r3 = 5), presenting a new input yields membership grades 0.372, 0.256, and 0.372 for the stored outputs 7, 4, and 9, so

Y = 0.372·7 + 0.256·4 + 0.372·9 = 6.976.

Page 55: Experimental results

Time-series prediction applications: electric load demand forecasting, traffic volume forecasting, and prediction of stock prices, currencies, and interest rates.

We describe the performance of the FC network using two benchmarks with different characteristics: the Henon map time series and the Mackey-Glass time series.

Page 56: Henon map

The one-dimensional Henon map:

$$x(t+1) = 1 - a\,x(t)^2 + b\,x(t-1)$$

Generated points | Training samples | Testing samples | Window size
544              | 500 (out of 504) | 50              | 4

Input X                | Output Y
X(1), X(2), X(3), X(4) | X(5)
X(2), X(3), X(4), X(5) | X(6)

A sketch of the data generation and windowing follows below.
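A short Python sketch of this data preparation, assuming the classic Henon parameters a = 1.4 and b = 0.3 (the slides do not show them) and a simple contiguous train/test split:

```python
import numpy as np

def henon(n, a=1.4, b=0.3):
    """Generate n points of the one-dimensional Henon map
    x(t+1) = 1 - a*x(t)**2 + b*x(t-1), starting from zeros."""
    x = np.zeros(n)
    for t in range(1, n - 1):
        x[t + 1] = 1 - a * x[t] ** 2 + b * x[t - 1]
    return x

def windows(x, w=4):
    """Slide a window of size w: inputs X(t..t+w-1), target X(t+w)."""
    X = np.array([x[t:t + w] for t in range(len(x) - w)])
    y = x[w:]
    return X, y

series = henon(544)                  # 544 generated points, as on the slide
X, y = windows(series, w=4)          # input/target pairs for window size 4
X_train, y_train = X[:500], y[:500]  # 500 training pairs
X_test, y_test = X[500:], y[500:]    # remaining pairs held out for testing
```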

Page 57: Henon map time-series prediction using FC (4-500-1), k = 5

[Figure: predicted versus actual series.]

Page 58: Result

[Table: Henon map time-series prediction using the FC network; SSE = sum-of-squared error.]

Page 59: Mackey-Glass time series

A nonlinear time-delay differential equation originally developed for modeling white blood cell production:

$$\frac{dx}{dt} = \frac{A\,x(t-D)}{1 + x(t-D)^{C}} - B\,x(t)$$

A, B, C are constants and D is the time-delay parameter. A popular case:

A   | B   | C  | D
0.2 | 0.1 | 10 | 30

A sketch of the integration follows below.
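A minimal Euler-integration sketch of this equation for the popular case above (the step size and constant initial history are assumptions, not from the slides):

```python
import numpy as np

def mackey_glass(n, dt=1.0, A=0.2, B=0.1, C=10, D=30, x0=1.2):
    """Euler integration of dx/dt = A*x(t-D)/(1 + x(t-D)**C) - B*x(t)."""
    delay = int(D / dt)
    x = np.full(n + delay, x0)       # constant history x(t) = x0 for t <= 0
    for t in range(delay, n + delay - 1):
        xd = x[t - delay]            # delayed term x(t - D)
        x[t + 1] = x[t] + dt * (A * xd / (1 + xd ** C) - B * x[t])
    return x[delay:]

series = mackey_glass(1000)          # chaotic behavior for the case D = 30
```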

Page 60: Mackey-Glass time-series prediction using FC (4-500-1), k = 5

[Figure: predicted versus actual series.]

Page 61: Performance scalability

The FC network and the RBF network are optimized for a sample size of 500 and a window size of 4; parameters such as the spread constant of the RBF are set to their best values. Then the window and sample sizes are allowed to change without reoptimization.

Page 62: Performance scalability

[Figure: results.]

Page 63: Performance scalability

[Figure: results.]

Page 64: Result

The performance of the FC network remains good and reasonably consistent across all window and sample sizes, whereas the RBF network is adversely affected by changes in the window size, the sample size, or both.

Conclusion: the performance of the RBF network can become erratic for certain combinations of these parameters, while FC is generally applicable to other window and sample sizes.

Page 65: Pattern recognition

A pattern in a 32-by-32 area. Input: the row and column coordinates of the training samples, in [1, 32]. Two output neurons, one for each class: white regions are labeled (1, 0) and black regions (0, 1).

Page 66: Result

Two-class spiral pattern classification.

[Figure: output neurons, input neurons, and training samples.]

Page 67: Result

Four-class spiral pattern classification.

[Figure: output neurons, input neurons, and training samples.]

Page 68: References

[1] S.C. Kak, "On training feedforward neural networks", Pramana - J. of Physics, vol. 40, pp. 35-42, 1993.
[2] G. Mirchandani and W. Cao, "On hidden nodes for neural nets", IEEE Trans. on Circuits and Systems, vol. 36, pp. 661-664, 1989.
[3] S. Kak, "On generalization by neural networks", Information Sciences, vol. 111, pp. 293-302, 1998.
[4] S. Kak, "Better web searches and prediction with instantaneously trained neural networks", IEEE Intelligent Systems, vol. 14(6), pp. 78-81, 1999.
[5] CHAPTER 7, RESULTS AND DISCUSSION.
[6] Bo Shu and Subhash Kak, "A neural network-based intelligent metasearch engine", Information Sciences, vol. 120, pp. 1-11, 1999.

Page 69: References

[7] S. Kak, "A class of instantaneously trained neural networks", Information Sciences, vol. 148, pp. 97-102, 2002.
[8] K.W. Tang and S. Kak, "Fast Classification Networks for Signal Processing", Circuits Systems Signal Processing, vol. 21, pp. 207-224, 2002.
[9] S. Kak, "Three languages of the brain: quantum, reorganizational, and associative", in Learning as Self-Organization, K. Pribram and J. King, eds., Lawrence Erlbaum, Mahwah, N.J., 1996, pp. 185-219.