RECENT DEVELOPMENTS IN MULTILAYER PERCEPTRON NEURAL NETWORKS

Walter H. Delashmit
Lockheed Martin Missiles and Fire Control
Dallas, TX 75265
walter.delashmit@lmco.com

Michael T. Manry
The University of Texas at Arlington
Arlington, TX 76010

Memphis Area Engineering and Science Conference 2005
May 11, 2005
Transcript
Page 1:

RECENT DEVELOPMENTS IN MULTILAYER PERCEPTRON NEURAL NETWORKS

Walter H. Delashmit

Lockheed Martin Missiles and Fire Control

Dallas, TX 75265

walter.delashmit@lmco.com

[email protected]

Michael T. Manry

The University of Texas at Arlington

Arlington, TX 76010

[email protected]

Memphis Area Engineering and Science Conference 2005

May 11, 2005

Page 2:

Outline of Presentation

• Review of Multilayer Perceptron Neural Networks

• Network Initial Types and Training Problems

• Common Starting Point Initialized Networks

• Dependently Initialized Networks

• Separating Mean Processing

• Summary

Page 3:

Review of Multilayer Perceptron Neural Networks

Page 4:

Typical 3 Layer MLP

[Figure: a three-layer MLP. Input layer x_p(1), x_p(2), ..., x_p(N); hidden layer with net functions net_p(1), ..., net_p(N_h) and activations O_p(1), ..., O_p(N_h); output layer y_p(1), ..., y_p(M). Weights w_hi(j,i), from w_hi(1,1) through w_hi(N_h,N), connect inputs to hidden units, and weights w_oh(i,j), from w_oh(1,1) through w_oh(M,N_h), connect hidden units to outputs.]

Page 5:

MLP Performance Equations

Mean Square Error (MSE):

$$E = \frac{1}{N_v}\sum_{p=1}^{N_v} E_p = \frac{1}{N_v}\sum_{p=1}^{N_v}\sum_{i=1}^{M}\left[t_p(i)-y_p(i)\right]^2$$

Output:

$$y_p(i) = \sum_{k=1}^{N+1} w_{oi}(i,k)\,x_p(k) + \sum_{j=1}^{N_h} w_{oh}(i,j)\,O_p(j)$$

Net Function:

$$net_p(j) = \sum_{k=1}^{N+1} w_{hi}(j,k)\,x_p(k), \qquad O_p(j) = f\big(net_p(j)\big) = \frac{1}{1+e^{-net_p(j)}}$$
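These equations translate directly into code. Below is a minimal sketch of the forward pass and MSE, assuming numpy and hypothetical weight shapes (w_hi: N_h × (N+1), w_oi: M × (N+1), w_oh: M × N_h), with each input vector carrying a trailing 1 as the threshold input.

```python
import numpy as np

def forward(x_p, w_hi, w_oi, w_oh):
    net = w_hi @ x_p                    # net_p(j) for j = 1..N_h
    O = 1.0 / (1.0 + np.exp(-net))      # sigmoid activations O_p(j)
    return w_oi @ x_p + w_oh @ O        # y_p(i): linear term + hidden term

def mse(X, T, w_hi, w_oi, w_oh):
    # E = (1/N_v) * sum_p sum_i [t_p(i) - y_p(i)]^2
    Y = np.stack([forward(x_p, w_hi, w_oi, w_oh) for x_p in X])
    return np.sum((T - Y) ** 2) / len(X)
```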

Page 6:

Net Control

Scales and shifts all net functions so that they do not generate small gradients, and so that large inputs do not mask the potential effects of small inputs:

$$w_{hi}(j,i) \leftarrow w_{hi}(j,i)\,\frac{\sigma_{hd}}{\sigma_h(j)}$$

$$w_{hi}(j,N+1) \leftarrow w_{hi}(j,N+1)\,\frac{\sigma_{hd}}{\sigma_h(j)} + m_{hd} - m_h(j)\,\frac{\sigma_{hd}}{\sigma_h(j)}$$

where m_h(j) and σ_h(j) are the mean and standard deviation of net_p(j) over the training set, and m_hd and σ_hd are their desired values.
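A hedged sketch of how this could be implemented, assuming the reconstruction above is correct; the desired mean and standard deviation (0.5 and 1.0 here) are illustrative placeholders, not values from the source.

```python
import numpy as np

def net_control(w_hi, X, m_hd=0.5, sigma_hd=1.0):
    # X: training inputs with a trailing column of ones (threshold input)
    nets = X @ w_hi.T                  # net_p(j) for every pattern p
    m_h = nets.mean(axis=0)            # per-unit mean of the net functions
    s_h = nets.std(axis=0)             # per-unit standard deviation
    scale = sigma_hd / s_h
    w_hi = w_hi * scale[:, None]       # scale all weights, threshold included
    w_hi[:, -1] += m_hd - m_h * scale  # shift threshold to the desired mean
    return w_hi
```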

Page 7:

Neural Network Training Algorithms

• Backpropagation Training

• Output Weight Optimization – Hidden Weight Optimization (OWO-HWO)

• Full Conjugate Gradient

Page 8:

Output Weight Optimization – Hidden Weight Optimization (OWO-HWO)

• Used in this development

• In OWO, linear equations are solved for the output weights (sketched after this list)

• In HWO, separate error functions for each hidden unit are used, and multiple sets of linear equations are solved to determine the weights connecting to the hidden units
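Because the output is linear in the basis functions [x_p, O_p], the OWO half-step reduces to a linear least-squares problem. A minimal sketch, assuming numpy; np.linalg.lstsq is a stand-in for solving the equivalent normal equations.

```python
import numpy as np

def owo(X, T, w_hi):
    # X: inputs with trailing ones column; T: desired outputs, shape (N_v, M)
    O = 1.0 / (1.0 + np.exp(-(X @ w_hi.T)))   # hidden unit activations
    A = np.hstack([X, O])                      # basis functions per pattern
    W, *_ = np.linalg.lstsq(A, T, rcond=None)  # solve for all output weights
    return W.T                                 # rows hold [w_oi | w_oh] per output
```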

Page 9:

Network Initial Types and Training Problems

Page 10:

Problem Definition

• Assume that a set of MLPs of different sizes is to be designed for a given training data set

• Let S_{N_h} be the set of all MLPs for that training data having N_h hidden units, and let E_int(N_h) denote the corresponding training error of an initial network that belongs to S_{N_h}

• Let E_f(N_h) denote the corresponding training error of a well-trained network

• Let N_hmax denote the maximum number of hidden units for which networks are to be designed

• Goal: Choose a set of initial networks from {S_0, S_1, S_2, ..., S_{N_hmax}} such that E_int(0) ≥ E_int(1) ≥ E_int(2) ≥ ... ≥ E_int(N_hmax), and train the networks to minimize E_f(N_h) such that E_f(0) ≥ E_f(1) ≥ E_f(2) ≥ ... ≥ E_f(N_hmax)

• Axiom 3.1: If E_f(N_h) ≥ E_f(N_h − 1), then the network having N_h hidden units is useless, since the training resulted in a larger, more complex network with a larger or the same training error

Page 11:

Network Design Methodologies

• Design Methodology One (DM-1) – A well-organized researcher may design a set of different size networks in an orderly fashion, each with one or more hidden units than the previous network
  o Thorough design approach
  o May take a longer time to design
  o Allows a trade-off between network performance and size

• Design Methodology Two (DM-2) – A researcher may design different size networks in no particular order
  o May be quickly pursued for only a few networks
  o Possible that the design could be significantly improved with a bit more attention to network design

Page 12:

Three Types of Networks Defined

• Randomly Initialized (RI) Networks – No members of this set of networks have any initial weights or thresholds in common. In practice, this means that the initial random number seeds (IRNS) are widely separated. Useful when the goal is to quickly design one or more networks of the same or different sizes whose weights are statistically independent of each other. Can be designed using DM-1 or DM-2.

• Common Starting Points Initialized (CSPI) Networks – When a set of networks is CSPI, each one starts with the same IRNS. These networks are useful for performance comparisons of networks that start from the same IRNS. Can be designed using DM-1 or DM-2 (see the seeding sketch after this list).

• Dependently Initialized (DI) Networks – A series of networks is designed, with each subsequent network having one or more hidden units than the previous network. Larger networks are initialized using the final weights and thresholds from training a smaller network for the values of the common weights and thresholds. DI networks are useful when the goal is a thorough analysis of network performance versus size, and are most relevant to design using DM-1.
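An illustrative sketch of the two seeding conventions (function names and seed values are hypothetical): RI networks draw from widely separated IRNS, while CSPI networks of different sizes share a single IRNS.

```python
import numpy as np

def init_weights(irns, n_hidden, n_inputs):
    rng = np.random.default_rng(irns)
    return rng.normal(size=(n_hidden, n_inputs + 1))  # includes threshold

# RI: widely separated seeds -> statistically independent initial weights
ri_nets = [init_weights(seed, 8, 10) for seed in (1, 10_000, 20_000)]

# CSPI: different sizes, same IRNS -> common starting point
cspi_nets = [init_weights(42, nh, 10) for nh in (4, 8, 12)]
```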

Page 13:

Network Properties

• Theorem 3.1: If two initial RI networks (1) are the same size, (2) have the same training data set and (3) the training data set has more than one unique input vector, then the hidden unit basis functions are different for the two networks.

• Theorem 3.2: If two CSPI networks (1) are the same size and (2) use the same algorithm for processing random numbers into weights, then they are identical.

• Corollary 3.2: If two initial CSPI networks are the same size and use the same algorithm for processing random numbers into weights, then they have all common basis functions.

Page 14:

Problems with MLP Training

• Non-monotonic Ef(Nh)

• No standard way to initialize and train additional hidden units

• Net control parameters are arbitrary

• No procedure to initialize and train DI networks

• Network linear and nonlinear component interference

Page 15:

Mapping Error Examples

[Four plots: mapping error versus number of hidden units (3–12) for a single seed; mean square error and median error versus number of hidden units; minimum error versus number of hidden units; and minimum error versus seed number.]

Page 16:

Tasks Performed in this Research

• Analysis of RI networks

• Improved initialization in CSPI networks

• Improved initialization of new hidden units in DI networks

• Analysis of separating mean training approaches

Page 17:

CSPI and CSPI-SWI Networks

• Improvement to RI networks
  o Each CSPI network starts with the same IRNS

• Extended to CSPI-SWI (Structured Weight Initialization) networks (see the sketch after this list)
  o Every hidden unit of the larger network has the same initial weights and threshold values as the corresponding units of the smaller network
  o Input-to-output weights and thresholds are also identical

• Theorem 5.1: If two CSPI networks are designed with structured weight initialization, the common subset of the hidden unit basis functions is identical.

• Corollary 5.1: If two CSPI networks are designed using structured weight initialization, the only initial basis functions that are not the same are the hidden unit basis functions for the additional hidden units in the larger network.

• A detailed flow chart for CSPI-SWI initialization is given in the dissertation
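A hedged sketch of structured weight initialization, assuming a numpy generator: drawing hidden-unit weights one unit at a time from a common IRNS makes the leading rows identical for any network size, which is the shared-basis-function property of Theorem 5.1.

```python
import numpy as np

def swi_init(irns, n_hidden, n_inputs):
    rng = np.random.default_rng(irns)
    # Draw unit by unit so the first rows match for any network size
    return np.stack([rng.normal(size=n_inputs + 1) for _ in range(n_hidden)])

small = swi_init(42, 4, 10)
large = swi_init(42, 8, 10)
assert np.allclose(large[:4], small)  # common hidden units share weights
```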

Page 18:

CSPI-SWI Examples

[Two plots of average error E_av(N_h) versus N_h comparing CSPI-SWI and RI networks, for the data sets fm and twod.]

Page 19:

DI Network Development and Evaluation

• Improvement over RI, CSPI and CSPI-SWI networks

• The common subset of the initial weights and thresholds for the larger network is initialized with the final weights and thresholds from a previously well-trained smaller network

• Designed with DM-1

• Single-network designs are implementable

• After training, testing is feasible on a different data set

Page 20:

Basic DI Network Flowgraph

1. Create an initial network with N_h hidden units
2. Train this initial network
3. N_h ← N_h + p
4. If N_h > N_hmax, stop
5. Initialize the new hidden units, N_h − p + 1 ≤ j ≤ N_h:
   o w_oh(k,j) ← 0, 1 ≤ k ≤ M
   o w_hi(j,i) ← RN(ind+), 1 ≤ i ≤ N + 1
   o Apply net control to w_hi(j,i), 1 ≤ i ≤ N + 1
6. Train the new network and return to step 3
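A minimal sketch of the growth step in the flowgraph, assuming hypothetical surrounding train() and net_control() routines: new hidden units receive random input weights (with net control) and zero output weights.

```python
import numpy as np

def grow_di(w_hi, w_oh, p, rng):
    M, n_plus_1 = w_oh.shape[0], w_hi.shape[1]
    new_hi = rng.normal(size=(p, n_plus_1))     # w_hi(j,i) <- RN(ind+)
    # net control would be applied to new_hi here (see the earlier slide)
    w_hi = np.vstack([w_hi, new_hi])            # append p new hidden units
    w_oh = np.hstack([w_oh, np.zeros((M, p))])  # w_oh(k,j) <- 0
    return w_hi, w_oh
```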

Page 21:

Properties of DI Networks

• E_int(N_h) < E_int(N_h − p)

• The E_f(N_h) curve is monotonic non-increasing (i.e., E_f(N_h) ≤ E_f(N_h − p))

• E_int(N_h) = E_f(N_h − p)
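The third property follows because the p new hidden units enter the output sums with zero weights, so every output, and hence the initial error, is unchanged from the smaller trained network:

$$y_p(i) = \sum_{k=1}^{N+1} w_{oi}(i,k)\,x_p(k) + \sum_{j=1}^{N_h-p} w_{oh}(i,j)\,O_p(j) + \sum_{j=N_h-p+1}^{N_h} 0 \cdot O_p(j) \;\;\Rightarrow\;\; E_{int}(N_h) = E_f(N_h-p)$$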

Page 22:

Performance Results for DI Networks with Fixed Iterations

[Four plots of training and testing error E_f(N_h) versus N_h (3–12), one for each of the data sets fm, twod, F24 and F17.]

Page 23:

RI Network and DI Network Comparison

(1) DI network: standard DI network design for N_h hidden units

(2) RI type 1: RI networks were designed using a single network for each value of N_h, and every network of size N_h was trained using the same value of N_iter as the corresponding DI network

(3) RI type 2: RI networks were designed using a single network for each value of N_h, and every network was trained using the total number of iterations used for the entire sequence of DI networks, i.e.

$$N_{iter} = \sum_{j=1}^{N_{hmax}} N_{iter}(j)$$

This results in the RI type 2 network actually having a larger value of N_iter than the DI network.

Page 24:

RI Network and DI Network Comparison Results

[Two plots of E_f(N_h) versus N_h (5–12) comparing the DI network, RI type 1 and RI type 2 networks, for the data sets fm and twod.]

Page 25:

Separating Mean Processing Techniques

• Bottom-Up Separating Mean

• Top-Down Separating Mean

Page 26:

Bottom-Up Separating Mean

Basic Idea:

• A linear mapping is removed from the training data.

• The nonlinear fit to the resulting data may perform better.

Steps:

1. Generate the linear mapping results t̂_p from t_p
2. Generate the new desired output vector t_p − t̂_p
3. Train the MLP using the new data (x_p, t_p − t̂_p)

The error becomes

$$E = \frac{1}{N_v}\sum_{p=1}^{N_v} E_p = \frac{1}{N_v}\sum_{p=1}^{N_v}\sum_{i=1}^{M}\left[t_p(i) - \hat{t}_p(i) - y_p(i)\right]^2$$
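A hedged sketch of the bottom-up procedure, assuming numpy; train_mlp is a hypothetical stand-in for the OWO-HWO training used in this work.

```python
import numpy as np

def bottom_up_separating_mean(X, T, train_mlp):
    A = np.hstack([X, np.ones((len(X), 1))])   # inputs plus constant term
    W, *_ = np.linalg.lstsq(A, T, rcond=None)  # linear mapping t_hat_p
    T_hat = A @ W
    mlp = train_mlp(X, T - T_hat)              # nonlinear fit to the residual
    return W, mlp                              # final output adds t_hat back
```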

Page 27:

Bottom-up Separating Mean Results

[Three plots of E_f(N_h) versus N_h (3–12) comparing baseline and separating mean networks, for the data sets fm, power12 and single2.]

Page 28:

Top-Down Separating Mean

Basic Idea:

• If we know which subsets of inputs and outputs have the same means (as in Signal Models 2 and 3), we can estimate and remove these means.

• Network performance is then more robust.

Steps:

1. Determine the input and output subsets with similar means
2. Remove the means from the corresponding input and output subsets
3. Train the MLP using the modified inputs and outputs
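An illustrative sketch of the mean-removal step, assuming the similar-mean groups have already been identified (the grouping technique itself is the contribution noted in the conclusions); the group format is an assumption.

```python
import numpy as np

def remove_group_means(X, T, groups):
    # groups: list of (input_columns, output_columns) with similar means
    X, T = X.copy(), T.copy()
    for in_cols, out_cols in groups:
        m = np.concatenate([X[:, in_cols].ravel(),
                            T[:, out_cols].ravel()]).mean()
        X[:, in_cols] -= m                 # remove the shared mean estimate
        T[:, out_cols] -= m
    return X, T
```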

Page 29:

Separating Mean Results

[Plot of E_f(N_h) versus N_h (3–12) for the data set power12, comparing bottom-up separating mean, top-down separating mean and the baseline.]

Page 30:

Conclusions

• On average, CSPI-SWI networks have MSE versus N_h curves that are closer to monotonic non-increasing than those of RI networks

• MSE versus N_h curves are always monotonic non-increasing for DI networks

• DI network training was improved by calculating the number of training iterations and limiting the amount of training applied to previously trained units

• DI networks always produce more consistent MSE versus N_h curves than RI, CSPI and CSPI-SWI networks

• Separating mean processing, in both its bottom-up and top-down architectures, often produces improved performance

• A new technique was developed to determine which inputs and outputs are similar, for use in top-down separating mean processing