Forward & Backward selection in hybrid network



Transcript


Forward & Backward selection in hybrid network


Introduction

A training algorithm for a hybrid neural network for regression.

The hybrid neural network has a hidden layer whose units are either RBF units or projection units (perceptrons).
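Concretely, the regression output of such a network can be written as follows (a sketch, assuming the usual linear combination of hidden-unit outputs; m, a_0, a_j and g_j are generic symbols, not taken from the slides):

f(x) = a_0 + \sum_{j=1}^{m} a_j \, g_j(x)

where each hidden unit g_j is either an RBF unit or a projection unit.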


When is it good?


Hidden Units

RBF:

MLP:
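For reference, the standard forms of the two unit types (a sketch assuming a Gaussian RBF unit and a sigmoidal projection unit; the parameters c_j, \sigma_j, w_j, w_{j0} are generic names, not taken from the slides):

RBF:        g_j(x) = \exp\left( -\frac{\lVert x - c_j \rVert^2}{2 \sigma_j^2} \right)

Projection: g_j(x) = \sigma\left( w_j^{\top} x + w_{j0} \right), \qquad \sigma(u) = \frac{1}{1 + e^{-u}}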


Overall algorithm

Divide the input space and assign a unit to each sub-region.

Optimize the parameters.

Prune unnecessary weights using the Bayesian Information Criterion (BIC).
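A rough Python outline of the two phases, offered as a sketch only: the helpers (whole_input_space, sse, split_input_space, select_unit_type, fit_unit, pruning_candidates, prune, bic_score) are hypothetical placeholders, not part of the original description.

def train_hybrid_network(X, t, error_goal, max_units):
    # Forward leg: split the input space and add one hidden unit
    # (RBF or projection) per sub-region.
    regions, units = [whole_input_space(X)], []
    while sse(units, X, t) > error_goal and len(units) < max_units:
        region = split_input_space(regions, X, t)    # CART-like split (hypothetical helper)
        unit_type = select_unit_type(region, X, t)   # RBF vs. projection (hypothetical helper)
        units.append(fit_unit(unit_type, region, X, t))
    # Backward leg: prune weights whose removal raises the BIC-approximated
    # log evidence (higher is better in that form).
    improved = True
    while improved:
        improved = False
        for candidate in pruning_candidates(units):
            if bic_score(prune(units, candidate), X, t) > bic_score(units, X, t):
                units = prune(units, candidate)
                improved = True
    return units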


Forward leg

Divide the input space into sub-regions.

Select the type of hidden unit for each sub-region.

Stop when the error goal is met or the maximum number of units is reached.


Input space division

Splits are chosen as in CART, by the maximum reduction in the error criterion (see below).
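In standard regression CART (assumed here to be the criterion meant, since the formula itself is not reproduced above), a split of region R into R_L and R_R is scored by the reduction in the sum of squared errors:

\Delta = \mathrm{SSE}(R) - \mathrm{SSE}(R_L) - \mathrm{SSE}(R_R), \qquad \mathrm{SSE}(R) = \sum_{x_n \in R} (t_n - \bar{t}_R)^2

where \bar{t}_R is the mean target over the points in R; the split with the largest \Delta is taken.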


Unit type selection (RBF)


Unit type selection (projection)


Unit parameters

RBF unit: center placed at the maximum point.

Projection unit: weights set to the normalized maximum point.
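In symbols, assuming x^* denotes the "maximum point" referred to above (an assumption; its precise definition is not spelled out here):

c = x^*  (RBF center), \qquad w = \frac{x^*}{\lVert x^* \rVert}  (projection weights)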


ML estimate for unit type


Pruning

Target function values corrupted with Gaussian noise
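In the usual notation (a sketch; the symbols are generic), the noise model is

t_n = f(x_n; w) + \epsilon_n, \qquad \epsilon_n \sim \mathcal{N}(0, \sigma^2),

so the likelihood of the training targets is Gaussian, which is what the BIC and evidence computations below build on.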


BIC approximation

Schwarz; Kass and Raftery.
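The Schwarz (BIC) approximation to the log model evidence has the standard form, with k the number of free parameters, N the number of training samples and \hat{\theta} the fitted parameters:

\ln p(D \mid M) \;\approx\; \ln p(D \mid \hat{\theta}, M) - \frac{k}{2} \ln N

Models (or pruned sub-models) with a larger value of this approximation are preferred.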


Evidence for the model


Evidence for unit type (1)


Evidence for unit type (cont. 2)


Evidence for unit type (cont. 3)


Evidence for unit type: algorithm (4)

Initialize α and β.

Loop: compute w and w₀; recompute α and β.

Stop when the change in the evidence is low.
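A minimal runnable sketch of such an evidence-maximization loop, assuming a model that is linear in its output weights with design matrix Phi (hidden-unit activations plus a bias column) and MacKay-style re-estimation of α (prior precision) and β (noise precision); the variable names and update details are illustrative, not taken from the slides.

import numpy as np

def evidence_loop(Phi, t, tol=1e-4, max_iter=100):
    # Phi: N x M design matrix, t: length-N target vector.
    N, M = Phi.shape
    alpha, beta = 1e-2, 1.0                       # initial precision guesses
    old_log_ev = -np.inf
    for _ in range(max_iter):
        # Posterior over the output weights for the current alpha, beta.
        A = alpha * np.eye(M) + beta * Phi.T @ Phi
        w = beta * np.linalg.solve(A, Phi.T @ t)
        resid = np.sum((t - Phi @ w) ** 2)
        # Log evidence for the current hyperparameters (up to a constant).
        _, logdetA = np.linalg.slogdet(A)
        log_ev = 0.5 * (M * np.log(alpha) + N * np.log(beta)
                        - beta * resid - alpha * (w @ w)
                        - logdetA - N * np.log(2 * np.pi))
        if abs(log_ev - old_log_ev) < tol:        # stop when the evidence stops moving
            break
        old_log_ev = log_ev
        # MacKay's re-estimation of the hyperparameters.
        gamma = M - alpha * np.trace(np.linalg.inv(A))   # effective number of parameters
        alpha = gamma / (w @ w)
        beta = (N - gamma) / resid
    return w, alpha, beta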


Pumadyn data set (DELVE archive): dynamics of a Puma robot arm.

Target: angular acceleration of one of the links.

Inputs: various joint angles, velocities and torques.

Large Gaussian noise; the data set is non-linear.

Input dimension: 8 or 32.


Results pumadyn-32nh


Results pumadyn-8nh


Related work

Hassibi et al.: Optimal Brain Surgeon.

MacKay: Bayesian inference of the weights and the regularization parameters.

Jordan and Jacobs: hierarchical mixture of experts (HME), division of the input space.

Schwarz, Kass and Raftery: BIC.


Discussion

Pruning removes 90% of the parameters and reduces the variance of the estimator, but the pruning algorithm is slow.

PRBFN performs better than an MLP or an RBF network alone.

A disadvantage of the Bayesian techniques is the prior distribution parameter.

The Bayesian techniques are better than the likelihood ratio test (LRT).

Unit type selection is a crucial element of PRBFN.

The curse of dimensionality is clearly visible on the pumadyn data sets.
