Worked example (single output neuron, unipolar continuous (logsig) transfer function): inputs x1 = 0.982, x2 = 0.5 and a bias input of 1; weights 2, 4 and -3.93; target d = 1; learning rate η = 0.1.
net = 2·0.982 + 4·0.5 - 3.93·1 = 0.034, o = 1/(1 + exp(-0.034)) ≈ 0.51, so the error is 1 - 0.51 = 0.49.
δ = (d - o)(1 - o) o = (1 - 0.51)(1 - 0.51)(0.51) ≈ 0.1225, and each weight is updated as w(new) = w(old) + η·δ·x:
2 + 0.1·0.1225·0.982 = 2.012, 4 + 0.1·0.1225·0.5 = 4.0061, -3.93 + 0.1·0.1225·1 = -3.9178.
Example 2 (forward pass with the updated weights): net = 2.012·0.982 + 4.0061·0.5 - 3.9178·1 = 0.061, o = 1/(1 + exp(-0.061)) = 0.5152, so the error drops from 0.49 to 1 - 0.5152 = 0.4848.
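These numbers can be checked with a few lines of Python (a minimal sketch; the names logsig, w, x, d and eta are just for illustration, and the printed values differ slightly from the slide because the slide rounds o to 0.51):

import math

def logsig(net):
    # unipolar continuous (logistic) transfer function
    return 1.0 / (1.0 + math.exp(-net))

x = [0.982, 0.5, 1.0]        # inputs, including the bias input of 1
w = [2.0, 4.0, -3.93]        # initial weights
d, eta = 1.0, 0.1            # target and learning rate

net = sum(wi * xi for wi, xi in zip(w, x))            # 0.034
o = logsig(net)                                       # ~0.51
print("error before:", round(d - o, 3))               # ~0.49

delta = (d - o) * (1.0 - o) * o                       # ~0.1225
w = [wi + eta * delta * xi for wi, xi in zip(w, x)]   # ~[2.012, 4.0061, -3.9178]

net = sum(wi * xi for wi, xi in zip(w, x))            # ~0.061
o = logsig(net)                                       # ~0.515
print("error after:", round(d - o, 4))                # ~0.4848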
Transcript
[Figure: network for the worked example, with inputs x1 = 0.982 and x2 = 0.5, a bias input of 1, weights 2, 4 and -3.93, target d = 1, and learning rate 0.1.]
The transfer function is unipolar continuous (logsig)
Output layer. Define:
ai = activation of neuron i
wij = synaptic weight from neuron j to neuron i
xi = excitation of neuron i (sum of weighted activations coming into neuron i, before squashing) = net
di = target vector = ti
oi = output of neuron i
By definition: xi = ∑j wij aj and oi = 1 / (1 + e^(-xi))
Summed, squared error at output layer: E = ½ ∑i (di - oi)²
Δwij = -η ∂E/∂wij   (where η is an arbitrary learning rate)
wij(t+1) = wij(t) + η (ti - oi) (1 - oi) oi aj
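Spelled out, this update follows from applying the chain rule factor by factor (the same decomposition used for the hidden layer below):
∂E/∂wij = (∂E/∂oi) (∂oi/∂xi) (∂xi/∂wij) = -(di - oi) · (1 - oi) oi · aj
so that Δwij = -η ∂E/∂wij = η (di - oi) (1 - oi) oi aj, with di = ti by the definition above.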
Derivation of Backprop
Now we need to compute weight changes in the hidden layer, so, as before, we write out the equation for the error function slope w.r.t. a particular weight leading into the hidden layer:
∂E/∂wij = (∂E/∂ai) (∂ai/∂xi) (∂xi/∂wij)
(where i now corresponds to a unit in the hidden layer and j now corresponds to a unit in the input or earlier hidden layer)
From the previous derivation, the last two terms can simply be written down:
∂ai/∂xi = (1 - ai) ai
∂xi/∂wij = aj
Derivation of Backprop
However, the first term is more difficult to understand for this hidden layer. It is what Minsky called the credit assignment problem, and is what stumped connectionists for two decades. The trick is to realize that the hidden nodes do not themselves make errors, rather they contribute to the errors of the output nodes. So, the derivative of the total error w.r.t. a hidden neuron’s activation is the sum of that hidden neuron’s contributions to the errors in all of the output neurons:
∂E/∂ai = ∑k (∂E/∂ok) (∂ok/∂xk) (∂xk/∂ai)   (where k indexes over all output units)
Here ∂E/∂ok is the contribution of each output neuron's error, ∂ok/∂xk accounts for all inputs to that output neuron (from the hidden layer), and ∂xk/∂ai picks out the contribution of the particular neuron in the hidden layer.
Derivation of Backprop
From our previous derivations, the first two terms are easy:
∂E/∂ok = (ok - dk)
∂ok/∂xk = (1 - ok) ok
For the third term, remember: xk = ∑i wki ai
And since only one member of the sum involves ai:
∂xk/∂ai = wki
Derivation of Backprop
Combining these terms then yields:
∂E/∂ai = - ∑k (dk - ok) (1 - ok) ok wki
where δk = (dk - ok) (1 - ok) ok and wki is the weight between the hidden and output layers.
And combining with previous results yields:
∂E/∂wij = - (∑k δk wki) (1 - ai) ai aj
wij(t+1) = wij(t) + η (∑k δk wki) (1 - ai) ai aj
The factor (∑k δk wki) (1 - ai) ai plays the role of the hidden neuron's own error term δi.
Derivation of Backprop
Forward Propagation of Activity
• Forward Direction layer by layer:
– Inputs applied
– Multiplied by weights
– Summed
– ‘Squashed’ by sigmoid activation function
– Output passed to each neuron in next layer
• Repeat above until network output produced
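A minimal Python sketch of these forward steps (numpy assumed; the layer sizes and random weights below are only illustrative, and biases are omitted for brevity):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weight_matrices):
    # propagate the input layer by layer: multiply by weights, sum, squash, pass on
    activations = [x]
    for W in weight_matrices:
        net = W @ activations[-1]          # weighted sum over the previous layer
        activations.append(sigmoid(net))   # 'squashed' output passed to the next layer
    return activations                     # last entry is the network output

rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)),    # 2 inputs -> 3 hidden units
           rng.standard_normal((1, 3))]    # 3 hidden units -> 1 output
print(forward(np.array([0.5, -0.2]), weights)[-1])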
Back-propagation of error
• Compute error (delta or local gradient) for each output unit
• Layer-by-layer, compute error (delta or local gradient) for each hidden unit by backpropagating errors (as shown previously)
Can then update the weights using the Generalised Delta Rule (GDR), also known as the Back Propagation (BP) algorithm
For an output neuron:
wij(t+1) = wij(t) + η (di - oi) (1 - oi) oi aj
For a hidden neuron i:
wij(t+1) = wij(t) + η (∑k δk wki) (1 - ai) ai aj
where δk = (dk - ok) (1 - ok) ok
The chain rule does the following: it distributes the error of an output unit o to all the hidden units that it is connected to, weighted by the connection. Put differently, a hidden unit h receives a delta from each output unit o equal to the delta of that output unit weighted with (i.e. multiplied by) the weight of the connection between those units.
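In vector form this distribution of error looks like the snippet below (a sketch only; the delta values, weights and hidden activations are made-up illustrative numbers):

import numpy as np

delta_out = np.array([0.12, -0.05])        # deltas of the two output units
W_out = np.array([[0.4, -0.2, 0.7],        # weights from 3 hidden units to output unit 1
                  [0.1,  0.5, -0.3]])      # ... and to output unit 2
a_hidden = np.array([0.9, 0.3, 0.6])       # hidden unit activations

# each hidden unit collects every output delta weighted by its connection,
# then scales by its own sigmoid derivative (1 - a) * a
delta_hidden = (W_out.T @ delta_out) * (1.0 - a_hidden) * a_hidden
print(delta_hidden)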
Algorithm (Backpropagation)
Start with random weights
while error is unsatisfactory do
  for each input pattern
    compute hidden node input (net)
    compute hidden node output (o)
    compute input to output node (net)
    compute network output (o)
    modify outer layer weights:
      wij(t+1) = wij(t) + η (di - oi) (1 - oi) oi aj
    modify hidden layer weights:
      wij(t+1) = wij(t) + η (∑k δk wki) (1 - ai) ai aj,  where δk = (dk - ok) (1 - ok) ok
  end
end
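A compact Python sketch of this loop for one hidden layer (numpy; the toy data, network size, learning rate and fixed epoch count are assumptions for illustration, with biases handled by appending a constant 1):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, D, n_hidden=4, eta=0.5, epochs=5000, seed=1):
    rng = np.random.default_rng(seed)
    W_h = rng.uniform(-1, 1, (n_hidden, X.shape[1] + 1))   # input -> hidden (+ bias)
    W_o = rng.uniform(-1, 1, (D.shape[1], n_hidden + 1))   # hidden -> output (+ bias)
    for _ in range(epochs):                                # "while error is unsatisfactory"
        for x, d in zip(X, D):                             # for each input pattern
            x_b = np.append(x, 1.0)                        # input with bias term
            a_h = np.append(sigmoid(W_h @ x_b), 1.0)       # hidden outputs with bias term
            o = sigmoid(W_o @ a_h)                         # network output
            delta_o = (d - o) * (1.0 - o) * o              # output deltas
            delta_h = (W_o[:, :-1].T @ delta_o) * (1.0 - a_h[:-1]) * a_h[:-1]
            W_o += eta * np.outer(delta_o, a_h)            # modify outer layer weights
            W_h += eta * np.outer(delta_h, x_b)            # modify hidden layer weights
    return W_h, W_o

# toy problem: XOR (may need more epochs or another seed to converge fully)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)
W_h, W_o = train(X, D)
for x in X:
    a_h = np.append(sigmoid(W_h @ np.append(x, 1.0)), 1.0)
    print(x, sigmoid(W_o @ a_h))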
Worked example: the forward pass gives hidden outputs o3 = 0.982 and o4 = 0.5 and network output o5 = 0.510 for target d = 1, so the error for this training example is (1 - 0.510) = 0.490.
Backpropagating the error (η = 0.1):
δ5 = (d5 - o5) (1 - o5) o5 = (1 - .51)(1 - .51)(.51) ≈ 0.1225
δ4 = w54 δ5 (1 - o4) o4 = 4 · 0.1225 · (1 - 0.5) · 0.5 ≈ 0.1225
δ3 = w53 δ5 (1 - o3) o3 = 2 · 0.1225 · (1 - 0.982) · 0.982 ≈ 0.0043
Each weight is then updated as w(new) = w(old) + η · δ · a, where a is the activation feeding that weight:
w50: -3.92 + 0.1 · 0.1225 · 1 = -3.9078
w53: 2 + 0.1 · 0.1225 · 0.982 = 2.012
w54: 4 + 0.1 · 0.1225 · 0.5 = 4.0061
w41: 6 + 0.1 · 0.1225 · 1 = 6.01225
w31: 3 + 0.1 · 0.0043 · 1 ≈ 3.0004
Verification that it works
Thus the new error, (1 - 0.5239) = 0.476, has been reduced by 0.014 (from 0.490 to 0.476).
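For completeness, this example can be reproduced with the short script below. The hidden-layer bias weights are not given in the transcript, so the values w30 = 1 and w40 = -6 here are an assumption, chosen only so that o3 = 0.982 and o4 = 0.5 as in the example (with a single input x = 1); the remaining numbers follow the example.

import math

def logsig(x):
    return 1.0 / (1.0 + math.exp(-x))

eta, d, x = 0.1, 1.0, 1.0                 # learning rate, target, single input
w31, w30 = 3.0, 1.0                       # hidden neuron 3 (w30 is an assumed bias)
w41, w40 = 6.0, -6.0                      # hidden neuron 4 (w40 is an assumed bias)
w53, w54, w50 = 2.0, 4.0, -3.92           # output neuron 5

# forward pass
o3 = logsig(w31 * x + w30)                # ~0.982
o4 = logsig(w41 * x + w40)                # 0.5
o5 = logsig(w53 * o3 + w54 * o4 + w50)    # ~0.51
print("error before:", round(d - o5, 3))  # ~0.49

# deltas
d5 = (d - o5) * (1 - o5) * o5             # ~0.1225
d4 = w54 * d5 * (1 - o4) * o4             # ~0.1225
d3 = w53 * d5 * (1 - o3) * o3             # ~0.0043

# weight updates: w_new = w_old + eta * delta * input
w53 += eta * d5 * o3
w54 += eta * d5 * o4
w50 += eta * d5 * 1.0
w31 += eta * d3 * x
w30 += eta * d3 * 1.0
w41 += eta * d4 * x
w40 += eta * d4 * 1.0

# verification with the updated weights
o3 = logsig(w31 * x + w30)
o4 = logsig(w41 * x + w40)
o5 = logsig(w53 * o3 + w54 * o4 + w50)    # ~0.524
print("error after:", round(d - o5, 3))   # ~0.476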
Update the weights of the multi-layer network using the backpropagation algorithm. The transfer functions of the neurons are unipolar sigmoid functions. Target outputs are y2* = 1 and y3* = 0.5, and the learning rate is 0.5. Show that with the updated weights there is a reduction in the total error.