CSCE 970 Lecture 2: Artificial Neural Networks and Deep Learning
Stephen Scott and Vinod Variyam
(Adapted from Ethem Alpaydin and Tom Mitchell)
[email protected]
Introduction

Deep learning is based on artificial neural networks
Consider humans:
Total number of neurons ≈ 10^10
Neuron switching time ≈ 10^-3 second (vs. 10^-10)
Connections per neuron ≈ 10^4–10^5
Scene recognition time ≈ 0.1 second
100 inference steps doesn't seem like enough ⇒ massive parallel computation
Properties of artificial neural nets (ANNs):
Many "neuron-like" switching units
Many weighted interconnections among units
Highly parallel, distributed process
Emphasis on tuning weights automatically
Strong differences between ANNs for ML and ANNs for biological modeling
History of ANNs
The Beginning: Linear units and the Perceptron algorithm (1940s–1950s)
Spoiler Alert: stagnated because of inability to handle data that is not linearly separable
Researchers were aware of the usefulness of multi-layer networks, but could not train them
The Comeback: Training of multi-layer networks with Backpropagation (1980s)
Many applications, but in the 1990s replaced by large-margin approaches such as support vector machines and boosting
History of ANNs (cont'd)
The Resurgence: Deep architectures (2000s)
Better hardware and software support allow for deep (> 5–8 layers) networks
Still use Backpropagation, but larger datasets, algorithmic improvements (new loss and activation functions), and deeper networks improve performance considerably
Very impressive applications, e.g., captioning images
The Problem: Skynet (TBD)
Sorry.
When to Consider ANNs
Input is high-dimensional discrete- or real-valued (e.g., raw sensor input)
Output is discrete- or real-valued
Output is a vector of values
Possibly noisy data
Form of target function is unknown
Human readability of result is unimportant
Long training times acceptable
Outline
Basic units
  Linear unit
  Linear threshold units
  Perceptron training rule
Nonlinearly separable problems and multilayer networks
Backpropagation
Putting everything together
Linear Unit
[Diagram: a linear unit with inputs x_1, ..., x_n weighted by w_1, ..., w_n, plus a constant input x_0 = 1 weighted by w_0, all feeding a summation node that computes Σ_{i=0}^n w_i x_i]
y = f(x; w, b) = x^T w + b = w_1 x_1 + · · · + w_n x_n + b

If we set w_0 = b, we can simplify the above
Forms the basis for many other activation functions
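As a minimal NumPy sketch (not from the slides) of the linear unit:

```python
import numpy as np

def linear_unit(x, w, b):
    """Linear unit: f(x; w, b) = x^T w + b."""
    return np.dot(x, w) + b

# Example: 3 inputs with illustrative weights and a bias
x = np.array([1.0, 2.0, -1.0])
w = np.array([0.5, -0.25, 1.0])
b = 0.1
print(linear_unit(x, w, b))  # 0.5 - 0.5 - 1.0 + 0.1 = -0.9
```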
Linear Threshold Unit
[Diagram: the same unit as above, but the weighted sum Σ_{i=0}^n w_i x_i now feeds a threshold: o = 1 if the sum > 0, −1 otherwise]
y = o(x; w, b) = +1 if f(x; w, b) > 0, −1 otherwise

(sometimes 0 is used instead of −1)
Linear Threshold Unit: Decision Surface
[Figure: (a) a linearly separable arrangement of + and − points in the (x_1, x_2) plane, split by a line; (b) an XOR-like arrangement of + and − points that no single line can separate]
Represents some useful functions
What parameters (w, b) represent g(x_1, x_2; w, b) = AND(x_1, x_2)?
But some functions are not representable, i.e., those not linearly separable
Therefore, we'll want networks of units
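One answer to the AND question, sketched with the common convention of inputs in {0, 1} (the weights below are an illustrative choice, not from the slides):

```python
import numpy as np

def ltu(x, w, b):
    """Linear threshold unit: +1 if x^T w + b > 0, else -1."""
    return 1 if np.dot(x, w) + b > 0 else -1

# AND(x1, x2) with inputs in {0, 1}: only (1, 1) exceeds the threshold
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, ltu(np.array(x), w, b))
# (0,0) -> -1, (0,1) -> -1, (1,0) -> -1, (1,1) -> +1
```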
Perceptron Training Rule
w_j^{t+1} ← w_j^t + Δw_j^t, where Δw_j^t = η (y^t − ŷ^t) x_j^t

and
y^t is the label of training instance t
ŷ^t is the Perceptron output on training instance t
η is a small constant (e.g., 0.1) called the learning rate

I.e., if (y^t − ŷ^t) > 0 then increase w_j^t w.r.t. x_j^t, else decrease

Can prove the rule will converge if the training data is linearly separable and η is sufficiently small
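A minimal sketch of the rule (assuming ±1 labels and folding the bias into a constant input x_0 = 1):

```python
import numpy as np

def perceptron_train(X, y, eta=0.1, epochs=100):
    """Perceptron rule: w_j += eta * (y_t - yhat_t) * x_tj.
    X: (m, n) inputs; y: (m,) labels in {-1, +1}."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend x_0 = 1 for the bias
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_t, y_t in zip(X, y):
            yhat_t = 1 if np.dot(w, x_t) > 0 else -1
            w += eta * (y_t - yhat_t) * x_t  # zero update when prediction correct
    return w

# Linearly separable toy data: OR-like concept with +/-1 labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, 1])
print(perceptron_train(X, y))
```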
Where Does the Training Rule Come From?
Recall the initial linear unit, where output ŷ^t = f(x; w, b) = x^T w + b (i.e., no threshold)
For each training example, compromise between correctiveness and conservativeness
Correctiveness: tendency to improve on x^t (reduce loss)
Conservativeness: tendency to keep w^{t+1} close to w^t (minimize distance)
Use a cost function that measures both
Nonlinearly Separable Problems

By adding up to 2 hidden layers of linear threshold units, can represent any union of intersections of halfspaces
[Figure: positive and negative regions of the plane formed as a union of intersections of halfspaces]
First hidden layer defines halfspaces, second hidden layer takes intersection (AND), output layer takes union (OR)
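A hand-wired sketch of that construction (weights chosen for illustration, not from the slides): the first layer tests halfspaces, the second ANDs them into a region, the output ORs regions:

```python
import numpy as np

def step(z):
    """0/1 linear threshold units applied elementwise."""
    return np.where(z > 0, 1.0, 0.0)

def two_layer_ltu_net(x, W1, b1, W2, b2, w3, b3):
    h1 = step(W1 @ x + b1)      # layer 1: which halfspaces contain x?
    h2 = step(W2 @ h1 + b2)     # layer 2: AND of halfspaces -> regions
    return step(w3 @ h2 + b3)   # output: OR of regions

# One region: the unit square (0,1)^2, i.e., the AND of 4 halfspaces
W1 = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b1 = np.array([0., 1., 0., 1.])              # x1 > 0, x1 < 1, x2 > 0, x2 < 1
W2 = np.ones((1, 4)); b2 = np.array([-3.5])  # fires only when all 4 fire
w3 = np.ones(1); b3 = np.array([-0.5])       # OR over the (single) region
print(two_layer_ltu_net(np.array([0.5, 0.5]), W1, b1, W2, b2, w3, b3))  # [1.]
print(two_layer_ltu_net(np.array([1.5, 0.5]), W1, b1, W2, b2, w3, b3))  # [0.]
```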
The Sigmoid Unit
[Rarely used in deep ANNs, but continuous and differentiable]

[Diagram: the same unit, where the weighted sum net = Σ_{i=0}^n w_i x_i feeds o = σ(net) = 1/(1 + e^{−net}) = f(x; w, b)]
σ(net) is the logistic function

σ(net) = 1/(1 + e^{−net})

(a type of sigmoid function)
Squashes net into the (0, 1) range
Nice property: dσ(x)/dx = σ(x)(1 − σ(x))
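A quick numerical check of that derivative identity, as a sketch using a central finite difference:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# d(sigma)/dz = sigma(z) * (1 - sigma(z)); verify at z = 0.7
z, eps = 0.7, 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
analytic = sigmoid(z) * (1 - sigmoid(z))
print(numeric, analytic)  # both ~0.2217
```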
Sigmoid Unit: Gradient Descent
Again, use squared loss for correctiveness:

E(w^t) = (1/2)(y^t − ŷ^t)^2

(folding 1/2 of correctiveness into loss func)

Thus

∂E/∂w_j^t = ∂/∂w_j^t [ (1/2)(y^t − ŷ^t)^2 ]
          = (1/2) · 2 (y^t − ŷ^t) · ∂/∂w_j^t (y^t − ŷ^t)
          = (y^t − ŷ^t) ( −∂ŷ^t/∂w_j^t )
Sigmoid Unit: Gradient Descent (cont'd)
Since ŷ^t is a function of net^t = f(x; w, b) = w^t · x^t,

∂E/∂w_j^t = −(y^t − ŷ^t) (∂ŷ^t/∂net^t)(∂net^t/∂w_j^t)
          = −(y^t − ŷ^t) (∂σ(net^t)/∂net^t)(∂net^t/∂w_j^t)
          = −(y^t − ŷ^t) ŷ^t (1 − ŷ^t) x_j^t

Update rule:

w_j^{t+1} = w_j^t + η ŷ^t (1 − ŷ^t)(y^t − ŷ^t) x_j^t
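The resulting per-example update as a minimal sketch (the bias is handled as the weight on a constant input x_0 = 1):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(w, b, x_t, y_t, eta=0.1):
    """One update of a sigmoid unit on example (x_t, y_t) under squared loss."""
    yhat = sigmoid(np.dot(w, x_t) + b)
    grad_factor = yhat * (1 - yhat) * (y_t - yhat)  # yhat(1-yhat)(y - yhat)
    w = w + eta * grad_factor * x_t
    b = b + eta * grad_factor   # bias behaves like a weight on x_0 = 1
    return w, b

w, b = np.zeros(2), 0.0
w, b = sgd_step(w, b, np.array([1.0, 2.0]), 1.0)
print(w, b)  # a small step toward predicting 1 on this example
```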
Multilayer Networks
[Diagram: a feedforward network with input layer x_0 = 1, x_1, ..., x_n, a hidden layer of sigmoid units n+1 and n+2, and an output layer of sigmoid units n+3 and n+4 producing ŷ_{n+3} and ŷ_{n+4}; notation: x_{ji} = input from i to j, w_{ji} = weight from i to j]
For now, using sigmoid units

E^t = E(w^t) = (1/2) Σ_{k ∈ outputs} (y_k^t − ŷ_k^t)^2
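A minimal forward-pass sketch with hypothetical shapes (3 inputs, 2 sigmoid hidden units, 2 sigmoid outputs), computing E^t as defined above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    h = sigmoid(W1 @ x + b1)     # hidden layer activations
    yhat = sigmoid(W2 @ h + b2)  # output layer activations
    return h, yhat

def loss(y, yhat):
    return 0.5 * np.sum((y - yhat) ** 2)  # E^t = (1/2) sum_k (y_k - yhat_k)^2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(2)  # 3 inputs -> 2 hidden units
W2, b2 = rng.normal(size=(2, 2)), np.zeros(2)  # 2 hidden -> 2 outputs
_, yhat = forward(np.array([1.0, 0.5, -0.5]), W1, b1, W2, b2)
print(loss(np.array([1.0, 0.0]), yhat))
```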
Training Multilayer Networks: Output Units
Adjust weight w_{ji}^t according to E^t as before
For output units, this is easy, since the contribution of w_{ji}^t to E^t when j is an output unit is the same as for the single-neuron case (all other outputs are constants w.r.t. w_{ji}^t), i.e.,

∂E^t/∂w_{ji}^t = −(y_j^t − ŷ_j^t) ŷ_j^t (1 − ŷ_j^t) x_{ji}^t = −δ_j^t x_{ji}^t

where δ_j^t = −∂E^t/∂net_j^t = error term of unit j
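As a sketch of that gradient for a single sigmoid output unit (values illustrative):

```python
def output_delta(y_j, yhat_j):
    """delta_j = (y_j - yhat_j) * yhat_j * (1 - yhat_j) for a sigmoid output unit."""
    return (y_j - yhat_j) * yhat_j * (1 - yhat_j)

delta = output_delta(1.0, 0.8)  # 0.2 * 0.8 * 0.2 = 0.032
x_ji = 0.5
print(delta, -delta * x_ji)     # error term, and dE/dw_ji = -delta_j * x_ji
```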
Training Multilayer Networks: Hidden Units
How can we compute the error term for hidden layers when there is no target output r^t for these layers?
Instead, propagate error values back from the output layer toward the input layer, scaling by the weights
Scaling by the weights characterizes how much of the error term each hidden unit is "responsible for"
Types of Output Units

Minimizing square loss with a linear output unit maximizes log likelihood when labels come from a normal distribution
I.e., find the set of parameters θ that is most likely to generate the labels of the training data
Works well with GD training

Sigmoid (Sec 6.2.2.2): ŷ = σ(w^T h + b)
Approximates the non-differentiable threshold function
More common in older, shallower networks
Can be used to predict probabilities

Softmax unit (Sec 6.2.2.3): Start with z = W^T h + b
Predict the probability of label i to be

softmax(z)_i = exp(z_i) / Σ_j exp(z_j)

Continuous, differentiable approximation to argmax
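A sketch of softmax in NumPy; the max-shift below is a common numerical-stability practice (an assumption, not from the slides) and leaves the result unchanged:

```python
import numpy as np

def softmax(z):
    """softmax(z)_i = exp(z_i) / sum_j exp(z_j), computed stably."""
    z = z - np.max(z)  # invariant shift; prevents exp overflow
    e = np.exp(z)
    return e / np.sum(e)

z = np.array([2.0, 1.0, 0.1])
p = softmax(z)
print(p, p.sum())                    # probabilities summing to 1
print(np.argmax(p) == np.argmax(z))  # softmax preserves the argmax: True
```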
Types of Hidden Units
Rectified linear unit (ReLU) (Sec 6.3.1): max{0, W^T x + b}
Good default choice
Second derivative is 0 almost everywhere, and derivatives are large
In general, GD works well when functions are nearly linear

Logistic sigmoid (done already) and tanh (Sec 6.3.2)
Nice approximations to a threshold, but don't train well in deep networks
Still potentially useful when piecewise functions are inappropriate

Softmax (occasionally used as a hidden unit)
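Minimal sketches of the hidden activations named above:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)  # max{0, z}; gradient 1 wherever z > 0

def tanh_act(z):
    return np.tanh(z)          # squashes into (-1, 1)

z = np.linspace(-2, 2, 5)      # [-2, -1, 0, 1, 2]
print(relu(z))                 # [0. 0. 0. 1. 2.]
print(tanh_act(z))
```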
Putting Everything Together: Hidden Layers
How many layers to use?
Deep networks tend to build potentially useful representations of the data via composition of simple functions
Performance improvement is not simply from a more complex network (number of parameters)
Increasing the number of layers still increases the chances of overfitting, so a deep network needs a significant amount of training data; training time increases as well

Any boolean function can be represented with two layers
Any bounded, continuous function can be represented with arbitrarily small error with two layers
Any function can be represented with arbitrarily small error with three layers

Only an EXISTENCE PROOF:
Could need exponentially many nodes in a layer
May not be able to find the right weights