Deep learning from a novice perspective and recent innovations from KGP
Anirban Santara, Doctoral Research Fellow
Department of CSE, IIT Kharagpur
bit.do/AnirbanSantara
Aug 19, 2015
Deep Learning
Just a kind of
Machine Learning
3 main tasks:
• Classification
• Regression
• Clustering
CLASSIFICATION
Sort inputs into classes, e.g. pandas, dogs, and cats.
Rather than a hard decision, we usually ask: what is P(class | input)?
REGRESSION
Fit the dependent variable (target attribute) as a function of the independent variable (feature).
CLUSTERING
[Scatter plot: data points grouped into clusters in the Attribute 1 vs. Attribute 2 plane]
The methodology:
1. Design a hypothesis function h(y | x, θ), where y is the target attribute, x the input, and θ the parameters of the learning machine.
2. Keep improving the hypothesis until the predictions become good enough.
Well, how bad is your hypothesis? In the case of regression, a very common measure is the mean squared error:
E = (1/N) Σ_{i ∈ training examples} (y_predicted^(i) − y_target^(i))²
In classification problems the targets are one-hot vectors such as [1 0] and [0 1]. In one-hot classification frameworks, we often use the mean squared error.
However, often we ask for the probabilities of occurrence of the different classes for a given input ( Pr(class|X) ). In that case we use the K-L divergence between the observed (p(output classes)) and predicted (q(output classes)) distributions as the measure of error. This is sometimes referred to as the cross entropy error criterion:
KL(p ‖ q) = Σ_c p(c) log( p(c) / q(c) )
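The two error measures can be sketched in a few lines of NumPy (the toy data and the small eps smoothing term are illustrative choices):

```python
import numpy as np

# Mean squared error for regression
def mse(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)

# Cross entropy between observed one-hot targets p and predicted class
# probabilities q (equals the K-L divergence plus the constant entropy of p)
def cross_entropy(p, q, eps=1e-12):
    return -np.sum(p * np.log(q + eps), axis=-1).mean()

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
print(mse(y_pred, y_true))          # about 0.02

p = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot labels
q = np.array([[0.9, 0.1], [0.2, 0.8]])   # predicted class probabilities
print(cross_entropy(p, q))          # about 0.164
```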
Clustering uses a plethora of criteria like:
• Entropy of a cluster
• Maximum distance between two neighbors in a cluster
• and a lot more
Now it's time to rectify the machine and improve.
Learning
We perform "gradient descent" along the "error-plane" in the "parameter space":
Δparameters = −learning_rate × ∇_parameters (Error function)
parameters ← parameters + Δparameters
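This update rule can be sketched by fitting a 1-D linear model with gradient descent on the mean squared error (the toy data, learning rate, and iteration count are illustrative choices):

```python
import numpy as np

# Gradient descent on the mean-squared-error surface of a 1-D linear
# model y = w*x + b
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5            # true underlying relationship

w, b = 0.0, 0.0              # uninformative initial parameters
lr = 0.1                     # learning rate

for _ in range(500):
    y_pred = w * x + b
    err = y_pred - y
    # Gradient of E = mean(err^2) with respect to w and b
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    # parameters <- parameters - learning_rate * gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)   # converge towards 3.0 and 0.5
```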
Let's now look at a practical learning system: the Artificial Neural Network.
[Diagram: an artificial neural network classifying an input image as Cat, Dog, or Panda]
The neuron: a very small unit of computation
So the parameters of an ANN are:
1. Incoming weights of every neuron
2. Bias of every neuron
These are the ones that need to be tuned during learning. We perform gradient descent on these parameters.
The backpropagation algorithm is a popular method of computing the required gradients.
Backpropagation algorithm
[Diagram: a three-layer network with an input pattern vector and weight matrices W21 and W32]
1. Forward propagate: compute the activations of each layer in turn.
2. Error calculation: compare the network output with the target.
3. Backward propagation: compute a delta δ_k for every neuron k, layer by layer:
δ_k = (t_k − y_k) f′(net_k)   if k is in the output layer
δ_k = f′(net_k) Σ_j w_jk δ_j   if k is in a hidden layer
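These steps can be sketched numerically for a 3-4-1 sigmoid network with squared error; the sizes, seed, toy data, and the finite-difference sanity check are illustrative additions, not from the talk:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)                 # input pattern vector
t = np.array([1.0])                    # target
W1 = rng.normal(size=(3, 4))           # input -> hidden weights
W2 = rng.normal(size=(4, 1))           # hidden -> output weights

def forward(W1, W2):
    h = sigmoid(x @ W1)                # forward propagate: hidden layer
    y = sigmoid(h @ W2)                # forward propagate: output layer
    return h, y, 0.5 * np.sum((y - t) ** 2)

# Forward pass and error calculation
h, y, E = forward(W1, W2)

# Backward propagation of deltas (gradient-descent sign convention)
delta_out = (y - t) * y * (1 - y)            # output-layer delta
delta_hid = (W2 @ delta_out) * h * (1 - h)   # hidden-layer delta
grad_W2 = np.outer(h, delta_out)
grad_W1 = np.outer(x, delta_hid)

# Sanity check: compare one backprop gradient against a central
# finite difference of the error function
eps = 1e-6
Wp = W1.copy(); Wp[0, 0] += eps
Wm = W1.copy(); Wm[0, 0] -= eps
numeric = (forward(Wp, W2)[2] - forward(Wm, W2)[2]) / (2 * eps)
print(abs(numeric - grad_W1[0, 0]))    # tiny: backprop matches numerics
```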
Well, after all, life is tough…
• The parameters of a neural network are generally initialized to random values. Starting from these random values (which carry no useful information), it is very difficult (not impossible, but very time-consuming) for backpropagation to arrive at the correct values of these parameters.
• Exponential activation functions like the sigmoid and the hyperbolic tangent are traditionally used in artificial neurons. These functions have gradients that are prone to become zero in the course of backpropagation.
• If the gradients in a layer get close to zero, they induce the gradients in the previous layers to vanish too. As a result, the weights and biases in the lower layers remain immature.
• This phenomenon is called the "vanishing gradient" problem in the literature.
These problems crop up very frequently in neural networks that contain a large number of hidden layers and a very large number of parameters (the so-called Deep Neural Networks).
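The effect can be illustrated numerically: the sigmoid's derivative never exceeds 0.25, so a gradient passed backwards through many sigmoid layers shrinks geometrically (the assumed pre-activation z = 2.0 in every layer is an arbitrary illustrative choice):

```python
import numpy as np

# Derivative of the sigmoid activation: s(z) * (1 - s(z)), at most 0.25
def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

grad = 1.0
for layer in range(10):
    grad *= sigmoid_grad(2.0)   # one factor of f'(net) per layer

print(grad)   # on the order of 1e-10 after only 10 layers
```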
How to get around this? Answer: make an "informed" initialization.
• A signal is nothing but a set of random variables.
• These random variables jointly take values from a probability distribution that depends on the nature of the source of the signal.
E.g.: a blank 28×28 pixel array can house numerous kinds of images. The set of 784 random variables assumes values from a different joint probability distribution for every class of objects/scenes:
P_object(x_1, x_2, …, x_784),  e.g.  P_human face(x_1, x_2, …, x_784)
Let's try and model the probability distribution of interest.
Our target distribution: P_human face(x_1, x_2, …, x_784). We try to capture this distribution in a model that looks quite similar to a single-layer neural network.
The Restricted Boltzmann Machine: it's a probabilistic graphical model (a special kind of Markov Random Field) that is capable of modelling a wide variety of probability distributions.
Its hidden units capture the dependencies among the "visible" variables.
The working of the RBM. Parameters of the RBM:
1. Weights on the edges
2. Biases on each visible and hidden node
Using these we define a joint probability distribution over the "visible" variables v and the "hidden" variables h, where the energy function is defined as
E(v, h) = −Σ_i a_i v_i − Σ_j b_j h_j − Σ_i Σ_j v_i w_ij h_j
and Z is a normalization term called the "partition function":
P_RBM(v, h) = (1/Z) e^(−E(v, h))
Marginalizing over the hidden units gives the model distribution over the visible variables:
P_RBM(v_1, v_2, …, v_784) = Σ_h P_RBM(v, h)
We want this to match P_human face(v_1, v_2, …, v_784). The mismatch is measured by the K-L divergence:
KL(P_human face ‖ P_RBM) = −H(P_human face) − Σ_{v_1, …, v_784} P_human face(v_1, …, v_784) log P_RBM(v_1, …, v_784)
The first term is not under our control. The second term is the empirical average of the log-likelihood of the data under the model distribution, so to minimize the KL divergence we MAXIMIZE it.
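Maximizing this log-likelihood is commonly approximated with one-step contrastive divergence (CD-1). A minimal sketch on toy binary data; all sizes, data, and hyperparameters here are illustrative choices, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 3
W = rng.normal(0.0, 0.1, (n_vis, n_hid))   # weights on the edges
a = np.zeros(n_vis)                        # visible biases
b = np.zeros(n_hid)                        # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary data: two repeated patterns
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

lr = 0.1
for _ in range(500):
    v0 = data
    ph0 = sigmoid(v0 @ W + b)                    # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0     # sample hidden states
    pv1 = sigmoid(h0 @ W.T + a)                  # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b)
    # CD-1 gradient estimate: <v h>_data - <v h>_reconstruction
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(data)
    a += lr * (v0 - pv1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)

# The trained RBM should reconstruct the training patterns well
recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
print(np.mean((data - recon) ** 2))   # small reconstruction error
```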
Layer-wise pre-training using RBMs
• Every hidden layer is pre-trained as the hidden layer of an RBM.
• As the RBM models the statistics of the input, its weights and biases carry meaningful information about the input. Using these as the initial values of the parameters of a deep neural network has shown phenomenal improvement over random initialization, both in terms of time complexity and performance.
• This is followed by fine-tuning of the entire network via back-propagation.
• An autoencoder is a neural network operating in unsupervised learning mode.
• The output and the input are set equal to each other.
• It learns an identity mapping from the input to the output.
• Applications:
  • Dimensionality reduction (efficient, non-linear)
  • Representation learning (discovering interesting structures)
  • Alternative to RBM for layer-wise pre-training of DNNs
The Autoencoder
A deep stacked autoencoder
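The idea can be sketched as a tiny 8-4-8 network trained to reproduce its own input (sigmoid encoder, linear decoder, squared error; the data and sizes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 8))                            # toy 8-dimensional data

W1 = rng.normal(0, 0.1, (8, 4)); b1 = np.zeros(4)   # encoder (bottleneck)
W2 = rng.normal(0, 0.1, (4, 8)); b2 = np.zeros(8)   # decoder

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
n = len(X)
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)   # encoding: the learned representation
    Y = H @ W2 + b2            # decoding: reconstruction of the input
    err = Y - X                # the output is compared with the input itself
    # Backpropagate the reconstruction error
    dH = (err @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ err / n; b2 -= lr * err.mean(axis=0)
    W1 -= lr * X.T @ dH / n;  b1 -= lr * dH.mean(axis=0)

print(np.mean((Y - X) ** 2))   # reconstruction error shrinks with training
```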
So deep learning = training "deep" neural networks with many hidden layers:
Step 1: Unsupervised layer-wise pre-training
Step 2: Supervised fine-tuning
This is pretty much all there is to how deep learning works. However, there is a class of deep networks called convolutional neural networks that often do not need pre-training, because these networks use extensive parameter sharing and rectified linear activation functions.
Well, deep learning when viewed from a different perspective looks really amazing!!!
Traditional machine learning vs. deep learning
Traditional machine learning: Data → hand-engineering of feature extractors → data representations by feature extractors → inference engine → classification / regression / clustering / efficient coding
Deep learning: Data → data-driven, target-oriented representation learning → inference engine → the same tasks
What's so special about it?
Traditional machine learning:
• Designing feature detectors requires careful engineering and considerable domain expertise.
• Representations must be selective to the aspects of the data that are important for our task and invariant to the irrelevant aspects (the selectivity-invariance dilemma).
Deep learning:
• Abstractions of hierarchically increasing complexity are learnt by a data-driven approach using general-purpose learning procedures.
• A composition of simple non-linear modules can learn very complex functions.
• Cost functions specific to the problem amplify aspects of the input that are important for the task and suppress irrelevant variations.
Pretty much how we humans go about analyzingโฆ
Some deep architectures:
• Deep stacked autoencoder: used for efficient non-linear dimensionality reduction and discovering salient underlying structures in data.
• Deep convolutional neural network: exploits the stationarity of natural data and uses the concept of parameter sharing to study large images and long spoken/written strings and make inferences from them.
• Recurrent neural network: custom-made for modelling dynamic systems; finds use in natural language (speech and text) processing, machine translation, etc.
Classical automatic speech recognition system
Pipeline: signal acquisition → feature extraction → acoustic modelling → Viterbi beam search / A* decoding → N-best sentences or word lattice → rescoring → FINAL UTTERANCE
Acoustic model generation and sentence model preparation supply the phonetic utterance models and the sentence model used during decoding.
Some of our works:
2015:
Deep neural network and Random Forest hybrid architecture for learning to detect retinal vessels in fundus images (accepted at EMBC-2015, Milan, Italy)
Our architecture:
Average accuracy of detection: 93.27%
2014-15:
Faster learning of deep stacked autoencoders on multi-core systems through synchronized layer-wise pre-training (accepted at PDCKDD Workshop, a part of ECML-PKDD 2015, Porto, Portugal)
Conventional serial pre-training:
Proposed algorithm:
26% speedup for compression of MNIST handwritten digits
Take-home messages
• Deep learning is a set of algorithms that have been designed to:
  1. Train neural networks with a large number of hidden layers.
  2. Learn features of hierarchically increasing complexity in the data, in a data- and objective-driven manner.
• Deep neural networks are breaking all world records in AI because it can be shown that they have the capacity to model highly non-linear functions of the data with fewer parameters than shallow networks.
• Deep learning is extremely interesting and a breeze to implement once the underlying philosophies are understood. It has great potential for use in a lot of ongoing projects at KGP.
If you are interested to go deep into deep learning…
• Take Andrew Ng's Machine Learning course on Coursera
• Visit ufldl.Stanford.edu and read the entire tutorial
• Read LeCun's latest deep learning review published in Nature
Thank you so much
Please give me some feedback for this talk by visiting: bit.do/RateAnirban Or just scan the QR code