Deep Learning
Er. Shiva K. Shrestha, ME Computer, NCIT
January 5, 2017
Slide Credit
o Jeff Dean, Google, Large Scale Deep Learning
o Andrew Ng, Deep Learning
o Aditya Khosla & Joseph Lim, Visual Recognition through ML Competition
Structure
◦ General Questions of the World
◦ What is Deep Learning?
◦ Why Deep Learning?
◦ Deep Neural Network Architectures
◦ Deep Learning Applications
◦ Conclusions, Recommendations
How Can We Build More Intelligent Computer Systems?
According to Jeff Dean, Google:
o Need to perceive and understand the world
o Basic speech and vision capabilities
o Language understanding
o User behavior prediction
o …
How can we do this?
According to Jeff Dean, Google:
o Cannot write algorithms for each task we want to accomplish separately
o Need to write general algorithms that learn from observations
o Can we build systems that:
o Generate understanding from raw data
o Solve difficult problems to improve products
o Minimize software engineering effort
Plenty of Data
o Text: trillions of words of English + other languages
o Visual: billions of images and videos
o Audio: thousands of hours of speech per day
o User Activity: queries, result page clicks, map requests, etc.
o Knowledge Graph: billions of labelled relation triples
o …
Image Models
What are these numbers?
What are all these words?
How about these words?
Textual Understanding
“This movie should have NEVER been made. From the poorly done animation, to the beyond bad acting. I am not sure at what point the people behind this movie said "Ok, looks good! Lets do it!" I was in awe of how truly horrid this movie was.”
General Machine Learning Approaches
o Learning by labeled example: Supervised Learning
o e.g. an email spam detector (see the sketch below)
o amazingly effective if you have lots of examples
o Discovering patterns: Unsupervised Learning
o e.g. data clustering
o difficult in practice, but useful if you lack labeled examples
o Feedback right/wrong: Reinforcement Learning
o e.g. learning to play chess by winning or losing
o works well in some domains, becoming more important
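A minimal, hypothetical sketch of the spam-detector example above, using scikit-learn: the toy emails, labels, and the bag-of-words plus logistic-regression recipe are illustrative assumptions, not material from the slides.

```python
# Supervised learning from labeled examples, in the spirit of the
# email spam detector mentioned above (toy data, illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = ["win free money now", "cheap pills, limited offer",
          "meeting moved to noon", "lunch tomorrow?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam: the labeled examples

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)        # bag-of-words features
clf = LogisticRegression().fit(X, labels)   # learn from the labels

# Score a new, unseen email; with realistic amounts of labeled data
# this same recipe becomes the "amazingly effective" detector above.
print(clf.predict(vectorizer.transform(["free money offer"])))
```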
Machine Learning
o For many of these problems, we have lots of data
o Gives computers the ability to learn without being explicitly programmed
Approaches
o Decision tree learning
o Association rule learning
o Artificial neural networks
o Deep learning
o Inductive logic programming
o Support vector machines
o Clustering
o Bayesian networks
o Reinforcement learning
o Representation learning
o Similarity and metric learning
o Sparse dictionary learning
o Genetic algorithms
o Rule-based machine learning
o Learning classifier systems
Typical Goal of Machine Learning
[Figure: ML maps each type of input to useful outputs]
o images/video → ML → label: “Motorcycle”, suggest tags, image search, …
o audio → ML → speech recognition, music classification, speaker identification, …
o text → ML → web search, anti-spam, machine translation, …
Basic Idea of Deep Learning
Is there some way to extract meaningful features from data even without knowing the task to be performed?
Then, throw in some hierarchical ‘stuff’ to make it ‘deep’.
What is Deep Learning?
o The modern reincarnation of ANNs from the 1980s and 90s
o A collection of simple trainable mathematical units, which collaborate to compute a complicated function
o Compatible with all three general ML approaches
What is Deep Learning? (2)
o Loosely inspired by what (little) we know about the biological brain
o AKA:
o Deep Structured Learning
o Hierarchical Learning
o Deep Machine Learning
Deep Learning Definitions
Deep learning is characterized as a class of machine learning algorithms that:
o use a cascade of many layers of nonlinear processing units for feature extraction and transformation
o are based on the learning of multiple levels of features or representations of the data
o are part of the broader machine learning field of learning representations of data
o learn multiple levels of representations that correspond to different levels of abstraction
DL - Why is this hard?
You see this:
But the camera sees this:
Pixel-based Representation
[Figure: raw images of motorbikes and “non”-motorbikes fed directly into a learning algorithm; examples plotted on raw-pixel axes (pixel 1 vs. pixel 2).]
Pixel-based Representation (2)
[Figure: more examples plotted on the raw-pixel axes (pixel 1 vs. pixel 2).]
Pixel-based Representation (3)
[Figure: still more examples on the raw-pixel axes; the two classes do not separate in raw pixel space.]
What We Want
[Figure: raw image → feature representation (e.g., does it have handlebars? wheels?) → learning algorithm; examples plotted on feature axes (handlebars vs. wheels) instead of raw pixels.]
Some Feature Representations
SIFT, HoG, Textons, Spin image, RIFT, GLOH
Some Feature Representations (2)
SIFT, HoG, Textons, Spin image, RIFT, GLOH
Coming up with features is often difficult, time-consuming, and requires expert knowledge (a hand-engineered example is sketched below).
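To show how hand-engineered these descriptors are, here is a minimal sketch of computing HoG features with scikit-image; the library call, built-in test image, and parameter values are assumptions for illustration, not taken from the slides.

```python
# A hand-engineered feature (HoG), designed by experts rather than learned.
from skimage import data
from skimage.feature import hog

image = data.camera()                  # built-in grayscale test image
features = hog(image,
               orientations=9,         # gradient-orientation bins, chosen by hand
               pixels_per_cell=(8, 8), # cell size, chosen by hand
               cells_per_block=(2, 2)) # block size, chosen by hand
print(features.shape)                  # one long, fixed-length feature vector
```

Every parameter here was picked by a human designer; deep learning aims to replace this design step with features learned from data.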
The Brain: Potential Motivation for Deep Learning
Auditory cortex learns to see! [Roe et al., 1992]
The Brain adapts!
o Seeing with your tongue
o Human echolocation (sonar)
o Haptic belt: direction sense
o Implanting a 3rd eye
[BrainPort; Welsh & Blasch, 1997; Nagel et al., 2005; Constantine-Paton & Law, 2009]
Feature Learning Problem
Given a 14×14 image patch x, we can represent it using 196 real numbers (its raw pixel intensities).
Problem: Can we learn a better feature vector to represent this?
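A tiny numpy illustration of that raw representation (the patch values are random placeholders, not the slide's image):

```python
import numpy as np

patch = np.random.randint(0, 256, size=(14, 14))  # stand-in 14x14 image patch
raw = patch.reshape(-1)                           # flatten to a feature vector
print(raw.shape)                                  # (196,): just raw pixel values
```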
Why Deep Learning?
Task: Video Activity Recognition [Le, Zhou & Ng, 2011]

Method                                                 Accuracy
Hessian + ESURF [Willems et al. 2008]                  38%
Harris3D + HOG/HOF [Laptev et al. 2003, 2004]          45%
Cuboids + HOG/HOF [Dollar et al. 2005, Laptev 2004]    46%
Hessian + HOG/HOF [Laptev 2004, Willems et al. 2008]   46%
Dense + HOG/HOF [Laptev 2004]                          47%
Cuboids + HOG3D [Klaser 2008, Dollar et al. 2005]      46%
Unsupervised Feature Learning (DL)                     52%
Deep Neural Network Architectures
o GMDH: the first deep learning network (1965)
o Convolutional NN
o Neural history compressor
o Recursive NN
o Long short-term memory (LSTM)
o Deep belief networks (DBN)
o Convolutional deep belief networks
o Large memory storage & retrieval NN
o Deep Boltzmann machines
o Stacked (de-noising) auto-encoders
o Deep stacking networks
o Tensor deep stacking networks
o Spike-and-slab RBMs
o Compound hierarchical-deep models
o Deep coding networks
o Deep Q-networks
o Networks with separate memory structures
Neural Network (NN)
[Figure: a 4-layer network with 2 output units; inputs x1, x2, x3 (plus bias +1 units) in Layer 1, hidden Layers 2 and 3, and the 2 output units in Layer 4.]
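A minimal numpy sketch of a forward pass through a network shaped like the one in the figure (3 inputs, two hidden layers, 2 outputs); the hidden-layer sizes and random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])                      # inputs x1, x2, x3

W1, b1 = np.random.randn(4, 3), np.random.randn(4)  # Layer 1 -> Layer 2
W2, b2 = np.random.randn(4, 4), np.random.randn(4)  # Layer 2 -> Layer 3
W3, b3 = np.random.randn(2, 4), np.random.randn(2)  # Layer 3 -> Layer 4

a2 = sigmoid(W1 @ x + b1)      # Layer 2 activations
a3 = sigmoid(W2 @ a2 + b2)     # Layer 3 activations
out = sigmoid(W3 @ a3 + b3)    # Layer 4: the 2 output units
print(out)
```

The bias vectors b1, b2, b3 play the role of the "+1" units in the figure.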
Unsupervised Feature Learning with a NN
[Figure: inputs x1…x6 (plus bias +1 units) feed hidden layers a1–a3, b1–b3, and c1–c3.]
New representation for the input: use [c1, c2, c3] as the representation to feed to the learning algorithm.
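One standard way to learn such a representation is an autoencoder trained to reconstruct its own input. The sketch below (assuming TensorFlow/Keras is available, with toy random data and a made-up layer name) is an illustration in the spirit of the figure, not the exact model from the slides.

```python
import numpy as np
from tensorflow.keras import layers, models

x = np.random.rand(1000, 6).astype("float32")             # toy unlabeled data x1..x6

inp = layers.Input(shape=(6,))
a = layers.Dense(3, activation="sigmoid")(inp)             # hidden layer a
b = layers.Dense(3, activation="sigmoid")(a)               # hidden layer b
c = layers.Dense(3, activation="sigmoid", name="code")(b)  # hidden layer c
out = layers.Dense(6)(c)                                   # reconstruct the 6 inputs

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)  # target = the input itself

# New representation: use [c1, c2, c3] as features for a downstream learner.
encoder = models.Model(inp, autoencoder.get_layer("code").output)
features = encoder.predict(x, verbose=0)                   # shape (1000, 3)
```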
Deep Belief Network (DBN)
o A DBN is an algorithm for learning a feature hierarchy.
o Building block: a 2-layer graphical model (Restricted Boltzmann Machine).
o Can then learn additional layers one at a time.
[Figure: schematic overview of a deep belief net.]
Deep Belief Network (2)
Input: [x1, x2, x3, x4]; Layer 2: [a1, a2, a3]; Layer 3: [b1, b2, b3]
Similar to a sparse auto-encoder in many ways: stack RBMs on top of each other to get a DBN.
Train with approximate maximum likelihood (often with a sparsity constraint on the ai's).
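The sentence above originally introduced a formula that did not survive extraction; the following is the standard RBM formulation (an assumption about what was shown, in the usual notation), with visible units $\mathbf{v}$ and hidden units $\mathbf{h}$ (the $a_i$'s):

$$E(\mathbf{v},\mathbf{h}) = -\mathbf{b}^\top \mathbf{v} - \mathbf{c}^\top \mathbf{h} - \mathbf{v}^\top W \mathbf{h}, \qquad P(\mathbf{v},\mathbf{h}) = \frac{e^{-E(\mathbf{v},\mathbf{h})}}{Z}$$

Training then (approximately) maximizes the data log-likelihood, often with a sparsity penalty that pushes the average hidden activations $\hat{\rho}_j$ toward a small target $\rho$:

$$\max_{W,\mathbf{b},\mathbf{c}} \; \sum_{n} \log P\!\left(\mathbf{v}^{(n)}\right) \;-\; \lambda \sum_{j} \left(\hat{\rho}_j - \rho\right)^2$$

The log-likelihood gradient is itself approximated, e.g. by contrastive divergence; the squared penalty is one common form of the sparsity constraint.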
Convolutional DBN for Audio
[Figure: a spectrogram input, convolutional detection units, and a max-pooling unit.]
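To make the figure concrete, here is a minimal numpy sketch of one convolutional detection unit plus max pooling over a spectrogram; the sizes, random filter, and toy data are illustrative assumptions, not the model from the slide.

```python
import numpy as np

spectrogram = np.random.rand(40, 200)   # toy data: 40 frequency bins x 200 frames
filt = np.random.randn(40, 8)           # one detection unit spanning 8 frames

# Convolution along time: the filter's response at every 8-frame window.
n_windows = spectrogram.shape[1] - filt.shape[1] + 1
responses = np.array([np.sum(spectrogram[:, t:t + 8] * filt)
                      for t in range(n_windows)])
activations = np.maximum(responses, 0)  # simple nonlinearity

# Max pooling: keep the strongest response in each block of 4 time steps,
# giving features that tolerate small shifts in time.
pool = 4
n_blocks = len(activations) // pool
pooled = activations[:n_blocks * pool].reshape(n_blocks, pool).max(axis=1)
print(pooled.shape)
```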
Convolutional DBN for Audio (2)
[Figure: spectrogram input.]
Convolutional DBN for Images
Going Deep [Honglak Lee]
Training set: aligned images of faces.
Feature hierarchy: Pixels → Edges → Object Parts (combinations of edges) → Object Models
Applications
o Computer Vision: Object Detection & Recognition
o Speech Recognition
o Speaker Identification
o Web Searches
o Text Classification: Sentiment Analysis
o Translations
o Miscellaneous
o Fine-grained Classification
o Generalization
o Generating Image Captions from Pixels
o …
Applications (2)
Speech Recognition on Android
Impact on Speech Recognition
Text Classification
Results for IMDB Sentiment Classification (long paragraphs)
Translation
o Google Translate: "As Reuters noted for the first time in July, the seating configuration is exactly what fuels the battle between the latest devices."
o Neural LSTM Model: "As Reuters reported for the first time in July, the configuration of seats is exactly what drives the battle between the latest aircraft."
o Human Translation: "As Reuters first reported in July, seat layout is exactly what drives the battle between the latest jets."
Good Fine-grained Classification
Good Generalization
Sensible Errors
Generating Image Captions from Pixels
Work by Oriol Vinyals et al.
Generating Image Captions from Pixels (2)
Generating Image Captions from Pixels (3)
Generating Image Captions from Pixels (4)
Conclusion
Deep Neural Networks are very effective for a wide range of tasks
o By using parallelism, we can quickly train very large and effective deep neural models on very large datasets
o Automatically build high-level representations to solve desired tasks
o By using embeddings, can work with sparse data
o Effective in many domains: speech, vision, language modeling, user prediction, language understanding, translation, advertising, …
An important tool in building Intelligent Systems!
Thank You!
Q/A?
Recommendations
o Le, Ranzato, Monga, Devin, Chen, Corrado, Dean & Ng. Building High-Level Features Using Large Scale Unsupervised Learning, ICML 2012.
o Dean, Corrado, et al. Large Scale Distributed Deep Networks, NIPS 2012.
o Mikolov, Chen, Corrado & Dean. Efficient Estimation of Word Representations in Vector Space, http://arxiv.org/abs/1301.3781.
o Le & Mikolov. Distributed Representations of Sentences and Documents, ICML 2014, http://arxiv.org/abs/1405.4053.
o Vanhoucke, Devin & Heigold. Deep Neural Networks for Acoustic Modeling, ICASSP 2013.
o Sutskever, Vinyals & Le. Sequence to Sequence Learning with Neural Networks, NIPS 2014, http://arxiv.org/abs/1409.3215.
o http://research.google.com/papers
o http://research.google.com/people/jeff