Learning Deep Representations
Yoshua Bengio
September 25th, 2008
DARPA Deep Learning Workshop
Thanks to : James Bergstra, Olivier Breuleux, Aaron Courville, Olivier Delalleau,
Dumitru Erhan, Pascal Lamblin, Hugo Larochelle, Jerome Louradour, Nicolas Le Roux,
Pierre-Antoine Manzagol, Dan Popovici, Francois Rivest, Joseph Turian, Pascal Vincent
Check review paper “Learning Deep Architectures for AI” on my web page
Outline
Why deep learning? Theoretical results: efficiently representing highly-varying functions
Why is it hard? Non-convexity
Why are our current algorithms working?
Going forward: research program
Focus on optimization, large-scale, and sequential aspects
Exploit un-/semi-supervised, multi-task, multi-modal learning
Curriculum
Parallel search for solutions
Synthetically generated + real data of increasing complexity
1D Generalization
Why Deep Learning? Let us go back to basics.
Easy 1D generalization if the target function is smooth (few variations).
Curse of Dimensionality
Local generalization: local kernel SVMs, GPs, decision trees, LLE, Isomap, etc.
Theorem Sketch
Local learning algorithms cannot generalize to variations not covered by the training set.
Informal Corollary
Local learning algorithms can require a number of training examples exponential in the input dimension to obtain a given generalization error.
Local learning is OK in high dimension if the target function is smooth
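The exponential requirement in the corollary can be made concrete with a toy count (my own illustration, not from the slides): a purely local learner that needs roughly one training example per cell of a grid covering the input space pays exponentially in the dimension.

```python
# Toy illustration (not from the slides): a purely local learner that
# needs about one training example per cell of a grid covering the
# input space requires exponentially many examples as dimension grows.
def examples_needed(bins_per_dim, dim):
    """Number of cells in a grid with `bins_per_dim` intervals per axis."""
    return bins_per_dim ** dim

for d in (1, 2, 10):
    print(d, examples_needed(10, d))  # 10, 100, 10000000000
```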
Strategy : Distributed Representations
Distributed representation: input ⇒ combination of many features
Parametrization: exponential advantage of distributed vs. local representations
Missing in most learning algorithms
[Figure: a DISTRIBUTED PARTITION, in which binary features C1, C2, C3 (sub-partitions 1-3) jointly define up to 2^3 regions, vs. a LOCAL PARTITION, with regions defined by learned prototypes]
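The counting argument behind the figure can be sketched in a few lines (my own toy, not from the talk): n binary features carve the space into up to 2^n joint regions, while a local, prototype-based partition must pay one prototype (and its examples) per region.

```python
from itertools import product

# Toy counting argument (my own sketch): n binary features C1..Cn carve
# the input space into up to 2**n regions, while a local, prototype-based
# partition needs one prototype per region it distinguishes.
def distributed_regions(n_features):
    """All joint configurations of n binary features."""
    return list(product((0, 1), repeat=n_features))

regions = distributed_regions(3)  # C1, C2, C3 as in the figure
print(len(regions))               # 8 regions from only 3 features
```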
Exploiting Multiple Levels of Representation
Distributed is not enough: we need non-linearity + depth of composition
[Figure: hierarchy of representations, as in the visual cortex: Retina / pixels → V1 / oriented edge detectors → V2 / primitive pattern detectors → V4 / higher-level abstractions]
Architecture Depth
Most current learning algorithms have depth 1, 2 or 3: shallow
Theorem Sketch
When a function can be compactly represented by a deep architecture,representing it with a shallow architecture can require a number ofelements exponential in the input dimension.
[Figure: a SHALLOW (fat) architecture vs. a DEEP one]
Fat architecture ⇒ too rich a hypothesis space ⇒ poor generalization
Training Deep Architectures: the Challenge
Two levels suffice to represent any function
Shallow & local learning works for simpler problems: insufficient for AI-type tasks
Up to 2006, attempts to train deep architectures failed (except Yann Le Cun's convolutional nets!)
Why? Non-convex and stochastic optimization!
Focus of NIPS 1995-2005: convex learning algorithms
⇒ Let us face the challenge !
2006 : Breakthrough !
FIRST: successful training of deep architectures!
Hinton et al. (U. of Toronto), Neural Computation 2006, followed by Bengio et al. (U. Montreal) and Ranzato et al. (NYU) at NIPS'2006
Train a deep MLP one layer after the other
Unsupervised learning of an initial representation in each layer
Then continue training as an ordinary (but deep) MLP, starting near a better minimum
Deep Belief Network (DBN)
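A minimal sketch of the greedy layer-wise recipe (my own simplification: linear auto-encoder layers trained by plain gradient descent, whereas the actual work used RBMs and sigmoid layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_ae(X, n_hidden, lr=0.05, epochs=200):
    """Train one linear auto-encoder layer by gradient descent on the
    mean squared reconstruction error; return the learned encoder."""
    n, d = X.shape
    W_enc = rng.normal(0.0, 0.1, (d, n_hidden))
    W_dec = rng.normal(0.0, 0.1, (n_hidden, d))
    for _ in range(epochs):
        H = X @ W_enc                 # encode
        R = H @ W_dec                 # decode (reconstruct)
        E = (R - X) / n               # error term of the MSE gradient
        g_enc = X.T @ (E @ W_dec.T)   # exact gradients for linear layers
        g_dec = H.T @ E
        W_enc -= lr * g_enc
        W_dec -= lr * g_dec
    return W_enc

def greedy_pretrain(X, layer_sizes):
    """Train one layer after the other: each new layer learns to
    reconstruct the representation produced by the layers below it."""
    encoders, H = [], X
    for n_hidden in layer_sizes:
        W = train_linear_ae(H, n_hidden)
        encoders.append(W)
        H = H @ W                     # representation fed to the next layer
    return encoders, H

X = rng.normal(size=(100, 8))
encoders, top = greedy_pretrain(X, [6, 4])
print(top.shape)                      # (100, 4): the deep representation
```

The stack of encoders would then initialize a deep MLP that is fine-tuned as usual.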
Individual Layer : RBMs and auto-encoders
State-of-the-art 'layer components': variants of RBMs and auto-encoders
Deep connections between the two...
Restricted Boltzmann Machine: efficient inference of factors h
Auto-encoder: find a compact representation; encode x into h(x), decode into x̂(h(x))
[Figure: observed input x → learned representation h → reconstruction x̂; the reconstruction error drives the model]
Denoising Auto-Encoders
More flexible alternative to RBMs/DBNs, while competitive in accuracy
Clean input x ∈ [0, 1]^d is partially destroyed, yielding corrupted input x̃ ∼ qD(x̃|x).
x̃ is mapped to hidden representation y = fθ(x̃).
From y we reconstruct z = gθ′(y).
Train parameters to minimize the cross-entropy “reconstruction error”
Corresponds to maximizing a variational bound on the likelihood of a generative model
Semi-supervised + multi-task ⇒ Self-Taught Learning (Raina et al.)
Generalize even with 0 examples on a new task!
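The procedure above can be sketched in NumPy (a minimal version of my own: tied weights, zeroing corruption, and the learning rate are illustrative assumptions, not details from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def dae_step(x, W, b, c, noise=0.3, lr=0.1):
    """One gradient step of a denoising auto-encoder with tied weights:
    corrupt x, encode, decode, then follow the exact gradient of the
    cross-entropy reconstruction error. Updates W, b, c in place."""
    n = x.shape[0]
    x_tilde = x * (rng.random(x.shape) > noise)   # q_D: zero out a fraction of inputs
    y = sigmoid(x_tilde @ W + b)                  # hidden representation f_theta
    z = sigmoid(y @ W.T + c)                      # reconstruction g_theta'
    loss = -np.sum(x * np.log(z) + (1 - x) * np.log(1 - z)) / n
    dz = (z - x) / n                              # grad w.r.t. decoder pre-activation
    dy = (dz @ W) * y * (1.0 - y)                 # grad w.r.t. encoder pre-activation
    W -= lr * (x_tilde.T @ dy + dz.T @ y)         # tied weights: both contributions
    b -= lr * dy.sum(axis=0)
    c -= lr * dz.sum(axis=0)
    return loss

d, h = 8, 5
W = rng.normal(0.0, 0.1, (d, h))
b, c = np.zeros(h), np.zeros(d)
x = (rng.random((64, d)) > 0.5).astype(float)     # binary inputs in [0, 1]^d
losses = [dae_step(x, W, b, c) for _ in range(200)]
print(round(losses[0], 2), round(losses[-1], 2))  # loss should drop as training proceeds
```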
Semi-Supervised Discriminant RBM
RBMs and auto-encoders easily extend to semi-supervised and multi-task settings!
Larochelle & Bengio, ICML'2008, Hybrid Discriminant RBM: comparisons against the current state-of-the-art in semi-supervised learning, local non-parametric semi-supervised algorithms based on a neighborhood graph, using only 1000 labeled examples.
[Figure: semi-supervised classification error (0-90%) on MNIST-BI and 20-newsgroups, comparing non-parametric Gaussian and non-parametric truncated-Gaussian graph-based methods against HDRBM and semi-supervised HDRBM]
Understanding The Challenge
Hypothesis: under the constraint of a compact deep architecture, the main challenge is the difficulty of optimization.
Clues :
Ordinary training of deep architectures (random initialization): much more sensitive to the initialization seed ⇒ local minima
Comparative experiments (Bengio et al, NIPS’2006) show that themain difficulty is getting the lower layers to do something useful.
Current learning algorithms for deep nets appear to be guiding theoptimization to a “good basin of attraction”
Understanding Why it Works
Hypothesis: current solutions are similar to continuation methods
[Figure: continuation method: a heavily smoothed cost function yields an easy-to-find initial solution; track the minima through slightly smoothed versions down to the target cost function to reach the final solution]
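The picture can be reproduced with a toy 1-D cost (entirely my own illustration): convolving f(x) = x² + 2·sin(5x) with Gaussian noise of standard deviation σ has the closed form x² + σ² + 2·sin(5x)·exp(−12.5σ²), so heavy smoothing leaves only the convex bowl, and annealing σ tracks the minimum back to the hard cost.

```python
import math

def grad(x, sigma):
    """Gradient of the smoothed cost
    x**2 + sigma**2 + 2*sin(5*x)*exp(-12.5*sigma**2)."""
    return 2.0 * x + 10.0 * math.cos(5.0 * x) * math.exp(-12.5 * sigma ** 2)

def descend(x, sigma, lr=0.01, steps=2000):
    """Plain gradient descent on the smoothed cost."""
    for _ in range(steps):
        x -= lr * grad(x, sigma)
    return x

x = 2.0                                # a poor starting point
for sigma in (1.0, 0.5, 0.25, 0.0):    # anneal the smoothing, tracking the minimum
    x = descend(x, sigma)

x_direct = descend(2.0, 0.0)           # same start, no continuation
print(round(x, 2), round(x_direct, 2))  # continuation ends much lower on the true cost
```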
Several Strategies are Continuations
Older: stochastic gradient descent starting from small parameters
Breakthrough: greedy layer-wise construction
New: gradually bring in more difficult examples
Curriculum Strategy
Start with simpler, easier examples, and gradually introduce more complicated ones as the learner becomes ready for them.
Design the sequence of tasks / datasets to guide learning/optimization.
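A minimal curriculum schedule (the difficulty measure and stage count are my own illustration, not from the talk): sort examples from easy to hard and train on progressively larger, harder subsets.

```python
# Minimal curriculum schedule (difficulty measure and stage count are
# illustrative): sort examples from easy to hard and expose the learner
# to progressively larger, harder subsets.
def curriculum(examples, difficulty, stages=3):
    """Yield one easy-to-hard training subset per stage."""
    ordered = sorted(examples, key=difficulty)
    for s in range(1, stages + 1):
        yield ordered[: len(ordered) * s // stages]

# Toy data: longer strings count as harder examples.
data = ["ab", "abcdef", "a", "abcd", "abc"]
for stage, subset in enumerate(curriculum(data, difficulty=len), 1):
    print(stage, subset)
```

Each stage's subset would drive the usual training loop before the next, harder subset is introduced.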
Strategy: Society = Parallel Optimization
Each agent = potential solution
Better solutions spread through learned language
Similar to genetic evolution: parallel search + recombination
R. Dawkins’ Memes
Simulations support this hypothesis
Baby AI Project
Combine many strategies to obtain a baby AI that masters the semantics of a simple visual + linguistic universe
There is a small triangle. What color is it? Green
Current work: generating synthetic videos; exploiting hints in synthetically generated data (knowing the semantic ground truth)