Recurrent Neural Networks
Fall 2020
2020-10-16
CMPT 413 / 825: Natural Language Processing
How to model sequences using neural networks?
(Some slides adapted from Chris Manning, Abigail See, Andrej Karpathy)
SFUNatLangLab
Adapted from slides from Anoop Sarkar, Danqi Chen, Karthik Narasimhan, and Justin Johnson
1
Overview
• What is a recurrent neural network (RNN)?
• Simple RNNs
• Backpropagation through time
• Long short-term memory networks (LSTMs)
• Applications
• Variants: Stacked RNNs, Bidirectional RNNs
2
Recurrent neural networks (RNNs)
A class of neural networks that can handle variable-length inputs.

A function: y = RNN(x1, x2, …, xn) ∈ ℝ^d
where x1, …, xn ∈ ℝ^{d_in}
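A minimal sketch of this idea in PyTorch (class name and sizes are illustrative, not from the slides), using the simple Elman update ht = g(W ht−1 + U xt + b) with g = tanh that appears later in the deck. Because the same function is applied step by step, any sequence length n is handled.

```python
import torch
import torch.nn as nn

class SimpleRNN(nn.Module):
    """Minimal Elman RNN: y = RNN(x1, ..., xn) in R^d."""
    def __init__(self, d_in, d):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)    # recurrent weights
        self.U = nn.Linear(d_in, d, bias=True)  # input weights (+ bias b)
        self.d = d

    def forward(self, xs):                      # xs: (n, d_in), variable n
        h = torch.zeros(self.d)                 # h0
        for x in xs:                            # one step per input token
            h = torch.tanh(self.W(h) + self.U(x))
        return h                                # y = hn in R^d

rnn = SimpleRNN(d_in=50, d=100)
y = rnn(torch.randn(7, 50))                     # a length-7 input sequence
print(y.shape)                                  # torch.Size([100])
```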
3
Recurrent neural networks (RNNs)
Proven to be a highly effective approach to language modeling, sequence tagging, and text classification tasks:
[Figure: three example tasks: language modeling, sequence tagging, and text classification (e.g. "The movie sucks ." → 👎)]
4
Recurrent neural networks (RNNs)
They form the basis for modern approaches to machine translation, question answering, and dialogue:
5
Why variable-length? Recall the feedforward neural LMs we learned:
(Yang et al., 2018): Breaking the Softmax Bottleneck: A High-Rank RNN Language Model
[Figure: perplexity dropping on the Penn Treebank (PTB) dataset]
15
Training the RNN
16
RNN Computation Graph

[Figure: the RNN unrolled over time. The same weights W are re-used at every step: h1 = fW(h0, x1), h2 = fW(h1, x2), …, hT = fW(hT−1, xT). Each hidden state ht produces an output yt, each output yt has a loss Lt, and the total loss is L = L1 + L2 + … + LT.]
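A hedged sketch of this unrolled graph in PyTorch (the layer names, sizes, and the use of cross-entropy are assumptions for illustration): the same W and U are reused at every step, each step emits yt and a loss Lt, and the total loss L is their sum.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_in, d, V, n = 50, 100, 1000, 8               # illustrative sizes
W = nn.Linear(d, d)                            # the SAME weights reused at every step
U = nn.Linear(d_in, d)
Wo = nn.Linear(d, V)                           # maps h_t to output scores y_t

xs = torch.randn(n, d_in)                      # x1..xn
targets = torch.randint(0, V, (n,))            # made-up gold next-token ids

h = torch.zeros(d)                             # h0
losses = []
for t in range(n):                             # unroll: h_t = f_W(h_{t-1}, x_t)
    h = torch.tanh(W(h) + U(xs[t]))
    y = Wo(h)                                  # y_t (unnormalized scores)
    losses.append(F.cross_entropy(y.unsqueeze(0), targets[t].unsqueeze(0)))  # L_t

L = torch.stack(losses).sum()                  # total loss L = L1 + ... + LT
L.backward()                                   # gradients flow back through every step
```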
Training RNNLMs
• Backpropagation? Yes, but not that simple!
• The algorithm is called Backpropagation Through Time (BPTT).
23
Backpropagation through time
h1 = g(Wh0 + Ux1 + b)
h2 = g(Wh1 + Ux2 + b)
h3 = g(Wh2 + Ux3 + b)
L3 = − log y3(w4)
You should know how to compute ∂L3/∂h3. Then, by the chain rule:

∂L3/∂W = (∂L3/∂h3)(∂h3/∂W) + (∂L3/∂h3)(∂h3/∂h2)(∂h2/∂W) + (∂L3/∂h3)(∂h3/∂h2)(∂h2/∂h1)(∂h1/∂W)

In general, over a sequence of length n:

∂L/∂W = (1/n) ∑_{t=1}^{n} ∑_{k=1}^{t} (∂Lt/∂ht) ( ∏_{j=k+1}^{t} ∂hj/∂hj−1 ) (∂hk/∂W)
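A small sketch (tiny made-up sizes; the final log-softmax is only a stand-in for the LM output layer) showing that autograd's backward pass over the three-step unroll above accumulates exactly this sum of chain-rule paths into W.grad.

```python
import torch

torch.manual_seed(0)
d_in, d = 4, 3                                 # tiny sizes, for illustration only
W = torch.randn(d, d, requires_grad=True)
U = torch.randn(d, d_in, requires_grad=True)
b = torch.zeros(d, requires_grad=True)
xs = [torch.randn(d_in) for _ in range(3)]     # x1, x2, x3
h0 = torch.zeros(d)

# h_t = g(W h_{t-1} + U x_t + b) with g = tanh, as on the slide
h1 = torch.tanh(W @ h0 + U @ xs[0] + b)
h2 = torch.tanh(W @ h1 + U @ xs[1] + b)
h3 = torch.tanh(W @ h2 + U @ xs[2] + b)

L3 = -torch.log_softmax(h3, dim=0)[1]          # stand-in for L3 = -log y3(w4)
L3.backward()                                  # BPTT: sums the three chain-rule paths
print(W.grad)                                  # dL3/dh3 (dh3/dW + dh3/dh2 dh2/dW + ...)
```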
24
Loss
Forward through entire sequence to compute loss, then backward through entire sequence to compute gradient
25
Truncated backpropagation through time
• Backpropagation is very expensive if you handle long sequences
• Run forward and backward through chunks of the sequence instead of whole sequence
• Carry hidden states forward in time forever, but only backpropagate for some smaller number of steps
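A minimal sketch of truncated BPTT, assuming a hypothetical per-step function rnn_step(h, x) -> (h, y) and a per-step loss_fn (all names are illustrative): the hidden-state value is carried across chunks, but .detach() cuts the backward pass at each chunk boundary.

```python
import torch

chunk_len = 20  # backpropagate through at most this many steps

def train_long_sequence(rnn_step, loss_fn, xs, ys, h, optimizer):
    for start in range(0, len(xs), chunk_len):
        h = h.detach()                         # keep the value, drop the history
        loss = 0.0
        for t in range(start, min(start + chunk_len, len(xs))):
            h, y = rnn_step(h, xs[t])          # forward through one chunk
            loss = loss + loss_fn(y, ys[t])
        optimizer.zero_grad()
        loss.backward()                        # backprop only through this chunk
        optimizer.step()
    return h                                   # hidden state continues into the next call
```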
26
Vanishing gradients
27
[Figure: one step of a vanilla RNN cell, ht = tanh(W · [ht−1; xt]); the gradient flowing from ht back to ht−1 passes through the tanh and a multiplication by W at every step.]

Bengio et al., "Learning long-term dependencies with gradient descent is difficult", IEEE Transactions on Neural Networks, 1994
Pascanu et al., "On the difficulty of training recurrent neural networks", ICML 2013
29
Vanishing gradients
30
Computing the gradient of h0 involves many factors of W (and repeated tanh).
Largest singular value > 1: Exploding gradients
Largest singular value < 1: Vanishing gradients
Gradient clipping: scale the gradient if its norm is too big (see the sketch below)
Change the RNN architecture
32
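A short gradient-clipping sketch in PyTorch (the model and loss here are placeholders): clip_grad_norm_ rescales all gradients when their global norm exceeds a threshold, before the optimizer step.

```python
import torch
import torch.nn as nn

model = nn.RNN(input_size=50, hidden_size=100)          # any recurrent model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

out, h = model(torch.randn(35, 1, 50))                  # (seq_len, batch, d_in)
loss = out.pow(2).mean()                                # placeholder loss
loss.backward()

# Scale the whole gradient vector if its norm exceeds max_norm
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
optimizer.step()
```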
Different RNN cells
33
Long Short-term Memory (LSTM)

• A type of RNN proposed by Hochreiter and Schmidhuber in 1997 as a solution to the vanishing gradients problem

ht = f(ht−1, xt) ∈ ℝ^d

• Works extremely well in practice
• Basic idea: turning multiplication into addition
• Use "gates" to control how much information to add/erase
• At each timestep, there is a hidden state ht ∈ ℝ^d and also a cell state ct ∈ ℝ^d
  • ct stores long-term information
  • We write/erase ct after each step
  • We read ht from ct
34
Long Short-term Memory (LSTM)
There are 4 gates:
• Input gate (how much to write): it = σ(W^(i) ht−1 + U^(i) xt + b^(i)) ∈ ℝ^d
• Forget gate (how much to erase): ft = σ(W^(f) ht−1 + U^(f) xt + b^(f)) ∈ ℝ^d
• Output gate (how much to reveal): ot = σ(W^(o) ht−1 + U^(o) xt + b^(o)) ∈ ℝ^d
• New memory cell (what to write): gt = tanh(W^(c) ht−1 + U^(c) xt + b^(c)) ∈ ℝ^d

How many parameters in total?

• Final memory cell: ct = ft ⊙ ct−1 + it ⊙ gt
• Final hidden state: ht = ot ⊙ tanh(ct), where ⊙ is the element-wise product

Backpropagation from ct to ct−1 involves only element-wise multiplication by ft, with no matrix multiply by W.

• The LSTM doesn't guarantee that there is no vanishing/exploding gradient, but it does provide an easier way for the model to learn long-distance dependencies
• LSTMs were invented in 1997 but only became widely successful around 2013-2015.
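A from-scratch sketch of one LSTM step following the gate equations above (layer names are illustrative; in practice one would use nn.LSTM). It also addresses the slide's parameter question: with hidden size d and input size d_in, the four gates contribute 4(d·d + d·d_in + d) parameters.

```python
import torch
import torch.nn as nn

class LSTMCellScratch(nn.Module):
    """One LSTM step, following the four gate equations above.
    Total parameters: 4 * (d*d + d*d_in + d)."""
    def __init__(self, d_in, d):
        super().__init__()
        self.Wi, self.Ui = nn.Linear(d, d, bias=False), nn.Linear(d_in, d)  # input gate
        self.Wf, self.Uf = nn.Linear(d, d, bias=False), nn.Linear(d_in, d)  # forget gate
        self.Wo, self.Uo = nn.Linear(d, d, bias=False), nn.Linear(d_in, d)  # output gate
        self.Wc, self.Uc = nn.Linear(d, d, bias=False), nn.Linear(d_in, d)  # new memory

    def forward(self, x, h_prev, c_prev):
        i = torch.sigmoid(self.Wi(h_prev) + self.Ui(x))   # how much to write
        f = torch.sigmoid(self.Wf(h_prev) + self.Uf(x))   # how much to erase
        o = torch.sigmoid(self.Wo(h_prev) + self.Uo(x))   # how much to reveal
        g = torch.tanh(self.Wc(h_prev) + self.Uc(x))      # what to write
        c = f * c_prev + i * g                             # element-wise (⊙)
        h = o * torch.tanh(c)
        return h, c

cell = LSTMCellScratch(d_in=50, d=100)
h, c = cell(torch.randn(50), torch.zeros(100), torch.zeros(100))
```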
37
Is the LSTM architecture optimal?
(Jozefowicz et al., 2015): An Empirical Exploration of Recurrent Network Architectures
38
• Use a mask matrix to aid computations that should ignore padded zeros, e.g. for a batch of 4 sequences with max length 6 (see the sketch after the matrix):
1 1 1 1 0 0
1 0 0 0 0 0
1 1 1 1 1 1
1 1 1 0 0 0
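A small sketch building such a mask from sequence lengths (the lengths 4, 1, 6, 3 are read off the rows above):

```python
import torch

lengths = torch.tensor([4, 1, 6, 3])            # true lengths of the 4 sequences
max_len = 6
mask = (torch.arange(max_len).unsqueeze(0) < lengths.unsqueeze(1)).float()
print(mask)
# tensor([[1., 1., 1., 1., 0., 0.],
#         [1., 0., 0., 0., 0., 0.],
#         [1., 1., 1., 1., 1., 1.],
#         [1., 1., 1., 0., 0., 0.]])

# Example use: zero out per-token losses at padded positions before averaging
# loss = (per_token_loss * mask).sum() / mask.sum()
```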
41
Batching
• Sorting (partially) by length can help to create more efficient mini-batches
• However, the input is less randomized

[Figure: mini-batches formed from unsorted vs. sorted sequences]
42
Overview
• What is a recurrent neural network (RNN)?
• Simple RNNs
• Backpropagation through time
• Long short-term memory networks (LSTMs)
• Applications
• Variants: Stacked RNNs, Bidirectional RNNs
43
Application: Text Generation
You can generate text by repeated sampling. The sampled output is the next step's input.
44
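A sampling sketch with an assumed embedding + LSTM + output layer (all sizes and the start-symbol id are made up): at each step the sampled token is embedded and fed back in as the next input.

```python
import torch
import torch.nn as nn

V, d = 1000, 128                                # illustrative vocab and hidden sizes
embed = nn.Embedding(V, d)
lstm = nn.LSTM(d, d)
out = nn.Linear(d, V)

token = torch.tensor([0])                       # start symbol (id 0, assumed)
state = None
generated = []
for _ in range(20):
    x = embed(token).view(1, 1, d)              # (seq_len=1, batch=1, d)
    y, state = lstm(x, state)                   # carry the hidden/cell state forward
    probs = torch.softmax(out(y[0, 0]), dim=-1)
    token = torch.multinomial(probs, num_samples=1)  # sample the next token
    generated.append(token.item())
print(generated)
```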
Fun with RNNs
Andrej Karpathy “The Unreasonable Effectiveness of Recurrent Neural Networks”
Examples: Obama speeches, LaTeX generation
45
Application: Sequence Tagging
Input: a sentence of n words: x1, …, xn
Output: y1, …, yn, where yi ∈ {1, …, C}

P(yi = k) = softmax_k(Wo hi), Wo ∈ ℝ^{C×d}

L = −(1/n) ∑_{i=1}^{n} log P(yi = k)
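A tagging sketch matching the formulas above (nn.LSTM and the sizes are illustrative): every hidden state hi is projected to C tag scores, and cross-entropy gives the averaged negative log-likelihood.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_in, d, C, n = 50, 100, 10, 7
lstm = nn.LSTM(d_in, d)
Wo = nn.Linear(d, C)                            # Wo in R^{C x d}

xs = torch.randn(n, 1, d_in)                    # one sentence of n word vectors
gold = torch.randint(0, C, (n,))                # gold tags y_1..y_n

hs, _ = lstm(xs)                                # h_1..h_n, shape (n, 1, d)
logits = Wo(hs.squeeze(1))                      # (n, C): scores for each token
loss = F.cross_entropy(logits, gold)            # = -(1/n) * sum_i log P(y_i)
```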
46
Application: Text Classification
Input: a sentence of n words
Output: y ∈ {1, 2, …, C}

[Figure: an RNN reads "the movie was terribly exciting !"; the final hidden state hn summarizes the sentence]

P(y = k) = softmax_k(Wo hn), Wo ∈ ℝ^{C×d}
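A classification sketch (again with illustrative sizes): unlike tagging, only the final hidden state hn is projected to the C class scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_in, d, C, n = 50, 100, 5, 6
lstm = nn.LSTM(d_in, d)
Wo = nn.Linear(d, C)

xs = torch.randn(n, 1, d_in)                    # e.g. "the movie was terribly exciting !"
hs, (h_n, c_n) = lstm(xs)                       # h_n: (1, 1, d), the last hidden state
logits = Wo(h_n.view(-1))                       # (C,)
loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([1]))  # gold class id (made up)
```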
47
Multi-layer RNNs
• RNNs are already "deep" in one dimension (unrolled over time steps)
• We can also make them “deep” in another dimension by applying multiple RNNs
• Multi-layer RNNs are also called stacked RNNs.
48
Multi-layer RNNs
The hidden states from RNN layer i are the inputs to RNN layer i + 1.
• In practice, using 2 to 4 layers is common (usually better than 1 layer)
• Transformer-based networks can be up to 24 layers, with lots of skip-connections.
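A stacked-RNN sketch using nn.LSTM's num_layers (3 layers here, within the 2 to 4 range above): internally, layer i's hidden states are the inputs to layer i + 1.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=50, hidden_size=100, num_layers=3)

xs = torch.randn(7, 1, 50)                      # (seq_len, batch, d_in)
hs, (h_n, c_n) = lstm(xs)                       # hs: top layer's states, (7, 1, 100)
print(hs.shape, h_n.shape)                      # h_n: final state of each of the 3 layers
```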
49
Bidirectional RNNs
• Bidirectionality is important in language representations:
For "terribly":
• left context: "the movie was"
• right context: "exciting !"
50
Bidirectional RNNs
A unidirectional RNN: ht = f(ht−1, xt) ∈ ℝ^d

Forward RNN: →ht = f1(→ht−1, xt), t = 1, 2, …, n
Backward RNN: ←ht = f2(←ht+1, xt), t = n, n − 1, …, 1
Concatenation: ht = [→ht; ←ht] ∈ ℝ^{2d}
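A bidirectional sketch: with bidirectional=True, PyTorch concatenates the forward and backward states, so each per-token representation lives in ℝ^{2d}.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=50, hidden_size=100, bidirectional=True)

xs = torch.randn(6, 1, 50)                      # e.g. "the movie was terribly exciting !"
hs, _ = lstm(xs)
print(hs.shape)                                 # (6, 1, 200): [forward ; backward] per token
```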
51
Bidirectional RNNs
• Sequence tagging: Yes!
• Text classification: Yes! With slight modifications.
• Text generation: No. Why?