Compositional Distributional Models of Meaning
Dimitri Kartsaklis Mehrnoosh Sadrzadeh
School of Electronic Engineering and Computer Science
COLING 2016
11th December 2016, Osaka, Japan
D. Kartsaklis, M. Sadrzadeh Compositional Distributional Models of Meaning 1/63
In a nutshell
Compositional distributional models of meaning (CDMs) extend distributional semantics to the phrase/sentence level.

They provide a function that produces a vectorial representation of the meaning of a phrase or a sentence from the distributional vectors of its words.

Useful in a wide range of NLP tasks: sentence similarity, paraphrase detection, sentiment analysis, machine translation, etc.
In this tutorial:
We review three generic classes of CDMs: vector mixtures, tensor-based models, and neural models.
Outline
1 Introduction
2 Vector Mixture Models
3 Tensor-based Models
4 Neural Models
5 Afterword
Computers and meaning
How can we define Computational Linguistics?
Computational linguistics is the scientific and engineering discipline concerned with understanding written and spoken language from a computational perspective.
—Stanford Encyclopedia of Philosophy1
1 http://plato.stanford.edu
Compositional semantics
The principle of compositionality
The meaning of a complex expression is determined by the meanings of its parts and the rules used for combining them.

Montague Grammar: A systematic way of processing fragments of the English language in order to get semantic representations capturing their meaning.

There is in my opinion no important theoretical difference between natural languages and the artificial languages of logicians.
—Richard Montague, Universal Grammar (1970)
Syntax-to-semantics correspondence (1/2)
A lexicon:
(1) a. every ⊢ Dt : λP.λQ.∀x[P(x) → Q(x)]
    b. man ⊢ N : λy.man(y)
    c. walks ⊢ VI : λz.walk(z)

A parse tree, so that syntax guides the semantic composition.
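The lexicon above can be mimicked with Python lambdas; the domain of individuals and the word denotations below are invented purely for illustration:

```python
# A toy Montague-style lexicon rendered as Python lambdas (a sketch; the
# model DOMAIN and the denotations of "man" and "walks" are invented).
DOMAIN = {"john", "bob", "mary"}

every = lambda P: lambda Q: all(Q(x) for x in DOMAIN if P(x))  # λP.λQ.∀x[P(x) → Q(x)]
man   = lambda y: y in {"john", "bob"}                         # λy.man(y)
walks = lambda z: z in {"john", "bob", "mary"}                 # λz.walk(z)

# Function application follows the parse tree: (every man) walks
print(every(man)(walks))  # → True
```

Composition is plain function application, in the order dictated by the parse.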
Pregroup grammars
A pregroup grammar P(Σ, B) is a relation that assigns grammatical types from a pregroup algebra, freely generated over a set of atomic types B, to the words of a vocabulary Σ.

A pregroup algebra is a partially ordered monoid, where each element p has a left and a right adjoint such that:

p · p^r ≤ 1 ≤ p^r · p        p^l · p ≤ 1 ≤ p · p^l

Elements of the pregroup are basic (atomic) grammatical types, e.g. B = {n, s}.

Atomic grammatical types can be combined to form types of higher order (e.g. n · n^l or n^r · s · n^l).

A sentence w_1 w_2 … w_n (with word w_i of type t_i) is grammatical whenever:

t_1 · t_2 · … · t_n ≤ s
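The reduction t_1 · … · t_n ≤ s can be checked mechanically. The sketch below uses greedy cancellation of adjacent adjoint pairs, which handles simple derivations like the examples in this tutorial but is not a complete pregroup parser:

```python
# A minimal grammaticality checker for pregroup types (a sketch; greedy
# adjacent cancellation covers simple derivations only).
def reduces_to_s(word_types):
    # Each type is a list of (base, z) pairs, where z counts adjoints:
    # 0 = plain p, +1 = right adjoint p^r, -1 = left adjoint p^l.
    seq = [t for word in word_types for t in word]
    changed = True
    while changed:
        changed = False
        for i in range(len(seq) - 1):
            (b1, z1), (b2, z2) = seq[i], seq[i + 1]
            # p · p^r ≤ 1 and p^l · p ≤ 1: cancel an adjacent adjoint pair
            if b1 == b2 and z2 == z1 + 1:
                del seq[i:i + 2]
                changed = True
                break
    return seq == [("s", 0)]

# trembling · shadows · play · hide-and-seek
sentence = [
    [("n", 0), ("n", -1)],            # trembling : n · n^l
    [("n", 0)],                       # shadows   : n
    [("n", 1), ("s", 0), ("n", -1)],  # play      : n^r · s · n^l
    [("n", 0)],                       # hide-and-seek : n
]
print(reduces_to_s(sentence))  # → True
```

Dropping the object ("trembling shadows play") leaves an uncancelled n^l, so the check fails, as it should.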
Pregroup derivation: example
p · p^r ≤ 1 ≤ p^r · p        p^l · p ≤ 1 ≤ p · p^l

Parse tree: [S [NP [Adj trembling] [N shadows]] [VP [V play] [N hide-and-seek]]]

trembling    shadows    play            hide-and-seek
n · n^l      n          n^r · s · n^l   n

n · n^l · n · n^r · s · n^l · n ≤ n · 1 · n^r · s · 1 = n · n^r · s ≤ 1 · s = s
Compact closed categories
A monoidal category (C, ⊗, I) is compact closed when every object has a left and a right adjoint, for which the following morphisms exist:

A ⊗ A^r --ε^r--> I --η^r--> A^r ⊗ A        A^l ⊗ A --ε^l--> I --η^l--> A ⊗ A^l

Pregroup grammars are CCCs, with the ε and η maps corresponding to the partial orders.

FdVect, the category of finite-dimensional vector spaces and linear maps, is also a (symmetric) CCC:

ε maps correspond to the inner product; η maps to identity maps and multiples of those.
A functor from syntax to semantics
We define a strongly monoidal functor F such that:
F : P(Σ, B) → FdVect

F(p) = P  ∀p ∈ B        F(1) = R

F(p · q) = F(p) ⊗ F(q)

F(p^r) = F(p^l) = F(p)

F(p ≤ q) = F(p) → F(q)

F(ε^r) = F(ε^l) = inner product in FdVect

F(η^r) = F(η^l) = identity maps in FdVect
A multi-linear model
The grammatical type of a word defines the vector space in which the word lives:

Nouns are vectors in N;

adjectives are linear maps N → N, i.e. elements of N ⊗ N;

intransitive verbs are linear maps N → S, i.e. elements of N ⊗ S;

transitive verbs are bi-linear maps N ⊗ N → S, i.e. elements of N ⊗ S ⊗ N.

The composition operation is tensor contraction, i.e. elimination of matching dimensions by application of the inner product.
Categorical composition: example
Parse tree: [S [NP [Adj trembling] [N shadows]] [VP [V play] [N hide-and-seek]]]

trembling    shadows    play            hide-and-seek
n · n^l      n          n^r · s · n^l   n

Type reduction morphism:

(ε^r_n · 1_s) ∘ (1_n · ε^l_n · 1_{n^r·s} · ε^l_n) : n · n^l · n · n^r · s · n^l · n → s

F[(ε^r_n · 1_s) ∘ (1_n · ε^l_n · 1_{n^r·s} · ε^l_n)](trembling ⊗ shadows⃗ ⊗ play ⊗ hide-and-seek⃗)
= (ε_N ⊗ 1_S) ∘ (1_N ⊗ ε_N ⊗ 1_{N⊗S} ⊗ ε_N)(trembling ⊗ shadows⃗ ⊗ play ⊗ hide-and-seek⃗)
= trembling × shadows⃗ × play × hide-and-seek⃗

with shadows⃗, hide-and-seek⃗ ∈ N,  trembling ∈ N ⊗ N,  play ∈ N ⊗ S ⊗ N.
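The final contraction can be sketched with einsum over toy dimensions (N = 4 and S = 3 are arbitrary choices; the vectors and tensors are random stand-ins for trained representations):

```python
import numpy as np

# Toy dimensions and random stand-ins for the word representations.
rng = np.random.default_rng(0)
N, S = 4, 3
shadows = rng.random(N)          # noun vector in N
hide_and_seek = rng.random(N)    # noun vector in N
trembling = rng.random((N, N))   # adjective in N ⊗ N
play = rng.random((N, S, N))     # transitive verb in N ⊗ S ⊗ N

# trembling × shadows × play × hide-and-seek: contract matching dimensions.
sentence = np.einsum('ij,j,isk,k->s', trembling, shadows, play, hide_and_seek)
print(sentence.shape)  # → (3,)
```

The same result is obtained by first applying the adjective and then contracting the verb with its two arguments, which is exactly the order the type reduction prescribes.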
Creating relational tensors: Extensional approach
A relational word is defined as the set of its arguments:
⟦red⟧ = {car, door, dress, ink, …}

Grefenstette and Sadrzadeh (2011):

adj = Σ_i noun⃗_i        verb_int = Σ_i subj⃗_i        verb_tr = Σ_i subj⃗_i ⊗ obj⃗_i

Kartsaklis and Sadrzadeh (2016):

adj = Σ_i noun⃗_i ⊗ noun⃗_i        verb_int = Σ_i subj⃗_i ⊗ subj⃗_i

verb_tr = Σ_i subj⃗_i ⊗ ((subj⃗_i + obj⃗_i) / 2) ⊗ obj⃗_i
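A sketch of the Kartsaklis and Sadrzadeh (2016) adjective construction, with random vectors standing in for the distributional vectors of the nouns the adjective modifies in a corpus:

```python
import numpy as np

# Random stand-ins for the distributional vectors of the adjective's arguments.
rng = np.random.default_rng(1)
noun_vectors = [rng.random(4) for _ in range(5)]

# adj = Σ_i noun_i ⊗ noun_i: a sum of outer products, giving a matrix in N ⊗ N.
adj = sum(np.outer(v, v) for v in noun_vectors)
print(adj.shape)  # → (4, 4)
```

Because every summand is an outer product of a vector with itself, the resulting matrix is symmetric.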
Creating relational tensors: Statistical approach
Baroni and Zamparelli (2010):
Create holistic distributional vectors for whole compounds (as if they were words) and use them to train a linear regression model:

red × car⃗ = (red car)⃗        red × door⃗ = (red door)⃗        red × dress⃗ = (red dress)⃗        red × ink⃗ = (red ink)⃗

adĵ = argmin_adj [ (1/2m) Σ_i ( adj × noun⃗_i − (adj noun_i)⃗ )² ]

where (adj noun_i)⃗ is the holistic vector of the compound.
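A sketch of the regression step with NumPy least squares; here the holistic compound vectors are synthesized from a hypothetical "true" matrix so the recovery can be checked, whereas in the actual model they are corpus-derived:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
A_true = rng.random((d, d))   # hypothetical target adjective matrix (for checking)
nouns = rng.random((10, d))   # rows: noun vectors
compounds = nouns @ A_true.T  # rows: synthetic holistic "adj noun" vectors

# Solve adj · noun_i ≈ compound_i in the least-squares sense.
X, *_ = np.linalg.lstsq(nouns, compounds, rcond=None)
adj = X.T                     # adj @ noun approximates the compound vector
print(np.allclose(adj, A_true))  # → True
```

With more noun/compound pairs than dimensions, the system is overdetermined and the regression averages out noise, which is the point of the statistical approach.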
Functional words
Certain classes of words, such as determiners, relative pronouns, prepositions, or coordinators, occur in almost every possible context.

Thus, they are considered semantically vacuous from a distributional perspective, and most often they are simply ignored.

In the tensor-based setting, these special words can be modelled by exploiting additional mathematical structures, such as Frobenius algebras and bialgebras.
Frobenius algebras in FdVect
Given a symmetric CCC (C, ⊗, I), an object X ∈ C has a Frobenius structure on it if there exist morphisms:

∆ : X → X ⊗ X,  ι : X → I  and  µ : X ⊗ X → X,  ζ : I → X

conforming to the Frobenius condition:

(µ ⊗ 1_X) ∘ (1_X ⊗ ∆) = ∆ ∘ µ = (1_X ⊗ µ) ∘ (∆ ⊗ 1_X)

In FdVect, any vector space V with a fixed basis {v⃗_i}_i has a commutative special Frobenius algebra over it [Coecke and Pavlovic, 2006]:

∆ : v⃗_i ↦ v⃗_i ⊗ v⃗_i        µ : v⃗_i ⊗ v⃗_i ↦ v⃗_i

It can be seen as copying and merging of the basis.
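In coordinates, ∆ and µ are easy to sketch: ∆ embeds a vector into the diagonal of a matrix (copying the basis), and µ multiplies element-wise (merging it); the example values are arbitrary:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, 1.0, 2.0])

delta_v = np.diag(v)  # ∆ : v_i ↦ v_i ⊗ v_i (non-zero entries only on the diagonal)
mu_vw = v * w         # µ : v_i ⊗ v_i ↦ v_i (element-wise merge of two vectors)
print(mu_vw.tolist())  # → [0.5, 2.0, 6.0]
```

The element-wise product here is the ⊙ merge used for relative pronouns and coordination below.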
Frobenius algebras: Relative pronouns
How to represent relative pronouns in a tensor-based setting?
A relative clause modifies the head noun of a phrase:
the man who likes Mary

the man : N        who : N^r N S^l N        likes : N^r S N^l        Mary : N

The result is a merging of the vectors of the noun and the relative clause:

man⃗ ⊙ (likes × Mary⃗)
[Sadrzadeh, Clark, Coecke (2013)]
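A sketch of the merge, with the verb simplified to a matrix acting on the object vector (the full model uses a tensor in N ⊗ S ⊗ N, with S handled via the Frobenius maps); all vectors are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
man = rng.random(N)
mary = rng.random(N)
likes = rng.random((N, N))  # simplified verb matrix (assumption for the sketch)

clause = likes @ mary       # likes × Mary: the relative clause as a vector in N
result = man * clause       # man ⊙ (likes × Mary): element-wise merge
print(result.shape)  # → (4,)
```

The output lives in N, as a relative clause should: it modifies the head noun rather than producing a sentence.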
Frobenius algebras: Coordination
Copying and merging are the key processes in coordination:
John sleeps and snores

John : N        sleeps, snores : N^r S        and : (N^r S)^r (N^r S) (N^r S)^l

The subject is copied by a ∆-map and interacts individually with the two verbs.

The results are merged together with a µ-map:

John⃗^T × (sleep ⊙ snore)
[Kartsaklis (2016)]
Tensor-based models: Intuition
Tensor-based composition goes beyond a simple compatibility check between the two argument vectors; it transforms the input into an output of a possibly different type.

A verb, for example, is a function that takes as input a noun and transforms it into a sentence:

f_int : N → S        f_tr : N × N → S

Size and form of the sentence space become tunable parameters of the models, and can depend on the task.

Taking S = {(0, 1)^T, (1, 0)^T}, for example, provides an equivalent to formal semantics.
Tensor-based models: Pros and Cons
Distinguishing feature:
Relational words are multi-linear maps acting on arguments
PROS:
Aligned with the formal semantics perspective
More powerful than vector mixtures
Flexible regarding the representation of functional words, such as relative pronouns and prepositions.

CONS:

Every logical and functional word must be assigned an appropriate tensor representation, and it is not always clear how.

Space complexity problems for functions of higher arity (e.g. a ditransitive verb is a tensor of order 4).
Outline
1 Introduction
2 Vector Mixture Models
3 Tensor-based Models
4 Neural Models
5 Afterword
An artificial neuron
The x_i's form the input vector.

The w_ji's are the weights associated with the i-th output of the layer.

f is a non-linear function such as tanh or sigmoid.

a_i is the i-th output of the layer, computed as:

a_i = f(w_1i x_1 + w_2i x_2 + w_3i x_3)
A simple neural net
A feed-forward neural network with one hidden layer:
h_1 = f(w_11 x_1 + w_21 x_2 + w_31 x_3 + w_41 x_4 + w_51 x_5 + b_1)
h_2 = f(w_12 x_1 + w_22 x_2 + w_32 x_3 + w_42 x_4 + w_52 x_5 + b_2)
h_3 = f(w_13 x_1 + w_23 x_2 + w_33 x_3 + w_43 x_4 + w_53 x_5 + b_3)

or h⃗ = f(W^(1) x⃗ + b⃗^(1))

Similarly:

y⃗ = f(W^(2) h⃗ + b⃗^(2))

Note that W^(1) ∈ R^{3×5} and W^(2) ∈ R^{2×3}.

f is a non-linear function such as tanh or sigmoid (take f = Id and you have a tensor-based model).

A universal approximator.
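The two layers above can be sketched with random weights (dimensions 5 → 3 → 2, matching the slide; the weights are untrained stand-ins):

```python
import numpy as np

rng = np.random.default_rng(4)
W1, b1 = rng.random((3, 5)), rng.random(3)  # hidden layer: R^5 -> R^3
W2, b2 = rng.random((2, 3)), rng.random(2)  # output layer: R^3 -> R^2

x = rng.random(5)
h = np.tanh(W1 @ x + b1)  # h = f(W1 x + b1)
y = np.tanh(W2 @ h + b2)  # y = f(W2 h + b2)
print(y.shape)  # → (2,)
```

Replacing np.tanh with the identity reduces this to a composition of linear maps, i.e. a tensor-based model.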
Objective functions
The goal of NN training is to find the set of parameters that optimizes a given objective function;

or, to put it differently, minimizes an error function.

Assume, for example, that the goal of the NN is to produce a vector y⃗ that matches a specific target vector t⃗. The function

E = (1/2m) Σ_i ‖t⃗_i − y⃗_i‖²

gives the total error across all training instances.

We want to set the weights of the NN such that E becomes zero, or as close to zero as possible.
Gradient descent
Take steps proportional to the negative of the gradient of E at the current point:

Θ_t = Θ_{t−1} − α∇E(Θ_{t−1})

Θ_t: the parameters of the model at time step t

α: a learning rate
(Graph taken from “The Beginner Programmer” blog, http://firsttimeprogrammer.blogspot.co.uk)
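The update rule can be sketched on a one-parameter toy error function E(θ) = (θ − 3)², whose minimum is at θ = 3; the function and constants are invented for illustration:

```python
# Gradient descent on E(θ) = (θ − 3)^2, a toy error function with minimum at θ = 3.
grad_E = lambda theta: 2 * (theta - 3)  # ∇E(θ)

theta, alpha = 0.0, 0.1
for _ in range(100):
    theta = theta - alpha * grad_E(theta)  # Θ_t = Θ_{t−1} − α ∇E(Θ_{t−1})
print(round(theta, 6))  # → 3.0
```

Each step shrinks the distance to the minimum by a constant factor here; for a too-large α the iterates would overshoot and diverge instead.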
Backpropagation of errors
How do we compute the error terms at the inner layers?
These are computed from the errors of the next layer using backpropagation. In general:
δ_k = Θ_k^T δ_{k+1} ⊙ f′(z_k)

δ_k is the error vector at layer k

Θ_k is the weight matrix of layer k

z_k is the weighted sum at the output of layer k

f′ is the derivative of the non-linear function f
Just an application of the chain rule for derivatives.
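The formula in coordinates, with toy random values and f = tanh:

```python
import numpy as np

rng = np.random.default_rng(5)
f_prime = lambda z: 1 - np.tanh(z) ** 2  # derivative of tanh

Theta_k = rng.random((2, 3))  # weights of layer k (maps R^3 -> R^2)
delta_next = rng.random(2)    # error vector δ_{k+1} at layer k+1
z_k = rng.random(3)           # weighted sums at the output of layer k

# δ_k = Θ_k^T δ_{k+1} ⊙ f'(z_k)
delta_k = (Theta_k.T @ delta_next) * f_prime(z_k)
print(delta_k.shape)  # → (3,)
```

The transpose routes each next-layer error back along the same weights that produced it, which is exactly the chain rule applied layer by layer.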
Recurrent and recursive NNs
Standard NNs assume that inputs are independent of each other.

That is not the case in language; a word, for example, always depends on the previous words in the same sentence.

In a recurrent NN, connections form a directed cycle so that each output depends on the previous ones.

A recursive NN is applied recursively, following a specific structure.
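A minimal sketch of a recurrent step, with toy dimensions and random untrained weights: the hidden state carries information from all previous inputs:

```python
import numpy as np

rng = np.random.default_rng(6)
Wx, Wh = rng.random((3, 4)), rng.random((3, 3))  # input and recurrent weights

h = np.zeros(3)                 # initial hidden state
for x_t in rng.random((5, 4)):  # a "sentence" of 5 random word vectors
    h = np.tanh(Wx @ x_t + Wh @ h)  # h_t depends on x_t and on h_{t−1}
print(h.shape)  # → (3,)
```

Because h is fed back at every step, the final state depends on the whole sequence, not just the last word.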
Recursive neural networks for composition
Pollack (1990); Socher et al. (2011;2012):
D. Kartsaklis, M. Sadrzadeh Compositional Distributional Models of Meaning 45/63
Unsupervised learning with NNs
How can we train a NN in an unsupervised manner?
Train the network to reproduce its input via an expansion layer.

Use the output of the hidden layer as a compressed version of the input [Socher et al. (2011)].
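A sketch of the idea with untrained random weights; training would adjust them to minimize the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(7)
W_enc, W_dec = rng.random((2, 4)), rng.random((4, 2))  # narrow bottleneck: 4 -> 2 -> 4

x = rng.random(4)
code = np.tanh(W_enc @ x)         # compressed representation (hidden layer)
x_hat = W_dec @ code              # expansion layer tries to reproduce x
error = np.sum((x - x_hat) ** 2)  # reconstruction error the training minimizes
print(code.shape, x_hat.shape)  # → (2,) (4,)
```

After training, the 2-dimensional code is kept as the compressed representation of the 4-dimensional input.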
Long Short-Term Memory networks (1/2)
RNNs are effective, but fail to capture long-range dependencies such as:

The movie I liked and John said Mary and Ann really hated

“Vanishing gradient” problem: back-propagating the error requires the multiplication of many very small numbers together, and training for the bottom layers starts to stall.

Long Short-Term Memory networks (LSTMs) (Hochreiter and Schmidhuber, 1997) provide a solution by equipping each neuron with an internal state.
Long Short-Term Memory networks (2/2)
(Diagrams comparing the repeating module in an RNN and in an LSTM, taken from Christopher Olah’s blog, http://colah.github.io/)
Linguistically aware NNs
NN-based methods come mainly from image processing. How can we make them more linguistically aware?

Cheng and Kartsaklis (2015):

Take into account syntax, by optimizing against a scrambled version of each sentence

Dynamically disambiguate the meaning of words during training, based on their context
(Architecture diagram: main (ambiguous) word vectors pass through a gate to sense vectors; compositional layers produce phrase and sentence vectors, each scored by a plausibility layer.)
Convolutional NNs
Originated in pattern recognition [Fukushima, 1980]
Small filters apply on every position of the input vector:
Capable of extracting fine-grained local features, independently of their exact position in the input
Features become increasingly global as more layers are stacked
Each convolutional layer is usually followed by a pooling layer
Top layer is fully connected, usually a soft-max classifier
Application to language: Collobert and Weston (2008)
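A sketch of a single filter sliding over a 1-D input (the input and filter values are toy numbers):

```python
import numpy as np

sentence = np.arange(7.0)         # toy 1-D input of length 7
filt = np.array([0.5, 1.0, 0.5])  # filter of width 3

# One value per valid position: the filter slides over the whole input.
features = np.convolve(sentence, filt, mode='valid')
print(features.shape)  # → (5,)
```

Each output value summarizes a local window, so the same local pattern produces the same feature wherever it occurs in the input.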
DCNNs for modelling sentences
Kalchbrenner, Grefenstette and Blunsom (2014): a deep architecture using dynamic k-max pooling.

Syntactic structure is induced automatically.
(Figures reused with permission)
Beyond sentence level
An additional convolutional layer can provide document vectors [Denil et al. (2014)].
(Figure reused with permission)
Neural models: Intuition (1/2)
Recall that tensor-based composition involves a linear transformation of the input into some output.

Neural models make this process more effective by applying consecutive non-linear layers of transformation.

A NN does not only project a noun vector onto a sentence space; it can also transform the geometry of the space itself, in order to better reflect the relationships between the points (sentences) in it.
Neural models: Intuition (2/2)
Example: Although there is no linear map sending an input x ∈ {0, 1}² to the correct XOR value, the function can be approximated by a simple NN with one hidden layer.

Points in (b) can be seen as representing two semantically distinct groups of sentences, which the NN is able to distinguish (while a linear map cannot).
Neural models: Pros and Cons
Distinguishing feature:
Drastic transformation of the sentence space.
PROS:
Non-linearity and the layered approach allow the simulation of a very wide range of functions
Word vectors are parameters of the model, optimized duringtraining
State-of-the-art results in a number of NLP tasks
CONS:
Requires expensive training based on backpropagation
Difficult to discover the right configuration
A “black-box” approach: not easy to correlate inner workings with output
Outline
1 Introduction
2 Vector Mixture Models
3 Tensor-based Models
4 Neural Models
5 Afterword
Refresher: A hierarchy of CDMs
Open issues and future work
No convincing solution for logical connectives, negation, quantifiers and so on.

Functional words, such as prepositions and relative pronouns, are also a problem.

Sentence space is usually identified with word space. This is convenient, but is it the right thing to do?

Solutions depend on the specific CDM class (e.g. not much to do in a vector mixture setting).

Important: How can we make NNs more linguistically aware? [Cheng and Kartsaklis (2015)]
Summary
CDMs provide quantitative semantic representations for sentences (or even documents).

Element-wise operations on word vectors constitute an easy and reasonably effective way to get sentence vectors.

Categorical compositional distributional models allow reasoning on a theoretical level: a glass-box approach.

Neural models are extremely powerful and effective; still a black-box approach, and not easy to explain why a specific configuration works while another does not.

Convolutional networks seem to constitute the most promising solution to the problem of capturing the meaning of sentences.
Thank you for your attention!
References I
Baroni, M. and Zamparelli, R. (2010). Nouns are vectors, adjectives are matrices. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).

Cheng, J. and Kartsaklis, D. (2015). Syntax-aware multi-sense word embeddings for deep compositional models of meaning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1531–1542, Lisbon, Portugal. Association for Computational Linguistics.

Coecke, B., Sadrzadeh, M., and Clark, S. (2010). Mathematical foundations for a compositional distributional model of meaning. Lambek Festschrift. Linguistic Analysis, 36:345–384.

Collobert, R. and Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. ACM.

Denil, M., Demiraj, A., Kalchbrenner, N., Blunsom, P., and de Freitas, N. (2014). Modelling, visualising and summarising documents with a single convolutional neural network. Technical Report arXiv:1406.3830, University of Oxford.

Grefenstette, E. and Sadrzadeh, M. (2011). Experimental support for a categorical compositional distributional model of meaning. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1394–1404, Edinburgh, Scotland, UK. Association for Computational Linguistics.

Harris, Z. (1968). Mathematical Structures of Language. Wiley.
References II
Kalchbrenner, N., Grefenstette, E., and Blunsom, P. (2014). A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 655–665, Baltimore, Maryland. Association for Computational Linguistics.

Kartsaklis, D. (2015). Compositional Distributional Semantics with Compact Closed Categories and Frobenius Algebras. PhD thesis, University of Oxford.

Kartsaklis, D. and Sadrzadeh, M. (2016). A compositional distributional inclusion hypothesis. In Proceedings of the 2017 Conference on Logical Aspects of Computational Linguistics, Nancy, France. Springer.

Mitchell, J. and Lapata, M. (2010). Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1439.

Montague, R. (1970a). English as a formal language. In Linguaggi nella Societa e nella Tecnica, pages 189–224. Edizioni di Comunita, Milan.

Montague, R. (1970b). Universal grammar. Theoria, 36:373–398.

Sadrzadeh, M., Clark, S., and Coecke, B. (2013). The Frobenius anatomy of word meanings I: subject and object relative pronouns. Journal of Logic and Computation, Advance Access.
References III
Sadrzadeh, M., Clark, S., and Coecke, B. (2014). The Frobenius anatomy of word meanings II: possessive relative pronouns. Journal of Logic and Computation.

Socher, R., Huang, E., Pennington, J., Ng, A., and Manning, C. (2011). Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. Advances in Neural Information Processing Systems, 24.

Socher, R., Huval, B., Manning, C., and Ng, A. (2012). Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing.

Socher, R., Manning, C., and Ng, A. (2010). Learning continuous phrase representations and syntactic parsing with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop.