Adding morphological information to a connectionist Part-Of-Speech tagger
In this paper, we describe our recent advances in a novel approach to Part-Of-Speech tagging based on neural networks. Multilayer perceptrons are used, following corpus-based learning from contextual, lexical, and morphological information. The Penn Treebank corpus has been used for the training and evaluation of the tagging system. The results show that the connectionist approach is feasible and comparable with other approaches.
Transcript
Adding morphological information to a connectionist Part-Of-Speech tagger

F. Zamora-Martínez, M.J. Castro-Bleda, S. España-Boquera, S. Tortajada-Velert

Departamento de Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Spain
Escuela Superior de Enseñanzas Técnicas, Universidad CEU-Cardenal Herrera, Alfara del Patriarca, Valencia, Spain
10-12 November 2009, Sevilla
F. Zamora et al (UPV/CEU-UCH) CAEPIA 2009 10-12 November 2009, Sevilla 1 / 33
Index
1 POS tagging
2 Probabilistic tagging
3 Connectionist tagging
4 The Penn Treebank Corpus
5 The connectionist POS taggers
6 Conclusions
What is Part-Of-Speech (POS) tagging?
T = {τ_1, τ_2, …, τ_k}: a set of POS tags
Ω = {ω_1, ω_2, …, ω_m}: the vocabulary of the application
The goal of a Part-Of-Speech tagger is to associate each word in a text with its correct lexical-syntactic category (represented by a tag).
Example:
The/DT grand/JJ jury/NN commented/VBD on/IN a/DT number/NN of/IN other/JJ topics/NNS
Ambiguity and applications
Words often have more than one POS tag, e.g. "lower":
  Europe proposed lower rate increases … = JJR
  To push the pound even lower … = RBR
  … should be able to lower long-term … = VB
A simple approach which assigns only the most common tag to each word performs with 90% accuracy!
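This most-common-tag baseline can be sketched in a few lines; the toy training pairs and the NN fallback for unseen words below are illustrative, not the paper's setup:

```python
from collections import Counter, defaultdict

def train_baseline(tagged_sentences):
    """Count (word, tag) pairs and keep the most frequent tag per word."""
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag in sentence:
            counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag_baseline(words, most_frequent_tag, default="NN"):
    """Tag each word with its most frequent training tag."""
    return [most_frequent_tag.get(w, default) for w in words]

# Toy usage
train = [[("the", "DT"), ("number", "NN"), ("rose", "VBD")],
         [("a", "DT"), ("number", "NN"), ("of", "IN"), ("topics", "NNS")]]
model = train_baseline(train)
print(tag_baseline(["the", "number", "of", "cats"], model))  # → ['DT', 'NN', 'IN', 'NN']
```

Note how "cats", never seen in training, already illustrates the unknown-word problem discussed next.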
Unknown Words
How can one assign a tag to a given word if that word is unknown to the tagger?
Unknown words are the hardest problem for POS tagging!
Probabilistic model
We are given a sentence: what is the best sequence of tags which corresponds to the sequence of words?
Probabilistic view: consider all possible sequences of tags and, out of this universe of sequences, choose the tag sequence which is most probable given the observation sequence of words.
  t̂_1^n = argmax_{t_1^n} P(t_1^n | w_1^n) = argmax_{t_1^n} P(w_1^n | t_1^n) P(t_1^n)
Probabilistic model: Simplifications
To simplify:
1. Words are independent of each other, and a word's identity depends only on its tag → lexical probabilities:

     P(w_1^n | t_1^n) ≈ ∏_{i=1}^{n} P(w_i | t_i)

2. The probability of a tag appearing depends only on its predecessor tag (bigram, trigram, …) → contextual probabilities:

     P(t_1^n) ≈ ∏_{i=1}^{n} P(t_i | t_{i-1})
Probabilistic model: Limitations
With these assumptions, a typical probabilistic model is expressed as:

  t̂_1^n = argmax_{t_1^n} P(t_1^n | w_1^n) ≈ argmax_{t_1^n} ∏_{i=1}^{n} P(w_i | t_i) P(t_i | t_{i-1}),

where t̂_1^n is the best estimation of POS tags for the given sentence w_1^n = w_1 w_2 … w_n, and considering that P(t_1 | t_0) = 1.
1. It does not model long-distance relationships.
2. The contextual information takes into account the context on the left, while the context on the right is not considered.

Both limitations can be overcome using ANN models.
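Under the bigram model above, the best tag sequence can be found with Viterbi-style dynamic programming. A minimal sketch, assuming the lexical table `lex[(w, t)] = P(w|t)` and contextual table `ctx[(t, t_prev)] = P(t|t_prev)` have already been estimated; the toy probabilities below are made up for illustration:

```python
import math

def viterbi(words, tags, lex, ctx, start="<s>"):
    """argmax over tag sequences of prod_i P(w_i|t_i) P(t_i|t_{i-1})."""
    # best[t] = (log-prob of the best path ending in tag t, that path)
    best = {start: (0.0, [])}
    for w in words:
        new_best = {}
        for t in tags:
            pw = lex.get((w, t), 0.0)
            if pw == 0.0:
                continue
            cands = []
            for prev, (score, path) in best.items():
                pt = ctx.get((t, prev), 0.0)
                if pt == 0.0:
                    continue
                cands.append((score + math.log(pw) + math.log(pt), path + [t]))
            if cands:
                new_best[t] = max(cands)  # keep only the best predecessor
        best = new_best
    return max(best.values())[1]

# Toy tables (hypothetical probabilities, not estimated from the corpus)
lex = {("time", "NN"): 0.7, ("time", "VB"): 0.3,
       ("flies", "VBZ"): 0.5, ("flies", "NNS"): 0.5}
ctx = {("NN", "<s>"): 0.6, ("VB", "<s>"): 0.4,
       ("VBZ", "NN"): 0.7, ("NNS", "NN"): 0.3,
       ("VBZ", "VB"): 0.1, ("NNS", "VB"): 0.2}
print(viterbi(["time", "flies"], ["NN", "VB", "VBZ", "NNS"], lex, ctx))  # → ['NN', 'VBZ']
```

Note the left-to-right recursion: this is exactly why only left context is available to the model, the first limitation an ANN with a future-context window avoids.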
Basic connectionist model
Europe/NNP proposed/VBD lower/????? rate/NN increases/NNS

MLPs as POS tag classifiers.
MLP input:
- lower — w_i: the ambiguous input word, locally codified → projection layer
- NNP, VBD, NN, NNS — c_i: the tags of the words surrounding the ambiguous word to be tagged (past and future context), locally codified
MLP output: the probability of each tag given the input:
  Pr(JJR | input) = 0.6, Pr(RBR | input) = 0.2, Pr(VB | input) = 0.1, …

Therefore, the network learns the following mapping:

  F(w_i, c_i, t_i, Θ) = Pr_Θ(t_i | w_i, c_i)
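A sketch of how such an input vector might be assembled with local (one-hot) coding; the layout, vocabulary size, and context width here are assumptions for illustration, not the paper's exact configuration:

```python
def one_hot(index, size):
    """Local coding: a vector with a single 1.0 at the given position."""
    v = [0.0] * size
    v[index] = 1.0
    return v

def encode_input(word_idx, context_tags, tag_index, vocab_size):
    """Build the MLP input for one ambiguous word (hypothetical layout):
    the locally-coded word (fed to the projection layer) concatenated
    with the locally-coded tags of the surrounding context words."""
    x = one_hot(word_idx, vocab_size)
    for t in context_tags:
        x += one_hot(tag_index[t], len(tag_index))
    return x

tag_index = {"NNP": 0, "VBD": 1, "NN": 2, "NNS": 3}
# "lower" (word index 5) with past context (NNP, VBD) and future context (NN, NNS)
x = encode_input(5, ["NNP", "VBD", "NN", "NNS"], tag_index, vocab_size=10)
print(len(x))  # → 26, i.e. 10 word units + 4 context tags × 4 tag units
```

The projection layer then maps the sparse word part of this vector to a dense learned representation before the hidden layer.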
Morphological extended connectionist model
Europe/NNP-Cap proposed/VBD-NCap lower/????? rate/NN-NCap increases/NNS-NCap
  morphological info for "lower": NCap, -er

MLPs as POS tag classifiers.
MLP input:
- lower — w_i: the ambiguous input word, locally codified → projection layer
- NCap, -er — m_i: morphological info related to the ambiguous input word
- NNP-Cap, VBD-NCap, NN-NCap, NNS-NCap — c′_i: the tags of the words surrounding the ambiguous word to be tagged (past and future context), extended with morphological information, locally codified
MLP output: the probability of each tag given the input.

Therefore, the network learns the following mapping:

  F(w_i, m_i, c′_i, t_i, Θ) = Pr_Θ(t_i | w_i, m_i, c′_i)
And what about Unknown Words?
When evaluating the model, there are words that have never been seen during training; they belong neither to the vocabulary of known ambiguous words nor to the vocabulary of known non-ambiguous words → "unknown words": the hardest problem for the network to tag correctly.

Proposed solution: a combination of two specialized models:
- MLPKnow: the MLP specialized in known ambiguous words
- MLPUnk: the MLP specialized in unknown words
MLPKnow for known ambiguous words
- w_i: known ambiguous input word, locally codified at the input of the projection layer
- m_i: morphological info related to the ambiguous input word
- Context: two labels of past context and one label of future context, extended with morphological info

Example tag tables:
  T_minutes = {NNS, NNPS} — known ambiguous word
  T_magnification = {NN} — known non-ambiguous word
Final connectionist model
For each possible known word (ambiguous and non-ambiguous) we have a table T_{w_i} with the POS tags observed in training for word w_i:

  F(w_i, m_i, s_i, c′_i, t_i, Θ_K, Θ_U) =
    0                                   if t_i ∉ T_{w_i},
    1                                   if T_{w_i} = {t_i},
    F_Know(w_i, m_i, c′_i, t_i, Θ_K)    if w_i ∈ Ω′ ∧ t_i ∈ T_{w_i},
    F_Unk(m_i, s_i, c′_i, t_i, Θ_U)     otherwise,

where Ω′ is the vocabulary of known ambiguous words.

  t̂_1^n = argmax_{t_1^n} Pr(t_1^n | w_1^n) ≈ argmax_{t_1^n} ∏_{i=1}^{n} F(w_i, m_i, s_i, c′_i, t_i, Θ_K, Θ_U)
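The piecewise combination above can be sketched as a table lookup that falls back to one of the two specialized MLPs; the tag tables and the constant stand-ins for the trained MLPs below are illustrative:

```python
def combined_score(w, m, s, c, t, tag_table, ambiguous, F_know, F_unk):
    """Piecewise combination of the two specialized models (sketch).

    tag_table[w] is the set of POS tags seen for w in training;
    ambiguous is the known-ambiguous vocabulary (Omega-prime);
    F_know / F_unk stand for the trained MLP scoring functions."""
    tags_seen = tag_table.get(w)
    if tags_seen is None:            # unknown word → unknown-word MLP
        return F_unk(m, s, c, t)
    if t not in tags_seen:           # tag never observed for this word
        return 0.0
    if tags_seen == {t}:             # known non-ambiguous word
        return 1.0
    if w in ambiguous:               # known ambiguous word → MLPKnow
        return F_know(w, m, c, t)
    return F_unk(m, s, c, t)

tag_table = {"minutes": {"NNS", "NNPS"}, "magnification": {"NN"}}
ambiguous = {"minutes"}
F_know = lambda w, m, c, t: 0.9      # stand-in for the trained MLPKnow
F_unk = lambda m, s, c, t: 0.5       # stand-in for the trained MLPUnk

print(combined_score("magnification", None, None, None, "NN",
                     tag_table, ambiguous, F_know, F_unk))  # → 1.0
```

The 0/1 cases mean the MLPs are only consulted where the tag tables leave a real decision to make.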
The Penn Treebank Corpus
- This corpus consists of a set of English texts from the Wall Street Journal, distributed in 25 directories containing 100 files with several sentences each.
- The total number of words is about one million, 49,000 of them different.
- The whole corpus was labeled with POS and syntactic tags.
- The POS tag labeling consists of a set of 45 different categories. Two more tags were added to take into account the beginning and ending of a sentence, thus resulting in a total of 47 different POS tags.
The Penn Treebank Corpus: Partitions
[Table: for each dataset, the directories used, number of sentences, number of words, and vocabulary size.]
The Penn Treebank Corpus: Preprocess
Huge corpus with many words in the ambiguous vocabulary. Preprocessing to reduce the vocabulary:
- Ten random partitions of equal size were made from the training set. Words that appeared in just one partition were considered unknown words.
- POS tags accounting for less than 1% of a word's occurrences were eliminated (tagging errors).
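The rare-tag filter can be sketched as follows; the 1% threshold is from the slide, while the counts are toy data:

```python
from collections import Counter

def prune_rare_tags(word_tag_counts, threshold=0.01):
    """Drop tags covering less than `threshold` of a word's occurrences
    (treated as tagging errors in the corpus)."""
    pruned = {}
    for word, counts in word_tag_counts.items():
        total = sum(counts.values())
        kept = {t: n for t, n in counts.items() if n / total >= threshold}
        pruned[word] = Counter(kept)
    return pruned

counts = {"the": Counter({"DT": 995, "NN": 5}),     # NN is 0.5% → pruned
          "run": Counter({"VB": 60, "NN": 40})}     # both tags kept
pruned = prune_rare_tags(counts)
print(sorted(pruned["the"]))  # → ['DT']
```

After pruning, "the" becomes effectively non-ambiguous, shrinking the ambiguous vocabulary the MLP must handle.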
The Penn Treebank Corpus: Morph. information
Two morphological preprocessing filters:
- Deleting the prefixes from compound words (using a set of the 125 most common English prefixes). In this way, some unknown words were converted to known words.
  Example: pre-, electro-, tele-, …
- All the cardinal and ordinal numbers (except "one" and "second", which are polysemic) were replaced with the special token *CD*.
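A sketch of the two filters, with a tiny illustrative prefix list (the paper uses the 125 most common prefixes) and a simple digit pattern standing in for full cardinal/ordinal detection:

```python
import re

# A few common English prefixes, for illustration only
PREFIXES = ("pre-", "electro-", "tele-")

def strip_prefix(word):
    """Remove a known prefix so some unknown words map to known words."""
    for p in PREFIXES:
        if word.startswith(p):
            return word[len(p):]
    return word

def replace_numbers(word):
    """Map cardinals/ordinals to *CD*, keeping the polysemic exceptions."""
    if word in ("one", "second"):
        return word
    if re.fullmatch(r"\d+(st|nd|rd|th)?", word):
        return "*CD*"
    return word

print(strip_prefix("pre-war"))    # → war
print(replace_numbers("42"))      # → *CD*
print(replace_numbers("one"))     # → one
```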
The Penn Treebank Corpus: Morph. information
Morphological information added to the MLPs:
- Three input units ⇒ whether the input word has its first letter capitalized, is all caps, or has some capital letters. This is an important morphological characteristic, and it was also added to the POS tags of the context (both MLPs).
- A unit indicating if the word has any dash "-" (both MLPs).
- A unit indicating if the word has any period "." (both MLPs).
- Suffix analysis to deal with unknown words (only MLPUnk):
  - Compute the probability distribution of tags for suffixes of length less than or equal to 10 ⇒ 709 suffixes found.
  - An agglomerative hierarchical clustering process was followed, and an empirical set of clusters was chosen.
  - Finally, a set of the 21 most common grammatical suffixes was added.
  - MLPUnk needs 209 units to take into account the presence of suffixes in words.
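These per-word units can be sketched as a feature extractor; the exact unit semantics and the tiny suffix list here are assumptions (the real system uses the clustered inventory behind its 209 suffix units):

```python
def capitalization_features(word):
    """Three units: first letter capitalized, all caps, some caps."""
    return [
        1.0 if word[:1].isupper() else 0.0,
        1.0 if word.isupper() else 0.0,
        1.0 if any(c.isupper() for c in word) else 0.0,
    ]

def morph_features(word, suffixes):
    """Capitalization units plus dash, period, and suffix indicators."""
    feats = capitalization_features(word)
    feats.append(1.0 if "-" in word else 0.0)   # dash unit
    feats.append(1.0 if "." in word else 0.0)   # period unit
    feats += [1.0 if word.endswith(s) else 0.0 for s in suffixes]
    return feats

suffixes = ["ing", "ed", "ly", "tion"]  # illustrative subset
print(morph_features("Self-loading", suffixes))
```

For "Self-loading" this yields the first-capital, some-caps, dash, and "-ing" units active, exactly the kind of evidence MLPUnk relies on when the word itself is out of vocabulary.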
The connectionist POS taggers
- Projection layer.
- Error backpropagation algorithm for training.
- The topology and parameters of the multilayer perceptrons were selected in previous experimentation.
- For the experiments we used a toolkit for pattern recognition tasks developed by our research group.
- MLPKnow was trained with words from the ambiguous vocabulary.
- MLPUnk was trained with words that appear fewer than 4 times.
Test POS tagging performance
POS tagging error rate for the tuning and test sets for the global system: comparison of our connectionist system with morphological information versus our previous system without morphological information.

Partition | With morph. info. | Without morph. info.
----------|-------------------|---------------------
Tuning    | 3.2%              | 4.2%
Test      | 3.3%              | 4.3%
Conclusions: Comparison with other tagging systems
POS tagging error rate for the test set. "Known" refers to the disambiguation error for known ambiguous words, "Unk" to the POS tag error for unknown words, and "Total" to the total POS tag error over ambiguous, non-ambiguous, and unknown words.