Statistical Methods
- Traditional grammars may be "brittle"
- Statistical methods are built on formal theories
- Vary in complexity from simple trigrams to conditional random fields
- Can be used for language identification, text classification, information retrieval, and information extraction
N-Grams
- Text is composed of characters (or words, or phonemes)
- An N-gram is a sequence of n consecutive characters (or words ...): unigram, bigram, trigram
- Technically, it is a Markov chain of order n-1:

  P(c_i | c_{1:i-1}) = P(c_i | c_{i-n+1:i-1})

- Calculate N-gram counts by looking at a large corpus
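A minimal sketch in Python of counting character n-grams from a corpus and turning them into conditional probabilities (the function names and toy text are illustrative, not from the slides):

```python
from collections import Counter, defaultdict

def char_ngram_counts(text, n):
    """Count every n-character window in the text."""
    return Counter(text[i:i+n] for i in range(len(text) - n + 1))

def trigram_model(text):
    """Estimate P(c_i | c_{i-2:i-1}) by maximum likelihood:
    trigram count divided by the count of its 2-character context."""
    tri = char_ngram_counts(text, 3)
    bi = char_ngram_counts(text, 2)
    probs = defaultdict(dict)
    for gram, count in tri.items():
        context, c = gram[:2], gram[2]
        probs[context][c] = count / bi[context]
    return probs

model = trigram_model("the theory of the thing")
```

Here `model["th"]` gives the distribution over the character following "th" in the toy corpus.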
Example – Language Identification
- Use P(c_i | c_{i-2:i-1}, l), where l ranges over languages
- About 100,000 characters of each language are needed
- The most likely language is

  l* = argmax_l P(l | c_{1:N}) = argmax_l P(l) ∏_i P(c_i | c_{i-2:i-1}, l)

- Learn the model from a corpus; P(l), the prior probability of a given language, can be estimated
- Other examples: spelling correction, genre classification, and named-entity recognition
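The argmax above can be sketched as follows, using tiny toy corpora in place of the ~100,000 characters per language the slides call for, and add-one smoothing so unseen trigrams do not zero out the product (all names here are illustrative):

```python
import math
from collections import Counter

def train_char_trigrams(corpus):
    """Collect trigram and bigram counts from a training corpus."""
    tri = Counter(corpus[i:i+3] for i in range(len(corpus) - 2))
    bi = Counter(corpus[i:i+2] for i in range(len(corpus) - 1))
    return tri, bi

def log_prob(text, tri, bi, alphabet_size=27):
    """log P(c_{1:N} | l) under a character trigram model,
    with add-one smoothing for unseen trigrams."""
    lp = 0.0
    for i in range(2, len(text)):
        gram, ctx = text[i-2:i+1], text[i-2:i]
        lp += math.log((tri[gram] + 1) / (bi[ctx] + alphabet_size))
    return lp

def identify(text, models, priors):
    """l* = argmax_l P(l) * prod_i P(c_i | c_{i-2:i-1}, l), in log space."""
    return max(models, key=lambda l: math.log(priors[l]) + log_prob(text, *models[l]))

# Toy corpora for illustration only.
models = {
    "en": train_char_trigrams("the quick brown fox jumps over the lazy dog " * 20),
    "de": train_char_trigrams("der schnelle braune fuchs springt ueber den faulen hund " * 20),
}
priors = {"en": 0.5, "de": 0.5}
```

Working in log space avoids floating-point underflow when the product runs over a long character sequence.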
Smoothing
- Problem: what if a particular n-gram does not appear in the training corpus? Its probability would be 0, but it should be a small, positive number
- Smoothing: adjusting the probability of low-frequency counts
- Laplace: use 1/(n+2) instead of 0 (given n observations)
- Backoff model: back off to (n-1)-grams
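Both ideas fit in a few lines. The backoff weight below is an illustrative choice (in the style of "stupid backoff"), not a value from the slides:

```python
def laplace_estimate(count, n, outcomes=2):
    """Laplace (add-one) estimate: (count + 1) / (n + outcomes).
    An event never seen in n observations of a two-outcome variable
    gets probability 1/(n+2) instead of 0."""
    return (count + 1) / (n + outcomes)

def backoff_prob(seq, counts, context_counts, shorter_prob, weight=0.4):
    """Sketch of a backoff model: use the n-gram estimate when the
    n-gram was seen; otherwise fall back to the (n-1)-gram estimate
    scaled by a weight (the 0.4 here is an illustrative default)."""
    if counts.get(seq, 0) > 0:
        return counts[seq] / context_counts[seq[:-1]]
    return weight * shorter_prob
```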
Model Evaluation
- Use cross-validation (split the corpus into training and evaluation sets)
- Need a metric for evaluation
- Can use perplexity to describe the probability of a sequence:

  Perplexity(c_{1:N}) = P(c_{1:N})^(-1/N)

- Can be thought of as the reciprocal of probability, normalized by the sequence length
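Computed naively, P(c_{1:N}) underflows for long sequences, so perplexity is usually evaluated in log space. A small sketch (the per-character probability list is an assumed input format):

```python
import math

def perplexity(char_probs):
    """Perplexity(c_{1:N}) = P(c_{1:N})^(-1/N), computed via logs:
    exp(-(1/N) * sum_i log p_i)."""
    n = len(char_probs)
    log_p = sum(math.log(p) for p in char_probs)
    return math.exp(-log_p / n)
```

A model that assigns probability 1/4 to every character has perplexity 4, matching the intuition of a "branching factor" of 4 choices per character.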
N-gram Word Models
- Can be used for text classification; example: spam vs. ham
- Problem: out-of-vocabulary words
- Trick: during training, use <UNK> the first time a word is seen, then after that use the word regularly. When an unknown word is seen at test time, treat it as <UNK>
- Calculate probabilities from a corpus, then randomly generate phrases
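The <UNK> trick can be sketched directly (function names are illustrative):

```python
UNK = "<UNK>"

def build_vocab(training_tokens):
    """Replace the FIRST occurrence of each word with <UNK>; later
    occurrences keep the word itself. <UNK> thus absorbs the
    probability mass of first-time (unknown-like) words."""
    seen = set()
    out = []
    for w in training_tokens:
        if w in seen:
            out.append(w)
        else:
            seen.add(w)
            out.append(UNK)
    return out, seen

def map_unknown(tokens, vocab):
    """At test time, treat any out-of-vocabulary word as <UNK>."""
    return [w if w in vocab else UNK for w in tokens]
```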
Example – Spam Detection
- Text classification problem
- Train P(Message|spam) and P(Message|ham) using n-grams
- Calculate P(message|spam) P(spam) and P(message|ham) P(ham) and take whichever is greater
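A minimal sketch of this comparison, using unigrams (n=1) with add-one smoothing for brevity; the toy training messages and function names are illustrative:

```python
import math
from collections import Counter

def train(messages):
    """Word counts for one class, for a smoothed unigram model."""
    counts = Counter(w for m in messages for w in m.split())
    total = sum(counts.values())
    vocab = len(counts)
    return counts, total, vocab

def log_likelihood(message, model):
    """log P(message | class) with add-one smoothing."""
    counts, total, vocab = model
    return sum(math.log((counts[w] + 1) / (total + vocab + 1))
               for w in message.split())

def classify(message, spam_model, ham_model, p_spam=0.5):
    """Take whichever of P(message|spam)P(spam), P(message|ham)P(ham)
    is greater, comparing in log space."""
    spam_score = math.log(p_spam) + log_likelihood(message, spam_model)
    ham_score = math.log(1 - p_spam) + log_likelihood(message, ham_model)
    return "spam" if spam_score > ham_score else "ham"

spam = train(["buy cheap pills now", "cheap cheap offer now"])
ham = train(["meeting at noon tomorrow", "see you at the meeting"])
```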
Spam Detection – Other Methods
- Represent the message as a set of feature/value pairs
- Apply a classification algorithm to the feature vector
- Strongly depends on the features chosen
- Data compression: compression algorithms such as LZW look for commonly recurring sequences and replace later copies with pointers to earlier ones
- Append the new message to the list of spam messages and compress; do the same for ham; whichever compresses smaller wins
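A sketch of the compression trick, substituting zlib (DEFLATE) for LZW since Python's standard library has no LZW codec; the idea of measuring how much the compressed archive grows is the same:

```python
import zlib

def compressed_size(text):
    """Size in bytes of the zlib-compressed text."""
    return len(zlib.compress(text.encode("utf-8"), level=9))

def classify_by_compression(message, spam_corpus, ham_corpus):
    """Append the message to each class's corpus and compress both.
    Substrings shared with a class compress away as back-references,
    so the matching class's archive grows less."""
    spam_growth = compressed_size(spam_corpus + message) - compressed_size(spam_corpus)
    ham_growth = compressed_size(ham_corpus + message) - compressed_size(ham_corpus)
    return "spam" if spam_growth < ham_growth else "ham"
```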
Information Retrieval
- Think WWW and search engines
- Characterized by:
  - A corpus of documents
  - Queries in some query language
  - A result set
  - A presentation of the result set (some ordering)
- Methods: simple Boolean keyword models, IR scoring functions, the PageRank algorithm, the HITS algorithm
IR Scoring Function - BM25
- Okapi Project (Robertson et al.)
- Three factors:
  - Frequency with which the word appears in the document (TF)
  - Inverse document frequency (IDF): based on the inverse of the number of documents in which the word appears
  - Length of the document
- |d_j| is the length of the document, L is the average document length, and k and b are tuned parameters:

  BM25(d_j, q_{1:N}) = Σ_{i=1}^{N} IDF(q_i) · TF(q_i, d_j) · (k + 1) / (TF(q_i, d_j) + k · (1 − b + b · |d_j| / L))
BM25 cont'd.
IDF(q_i) = log( (N − DF(q_i) + 0.5) / (DF(q_i) + 0.5) )

where DF(q_i) is the number of documents that contain q_i.
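Putting the two formulas together as a scoring function (the parameter defaults k = 1.2, b = 0.75 are the commonly used tuned values; the argument layout is an illustrative choice):

```python
import math

def bm25_score(query_terms, doc_terms, doc_freqs, n_docs, avg_len, k=1.2, b=0.75):
    """BM25(d_j, q_{1:N}) = sum_i IDF(q_i) * TF(q_i, d_j) * (k+1)
                            / (TF(q_i, d_j) + k*(1 - b + b*|d_j|/L)).
    doc_freqs maps each term to its document frequency DF."""
    norm = k * (1 - b + b * len(doc_terms) / avg_len)  # length normalization
    score = 0.0
    for q in query_terms:
        tf = doc_terms.count(q)
        df = doc_freqs.get(q, 0)
        idf = math.log((n_docs - df + 0.5) / (df + 0.5))
        score += idf * tf * (k + 1) / (tf + norm)
    return score
```

Note that a term appearing in more than half the documents gets a negative IDF under this formula, so very common words can actively lower a document's score.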
Precision and Recall
- Precision measures the proportion of the documents in the result set that are actually relevant; e.g., if the result set contains 30 relevant documents and 10 non-relevant documents, precision is 0.75
- Recall is the proportion of relevant documents that are in the result set; e.g., if 30 relevant documents are in the result set out of a possible 50, recall is 0.60
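The two measures as set operations, reproducing the slide's 30/10/50 example with hypothetical document IDs:

```python
def precision_recall(result_set, relevant):
    """Precision: fraction of returned documents that are relevant.
    Recall: fraction of relevant documents that were returned."""
    retrieved_relevant = len(result_set & relevant)
    precision = retrieved_relevant / len(result_set)
    recall = retrieved_relevant / len(relevant)
    return precision, recall

# 40 returned documents, 30 of which are among the 50 relevant ones.
result_set = set(range(40))
relevant = set(range(30)) | set(range(100, 120))
```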
IR Refinement
- Pivoted document length normalization: longer documents tend to be favored, so instead of raw document length, use a different normalization function that can be tuned
- Use word stems
- Use synonyms
- Look at metadata
PageRank Algorithm (Google)
- Count the links that point to the page
- Weight links from "high-quality" sites higher; this minimizes the effect of creating lots of pages that point to the chosen page

  PR(p) = (1 − d)/N + d · Σ_i PR(x_i)/C(x_i)

where PR(p) is the PageRank of p, N is the total number of pages in the corpus, x_i is a page that links to p, C(x_i) is the count of the total number of out-links on the page x_i, and d is a damping factor.
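The equation can be solved by simple iteration from a uniform start. A sketch assuming every page has at least one out-link within the graph (dangling pages are not handled here); d = 0.85 is the commonly cited damping value:

```python
def pagerank(links, d=0.85, iterations=50):
    """Iterate PR(p) = (1-d)/N + d * sum over in-links x_i of
    PR(x_i)/C(x_i), where links maps each page to its set of
    out-links and C(x_i) = len(links[x_i])."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}  # uniform initial ranks
    for _ in range(iterations):
        new = {}
        for p in pages:
            new[p] = (1 - d) / n + d * sum(
                pr[x] / len(links[x]) for x in pages if p in links[x])
        pr = new
    return pr
```

With no dangling pages the ranks remain a probability distribution (they sum to 1) at every iteration.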
Information Extraction
- Ability to answer questions
- Possibilities range from simple template matching to full-blown language-understanding systems
- May be domain-specific or general
- Used as a DB front-end, or for WWW searching
- Examples: AskMSR, IBM's Watson, Wolfram