2002.10.29 - SLIDE 1 IS 202 – FALL 2002
Prof. Ray Larson & Prof. Marc Davis
UC Berkeley SIMS
Tuesday and Thursday 10:30 am - 12:00 pm
Fall 2002
http://www.sims.berkeley.edu/academics/courses/is202/f02/
SIMS 202:
Information Organization
and Retrieval
Lecture 17: Statistical Properties of Text
2002.10.29 - SLIDE 2 IS 202 – FALL 2002
Lecture Overview
• Review
  – Central Concepts in IR
  – Boolean Logic
• Content Analysis
• Statistical Properties of Text
  – Zipf distribution
  – Statistical dependence
• Indexing and Inverted Files
Credit for some of the slides in this lecture goes to Marti Hearst
2002.10.29 - SLIDE 3 IS 202 – FALL 2002
Central Concepts in IR
• Documents
• Queries
• Collections
• Evaluation
• Relevance
2002.10.29 - SLIDE 4 IS 202 – FALL 2002
Relevance (Introduction)
• In what ways can a document be relevant to a query?
  – Answer a precise question precisely
    • Who is buried in Grant's tomb? Grant.
  – Partially answer a question
    • Where is Danville? Near Walnut Creek.
  – Suggest a source for more information
    • What is lymphedema? Look in this medical dictionary...
  – Give background information
  – Remind the user of other knowledge
  – Others...
2002.10.29 - SLIDE 5 IS 202 – FALL 2002
Relevance
• "Intuitively, we understand quite well what relevance means. It is a primitive 'y'know' concept, as is information for which we hardly need a definition. ... if and when any productive contact [in communication] is desired, consciously or not, we involve and use this intuitive notion of relevance."
  » Saracevic, 1975, p. 324
2002.10.29 - SLIDE 6 IS 202 – FALL 2002
Janes' View
[Diagram: nested views of relevance: Topicality, Pertinence, Relevance, Utility, Satisfaction]
2002.10.29 - SLIDE 7 IS 202 – FALL 2002
Boolean Queries
• Cat
• Cat OR Dog
• Cat AND Dog
• (Cat AND Dog)
• (Cat AND Dog) OR Collar
• (Cat AND Dog) OR (Collar AND Leash)
• (Cat OR Dog) AND (Collar OR Leash)
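These queries map directly onto set operations over the documents each term occurs in. A minimal sketch in Python (the postings sets and document IDs are invented for illustration):

```python
# Toy postings: for each term, the set of documents containing it
postings = {
    "cat":    {1, 2, 5},
    "dog":    {2, 3, 5},
    "collar": {3, 4, 5},
    "leash":  {4, 5},
}

# (Cat AND Dog) OR Collar
print((postings["cat"] & postings["dog"]) | postings["collar"])  # {2, 3, 4, 5}

# (Cat OR Dog) AND (Collar OR Leash)
print((postings["cat"] | postings["dog"]) &
      (postings["collar"] | postings["leash"]))                  # {3, 5}
```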
2002.10.29 - SLIDE 8 IS 202 – FALL 2002
Boolean Logic
[Venn diagrams: the regions A AND B, A OR B, and their complements for two sets A and B]
De Morgan's Law:
  NOT(A AND B) = NOT(A) OR NOT(B)
  NOT(A OR B) = NOT(A) AND NOT(B)
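A quick sanity check of De Morgan's Law with Python sets (the universe and the two sets are arbitrary):

```python
U = set(range(10))  # a small universe of document IDs
A = {1, 2, 3}
B = {3, 4, 5}

# NOT(A AND B) == NOT(A) OR NOT(B)
assert U - (A & B) == (U - A) | (U - B)
# NOT(A OR B) == NOT(A) AND NOT(B)
assert U - (A | B) == (U - A) & (U - B)
print("De Morgan holds on this example")
```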
2002.10.29 - SLIDE 9 IS 202 – FALL 2002
Boolean Logic
[Venn diagram: three terms t1, t2, t3 divide documents D1–D11 into eight regions m1–m8, one per minterm. Each minterm is a conjunction of the terms or their negations, e.g. m1 = NOT(t1) AND NOT(t2) AND NOT(t3), ..., m8 = t1 AND t2 AND t3.]
2002.10.29 - SLIDE 10 IS 202 – FALL 2002
Boolean Systems
• Most of the commercial database search systems that pre-date the WWW are based on Boolean search
  – Dialog, Lexis-Nexis, etc.
• Most online library catalogs are Boolean systems
  – E.g., MELVYL
• Database systems use Boolean logic for searching
• Many of the search engines sold for intranet search of web sites are Boolean
2002.10.29 - SLIDE 11 IS 202 – FALL 2002
Content Analysis
• Automated transformation of raw text into a form that represents some aspect(s) of its meaning
• Including, but not limited to:
  – Automated thesaurus generation
  – Phrase detection
  – Categorization
  – Clustering
  – Summarization
2002.10.29 - SLIDE 12 IS 202 – FALL 2002
Techniques for Content Analysis
• Statistical
  – Single document
  – Full collection
• Linguistic
  – Syntactic
  – Semantic
  – Pragmatic
• Knowledge-Based (Artificial Intelligence)
• Hybrid (Combinations)
2002.10.29 - SLIDE 13 IS 202 – FALL 2002
Text Processing
• Standard steps:
  – Recognize document structure
    • Titles, sections, paragraphs, etc.
  – Break into tokens
    • Usually space and punctuation delineated
    • Special issues with Asian languages
  – Stemming/morphological analysis
  – Store in inverted index (to be discussed later)
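A minimal sketch of the tokenization and stemming steps, using NLTK's Porter stemmer as one possible choice (NLTK is assumed to be installed):

```python
import re
from nltk.stem import PorterStemmer  # assumes NLTK is installed

stemmer = PorterStemmer()

def tokens(text):
    # Space- and punctuation-delineated tokens, lowercased
    return re.findall(r"[a-z]+", text.lower())

print([stemmer.stem(t) for t in tokens("Dogs keep building doghouses.")])
# Porter-style stems, e.g. ['dog', 'keep', 'build', 'doghous']
```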
2002.10.29 - SLIDE 14 IS 202 – FALL 2002
Content Analysis Areas
[Diagram: the standard IR pipeline. Text input is parsed and pre-processed into an index over the collections; an information need is parsed into a query, which is ranked against the index. Annotations: "How is the text processed?" and "How is the query constructed?"]
2002.10.29 - SLIDE 15
Document Processing Steps
From “Modern IR” textbook
2002.10.29 - SLIDE 16 IS 202 – FALL 2002
Stemming and Morphological Analysis
• Goal: "normalize" similar words
• Morphology ("form" of words)
  – Inflectional morphology
    • E.g., inflect verb endings and noun number
    • Never changes grammatical class
      – dog, dogs
      – tengo, tienes, tiene, tenemos, tienen
  – Derivational morphology
    • Derives one word from another
    • Often changes grammatical class
      – build, building; health, healthy
2002.10.29 - SLIDE 17 IS 202 – FALL 2002
Automated Methods
• Powerful multilingual tools exist for morphological analysis
  – PCKimmo, Xerox lexical technology
  – Require a grammar and dictionary
  – Use "two-level" automata
• Stemmers:
  – Very dumb rules work well (for English)
  – Porter Stemmer: iteratively remove suffixes
  – Improvement: pass results through a lexicon
2002.10.29 - SLIDE 18 IS 202 – FALL 2002
Errors Generated by Porter Stemmer
Too Aggressive           Too Timid
organization / organ     european / europe
policy / police          cylinder / cylindrical
execute / executive      create / creation
arm / army               search / searcher
From Krovetz '93
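These failure modes are easy to reproduce with NLTK's implementation of the Porter stemmer (assumed installed):

```python
from nltk.stem import PorterStemmer

stem = PorterStemmer().stem

# Too aggressive: unrelated words conflate to the same stem
print(stem("organization"), stem("organ"))  # 'organ' 'organ'
# Too timid: related words fail to conflate
print(stem("european"), stem("europe"))     # 'european' 'europ'
```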
2002.10.29 - SLIDE 19 IS 202 – FALL 2002
Statistical Properties of Text
• Token occurrences in text are not uniformly distributed
• They are also not normally distributed
• They do exhibit a Zipf distribution
2002.10.29 - SLIDE 20 IS 202 – FALL 2002
Plotting Word Frequency by Rank
• Main idea:
  – Count how many times tokens occur in the text
    • Sum over all of the texts in the collection
• Now order these tokens according to how often they occur (highest to lowest)
• This is called the rank
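A minimal sketch of this count-then-rank step over a toy collection:

```python
from collections import Counter

collection = [
    "the cat sat on the mat",
    "the dog ate the cat food",
]

# Count token occurrences, summed over all texts in the collection
counts = Counter(token for text in collection for token in text.split())

# Order tokens from most to least frequent; the position is the rank
for rank, (token, freq) in enumerate(counts.most_common(), start=1):
    print(rank, token, freq)
```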
2002.10.29 - SLIDE 21 IS 202 – FALL 2002
A Typical Collection
Frequency  Word
8164   the
4771   of
4005   to
2834   a
2827   and
2802   in
1592   The
1370   for
1326   is
1324   s
1194   that
 973   by
 969   on
 915   FT
 883   Mr
 860   was
 855   be
 849   Pounds
 798   TEXT
 798   PUB
 798   PROFILE
 798   PAGE
 798   HEADLINE
 798   DOCNO
• The important points:
  – A few elements occur very frequently
  – A medium number of elements have medium frequency
  – Many elements occur very infrequently
2002.10.29 - SLIDE 26 IS 202 – FALL 2002
Zipf Distribution
• The product of the frequency of words (f) and their rank (r) is approximately constant
  – Rank = order of words' frequency of occurrence
    f ≈ C / r,  with C ≈ N / 10
• Another way to state this is with an approximately correct rule of thumb:
  – Say the most common term occurs C times
  – The second most common occurs C/2 times
  – The third most common occurs C/3 times
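Under this rule of thumb, f × r should hover near a constant C ≈ N/10, which is easy to eyeball on any plain-text file (the filename below is a placeholder):

```python
from collections import Counter

words = open("corpus.txt").read().lower().split()  # placeholder filename
counts = Counter(words)
N = len(words)

print("N/10 =", N / 10)
for rank, (word, freq) in enumerate(counts.most_common(20), start=1):
    # If the Zipf rule of thumb holds, freq * rank stays roughly constant
    print(rank, word, freq, freq * rank)
```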
2002.10.29 - SLIDE 27
Zipf Distribution
[Plots: the Zipf rank–frequency curve on a linear scale and on a logarithmic scale, where it appears roughly as a straight line]
2002.10.29 - SLIDE 28 IS 202 – FALL 2002
What has a Zipf Distribution?
• Words in a text collection– Virtually any use of natural language
• Library book checkout patterns
• Incoming Web Page Requests (Nielsen)
• Outgoing Web Page Requests (Cunha & Crovella)
• Document Size on Web (Cunha & Crovella)
2002.10.29 - SLIDE 29 IS 202 – FALL 2002
Related Distributions/"Laws"
• Bradford's Law of Scattering
• Lotka's Law of Productivity
• De Solla Price's Urn Model for "Cumulative Advantage Processes"
[Diagram: urn model. Pick a ball, then replace it plus one more of the same color (+1); the favored color's share grows, e.g. 1/2 = 50%, 2/3 = 66%, 3/4 = 75%]
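The urn model is simple to simulate: draw a ball with probability proportional to its color's current count, then return it with one extra ball of the same color, so early luck compounds. A sketch (the colors and trial count are arbitrary):

```python
import random

counts = {"red": 1, "blue": 1, "green": 1}  # start the urn evenly

for _ in range(10_000):
    # Pick a color in proportion to its current count...
    color = random.choices(list(counts), weights=list(counts.values()))[0]
    # ...then replace the ball plus one more of the same color (+1)
    counts[color] += 1

print(counts)  # typically highly skewed: cumulative advantage
```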
2002.10.29 - SLIDE 30 IS 202 – FALL 2002
Very frequent word stems
WORD        FREQ
u           63245
ha          65470
california  67251
m           67903
(see http://elib.cs.berkeley.edu/docfreq/docfreq.html)
2002.10.29 - SLIDE 32 IS 202 – FALL 2002
Words that occur few times
WORD              FREQ
agendaaugust      1
anelectronic      1
centerjanuary     1
packardequipment  1
systemjuly        1
systemscs186      1
todaymcb          1
workshopsfinding  1
workshopsthe      1
lollini 10+       1
From the Cha-Cha Web Index for the Berkeley.EDU domain
2002.10.29 - SLIDE 33 IS 202 – FALL 2002
Consequences of Zipf
• There are always a few very frequent tokens that are not good discriminators
  – Called "stop words" in IR
  – Usually correspond to the linguistic notion of "closed-class" words
    • English examples: to, from, on, and, the, ...
    • Grammatical classes that don't take on new members
• There are always a large number of tokens that occur once (and can have unexpected consequences with some IR algorithms)
• Medium-frequency words are the most descriptive
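In practice this often translates into trimming both ends of the distribution before indexing; a toy sketch (the cutoffs are arbitrary):

```python
from collections import Counter

counts = Counter("the cat sat on the mat with the other cat".split())

stop_words = {w for w, _ in counts.most_common(1)}     # very frequent: 'the'
hapax      = {w for w, f in counts.items() if f == 1}  # occur exactly once

keep = set(counts) - stop_words - hapax
print(keep)  # {'cat'} -- the medium-frequency, most descriptive term
```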
2002.10.29 - SLIDE 34 IS 202 – FALL 2002
Word Frequency vs. Resolving Power
The most frequent words are not the most descriptive.
(from van Rijsbergen 79)
2002.10.29 - SLIDE 35 IS 202 – FALL 2002
Statistical Independence vs. Dependence
• How likely is a red car to drive by, given we've seen a black one?
• How likely is the word "ambulance" to appear, given that we've seen "car accident"?
• Colors of cars driving by are independent (although more frequent colors are more likely)
• Words in text are not independent (although again more frequent words are more likely)
2002.10.29 - SLIDE 36 IS 202 – FALL 2002
Statistical Independence
• Two events x and y are statistically independent if the product of the probabilities of their happening individually equals the probability of their happening together:
    P(x) P(y) = P(x, y)
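A quick numeric check of the definition on two simulated, unrelated events (the probabilities are arbitrary):

```python
import random

random.seed(0)
n = 100_000
x = [random.random() < 0.3 for _ in range(n)]  # event x, P(x) ~ 0.3
y = [random.random() < 0.5 for _ in range(n)]  # event y, P(y) ~ 0.5

p_x = sum(x) / n
p_y = sum(y) / n
p_xy = sum(a and b for a, b in zip(x, y)) / n

# For independent events these two numbers should nearly agree
print(p_x * p_y, p_xy)
```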
2002.10.29 - SLIDE 37 IS 202 – FALL 2002
Statistical Independence and Dependence
• What are examples of things that are statistically independent?
• What are examples of things that are statistically dependent?
2002.10.29 - SLIDE 38 IS 202 – FALL 2002
Lexical Associations
• Subjects write the first word that comes to mind
  – doctor/nurse; black/white (Palermo & Jenkins '64)
• Text corpora can yield similar associations
• One measure: mutual information (Church and Hanks '89)
    I(x, y) = log2( P(x, y) / (P(x) P(y)) )
• If word occurrences were independent, the numerator and denominator would be equal (if measured across a large collection)
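As an illustrative calculation (the numbers are made up, not from Church & Hanks): suppose P(x) = P(y) = 0.01 and the two words co-occur with P(x, y) = 0.001. Independence would predict P(x) P(y) = 0.0001, so I(x, y) = log2(0.001 / 0.0001) = log2(10) ≈ 3.3 bits of association; for truly independent words, P(x, y) = P(x) P(y) and I(x, y) = log2(1) = 0.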
2002.10.29 - SLIDE 39 IS 202 – FALL 2002
Statistical Independence
• Compute for a window of words
  – If independent: P(x) P(y) = P(x, y), with P(x) ≈ f(x) / N
  – We'll approximate P(x, y) as follows:
      P(x, y) ≈ (1 / (N/|w|)) Σ_{i=1}^{N/|w|} w_i(x, y)
    where:
      w_i(x, y) = number of times x and y co-occur in window w_i
      w_i = words within the window starting at position i
      |w| = length of window (say 5)
      N = number of words in the collection
[Diagram: windows w1, w11, w21 laid over a word sequence a b c d e f g h i j k l m n o p]
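A minimal sketch of this windowed estimate feeding the mutual information measure (the sentence and window length are illustrative):

```python
import math
from collections import Counter

words = ("the car accident sent one driver to the "
         "hospital in an ambulance").split()
N = len(words)
W = 5  # window length |w|

f = Counter(words)  # single-word frequencies f(x)
co = Counter()      # co-occurrence counts within windows
windows = [words[i:i + W] for i in range(0, N, W)]
for window in windows:
    for j, x in enumerate(window):
        for y in window[j + 1:]:
            co[(x, y)] += 1

def mutual_information(x, y):
    # I(x,y) = log2( P(x,y) / (P(x) P(y)) )
    p_x, p_y = f[x] / N, f[y] / N
    p_xy = co[(x, y)] / len(windows)
    return math.log2(p_xy / (p_x * p_y))

print(mutual_information("car", "accident"))
```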
2002.10.29 - SLIDE 40 IS 202 – FALL 2002
Interesting Associations with "Doctor"
I(x,y)  f(x,y)  f(x)   x         f(y)   y
11.3    12      111    Honorary  621    Doctor
11.3    8       1105   Doctors   44     Dentists
10.7    30      1105   Doctors   241    Nurses
9.4     8       1105   Doctors   154    Treating
9.0     6       275    Examined  621    Doctor
8.9     11      1105   Doctors   317    Treat
8.7     25      621    Doctor    1407   Bills
(AP Corpus, N = 15 million; Church & Hanks '89)
2002.10.29 - SLIDE 41 IS 202 – FALL 2002
Un-Interesting Associations with "Doctor"
I(x,y)  f(x,y)  f(x)    x       f(y)   y
0.96    6       621     doctor  73785  with
0.95    41      284690  a       1105   doctors
0.93    12      84716   is      1105   doctors
These associations were likely to happen because the non-doctor words shown here are very common and therefore likely to co-occur with any noun.
(AP Corpus, N = 15 million; Church & Hanks '89)
2002.10.29 - SLIDE 42 IS 202 – FALL 2002
Document Vectors
• Documents are represented as "bags of words"
• Represented as vectors when used computationally
  – A vector is like an array of floating point numbers
  – Has direction and magnitude
  – Each vector holds a place for every term in the collection
  – Therefore, most vectors are sparse
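Since most entries are zero, a common sketch is to keep only the nonzero counts, e.g. one dictionary per document:

```python
from collections import Counter

def to_vector(text):
    # Bag of words: term -> count; absent terms are implicitly zero
    return Counter(text.lower().split())

doc_a = to_vector("nova nova galaxy heat nova galaxy")
print(doc_a)          # Counter({'nova': 3, 'galaxy': 2, 'heat': 1})
print(doc_a["diet"])  # 0 -- sparse: missing terms read as zero
```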
2002.10.29 - SLIDE 43 IS 202 – FALL 2002
Document Vectors
ID  nova galaxy heat h'wood film role diet fur
A   10 5 3
B   5 10
C   10 8 7
D   9 10 5
E   10 10
F   9 10
G   5 7 9
H   6 10 2 8
I   7 5 1 3
"Nova" occurs 10 times in text A
"Galaxy" occurs 5 times in text A
"Heat" occurs 3 times in text A
(Blank means 0 occurrences.)
2002.10.29 - SLIDE 44 IS 202 – FALL 2002
Document Vectors
ID  nova galaxy heat h'wood film role diet fur
A   10 5 3
B   5 10
C   10 8 7
D   9 10 5
E   10 10
F   9 10
G   5 7 9
H   6 10 2 8
I   7 5 1 3
"Hollywood" occurs 7 times in text I
"Film" occurs 5 times in text I
"Diet" occurs 1 time in text I
"Fur" occurs 3 times in text I
2002.10.29 - SLIDE 45 IS 202 – FALL 2002
Document Vectors
ID  nova galaxy heat h'wood film role diet fur
A   10 5 3
B   5 10
C   10 8 7
D   9 10 5
E   10 10
F   9 10
G   5 7 9
H   6 10 2 8
I   7 5 1 3
2002.10.29 - SLIDE 46 IS 202 – FALL 2002
We Can Plot the Vectors
[Plot: documents in a 2D term space with axes "Star" and "Diet". Docs about astronomy and docs about movie stars lie along the Star axis; docs about mammal behavior lie along the Diet axis.]
2002.10.29 - SLIDE 47
Documents in 3D Space
2002.10.29 - SLIDE 48 IS 202 – FALL 2002
Content Analysis Summary
• Content analysis: transforming raw text into more computationally useful forms
• Words in text collections exhibit interesting statistical properties
  – Word frequencies have a Zipf distribution
  – Word co-occurrences exhibit dependencies
• Text documents are transformed to vectors
  – Pre-processing includes tokenization, stemming, collocations/phrases
  – Documents occupy multi-dimensional space
2002.10.29 - SLIDE 49 IS 202 – FALL 2002
Inverted Index
• This is the primary data structure for text indexes
• Main idea:
  – Invert documents into a big index
• Basic steps:
  – Make a "dictionary" of all the tokens in the collection
  – For each token, list all the docs it occurs in
  – Do a few things to reduce redundancy in the data structure
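A minimal sketch of those steps over toy documents (real systems add compression and positional information):

```python
from collections import defaultdict

docs = {
    1: "the cat sat on the mat",
    2: "the dog chased the cat",
}

inverted = defaultdict(list)  # token -> postings list of doc IDs
for doc_id, text in sorted(docs.items()):
    for token in set(text.split()):  # set() keeps each doc listed once
        inverted[token].append(doc_id)

print(inverted["cat"])  # [1, 2]
print(inverted["dog"])  # [2]
```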
2002.10.29 - SLIDE 50 IS 202 – FALL 2002
[Diagram: the same IR pipeline as before (text input, parse, pre-process, index, collections; information need, query, rank), annotated with the question "How is the index constructed?"]
2002.10.29 - SLIDE 51 IS 202 – FALL 2002
Inverted Indexes
• We have seen “Vector files” conceptually
  – An Inverted File is a vector file “inverted” so