CISC 7610 Lecture 7
Principles of content-based retrieval

Topics:
● Housekeeping
● Content-based retrieval
● Information retrieval
● Term-document matrices
● Inverted indices
● Boolean querying
● Evaluating retrieval
Housekeeping
● Please complete the midterm feedback form I am distributing right now
● The midterm exam is graded
– Min: 63
– Mean: 79
– Median: 80
– Max: 93
● Let's go over the solutions
Course project
● Half of your final grade is based on the final project, implementing a multimedia database
● Your idea does not have to be totally new, but your implementation should be original
● You will be graded on three components
– Project proposal
– Project presentation
– Project writeup and code submission
● You may work in groups
Course project: Proposal
● The project proposal should be a 2-page writeup
● It should describe
– What problem are you addressing?
– What multimedia data will you index?
– What sort of queries will your system support?
– What technologies will you use to address it?
– You may want to include some sort of system diagram sketch or other representation of the various system components and how they fit together
Course project examples
● Image database that indexes images using facial recognition software and allows query-by-image
● Image database that classifies images according to the objects they contain and allows search-by-description (see Caffe image classifier from Berkeley)
● Video database that allows closed captions to be searched and skips to those locations in the video
● Speech database that recognizes what is said and allows keyword search, skipping to the relevant locations
● Video database that identifies text on-screen and allows it to be searched
Recall: Main question
How can we process and store multimedia data so that we can
find what we are looking for in the future?
Information Retrieval
● Text Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers)
● Multimedia Information Retrieval (MIR) is finding material of a multimedia nature that satisfies an information need from within large collections
● Multimedia IR has a lot in common with text IR, especially when moving beyond SQL
Content-based retrieval
● Consider using an image as a query:
● Q1: Tell me about this building
● Q2: Find other buildings like this one
● Q3: Find paintings and drawings of this building
● These queries can best be understood and answered using ideas from multimedia information retrieval
Similar text retrieval
● Similar queries would occur for text retrieval:
– Q1: Find me web pages about “Brooklyn College”
– Q2: Find other pages like http://www.brooklyn.cuny.edu
– Q3: Find books that mention “Brooklyn College”
● Text information retrieval has been extensively studied for decades
Basic assumptions of Information Retrieval
• Collection: A set of documents
– Assume it is a static collection for the moment
• Goal: Retrieve documents with information that is relevant to the user’s information need and helps the user complete a task
Sec. 1.1
Manning and Nayak, CS276, Lecture 1: Introduction http://web.stanford.edu/class/cs276/
The classic search model

● User task: Get rid of mice in a politically correct way
● Info need: Info about removing mice without killing them
● Query: how trap mice alive
● The search engine runs the query over the collection and returns results
● Query refinement: the results may prompt the user to revise the query and search again
● Gaps can arise at each step: a misconception between task and info need, and a misformulation between info need and query
How good are the retrieved docs?
Precision : Fraction of retrieved docs that are relevant to the user’s information need
Recall : Fraction of relevant docs in collection that are retrieved
More precise definitions and measurements later
Term-document incidence matrices
Unstructured data in 1620
● Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia?
● One could grep all of Shakespeare’s plays for Brutus and Caesar, then strip out lines containing Calpurnia
● Why is that not the answer?
– Slow (for large corpora)
– NOT Calpurnia is non-trivial
– Other operations (e.g., find the word Romans near countrymen) not feasible
– Ranked retrieval (best documents to return) not supported
Term-document incidence matrices
            Antony and  Julius  The
            Cleopatra   Caesar  Tempest  Hamlet  Othello  Macbeth
Antony          1          1       0        0       0        1
Brutus          1          1       0        1       0        0
Caesar          1          1       0        1       1        1
Calpurnia       0          1       0        0       0        0
Cleopatra       1          0       0        0       0        0
mercy           1          0       1        1       1        1
worser          1          0       1        1       1        0

1 if the play contains the word, 0 otherwise
Brutus AND Caesar BUT NOT Calpurnia
Incidence vectors
● So we have a 0/1 vector for each term.● To answer query: take the vectors for Brutus,
Caesar and Calpurnia (complemented) bitwise AND.
110100 AND110111 AND101111 =
100100
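The bitwise AND above can be reproduced directly with Python integers. This is an illustrative sketch, not code from the lecture; each bit position stands for one of the six plays in the matrix, leftmost bit first.

```python
# Incidence vectors from the matrix, encoded as 6-bit integers
# (bit order: Antony&Cleopatra, Julius Caesar, Tempest, Hamlet, Othello, Macbeth)
brutus    = 0b110100
caesar    = 0b110111
calpurnia = 0b010000

# Brutus AND Caesar AND NOT Calpurnia, over a 6-play universe
mask = 0b111111                       # limit the complement to 6 bits
answer = brutus & caesar & (~calpurnia & mask)
print(f"{answer:06b}")                # 100100 -> Antony & Cleopatra, Hamlet
```

The mask is needed because Python integers are unbounded, so `~calpurnia` alone would set infinitely many high bits.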
Answers to query
● Antony and Cleopatra, Act III, Scene ii
Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus,
When Antony found Julius Caesar dead,
He cried almost to roaring; and he wept
When at Philippi he found Brutus slain.
● Hamlet, Act III, Scene ii
Lord Polonius: I did enact Julius Caesar: I was killed i’ the Capitol; Brutus killed me.
Bigger collections
● Consider N = 1 million documents, each with about 1000 words.
● Avg 6 bytes/word including spaces/punctuation – 6GB of data in the documents.
● Say there are M = 500K distinct terms among these.
Can’t build the matrix

● A 500K x 1M matrix has half a trillion 0’s and 1’s.
● But it has no more than one billion 1’s (each of the 1M documents has about 1,000 words).
– The matrix is extremely sparse.
● What’s a better representation?
– We only record the positions of the 1’s.
The inverted index:
The key data structure underlying modern IR
Inverted index

● For each term t, we must store a list of all documents that contain t.
– Identify each doc by a docID, a document serial number
● Can we use fixed-size arrays for this?
– What happens if the word Caesar is added to document 14?

Brutus    → 1 2 4 11 31 45 173 174
Caesar    → 1 2 4 5 6 16 57 132
Calpurnia → 2 31 54 101

Sec. 1.2
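A minimal sketch of such an index in Python (not the lecture’s code; the two documents are shortened stand-ins): a dictionary maps each term to its postings list of docIDs.

```python
# Toy collection: docID -> text
docs = {
    1: "i did enact julius caesar i was killed in the capitol",
    2: "so let it be with caesar the noble brutus hath told you",
}

index = {}
for doc_id, text in sorted(docs.items()):   # scan docs in docID order
    for term in set(text.split()):          # set(): one posting per document
        index.setdefault(term, []).append(doc_id)

print(index["caesar"])   # [1, 2] -- postings stay sorted by docID
```

Because documents are scanned in increasing docID order, each postings list comes out sorted without an explicit sort, which is exactly the property the merge algorithm later depends on.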
Inverted index

● We need variable-size postings lists
– On disk, a continuous run of postings is best
– In memory, can use linked lists or variable-length arrays
● Some tradeoffs in size / ease of insertion
● The dictionary maps each term to its postings list, sorted by docID (more later on why)

Brutus    → 1 2 4 11 31 45 173 174
Caesar    → 1 2 4 5 6 16 57 132
Calpurnia → 2 31 54 101
Indexer steps: Token sequence
● Sequence of (Modified token, Document ID) pairs.
Doc 1: I did enact Julius Caesar I was killed i’ the Capitol; Brutus killed me.

Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.
Indexer steps: Sort
● Sort by terms
– And then by docID
● This is the core indexing step
Indexer steps: Dictionary & Postings
● Multiple term entries in a single document are merged.
● Split into Dictionary and Postings
● Doc. frequency information is added.
● Why frequency? Will discuss later.
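The three indexer steps can be sketched end to end in Python. This is an illustrative reconstruction, not the lecture’s implementation; the documents are the two slide examples.

```python
doc1 = "i did enact julius caesar i was killed in the capitol brutus killed me"
doc2 = "so let it be with caesar the noble brutus hath told you caesar was ambitious"

# Step 1: sequence of (token, docID) pairs
pairs = [(t, 1) for t in doc1.split()] + [(t, 2) for t in doc2.split()]

# Step 2: sort by term, then by docID
pairs.sort()

# Step 3: merge duplicate entries; split into dictionary (with doc
# frequency) and postings
postings = {}
for term, doc_id in pairs:
    plist = postings.setdefault(term, [])
    if not plist or plist[-1] != doc_id:   # merge repeats within a doc
        plist.append(doc_id)
doc_freq = {term: len(plist) for term, plist in postings.items()}

print(postings["caesar"], doc_freq["caesar"])   # [1, 2] 2
```

Sorting the (token, docID) pairs first is what lets the merge in step 3 run as a single linear pass.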
Where do we pay in storage?

● Dictionary: terms and counts
● Postings: lists of docIDs, plus pointers from the dictionary into the postings
● IR system implementation questions:
– How do we index efficiently?
– How much storage do we need?
Query processing with an inverted index
Query processing: AND

● Consider processing the query: Brutus AND Caesar
– Locate Brutus in the Dictionary; retrieve its postings.
– Locate Caesar in the Dictionary; retrieve its postings.
– “Merge” the two postings (intersect the document sets):

Brutus → 2 4 8 16 32 64 128
Caesar → 1 2 3 5 8 13 21 34

Sec. 1.3
The merge

● Walk through the two postings simultaneously, in time linear in the total number of postings entries:

Brutus → 2 4 8 16 32 64 128
Caesar → 1 2 3 5 8 13 21 34

● If the list lengths are x and y, the merge takes O(x+y) operations.
● Crucial: postings sorted by docID.
Intersecting two postings lists (a “merge” algorithm)
Boolean queries: Exact match
● The Boolean retrieval model is being able to ask a query that is a Boolean expression:
– Boolean queries use AND, OR and NOT to join query terms
● Views each document as a set of words
● Is precise: a document either matches the condition or it does not.
– Perhaps the simplest model to build an IR system on
● Primary commercial retrieval tool for 3 decades.
● Many search systems you still use are Boolean:
– Email, library catalog, Mac OS X Spotlight
Boolean queries: More general merges
• Exercise: Adapt the merge for the queries:
Brutus AND NOT Caesar
Brutus OR NOT Caesar
● Can we still run through the merge in time O(x+y)? What can we achieve?
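As a sketch of one exercise solution (an assumption about the intended answer, not the lecture’s): Brutus AND NOT Caesar still runs in O(x+y) with the same two-pointer walk, keeping docIDs that appear only in the first list.

```python
def and_not(p1, p2):
    """Sorted postings of docs in p1 but not in p2, in O(x + y) time."""
    answer, i, j = [], 0, 0
    while i < len(p1):
        if j >= len(p2) or p1[i] < p2[j]:
            answer.append(p1[i])   # p1[i] cannot appear later in p2
            i += 1
        elif p1[i] == p2[j]:
            i += 1                 # excluded by the NOT
            j += 1
        else:
            j += 1                 # advance p2 until it catches up
    return answer

print(and_not([2, 4, 8, 16, 32, 64, 128], [1, 2, 3, 5, 8, 13, 21, 34]))
# [4, 16, 32, 64, 128]
```

Brutus OR NOT Caesar is different: its answer includes almost every document in the collection, so time linear in the two postings lists alone is not achievable; the result size itself is on the order of the whole collection.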
Merging
What about an arbitrary Boolean formula?
(Brutus OR Caesar) AND NOT (Antony OR Cleopatra)
● Can we always merge in “linear” time?
– Linear in what?
● Can we do better?
Query optimization
● What is the best order for query processing?
● Consider a query that is an AND of n terms.
● For each of the n terms, get its postings, then AND them together.

Brutus    → 2 4 8 16 32 64 128
Caesar    → 1 2 3 5 8 16 21 34
Calpurnia → 13 16

Query: Brutus AND Calpurnia AND Caesar
Query optimization example

● Process in order of increasing freq:
– start with the smallest set, then keep cutting further.
– This is why we kept document freq. in the dictionary.
● Execute the query as (Calpurnia AND Brutus) AND Caesar.

Brutus    → 2 4 8 16 32 64 128
Caesar    → 1 2 3 5 8 16 21 34
Calpurnia → 13 16
More general optimization
● e.g., (madding OR crowd) AND (ignoble OR strife)
● Get doc. freq.’s for all terms.
● Estimate the size of each OR by the sum of its doc. freq.’s (conservative).
● Process in increasing order of OR sizes.
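The heuristic above can be sketched in a few lines of Python. This is an illustrative sketch, not lecture code; the frequencies are the ones from the exercise table that follows.

```python
doc_freq = {"eyes": 213312, "kaleidoscope": 87009, "marmalade": 107913,
            "skies": 271658, "tangerine": 46653, "trees": 316812}

# Each tuple is one OR-group of the conjunctive query
query = [("tangerine", "trees"), ("marmalade", "skies"), ("kaleidoscope", "eyes")]

def or_size(group):
    # Conservative estimate: sum of doc frequencies (an upper bound,
    # since documents containing both terms are counted twice)
    return sum(doc_freq[t] for t in group)

order = sorted(query, key=or_size)
print([or_size(g) for g in order])   # [300321, 363465, 379571]
print(order[0])                      # ('kaleidoscope', 'eyes') -- process first
```

Under this estimate, (kaleidoscope OR eyes) has the smallest bound, so it is processed first to keep intermediate result sets small.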
Exercise

• Recommend a query processing order for:

(tangerine OR trees) AND (marmalade OR skies) AND (kaleidoscope OR eyes)

• Which two terms should we process first?

Term          Freq
eyes          213312
kaleidoscope   87009
marmalade     107913
skies         271658
tangerine      46653
trees         316812
Query processing exercises
• Exercise: If the query is friends AND romans AND (NOT countrymen), how could we use the freq of countrymen?
• Exercise: Extend the merge to an arbitrary Boolean query. Can we always guarantee execution in time linear in the total postings size?
• Hint: Begin with the case of a Boolean formula query: in this, each query term appears only once in the query.
Evaluating retrieval

Measures for a search engine

● How fast does it index?
– Number of documents/hour (average document size)
● How fast does it search?
– Latency as a function of index size
● Expressiveness of query language
– Ability to express complex information needs
– Speed on complex queries
● Uncluttered UI
● Is it free?

Sec. 8.6
Manning and Nayak, CS276, Lecture 8: Evaluation http://web.stanford.edu/class/cs276/
Measures for a search engine

● Most important: user happiness
– But difficult to measure
● Most common proxy: relevance of search results
● Relevance measurement requires 3 elements:
1. A benchmark document collection
2. A benchmark suite of queries
3. A usually binary assessment of either Relevant or Nonrelevant for each query and each document
– Some work on more-than-binary, but binary is the standard

Sec. 8.1
44
● Note: the information need is translated into a query● Relevance is assessed relative to the information need
not the query● E.g., Information need: I'm looking for information on
whether drinking red wine is more effective at reducing your risk of heart attacks than white wine.
● Query: wine red white heart attack effective● Evaluate whether the doc addresses the information
need, not whether it has these words
Sec. 8.1
Evaluating an IR system
Manning and Nayak, CS276, Lecture 8: Evaluation http://web.stanford.edu/class/cs276/
Example: Image retrieval

[Figure: Rueger, “Multimedia Information Retrieval,” Figure 2.11. Morgan & Claypool: 2010.]
Unranked retrieval evaluation: Precision and recall

● Precision: fraction of retrieved docs that are relevant = P(relevant|retrieved)
● Recall: fraction of relevant docs that are retrieved = P(retrieved|relevant)

               Relevant         Nonrelevant
Retrieved      True positives   False positives
Not retrieved  False negatives  True negatives

● Precision P = tp / (tp + fp)
● Recall R = tp / (tp + fn)

Sec. 8.3
Should we instead use the accuracy measure for evaluation?
● Given a query, an engine classifies each doc as “Relevant” or “Nonrelevant”
● The accuracy of an engine: the fraction of these classifications that are correct (tp + tn) / ( tp + fp + fn + tn)
● Accuracy is a commonly used evaluation measure in machine learning classification work
● Why is this not a very useful evaluation measure in IR?
Why not just use accuracy?

● How to build a 99.9999% accurate search engine on a low budget:

Search for: ________
0 matching results found.

● People doing information retrieval want to find something and have a certain tolerance for junk.
Precision / recall example
● We have a database with 10,000 documents in it– 8 are relevant to the current query
● The IR system returns 20 results:– R = relevant, N = not relevant
● What is the precision of the system?
– What is the precision-at-5? Precision-at-2?
● What is the recall of the system?
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
R R N N N N N N R N R N N N R N N N N R
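The example above can be checked in a few lines of Python; this is an illustrative worked answer, not lecture code.

```python
# The 20 retrieved results in rank order; R = relevant, N = not relevant
results = list("RRNNNNNNRNRNNNRNNNNR")
total_relevant = 8          # relevant docs in the whole collection

tp = results.count("R")                 # relevant docs retrieved
precision = tp / len(results)           # 6 / 20 = 0.3
recall = tp / total_relevant            # 6 / 8  = 0.75

def precision_at(k):
    """Precision computed over the top k results only."""
    return results[:k].count("R") / k

print(precision, recall)                # 0.3 0.75
print(precision_at(5), precision_at(2)) # 0.4 1.0
```

Note that the collection size of 10,000 never enters the calculation: precision and recall depend only on the retrieved set and the relevant set.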
Precision/Recall

● You can get high recall (but low precision) by retrieving all docs for all queries!
● Recall is a non-decreasing function of the number of docs retrieved
● In a good system, precision decreases as either the number of docs retrieved or recall increases
– This is not a theorem, but a result with strong empirical confirmation

Sec. 8.3
A combined measure: F

● Customers typically want a single figure of merit
● The combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean):

F = 1 / (α(1/P) + (1 − α)(1/R)) = (β² + 1)PR / (β²P + R), where β² = (1 − α)/α

● People usually use the balanced F1 measure, i.e., with β = 1 (equivalently, α = ½)
● Harmonic mean is a conservative average
– See C. J. van Rijsbergen, Information Retrieval

Sec. 8.3
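A quick numeric check of the formula above, using the precision and recall from the earlier worked example (P = 0.3, R = 0.75); the helper function is an illustration, not lecture code.

```python
def f_measure(p, r, beta=1.0):
    """Weighted F measure; beta = 1 gives the balanced F1."""
    b2 = beta ** 2
    return (b2 + 1) * p * r / (b2 * p + r)

p, r = 0.3, 0.75
f1 = f_measure(p, r)       # harmonic mean of P and R
print(round(f1, 4))        # 0.4286
```

Note how the harmonic mean (≈ 0.43) sits well below the arithmetic mean of P and R (0.525): it is dragged toward the smaller of the two, which is what makes it a conservative average.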
Evaluating ranked results

● Instead of returning only some results, the system returns all results
● But ordered from best to worst
● The evaluator can then decide how many to keep
● Leads to precision-recall curves

Sec. 8.4
Interpolated precision

● Idea: If locally precision increases with increasing recall, then you should get to count that
● So take the max of the precisions to the right of each recall value
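The interpolation rule can be implemented with a single right-to-left pass over the precision-recall curve. This is a sketch with hypothetical curve points, not data from the lecture.

```python
def interpolate(points):
    """points: list of (recall, precision) pairs, sorted by recall.
    Replace each precision with the max precision at that recall or higher."""
    interp, best = [], 0.0
    for recall, prec in reversed(points):   # walk from high recall to low
        best = max(best, prec)
        interp.append((recall, best))
    return list(reversed(interp))

# Hypothetical precision-recall points for illustration
curve = [(0.1, 1.0), (0.2, 0.5), (0.3, 0.67), (0.4, 0.4)]
print(interpolate(curve))
# [(0.1, 1.0), (0.2, 0.67), (0.3, 0.67), (0.4, 0.4)]
```

The dip at recall 0.2 is lifted to 0.67 because a higher precision is achievable at a higher recall level, which is exactly the "count that" idea on the slide.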
Precision / recall example
● We have a database with 10,000 documents in it– 8 are relevant to the current query
● The IR system returns 20 results:– R = relevant, N = not relevant
● What is the F-measure of the system?
● What is the uninterpolated precision at 25% recall?
● What is the interpolated precision at 33% recall?
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
R R N N N N N N R N R N N N R N N N N R
Evaluation at large search engines

● Search engines have test collections of queries and hand-ranked results
● Recall is difficult to measure on the web
● Search engines often use precision at top k, e.g., k = 10
● ...or measures that reward you more for getting rank 1 right than for getting rank 10 right
– NDCG (Normalized Discounted Cumulative Gain)
● Search engines also use non-relevance-based measures
– Clickthrough on first result: not very reliable for a single clickthrough, but pretty reliable in the aggregate
– Studies of user behavior in the lab
– A/B testing

Sec. 8.6.3