Page 1:

Philosophy of IR Evaluation

Ellen Voorhees

Page 2:

Evaluation: How well does system meet information need?

• System evaluation: how good are document rankings?

• User-based evaluation: how satisfied is user?

Page 3:

Why do system evaluation?

• Allows sufficient control of variables to increase power of comparative experiments
  – laboratory tests less expensive
  – laboratory tests more diagnostic
  – laboratory tests necessarily an abstraction

• It works!
  – numerous examples of techniques developed in the laboratory that improve performance in operational settings

Page 4:

Cranfield Tradition

• Laboratory testing of retrieval systems first done in Cranfield II experiment (1963)
  – fixed document and query sets
  – evaluation based on relevance judgments
  – relevance abstracted to topical similarity

• Test collections
  – set of documents
  – set of questions
  – relevance judgments
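These three components fit in a very small data structure. The sketch below is only illustrative (the class and field names are assumptions, not part of any TREC tooling), and it assumes judgments keyed by (topic, docno).

```python
# Minimal sketch of a test collection: documents, topics, relevance judgments.
# Names and layout are illustrative assumptions, not any official format.
from dataclasses import dataclass, field

@dataclass
class TestCollection:
    docs: dict[str, str] = field(default_factory=dict)    # docno -> document text
    topics: dict[str, str] = field(default_factory=dict)  # topic id -> statement of user need
    qrels: dict[tuple[str, str], int] = field(default_factory=dict)  # (topic id, docno) -> judgment

    def relevant(self, topic_id: str) -> set[str]:
        """Docnos judged relevant for a topic."""
        return {d for (t, d), j in self.qrels.items() if t == topic_id and j > 0}
```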

Page 5:

Cranfield Tradition Assumptions

• Relevance can be approximated by topical similarity
  – relevance of one doc is independent of others
  – all relevant documents equally desirable
  – user information need doesn’t change

• Single set of judgments is representative of user population

• Complete judgments (i.e., recall is knowable)

• [Binary judgments]

Page 6:

The Case Against the Cranfield Tradition

• Relevance judgments
  – vary too much to be the basis of evaluation
  – topical similarity is not utility
  – static set of judgments cannot reflect user’s changing information need

• Recall is unknowable

• Results on test collections are not representative of operational retrieval systems

Page 7:

Response to Criticism

• Goal in Cranfield tradition is to compare systems
  • gives relative scores of evaluation measures, not absolute
  • differences in relevance judgments matter only if relative measures based on those judgments change

• Realism is a concern
  • historically concern has been collection size
  • for TREC and similar collections, bigger concern is realism of topic statement

Page 8:

Using Pooling to Create Large Test Collections

1. Assessors create topics.

2. A variety of different systems retrieve the top 1000 documents for each topic.

3. Form pools of unique documents from all submissions, which the assessors judge for relevance (sketched in code below).

4. Systems are evaluated using the relevance judgments.
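A minimal sketch of the pooling step, under the assumption that each submitted run is a mapping from topic id to a ranked list of docnos; the pool depth is a parameter (the judgment diagram on a later slide uses the top 100). This is an illustration of the idea, not NIST's actual pooling code.

```python
# Pooling sketch: the pool for a topic is the set of unique documents that
# appear in the top `depth` ranks of any submitted run for that topic.
def build_pools(runs, depth=100):
    """runs: iterable of dict[topic_id, list of docnos in rank order]."""
    pools = {}
    for run in runs:
        for topic, ranking in run.items():
            pools.setdefault(topic, set()).update(ranking[:depth])
    return pools  # dict[topic_id, set of docnos] handed to the assessors to judge
```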

Page 9:

Documents

• Must be representative of real task of interest
  – genre
  – diversity (subjects, style, vocabulary)
  – amount
  – full text vs. abstract

Page 10:

Topics

• Distinguish between statement of user need (topic) & system data structure (query)
  – topic gives criteria for relevance
  – allows for different query construction techniques (illustrated below)
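To make the distinction concrete, here is a hypothetical topic statement with TREC-style title, description, and narrative fields (the wording is invented for illustration, not an actual TREC topic), together with two different query construction techniques applied to the same topic.

```python
# Hypothetical topic statement and two query construction techniques.
# The topic text below is invented for illustration only.
topic = {
    "num": "401",
    "title": "commercial satellite launches",
    "desc": "Identify documents that discuss commercial ventures to launch satellites.",
    "narr": "Relevant documents describe companies, costs, or launch schedules; "
            "documents about purely government launches are not relevant.",
}

def title_only_query(t):
    """Short query built from the title field alone."""
    return t["title"]

def full_statement_query(t):
    """Longer query that also uses the description of the information need."""
    return f'{t["title"]} {t["desc"]}'
```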

Page 11:

Creating Relevance Judgments

[Diagram: the top 100 documents from each run (Run A, Run B, ...) are merged into per-topic pools (topics 401, 402, 403); within each pool the docnos are alphabetized before being judged.]
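The judged pools end up as relevance judgments, commonly distributed as a "qrels" file with one line per judged (topic, document) pair: topic id, an unused iteration field, docno, and the judgment. The docnos below are placeholders and the parsing code is a minimal sketch, not official tooling.

```python
# Minimal qrels parsing sketch. Each line: topic-id  iteration  docno  judgment.
# The docnos are placeholders, not identifiers from a real collection.
SAMPLE_QRELS = """\
401 0 DOC-000123 1
401 0 DOC-000456 0
402 0 DOC-007890 1
"""

def parse_qrels(text):
    """Return dict[topic_id, set of docnos judged relevant]."""
    relevant = {}
    for line in text.strip().splitlines():
        topic, _iteration, docno, judgment = line.split()
        if int(judgment) > 0:
            relevant.setdefault(topic, set()).add(docno)
    return relevant
```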

Page 12:

Test Collection Reliability

• Recap
  • test collections are abstractions of operational retrieval settings used to explore the relative merits of different retrieval strategies
  • test collections are reliable if they predict the relative worth of different approaches

• Two dimensions to explore
  • inconsistency: differences in relevance judgments caused by using different assessors
  • incompleteness: violation of assumption that all documents are judged for all test queries

Page 13:

Inconsistency

• Most frequently cited “problem” of test collections
  – undeniably true that relevance is highly subjective; judgments vary by assessor and for same assessor over time ...
  – … but no evidence that these differences affect comparative evaluation of systems

Page 14:

Experiment:

• Given three independent sets of judgments for each of 48 TREC-4 topics

• Rank the TREC-4 runs by mean average precision as evaluated using different combinations of judgments

• Compute correlation among run rankings
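A sketch of that methodology under stated assumptions: mean average precision (MAP) is computed for every run against each qrel set, runs are ranked by MAP, and the rankings are compared with a rank correlation (Kendall's tau is used below; the slide says only "correlation"). The helper names are illustrative.

```python
# Sketch: score runs by MAP under different qrel sets and correlate the rankings.
from itertools import combinations

def average_precision(ranking, relevant):
    """ranking: docnos in rank order; relevant: set of relevant docnos for the topic."""
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(run, qrels):
    """run: dict[topic, ranking]; qrels: dict[topic, set of relevant docnos]."""
    return sum(average_precision(r, qrels.get(t, set())) for t, r in run.items()) / len(run)

def kendall_tau(order_a, order_b):
    """order_a, order_b: the same run names, each listed best-to-worst (no ties)."""
    pos = {name: i for i, name in enumerate(order_b)}
    concordant = discordant = 0
    for x, y in combinations(order_a, 2):   # x is ranked above y in order_a
        if pos[x] < pos[y]:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)
```

Ranking the runs under a given qrel set is just sorting their names by MAP; two such orderings are then passed to kendall_tau.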

Page 15:

Average Precision by Qrel

[Figure: average precision of each system (run) under the judgment sets labeled Mean, Original, Union, and Intersection; y-axis (system average precision) 0 to 0.4.]

Page 16:

Effect of Different Judgments

• Similar highly-correlated results found using
  • different query sets
  • different evaluation measures
  • different groups of assessors
  • single opinion vs. group opinion judgments

• Conclusion: comparative results are stable despite the idiosyncratic nature of relevance judgments

Page 17:

Incompleteness

• Relatively new concern regarding test collection quality
  – early test collections were small enough to have complete judgments
  – current collections can have only a small portion examined for relevance for each query; portion judged is usually selected by pooling

Page 18:

Incompleteness

• Study by Zobel [SIGIR-98]:
  – Quality of relevance judgments does depend on pool depth and diversity
  – TREC judgments not complete
    • additional relevant documents distributed roughly uniformly across systems but highly skewed across topics
  – TREC ad hoc collections not biased against systems that do not contribute to the pools (see the sketch below)
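The bias finding is typically checked with a leave-out-uniques test: re-score each group's run after removing the relevant documents that only that group contributed to the pool, and look at how much MAP drops (this appears to be what the next two slides plot). The sketch below illustrates that style of test, not the study's actual code; the argument names are assumptions, and mean_ap is any MAP function like the one sketched earlier.

```python
# Leave-out-uniques sketch: how much does a group's MAP change when the relevant
# documents only that group contributed to the pool are removed from the qrels?
def leave_out_uniques_effect(group_runs, qrels, contributed, mean_ap):
    """group_runs: dict[group, dict[topic, ranking]]
    qrels: dict[topic, set of relevant docnos]
    contributed: dict[group, dict[topic, set of docnos that group put in the pool]]
    mean_ap: a MAP function taking (run, qrels)."""
    effects = {}
    for group, run in group_runs.items():
        others = [g for g in group_runs if g != group]
        reduced = {}
        for topic, relevant in qrels.items():
            seen_elsewhere = set()
            for g in others:
                seen_elsewhere |= contributed[g].get(topic, set())
            reduced[topic] = relevant & seen_elsewhere  # drop this group's unique relevants
        effects[group] = mean_ap(run, qrels) - mean_ap(run, reduced)
    return effects  # dict[group, difference in MAP]
```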

Page 19:

Uniques Effect on Evaluation

[Figure: difference in MAP for each run; y-axis 0 to 0.05.]

Page 20:

Uniques Effect on Evaluation: Automatic Only

[Figure: for each run, the number of unique documents by group (scale 0 to 90) and the difference in MAP (scale 0 to 0.0025), automatic runs only.]

Page 21:

Incompleteness

• Adequate pool depth (and diversity) is important to building reliable test collections

• With such controls, large test collections are viable laboratory tools

• For test collections, bias is much worse than incompleteness
  – smaller, fair judgment sets always preferable to larger, potentially-biased sets
  – need to carefully evaluate effects of new pool building paradigms with respect to bias introduced

Page 22:

Cross-language Collections

• More difficult to build a cross-language collection than a monolingual collection
  – consistency harder to obtain
    • multiple assessors per topic (one per language)
    • must take care when comparing different language evaluations (e.g., cross run to mono baseline)
  – pooling harder to coordinate
    • need to have large, diverse pools for all languages
    • retrieval results are not balanced across languages
    • haven’t tended to get recall-oriented manual runs in cross-language tasks

Page 23:

Cranfield Tradition

• Test collections are abstractions, but laboratory tests are useful nonetheless
  – evaluation technology is predictive (i.e., results transfer to operational settings)
  – relevance judgments by different assessors almost always produce the same comparative results
  – adequate pools allow unbiased evaluation of unjudged runs

Page 24:

Cranfield Tradition

• Note the emphasis on comparative!!
  – absolute score of some effectiveness measure not meaningful
    • absolute score changes when assessor changes
    • query variability not accounted for
    • impact of collection size, generality not accounted for
    • theoretical maximum of 1.0 for both recall & precision not obtainable by humans
  – evaluation results are only comparable when they are from the same collection
    • a subset of a collection is a different collection
    • direct comparison of scores from two different TREC collections (e.g., scores from TRECs 7&8) is invalid