Page 1: Information Extraction and Named Entity Recognition

Information Extraction and Named Entity Recognition

Introducing the tasks: Getting simple structured information out of text

Page 2: Information Extraction and Named Entity Recognition

Christopher Manning

Information Extraction

• Information extraction (IE) systems
  • Find and understand limited relevant parts of texts
  • Gather information from many pieces of text
  • Produce a structured representation of relevant information:
    • relations (in the database sense), a.k.a. a knowledge base
• Goals:
  1. Organize information so that it is useful to people
  2. Put information in a semantically precise form that allows further inferences to be made by computer algorithms

Page 3: Information Extraction and Named Entity Recognition

Christopher Manning

Information Extraction (IE)

• IE systems extract clear, factual information
  • Roughly: Who did what to whom when?
• E.g.:
  • Gathering earnings, profits, board members, headquarters, etc. from company reports
    • The headquarters of BHP Billiton Limited, and the global headquarters of the combined BHP Billiton Group, are located in Melbourne, Australia.
    • headquarters(“BHP Billiton Limited”, “Melbourne, Australia”)
  • Learn drug-gene product interactions from medical research literature

Page 4: Information Extraction and Named Entity Recognition

Christopher Manning

Low-level information extraction

• Is now available – and I think popular – in applications like Apple or Google mail, and web indexing

• Often seems to be based on regular expressions and name lists
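The kind of low-level extractor described above can be sketched with a couple of regular expressions and a name list. The patterns, the toy name list, and the example text below are illustrative assumptions, not from the slides:

```python
import re

# Toy low-level IE: a few hand-written regexes plus a small name list.
DATE_RE = re.compile(r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.? \d{1,2}(?:, \d{4})?\b")
PHONE_RE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")
KNOWN_AIRLINES = {"United", "Delta", "Qantas"}   # stand-in for a name list

def extract(text):
    """Return simple structured fields found by regexes and list lookup."""
    return {
        "dates": DATE_RE.findall(text),
        "phones": PHONE_RE.findall(text),
        "airlines": [w for w in text.split() if w.strip(".,") in KNOWN_AIRLINES],
    }

print(extract("Your United flight departs Mar 14, 2016. Questions? Call 650-555-0123."))
# {'dates': ['Mar 14, 2016'], 'phones': ['650-555-0123'], 'airlines': ['United']}
```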

Page 5: Information Extraction and Named Entity Recognition

Christopher Manning

Low-level information extraction

Page 6: Information Extraction and Named Entity Recognition

Christopher Manning

Named Entity Recognition (NER)

• A very important sub-task: find and classify names in text, for example:

• The decision by the independent MP Andrew Wilkie to withdraw his support for the minority Labor government sounded dramatic but it should not further threaten its stability. When, after the 2010 election, Wilkie, Rob Oakeshott, Tony Windsor and the Greens agreed to support Labor, they gave just two guarantees: confidence and supply.

Page 7: Information Extraction and Named Entity Recognition

Christopher Manning

Named Entity Recognition (NER)

• A very important sub-task: find and classify names in text
• (Same example text as the previous slide, with the names highlighted.)

Page 8: Information Extraction and Named Entity Recognition

Christopher Manning

Named Entity Recognition (NER)

• A very important sub-task: find and classify names in text
• (Same example text, with each name labeled by its class: Person, Date, Location, Organization.)

Page 9: Information Extraction and Named Entity Recognition

Christopher Manning

Named Entity Recognition (NER)

• The uses:
  • Named entities can be indexed, linked off, etc.
  • Sentiment can be attributed to companies or products
  • A lot of IE relations are associations between named entities
  • For question answering, answers are often named entities
• Concretely:
  • Many web pages tag various entities, with links to bio or topic pages, etc.
    • Reuters’ OpenCalais, Evri, AlchemyAPI, Yahoo’s Term Extraction, …
  • Apple/Google/Microsoft/… smart recognizers for document content

Page 10: Information Extraction and Named Entity Recognition

Evaluation of Named Entity Recognition

The extension of Precision, Recall, and the F measure to sequences

Page 11: Information Extraction and Named Entity Recognition

Christopher Manning

The Named Entity Recognition Task

Task: Predict entities in a text

  Foreign     ORG
  Ministry    ORG
  spokesman   O
  Shen        PER
  Guofang     PER
  told        O
  Reuters     ORG
  …           …

Standard evaluation is per entity, not per token.

Page 12: Information Extraction and Named Entity Recognition

Christopher Manning

Precision/Recall/F1 for IE/NER

• Recall and precision are straightforward for tasks like IR and text categorization, where there is only one grain size (documents)
• The measure behaves a bit funnily for IE/NER when there are boundary errors (which are common):
  • First Bank of Chicago announced earnings …
  • Tagging only “Bank of Chicago” instead of “First Bank of Chicago” counts as both a false positive and a false negative
  • Selecting nothing would have been better
• Some other metrics (e.g., the MUC scorer) give partial credit (according to complex rules)
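Per-entity scoring can be made concrete with a few lines of code. This is my own illustration, not from the slides: entities are compared as exact (start, end, type) spans, so a boundary error yields both a false positive and a false negative.

```python
def entity_prf1(gold, pred):
    """Entity-level precision/recall/F1 over sets of (start, end, type) spans."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)        # exact span+type matches only
    fp = len(pred - gold)        # predicted entities not in gold
    fn = len(gold - pred)        # gold entities that were missed
    p = tp / (tp + fp) if pred else 0.0
    r = tp / (tp + fn) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Boundary error: gold entity "First Bank of Chicago" (tokens 0-3),
# system predicts only "Bank of Chicago" (tokens 1-3): one fp and one fn.
print(entity_prf1(gold=[(0, 3, "ORG")], pred=[(1, 3, "ORG")]))   # (0.0, 0.0, 0.0)
```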

Page 13: Information Extraction and Named Entity Recognition

Sequence Models for Named Entity Recognition

Page 14: Information Extraction and Named Entity Recognition

Christopher Manning

The ML sequence model approach to NER

Training:
1. Collect a set of representative training documents
2. Label each token for its entity class or other (O)
3. Design feature extractors appropriate to the text and classes
4. Train a sequence classifier to predict the labels from the data

Testing:
1. Receive a set of testing documents
2. Run sequence model inference to label each token
3. Appropriately output the recognized entities

Page 15: Information Extraction and Named Entity Recognition

Christopher Manning

Encoding classes for sequence labeling

  Token      IO encoding   IOB encoding
  Fred       PER           B-PER
  showed     O             O
  Sue        PER           B-PER
  Mengqiu    PER           B-PER
  Huang      PER           I-PER
  ’s         O             O
  new        O             O
  painting   O             O
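A short sketch of why the distinction matters (my own illustration): converting plain IO labels to IOB mechanically cannot separate two adjacent entities of the same class, which is exactly what the B- prefix in the gold IOB column above fixes.

```python
def io_to_iob(labels):
    """Naively convert IO labels ('PER', 'O', ...) to IOB ('B-PER'/'I-PER'/'O')."""
    iob, prev = [], "O"
    for lab in labels:
        if lab == "O":
            iob.append("O")
        elif lab == prev:
            iob.append("I-" + lab)   # same class as previous token: continue the entity
        else:
            iob.append("B-" + lab)   # class change: start a new entity
        prev = lab
    return iob

tokens = ["Fred", "showed", "Sue", "Mengqiu", "Huang", "'s", "new", "painting"]
io     = ["PER",  "O",      "PER", "PER",     "PER",   "O",  "O",   "O"]
print(list(zip(tokens, io_to_iob(io))))
# Sue/Mengqiu/Huang come out as B-PER, I-PER, I-PER: the two people
# ("Sue" and "Mengqiu Huang") get merged, which is why the gold IOB
# labels above mark Mengqiu with B-PER.
```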

Page 16: Information Extraction and Named Entity Recognition

Christopher Manning

Features for sequence labeling

• Words
  • Current word (essentially like a learned dictionary)
  • Previous/next word (context)
• Other kinds of inferred linguistic classification
  • Part-of-speech tags
• Label context
  • Previous (and perhaps next) label

Page 17: Information Extraction and Named Entity Recognition

Christopher Manning

Features: Word substrings

Example words: Cotrimoxazole, Wethersfield, Alien Fury: Countdown to Invasion

(Charts on the slide show how word substrings such as “oxa”, “:”, and “field” distribute across entity classes, making them strong features.)

Page 18: Information Extraction and Named Entity Recognition

Christopher Manning

Features: Word shapes

• Word Shapes
  • Map words to a simplified representation that encodes attributes such as length, capitalization, numerals, Greek letters, internal punctuation, etc.

  Word               Shape
  Varicella-zoster   Xx-xxx
  mRNA               xXXX
  CPA1               XXXd
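A minimal word-shape function, assuming one particular scheme (uppercase to "X", lowercase to "x", digit to "d", with long runs capped); the slide's own scheme is more aggressive, as the Xx-xxx example shows:

```python
def word_shape(word, max_len=4):
    """Map a word to a coarse shape: uppercase -> 'X', lowercase -> 'x',
    digits -> 'd', other characters kept as-is; runs of the same shape
    character longer than max_len are collapsed."""
    shape = ["X" if c.isupper() else "x" if c.islower() else "d" if c.isdigit() else c
             for c in word]
    out = []
    for ch in shape:
        if len(out) >= max_len and ch == out[-1] and all(c == ch for c in out[-max_len:]):
            continue   # drop characters beyond a run of length max_len
        out.append(ch)
    return "".join(out)

for w in ["Varicella-zoster", "mRNA", "CPA1"]:
    print(w, "->", word_shape(w))
# Varicella-zoster -> Xxxxx-xxxx   (this scheme caps runs at 4; the slide's is shorter: Xx-xxx)
# mRNA -> xXXX
# CPA1 -> XXXd
```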

Page 19: Information Extraction and Named Entity Recognition

Maximum entropy sequence models

Maximum entropy Markov models (MEMMs) or Conditional Markov models

Page 20: Information Extraction and Named Entity Recognition

Christopher Manning

Sequence problems

• Many problems in NLP have data which is a sequence of characters, words, phrases, lines, or sentences …

• We can think of our task as one of labeling each item

POS tagging:

  Chasing  opportunity  in  an  age  of  upheaval
  VBG      NN           IN  DT  NN   IN  NN

Word segmentation (B/I label per character):

  而 相 对 于 这 些 品 牌 的 价
  B  B  I  I  B  I  B  I  B  B

Named entity recognition:

  Murdoch  discusses  future  of  News  Corp.
  PERS     O          O       O   ORG   ORG

Text segmentation (e.g., labeling sentences as Question/Answer): Q A Q A A A Q A

Page 21: Information Extraction and Named Entity Recognition

Christopher Manning

MEMM inference in systems

• For a Conditional Markov Model (CMM) a.k.a. a Maximum Entropy Markov Model (MEMM), the classifier makes a single decision at a time, conditioned on evidence from observations and previous decisions

• A larger space of sequences is usually explored via search

Decision point (predict the tag at position 0):

  Position:  -3    -2    -1    0      +1
  Tag:       DT    NNP   VBD   ???    ???
  Word:      The   Dow   fell  22.6   %

Local context features:

  W0         22.6
  W+1        %
  W-1        fell
  T-1        VBD
  T-1-T-2    NNP-VBD
  hasDigit?  true
  …          …

(Ratnaparkhi 1996; Toutanova et al. 2003, etc.)
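A sketch of how the local-context features in the table above might be computed for one decision point. The exact feature templates here are an assumption for illustration; real taggers use many more.

```python
def local_features(words, tags, i):
    """Features for the decision at position i: current/previous/next word,
    previous tag(s), and a simple word-internal test."""
    return {
        "W0=" + words[i]: 1,
        "W+1=" + (words[i + 1] if i + 1 < len(words) else "<END>"): 1,
        "W-1=" + (words[i - 1] if i > 0 else "<START>"): 1,
        "T-1=" + (tags[i - 1] if i > 0 else "<START>"): 1,
        # tag bigram of the two previous positions (T-2 then T-1), named as on the slide
        "T-1-T-2=" + (tags[i - 2] if i > 1 else "<START>") + "-"
                   + (tags[i - 1] if i > 0 else "<START>"): 1,
        "hasDigit=" + str(any(c.isdigit() for c in words[i])): 1,
    }

words = ["The", "Dow", "fell", "22.6", "%"]
tags = ["DT", "NNP", "VBD"]      # labels decided so far, left to right
print(local_features(words, tags, 3))
# {'W0=22.6': 1, 'W+1=%': 1, 'W-1=fell': 1, 'T-1=VBD': 1, 'T-1-T-2=NNP-VBD': 1, 'hasDigit=True': 1}
```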

Page 22: Information Extraction and Named Entity Recognition

Christopher Manning

Example: POS Tagging

• Scoring individual labeling decisions is no more complex than standard classification decisions
  • We have some assumed labels to use for prior positions
  • We use features of those and the observed data (which can include current, previous, and next words) to predict the current label

(Same decision point and local-context feature table as the previous slide.)

(Ratnaparkhi 1996; Toutanova et al. 2003, etc.)

Page 23: Information Extraction and Named Entity Recognition

Christopher Manning

Example: POS Tagging

• POS tagging features can include:
  • Current, previous, next words in isolation or together
  • Previous one, two, three tags
  • Word-internal features: word types, suffixes, dashes, etc.

(Same decision point and local-context feature table as the previous slides.)

(Ratnaparkhi 1996; Toutanova et al. 2003, etc.)

Page 24: Information Extraction and Named Entity Recognition

Christopher Manning

Inference in Systems

(Diagram: at the sequence level, sequence data feeds sequence model inference; at the local level, local data goes through feature extraction to features and a label. The classifier type is a maximum entropy model, with smoothing via quadratic penalties and optimization by conjugate gradient.)

Page 25: Information Extraction and Named Entity Recognition

Christopher Manning

Greedy Inference

• Greedy inference:
  • We just start at the left, and use our classifier at each position to assign a label
  • The classifier can depend on previous labeling decisions as well as observed data
• Advantages:
  • Fast, no extra memory requirements
  • Very easy to implement
  • With rich features including observations to the right, it may perform quite well
• Disadvantage:
  • Greedy. We may make errors we cannot recover from
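A minimal sketch of greedy decoding with a stand-in scorer. The `score(words, i, prev, lab)` interface and the `toy_score` function are assumptions for illustration, not the actual trained classifier.

```python
def greedy_decode(words, labels, score):
    """Greedy left-to-right inference: at each position pick the best label
    given the observed words and the labels already assigned."""
    out = []
    for i in range(len(words)):
        prev = out[-1] if out else "<START>"
        out.append(max(labels, key=lambda lab: score(words, i, prev, lab)))
    return out

def toy_score(words, i, prev, lab):
    # Toy stand-in for a trained classifier: predict PER after a title word.
    if i > 0 and words[i - 1] in {"Mr.", "Dr.", "Ms."} and words[i][0].isupper():
        return 1.0 if lab == "PER" else 0.0
    return 1.0 if lab == "O" else 0.0

print(greedy_decode(["Dr.", "Smith", "spoke", "."], ["O", "PER"], toy_score))
# ['O', 'PER', 'O', 'O']
```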

Page 26: Information Extraction and Named Entity Recognition

Christopher Manning

Beam Inference

• Beam inference:
  • At each position keep the top k complete sequences
  • Extend each sequence in each local way
  • The extensions compete for the k slots at the next position
• Advantages:
  • Fast; beam sizes of 3–5 are almost as good as exact inference in many cases
  • Easy to implement (no dynamic programming required)
• Disadvantage:
  • Inexact: the globally best sequence can fall off the beam
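The same stand-in scorer can be plugged into a beam. This sketch keeps the top k partial sequences at each position and reuses `toy_score` from the greedy sketch above.

```python
def beam_decode(words, labels, score, k=3):
    """Beam inference: extend each of the top-k partial sequences with every
    label, then re-prune to the best k by total score."""
    beam = [((), 0.0)]                       # (labels_so_far, total_score)
    for i in range(len(words)):
        candidates = []
        for seq, total in beam:
            prev = seq[-1] if seq else "<START>"
            for lab in labels:
                candidates.append((seq + (lab,), total + score(words, i, prev, lab)))
        beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return list(beam[0][0])                  # best complete sequence found

print(beam_decode(["Dr.", "Smith", "spoke", "."], ["O", "PER"], toy_score, k=2))
# ['O', 'PER', 'O', 'O']
```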

Page 27: Information Extraction and Named Entity Recognition

Christopher Manning

Viterbi Inference

• Viterbi inference:
  • Dynamic programming or memoization
  • Requires a small window of state influence (e.g., past two states are relevant)
• Advantage:
  • Exact: the global best sequence is returned
• Disadvantage:
  • Harder to implement long-distance state-state interactions (but beam inference tends not to allow long-distance resurrection of sequences anyway)
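And a first-order Viterbi sketch over the same hypothetical `score(words, i, prev, lab)` interface (again reusing `toy_score`); because only the previous label matters here, dynamic programming returns the exact best sequence.

```python
def viterbi_decode(words, labels, score):
    """Exact best sequence when the scorer looks only at the previous label."""
    # best[i][lab] = (score of the best sequence ending in lab at i, backpointer)
    best = [{lab: (score(words, 0, "<START>", lab), None) for lab in labels}]
    for i in range(1, len(words)):
        col = {}
        for lab in labels:
            prev = max(labels, key=lambda p: best[i - 1][p][0] + score(words, i, p, lab))
            col[lab] = (best[i - 1][prev][0] + score(words, i, prev, lab), prev)
        best.append(col)
    lab = max(labels, key=lambda l: best[-1][l][0])    # best final label
    seq = [lab]
    for i in range(len(words) - 1, 0, -1):             # follow backpointers
        lab = best[i][lab][1]
        seq.append(lab)
    return list(reversed(seq))

print(viterbi_decode(["Dr.", "Smith", "spoke", "."], ["O", "PER"], toy_score))
# ['O', 'PER', 'O', 'O']
```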

Page 28: Information Extraction and Named Entity Recognition

Christopher Manning

CRFs [Lafferty, Pereira, and McCallum 2001]

• Another sequence model: Conditional Random Fields (CRFs)
• A whole-sequence conditional model rather than a chaining of local models:

  P(c|d, λ) = exp(Σi λi fi(c, d)) / Σc′ exp(Σi λi fi(c′, d))    [c, d: sequences]

• The space of c’s is now the space of sequences
  • But if the features fi remain local, the conditional sequence likelihood can be calculated exactly using dynamic programming
• Training is slower, but CRFs avoid causal-competition biases
• These (or a variant using a max-margin criterion) are seen as the state of the art these days … but in practice usually work much the same as MEMMs.

Page 29: Information Extraction and Named Entity Recognition

Relation Extraction

What is relation extraction?

Page 30: Information Extraction and Named Entity Recognition

Dan Jurafsky

Extracting relations from text

Page 31: Information Extraction and Named Entity Recognition

Dan Jurafsky

Extracting Relation Triples from Text

The Leland Stanford Junior University, commonly referred to as Stanford University or Stanford, is an American private research university located in Stanford, California … near Palo Alto, California … Leland Stanford … founded the university in 1891

  Stanford   EQ           Leland Stanford Junior University
  Stanford   LOC-IN       California
  Stanford   IS-A         research university
  Stanford   LOC-NEAR     Palo Alto
  Stanford   FOUNDED-IN   1891
  Stanford   FOUNDER      Leland Stanford

Page 32: Information Extraction and Named Entity Recognition

Dan Jurafsky

Why Relation Extraction?

• Create new structured knowledge bases, useful for any app
• Augment current knowledge bases
  • Adding words to the WordNet thesaurus, facts to Freebase or DBpedia
• Support question answering
  • The granddaughter of which actor starred in the movie “E.T.”?
    (acted-in ?x “E.T.”)
    (is-a ?y actor)
    (granddaughter-of ?x ?y)
• But which relations should we extract?

Page 33: Information Extraction and Named Entity Recognition

Dan Jurafsky

Automated Content Extraction (ACE)

17 relations from the 2008 “Relation Extraction Task”

Page 34: Information Extraction and Named Entity Recognition

Dan Jurafsky

Automated Content Extraction (ACE)

• Physical-Located         PER-GPE   He was in Tennessee
• Part-Whole-Subsidiary    ORG-ORG   XYZ, the parent company of ABC
• Person-Social-Family     PER-PER   John’s wife Yoko
• Org-AFF-Founder          PER-ORG   Steve Jobs, co-founder of Apple…

Page 35: Information Extraction and Named Entity Recognition

Dan Jurafsky

UMLS: Unified Medical Language System

• 134 entity types, 54 relations

  Injury                    disrupts      Physiological Function
  Bodily Location           location-of   Biologic Function
  Anatomical Structure      part-of       Organism
  Pharmacologic Substance   causes        Pathological Function
  Pharmacologic Substance   treats        Pathologic Function

Page 36: Information Extraction and Named Entity Recognition

Dan Jurafsky

Extracting UMLS relations from a sentence

“Doppler echocardiography can be used to diagnose left anterior descending artery stenosis in patients with type 2 diabetes”

  Echocardiography, Doppler   DIAGNOSES   Acquired stenosis

Page 37: Information Extraction and Named Entity Recognition

Dan Jurafsky

Databases of Wikipedia Relations

Relations extracted from the Wikipedia Infobox:

  Stanford   state   California
  Stanford   motto   “Die Luft der Freiheit weht”
  …

Page 38: Information Extraction and Named Entity Recognition

Dan Jurafsky

Relation databases that draw from Wikipedia

• Resource Description Framework (RDF) triples:

  subject                    predicate              object
  Golden Gate Park           location               San Francisco
  dbpedia:Golden_Gate_Park   dbpedia-owl:location   dbpedia:San_Francisco

• DBpedia: 1 billion RDF triples, 385 million from English Wikipedia
• Frequent Freebase relations:
  people/person/nationality, location/location/contains
  people/person/profession, people/person/place-of-birth
  biology/organism_higher_classification, film/film/genre

Page 39: Information Extraction and Named Entity Recognition

Dan Jurafsky

Ontological relations

• IS-A (hypernym): subsumption between classes
  • Giraffe IS-A ruminant IS-A ungulate IS-A mammal IS-A vertebrate IS-A animal…
• Instance-of: relation between individual and class
  • San Francisco instance-of city

Examples from the WordNet Thesaurus

Page 40: Information Extraction and Named Entity Recognition

Dan Jurafsky

How to build relation extractors

1. Hand-written patterns
2. Supervised machine learning
3. Semi-supervised and unsupervised
   • Bootstrapping (using seeds)
   • Distant supervision
   • Unsupervised learning from the web

Page 41: Information Extraction and Named Entity Recognition

Relation Extraction

Using patterns to extract relations

Page 42: Information Extraction and Named Entity Recognition

Dan Jurafsky

Rules for extracting IS-A relation

Early intuition from Hearst (1992)
• “Agar is a substance prepared from a mixture of red algae, such as Gelidium, for laboratory or industrial use”
• What does Gelidium mean?
• How do you know?

Page 43: Information Extraction and Named Entity Recognition

Dan Jurafsky

Rules for extracting IS-A relation

(Same example as the previous slide, with the “such as” pattern highlighted.)

Page 44: Information Extraction and Named Entity Recognition

Dan Jurafsky

Hearst’s Patterns for extracting IS-A relations

(Hearst, 1992): Automatic Acquisition of Hyponyms

“Y such as X ((, X)* (, and|or) X)”
“such Y as X”
“X or other Y”
“X and other Y”
“Y including X”
“Y, especially X”

Page 45: Information Extraction and Named Entity Recognition

Dan Jurafsky

Hearst’s Patterns for extracting IS-A relations

  Hearst pattern    Example occurrences
  X and other Y     ...temples, treasuries, and other important civic buildings.
  X or other Y      Bruises, wounds, broken bones or other injuries...
  Y such as X       The bow lute, such as the Bambara ndang...
  Such Y as X       ...such authors as Herrick, Goldsmith, and Shakespeare.
  Y including X     ...common-law countries, including Canada and England...
  Y, especially X   European countries, especially France, England, and Spain...
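Two of the patterns above can be sketched as regular expressions over plain text. The crude NP pattern here is an assumption for illustration; real systems match over chunked noun phrases.

```python
import re

NP = r"[A-Za-z][\w-]*(?: [A-Za-z][\w-]*){0,3}"   # crude stand-in for an NP chunk
HEARST_PATTERNS = [
    (re.compile("(" + NP + r"),? such as (" + NP + ")"), "hyponym-of"),    # "Y such as X"
    (re.compile("(" + NP + r"),? including (" + NP + ")"), "hyponym-of"),  # "Y including X"
]

def hearst_extract(text):
    triples = []
    for pattern, rel in HEARST_PATTERNS:
        for y, x in pattern.findall(text):
            triples.append((x, rel, y))
    return triples

print(hearst_extract("Agar is prepared from a mixture of red algae, such as Gelidium."))
# [('Gelidium', 'hyponym-of', 'mixture of red algae')]
# The crude NP pattern over-captures; with proper NP chunking the hypernym would be "red algae".
```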

Page 46: Information Extraction and Named Entity Recognition

Dan Jurafsky

Extracting Richer Relations Using Rules

• Intuition: relations often hold between specific entities
  • located-in (ORGANIZATION, LOCATION)
  • founded (PERSON, ORGANIZATION)
  • cures (DRUG, DISEASE)
• Start with Named Entity tags to help extract the relation!

Page 47: Information Extraction and Named Entity Recognition

Dan Jurafsky

Named Entities aren’t quite enough. Which relations hold between 2 entities?

  DRUG → DISEASE:  Cure?  Prevent?  Cause?

Page 48: Information Extraction and Named Entity Recognition

Dan Jurafsky

What relations hold between 2 entities?

  PERSON → ORGANIZATION:  Founder?  Investor?  Member?  Employee?  President?

Page 49: Information Extraction and Named Entity Recognition

Dan Jurafsky

Extracting Richer Relations Using Rules and Named Entities

Who holds what office in what organization?

• PERSON, POSITION of ORG
  • George Marshall, Secretary of State of the United States
• PERSON (named|appointed|chose|etc.) PERSON Prep? POSITION
  • Truman appointed Marshall Secretary of State
• PERSON [be]? (named|appointed|etc.) Prep? ORG POSITION
  • George Marshall was named US Secretary of State

Page 50: Information Extraction and Named Entity Recognition

Dan Jurafsky

Hand-built patterns for relations

• Plus:
  • Human patterns tend to be high-precision
  • Can be tailored to specific domains
• Minus:
  • Human patterns are often low-recall
  • A lot of work to think of all possible patterns!
  • Don’t want to have to do this for every relation!
  • We’d like better accuracy

Page 51: Information Extraction and Named Entity Recognition

Relation Extraction

Supervised relation extraction

Page 52: Information Extraction and Named Entity Recognition

Dan Jurafsky

Supervised machine learning for relations

• Choose a set of relations we’d like to extract
• Choose a set of relevant named entities
• Find and label data
  • Choose a representative corpus
  • Label the named entities in the corpus
  • Hand-label the relations between these entities
  • Break into training, development, and test sets
• Train a classifier on the training set

Page 53: Information Extraction and Named Entity Recognition

Dan Jurafsky

How to do classification in supervised relation extraction

1. Find all pairs of named entities (usually in the same sentence)
2. Decide if the 2 entities are related
3. If yes, classify the relation

• Why the extra step?
  • Faster classification training by eliminating most pairs
  • Can use distinct feature-sets appropriate for each task

Page 54: Information Extraction and Named Entity Recognition

Dan Jurafsky

Automated Content Extraction (ACE)

17 sub-relations of 6 relations from the 2008 “Relation Extraction Task”

Page 55: Information Extraction and Named Entity Recognition

Dan Jurafsky

Relation Extraction

Classify the relation between two entities in a sentence

American Airlines, a unit of AMR, immediately matched the move, spokesman Tim Wagner said.

Candidate relations: SUBSIDIARY, FAMILY, EMPLOYMENT, FOUNDER, CITIZEN, INVENTOR, …, NIL

Page 56: Information Extraction and Named Entity Recognition

Dan Jurafsky

Word Features for Relation Extraction

• Headwords of M1 and M2, and their combination
  • Airlines   Wagner   Airlines-Wagner
• Bag of words and bigrams in M1 and M2
  • {American, Airlines, Tim, Wagner, American Airlines, Tim Wagner}
• Words or bigrams in particular positions left and right of M1/M2
  • M2: -1 spokesman
  • M2: +1 said
• Bag of words or bigrams between the two entities
  • {a, AMR, of, immediately, matched, move, spokesman, the, unit}

American Airlines, a unit of AMR, immediately matched the move, spokesman Tim Wagner said
[Mention 1: American Airlines]   [Mention 2: Tim Wagner]

Page 57: Information Extraction and Named Entity Recognition

Dan Jurafsky

Named Entity Type and Mention Level Features for Relation Extraction

• Named-entity types
  • M1: ORG
  • M2: PERSON
• Concatenation of the two named-entity types
  • ORG-PERSON
• Entity Level of M1 and M2 (NAME, NOMINAL, PRONOUN)
  • M1: NAME [“it” or “he” would be PRONOUN]
  • M2: NAME [“the company” would be NOMINAL]

American Airlines, a unit of AMR, immediately matched the move, spokesman Tim Wagner said
[Mention 1: American Airlines]   [Mention 2: Tim Wagner]

Page 58: Information Extraction and Named Entity Recognition

Dan Jurafsky

Parse Features for Relation Extraction

• Base syntactic chunk sequence from one mention to the other
  • NP NP PP VP NP NP
• Constituent path through the tree from one to the other
  • NP NP S S NP
• Dependency path
  • Airlines matched Wagner said

American Airlines, a unit of AMR, immediately matched the move, spokesman Tim Wagner said
[Mention 1: American Airlines]   [Mention 2: Tim Wagner]

Page 59: Information Extraction and Named Entity Recognition

Dan Jurafsky

Gazetteer and trigger word features for relation extraction

• Trigger list for family: kinship terms
  • parent, wife, husband, grandparent, etc. [from WordNet]
• Gazetteer:
  • Lists of useful geo or geopolitical words
    • Country name list
    • Other sub-entities

Page 60: Information Extraction and Named Entity Recognition

Dan Jurafsky

American Airlines, a unit of AMR, immediately matched the move, spokesman Tim Wagner said.

Page 61: Information Extraction and Named Entity Recognition

Dan Jurafsky

Classifiers for supervised methods

• Now you can use any classifier you like
  • MaxEnt
  • Naïve Bayes
  • SVM
  • ...
• Train it on the training set, tune on the dev set, test on the test set

Page 62: Information Extraction and Named Entity Recognition

Dan Jurafsky

Evaluation of Supervised Relation Extraction

• Compute P/R/F1 for each relation


Page 63: Information Extraction and Named Entity Recognition

Dan Jurafsky

Summary: Supervised Relation Extraction

+ Can get high accuracies with enough hand-labeled training data, if the test set is similar enough to the training set

- Labeling a large training set is expensive

- Supervised models are brittle, and don’t generalize well to different genres

Page 64: Information Extraction and Named Entity Recognition

Relation Extraction

Semi-supervised and unsupervised relation extraction

Page 65: Information Extraction and Named Entity Recognition

Dan Jurafsky

Seed-based or bootstrapping approaches to relation extraction

• No training set? Maybe you have:
  • A few seed tuples or
  • A few high-precision patterns
• Can you use those seeds to do something useful?
  • Bootstrapping: use the seeds to directly learn to populate a relation

Page 66: Information Extraction and Named Entity Recognition

Dan Jurafsky

Relation Bootstrapping (Hearst 1992)

• Gather a set of seed pairs that have relation R
• Iterate:
  1. Find sentences with these pairs
  2. Look at the context between or around the pair and generalize the context to create patterns
  3. Use the patterns to grep for more pairs

Page 67: Information Extraction and Named Entity Recognition

Dan Jurafsky

Bootstrapping

• Seed tuple: <Mark Twain, Elmira>
• Grep (Google) for the environments of the seed tuple:
  • “Mark Twain is buried in Elmira, NY.”  →  X is buried in Y
  • “The grave of Mark Twain is in Elmira”  →  The grave of X is in Y
  • “Elmira is Mark Twain’s final resting place”  →  Y is X’s final resting place
• Use those patterns to grep for new tuples
• Iterate
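A very literal sketch of that loop. The corpus, seed pair, and pattern-generalization step below are all toy assumptions; real systems generalize contexts and filter patterns much more carefully.

```python
import re

corpus = [
    "Mark Twain is buried in Elmira.",
    "The grave of Mark Twain is in Elmira",
    "Charles Dickens is buried in Westminster Abbey, London.",
]
seeds = {("Mark Twain", "Elmira")}

def learn_patterns(corpus, seeds):
    """Replace seed mentions with X/Y placeholders to get surface patterns."""
    patterns = set()
    for sent in corpus:
        for x, y in seeds:
            if x in sent and y in sent:
                patterns.add(sent.replace(x, "X").replace(y, "Y"))
    return patterns

def apply_patterns(corpus, patterns):
    """Grep the corpus with each pattern, turning X/Y into capture groups."""
    pairs = set()
    for pat in patterns:
        regex = re.escape(pat).replace("X", "(.+?)").replace("Y", "(.+)")
        for sent in corpus:
            m = re.fullmatch(regex, sent)
            if m:
                pairs.add((m.group(1), m.group(2)))
    return pairs

patterns = learn_patterns(corpus, seeds)   # {"X is buried in Y.", "The grave of X is in Y"}
print(apply_patterns(corpus, patterns))
# Recovers ('Mark Twain', 'Elmira') and adds the new pair
# ('Charles Dickens', 'Westminster Abbey, London') for the next iteration.
```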

Page 68: Information Extraction and Named Entity Recognition

Dan Jurafsky

Dipre: Extract <author,book> pairs

• Start with 5 seeds:

  Author                Book
  Isaac Asimov          The Robots of Dawn
  David Brin            Startide Rising
  James Gleick          Chaos: Making a New Science
  Charles Dickens       Great Expectations
  William Shakespeare   The Comedy of Errors

• Find instances:
  The Comedy of Errors, by William Shakespeare, was
  The Comedy of Errors, by William Shakespeare, is
  The Comedy of Errors, one of William Shakespeare's earliest attempts
  The Comedy of Errors, one of William Shakespeare's most
• Extract patterns (group by middle, take longest common prefix/suffix):
  ?x , by ?y ,
  ?x , one of ?y ‘s
• Now iterate, finding new seeds that match the patterns

Brin, Sergey. 1998. Extracting Patterns and Relations from the World Wide Web.

Page 69: Information Extraction and Named Entity Recognition

Dan Jurafsky

Snowball

• Similar iterative algorithm

• Group instances with similar prefix, middle, and suffix; extract patterns
  • But require that X and Y be named entities
  • And compute a confidence for each pattern

  Example patterns (with confidences .69 and .75):
    ORGANIZATION  {’s, in, headquarters}  LOCATION
    ORGANIZATION  {in, based}             LOCATION

  Organization   Location of Headquarters
  Microsoft      Redmond
  Exxon          Irving
  IBM            Armonk

E. Agichtein and L. Gravano. 2000. Snowball: Extracting Relations from Large Plain-Text Collections. ICDL.

Page 70: Information Extraction and Named Entity Recognition

Dan Jurafsky

Distant Supervision

• Combine bootstrapping with supervised learning
  • Instead of 5 seeds, use a large database to get a huge number of seed examples
  • Create lots of features from all these examples
  • Combine in a supervised classifier

Snow, Jurafsky, Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. NIPS 17.
Fei Wu and Daniel S. Weld. 2007. Autonomously Semantifying Wikipedia. CIKM 2007.
Mintz, Bills, Snow, Jurafsky. 2009. Distant supervision for relation extraction without labeled data. ACL 2009.

Page 71: Information Extraction and Named Entity Recognition

Dan Jurafsky

Distant supervision paradigm

• Like supervised classification:
  • Uses a classifier with lots of features
  • Supervised by detailed hand-created knowledge
  • Doesn’t require iteratively expanding patterns
• Like unsupervised classification:
  • Uses very large amounts of unlabeled data
  • Not sensitive to genre issues in the training corpus

Page 72: Information Extraction and Named Entity Recognition

Dan Jurafsky

Distantly supervised learning of relation extraction patterns

1. For each relation
2. For each tuple in a big database
3. Find sentences in a large corpus with both entities
4. Extract frequent features (parse, words, etc.)
5. Train a supervised classifier using thousands of patterns

Example for the Born-In relation:

  Seed tuples:   <Edwin Hubble, Marshfield>, <Albert Einstein, Ulm>
  Sentences:     Hubble was born in Marshfield / Einstein, born (1879), Ulm / Hubble’s birthplace in Marshfield
  Patterns:      PER was born in LOC / PER, born (XXXX), LOC / PER’s birthplace in LOC
  Classifier:    P(born-in | f1, f2, f3, …, f70000)

Page 73: Information Extraction and Named Entity Recognition

Dan Jurafsky

Unsupervised relation extraction

• Open Information Extraction:
  • Extract relations from the web with no training data, no list of relations

1. Use parsed data to train a “trustworthy tuple” classifier
2. Single-pass extract all relations between NPs, keep if trustworthy
3. Assessor ranks relations based on text redundancy

(FCI, specializes in, software development)
(Tesla, invented, coil transformer)

M. Banko, M. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction from the web. IJCAI.

Page 74: Information Extraction and Named Entity Recognition

Dan Jurafsky

Evaluation of Semi-supervised and Unsupervised Relation Extraction

• Since it extracts totally new relations from the web, there is no gold set of correct instances of relations!
  • Can’t compute precision (don’t know which ones are correct)
  • Can’t compute recall (don’t know which ones were missed)
• Instead, we can approximate precision (only)
  • Draw a random sample of relations from the output, check precision manually
• Can also compute precision at different levels of recall
  • Precision for the top 1,000 new relations, top 10,000 new relations, top 100,000
  • In each case taking a random sample of that set
• But no way to evaluate recall

Page 75: Information Extraction and Named Entity Recognition

Task: Wrapper Induction – from semi-structured/structured

Wrapper Induction

• Sometimes, the relations are structural.
  • Web pages generated by a database.
  • Tables, lists, etc.
• Wrapper induction is usually about regular relations which can be expressed by the structure of the document:
  • the item in bold in the 3rd column of the table is the price
• Handcoding a wrapper in Perl isn’t very viable
  • sites are numerous, and their surface structure mutates rapidly (around 10% failures each month)
• Wrapper induction techniques can also learn:
  • If there is a page about a research project X and there is a link near the word ‘people’ to a page that is about a person Y, then Y is a member of the project X.
  • [e.g., Tom Mitchell’s Web->KB project]

Page 76: Information Extraction and Named Entity Recognition

Amazon Book Description

….</td></tr></table><b class="sans">The Age of Spiritual Machines : When Computers Exceed Human Intelligence</b><br><font face=verdana,arial,helvetica size=-1>by <a href="/exec/obidos/search-handle-url/index=books&field-author= Kurzweil%2C%20Ray/002-6235079-4593641">Ray Kurzweil</a><br></font><br><a href="http://images.amazon.com/images/P/0140282025.01.LZZZZZZZ.jpg"><img src="http://images.amazon.com/images/P/0140282025.01.MZZZZZZZ.gif" width=90 height=140 align=left border=0></a><font face=verdana,arial,helvetica size=-1><span class="small"><span class="small"><b>List Price:</b> <span class=listprice>$14.95</span><br><b>Our Price: <font color=#990000>$11.96</font></b><br><b>You Save:</b> <font color=#990000><b>$2.99 </b>(20%)</font><br></span><p> <br>


Page 77: Information Extraction and Named Entity Recognition

Extracted Book Template

Title: The Age of Spiritual Machines : When Computers Exceed Human Intelligence
Author: Ray Kurzweil
List-Price: $14.95
Price: $11.96
…

Page 78: Information Extraction and Named Entity Recognition

Template Types

• Slots in a template are typically filled by a substring from the document.
• Some slots may have a fixed set of pre-specified possible fillers that may not occur in the text itself.
  • Job type: clerical, service, custodial, etc.
  • Company type: SEC code
• Some slots may allow multiple fillers.
  • Programming language
• Some domains may allow multiple extracted templates per document.
  • Multiple apartment listings in one ad

Page 79: Information Extraction and Named Entity Recognition

Wrappers: Simple Extraction Patterns

• Specify an item to extract for a slot using a regular expression pattern.
  • Price pattern: “\b\$\d+(\.\d{2})?\b”
• May require a preceding (pre-filler) pattern to identify the proper context.
  • Amazon list price:
    • Pre-filler pattern: “<b>List Price:</b> <span class=listprice>”
    • Filler pattern: “\$\d+(\.\d{2})?\b”
• May require a succeeding (post-filler) pattern to identify the end of the filler.
  • Amazon list price:
    • Pre-filler pattern: “<b>List Price:</b> <span class=listprice>”
    • Filler pattern: “\$\d+(\.\d{2})?\b”
    • Post-filler pattern: “</span>”

Page 80: Information Extraction and Named Entity Recognition

Simple Template Extraction

• Extract slots in order, starting the search for the filler of the n+1st slot where the filler for the nth slot ended. Assumes slots always appear in a fixed order.
  • Title  Author  List price  …
• Make patterns specific enough to identify each filler, always starting from the beginning of the document.

Page 81: Information Extraction and Named Entity Recognition

Pre-Specified Filler Extraction

• If a slot has a fixed set of pre-specified possible fillers, text categorization can be used to fill the slot.
  • Job category
  • Company type
• Treat each of the possible values of the slot as a category, and classify the entire document to determine the correct filler.

Page 82: Information Extraction and Named Entity Recognition

Wrapper tool-kits

• Wrapper toolkits: specialized programming environments for writing & debugging wrappers by hand
• Examples:
  • World Wide Web Wrapper Factory (W4F) [db.cis.upenn.edu/W4F]
  • Java Extraction & Dissemination of Information (JEDI) [www.darmstadt.gmd.de/oasys/projects/jedi]
  • Junglee Corporation

Page 83: Information Extraction and Named Entity Recognition

Wrapper induction

Highly regular source documents
  → Relatively simple extraction patterns
  → Efficient learning algorithm

• Writing accurate patterns for each slot for each domain (e.g. each web site) requires laborious software engineering.
• The alternative is to use machine learning:
  • Build a training set of documents paired with human-produced filled extraction templates.
  • Learn extraction patterns for each slot using an appropriate machine learning algorithm.

Page 84: Information Extraction and Named Entity Recognition

Wrapper induction: Delimiter-based extraction

Use <B>, </B>, <I>, </I> for extraction:

<HTML><TITLE>Some Country Codes</TITLE>
<B>Congo</B> <I>242</I><BR>
<B>Egypt</B> <I>20</I><BR>
<B>Belize</B> <I>501</I><BR>
<B>Spain</B> <I>34</I><BR>
</BODY></HTML>
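A minimal extractor for the page above, using the four delimiter strings as left/right markers for the two slots. This is an illustration of applying an LR wrapper, not an induction algorithm, and it assumes well-formed rows.

```python
def lr_extract(page, l1, r1, l2, r2):
    """Repeatedly scan for l1 ... r1 (slot 1) then l2 ... r2 (slot 2)."""
    tuples, pos = [], 0
    while True:
        start = page.find(l1, pos)
        if start == -1:
            break
        start += len(l1)
        end = page.find(r1, start)
        country = page[start:end]
        start = page.find(l2, end) + len(l2)
        end = page.find(r2, start)
        code = page[start:end]
        tuples.append((country, code))
        pos = end
    return tuples

page = ('<HTML><TITLE>Some Country Codes</TITLE>'
        '<B>Congo</B> <I>242</I><BR><B>Egypt</B> <I>20</I><BR>'
        '<B>Belize</B> <I>501</I><BR><B>Spain</B> <I>34</I><BR></BODY></HTML>')
print(lr_extract(page, "<B>", "</B>", "<I>", "</I>"))
# [('Congo', '242'), ('Egypt', '20'), ('Belize', '501'), ('Spain', '34')]
```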

Page 85: Information Extraction and Named Entity Recognition

Learning LR wrappers

• An LR wrapper is a set of delimiter strings l1, r1, …, lK, rK
• Example: find 4 strings l1 = <B>, r1 = </B>, l2 = <I>, r2 = </I>
• labeled pages → wrapper

(Diagram: several labeled copies of the country-code page from the previous slide, from which the delimiter strings are induced.)

Page 86: Information Extraction and Named Entity Recognition

A problem with LR wrappers

Distracting text in the head and tail:

<HTML><TITLE>Some Country Codes</TITLE>
<BODY><B>Some Country Codes</B><P>
<B>Congo</B> <I>242</I><BR>
<B>Egypt</B> <I>20</I><BR>
<B>Belize</B> <I>501</I><BR>
<B>Spain</B> <I>34</I><BR>
<HR><B>End</B></BODY></HTML>

(The extra <B>…</B> text in the head and tail would be picked up by a simple LR wrapper.)

Page 87: Information Extraction and Named Entity Recognition

One (of many) solutions: Head-Left-Right-Tail (HLRT) wrappers

• Ignore the page’s head and tail: extraction only runs between the end of the head and the start of the tail.

  head:  <HTML><TITLE>Some Country Codes</TITLE><BODY><B>Some Country Codes</B><P>
  body:  <B>Congo</B> <I>242</I><BR> <B>Egypt</B> <I>20</I><BR> <B>Belize</B> <I>501</I><BR> <B>Spain</B> <I>34</I><BR>
  tail:  <HR><B>End</B></BODY></HTML>

Page 88: Information Extraction and Named Entity Recognition

More sophisticated wrappers

• LR and HLRT wrappers are extremely simple (though useful for ~2/3 of real Web sites!)
• Recent wrapper induction research has explored more expressive wrapper classes [Muslea et al., Agents-98; Hsu et al., JIS-98; Kushmerick, AAAI-1999; Cohen, AAAI-1999; Minton et al., AAAI-2000]
  • Disjunctive delimiters
  • Multiple attribute orderings
  • Missing attributes
  • Multiple-valued attributes
  • Hierarchically nested data
  • Wrapper verification and maintenance

Page 89: Information Extraction and Named Entity Recognition

Boosted wrapper induction

• Wrapper induction is ideal for rigidly-structured machine-generated HTML…
• … or is it?! Can we use simple patterns to extract from natural language documents?

  … Name: Dr. Jeffrey D. Hermes …
  … Who: Professor Manfred Paul …
  … will be given by Dr. R. J. Pangborn …
  … Ms. Scott will be speaking …
  … Karen Shriver, Dept. of ...
  … Maria Klawe, University of ...

Page 90: Information Extraction and Named Entity Recognition

BWI: The basic idea

• Learn “wrapper-like” patterns for texts
  • pattern = exact token sequence
• Learn many such “weak” patterns
• Combine with boosting to build a “strong” ensemble pattern
  • Boosting is a popular machine learning method where many weak learners are combined
• Demo: www.smi.ucd.ie/bwi
• Not all natural text is sufficiently regular for exact string matching to work well!!

Page 91: Information Extraction and Named Entity Recognition

Learning for IE

• Writing accurate patterns for each slot for each domain (e.g. each web site) requires laborious software engineering.
• The alternative is to use machine learning:
  • Build a training set of documents paired with human-produced filled extraction templates.
  • Learn extraction patterns for each slot using an appropriate machine learning algorithm.
• Califf & Mooney’s Rapier system learns three regex-style patterns for each slot:
  • Pre-filler pattern
  • Filler pattern
  • Post-filler pattern

Page 92: Information Extraction and Named Entity Recognition

Rapier rule matching example

RAPIER rules for extracting “transaction price”:

  “…sold to the bank for an undisclosed amount…”
  POS:     vb   pr   det  nn   pr   det  jj   nn
  SClass:                                     price

  “…paid Honeywell an undisclosed price…”
  POS:     vb   nnp  det  jj   nn
  SClass:                      price

Page 93: Information Extraction and Named Entity Recognition

Rapier Rules: Details

• Rapier rule := pre-filler pattern, filler pattern, post-filler pattern
• pattern := subpattern+
• subpattern := constraint+
• constraint :=
  • Word - exact word that must be present
  • Tag - matched word must have the given POS tag
  • Class - semantic class of the matched word
  • Can specify disjunction with “{…}”
  • List length N - between 0 and N words satisfying the other constraints

Page 94: Information Extraction and Named Entity Recognition

Rapier’s Learning Algorithm

• Input: set of training examples (list of documents annotated with “extract this substring”)
• Output: set of rules
• Init: Rules = a rule that exactly matches each training example
• Repeat several times:
  • Seed: Select M examples randomly and generate the K most-accurate maximally-general filler-only rules (prefiller = postfiller = “true”).
  • Grow: Repeat for N = 1, 2, 3, …
    Try to improve the K best rules by adding N context words of prefiller or postfiller context
  • Keep: Rules = Rules ∪ the best of the K rules, minus subsumed rules

Page 95: Information Extraction and Named Entity Recognition

Learning example (one iteration)

2 examples:
  “… located in Atlanta, Georgia …”
  “… offices in Kansas City, Missouri …”

• Init: maximally specific rules (high precision, low recall)
• Seed: maximally general rules (low precision, high recall)
• Grow: appropriately general rule (high precision, high recall)