Information Extraction, Data Mining & Joint Inference Andrew McCallum Computer Science Department University of Massachusetts Amherst Joint work with Charles Sutton, Aron Culotta, Khashayar Rohanemanesh, Ben Wellner, Karl Schultz, Michael Hay, Michael Wick, David Mimno.
My Research
Building models that mine actionable knowledge
from unstructured text.
Extracting Job Openings from the Web
foodscience.com-Job2
JobTitle: Ice Cream Guru
Employer: foodscience.com
JobCategory: Travel/Hospitality
JobFunction: Food Services
JobLocation: Upper Midwest
Contact Phone: 800-488-2611
DateExtracted: January 8, 2001
Source: www.foodscience.com/jobs_midwest.html
OtherCompanyJobs: foodscience.com-Job1
A Portal for Job Openings
Job Openings: Category = High Tech; Keyword = Java; Location = U.S.
Data Mining the Extracted Job Information
IE from Chinese Documents regarding Weather
Department of Terrestrial System, Chinese Academy of Sciences
What is "Information Extraction"?
Information Extraction = segmentation + classification + association + clustering
As a family of techniques:
October 14, 2002, 4:00 a.m. PT
For years, Microsoft Corporation CEO Bill Gates railed against the economic philosophy of open-source software with Orwellian fervor, denouncing its communal licensing as a "cancer" that stifled technological innovation.
Today, Microsoft claims to "love" the open-source concept, by which software code is made public to encourage improvement and development by outside programmers. Gates himself says Microsoft will gladly disclose its crown jewels--the coveted code behind the Windows operating system--to select customers.
"We can be open source. We love the concept of shared source," said Bill Veghte, a Microsoft VP. "That's a super-important shift for us in terms of code access."
Richard Stallman, founder of the Free Software Foundation, countered saying...
Extracted segments: Microsoft Corporation / CEO / Bill Gates / Microsoft / Gates / Microsoft / Bill Veghte / Microsoft / VP / Richard Stallman / founder / Free Software Foundation
Hidden Markov Models
[Figure: finite state model / graphical model with states S_{t-1}, S_t, S_{t+1} over observations O_{t-1}, O_t, O_{t+1}]
P(s, o) = ∏_{t=1..|o|} P(s_t | s_{t-1}) P(o_t | s_t)
HMMs are the standard sequence modeling tool in genomics, music, speech, NLP, …
[Figure: the HMM generates a state sequence (via transitions) and an observation sequence o1 ... o8 (via emissions)]
IE with Hidden Markov Models
Given a sequence of observations:
  "Yesterday Ron Parr spoke this example sentence."
and a trained HMM (states: person name, location name, background), find the most likely state sequence (Viterbi). Any words generated by the designated "person name" state are extracted as a person name:
Person name: Ron Parr
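The extraction procedure above can be sketched in a few lines of Python. The toy HMM below (states, transition and emission probabilities) is invented for this example, not a trained model; only the Viterbi recursion itself follows the slide.

```python
import math

# Toy HMM for the example sentence; probabilities are illustrative, not trained.
states = ["person name", "location name", "background"]
start_p = {"person name": 0.1, "location name": 0.1, "background": 0.8}
trans_p = {s: {"person name": 0.2, "location name": 0.1, "background": 0.7}
           for s in states}
emit_p = {"person name": {"Ron": 0.4, "Parr": 0.4},
          "location name": {"Amherst": 0.5},
          "background": {"Yesterday": 0.2, "spoke": 0.2, "this": 0.2,
                         "example": 0.2, "sentence.": 0.2}}
UNK = 1e-6  # smoothing for words a state was never seen to emit

def viterbi(obs):
    """Return the most likely state sequence for obs (log-space Viterbi)."""
    delta = {s: math.log(start_p[s]) + math.log(emit_p[s].get(obs[0], UNK))
             for s in states}
    back = []
    for o in obs[1:]:
        prev, delta, ptr = delta, {}, {}
        for s in states:
            q = max(states, key=lambda r: prev[r] + math.log(trans_p[r][s]))
            delta[s] = prev[q] + math.log(trans_p[q][s]) + math.log(emit_p[s].get(o, UNK))
            ptr[s] = q
        back.append(ptr)
    path = [max(states, key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

obs = "Yesterday Ron Parr spoke this example sentence.".split()
path = viterbi(obs)
# Words generated by the "person name" state are extracted as the person name
extracted = [w for w, s in zip(obs, path) if s == "person name"]  # ["Ron", "Parr"]
```

Here "Ron" and "Parr" end up in the "person name" state because only that state emits them with non-negligible probability.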
We want More than an Atomic View of Words
Would like richer representation of text: many arbitrary, overlapping features of the words.
[Figure: HMM states S_{t-1}, S_t, S_{t+1} over observations O_{t-1}, O_t, O_{t+1}]
Example features of the observed word (e.g., "Wisniewski", part of a noun phrase): identity of word; ends in "-ski"; is capitalized; is part of a noun phrase; is in a list of city names; is under node X in WordNet; is in bold font; is indented; is in hyperlink anchor; last person name was female; next two words are "and Associates"; ...
Problems with Richer Representation and a Joint Model
These arbitrary features are not independent.
– Multiple levels of granularity (chars, words, phrases)
Model the dependencies? Each state would have its own Bayes net, but we are already starved for training data!
Ignore the dependencies? This causes "over-counting" of evidence (a la naive Bayes) -- a big problem when combining evidence, as in Viterbi!
[Figure: two graphical models; modeling the observation dependencies vs. ignoring them]
Conditional Sequence Models
• We prefer a model that is trained to maximize a conditional probability rather than a joint probability: P(s|o) instead of P(s,o):
– Can examine features, but not responsible for generating them.
– Don't have to explicitly model their dependencies.
– Don't "waste modeling effort" trying to generate what we are given at test time anyway.
Joint vs. Conditional
[Figure: joint model (HMM) and conditional model chain structures over states S_t and observations O_t]
(A super-special case of Conditional Random Fields.)
[Lafferty, McCallum, Pereira 2001]
From HMMs to Conditional Random Fields
s = s1, s2, ..., sn    o = o1, o2, ..., on
HMM (joint):   P(s, o) = ∏_{t=1..|o|} P(s_t | s_{t-1}) P(o_t | s_t)
Conditional:   P(s | o) = (1 / P(o)) ∏_{t=1..|o|} P(s_t | s_{t-1}) P(o_t | s_t)
                        = (1 / Z(o)) ∏_{t=1..|o|} Φ_s(s_t, s_{t-1}) Φ_o(o_t, s_t)
where Φ_o(t) = exp( Σ_k λ_k f_k(s_t, o_t) )
Set parameters by maximum likelihood, using an optimization method on L.
(Linear Chain) Conditional Random Fields  [Lafferty, McCallum, Pereira 2001]
Undirected graphical model, trained to maximize conditional probability of output (sequence) given input (sequence):
p(y | x) = (1 / Z_x) ∏_t Φ(y_t, y_{t-1}, x, t)   where   Φ(y_t, y_{t-1}, x, t) = exp( Σ_k λ_k f_k(y_t, y_{t-1}, x, t) )
[Figure: chain of FSM states y_t, ..., y_{t+3} over observations x_t, ..., x_{t+3}]
Example input seq: said Jones a Microsoft VP ...  Output seq: OTHER PERSON OTHER ORG TITLE ...
Wide-spread interest, positive experimental results in many applications:
• Noun phrase, named entity [HLT'03], [CoNLL'03]
• Protein structure prediction [ICML'04]
• IE from bioinformatics text [Bioinformatics '04]
• Asian word segmentation [COLING'04], [ACL'04]
• IE from research papers [HLT'04]
• Object classification in images [CVPR '04]
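A brute-force sketch of this scoring on the slide's example sentence. The three feature functions and their weights are invented for illustration (an ORG gazetteer feature stands in for the slide's TITLE label), and Z_x is computed by enumerating all label sequences, which is only feasible for tiny inputs; a real implementation would use forward-backward.

```python
import itertools, math

# Toy linear-chain CRF; features and weights are invented, not learned.
LABELS = ["OTHER", "PERSON", "ORG"]

def features(y_t, y_prev, x, t):
    # Each feature may inspect the whole input x, the position t, and the
    # label pair (y_{t-1}, y_t).
    return {
        "capitalized & PERSON": float(x[t][0].isupper() and y_t == "PERSON"),
        "after 'said' & PERSON": float(t > 0 and x[t - 1] == "said" and y_t == "PERSON"),
        "known org & ORG": float(x[t] in {"Microsoft", "IBM"} and y_t == "ORG"),
    }

weights = {"capitalized & PERSON": 1.5,
           "after 'said' & PERSON": 2.0,
           "known org & ORG": 3.0}

def score(y, x):
    """Unnormalized log-score: sum_t sum_k lambda_k f_k(y_t, y_{t-1}, x, t)."""
    return sum(weights[k] * v
               for t in range(len(x))
               for k, v in features(y[t], y[t - 1] if t else None, x, t).items())

def prob(y, x):
    """p(y|x), with Z_x computed by brute-force enumeration (tiny x only)."""
    z = sum(math.exp(score(yy, x)) for yy in itertools.product(LABELS, repeat=len(x)))
    return math.exp(score(y, x)) / z

x = ["said", "Jones", "a", "Microsoft", "VP"]
best = max(itertools.product(LABELS, repeat=len(x)), key=lambda y: score(y, x))
```

Under these toy weights, "Jones" is labeled PERSON (capitalized and preceded by "said") and "Microsoft" is labeled ORG.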
Table Extraction from Government Reports
Cash receipts from marketings of milk during 1995, at $19.9 billion, were slightly below 1994. Producer returns averaged $12.93 per hundredweight, $0.19 per hundredweight below 1994. Marketings totaled 154 billion pounds, 1 percent above 1994. Marketings include whole milk sold to plants and dealers as well as milk sold directly to consumers.
An estimated 1.56 billion pounds of milk were used on farms where produced, 8 percent less than 1994. Calves were fed 78 percent of this milk, with the remainder consumed in producer households.
Milk Cows and Production of Milk and Milkfat: United States, 1993-95
Year | Milk Cows 1/ (1,000 head) | Milk per Cow (pounds) | Milkfat per Cow (pounds) | % Fat in All Milk Produced | Total Milk (million pounds) | Total Milkfat (million pounds)
1993 | 9,589 | 15,704 | 575 | 3.66 | 150,582 | 5,514.4
1994 | 9,500 | 16,175 | 592 | 3.66 | 153,664 | 5,623.7
1995 | 9,461 | 16,451 | 602 | 3.66 | 155,644 | 5,694.3
1/ Average number during year, excluding heifers not yet fresh. 2/ Excludes milk sucked by calves.
CRF Labels:
• Non-Table
• Table Title
• Table Header
• Table Data Row
• Table Section Data Row
• Table Footnote
• ... (12 in all)
[Pinto, McCallum, Wei, Croft, 2003 SIGIR]
Features:
• Percentage of digit chars
• Percentage of alpha chars
• Indented
• Contains 5+ consecutive spaces
• Whitespace in this line aligns with previous
• ...
• Conjunctions of all previous features
Pair-wise Affinity Metric (Y/N?)
Y/N | Feature | Weight
N | Two words in common | 29
Y | One word in common | 13
Y | "Normalized" mentions are string identical | 39
Y | Capitalized word in common | 17
Y | > 50% character tri-gram overlap | 19
N | < 25% character tri-gram overlap | -34
Y | In same sentence | 9
Y | Within two sentences | 8
N | Further than 3 sentences apart | -1
Y | "Hobbs Distance" < 3 | 11
N | Number of entities in between two mentions = 0 | 12
N | Number of entities in between two mentions > 4 | -3
Y | Font matches | 1
Y | Default | -19
OVERALL SCORE = 98 > threshold = 0
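The metric above can be sketched as a weighted sum of boolean features on a mention pair, thresholded at 0. The weights echo the slide's table, but only a subset of its features is realized here, with simplified implementations; the real feature set (Hobbs distance, sentence windows, fonts) is not reproduced.

```python
# Simplified sketch of the pair-wise affinity metric.
def trigrams(s):
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

def affinity(m1, m2):
    score = -19  # default (bias) weight
    w1, w2 = set(m1.split()), set(m2.split())
    common = w1 & w2
    if len(common) >= 2:
        score += 29  # two words in common
    elif len(common) == 1:
        score += 13  # one word in common
    if m1.lower() == m2.lower():
        score += 39  # "normalized" mentions are string identical
    if any(w[0].isupper() for w in common):
        score += 17  # capitalized word in common
    t1, t2 = trigrams(m1), trigrams(m2)
    overlap = len(t1 & t2) / max(1, min(len(t1), len(t2)))
    if overlap > 0.5:
        score += 19
    if overlap < 0.25:
        score -= 34
    return score

def coreferent(m1, m2, threshold=0):
    return affinity(m1, m2) > threshold
```

For example, "Mr. Hill" vs. "Dana Hill" shares the capitalized word "Hill" and scores -19 + 13 + 17 = 11 > 0, while "Amy Hall" vs. "Dana Hill" shares nothing and falls well below threshold.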
Entity Resolution
Mentions: Dana Hill, Mr. Hill, Amy Hall, she, Dana
[Figure: successive slides partition these five mentions into candidate "entities"]
The Problem
[Figure: pairwise coreference graph over the mentions Dana Hill, Mr. Hill, Amy Hall, she, Dana, with C(oreferent) / N(ot coreferent) edge decisions]
Independent pairwise affinity with connected components: pair-wise merging decisions are being made independently of each other. They should be made jointly. Affinity measures are noisy and imperfect.
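The baseline criticized here, independent pairwise decisions followed by connected components, can be sketched as follows. The pair list is hypothetical classifier output; the she/Amy Hall edge is a deliberately invented error showing how one noisy edge transitively glues two entities together.

```python
# Connected components (transitive closure) over independent pairwise
# coreference decisions, via union-find with path compression.
def connected_components(mentions, coref_pairs):
    parent = {m: m for m in mentions}
    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]  # path compression
            m = parent[m]
        return m
    for a, b in coref_pairs:
        parent[find(a)] = find(b)          # union the two components
    clusters = {}
    for m in mentions:
        clusters.setdefault(find(m), []).append(m)
    return list(clusters.values())

mentions = ["Dana Hill", "Mr. Hill", "Amy Hall", "she", "Dana"]
# Hypothetical pairwise "C" edges; the (she, Amy Hall) edge is an error
# that transitively merges the two true entities into one cluster.
pairs = [("Dana Hill", "Mr. Hill"), ("Dana Hill", "Dana"),
         ("she", "Dana"), ("she", "Amy Hall")]
clusters = connected_components(mentions, pairs)
```

With the erroneous edge, all five mentions collapse into a single cluster, which is exactly the failure mode that motivates making the decisions jointly.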
CRF for Co-reference  [McCallum & Wellner, 2003, ICML]
P(y | x) = (1 / Z_x) exp( Σ_{i,j} Σ_l λ_l f_l(x_i, x_j, y_ij) + Σ_{i,j,k} λ' f'(y_ij, y_jk, y_ik) )
Make pair-wise merging decisions jointly by:
- calculating a joint probability
- including all edge weights
- adding dependence on consistent triangles
CRF for Co-reference
[Figure: inference on the coreference CRF; repeated slides sum edge weights (e.g. +(23), +(10), +(17), -(-55), -(-44)) over candidate partitions of the mentions Dana Hill, Mr. Hill, Amy Hall, she, Dana; partitions with inconsistent triangles score -∞, and alternative consistent partitions score e.g. 218 vs. 210]
• Not classification: [Powell, Mr. Powell, he] --> YES; [Powell, Mr. Powell, she] --> NO
• ...Rather, ranking: [Powell, Mr. Powell, he] > [Powell, Mr. Powell, she]
• In general, the higher-ranked example may contain errors: [Powell, Mr. Powell, George, he] > [Powell, Mr. Powell, George, she]
Ranking Intermediate Solutions: Example
1. (initial configuration)
2. Δ Model = -23, Δ Truth = -0.2
3. Δ Model = 10, Δ Truth = -0.1
4. Δ Model = -10, Δ Truth = -0.1
5. Δ Model = 3, Δ Truth = 0.3
UPDATE whenever the model's Δ disagrees in sign with the truth's Δ (e.g., step 3).
• Like Perceptron: proof of convergence under marginal separability
• More constrained than Maximum Likelihood: parameters must correctly rank incorrect solutions!
Sample Rank Algorithm
1. Proposer: y^(t+1) = Propose(x, y^t)
2. Performance metric: F(y)
3. Inputs: input sequence x and an initial (random) configuration y^0
4. Initialization: set the parameter vector α = 0
5. Output: parameters α
6. Score function: Score_α(y) = α · φ(x, y)
7. For t = 1, ..., T and i = 0, ..., n-1 do:
   Generate a training instance y^(t+1) = Propose(x, y^t).
   Let y+ and y- be the best and worst configurations among y^t and y^(t+1) according to the performance metric.
   If Score_α(y+) < Score_α(y-) then
      α = α + φ(x, y+) - φ(x, y-)
   end if
8. end for
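The loop above can be sketched on a toy task. Everything task-specific here is invented: binary labels over a list of numbers, a single-bit-flip proposer, Hamming accuracy as the truth metric F, and two counting features. One deliberate deviation, flagged in the comments: the update also fires on score ties, since with the strict inequality of step 7 the all-zero initialization would never move.

```python
import random

# Toy Sample Rank: labels y_i in {0,1} over inputs x_i; proposer, features,
# and truth metric are illustrative assumptions, not from the talk.
def phi(x, y):
    # Feature 0: #{i : y_i = 1 and x_i > 0};  Feature 1: #{i : y_i = 1 and x_i <= 0}
    return [sum(1 for xi, yi in zip(x, y) if yi == 1 and xi > 0),
            sum(1 for xi, yi in zip(x, y) if yi == 1 and xi <= 0)]

def sample_rank(x, truth, n_steps=500, seed=0):
    rng = random.Random(seed)
    F = lambda y: sum(yi == ti for yi, ti in zip(y, truth))    # performance metric
    score = lambda y, a: sum(ai * fi for ai, fi in zip(a, phi(x, y)))
    alpha = [0.0, 0.0]                        # step 4: alpha = 0
    y = [rng.randint(0, 1) for _ in x]        # step 3: random initial config
    for _ in range(n_steps):                  # step 7
        y_new = list(y)
        i = rng.randrange(len(x))
        y_new[i] = 1 - y_new[i]               # Propose(x, y): flip one label
        best, worst = (y_new, y) if F(y_new) >= F(y) else (y, y_new)
        # Update on ties too (<=), otherwise alpha = 0 never moves.
        if score(best, alpha) <= score(worst, alpha):
            alpha = [a + fb - fw for a, fb, fw in
                     zip(alpha, phi(x, best), phi(x, worst))]
        y = y_new                             # keep sampling from the chain
    return alpha

x = [0.5, -1.0, 2.0, -0.3, 1.2]
truth = [1, 0, 1, 0, 1]   # label 1 exactly where x_i > 0
alpha = sample_rank(x, truth)
```

The learned weights reward labeling positive inputs 1 (alpha[0] > 0) and penalize labeling non-positive inputs 1 (alpha[1] < 0): the parameters end up ranking configurations consistently with the truth metric, which is all Sample Rank asks of them.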
Weighted Logics Techniques: Overview
• Metropolis-Hastings (MH) for inference
– Freely bake in domain knowledge about fruitful jumps; MH safely takes care of its biases.
– Avoid the memory and time consumption of massive deterministic constraint factors: build jump functions that simply avoid illegal states.
• "Sample Rank"
– Don't train by likelihood of the completely correct solution...
– ...train to properly rank intermediate configurations: the partition function (normalizer) cancels! ...plus other efficiencies
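A minimal sketch of the MH idea on the entity-resolution example: propose moving one mention to a (possibly new) cluster and accept with probability min(1, exp(Δscore)), so the normalizer Z cancels in the ratio. The pairwise log-affinities are toy values invented for this sketch; the symmetric single-move proposer stands in for the richer jump functions the slide alludes to.

```python
import math, random

AFFINITY = {  # symmetric pairwise log-affinities between mentions (toy values)
    ("Dana Hill", "Mr. Hill"): 2.0,
    ("Dana Hill", "Dana"): 3.0,
    ("Mr. Hill", "Dana"): 1.0,
    ("Dana Hill", "Amy Hall"): -3.0,
    ("Mr. Hill", "Amy Hall"): -2.0,
    ("Dana", "Amy Hall"): -2.0,
}

def aff(a, b):
    return AFFINITY.get((a, b), AFFINITY.get((b, a), 0.0))

def score(partition):
    # Log of the unnormalized probability: affinities summed within clusters
    return sum(aff(a, b) for cl in partition
               for i, a in enumerate(cl) for b in cl[i + 1:])

def clusters_of(labels):
    out = {}
    for m, c in labels.items():
        out.setdefault(c, []).append(m)
    return list(out.values())

def mh_coref(mentions, n_steps=2000, seed=1):
    rng = random.Random(seed)
    labels = {m: i for i, m in enumerate(mentions)}  # start as singletons
    cur = score(clusters_of(labels))
    best, best_s = clusters_of(labels), cur
    for _ in range(n_steps):
        # Jump: move one mention to a (possibly new) cluster id -- symmetric proposal
        prop = dict(labels)
        prop[rng.choice(mentions)] = rng.randrange(len(mentions))
        s = score(clusters_of(prop))
        # Accept with min(1, exp(s - cur)): the partition function Z cancels
        if math.log(rng.random() + 1e-12) < s - cur:
            labels, cur = prop, s
            if s > best_s:                    # remember the best state visited
                best, best_s = clusters_of(prop), s
    return best

mentions = ["Dana Hill", "Mr. Hill", "Dana", "Amy Hall"]
map_partition = mh_coref(mentions)
```

With these affinities the chain quickly finds the highest-scoring partition: the three "Dana/Hill" mentions together, "Amy Hall" alone.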
Partition Affinity CRF Experiments
B-Cubed F1 Score on ACE 2004 Noun Coreference:
                   | Likelihood-based Training | Rank-based Training
Partition Affinity | 69.2 | 79.3
Pairwise Affinity  | 62.4 | 72.5
(down the rows: better representation; across the columns: better training)
To our knowledge, the best previously reported results: 65% (1997), 67% (2002), 68% (2005). New state-of-the-art.
[Culotta, Wick, Hall, McCallum, 2007]
Outline
• The Need for IE and Data Mining.
• Motivate Joint Inference
• Brief introduction to Conditional Random Fields
• Joint inference: Information Extraction Examples
– Joint Labeling of Cascaded Sequences (Belief Propagation)
– Joint Labeling of Distant Entities (BP by Tree Reparameterization)
– Probability + First-order Logic, Co-ref on Entities (MCMC)
– Joint Information Integration (MCMC + Sample Rank)
• Demo: Rexa, a Web portal for researchers
Information Integration
Database A (Schema A):
First Name | Last Name | Contact
J. | Smith | 222-444-1337
J. | Smith | 444 1337
John | Smith | (1) 4321115555
Database B (Schema B):
Name | Phone
John Smith | U.S. 222-444-1337
John D. Smith | 444 1337
J Smiht | 432-111-5555
Schema Matching: First Name + Last Name <-> Name; Contact <-> Phone
Coreference: John #1 = {J. Smith, J. Smith, John Smith}; John #2 = {John Smith, J Smiht, John D. Smith}
Normalized DB:
Entity# | Name | Phone
523 | John Smith | 222-444-1337
524 | John D. Smith | 432-111-5555
... | ... | ...
Information Integration Steps
1. Schema Matching: First Name, Last Name <-> Name; Contact <-> Phone
2. Coreference: {J. Smith, John Smith, J. Smith}; {A. Jones, Amanda}
3. Canonicalization: John Smith; Amanda Jones; ...
Problems with a Pipeline
1. Data integration tasks are highly correlated
2. Errors can propagate
Schema Matching First
1. Schema Matching (First Name, Last Name <-> Name; Contact <-> Phone) provides evidence, e.g.:
   1. String identical: F.Name + L.Name == Name
   2. Same area code: 3-gram in Phone/Contact
   3. ...
2. Coreference (J. Smith, John Smith, J. Smith; A. Jones, Amanda) gets NEW FEATURES
Coreference First
1. Coreference (J. Smith, John Smith, J. Smith; A. Jones, Amanda) provides evidence, e.g.:
   1. Field values similar across coref'd records
   2. Phone + Contact has same value for J. Smith mentions
   3. ...
2. Schema Matching (First Name, Last Name <-> Name; Contact <-> Phone) gets NEW FEATURES
Hazards of a Pipeline
1. Schema Matching: Full Name <-> Name; Company Name <-> Corporation; Contact <-> Phone
Table A:
Name | Corporation
Amanda Jones | J. Smith & Sons
J. Smith | IBM
Table B:
Full Name | Company Name
Amanda Jones | Smith & Sons
John Smith | IBM
2. Coreferent? ERRORS PROPAGATE
Canonicalization
Typically occurs AFTER coreference.
Coref cluster: {John Smith, J. Smith, J. Smith, J. Smiht, J.S Mith, Jonh smith, John} -> Canonicalization -> Entity 87: "John Smith"
Desiderata:
• Complete: contains all information (e.g., first + last)
• Error-free: no typos (e.g., avoid "Smiht")
• Central: represents all mentions (not "Mith")
Access to such features would be very helpful to coref.
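One way to operationalize these desiderata in code (an illustrative heuristic, not the method from the talk): prefer mentions that are complete (multi-token, no initials) and whose tokens recur across the cluster, on the assumption that spellings repeated by many mentions are likelier to be error-free.

```python
from collections import Counter

def canonicalize(mentions):
    """Pick a canonical string for a coreference cluster (heuristic sketch)."""
    toks = lambda m: m.lower().split()
    counts = Counter(t for m in mentions for t in toks(m))
    def complete(m):
        # "Complete" per the desiderata: at least two tokens, none of them
        # an abbreviation or initial (no '.', length > 2)
        ts = toks(m)
        return len(ts) >= 2 and all(len(t) > 2 and "." not in t for t in ts)
    candidates = [m for m in mentions if complete(m)] or mentions
    # "Error-free"/"central" proxy: favor mentions built from frequent tokens
    return max(candidates, key=lambda m: sum(counts[t] for t in toks(m)))

cluster = ["John Smith", "J. Smith", "J. Smith", "J. Smiht",
           "J.S Mith", "Jonh smith", "John"]
canonical = canonicalize(cluster)  # "John Smith"
```

Here "John Smith" beats the misspelled "Jonh smith" because "john" recurs across the cluster while "jonh" appears once; the abbreviated forms are filtered out by the completeness test.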
Joint Model: Schema Matching + Coreference and Canonicalization
[Figure: factor graph; schema-attribute nodes x4 ... x8 with cluster factors f5, f7, f8, pairwise factor f67, and match variables y5, y7, y8, y54, y67; mention nodes x1 ... x3 with cluster factors f1, f2, pairwise factor f43, and match variables y1, y2, y3, y12, y13, y23]
P(Y | X) = (1 / Z_X) ∏_{yi ∈ Y} ψw(yi, xi) ∏_{yi,yj ∈ Y} ψb(yij, xij)   where   ψ(yi, xi) = exp( Σ_k λ_k f_k(yi, xi) )
Schema Matching side:
• x6 is a set of attributes {phone, contact, telephone}
• x7 is a set of attributes {last name, last name}
• f67 is a factor between x6 and x7; y67 is a binary variable indicating a match (no)
• f7 is a factor over cluster x7; y7 is a binary variable indicating a match (yes)
Coreference and Canonicalization side:
• x1 is a set of mentions {J. Smith, John, John Smith}
• x2 is a set of mentions {Amanda, A. Jones}
• f12 is a factor between x1 and x2; y12 is a binary variable indicating a match (no)
• f1 is a factor over cluster x1; y1 is a binary variable indicating a match (yes)
• Entity/attribute factors omitted for clarity
Dataset
• Faculty and alumni listings from university websites, plus an IE system
• 9 different schemas
• ~1400 mentions, 294 coreferent
Example Schemas
DEX IE | Northwestern Fac | UPenn Fac
First Name | Name | Name
Middle Name | Title | First Name
Last Name | PhD Alma Mater | Last Name
Title | Research Interests | Job+Department
Department | Office Address |
Company Name | E-mail |
Home Phone | |
Office Phone | |
Fax Number | |
E-mail | |
Schema Matching Features
• String identical• Sub string matches• TFIDF weighted cosine distance• All of the above with between coreferent
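The TFIDF-weighted cosine feature can be sketched as follows: each schema attribute is represented by the bag of tokens in its column of values, and attributes from two schemas are compared by cosine similarity of their TF-IDF vectors. The tiny columns below are drawn from the toy databases earlier in the talk; treating each attribute column as one "document" is a simplifying assumption.

```python
import math
from collections import Counter

def tfidf_vector(values, doc_freq, n_docs):
    # Term frequency over the attribute's values, weighted by inverse doc freq
    tf = Counter(t for v in values for t in v.lower().split())
    return {t: c * math.log(n_docs / doc_freq[t]) for t, c in tf.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Each attribute ("document") is its column of values across the two schemas
attrs = {
    ("A", "Contact"): ["222-444-1337", "444 1337", "(1) 4321115555"],
    ("B", "Phone"):   ["u.s. 222-444-1337", "444 1337", "432-111-5555"],
    ("B", "Name"):    ["john smith", "john d. smith", "j smiht"],
}
docs = list(attrs.values())
df = Counter(t for d in docs for t in {tok for v in d for tok in v.lower().split()})
vecs = {k: tfidf_vector(v, df, len(docs)) for k, v in attrs.items()}

sim_phone = cosine(vecs[("A", "Contact")], vecs[("B", "Phone")])  # high: shared tokens
sim_name = cosine(vecs[("A", "Contact")], vecs[("B", "Name")])    # zero: disjoint tokens
```

The Contact/Phone pair shares tokens like "222-444-1337" and scores well above the Contact/Name pair, which is the evidence a schema matcher would use.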
Data Mining Research Literature
• Better understand the structure of our own research area.
• Structure helps us learn a new field.
• Aid collaboration.
• Map how ideas travel through social networks of researchers.
• Aids for hiring and finding reviewers!
• Measure impact of papers or people.
Our Data
• Over 1.6 million research papers, gathered as part of Rexa.info portal.
• Cross linked references / citations.
Previous Systems
[Figure: entity-relation diagram; previous systems model only ResearchPaper and Cites]
More Entities and Relations
[Figure: Rexa's schema adds Person, University, Venue, Grant, Groups, Expertise]
Topical Transfer: citation counts from one topic to another.
Map "producers and consumers".
Topical Bibliometric Impact Measures
• Topical Citation Counts
• Topical Impact Factors
• Topical Longevity
• Topical Precedence
• Topical Diversity
• Topical Transfer
[Mann, Mimno, McCallum, 2006]
Topical Transfer
Transfer from Digital Libraries to other topics
Other topic | Cit's | Paper Title
Web Pages | 31 | Trawling the Web for Emerging Cyber-Communities, Kumar, Raghavan, ... 1999.
Computer Vision | 14 | On being 'Undigital' with digital cameras: extending the dynamic...
Video | 12 | Lessons learned from the creation and deployment of a terabyte digital video library...
Graphs | 12 | Trawling the Web for Emerging Cyber-Communities
Web Pages | 11 | WebBase: a repository of Web pages
Topical Diversity
Papers that had the most influence across many other fields...
Topical Diversity: entropy of the topic distribution among papers that cite this paper (this topic).
[Figure: example papers with high diversity vs. low diversity]
Summary
• Joint inference is needed to avoid cascading errors in information extraction and data mining.
– Most fundamental problem in NLP, data mining, ...
• Can be performed in CRFs:
– Cascaded sequences (Factorial CRFs)
– Distant correlations (Skip-chain CRFs)
– Co-reference (Affinity-matrix CRFs)
– Logic + Probability (efficient by MCMC + Sample Rank)
– Information Integration
• Rexa: New research paper search engine, mining the interactions in our community.
Outline
• Model / Feature Engineering
– Brief review of IE w/ Conditional Random Fields
– Flexibility to use non-independent features
• Inference
– Entity Resolution with Probability + First-order Logic
– Resolution + Canonicalization + Schema Mapping
– Inference by Metropolis-Hastings
• Parameter Estimation
– Semi-supervised Learning with Label Regularization