1
Entity/Event-Level Sentiment Detection and Inference
Lingjia Deng, Intelligent Systems Program
University of Pittsburgh
Dr. Janyce Wiebe, Intelligent Systems Program, University of Pittsburgh
Dr. Rebecca Hwa, Intelligent Systems Program, University of Pittsburgh
Dr. Yuru Lin, Intelligent Systems Program, University of Pittsburgh
Dr. William Cohen, Machine Learning Department, Carnegie Mellon University
3
A World Of Opinions
(Figure: sources of opinions: reviews, editorials, blogs, Twitter, news)
6
Motivation
• ... people protest the country's same-sex marriage ban ...
(Figure: "people" linked by "protest," with positive/negative labels, to the "same-sex marriage ban")
WHAT ABOUT SAME-SEX MARRIAGE?
7
Explicit Opinions
• The explicit opinions are revealed by opinion expressions.
(Figure: "protest" explicitly expresses that "people" are negative toward the "same-sex marriage ban")
8
Implicit Opinions
• The implicit opinions are not revealed by opinion expressions, but are indicated in the text.
• The system needs to infer implicit opinions.

Goal: Entity/Event-Level Sentiments
• PositivePair(people, same-sex marriage)
• NegativePair(people, same-sex marriage ban)
(Figure: the explicit negative sentiment, expressed by "protest," toward the "same-sex marriage ban" licenses an implicit positive sentiment toward same-sex marriage)
11
Three Questions to Solve
• Is there any corpus annotated with both explicit and implicit sentiments?
  • No. This proposal develops one.
• Are there inference rules defining how to infer implicit sentiments?
  • Yes (Wiebe and Deng, arXiv, 2014).
• How do we incorporate the inference rules into computational models?
  • This proposal investigates that.
12
Completed and Proposed Work

Corpus: MPQA 3.0
• Expert annotations on 70 documents (Deng et al., NAACL 2015)
• Non-expert annotations on hundreds of documents

Sentiment Inference on +/-Effect Events & Entities
• A corpus of +/-effect event sentiments (Deng et al., ACL 2013)
• A model validating rules (Deng and Wiebe, EACL 2014)
• A model inferring sentiments (Deng et al., COLING 2014)

Sentiment Inference on General Events & Entities
• Joint models
• A pilot study (Deng and Wiebe, EMNLP 2015)
• Extracting nested source and entity/event target
• Blocking the rules
13
Background: Sentiment Corpora

Corpus | Genre | Source | Target | Implicit Opinions
Review Sentiment Corpus (Hu and Liu, 2004) | product reviews | writer | the product, features of the product | ✗
Sentiment Treebank (Socher et al., 2013) | movie reviews | writer | the movie | ✗
MPQA 2.0 (Wiebe et al., 2005; Wilson, 2008) | news, editorials, blogs, etc. | writer and any entity | an arbitrary span | ✗
MPQA 3.0 | news, editorials, blogs, etc. | writer and any entity | any entity/event eTarget (head of noun phrase/verb phrase) | ✔
14
Background: MPQA Corpus
• Direct subjective
  o nested source
  o attitude
    • attitude type
    • target
• Expressive subjective element (ESE)
  o nested source
  o polarity
• Objective speech event
  o nested source
  o target
15
MPQA 2.0: An Example
When the Imam issued the fatwa against Salman Rushdie for insulting the Prophet…
(Annotations: negative attitude; target; nested source: writer, Imam)
16
Background: Explicit and Implicit Sentiment
• Explicit sentiments
  o Extracting explicit opinion expressions, sources, and targets (Wiebe et al., 2005; Johansson and Moschitti, 2013a; Yang and Cardie, 2013; Moilanen and Pulman, 2007; Choi and Cardie, 2008; Moilanen et al., 2010).
• Implicit sentiments
  o Investigating features directly indicating implicit sentiment (Zhang and Liu, 2011; Feng et al., 2013). No inference.
  o A rule-based system requiring all oracle information (Wiebe and Deng, arXiv 2014).
18
Completed and Proposed Work

Corpus: MPQA 3.0
• Expert annotations on 70 documents (Deng et al., NAACL 2015)
• Non-expert annotations on hundreds of documents

Sentiment Inference on +/-Effect Events & Entities
• A corpus of +/-effect event sentiments (Deng et al., ACL 2013)
• A model validating rules (Deng and Wiebe, EACL 2014)
• A model inferring sentiments (Deng et al., COLING 2014)

Sentiment Inference on General Events & Entities
• Joint models
• A pilot study (Deng and Wiebe, EMNLP 2015)
• Extracting nested source and entity/event target
• Blocking the rules
20
From MPQA 2.0 To MPQA 3.0
o "Imam" is negative toward "Rushdie".
o "Imam" is negative toward "insulting".
o "Imam" is NOT negative toward "Prophet".
When the Imam issued the fatwa against Salman Rushdie for insulting the Prophet…
(Annotations: negative attitude; target; eTargets; nested source: writer, Imam)
21
Expert Annotations
• The expert annotators are Dr. Janyce Wiebe and me.
• The expert annotators are asked to select which noun or verb is the eTarget of an attitude or an ESE.
• The expert annotators annotated 70 documents.
• The agreement score is 0.82 on average over four documents.
22
Non-Expert Annotations
• Previous work has asked non-expert annotators to annotate subjectivity and opinions (Akkaya et al., 2010; Socher et al., 2013).
• Reliable annotations:
  o Non-expert annotators with high credits.
  o Majority vote.
  o Weighted vote and reliable annotators (Welinder and Perona,
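The two vote-aggregation schemes above can be sketched as follows. This is a minimal illustration with invented labels and reliability weights; models such as Welinder and Perona's estimate annotator reliability from the data rather than taking it as given.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate one item's crowd labels by simple majority."""
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(labels, weights):
    """Aggregate labels, weighting each annotator by a reliability score.

    `weights` is a parallel list of annotator reliabilities (illustrative
    values here; in practice they would be estimated, not hand-set).
    """
    scores = {}
    for label, w in zip(labels, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Three annotators label the polarity toward one eTarget.
print(majority_vote(["pos", "pos", "neg"]))                    # pos
print(weighted_vote(["pos", "neg", "neg"], [0.9, 0.4, 0.4]))   # pos
```

In the weighted example, one reliable annotator (0.9) outweighs two unreliable ones (0.4 + 0.4 = 0.8), which is exactly the case where weighted voting and majority voting disagree.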
• The global model selects an optimal set of candidates:
  o one candidate from the four agent sentiment candidates
    • Agent1-pos, Agent1-neg, Agent2-pos, Agent2-neg
  o one or no candidate from the reversed candidates
  o one candidate from the +/-effect candidates
  o one candidate from the four theme sentiment candidates
• The framework assigns values (0 or 1) to u
  o maximizing the scores given by the local detectors,
• and assigns values (0 or 1) to ξ, δ
  o minimizing the cases where +/-effect event sentiment rules are violated.
• Integer Linear Programming (ILP) is used.

p: candidate local score
u: binary indicator of choosing a candidate
ξ, δ: slack variables of a triple <i, k, j>, indicating that the triple is an exception to a +effect/-effect rule (exception: 1)
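A toy version of this selection problem can be sketched by brute force. The scores below are invented, and a real system would hand the objective to an ILP solver rather than enumerate assignments; this only illustrates how the slack penalty lets a strong local score on one candidate pull a linked candidate into agreement.

```python
from itertools import product

# Hypothetical local-detector scores p (invented for illustration).
# One agent and one theme are linked by a +effect event, whose rule
# says the sentiments toward agent and theme agree.
p = {"agent_pos": 0.7, "agent_neg": 0.3, "theme_pos": 0.4, "theme_neg": 0.6}
PENALTY = 1.0  # cost of the slack variable xi when the +effect rule is violated

def solve():
    """Pick one agent candidate and one theme candidate, maximizing
    local scores minus the rule-violation slack (brute-force ILP stand-in)."""
    best, best_obj = None, float("-inf")
    for agent, theme in product(["pos", "neg"], repeat=2):
        score = p[f"agent_{agent}"] + p[f"theme_{theme}"]
        xi = 1 if agent != theme else 0  # +effect rule: sentiments agree
        obj = score - PENALTY * xi
        if obj > best_obj:
            best, best_obj = (agent, theme), obj
    return best

print(solve())  # ('pos', 'pos')
```

Note that the local detectors alone would pick agent-pos and theme-neg; the rule constraint flips the theme to positive because violating the rule costs more than the 0.2 score difference.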
50
+Effect Rule Constraints
• In a +effect event, the sentiments toward the agent and the theme are the same.
(Figure: example binary assignments; +effect: 1, -effect: 0; exception: 1, not exception: 0)
51
-Effect Rule Constraints
• In a -effect event, the sentiments toward the agent and the theme are opposite.
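One standard way to encode these agreement/disagreement rules with slack variables is the following sketch, written in the u, ξ, δ notation above. This is an illustrative encoding consistent with the slides, not necessarily the exact formulation of the COLING 2014 model.

```latex
\begin{aligned}
\max_{u,\,\xi,\,\delta}\quad & \sum_{c} p_c\, u_c \;-\; \lambda \sum_{\langle i,k,j\rangle} \bigl(\xi_{ikj} + \delta_{ikj}\bigr) \\
\text{s.t.}\quad & u_{i,\mathrm{pos}} + u_{j,\mathrm{neg}} - 1 \le \xi_{ikj},
\quad u_{i,\mathrm{neg}} + u_{j,\mathrm{pos}} - 1 \le \xi_{ikj}
&& \text{(+effect event $k$: agent $i$ and theme $j$ agree)} \\
& u_{i,\mathrm{pos}} + u_{j,\mathrm{pos}} - 1 \le \delta_{ikj},
\quad u_{i,\mathrm{neg}} + u_{j,\mathrm{neg}} - 1 \le \delta_{ikj}
&& \text{(-effect event $k$: sentiments are opposite)} \\
& u_c,\; \xi_{ikj},\; \delta_{ikj} \in \{0,1\}
\end{aligned}
```

Whenever a selected pair of candidates violates a rule, the corresponding slack variable is forced to 1 and pays the penalty λ, so the solver prefers assignments consistent with the rules unless the local scores strongly disagree.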
52
Performances
(Figure: bar chart, y-axis 0 to 0.8; light color: Local, dark color: ILP; bars for accuracy of Q1, Q2, and Q3 and for precision, recall, and F-measure of Q4)
(Q1) Is it +effect or -effect?
(Q2) Is the effect reversed?
(Q3) Which spans are agents and themes?
(Q4) What are the writer's sentiments?
56
Part 2 Summary
• Inferring sentiments toward entities participating in the +/-effect events.
• Developed an annotated corpus (Deng et al., ACL 2013).
• Developed a graph-based propagation model showing the inference ability of the rules (Deng and Wiebe, EACL 2014).
• Developed an Integer Linear Programming model jointly resolving various ambiguities w.r.t. +/-effect events and sentiments (Deng et al., COLING 2014).
57
Completed and Proposed Work

Corpus: MPQA 3.0
• Expert annotations on 70 documents (Deng et al., NAACL 2015)
• Non-expert annotations on hundreds of documents

Sentiment Inference on +/-Effect Events & Entities
• A corpus of +/-effect event sentiments (Deng et al., ACL 2013)
• A model validating rules (Deng and Wiebe, EACL 2014)
• A model inferring sentiments (Deng et al., COLING 2014)

Sentiment Inference on General Events & Entities
• Joint models
• A pilot study (Deng and Wiebe, EMNLP 2015)
• Extracting nested source and entity/event target
• Blocking the rules
59
Joint Models
• In (Deng et al., COLING 2014), we use an Integer Linear Programming framework.
• Local systems are run.
• Joint models take local scores as input, and sentiment inference rules as constraints.
• In ILP, the rules are written as equations and inequalities.
60
Joint Models: General Inference Rules
• Great! Dr. Thompson likes the project. …
• Joint-1 (without any inference)
• Joint-2 (added general sentiment inference rules)
• Joint-3 (added +/-effect event information and the rules)
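The PSL systems in the pilot-study results are, presumably, Probabilistic Soft Logic models, which score rules with Łukasiewicz logic over soft truth values in [0, 1]. A minimal sketch of that scoring, with atoms and truth values invented for illustration:

```python
def luk_and(a, b):
    """Lukasiewicz conjunction, used for rule bodies in PSL."""
    return max(0.0, a + b - 1.0)

def distance_to_satisfaction(body, head):
    """Hinge loss: how far the rule body -> head is from being satisfied."""
    return max(0.0, body - head)

# Illustrative rule: positive(s, event) AND +effect(event, theme)
#                    -> positive(s, theme)
pos_s_event = 0.9   # soft truth values, as if output by local detectors
plus_effect = 0.8
pos_s_theme = 0.3

body = luk_and(pos_s_event, plus_effect)             # 0.7
loss = distance_to_satisfaction(body, pos_s_theme)   # 0.4
print(round(loss, 1))
```

Unlike the hard ILP constraints, a violated soft rule only adds a graded penalty, so a joint model can trade rule satisfaction against the confidence of the local detectors.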
Joint Model: Pilot Study Experiments
69
• Task: extracting PositivePair(s, t) and NegativePair(s, t).
• Baselines: for an opinion extracted by state-of-the-art systems
  o source s: the head of the extracted source span
  o eTarget t:
    • ALL NP/VP: all the nouns and verbs are eTargets
    • Opinion/Target Span Heads (state-of-the-art): head of the extracted target span; head of the opinion span
  o PositivePair or NegativePair: the extracted polarity
Joint Model: Pilot Study Performances
70
• Task: extracting PositivePairs and NegativePairs.
Joint Model: Pilot Study Performances
(Figure: bar chart, y-axis 0 to 0.5; PosPair accuracy, NegPair accuracy, PosPair F-measure, NegPair F-measure, and true negatives for ALL NP/VP, Opinion/Target Span Heads, PSL1, PSL2, and PSL3)
71
Joint Model: Pilot Study Conclusions
• We cannot directly use the state-of-the-art sentiment analysis system outputs (spans) for the entity/event-level sentiment analysis task.
• The inference rules can find more entity/event-level sentiments.
• Even the most basic joint models in our pilot study improve accuracy.
72
Joint Model: Proposed Extensions
• Various variations of the Markov Logic Network.
• Integer Linear Programming.
• Improving each local component:
  o Nested sources
  o eTarget
  o Blocking the rules
74
Nested Sources
nested source: writer, Imam, Rushdie (negative attitude)
When the Imam issued the fatwa against Salman Rushdie for insulting the Prophet…
• How do we know Rushdie is negative toward the Prophet?
  • Because the Imam claims so, by issuing the fatwa against him.
• How do we know the Imam has issued the fatwa?
  • Because the writer tells us so.
Nested Sources
nested source: writer, Imam, Rushdie (negative attitude)
When the Imam issued the fatwa against Salman Rushdie for insulting the Prophet…
• The nested source reveals the embedded private states in MPQA.
• Attributing quotations (Pareti et al., 2013; de La Clergerie et al., 2011; Almeida et al., 2014).
• Overlapping opinions and opinion targets.
76
ETarget (Entity/Event Target)
• Extracting named entities and events as potential eTargets (Pan et al., 2015; Finkel et al., 2005; Nadeau and Sekine, 2007; Li et al., 2013; Chen et al., 2009; Chen and Ji, 2009).
• Entity co-reference resolution (Haghighi and Klein, 2009; Haghighi and Klein, 2010; Song et al., 2012).
• Event co-reference resolution (Li et al., 2013; Chen et al., 2009; Chen and Ji, 2009).
• Integrating external world knowledge
  o Entity linking to Wikipedia (Ji and Grishman, 2011; Milne and Witten, 2008; Dai et al., 2011; Rao et al., 2013)
79
Blocking Inference Rules
• That man killed the lovely squirrel on purpose.
  o Positive toward the squirrel
  o killing is a -effect event
  o Negative toward that man ✓
• That man accidentally hurt the lovely squirrel.
  o Positive toward the squirrel
  o hurting is a -effect event
  o Negative toward that man ✗ ("accidentally" blocks the inference)
81
Blocking Inference Rules
Collect blocking cases → Compare and find differences → Learn to recognize blocking
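The collect/compare/learn pipeline above might begin with a simple cue heuristic like the following sketch. The cue list is invented for illustration, not a proposed resource; the "learn to recognize" step would replace it with a trained model.

```python
# Hypothetical cue words suggesting the agent did not act deliberately.
BLOCKING_CUES = {"accidentally", "unintentionally", "inadvertently"}

def inference_blocked(sentence_tokens):
    """Flag a -effect event whose sentiment inference should be blocked
    because a cue word marks the action as non-deliberate (heuristic only)."""
    return any(tok.lower() in BLOCKING_CUES for tok in sentence_tokens)

print(inference_blocked("That man accidentally hurt the lovely squirrel".split()))   # True
print(inference_blocked("That man killed the lovely squirrel on purpose".split()))   # False
```

On the squirrel examples, the heuristic blocks the inference for "accidentally hurt" but lets it through for "killed ... on purpose," matching the intended ✓/✗ judgments.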
84
Part 3 Summary
• Joint models:
  o A pilot study (Deng and Wiebe, EMNLP 2015)
  o Improved joint models integrating improved