
May 2013

KU Leuven, Belgium

With thanks to: Collaborators: Ming-Wei Chang, Gourab Kundu, Lev Ratinov, Rajhans Samdani, Vivek Srikumar, Many others Funding: NSF; DHS; NIH; DARPA. DASH Optimization (Xpress-MP)

Constrained Conditional Models Towards Better Semantic Analysis of Text

Dan Roth, Department of Computer Science, University of Illinois at Urbana-Champaign

Page 1


Nice to Meet You

Page 2


Natural Language Decisions are Structured
Global decisions in which several local decisions play a role, but there are mutual dependencies on their outcome. It is essential to make coherent decisions in a way that takes the interdependencies into account. Joint, Global Inference.

TODAY:
How to support real, high-level natural language decisions
How to learn models that are used, eventually, to make global decisions

A framework that allows one to exploit interdependencies among decision variables both in inference (decision making) and in learning.

Inference: a formulation for incorporating expressive declarative knowledge in decision making.

Learning: the ability to learn simple models and to amplify their power by exploiting interdependencies.

Learning and Inference in NLP

Page 3


Comprehension

1. Christopher Robin was born in England. 2. Winnie the Pooh is a title of a book. 3. Christopher Robin’s dad was a magician. 4. Christopher Robin must be at least 65 now.

(ENGLAND, June, 1989) - Christopher Robin is alive and well. He lives in England. He is the same person that you read about in the book, Winnie the Pooh. As a boy, Chris lived in a pretty home called Cotchfield Farm. When Chris was three years old, his father wrote a poem about him. The poem was printed in a magazine for others to read. Mr. Robin then wrote a book. He made up a fairy tale land where Chris lived. His friends were animals. There was a bear called Winnie the Pooh. There was also an owl and a young pig, called a piglet. All the animals were stuffed toys that Chris owned. Mr. Robin made them come to life with his words. The places in the story were all near Cotchfield Farm. Winnie the Pooh was written in 1925. Children still love to read about Christopher Robin and his animal friends. Most people don't know he is a real person who is grown now. He has written two books of his own. They tell what it is like to be famous.

This is an Inference Problem

Page 4


Learning and Inference

Global decisions in which several local decisions play a role, but there are mutual dependencies on their outcome.
In current NLP we often think about simpler structured problems: parsing, information extraction, SRL, etc. As we move up the problem hierarchy (textual entailment, QA, ...), not all component models can be learned simultaneously.
We need to think about (learned) models for different sub-problems. Knowledge relating the sub-problems (constraints) becomes more essential and may appear only at evaluation time.

Goal: incorporate the models' information, along with prior knowledge (constraints), in making coherent decisions; decisions that respect the local models as well as domain- and context-specific knowledge/constraints.

Page 5


Outline
Natural Language Processing with Constrained Conditional Models
  A formulation for global inference with knowledge modeled as expressive structural constraints
Some examples
  Extended semantic role labeling
  Preposition-based predicates and their arguments
  Multiple simple models, latent representations and indirect supervision
Amortized Integer Linear Programming Inference
  Exploiting previous inference results
  Can the k-th inference problem be cheaper than the 1st?

Page 6


Three Ideas Underlying Constrained Conditional Models
Idea 1: Separate modeling and problem formulation from algorithms

Similar to the philosophy of probabilistic modeling

Idea 2: Keep models simple, make expressive decisions (via constraints)

Unlike probabilistic modeling, where models become more expressive

Idea 3: Expressive structured decisions can be supported by simply learned models

Global Inference can be used to amplify simple models (and even allow training with minimal supervision).

Modeling

Inference

Learning

Page 7


Inference with General Constraint Structure [Roth&Yih’04,07]

Recognizing Entities and Relations

Dole's wife, Elizabeth, is a native of N.C.

[Figure: the sentence with entity variables E1 (Dole), E2 (Elizabeth), E3 (N.C.) and relation variables R12, R23. Each entity has a local score distribution over {other, per, loc} (e.g., per 0.85, loc 0.10, other 0.05), and each relation a distribution over {irrelevant, spouse_of, born_in} (e.g., born_in 0.85, spouse_of 0.05, irrelevant 0.10).]

Improvement over no inference: 2-5%

Models could be learned separately; constraints may come up only at decision time.

Page 8

Note: Non Sequential Model

Key Questions: How to guide the global inference? How to learn? Why not Jointly?

Y = argmax_y Σ score(y = v) · [[y = v]]
  = argmax score(E1 = PER) · [[E1 = PER]] + score(E1 = LOC) · [[E1 = LOC]] + ...
          + score(R1 = S-of) · [[R1 = S-of]] + ...
Subject to Constraints

An objective function that incorporates learned models with knowledge (constraints): a Constrained Conditional Model.


Constrained Conditional Models

How to solve?

This is an Integer Linear Program

Solving using ILP packages gives an exact solution. Cutting Planes, Dual Decomposition & other search techniques are possible

(Soft) constraints component

Weight Vector for “local” models

Penalty for violating the constraint.

How far y is from a “legal” assignment

Features, classifiers; log-linear models (HMM, CRF) or a combination
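Putting these callouts together, the objective they annotate has the following general form (reconstructed here; the same formulation appears again later in the talk):

\[
\mathbf{y}^{*} \;=\; \arg\max_{\mathbf{y}} \; \mathbf{w}^{\top}\phi(\mathbf{x},\mathbf{y}) \;-\; \sum_{k} \rho_{k}\, d_{C_{k}}(\mathbf{x},\mathbf{y})
\]

where w^T φ(x, y) is the score assigned by the learned "local" models, and each d_{C_k}(x, y) measures how far y is from a "legal" assignment with respect to constraint C_k, weighted by the violation penalty ρ_k.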

How to train?

Training is learning the objective function

Decouple? Decompose?

How to exploit the structure to minimize supervision?

Page 9


Inference: given input x (a document, a sentence), predict the best structure y = {y1, y2, ..., yn} ∈ Y (entities & relations). Assign values to y1, y2, ..., yn, accounting for the dependencies among the yi.

Inference is expressed as a maximization of a scoring function:

y' = argmax_{y ∈ Y} w^T φ(x, y)

Inference requires, in principle, touching all y ∈ Y at decision time, when we are given x ∈ X and attempt to determine the best y ∈ Y for it, given w. For some structures inference is computationally easy (e.g., using the Viterbi algorithm); in general it is NP-hard (and can be formulated as an ILP).

Structured Prediction: Inference

In the scoring function: φ(x, y) are joint features on inputs and outputs; w are the feature weights (estimated during learning); Y is the set of allowed structures.

Placing in context: a crash course in structured prediction

Page 10


Structured Prediction: Learning

Learning: given a set of structured examples {(x, y)}, find a scoring function w that minimizes empirical loss. Learning is thus driven by the attempt to find a weight vector w such that for each given annotated example (xi, yi):

Page 11


Structured Prediction: Learning

Learning: given a set of structured examples {(x, y)}, find a scoring function w that minimizes empirical loss. Learning is thus driven by the attempt to find a weight vector w such that for each given annotated example (xi, yi):

w^T φ(xi, yi)  ≥  w^T φ(xi, y) + Δ(y, yi)    for all y
(score of the annotated structure  ≥  score of any other structure  +  penalty for predicting that other structure)

We call these conditions the learning constraints.

In most learning algorithms used today, the update of the weight vector w is done in an on-line fashion. Think about it as a Perceptron; this procedure applies to the Structured Perceptron, CRFs, and Linear Structured SVMs. W.l.o.g. (almost) we can thus write the generic structured learning algorithm as follows:

Page 12


In the structured case, the prediction (inference) step is often intractable and needs to be done many times

Structured Prediction: Learning Algorithm

For each example (xi, yi) do (with the current weight vector w):
  Predict: perform inference with the current weight vector:  yi' = argmax_{y ∈ Y} w^T φ(xi, y)
  Check the learning constraints: is the score of the current prediction better than that of (xi, yi)?
  If yes (a mistaken prediction): update w
  Otherwise: no need to update w on this example
EndFor
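A minimal sketch of this generic loop in Python, assuming outputs are represented as tuples of labels, `phi(x, y)` returns a NumPy feature vector, and `argmax_inference(w, x)` is the application-specific inference routine (e.g., Viterbi or an ILP call); these names are illustrative, not from the slides:

```python
import numpy as np

def structured_perceptron(examples, phi, argmax_inference, dim, epochs=5):
    """Generic structured learning loop from the slide: predict with the current w,
    check the learning constraint, and update w only on mistaken predictions."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for x, y_gold in examples:
            y_pred = argmax_inference(w, x)              # inference with the current weights
            score_pred = float(w @ phi(x, y_pred))
            score_gold = float(w @ phi(x, y_gold))
            if y_pred != y_gold and score_pred >= score_gold:
                # learning constraint violated -> Perceptron-style update
                w = w + phi(x, y_gold) - phi(x, y_pred)
            # otherwise: no need to update w on this example
    return w
```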

Page 13


Structured Prediction: Learning Algorithm

For each example (xi, yi) do:
  Predict: perform inference with the current weight vector:
    yi' = argmax_{y ∈ Y} w_EASY^T φ_EASY(xi, y) + w_HARD^T φ_HARD(xi, y)
  Check the learning constraint: is the score of the current prediction better than that of (xi, yi)?
  If yes (a mistaken prediction): update w
  Otherwise: no need to update w on this example
EndDo

Solution I: decompose the scoring function into EASY and HARD parts.

EASY: could be feature functions that correspond to an HMM, a linear CRF, or even φ_EASY(x, y) = φ(x), omitting the dependence on y, which corresponds to local classifiers. This may not be enough if the HARD part is still part of each inference step.

Page 14


Structured Prediction: Learning Algorithm

For each example (xi, yi) do:
  Predict: perform inference with the current weight vector:
    yi' = argmax_{y ∈ Y} w_EASY^T φ_EASY(xi, y) + w_HARD^T φ_HARD(xi, y)
  Check the learning constraint: is the score of the current prediction better than that of (xi, yi)?
  If yes (a mistaken prediction): update w
  Otherwise: no need to update w on this example
EndDo

Solution II: disregard some of the dependencies: assume a simple model.

Page 15


Structured Prediction: Learning Algorithm

For each example (xi, yi) do:
  Predict: perform inference with the current weight vector:
    yi' = argmax_{y ∈ Y} w_EASY^T φ_EASY(xi, y) + w_HARD^T φ_HARD(xi, y)
  Check the learning constraint: is the score of the current prediction better than that of (xi, yi)?
  If yes (a mistaken prediction): update w
  Otherwise: no need to update w on this example
EndDo

yi' = argmax_{y ∈ Y} w_EASY^T φ_EASY(xi, y) + w_HARD^T φ_HARD(xi, y)

This is the most commonly used solution in NLP today.

Solution III: disregard some of the dependencies during learning; take them into account at decision time.

Page 16


Examples: CCM Formulations

Linguistic constraints (examples):
  Cannot have both A states and B states in an output sequence (see the encoding sketch after this list)
  If a modifier is chosen, include its head; if a verb is chosen, include its arguments

CCMs can be viewed as a general interface for easily combining declarative domain knowledge with data-driven statistical models.

Sequential prediction (HMM/CRF based):  Argmax Σ λ_ij x_ij
Sentence compression/summarization (language model based):  Argmax Σ λ_ijk x_ijk

Formulate NLP problems as ILP problems (inference may be done otherwise):
1. Sequence tagging (HMM/CRF + global constraints)
2. Sentence compression (language model + global constraints)
3. SRL (independent classifiers + global constraints)
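As a concrete illustration of how such a declarative constraint compiles into linear inequalities (my encoding, not necessarily the one on the slide): with binary indicator variables x_{i,A} meaning "position i is assigned state A", the constraint "cannot have both A states and B states in an output sequence" becomes

\[
x_{i,A} + x_{j,B} \le 1 \qquad \text{for all positions } i, j .
\]

Any assignment that uses both an A state and a B state makes some left-hand side equal to 2, so the feasible set contains exactly the label sequences that satisfy the rule.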

Page 17

(Soft) constraints component is more general since constraints can be declarative, non-grounded statements.


Semantic Role Labeling

I left my pearls to my daughter in my will.
[I]_A0 left [my pearls]_A1 [to my daughter]_A2 [in my will]_AM-LOC.

A0: Leaver   A1: Things left   A2: Benefactor   AM-LOC: Location

Page 18

Archetypical Information Extraction Problem: E.g., Concept Identification and Typing, Event Identification, etc.


Algorithmic Approach

Identify argument candidates
  Pruning [Xue&Palmer, EMNLP'04]
  Argument Identifier: binary classification
Classify argument candidates
  Argument Classifier: multi-class classification
Inference
  Use the estimated probability distribution given by the argument classifier
  Use structural and linguistic constraints
  Infer the optimal global output

[Figure: the sentence "I left my nice pearls to her" with bracketed candidate argument spans.]

Page 19

Use the pipeline architecture’s simplicity while maintaining uncertainty: keep probability distributions over decisions & use global inference at decision time.


Semantic Role Labeling (SRL)

I left my pearls to my daughter in my will .

[Figure: candidate argument spans for the sentence, each with the argument classifier's score distribution over the possible argument labels (e.g., 0.5, 0.15, 0.15, 0.1, 0.1 for one candidate).]

Page 20



Semantic Role Labeling (SRL)

I left my pearls to my daughter in my will .

[Figure: the same candidate argument spans and score distributions as on the previous slide.]

One inference problem for each verb predicate.

Page 22


Constraints (universally quantified rules):
No duplicate argument classes
Reference-Ax: if there is a Reference-Ax phrase, there is an Ax
Continuation-Ax: if there is a Continuation-Ax phrase, there is an Ax before it
Many other possible constraints: unique labels; no overlapping or embedding; relations between the number of arguments; order constraints; if the verb is of type A, no argument of type B
Any Boolean rule can be encoded as a set of linear inequalities (illustrated below).

Learning Based Java: allows a developer to encode constraints in First Order Logic; these are compiled into linear inequalities automatically.
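For illustration, with y_{a,t} a binary variable indicating that candidate argument a receives label t (my notation; the variables in the actual system may differ), constraints like the ones above compile into inequalities such as:

\[
\sum_{t} y_{a,t} = 1 \;\; \forall a \quad \text{(each candidate gets exactly one label, possibly null)}
\]
\[
\sum_{a} y_{a,\mathrm{A0}} \le 1 \quad \text{(no duplicate argument classes, shown here for A0)}
\]
\[
y_{a,\mathrm{R\text{-}A0}} \le \sum_{a'} y_{a',\mathrm{A0}} \;\; \forall a \quad \text{(a Reference-A0 requires some A0 in the output)}
\]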

Page 23


SRL: Posing the Problem

Demo: http://cogcomp.cs.illinois.edu/

Page 24


y* = argmax_y Σ_i w_i φ_i(x, y)  (a linear objective function; often φ(x, y) will be local functions, or φ(x, y) = φ(x))

Context: Constrained Conditional Models

[Figure: a Conditional Markov Random Field over output variables y1, ..., y8 alongside a constraints network over the same variables.]

The constraint penalty term added to the objective:  - Σ_i ρ_i d_{C_i}(x, y)
Expressive constraints over the output variables: soft, weighted constraints, specified declaratively as FOL formulae.

Clearly, there is a joint probability distribution that represents this mixed model. We would like to learn a simple model (or several simple models) and make decisions with respect to a complex model.

Key difference from MLNs, which provide a concise definition of a model, but of the whole joint one.

Page 25


Constrained Conditional Models (ILP formulations) have been shown useful in the context of many NLP problems:
[Roth&Yih, 04, 07: entities and relations; Punyakanok et al.: SRL; ...] Summarization; co-reference; information & relation extraction; event identification; transliteration; textual entailment; knowledge acquisition; sentiment analysis; temporal reasoning; dependency parsing; ...

Some theoretical work on training paradigms [Punyakanok et al., 05 and more; Constraints Driven Learning, PR, Constrained EM, ...]

Some work on inference, mostly approximations, bringing back ideas on Lagrangian relaxation, etc.

A good summary and description of the training paradigms: [Chang, Ratinov & Roth, Machine Learning Journal 2012]

Summary of work & a bibliography: http://L2R.cs.uiuc.edu/tutorials.html

Constrained Conditional Models—Before a Summary

Page 26


Outline
Natural Language Processing with Constrained Conditional Models
  A formulation for global inference with knowledge modeled as expressive structural constraints
Some examples
  Extended semantic role labeling
  Preposition-based predicates and their arguments
  Multiple simple models, latent representations and indirect supervision
Amortized Integer Linear Programming Inference
  Exploiting previous inference results
  Can the k-th inference problem be cheaper than the 1st?

Page 27


Verb SRL is not sufficient

John, a fast-rising politician, slept on the train to Chicago.

Relation: sleep. Sleeper: John, a fast-rising politician. Location: on the train to Chicago.

What was John's destination? "train to Chicago" gives the answer without verbs!

Who was John? "John, a fast-rising politician" gives the answer without verbs!

Page 28


Examples of preposition relations

Queen of England

City of Chicago

Page 29


Predicates expressed by prepositions

live at Conway House (at:1)
stopped at 9 PM (at:2)
cooler in the evening (in:3)
drive at 50 mph (at:5)
arrive on the 9th (on:17)
the camp on the island (on:7)
look at the watch (at:9)

Relation labels include: Location, Temporal, ObjectOfVerb, Numeric.
(at:N, in:N, on:N = index of the definition in the Oxford English Dictionary.)

Ambiguity & Variability

Page 30


Preposition relations [Transactions of ACL, ‘13]

An inventory of 32 relations expressed by prepositions. Prepositions are assigned labels that act as predicates in a predicate-argument representation. Semantically related senses of prepositions are merged. Substantial inter-annotator agreement.

A new resource: word sense disambiguation data, re-labeled
  SemEval 2007 shared task [Litkowski 2007]: ~16K training and 8K test instances, 34 prepositions
  Small portion of the Penn Treebank [Dahlmeier et al. 2009]: only 7 prepositions, 22 labels

Page 31


Computational Questions

1. How do we predict the preposition relations? [EMNLP '11]
   Capturing the interplay with verb SRL? Very small jointly labeled corpus, so we cannot train a global model!

2. What about the arguments? [Transactions of ACL '13]
   Annotation only gives us the predicate. How do we train an argument labeler?

Page 32


Predicting preposition relations

A multiclass classifier that uses sense disambiguation features, which depend on words syntactically connected to the preposition [Hovy et al. 2009].

Additional features based on NER, gazetteers, word clusters

Does not take advantage of the interactions between preposition and verb relations

Page 33


The bus was heading for Nairobi in Kenya.

Coherence of predictions

Location

Destination

Predicate: head.02.  A0 (mover): The bus.  A1 (destination): for Nairobi in Kenya.

Predicate arguments from different triggers should be consistent.

Joint constraints linking the two tasks, e.g. Destination ⇔ A1.

Page 34


Joint inference

[Formula callouts: for each argument candidate and each argument label there is a scored variable; for each preposition, a preposition-relation label variable; verb SRL constraints; only one label per preposition; joint constraints (as linear inequalities) between verb arguments and preposition relations; re-scaling parameters (one per label).]

Constraints: the variable y_{a,t} indicates whether candidate argument a is assigned label t; c_{a,t} is the corresponding score.
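A sketch of the resulting joint objective in my notation (z_{p,r} indicates that preposition p gets relation r, with score c_{p,r} and re-scaling parameter λ_r; the linking constraint shown is one possible form, motivated by the Nairobi example above):

\[
\max \;\; \sum_{a,t} c_{a,t}\, y_{a,t} \;+\; \sum_{p,r} \lambda_{r}\, c_{p,r}\, z_{p,r}
\quad \text{s.t.} \quad \sum_{t} y_{a,t} = 1, \;\; \sum_{r} z_{p,r} = 1, \;\;
z_{p,\mathrm{Destination}} \le \sum_{a \,\ni\, p} y_{a,\mathrm{A1}}
\]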

Page 36


Desiderata for joint prediction

Intuition: the correct interpretation of a sentence is the one that gives a consistent analysis across all the linguistic phenomena expressed in it.
1. It should account for dependencies between linguistic phenomena: joint constraints between the tasks, easy with the ILP formulation.
2. It should be able to use existing state-of-the-art models, with minimal use of expensive jointly labeled data: use a small amount of joint data to re-scale the scores into the same numeric range.

Joint inference, with no (or minimal) joint learning.

Page 37


Joint prediction helps

[Bar charts comparing Verb SRL F1 (values shown: 76.22, 76.84, 77.07) under Independent, Prep. -> Verb, and Joint inference, and Preposition Relations accuracy (values shown: 67.82, 68.55, 68.39) under Independent, Verb -> Prep., and Joint inference / Predicted together.]

All results on Penn Treebank Section 23. [EMNLP, '11]

Page 38


Example

Weatherford said market conditions led to the cancellation of the planned exchange.

The independent preposition classifier mistakenly labels "to" as a Location.

Verb SRL identifies "to the cancellation of the planned exchange" as an A2 of the verb "led".

The constraints (from VerbNet) prohibit an A2 from being labeled as a Location; joint inference correctly switches the prediction to EndCondition.

Page 39


Preposition relations and arguments

1. How do we predict the preposition relations? [EMNLP '11]
   Capturing the interplay with verb SRL? Very small jointly labeled corpus, so we cannot train a global model!

2. What about the arguments? [Transactions of ACL '13]
   Annotation only gives us the predicate. How do we train an argument labeler?

Enforcing consistency between verb argument labels and preposition relations can help improve both

Page 40


Page 41

Indirect Supervision

In many cases we are interested in a mapping from X to Y, but Y cannot be expressed as a simple function of X, and hence cannot be learned well only as a function of X.

Consider the following sentences:
S1: Druce will face murder charges, Conte said.
S2: Conte said Druce will be charged with murder.
Are S1 and S2 paraphrases of each other? There is a need for an additional set of variables to justify this decision.

There is no supervision of these variables, and typically no evaluation of them, but learning to assign values to them supports better prediction of Y.

A discriminative form of EM [Chang et al. ICML'10, NAACL'10], [Yu & Joachims '09].


Preposition arguments

Governor and Object

Poor care led to her death from pneumonia.
Cause(death, pneumonia)

[Figure: governor candidates with scores: led 0.2, her 0.2, death 0.6; object candidate: pneumonia 1.0.]

Score the candidates and select one governor and one object.

Page 42


Types are an abstraction that captures common properties of groups of entities.

Relations depend on argument types

Our primary goal is to model preposition relations and their arguments, but the relation prediction also depends strongly on the semantic types of the arguments.

Page 43

Poor care led to her death from pneumonia.

Cause(death, pneumonia)

Poor care led to her death from the flu.

How do we generalize to unseen words in the same “type”?


WordNet IS-A hierarchy as types

pneumonia => respiratory disease => disease => illness => ill health => pathological state => physical condition => condition => state => attribute => abstraction => entity

More general, but less discriminative

Picking the right level in this hierarchy can generalize pneumonia and flu

Picking incorrectly will over-generalize

In addition to WordNet hypernyms, we also cluster verbs, nouns and adjectives using the dependency based word similarity of (Lin, 1998) and treat cluster membership as types.

Page 44


Why are types important?

Input               | Relation | Governor type | Object type
Died of pneumonia   | Cause    | Experience    | Disease
Suffering from flu  | Cause    | Experience    | Disease

Some semantic relations hold only for certain types of entities

Page 45


Poor care led to her death from flu.

[Figure: the predicate-argument structure of the preposition "from": Relation = Cause; Governor = death, Object = flu; Governor type = experience, Object type = disease. The relation label is the supervised part r(y); the arguments and their types form the latent structure h(y) inside the full prediction y.]

Predicate-argument structure of prepositions: the supervision is the relation label; the rest is latent structure.

Page 46


Latent inference

Inference takes into account constraints among parts of the structure; it is formulated as an ILP.

Standard inference: find an assignment to the full structure.

Latent inference: given an example with annotated r(y*), find the best full structure that agrees with r(y*). Given that we have constraints between r(y) and h(y), this process completes the structure in the best possible way to support correct prediction of the supervised variables.
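In symbols (a sketch in my notation; the slide presents this pictorially), writing the full structure as the pair (r(y), h(y)): standard inference is y* = argmax_{y ∈ Y} w^T φ(x, y) subject to the constraints, while latent inference, given the annotated part r(y*), is

\[
\hat{h} \;=\; \arg\max_{h} \; \mathbf{w}^{\top}\phi\big(x,\, r(y^{*}),\, h\big)
\quad \text{subject to the joint constraints between } r(y^{*}) \text{ and } h .
\]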

Page 47


Learning algorithm

Initialize the weight vector using a multi-class classifier for the predicates.

Repeat:
  Use latent inference with the current weights to "complete" all the missing pieces
  Train the weight vector (e.g., with a Structured SVM)

During training, a weight vector w is penalized more if it makes a mistake on r(y).

A generalization of Latent Structure SVM [Yu & Joachims '09] and Indirect Supervision learning [Chang et al. '10].
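A minimal sketch of this training loop, with the two steps passed in as functions: `latent_complete(w, x, r_gold)` stands for the constrained latent inference above and `train_structured_svm(completed, w)` for one round of supervised structured training; both names are illustrative assumptions, not the paper's code:

```python
def latent_structure_training(data, w_init, latent_complete, train_structured_svm, rounds=10):
    """data: list of (x, r_gold) pairs where only the supervised part r(y) is annotated."""
    w = w_init                                    # e.g., from a multi-class predicate classifier
    for _ in range(rounds):
        # complete the hidden part of every example, guided by the constraints
        completed = [(x, (r_gold, latent_complete(w, x, r_gold))) for x, r_gold in data]
        # retrain on the completed structures; mistakes on r(y) should be penalized more
        w = train_structured_svm(completed, w)
    return w
```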

Page 48


Supervised Learning

Learning (updating w) is driven by:

w^T φ(xi, yi)  ≥  w^T φ(xi, y) + Δ(y, yi)    for all y
(score of the annotated structure  ≥  score of any other structure  +  penalty for predicting that other structure)

The penalty for making a mistake must not be the same for the labeled part r(y) and the inferred part h(y).

Completion of the hidden structure is done in the inference step (guided by constraints).

Page 49


Performance on relation labeling

[Bar chart: accuracy for "Initialization" vs. "+ Latent" training, for models using Relations, + Arguments, and + Types & Senses; the y-axis runs roughly from 87.5 to 90.5.]

Model sizes: 5.41 vs. 2.21 non-zero weights; the smaller model is learned to predict both predicates and arguments.

Using types helps; joint inference with word sense helps more. More components constrain the inference results and improve inference.

Page 50


Preposition relations and arguments

1. How do we predict the preposition relations? [EMNLP '11] Capturing the interplay with verb SRL? Very small jointly labeled corpus, so we cannot train a global model!

2. What about the arguments? [Transactions of ACL '13] Annotation only gives us the predicate. How do we train an argument labeler?

Enforcing consistency between verb argument labels and preposition relations can help improve both

Knowing the existence of a hidden structure lets us “complete” it and helps us learn

Page 51


Outline
Natural Language Processing with Constrained Conditional Models
  A formulation for global inference with knowledge modeled as expressive structural constraints
Some examples
  Extended semantic role labeling
  Preposition-based predicates and their arguments
  Multiple simple models, latent representations and indirect supervision
Amortized Integer Linear Programming Inference
  Exploiting previous inference results
  Can the k-th inference problem be cheaper than the 1st?

Page 52


Constrained Conditional Models (aka ILP Inference)

How to solve?

This is an Integer Linear Program

Solving using ILP packages gives an exact solution. Cutting Planes, Dual Decomposition & other search techniques are possible

(Soft) constraints component

Weight Vector for “local” models

Penalty for violating the constraint.

How far y is from a “legal” assignment

Features, classifiers; log-linear models (HMM, CRF) or a combination

How to train?

Training is learning the objective function

Decouple? Decompose?

How to exploit the structure to minimize supervision?

Page 53


Inference in NLP

In NLP, we typically don’t solve a single inference problem. We solve one or more per sentence. Beyond improving the inference algorithm, what can be done?

S1: He is reading a book
S2: I am watching a movie
POS: PRP VBZ VBG DT NN

After inferring the POS structure for S1, can we speed up inference for S2? Can we make the k-th inference problem cheaper than the first?

S1 and S2 look very different, but their output structures, and hence the inference outcomes, are the same.

Page 54


Amortized ILP Inference [Kundu, Srikumar & Roth, EMNLP-12,ACL-13]

We formulate the problem of amortized inference: reducing inference time over the lifetime of an NLP tool

We develop conditions under which the solution of a new problem can be exactly inferred from earlier solutions without invoking the solver.

Results: a family of exact inference schemes and a family of approximate solution schemes. The algorithms are invariant to the underlying solver; we simply reduce the number of calls to the solver.

Significant improvements both in terms of solver calls and wall-clock time in a state-of-the-art semantic role labeler.

Page 55


The Hope: POS Tagging on Gigaword

[Chart: number of examples of a given size; x-axis: number of tokens (0-48), y-axis: number of examples (0-600,000).]

Page 56


Number of structures is much smaller than the number of sentences

The Hope: POS Tagging on Gigaword

[Chart: number of examples of a given size vs. number of unique POS tag sequences, as a function of the number of tokens (0-48).]

Page 57


The Hope: Dependency Parsing on Gigaword

[Chart: number of examples of a given size vs. number of unique dependency trees, as a function of the number of tokens (0-50).]

Number of structures is much smaller than the number of sentences.

Page 58


The Hope: Semantic Role Labeling on Gigaword

[Chart: number of SRL structures vs. number of unique SRL structures, as a function of the number of arguments per predicate (1-8).]

Number of structures is much smaller than the number of sentences.

Page 59


POS Tagging on Gigaword

[Chart: number of examples of a given size vs. number of unique POS tag sequences, as a function of the number of tokens.]

How skewed is the distribution of the structures? A small number of structures occur very frequently.

Page 60


Amortized ILP Inference

These statistics show that many different instances are mapped into identical inference outcomes.

How can we exploit this fact to save inference cost?

We do this in the context of 0-1 LP, which is the most commonly used formulation in NLP:

max c·x   subject to   Ax ≤ b,   x ∈ {0, 1}^n

Page 61


Example I

P:  max 2x1 + 3x2 + 2x3 + x4        Q:  max 2x1 + 4x2 + 2x3 + 0.5x4
    s.t. x1 + x2 ≤ 1, x3 + x4 ≤ 1       s.t. x1 + x2 ≤ 1, x3 + x4 ≤ 1

Optimal solution of P:  x*_P = <0, 1, 1, 0>
Objective coefficients:  c_P = <2, 3, 2, 1>,  c_Q = <2, 4, 2, 0.5>

P and Q are in the same equivalence class. We define an equivalence class as the set of ILPs that have:
  the same number of inference variables
  the same feasible set (the same constraints, modulo renaming)

Page 62

We give conditions on the objective functions, under which the solution of P (which we already cached) is the same as that of the new problem Q
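For this toy example the claim can be checked by brute force; a quick sketch that enumerates all 0-1 assignments satisfying x1 + x2 ≤ 1 and x3 + x4 ≤ 1:

```python
from itertools import product

def solve(c):
    # enumerate the feasible 0-1 assignments and return the maximizer of c . x
    feasible = [x for x in product((0, 1), repeat=4) if x[0] + x[1] <= 1 and x[2] + x[3] <= 1]
    return max(feasible, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))

print(solve((2, 3, 2, 1)))    # P -> (0, 1, 1, 0)
print(solve((2, 4, 2, 0.5)))  # Q -> (0, 1, 1, 0): the cached solution of P is also optimal for Q
```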



Objective coefficients of active variables did not decrease from P to Q

Page 63



Objective coefficients of inactive variables did not increase from P to Q

Therefore x*_Q = x*_P.

Conclusion: The optimal solution of Q is the same as P’s

Page 64


Exact Theorem I

Denote δc = c_Q - c_P.

Theorem: Let x*_P be the optimal solution of an ILP P, and assume that an ILP Q is in the same equivalence class as P and that for each i ∈ {1, ..., n_P}:  (2 x*_{P,i} - 1) · δc_i ≥ 0.
Then, without solving Q, we can guarantee that the optimal solution of Q is x*_Q = x*_P.

Equivalently:  x*_{P,i} = 0  ⇒  c_{Q,i} ≤ c_{P,i};   x*_{P,i} = 1  ⇒  c_{Q,i} ≥ c_{P,i}.
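A small sketch of the Theorem I test as a cache lookup (illustrative names; c_p and c_q are the objective-coefficient vectors and x_p the cached optimal solution):

```python
def same_solution_by_theorem_1(x_p, c_p, c_q):
    """True if Theorem I guarantees that the cached solution x_p is also optimal for Q:
    for every i, (2*x_p[i] - 1) * (c_q[i] - c_p[i]) >= 0, i.e. coefficients of active
    variables did not decrease and coefficients of inactive variables did not increase."""
    return all((2 * xi - 1) * (cq - cp) >= 0 for xi, cp, cq in zip(x_p, c_p, c_q))

# The Example I problems: the condition holds, so Q inherits P's solution <0, 1, 1, 0>.
assert same_solution_by_theorem_1([0, 1, 1, 0], [2, 3, 2, 1], [2, 4, 2, 0.5])
```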

Page 65


Exact Theorem II

Theorem: Assume we have seen m ILP problems {P1, P2, ..., Pm} such that all are in the same equivalence class and all have the same optimal solution. Let ILP Q be a new problem such that Q is in the same equivalence class as P1, ..., Pm and there exists z ≥ 0 such that c_Q = Σ_i z_i c_{Pi}.
Then, without solving Q, we can guarantee that the optimal solution of Q is x*_Q = x*_{Pi}.

Page 67


Exact Theorem II (Geometric Interpretation)

[Figure: a feasible region with two objective vectors c_P1 and c_P2 that share the same maximizer x*.]

ILPs corresponding to all these objective vectors will share the same maximizer for this feasible region: all ILPs whose objective lies in the cone spanned by c_P1 and c_P2 share the maximizer.

Page 68


Exact Theorem III (Combining I and II)

Theorem: Assume we have seen m ILP problems {P1, P2, ..., Pm} such that all are in the same equivalence class and all have the same optimal solution. Let ILP Q be a new problem such that Q is in the same equivalence class as P1, ..., Pm and there exists z ≥ 0 such that δc = c_Q - Σ_i z_i c_{Pi} satisfies (2 x*_{P,i} - 1) · δc_i ≥ 0 for each i.
Then, without solving Q, we can guarantee that the optimal solution of Q is x*_Q = x*_{Pi}.

Page 69


Approximation Methods

Will the conditions of the exact theorems hold in practice?

The statistics we showed almost guarantee that they will: there are very few structures relative to the number of instances.

To guarantee that the conditions on the objective coefficients are satisfied, we can relax them and move to approximation methods.

Approximate methods have the potential for more speedup than the exact theorems, and it turns out that indeed the speedup is higher, without a drop in accuracy.


Page 70


Simple Approximation Method (I, II)

Most Frequent Solution:
  Find the set C of previously solved ILPs in Q's equivalence class
  Let S be the most frequent solution in C
  If the frequency of S in C is above a threshold (support), return S; otherwise call the ILP solver

Top-K Approximation:
  Find the set C of previously solved ILPs in Q's equivalence class
  Let K be the set of most frequent solutions in C
  Evaluate each of the K solutions on the objective function of Q and select the one with the highest objective value
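A sketch of the first scheme (Most Frequent Solution) as it might look in code; `equivalence_key`, `solve_ilp`, the cache layout and the tuple representation of solutions are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def amortized_solve(problem, cache, equivalence_key, solve_ilp, support=0.8):
    """cache maps an equivalence-class key (same number of variables, same constraint
    set) to a Counter of previously seen solutions, each stored as a tuple."""
    key = equivalence_key(problem)
    seen = cache.get(key)
    if seen:
        solution, count = seen.most_common(1)[0]
        if count / sum(seen.values()) >= support:
            return solution                       # reuse without calling the solver
    solution = solve_ilp(problem)                 # otherwise fall back to the ILP solver
    cache.setdefault(key, Counter()).update([solution])
    return solution
```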

Page 71


Theory based Approximation Methods (III, IV)

Approximation of Theorem I:
  Find the set C of previously solved ILPs in Q's equivalence class
  If there is an ILP P in C that satisfies Theorem I within an error margin of ϵ (for each i ∈ {1, ..., n_P}: (2 x*_{P,i} - 1) · δc_i + ϵ ≥ 0, where δc = c_Q - c_P), return x*_P

Approximation of Theorem III:
  Find the set C of previously solved ILPs in Q's equivalence class
  If there is an ILP P in C that satisfies Theorem III within an error margin of ϵ (there exists z ≥ 0 such that δc = c_Q - Σ_i z_i c_{Pi} and (2 x*_{P,i} - 1) · δc_i + ϵ ≥ 0), return x*_P

Page 72


Semantic Role Labeling Task

I left my pearls to my daughter in my will.
[I]_A0 left [my pearls]_A1 [to my daughter]_A2 [in my will]_AM-LOC.

A0: Leaver   A1: Things left   A2: Benefactor   AM-LOC: Location

Who did what to whom, when, where, why, ...

Page 73


Experiments: Semantic Role Labeling

SRL: based on the state-of-the-art Illinois SRL system [V. Punyakanok, D. Roth and W. Yih, The Importance of Syntactic Parsing and Inference in Semantic Role Labeling, Computational Linguistics, 2008]. In SRL, we solve an ILP problem for each verb predicate in each sentence.

Amortization experiments: speedup and accuracy are measured over the WSJ test set (Section 23); the baseline is solving each ILP with Gurobi 4.6.

For amortization: we collect 250,000 SRL inference problems from Gigaword and store them in a database. For each ILP in the test set, we invoke one of the theorems (exact or approximate); if a reusable solution is found we return it, otherwise we call the baseline ILP solver.

Page 74


Speedup & Accuracy

[Bar chart: speedup and F1 of the Exact and Approximate amortized inference schemes relative to the baseline solver; speedup axis roughly 0.8 to 3.8 (baseline = 1.0), F1 axis 0 to 80.]

We solve only one in three problems.

Page 75


Summary: Amortized ILP Inference

Inference can be amortized over the lifetime of an NLP tool. This yields significant speedup by reducing the number of calls to the inference engine, independently of the solver.

Current/future work:
  Decomposed amortized inference, possibly combined with Lagrangian relaxation
  Approximation augmented with warm start
  Relations to lifted inference

Page 76


Conclusion

Presented Constrained Conditional Models: an ILP-based computational framework that augments statistically learned linear models with declarative constraints as a way to incorporate knowledge and support decisions in an expressive output space, while maintaining the modularity and tractability of training.

A powerful and modular learning and inference paradigm for high-level tasks: multiple interdependent components are learned and, via inference, support coherent decisions, modulo declarative constraints.

Learning issues: exemplified some of the issues in the context of extended SRL; learning simple models; modularity; latent and indirect supervision.

Inference: presented a first step in amortized inference: how to use previous inference outcomes to reduce inference cost.

Thank You!

Check out our tools, demos, tutorial

Page 77