Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers
Panos Ipeirotis, Stern School of Business, New York University
Joint work with Victor Sheng, Foster Provost, and Jing Wang
Transcript
Page 1:

Get Another Label? Improving Data Quality and Data Mining

Using Multiple, Noisy Labelers

Panos Ipeirotis

Stern School of BusinessNew York University

Joint work with Victor Sheng, Foster Provost, and Jing Wang

Page 2:

Motivation

Many tasks rely on high-quality labels for objects:
– relevance judgments for search engine results

– identification of duplicate database records

– image recognition

– song categorization

– videos

Labeling can be relatively inexpensive, using Mechanical Turk, ESP game …

Page 3:

Micro-Outsourcing: Mechanical Turk

Requesters post micro-tasks, a few cents each

Page 4:

Motivation

Labels can be used in training predictive models.

But: labels obtained through such sources are noisy.

This directly affects the quality of the learned models.

Page 5:

Quality and Classification Performance

[Figure: classifier accuracy (40–100%) vs. number of training examples (1–300; Mushroom data set), one learning curve per labeling quality Q = 0.5, 0.6, 0.8, 1.0]

Labeling quality increases → classification quality increases.

Page 6:

How to Improve Labeling Quality

Find better labelers
– Often expensive, or beyond our control

Use multiple noisy labelers: repeated labeling
– Our focus

Page 7:

Majority Voting and Label Quality

[Figure: integrated quality (0.2–1.0) vs. number of labelers (1–13), one curve per individual labeler accuracy P = 0.4 through P = 1.0]

Ask multiple labelers, keep majority label as “true” label

Quality is probability of majority label being correct

P is the probability of an individual labeler being correct
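As a quick illustration of the curves above, here is a minimal sketch (my own code, not from the slides; the function name is illustrative) that computes the majority-vote quality for a given individual accuracy P and an odd number of labelers:

```python
from math import comb

def majority_quality(p: float, n: int) -> float:
    """Probability that the majority vote of n independent labelers,
    each correct with probability p, equals the true binary label.
    Uses an odd n so there are no ties."""
    assert n % 2 == 1, "use an odd number of labelers to avoid ties"
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Example: p = 0.7 with 5 labelers gives an integrated quality of about 0.84,
# while p = 0.5 stays at 0.5 no matter how many labelers are added.
print(majority_quality(0.7, 5))   # ~0.837
print(majority_quality(0.5, 13))  # 0.5
```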

Page 8:

Tradeoffs for Modeling

Get more examples → improve classification
Get more labels per example → improve quality → improve classification

[Figure: the same accuracy vs. number of examples learning curves as on page 5 (Mushroom data set; Q = 0.5, 0.6, 0.8, 1.0)]

Page 9:

Basic Labeling Strategies

Single Labeling
– Get as many data points as possible
– One label each

Round-robin Repeated Labeling (see the sketch below)
– Repeatedly label data points
– Give the next label to the example with the fewest labels so far
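A minimal sketch of the round-robin idea, assuming a caller-supplied `get_label` function that queries one noisy labeler (names are illustrative, not from the slides):

```python
import random
from collections import defaultdict

def round_robin_labeling(examples, get_label, budget):
    """Spend each unit of labeling budget on the example
    that currently has the fewest labels."""
    labels = defaultdict(list)
    for _ in range(budget):
        target = min(examples, key=lambda e: len(labels[e]))
        labels[target].append(get_label(target))
    return labels

# Toy usage: a labeler that returns the true label '+' with probability 0.7
noisy_labeler = lambda example: "+" if random.random() < 0.7 else "-"
print(round_robin_labeling(["e1", "e2", "e3"], noisy_labeler, budget=9))
```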

Page 10:

Repeat-Labeling vs. Single Labeling

P = 0.8 (labeling quality), K = 5 (labels per example)

[Figure: learning curves for repeated labeling vs. single labeling]

With low noise, getting more (single-labeled) examples is better.

Page 11:

Repeat-Labeling vs. Single Labeling

P = 0.6 (labeling quality), K = 5 (labels per example)

[Figure: learning curves for repeated labeling vs. single labeling]

With high noise, repeated labeling is better.

Page 12:

Selective Repeated-Labeling

We have seen:
– With enough examples and noisy labels, getting multiple labels is better than single labeling

Can we do better than the basic strategies?

Key observation: we have additional information to guide the selection of data for repeated labeling
– the current multiset of labels

Example: {+,-,+,+,-,+} vs. {+,+,+,+}

Page 13:

Natural Candidate: Entropy

Entropy is a natural measure of label uncertainty:

E({+,+,+,+,+,+}) = 0        E({+,-,+,-,+,-}) = 1

Strategy: Get more labels for high-entropy label multisets

$E(S) = -\frac{|S_+|}{|S|}\log_2\frac{|S_+|}{|S|} - \frac{|S_-|}{|S|}\log_2\frac{|S_-|}{|S|}$

where $|S_+|$ and $|S_-|$ are the numbers of positive and negative labels in the multiset $S$.
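For concreteness, a small sketch (my own code, not from the slides) that evaluates this entropy on a label multiset and reproduces the values above:

```python
from collections import Counter
from math import log2

def label_entropy(labels):
    """Entropy of a label multiset such as ['+', '-', '+', '+'].
    0 means the labels are unanimous; 1 means an even split."""
    n = len(labels)
    counts = Counter(labels)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(label_entropy(list("++++++")))  # 0.0
print(label_entropy(list("+-+-+-")))  # 1.0
print(label_entropy(list("+++--")))   # ~0.97, the (3+, 2-) case used later
```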

Page 14:

What Not to Do: Use Entropy

Improves at first, but hurts in the long run

Page 15:

Why not Entropy

In the presence of noise, entropy will be high even with many labels

Entropy is scale invariant
– (3+, 2-) has the same entropy as (600+, 400-)


Page 16:

Estimating Label Uncertainty (LU)

Observe +’s and –’s and compute Pr{+|obs} and Pr{-|obs}

Label uncertainty = tail of beta distribution

[Figure: Beta probability density function over [0, 1]; the label-uncertainty score $S_{LU}$ is the tail area beyond 0.5]
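One way to compute this score, sketched below under the assumption of a uniform Beta(1, 1) prior (which reproduces the CDF values shown on the next three slides); the helper name is mine and SciPy is used for the Beta CDF:

```python
from scipy.stats import beta

def label_uncertainty(pos: int, neg: int) -> float:
    """Posterior tail mass of Beta(pos + 1, neg + 1) on the 'wrong' side of 0.5,
    i.e. the estimated probability that the current majority label is incorrect."""
    tail = beta.cdf(0.5, pos + 1, neg + 1)
    return min(tail, 1.0 - tail)

# Same label ratio, more labels -> less uncertainty (matches the slides):
print(round(label_uncertainty(3, 2), 2))    # 0.34
print(round(label_uncertainty(7, 3), 2))    # 0.11
print(round(label_uncertainty(14, 6), 2))   # 0.04
```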

Page 17:

Label Uncertainty

p = 0.7, 5 labels
(3+, 2-): entropy ≈ 0.97, CDF = 0.34

Page 18:

Label Uncertainty

p = 0.7, 10 labels
(7+, 3-): entropy ≈ 0.88, CDF = 0.11

Page 19:

Label Uncertainty

p = 0.7, 20 labels
(14+, 6-): entropy ≈ 0.88, CDF = 0.04

Page 20:

Quality Comparison

[Figure: labeling quality (0.6–1.0) vs. number of labels (0–2000; waveform data set, p = 0.6) for the UNF (uniform round-robin), MU, LU, and LMU strategies]

Label Uncertainty improves over round robin, which is already better than single labeling.

Page 21:

Model Uncertainty (MU)

Learning a model of the data provides an alternative source of information about label certainty

Model uncertainty: get more labels for instances that cause model uncertainty

Intuition:
– for data quality: low-certainty “regions” may be due to incorrect labeling of the corresponding instances
– for modeling: why improve training data quality if the model is already certain there?

[Diagram: models, labeled examples (+/-), and uncertain instances ('?'), illustrating the self-healing process]

Page 22:

Label + Model Uncertainty

Label and model uncertainty (LMU): avoid examples where either strategy is certain

$S_{LMU} = \sqrt{S_{LU} \cdot S_{MU}}$
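A minimal sketch of how the combined score could be computed (my own simplification: model uncertainty here is just the distance of a single predicted probability from 0.5, whereas the approach described above scores uncertainty using learned models):

```python
from math import sqrt
from scipy.stats import beta

def lu_score(pos: int, neg: int) -> float:
    """Label uncertainty: Beta(pos + 1, neg + 1) tail on the minority side of 0.5."""
    tail = beta.cdf(0.5, pos + 1, neg + 1)
    return min(tail, 1.0 - tail)

def mu_score(p_positive: float) -> float:
    """Simplified model uncertainty: 1 when the model predicts 0.5, 0 when certain."""
    return 1.0 - 2.0 * abs(p_positive - 0.5)

def lmu_score(pos: int, neg: int, p_positive: float) -> float:
    """Label + model uncertainty: geometric mean of the two scores."""
    return sqrt(lu_score(pos, neg) * mu_score(p_positive))

# An example whose labels look settled (7+, 3-) but that still confuses the model:
print(round(lmu_score(7, 3, 0.55), 3))   # ~0.319
```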

Page 23:

Quality

[Figure: labeling quality (0.6–1.0) vs. number of labels (0–2000; waveform data set, p = 0.6), with curves labeled Uniform/round robin, Label Uncertainty, and Label + Model Uncertainty]

Model Uncertainty alone also improves quality.

Page 24:

Comparison: Model Quality (I)

Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.

[Figure: model accuracy (70–100%) vs. number of labels (0–4000; sick data set, p = 0.6) for the GRR, MU, LU, and LMU strategies; callout: Label & Model Uncertainty]

Page 25:

Comparison: Model Quality (II)

[Figure: model accuracy (65–100%) vs. number of labels (0–4000; mushroom data set, p = 0.6) for GRR, MU, LU, LMU, and SL (single labeling)]

Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.

Page 26:

Summary of results

Micro-outsourcing (e.g., MTurk, RentaCoder, the ESP game) changes the landscape for data acquisition
Repeated labeling improves data quality and model quality
With noisy labels, repeated labeling can be preferable to single labeling
When labels are relatively cheap, repeated labeling can do much better than single labeling
Round-robin repeated labeling works well
Selective repeated labeling improves substantially

Page 27:

Opens up many new directions…

Strategies using “learning-curve gradient”

Estimating the quality of each labeler

Example-conditional labeling difficulty

Increased compensation vs. labeler quality

Multiple “real” labels

Truly “soft” labels

Selective repeated tagging

Page 28:

Thanks!

Q & A?

KDD’09 Workshop on Human Computation: http://www.hcomp2009.org/Home.html

Page 29:

Estimating Labeler Quality

(Dawid, Skene 1979): “Multiple diagnoses”

– Assume equal qualities
– Estimate “true” labels for the examples
– Estimate the qualities of the labelers given the “true” labels
– Repeat until convergence
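A minimal sketch of this iteration for binary labels (a simplified “one-coin” variant with a uniform class prior and a complete label matrix; the full Dawid & Skene model estimates per-class error rates):

```python
import numpy as np

def estimate_labeler_quality(L, n_iter=50):
    """Simplified Dawid-Skene-style EM for binary labels.
    L: (n_examples x n_labelers) array of 0/1 labels, assumed complete.
    Returns P(true label = 1) per example and an accuracy per labeler."""
    q = np.full(L.shape[1], 0.7)                    # start from equal qualities
    for _ in range(n_iter):
        # E-step: posterior that each example's true label is 1
        log_p1 = (L * np.log(q) + (1 - L) * np.log(1 - q)).sum(axis=1)
        log_p0 = ((1 - L) * np.log(q) + L * np.log(1 - q)).sum(axis=1)
        p1 = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
        # M-step: a labeler's quality = expected agreement with the soft truth
        q = (L * p1[:, None] + (1 - L) * (1 - p1)[:, None]).mean(axis=0)
        q = q.clip(0.01, 0.99)                      # keep away from 0 and 1
    return p1, q

# Toy usage: labeler 3 disagrees with the other two on most examples
L = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0]])
p1, q = estimate_labeler_quality(L)
print(np.round(p1, 2), np.round(q, 2))
```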


Page 30:

So…

(Sometimes) the quality of multiple noisy labelers is better than the quality of the best labeler in the set


Multiple noisy labelers improve quality

So, should we always get multiple labels?

Page 31:

Optimal Label Allocation
