Thesis Proposal: Maximizing long-term ROI for Active Learning Systems

Feb 23, 2016


Robert Graber

Transcript
Page 1: Thesis Proposal

Thesis Proposal

Maximizing long-term ROI for Active Learning Systems

Page 2: Thesis Proposal

Interactive Classification. Goal: Optimize lifetime Return On Investment

Large volume (in millions) of incoming transactions

Majority of transactions automatically cleared; transactions processed successfully by domain-specific transaction processing

Minority of transactions flagged for auditing: a learning model flags transactions for manual intervention

Machine learning model yields lower false positive rates

Defining Characteristics
• Expensive domain experts
• Skewed class distribution (minority events)
• Concept/feature drift
• Biased sampling of labeled historical data
• Lots of unlabeled data

Page 3: Thesis Proposal

Interactive Classification Applications

• Fraud detection (Credit Card, Healthcare)
• Network Intrusion detection
• Video Surveillance
• Information Filtering / Recommender Systems
• Error prediction / Quality Control
  – Health Insurance Claims Rework

Page 4: Thesis Proposal

Health Insurance Claim Process - Rework

(Diagram: claim process showing underpayments and overpayments)

Page 5: Thesis Proposal

Why is solving Claims Rework important?

• Inefficiencies in the healthcare process result in large monetary losses affecting corporations and consumers
  – For a large (10 million+ member) insurance plan, an estimated $1 billion in lost revenue
  – $91 billion over-spent in the US every year on health administration and insurance (McKinsey study, Nov 2008)
  – 131 percent increase in insurance premiums over the past 10 years
• Claim payment errors drive a significant portion of these inefficiencies
  – Increased administrative costs and service issues for health plans
  – Overpayment of claims: direct loss
  – Underpayment of claims: loss of interest payment for the insurer, loss of revenue for the provider

Page 6: Thesis Proposal

Interactive Classification Setting – Machine Learning Setup

• Classifier trained from labeled data
• Human (user/expert) in the loop, using the results but also providing feedback at a cost
• Goal: Maximize long-term Return on Investment (equivalent to the productivity of the entire system)

Unlabeled + Labeled Data → Trained Classifier → Ranked list scored by the classifier

Page 7: Thesis Proposal

Factorization of the problem

Cost (time of the human expert)

Exploration (future classifier performance)

Exploitation (relevancy to the expert)

Exploration-Exploitation Tradeoffs

Cost-Sensitive Active Learning

Standard Ranking / Relevance Feedback Active Learning

Cost-Sensitive Exploitation

Page 8: Thesis Proposal

Factorization of the problem – characterization of the models

• Uniform – Each instance has the same value
• Variable – Each instance has a different value, which depends on the properties of the instance
• Markovian – Each instance has a dynamically changing value that depends on the (ordered) history of instances already observed, in addition to the factors of the Variable model

Applies to each of: Cost (time of the human expert), Exploration (future classifier performance), Exploitation (relevancy to the expert)
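The three characterizations above can be read as interchangeable value models sharing one interface. The classes below are a hypothetical sketch of that idea (the class names and `value` method are ours, not the proposal's code):

```python
class UniformModel:
    """Every instance has the same value."""
    def __init__(self, v=1.0):
        self.v = v
    def value(self, instance, history=None):
        return self.v

class VariableModel:
    """Value depends only on properties of the instance itself."""
    def __init__(self, value_fn):
        self.value_fn = value_fn
    def value(self, instance, history=None):
        return self.value_fn(instance)

class MarkovianModel:
    """Value depends on the instance and the ordered history already observed."""
    def __init__(self, value_fn):
        self.value_fn = value_fn
    def value(self, instance, history=None):
        return self.value_fn(instance, history or [])
```

For example, a Markovian cost model can return a lower labeling cost when the previous instance was similar, mirroring the reduced cognitive switching cost discussed later for claims rework.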

Page 9: Thesis Proposal

Example Cases for Factorization of Cost Model

• Uniform: Speculative versus definitive language usage distinction for biomedical abstracts [Settles et al., 2008]
• Variable: Part-of-speech tagging – annotation time depends on sentence length, with longer documents taking more time to label [Ringger et al., 2008]
• Markovian: Claims rework error prediction – if similar claims are shown to the auditors in sequence, reducing the cognitive switching costs, the time taken to label decreases [Ghani and Kumar, 2011]

Page 10: Thesis Proposal

Example Cases for Factorization of Exploitation Model

• Uniform: Claims rework error prediction – if we account only for the administrative overhead of fixing a claim [Kumar et al., 2010]
• Variable: Claims rework error prediction – if we take into account the savings based on the adjustment amount of the claim [Kumar et al., 2010]
• Markovian: Claims rework error prediction – root cause detection [Kumar et al., 2010]

Page 11: Thesis Proposal

Example Cases for Factorization of Exploration Model

• Uniform: Extracting contact details from email signature lines – a random strategy gives results comparable to other strategies [Settles et al., 2008]
• Variable: KDD Cup 1999, network intrusion detection
  – A sparsity-based strategy gives good performance [Ferdowsi et al., 2011]
  – Dependent on properties of the examples (or population) that can be pre-determined
• Markovian: Uncertainty-based active sampling strategy – the most commonly used strategy

Page 12: Thesis Proposal

Problem Statement

How can we maximize the long-term ROI of active learning systems for interactive classification problems?

Page 13: Thesis Proposal

Proposed Hypothesis

Jointly managing the cost, exploitation, and exploration factors will lead to increased long-term ROI compared to managing them independently.

Page 14: Thesis Proposal

Proposed Contributions

• A framework to jointly manage cost, exploitation and exploration

• Extensions of Active Learning along the following dimensions
  – Differential utility of a labeled example
  – Dynamic cost of labeling an example
  – Tackling concept drift

Page 15: Thesis Proposal

Proposed Framework

• Choice of Cost model
• Choice of Exploitation model
• Choice of Exploration model
• Utility metric
• Algorithms to optimize the utility metric

Page 16: Thesis Proposal

Choice of Models

Cost Model: Uniform | Variable | Markovian
Exploitation Model: Uniform | Variable | Markovian
Exploration Model: Uniform | Variable | Markovian

Page 17: Thesis Proposal

Utility Metric

• Domain dependent
• May or may not have a simple instantiation in the domain
• Possible instantiations for the Claims Rework domain
  – Return on Investment (Haertel et al., 2008)
    • Corresponds to the business goal of the deployed systems
    • Return: cumulative dollar value of claims adjusted
    • Investment: cumulative time (equivalent dollar amount) for auditing the claims
    • Does not take into account classifier improvement/degradation
  – Amortized Return on Investment
    • Amortized return: calculate the net present value of the returns based on the expected future classifier improvement
    • Return: cumulative dollar value of claims adjusted + net present value of the increased returns due to future classifier improvement
    • Investment: cumulative time (equivalent dollar amount) for auditing the claims
    • Takes into account exploration and exploitation
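The two ROI variants can be written out in a few lines. This is an illustrative sketch only: the discount-rate parameterization and per-period improvement estimate are our assumptions, not quantities defined in the proposal.

```python
def roi(returns_per_claim, audit_costs):
    """Plain ROI: cumulative dollars adjusted / cumulative audit cost."""
    return sum(returns_per_claim) / sum(audit_costs)

def amortized_roi(returns_per_claim, audit_costs, future_gain,
                  discount_rate, periods):
    """Amortized ROI: add the net present value of the extra returns
    expected from future classifier improvement (assumed constant
    per-period gain, discounted at discount_rate)."""
    npv = sum(future_gain / (1 + discount_rate) ** t
              for t in range(1, periods + 1))
    return (sum(returns_per_claim) + npv) / sum(audit_costs)
```

With a zero discount rate the amortized variant reduces to plain ROI plus the undiscounted future gains divided by the same investment.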

Page 18: Thesis Proposal

Algorithm to optimize the utility metric

• Optimization is straightforward if a well-defined utility metric exists for the domain
  – Computational approximations may still be required for practical feasibility
• In cases where a utility metric is not well defined based on the constituent cost/exploration/exploitation models, approaches to explore:
  – Rank-fusion-based approach
    • Each model provides a ranking; the rankings are combined to get a final ranking
  – Relevant approaches from reinforcement learning
    • Exploration-exploitation tradeoff using variance estimates in multi-armed bandits (Szepesvári et al., 2009)
    • Upper Confidence Bounds for Trees (Kocsis and Szepesvári, 2006)
    • Multi-armed bandit with dependent arms (Pandey et al., 2007)
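The bandit approaches cited above build on the standard UCB1 selection rule (Auer et al., 2002): pick the arm with the best mean reward plus an exploration bonus that shrinks as the arm is pulled more often. A minimal sketch (the arm bookkeeping as dicts is our choice):

```python
import math

def ucb1_select(counts, rewards, total_pulls):
    """counts/rewards: per-arm pull counts and cumulative rewards.
    Returns the arm maximizing mean reward + sqrt(2 ln N / n_arm)."""
    best_arm, best_score = None, float("-inf")
    for arm in counts:
        if counts[arm] == 0:
            return arm  # pull every arm at least once
        mean = rewards[arm] / counts[arm]
        bonus = math.sqrt(2 * math.log(total_pulls) / counts[arm])
        if mean + bonus > best_score:
            best_arm, best_score = arm, mean + bonus
    return best_arm
```

In the framework's setting, the "arms" could correspond to candidate selection strategies, with observed utility per labeled instance as the reward.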

Page 19: Thesis Proposal

Interactive Classification Framework – Experimental Setup

Labeled Data (1,…,t-1) → Trained Classifier (1,…,t-1) → Ranked List over Unlabeled Data (t), using the Cost (time of the human expert), Exploration (future classifier performance), and Exploitation (relevancy to the expert) models → Labeled Data (t)

Performance evaluation is done on the set of labeled instances obtained at each iteration.
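The iterative setup can be sketched as a plain loop; every function here (`train_classifier`, `score`, `query_expert`) is a stand-in for components the framework would supply, not code from the proposal.

```python
def run_iterations(labeled, unlabeled_batches, train_classifier,
                   score, query_expert, budget):
    """At each period t: train on labels from 1..t-1, rank the new
    unlabeled pool, and have the expert label the top `budget` items."""
    history = []
    for pool in unlabeled_batches:          # unlabeled data arriving at t
        clf = train_classifier(labeled)     # trained on labeled data (1..t-1)
        ranked = sorted(pool, key=lambda x: score(clf, x), reverse=True)
        newly_labeled = [(x, query_expert(x)) for x in ranked[:budget]]
        labeled = labeled + newly_labeled   # becomes labeled data (t)
        history.append(newly_labeled)       # evaluated per iteration
    return history
```

Performance is then evaluated on each entry of `history`, matching the per-iteration evaluation described above.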

Page 20: Thesis Proposal

Evaluation

Compare various approaches with multiple baselines

• Random
• Pure Exploitation – Exploitation=Variable; Exploration=Uniform; Cost=Uniform
• Pure Exploration – Exploration=Variable/Markovian; Exploitation=Uniform; Cost=Uniform
• Pure Cost-sensitive – Cost=Variable; Exploitation=Uniform; Exploration=Uniform

Page 21: Thesis Proposal

Preliminary results

• Graph with results from framework

Page 22: Thesis Proposal

Generalizing Active Learning for Handling Temporal Drift

• What is temporal drift?
  – Changing data distribution
  – Changing nature of the classification problem
  – Adversarial actions
• Related Work
  – Traditional active learning assumes a static unlabeled pool
  – Stream-based active learning (Chu et al., 2011) assumes no memory to store the instances and makes online decisions to request labels
    • Not completely realistic, as labeling requires human effort and is usually not real-time
  – Learning approaches for data streams with concept drift predominantly use ensembles over different time periods (Kolter and Maloof, 2007)

Page 23: Thesis Proposal

Proposed Setup for Temporal Active Learning

• Periodically changing unlabeled pool, corresponding to the experimental setup for the interactive framework
  – Cumulative streaming pool
  – Recent streaming pool
  – Novel setup
• Three components for handling temporal drift
  – Instance selection strategy
  – Type of model: ensemble or single
  – Instance or model weighting scheme

Page 24: Thesis Proposal

Proposed Instance Selection Strategies

• Model Weight Drift Strategy
• Feature Weight Drift Strategy
• Feature Distribution Drift Strategy

Page 25: Thesis Proposal

Detecting Drift – Change in Models over Time

• Claims rework domain
• 15 models built over 15 time periods
• Similarity between the models based on the cosine measure
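The drift check above amounts to comparing the weight vectors of models trained in different time periods with cosine similarity. A small sketch (the pairwise-matrix framing is ours; the proposal only states the cosine measure):

```python
import math

def cosine_similarity(w1, w2):
    """Cosine of the angle between two model weight vectors."""
    dot = sum(a * b for a, b in zip(w1, w2))
    n1 = math.sqrt(sum(a * a for a in w1))
    n2 = math.sqrt(sum(b * b for b in w2))
    return dot / (n1 * n2)

def drift_matrix(model_weights):
    """Pairwise similarity between models from the 15 time periods;
    falling off-diagonal values indicate drift between periods."""
    return [[cosine_similarity(a, b) for b in model_weights]
            for a in model_weights]
```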

Page 26: Thesis Proposal

Preliminary results

• Evaluation metric: Precision at the 5th percentile
• Represented in the graph as a percentage of the best strategy at a given iteration, to show that the mentioned strategies are not the best strategies at all iterations
• Uncertainty sampling begins to perform poorly at later iterations, and the feature-drift-based strategy starts performing better

Page 27: Thesis Proposal

Proposed Work

• More experiments and analysis on claims rework data from different clients

• More experiments based on synthetic dataset with longer observation sequence to analyze the performance of sampling strategies

• Generation of synthetic data based on Gaussian Mixture models to mimic real data
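One plausible shape for the proposed generator is sampling from a Gaussian mixture whose component means shift each period to mimic drift. This is a hypothetical sketch: the mixture parameters and the linear drift schedule are our assumptions, not the proposal's design.

```python
import random

def sample_gmm(components, n, rng):
    """components: list of (weight, mean, stddev); returns n 1-D samples."""
    weights = [w for w, _, _ in components]
    data = []
    for _ in range(n):
        _, mu, sigma = rng.choices(components, weights=weights, k=1)[0]
        data.append(rng.gauss(mu, sigma))
    return data

def drifting_stream(base, drift_per_period, periods, n, seed=0):
    """Shift each component mean every period to mimic concept drift;
    returns one batch of samples per period."""
    rng = random.Random(seed)
    stream, comps = [], list(base)
    for _ in range(periods):
        stream.append(sample_gmm(comps, n, rng))
        comps = [(w, mu + drift_per_period, s) for (w, mu, s) in comps]
    return stream
```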

Page 28: Thesis Proposal

Cost-Sensitive Exploitation

Cost (time of the human expert)

Exploration (future classifier performance)

Exploitation (relevancy to the expert)

Cost-Sensitive Exploitation

Page 29: Thesis Proposal

More Like This strategy

Labeled Data + Ranked List scored by classifier → Select top m% claims → Cluster → Rank

Online Strategy

Page 30: Thesis Proposal

Online "More-Like-This" Algorithm

Require: a labeled set L and an unlabeled set U

1. Train classifier C on L
2. Label U using C
3. Select the top m% scored unlabeled examples UT
4. Cluster the examples UT ∪ L into k clusters
5. Rank the k clusters using an exploitation metric
6. For each cluster ki in k:
   1. Rank the examples in ki
   2. For each example in ki:
      1. Query the expert for the label of the example
      2. If the precision of cluster ki is < Pmin and the number of labels > Nmin, move to the next cluster
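The control flow above can be sketched in Python. The classifier, clustering routine, cluster ranking, and expert are stand-in callables (assumptions on our part); only the loop structure follows the pseudocode.

```python
def more_like_this(L, U, train, score, cluster, rank_clusters, expert,
                   m_frac, k, p_min, n_min):
    """L: list of (example, label); U: unlabeled examples."""
    clf = train(L)                                        # 1. train C on L
    scored = sorted(U, key=lambda x: score(clf, x), reverse=True)
    top = scored[:max(1, int(m_frac * len(scored)))]      # 2-3. top m%
    clusters = cluster(top + [x for x, _ in L], k)        # 4. cluster UT U L
    labels = []
    for c in rank_clusters(clusters):                     # 5-6. ranked clusters
        hits, n = 0, 0
        for x in sorted(c, key=lambda e: score(clf, e), reverse=True):
            y = expert(x)                                 # query expert for label
            labels.append((x, y))
            hits, n = hits + y, n + 1
            if n > n_min and hits / n < p_min:            # precision gate:
                break                                     # move to next cluster
    return labels
```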

Page 31: Thesis Proposal

Offline Comparison – MLT vs Baseline

• 9% relative improvement over the baseline on the Precision at the 2nd percentile metric

(Chart: Precision@1, Precision@2, and Precision@50 for MoreLikeThis vs. Baseline)

Page 32: Thesis Proposal

Live System Deployment

(Chart: precision and time taken per audit, in seconds, for the baseline batch classifier vs. More-Like-This)

• Number of claims audited
  – Baseline system: 200
  – More-Like-This: 307

• 90% relative improvement over baseline

• 27% reduction in audit time over baseline

~$10 million in savings per year for a typical insurance company

Page 33: Thesis Proposal

Summary: Problem Statement

How to maximize the long-term ROI of active learning systems for interactive classification problems

Page 34: Thesis Proposal

Summary: Thesis Contributions

• Characterization of the interactive classification problem
  – Defining the cost/exploration/exploitation models: Uniform, Variable, Markovian
• Generalization of Active Learning along the following dimensions
  – Differential utility of a labeled example
  – Dynamic cost of labeling an example
  – Tackling concept drift
  – A framework to jointly manage these considerations

Page 35: Thesis Proposal

Summary: Evaluation

• Empirical evaluation of the proposed framework
  – Using an evaluation metric motivated by real business tasks
  – Datasets
    • Real-world dataset: Health Insurance Claims Rework
    • Synthetic dataset
  – Comparison with multiple baselines based on the underlying cost/exploitation/exploration models
• Methodological contribution
  – Novel experimental setup
  – Intend to make the synthetic dataset and its generators public

Page 36: Thesis Proposal

Summary: Proposed Work – Temporal Active Learning

• Creation of synthetic datasets
• Evaluation and analysis of the proposed strategies on synthetic and claims rework datasets

Page 37: Thesis Proposal

Summary: Proposed Work – Framework for Interactive Classification

• Evaluate multiple utility metrics/optimization algorithm for Claims Rework domain

• Augment temporal drift synthetic data for evaluating framework

• Evaluate multiple utility metrics/optimization algorithm for synthetic dataset

Cost Model: Uniform | Variable | Markovian
Exploitation Model: Uniform | Variable | Markovian
Exploration Model: Uniform | Variable | Markovian

Page 38: Thesis Proposal

Thanks

Page 39: Thesis Proposal

• Problem Description
  – High-level factorization of the problem
• Related Work
  – Triangle
• Our proposed approach – framework
  – Broad categorization of the models
• Choice of models
  – Choice of utility metric
  – Choice of optimization
  – Proposed work (various approaches)
• Temporal active learning
  – Some initial results
• Cost-sensitive exploitation
• Summary
  – Problem statement
  – Contributions
  – Evaluation

Page 40: Thesis Proposal

Thesis Contributions

• Problem Statement: How to generalize active learning to incorporate the differential utility of a labeled example (dynamic/variable exploitation), the dynamic cost of labeling an example, and concept drift in a unified framework that makes the deployment of such learning systems practical
• Contributions
  – Characterization of the interactive learning problem
  – Generalization of Active Learning along the following dimensions
    • Differential utility of a labeled example
    • Dynamic cost of labeling an example
    • Tackling concept drift
    • Cost-Sensitive Exploitation
    • A unified framework to solve these considerations jointly
  – First solution: optimizing a joint utility function based on cost, exploration utility, and exploitation utility
  – Second solution: using an Upper Confidence Bound approach with a contextual multi-armed bandit setup to incorporate the different factors
  – Empirical evaluation of the proposed framework
    • Using an evaluation metric motivated by real business tasks
    • Datasets: synthetic dataset; real-world dataset (Health Insurance Claims Rework)
    • Comparison with multiple baselines based on the underlying factors

Page 41: Thesis Proposal

Situating the thesis work with respect to related work

Active Learning

• Cost-sensitive / Proactive Learning
  – Unreliable oracle
  – Oracle variation
• PrActive Learning
  – Differential utility
  – Dynamic cost
  – Concept drift
• Efficiency & Representation
  – Feature-level feedback
  – Feature acquisition
  – Batch active learning

Page 42: Thesis Proposal

Problem Statement

How to generalize active learning to incorporate the differential utility of a labeled example (dynamic/variable exploitation), the dynamic cost of labeling an example, and concept drift in a unified framework that makes the deployment of such learning systems practical