A Framework for Scalable Cost-sensitive Learning Based on Combining Probabilities and Benefits

Wei Fan, Haixun Wang, and Philip S. Yu, IBM T.J. Watson
Salvatore J. Stolfo, Columbia University
Scalability Issues of Data Mining

- Two-fold: the data and the algorithm.
- Dataset:
  - too big to fit into memory.
  - inherently distributed across the network.
  - incremental data available periodically.
Scalability Issues of Data Mining

- Learning algorithm:
  - non-linear complexity in the size of the dataset n.
  - memory-based, due to the random access pattern over records in the dataset.
  - significantly slower if the dataset is not held entirely in memory.
- State of the art:
  - many scalable solutions are algorithm-specific; for decision trees: SPRINT, RainForest, and BOAT.
  - general algorithms, such as meta-learning, are not very scalable and only work for cost-insensitive problems.
- Question: can a method be general and work for both cost-sensitive and cost-insensitive problems?
Cost-sensitive Problems

- Charity donation:
  - Solicit people who are likely to donate a large amount.
  - It costs $0.68 to send a letter.
  - E(x): expected donation amount.
  - Only solicit if E(x) > $0.68; otherwise we lose money.
- Credit card fraud detection:
  - Detect frauds with a high transaction amount.
  - It costs $90 to challenge a potential fraud.
  - E(x): expected fraudulent transaction amount.
  - Only challenge if E(x) > $90; otherwise we lose money.
- Question: how do we estimate E(x) efficiently? (The decision rule itself is sketched below.)
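To make the two decision rules above concrete, here is a minimal sketch; should_act and its arguments are illustrative names, not something from the original slides.

```python
# Minimal sketch of the cost-sensitive decision rules above: act only when
# the expected benefit E(x) exceeds the fixed cost of acting.
def should_act(expected_benefit, action_cost):
    """Solicit / challenge only if the expected benefit covers the cost."""
    return expected_benefit > action_cost

print(should_act(1.50, 0.68))  # charity: expect $1.50, letter costs $0.68 -> True
print(should_act(75.0, 90.0))  # fraud: expect $75, challenge costs $90 -> False
```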
Basic Framework
[Diagram: a large dataset D is partitioned into k subsets D1, D2, ..., Dk; learning algorithms ML1, ML2, ..., MLt are applied to the subsets to generate k models C1, C2, ..., Ck.]
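A hedged sketch of this partition-and-train phase, assuming scikit-learn-style learners with a fit() method; train_ensemble and make_model are hypothetical names.

```python
import numpy as np

def train_ensemble(X, y, k, make_model):
    """Partition the data into k disjoint subsets and train one model per subset."""
    parts = np.array_split(np.random.permutation(len(X)), k)
    models = []
    for idx in parts:
        model = make_model()        # e.g., a fresh decision tree
        model.fit(X[idx], y[idx])   # each model sees only its own partition
        models.append(model)
    return models
```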
Basic Framework
[Diagram: the test set is sent to the k models C1, C2, ..., Ck; each model computes a prediction, yielding P1, P2, ..., Pk; the k predictions are combined into one final prediction P.]
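A matching sketch of the prediction phase under the same assumptions; each trained model scores the test set independently, and the k outputs are combined afterwards (see the later slides).

```python
import numpy as np

def collect_predictions(models, X_test):
    """Returns per-model class probabilities, shape (k, n_test, n_classes)."""
    return np.stack([m.predict_proba(X_test) for m in models])
```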
Cost-sensitive Decision Making
- Assume that b(y', y) records the benefit received by predicting an example of class y to be an instance of class y'.
- The expected benefit received by predicting an example x to be an instance of class y' (regardless of its true label) is E(y'|x) = Σ_y p(y|x) · b(y', y).
- The optimal decision-making policy chooses the label that maximizes the expected benefit, i.e., y* = argmax_{y'} E(y'|x) (sketched in code below).
- When b(y, y) = 1 and b(y', y) = 0 for y' ≠ y, this is a traditional accuracy-based problem.
- Total benefits: the sum of the benefits received over all predictions.
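A sketch of this policy, assuming the class probabilities and the benefit matrix are given as NumPy arrays (array names are illustrative):

```python
import numpy as np

def optimal_labels(proba, benefit):
    """proba[i, y] = p(y | x_i); benefit[yp, y] = b(yp, y).
    Returns the benefit-maximizing label for each example."""
    expected = proba @ benefit.T     # E(y'|x) = sum_y p(y|x) * b(y', y)
    return expected.argmax(axis=1)   # y* = argmax_{y'} E(y'|x)
```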
Charity Donation Example
- It costs $0.68 to send a solicitation.
- Assume that Y(x) is the best estimate of the donation amount of individual x.
- The cost-sensitive decision making will solicit an individual x if and only if p(donate|x) · Y(x) > 0.68.
- For example, if p(donate|x) = 0.1 and Y(x) = $10, the expected donation is $1.00 > $0.68, so we solicit.
- For decision trees, if n is the number of examples in a node and k is the number of examples with class label y, the probability is estimated as p(y|x) = k/n (sketched below).
- More sophisticated methods: smoothing, early stopping, and early stopping plus smoothing.
- For rules, the probability is calculated in the same way as for decision trees.
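A small sketch of the leaf-frequency estimate; the Laplace correction is shown as one common smoothing variant, an assumed choice since the slides do not spell out the exact formula.

```python
# Sketch of the frequency-based estimate p(y|x) = k/n for a tree leaf or rule,
# with Laplace correction as one assumed smoothing variant.
def leaf_probability(k, n, n_classes=2, smooth=False):
    """k: examples of class y in the leaf; n: total examples in the leaf."""
    if smooth:
        return (k + 1) / (n + n_classes)  # Laplace-smoothed estimate
    return k / n                          # raw frequency
```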
- Each model C_i computes an expected benefit E_i(y'|x) for example x over every class label y'.
- The individual expected benefits are combined by averaging: E(y'|x) = (1/k) Σ_i E_i(y'|x).
- We choose the label with the highest combined expected benefit (see the sketch below).
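A hedged sketch tying the framework together, reusing the scikit-learn-style assumptions from the earlier sketches:

```python
import numpy as np

def combine_and_decide(models, X_test, benefit):
    """Average the per-model expected benefits, then pick the best label."""
    per_model = [m.predict_proba(X_test) @ benefit.T  # E_i(y'|x) per model
                 for m in models]
    combined = np.mean(per_model, axis=0)             # (1/k) * sum_i E_i(y'|x)
    return combined.argmax(axis=1)                    # highest combined benefit
```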
[Plot annotations:
1. Decision threshold line.
2. Examples on the left are more profitable than those on the right.
3. "Evening effect": biases towards the big fish.]
Why is accuracy higher?
More sophisticated combining approaches

- Regression: treat the base classifiers' outputs as the independent variables of a regression and the true label as the dependent variable (a sketch follows this list).
- Modified meta-learning: learn a classifier that maps the base classifiers' class label predictions to the true class label. For cost-sensitive learning, the top-level classifier outputs a probability instead of just a label.
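A sketch of the regression combiner under the same assumptions; scikit-learn's LinearRegression stands in as one plausible regressor, not necessarily the one evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_regression_combiner(models, X_val, y_val):
    """Stack base-model probabilities as features; regress on the true labels."""
    features = np.hstack([m.predict_proba(X_val) for m in models])
    return LinearRegression().fit(features, y_val)
```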
Summary

- Evaluated a wide range of combining techniques, including variations of averaging, regression, and meta-learning, for scalable cost-sensitive (and cost-insensitive) learning.
- Averaging, although simple, has the highest accuracy.
- Previously proposed approaches have significantly more overhead and only work well for traditional accuracy-based problems.
- Future work: ensemble pruning and performance estimation.
- Suppose that p(y|x) is the probability that x is an instance of class label y.
- An inductive model will always predict the label with the highest probability, i.e., y* = argmax_y p(y|x), as in the sketch below.
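A one-line sketch of this accuracy-based policy, assuming NumPy-style probability arrays:

```python
import numpy as np

def predict_label(proba):
    """proba[..., y] = p(y|x); predict the most probable label."""
    return np.argmax(proba, axis=-1)  # y* = argmax_y p(y|x)
```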