Lazy Bayesian Rules: A Lazy Semi-Naïve Bayesian Learning Technique Competitive to Boosting Decision Trees Zijian Zheng, Geoffrey I. Webb, Kai Ming Ting Deakin University Victoria Australia Appeared in ICML ‘99


Jan 17, 2016

Transcript
Page 1:

Lazy Bayesian Rules: A Lazy Semi-Naïve Bayesian Learning Technique Competitive to

Boosting Decision Trees

Zijian Zheng, Geoffrey I. Webb, Kai Ming Ting

Deakin University

Victoria Australia

Appeared in ICML ‘99

Page 2:

Paper Overview

• Description of LBR, Adaboost and Bagging

• Experimental Comparison of algorithms

Page 3:

Naïve Bayesian Tree

• Each leaf node of the tree is a naïve Bayes classifier
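The slide gives no implementation details, so here is a minimal sketch of the kind of naïve Bayes classifier an NBTree places at its leaves: a categorical naïve Bayes with Laplace smoothing. The class name and toy data are our own illustration, not from the paper.

```python
from collections import Counter, defaultdict

# Minimal categorical naive Bayes with Laplace (add-one) smoothing --
# the kind of classifier an NBTree places at its leaves.
# Illustrative sketch only; names here are ours, not the paper's.
class NaiveBayes:
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = Counter(y)          # class counts
        self.n = len(y)
        self.counts = defaultdict(int)   # counts[(attr, value, cls)]
        self.values = defaultdict(set)   # distinct values per attribute
        for xi, yi in zip(X, y):
            for a, v in enumerate(xi):
                self.counts[(a, v, yi)] += 1
                self.values[a].add(v)
        return self

    def predict(self, x):
        best, best_p = None, -1.0
        for c in self.classes:
            # P(c) * prod_a P(x_a | c), with add-one smoothing
            p = self.prior[c] / self.n
            for a, v in enumerate(x):
                p *= (self.counts[(a, v, c)] + 1) / (
                    self.prior[c] + len(self.values[a]))
            if p > best_p:
                best, best_p = c, p
        return best

# Toy usage on weather-style categorical data
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
y = ["no", "no", "yes", "yes"]
nb = NaiveBayes().fit(X, y)
print(nb.predict(("rain", "hot")))   # prints yes
```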

Page 4:

Lazy Bayesian Rules

• Builds a special-purpose naïve Bayesian classifier for each example to be classified

• Greedily chooses which attributes to hold constant at the example's values (the rule antecedent) and which to leave free for the naïve Bayes classifier
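The lazy procedure on this slide can be sketched roughly as follows. This is our own simplified reconstruction, not the paper's code: for a given test example, attributes are greedily moved into the rule antecedent (fixed to the test example's values, filtering the training set to matching examples), and a naïve Bayes over the remaining free attributes makes the prediction. The paper decides each move with leave-one-out error estimates plus a significance test; the sketch below uses a plain leave-one-out error count.

```python
from collections import Counter

def nb_predict(X, y, free, x):
    """Laplace-smoothed categorical naive Bayes over the `free` attributes."""
    classes = Counter(y)
    vals = {a: {xi[a] for xi in X} for a in free}
    best, best_p = None, -1.0
    for c, nc in classes.items():
        p = nc / len(y)
        for a in free:
            cnt = sum(1 for xi, yi in zip(X, y) if yi == c and xi[a] == x[a])
            p *= (cnt + 1) / (nc + len(vals[a]))
        if p > best_p:
            best, best_p = c, p
    return best

def nb_errors(X, y, free):
    """Leave-one-out error count of naive Bayes restricted to `free`."""
    errors = 0
    for i in range(len(X)):
        pred = nb_predict(X[:i] + X[i+1:], y[:i] + y[i+1:], free, X[i])
        errors += pred != y[i]
    return errors

def lbr_classify(X, y, x):
    """Lazily grow a rule antecedent for test example x (greedy sketch)."""
    free = set(range(len(x)))            # attributes still used by naive Bayes
    cur_X, cur_y = list(X), list(y)
    cur_err = nb_errors(cur_X, cur_y, free)
    improved = True
    while improved and len(free) > 1:
        improved = False
        for a in sorted(free):
            # Candidate move: fix attribute a to the test example's value
            sub = [(xi, yi) for xi, yi in zip(cur_X, cur_y) if xi[a] == x[a]]
            if not sub:
                continue
            sX, sy = [s[0] for s in sub], [s[1] for s in sub]
            err = nb_errors(sX, sy, free - {a})
            if err < cur_err:            # keep the move only if it helps
                cur_X, cur_y, free, cur_err = sX, sy, free - {a}, err
                improved = True
                break
    return nb_predict(cur_X, cur_y, free, x)

# Toy usage: class is fully determined by the first attribute
X = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
y = ["p", "p", "q", "q"]
print(lbr_classify(X, y, ("a", "x")))   # prints p
```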

Page 5:
Page 6:

Boosting / Bagging

• Adaboost

– train a classifier on the training examples

– evaluate its performance

– re-train a new classifier on re-weighted examples (misclassified examples receive higher weight)

– repeat

– when classifying, vote according to each classifier's weight (derived from its accuracy)

• Bagging

– train many classifiers, each on a bootstrap sample drawn with replacement from the training set

– when classifying, vote with equal weights
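The two procedures above can be sketched as follows. This is our own toy illustration using one-dimensional threshold "stumps" as the base learner (the paper boosts and bags full decision trees); all function names and the toy data are ours.

```python
import math
import random

def stump_fit(X, y, w):
    """Best single-feature threshold split under example weights w."""
    best = None
    for a in range(len(X[0])):
        for t in sorted({x[a] for x in X}):
            for sign in (1, -1):
                pred = [sign if x[a] <= t else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, a, t, sign)
    return best

def adaboost(X, y, rounds=10):
    """AdaBoost: re-weight examples each round, weight classifiers by accuracy."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []                                 # (alpha, a, t, sign) tuples
    for _ in range(rounds):
        err, a, t, sign = stump_fit(X, y, w)
        err = max(err, 1e-10)
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)   # classifier's vote weight
        ensemble.append((alpha, a, t, sign))
        # Up-weight misclassified examples, down-weight correct ones, renormalise
        for i, (xi, yi) in enumerate(zip(X, y)):
            p = sign if xi[a] <= t else -sign
            w[i] *= math.exp(-alpha * yi * p)
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def ada_predict(ensemble, x):
    vote = sum(alpha * (sign if x[a] <= t else -sign)
               for alpha, a, t, sign in ensemble)
    return 1 if vote >= 0 else -1

def bagging(X, y, rounds=10, seed=0):
    """Bagging: one stump per bootstrap sample, all votes equal."""
    rng = random.Random(seed)
    n = len(X)
    models = []
    for _ in range(rounds):
        idx = [rng.randrange(n) for _ in range(n)]    # sample with replacement
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        models.append(stump_fit(Xb, yb, [1.0 / n] * n)[1:])   # (a, t, sign)
    return models

def bag_predict(models, x):
    vote = sum(sign if x[a] <= t else -sign for a, t, sign in models)
    return 1 if vote >= 0 else -1

# Toy 1-D data: positive inside the interval [2, 5]; a single stump cannot
# represent this, but three boosted stumps can.
X = [(1,), (2,), (3,), (4,), (5,), (6,), (7,)]
y = [-1, 1, 1, 1, 1, -1, -1]
ens = adaboost(X, y, rounds=3)
print(ada_predict(ens, (4,)))                     # prints 1
models = bagging(X, y, rounds=5, seed=0)
```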

Page 7:
Page 8:
Page 9:
Page 10: