Data Mining, Decision Trees and Earthquake Prediction
Professor Sin-Min Lee
What is Data Mining?
• The process of automatically finding relationships and patterns in, and extracting meaning from, enormous amounts of data.
• Also called “knowledge discovery”
Objective
• Extracting hidden, or not easily recognizable, knowledge from large amounts of data … know the past
• Predicting what is likely to happen if a particular type of event occurs … Predict the future
Application
• Marketing example
  – A company sends direct mail to randomly chosen people
  – A database of recipients' attribute data (e.g. gender, marital status, # of children, etc.) is available
  – How can this company increase the response rate of its direct mail?
Application (Cont'd)
• Figure out the patterns and relationships among attributes that the people who responded have in common
• Helps decide what kind of group of people the company should target
• Data mining helps analyze large amounts of data and make decisions … but how exactly does it work?
• One method that is commonly used is the decision tree
Decision Tree
• One of many methods to perform data mining - particularly classification
• Divides the dataset into multiple groups by evaluating attributes
• A decision tree can be explained as a series of nested if-then-else statements (a sketch follows below).
• The Decision Tree is one of the most popular classification algorithms in current use in Data Mining
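For instance, a minimal sketch in Python of a tree written as nested if-then-else statements; the attributes and thresholds are made up for the direct-mail example above and are not part of the original slides:

# Hypothetical decision tree for the direct-mail example,
# expressed as nested if-then-else statements.
def will_respond(recipient):
    if recipient["marital_status"] == "married":
        if recipient["num_children"] > 0:
            return True       # leaf: likely to respond
        else:
            return False      # leaf: unlikely to respond
    else:
        if recipient["gender"] == "female":
            return True
        else:
            return False

print(will_respond({"marital_status": "married", "num_children": 2, "gender": "male"}))  # True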
Decision Tree (Cont'd)
• Each non-leaf node has an associated predicate that tests an attribute of the data
• A leaf node represents a class, or category
• To classify a record, start from the root node and traverse down the tree by testing predicates and taking branches (a traversal sketch follows below)
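A minimal traversal sketch, assuming nodes are represented as small Python dictionaries (this representation is my own illustration, not the slides' notation):

def classify(node, record):
    # A leaf node carries the class label directly.
    if "label" in node:
        return node["label"]
    # A non-leaf node carries a predicate; test it and follow the branch.
    branch = "yes" if node["predicate"](record) else "no"
    return classify(node[branch], record)

# Hypothetical two-level tree: married recipients with children respond.
tree = {
    "predicate": lambda r: r["marital_status"] == "married",
    "yes": {
        "predicate": lambda r: r["num_children"] > 0,
        "yes": {"label": "responds"},
        "no": {"label": "does not respond"},
    },
    "no": {"label": "does not respond"},
}

print(classify(tree, {"marital_status": "married", "num_children": 2}))  # responds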
Example of Decision Tree
What is a Decision Tree?
• 20-Questions example
  – Progressive yes-no decisions until an answer is obtained
  – The 20-Questions machine at Linens & Things
• Key to the Phylum – a classification tool
  – Carl Linnaeus, Swedish botanist, 1730s
  – Classifies known species into Kingdoms, Phyla, Classes, Orders, Families, Genera, and Species
What is a Decision Tree?
[Figure: example decision tree classifying animals as Mammal or Non-Mammal. The root node tests Body Temperature (Warm-Blooded / Cold-Blooded); internal nodes test Hibernates and Four-Legged with Yes/No branches; leaf nodes are labeled Mammal or Non-Mammal.]
What are Decision Trees Used For?
How to Use a Decision Tree
[Figure: the tax-cheat decision tree. Root node Refund (Yes → NO; No → MarSt); MarSt (Married → NO; Single, Divorced → TaxInc); TaxInc (< 80K → NO; >= 80K → YES).]
Start from the root of the tree.
[Test data: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ? Tracing Refund = No and MarSt = Married reaches the leaf NO.
A second record: Refund = No, Marital Status = Single, Taxable Income = 80K, Cheat = ? Tracing Refund = No, MarSt = Single, and TaxInc >= 80K reaches the leaf YES.]
Deduction
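A sketch of that walk in Python; the tree structure follows the figure above, while the function and field names are my own:

def classify_cheat(record):
    # Root: Refund
    if record["refund"] == "Yes":
        return "NO"
    # Internal node: Marital Status
    if record["marital_status"] == "Married":
        return "NO"
    # Internal node: Taxable Income
    return "NO" if record["taxable_income"] < 80 else "YES"

# The two test records from the slides (income in thousands).
print(classify_cheat({"refund": "No", "marital_status": "Married", "taxable_income": 80}))  # NO
print(classify_cheat({"refund": "No", "marital_status": "Single", "taxable_income": 80}))   # YES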
How to Make a Decision Tree
Training Data (Refund – categorical, Marital Status – categorical, Taxable Income – continuous, Cheat – class label):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes
[Figure: decision tree model induced from the training data. Splitting attributes: Refund (Yes → NO; No → MarSt), MarSt (Married → NO; Single, Divorced → TaxInc), TaxInc (< 80K → NO; >= 80K → YES).]
Model: Decision Tree
Induction: the model is induced (learned) from the training data.
Hunt's Algorithm
• Let Dt be the set of training records that reach a node t
• General procedure (a recursive code sketch follows below):
  – If Dt contains records that all belong to the same class yt, then t is a leaf node labeled yt
  – If Dt is an empty set, then t is a leaf node labeled with the default class yd
  – If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset
[The slide shows the training data table above; Dt denotes the subset of records reaching node t, whose treatment (?) is still to be decided.]
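A minimal recursive sketch of Hunt's procedure in Python; the data structures and the simplistic attribute choice are illustrative assumptions, since the slides do not specify an implementation:

from collections import Counter

def hunt(records, attributes, default="No"):
    """Grow a tree from (attribute-dict, label) pairs following Hunt's three cases."""
    # Case 2: empty set -> leaf labeled with the default class.
    if not records:
        return {"label": default}
    labels = [label for _, label in records]
    # Case 1: all records belong to the same class -> leaf labeled with that class.
    if len(set(labels)) == 1:
        return {"label": labels[0]}
    # No attributes left to test -> fall back to the majority class.
    if not attributes:
        return {"label": Counter(labels).most_common(1)[0][0]}
    # Case 3: split on an attribute and recurse on each subset.
    attr = attributes[0]                      # simplistic choice; real splits use Gini or information gain
    majority = Counter(labels).most_common(1)[0][0]
    node = {"attribute": attr, "branches": {}}
    for value in {rec[attr] for rec, _ in records}:
        subset = [(rec, label) for rec, label in records if rec[attr] == value]
        node["branches"][value] = hunt(subset, attributes[1:], default=majority)
    return node

# Tiny example using two categorical attributes from the training data above.
data = [({"Refund": "Yes", "MarSt": "Single"}, "No"),
        ({"Refund": "No",  "MarSt": "Married"}, "No"),
        ({"Refund": "No",  "MarSt": "Single"}, "Yes")]
print(hunt(data, ["Refund", "MarSt"]))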
Hunt's Algorithm
[Figure: Hunt's algorithm applied step by step to the training data above:
1. Start with a single leaf labeled Don't Cheat.
2. Split on Refund: Yes → Don't Cheat; No → Don't Cheat.
3. Under Refund = No, split on Marital Status: Married → Don't Cheat; Single, Divorced → Cheat.
4. Under Single, Divorced, split on Taxable Income: < 80K → Don't Cheat; >= 80K → Cheat.]
Measure of Purity: Gini
• Gini index for a given node t:

  GINI(t) = 1 - Σ_j [p(j | t)]^2

  (NOTE: p(j | t) is the relative frequency of class j at node t.)
  – Maximum (1 - 1/n_c, where n_c is the number of classes) when records are equally distributed among all classes, implying the least interesting information
  – Minimum (0.0) when all records belong to one class, implying the most interesting information
Examples:
  C1 = 0, C2 = 6  →  Gini = 0.000
  C1 = 2, C2 = 4  →  Gini = 0.444
  C1 = 3, C2 = 3  →  Gini = 0.500
  C1 = 1, C2 = 5  →  Gini = 0.278
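A quick sketch that reproduces these numbers (plain Python, written for this note rather than taken from the slides):

def gini(counts):
    # GINI(t) = 1 - sum_j p(j|t)^2, where p(j|t) is the class-j fraction at node t.
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

for counts in [(0, 6), (2, 4), (3, 3), (1, 5)]:
    print(counts, round(gini(counts), 3))   # 0.0, 0.444, 0.5, 0.278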
Advantages of Decision Trees
• Simple to understand and interpret
• Require little data preparation
• Able to handle nominal and categorical data
• Perform well on large data sets in a short time
• The conditions behind a classification are easily explained by Boolean logic
Advantages of Decision Trees
• Easy to visualize the process of classification
  – Can easily tell why a record is classified into a particular category: just trace the path to the leaf and it explains the reason
• Simple, fast processing
  – Once the tree is built, classifying a record is just a traversal down the tree
Decision Tree is for…
• Classifying datasets in which
  – The predicates return discrete values
  – There is no attribute for which all the data share the same value
CMT catalog: Shallow earthquakes, 1976-2005
Gordon & Stein, 1992
INDIAN PLATE MOVES NORTH, COLLIDING WITH EURASIA
COMPLEX PLATE BOUNDARY ZONE IN SOUTHEAST ASIA
Northward motion of India deforms all of the region
Many small plates (microplates) and blocks
Molnar & Tapponier, 1977
India subducts beneath the Burma microplate at about 50 mm/yr
Earthquakes occur at the plate interface along the Sumatra arc (Sunda trench)
These are spectacular & destructive results of many years of accumulated motion
NOAA
IN THE DEEP OCEAN a tsunami has long wavelength, travels fast, and has small amplitude - it doesn't affect ships
AS IT APPROACHES SHORE, it slows. Since energy is conserved, the amplitude builds up - very damaging
Because seismic waves travel much faster (km/s) than tsunamis, rapid analysis of seismograms can identify earthquakes likely to cause major tsunamis and predict when waves will arrive
TSUNAMI WARNING
Deep ocean buoys can measure wave heights, verify tsunami and reduce false alarms
HOWEVER, EARTHQUAKES ARE HARD TO PREDICT: recurrence is highly variable
[Figure: recurrence of M > 7 earthquakes - mean 132 yr, 105 yr; estimated probability in 30 yrs: 7-51%]
Sieh et al., 1989
Extend earthquake history with geologic records - paleoseismology
EARTHQUAKE RECURRENCE AT SUBDUCTION ZONES IS COMPLICATED
In many subduction zones, thrust earthquakes have patterns in space and time. Large earthquakes occurred in the Nankai trough area of Japan approximately every 125 years since 1498 with similar fault areas
In some cases entire region seems to have slipped at once; in others slip was divided into several events over a few years.
Repeatability suggests that a segment that has not slipped for some time is a gap due for an earthquake, but it’s hard to use this concept well because of variability
GAP? NOTHING YET (Ando, 1975)
1985 MEXICO EARTHQUAKE
• SEPTEMBER 19, 1985
• M8.1
• A SUBDUCTION ZONE QUAKE
• ALTHOUGH LARGER THAN USUAL, THE EARTHQUAKE WAS NOT A "SURPRISE"
• A GOOD, MODERN BUILDING CODE HAD BEEN ADOPTED AND IMPLEMENTED
1985 MEXICO EARTHQUAKE
• EPICENTER LOCATED 240 KM FROM MEXICO CITY
• 400 BUILDINGS COLLAPSED IN OLD LAKE BED ZONE OF MEXICO CITY
• SOIL-STRUCTURE RESONANCE IN OLD LAKE BED ZONE WAS A MAJOR FACTOR
1985 MEXICO EARTHQUAKE: ESSENTIAL STRUCTURES - SCHOOLS
1985 MEXICO EARTHQUAKE: STEEL FRAME BUILDING
1985 MEXICO EARTHQUAKE: POUNDING
1985 MEXICO EARTHQUAKE: NUEVA LEON APARTMENT BUILDINGS
1985 MEXICO EARTHQUAKE: SEARCH AND RESCUE
• Definition
• Characteristics
• Project: California Earthquake Prediction
Characteristics (cont.)
• 2. Locality: information transferred by a neuron is limited to its nearby neurons.
• CAEP: short-term earthquake prediction is highly influenced by the local geologic features.
Characteristics (cont.)
• 3. Weighted sum and activation function with nonlinearity: each input signal is weighted at the synaptic connection by a connection weight (a minimal sketch follows below).
• CAEP: nearby locations are weighted, each passed through an activation function.
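A minimal sketch of a single neuron's weighted sum and nonlinear activation; the sigmoid choice and the numbers are my own illustration:

import math

def neuron(inputs, weights, bias=0.0):
    # Weighted sum of the inputs at the "synaptic" connections...
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...passed through a nonlinear activation function (sigmoid here).
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical example: three nearby inputs with their connection weights.
print(neuron([0.2, 0.7, 0.1], [0.5, -1.2, 0.3]))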
Characteristics (cont.)
• 4. Plasticity: connection weights change according to the information fed to the neuron and its internal state. This plasticity of the connection weights leads to learning and self-organization, and it provides adaptability to a continuously varying environment.
• CAEP: calculate the stress at the point of interest according to the seismic wave history in the surrounding area.
Characteristics (cont.)
• 5. Generalization: a neural network constructs its own view of the world by inferring an optimal action on the basis of previously learned events, by interpolation and extrapolation.
• CAEP: form a view of an area from past experience by pattern representation and prediction.
Basic Function of CSEP
• Neuron: list of locations along San Andreas Fault, and two of the associated faults—Hayward and Calaveras.
Basic Function of CSEP (cont.)
• Neuron’s parameters: magnitude, date, latitude, longitude, depth, location, ground water, observation, etc.
Learning
• Learning is essential for unknown environments
  – i.e., when the designer lacks omniscience
• Learning is useful as a system construction method
  – i.e., expose the agent to reality rather than trying to write it down
• Learning modifies the agent's decision mechanisms to improve performance
Learning agents
Learning element
• Design of a learning element is affected by
  – Which components of the performance element are to be learned
  – What feedback is available to learn these components
  – What representation is used for the components
• Type of feedback:
  – Supervised learning: correct answers given for each example
  – Unsupervised learning: correct answers not given
  – Reinforcement learning: occasional rewards
Inductive learning
• Simplest form: learn a function from examples
  – f is the target function
  – An example is a pair (x, f(x))
  – Problem: find a hypothesis h such that h ≈ f, given a training set of examples
• (This is a highly simplified model of real learning:
  – it ignores prior knowledge
  – it assumes the examples are given)
Learning decision trees
Problem: decide whether to wait for a table at a restaurant, based on the following attributes:
1. Alternate: is there an alternative restaurant nearby?
2. Bar: is there a comfortable bar area to wait in?
3. Fri/Sat: is today Friday or Saturday?
4. Hungry: are we hungry?
5. Patrons: number of people in the restaurant (None, Some, Full)
6. Price: price range ($, $$, $$$)
7. Raining: is it raining outside?
8. Reservation: have we made a reservation?
9. Type: kind of restaurant (French, Italian, Thai, Burger)
10. WaitEstimate: estimated waiting time (0-10, 10-30, 30-60, >60)
Attribute-based representations
• Examples described by attribute values (Boolean, discrete, continuous)
• E.g., situations where I will/won't wait for a table:
• Classification of examples is positive (T) or negative (F)
Decision trees
• One possible representation for hypotheses
• E.g., here is the "true" tree for deciding whether to wait:
Expressiveness
• Decision trees can express any function of the input attributes.
• E.g., for Boolean functions, truth table row → path to leaf:
• Trivially, there is a consistent decision tree for any training set with one path to leaf for each example (unless f nondeterministic in x) but it probably won't generalize to new examples
• Prefer to find more compact decision trees
Hypothesis spaces
How many distinct decision trees are there with n Boolean attributes?
= number of Boolean functions
= number of distinct truth tables with 2^n rows = 2^(2^n)
• E.g., with 6 Boolean attributes, there are 2^(2^6) = 18,446,744,073,709,551,616 trees
Hypothesis spaces
How many purely conjunctive hypotheses are there (e.g., Hungry ∧ ¬Rain)?
• Each attribute can be in (positive), in (negative), or out
  ⇒ 3^n distinct conjunctive hypotheses
• A more expressive hypothesis space
  – increases the chance that the target function can be expressed
  – increases the number of hypotheses consistent with the training set
  ⇒ may get worse predictions
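A quick sanity check of these counts in Python; the numbers follow directly from the formulas above:

n = 6
num_boolean_functions = 2 ** (2 ** n)   # distinct truth tables over n Boolean attributes
num_conjunctive = 3 ** n                # each attribute: positive, negative, or absent
print(num_boolean_functions)            # 18446744073709551616
print(num_conjunctive)                  # 729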
Decision tree learning
• Aim: find a small tree consistent with the training examples
• Idea: (recursively) choose the "most significant" attribute as the root of the (sub)tree
Choosing an attribute
• Idea: a good attribute splits the examples into subsets that are (ideally) "all positive" or "all negative"
• Patrons? is a better choice
Using information theory
• To implement Choose-Attribute in the DTL algorithm
• Information Content (Entropy):
  I(P(v1), …, P(vn)) = Σ_{i=1..n} -P(vi) log2 P(vi)
• For a training set containing p positive examples and n negative examples:
  I(p/(p+n), n/(p+n)) = -(p/(p+n)) log2 (p/(p+n)) - (n/(p+n)) log2 (n/(p+n))
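A small sketch of this entropy computation in Python (not from the slides):

import math

def entropy(p, n):
    """I(p/(p+n), n/(p+n)) for p positive and n negative examples."""
    total = p + n
    result = 0.0
    for count in (p, n):
        if count:                       # 0 * log2(0) is taken as 0
            frac = count / total
            result -= frac * math.log2(frac)
    return result

print(entropy(6, 6))   # 1.0 bit for the 12-example restaurant training set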
Information gain
• A chosen attribute A divides the training set E into subsets E1, … , Ev according to their values for A, where A has v distinct values.
• Information Gain (IG) or reduction in entropy from the attribute test:
• Choose the attribute with the largest IG
  remainder(A) = Σ_{i=1..v} ((p_i + n_i)/(p + n)) · I(p_i/(p_i + n_i), n_i/(p_i + n_i))

  IG(A) = I(p/(p+n), n/(p+n)) - remainder(A)
Information gain
For the training set, p = n = 6, I(6/12, 6/12) = 1 bit
Consider the attributes Patrons and Type (and others too):
Patrons has the highest IG of all attributes and so is chosen by the DTL algorithm as the root
  IG(Patrons) = 1 - [ (2/12) I(0, 1) + (4/12) I(1, 0) + (6/12) I(2/6, 4/6) ] = 0.541 bits

  IG(Type) = 1 - [ (2/12) I(1/2, 1/2) + (2/12) I(1/2, 1/2) + (4/12) I(2/4, 2/4) + (4/12) I(2/4, 2/4) ] = 0 bits
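The slide's two values can be checked with a short, self-contained script; the per-value counts are read off the formulas above:

import math

def I(p, n):
    total = p + n
    return sum(-c / total * math.log2(c / total) for c in (p, n) if c)

def IG(subsets, p=6, n=6):
    return I(p, n) - sum((pi + ni) / (p + n) * I(pi, ni) for pi, ni in subsets)

print(round(IG([(0, 2), (4, 0), (2, 4)]), 3))           # Patrons: 0.541 bits
print(round(IG([(1, 1), (1, 1), (2, 2), (2, 2)]), 3))   # Type: 0.0 bits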
Example contd.
• Decision tree learned from the 12 examples:
• Substantially simpler than the "true" tree - a more complex hypothesis isn't justified by the small amount of data