Computational Intelligence for Data Mining

Włodzisław Duch
Department of Informatics
Nicholas Copernicus University
Torun, Poland

With help from R. Adamczak, K. Grąbczewski, K. Grudziński, N. Jankowski, A. Naud

http://www.phys.uni.torun.pl/kmk

WCCI 2002, Honolulu, HI
Group members
Plan

What is this tutorial about?
• How to discover knowledge in data;
• how to create comprehensible models of data;
• how to evaluate new data.
1. AI, CI & Data Mining
2. Forms of useful knowledge
3. GhostMiner philosophy
4. Exploration & Visualization
5. Rule-based data analysis
6. Neurofuzzy models
7. Neural models
8. Similarity-based models
9. Committees of models
AI, CI & DM
Artificial Intelligence: symbolic models of knowledge.
• Higher-level cognition: reasoning, problem solving, etc.
Chemical analysis of wine from grapes grown in the same region in Italy, but derived from three different cultivars.

Task: recognize the source of a wine sample.

13 quantities measured, continuous features:
• malic acid content
• alkalinity of ash
• total phenols content
• nonflavanoid phenols content
• color intensity
• hue
• proline.
Exploration and visualization
General info about the data
Exploration: data
Inspect the data
Exploration: data statistics

Distribution of feature values.
Proline has very large values; the data should be standardized before further processing.
Exploration: data standardized

Standardized data: unit standard deviation; about 2/3 of all data should fall within [mean − std, mean + std].
Other options: normalize to fit in [-1,+1], or normalize rejecting some extreme values.
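Both options are a few lines in any language. A minimal Python sketch (numpy assumed; the function names are illustrative, not GhostMiner's API):

```python
import numpy as np

def standardize(X):
    """Shift each feature to zero mean and scale to unit standard deviation."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def normalize_range(X, lo=-1.0, hi=1.0):
    """Rescale each feature linearly to the interval [lo, hi]."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    return lo + (hi - lo) * (X - mn) / (mx - mn)

# Example: a feature like proline dwarfs the others before scaling
X = np.array([[680.0, 2.1], [1450.0, 3.2], [1060.0, 1.7]])
print(standardize(X))       # unit std; ~2/3 of values land in [-1, +1]
print(normalize_range(X))   # fits exactly in [-1, +1]
```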
Decision trees

Simplest things first: use a decision tree to find logical rules.
Test a single attribute, find a good point to split the data, separating vectors from different classes.

DT advantages: fast, simple, easy to understand, easy to program, many good algorithms.
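As an illustration of this step, a sketch using scikit-learn's standard decision tree on the Wine data (an assumption: the slides use GhostMiner's SSV tree, not sklearn; `export_text` prints the learned tests as nested if-then conditions):

```python
from sklearn.datasets import load_wine
from sklearn.tree import DecisionTreeClassifier, export_text

wine = load_wine()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(wine.data, wine.target)

# Print the tree as nested if-then conditions, one attribute test per node
print(export_text(tree, feature_names=list(wine.feature_names)))
```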
Decision borders

Univariate trees: test the value of a single attribute, x < a.
Multivariate trees: test on combinations of attributes.
Result: feature space is divided into hyperrectangular areas.
SSV decision tree

Separability Split Value tree: based on the separability criterion.
Define left and right sides of the splits:

\[ LS(s,f,D) = \{ x \in D : f(x) < s \}, \quad f \text{ continuous}, \]
\[ LS(s,f,D) = \{ x \in D : f(x) \in s \}, \quad f \text{ discrete}, \]
\[ RS(s,f,D) = D \setminus LS(s,f,D). \]
SSV criterion: separate as many pairs of vectors from different classes as possible; minimize the number of pairs of vectors from the same class that get separated.
\[
SSV(s,f,D) = 2 \sum_{c \in C} \big| LS(s,f,D) \cap D_c \big| \cdot \big| RS(s,f,D) \cap (D \setminus D_c) \big| \;-\; \sum_{c \in C} \min\!\big( |LS(s,f,D) \cap D_c|,\; |RS(s,f,D) \cap D_c| \big)
\]
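A direct transcription of this criterion for one continuous feature, as a Python sketch (numpy assumed; function names are illustrative):

```python
import numpy as np

def ssv(s, f_values, y):
    """Separability Split Value for threshold s on one continuous feature.

    Rewards pairs of vectors from different classes separated by the split,
    penalizes the minimal number of same-class vectors split apart."""
    left = f_values < s
    right = ~left
    separated_pairs = 0
    same_class_penalty = 0
    for c in np.unique(y):
        in_c = (y == c)
        # |LS ∩ D_c| * |RS ∩ (D \ D_c)|
        separated_pairs += np.sum(left & in_c) * np.sum(right & ~in_c)
        same_class_penalty += min(np.sum(left & in_c), np.sum(right & in_c))
    return 2 * separated_pairs - same_class_penalty

def best_split(f_values, y):
    """Scan midpoints between sorted unique values, keep the best SSV score."""
    vals = np.unique(f_values)
    cands = (vals[:-1] + vals[1:]) / 2
    scores = [ssv(s, f_values, y) for s in cands]
    return cands[int(np.argmax(scores))]
```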
SSV – complex tree

Trees may always learn to achieve 100% accuracy.
Very few vectors are left in the leaves!
SSV – simplest tree

Pruning finds the nodes that should be removed to increase generalization – accuracy on unseen data.
Trees with 7 nodes left: 15 errors/178 vectors.
SSV – logical rules

Trees may be converted to logical rules. The simplest tree leads to 4 logical rules:
1. if proline > 719 and flavanoids > 2.3 then class 1
2. if proline < 719 and OD280 > 2.115 then class 2
3. if proline > 719 and flavanoids < 2.3 then class 3
4. if proline < 719 and OD280 < 2.115 then class 3
How accurate are such rules? Not 15/178 errors, i.e. 91.5% accuracy, on the training data! Run 10-fold CV and average the results: 85±10%? Run the whole CV 10 times!
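A sketch of such repeated, stratified 10-fold crossvalidation in Python (scikit-learn assumed, with its standard decision tree standing in for the SSV tree):

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
# 10-fold CV repeated 10 times: an honest estimate of generalization
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(max_depth=2), X, y, cv=cv)
print(f"{scores.mean():.1%} ± {scores.std():.1%}")
```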
SSV – optimal trees/rules

Optimal: estimate how well rules will generalize. Use stratified crossvalidation for training; use beam search for better results.
1. if OD280/OD315 > 2.505 and proline > 726.5 then class 1
2. if OD280/OD315 < 2.505 and hue > 0.875 and malic-acid < 2.82 then class 2
3. if OD280/OD315 > 2.505 and proline < 726.5 then class 2
4. if OD280/OD315 < 2.505 and hue > 0.875 and malic-acid > 2.82 then class 3
5. if OD280/OD315 < 2.505 and hue < 0.875 then class 3
Note 6/178 errors, or 96.6% accuracy! Run 10-fold CV: results are 90.4±6.1%? Run the whole CV 10 times!
Logical rules

Crisp logic rules: for continuous x use linguistic variables (predicate functions).
s_k(x) ≡ True[X_k ≤ x ≤ X'_k], for example:
small(x) = True{x | x < 1}
medium(x) = True{x | x ∈ [1, 2]}
large(x) = True{x | x > 2}
Linguistic variables are used in crisp (propositional, Boolean) logic rules:
IF small-height(X) AND has-hat(X) AND has-beard(X) THEN (X is a Brownie) ELSE IF ... ELSE ...
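These predicate functions translate directly into code; a toy Python sketch (names are illustrative):

```python
def small(x):  return x < 1          # True iff x < 1
def medium(x): return 1 <= x <= 2    # True iff x in [1, 2]
def large(x):  return x > 2          # True iff x > 2

def classify(height, has_hat, has_beard):
    """A crisp propositional rule built from linguistic variables."""
    if small(height) and has_hat and has_beard:
        return "Brownie"
    return "something else"

print(classify(0.5, True, True))   # -> Brownie
```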
• Rules may expose limitations of black box solutions.
• Only relevant features are used in rules.
• Rules may sometimes be more accurate than NN and other CI methods.
• Overfitting is easy to control; rules usually have a small number of parameters.
• Rules forever!?
A logical rule about logical rules is:
IF the number of rules is relatively small
AND the accuracy is sufficiently high
THEN rules may be an optimal choice.
Logical rules - limitations

Logical rules are preferred but ...
• Only one class is predicted, p(Ci|X,M) = 0 or 1; such a black-and-white picture may be inappropriate in many applications.
• A discontinuous cost function allows only non-gradient optimization.
• Sets of rules are unstable: a small change in the dataset leads to a large change in the structure of complex rule sets.
• Reliable crisp rules may reject some cases as unclassified.
• Interpretation of crisp rules may be misleading.
• Fuzzy rules are not so comprehensible.
How to use logical rules?

Data has been measured with unknown error. Assume a Gaussian distribution:
\[ x \to G_x = G(y; x, s_x) \]
x – fuzzy number with Gaussian membership function.
A set of logical rules R is used for fuzzy input vectors: Monte Carlo simulations for an arbitrary system => p(Ci|X).
Analytical evaluation of p(C|X) is based on the cumulant (cumulative distribution) function:
\[
\int_{-\infty}^{a} G(y; x, s_x)\, dy = \frac{1}{2}\left[ 1 + \operatorname{erf}\!\left( \frac{a - x}{s_x \sqrt{2}} \right) \right] \approx \sigma(\beta(a - x))
\]

For β = 2.4/(√2 · s_x) the error function is identical to the logistic function to within 0.02.
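The claimed bound is easy to check numerically; a sketch (numpy and scipy assumed):

```python
import numpy as np
from scipy.special import erf

def gauss_cdf(a, x, s):
    """Probability that G(y; x, s) falls below the threshold a."""
    return 0.5 * (1 + erf((a - x) / (s * np.sqrt(2))))

def logistic(t):
    return 1.0 / (1.0 + np.exp(-t))

s = 1.0
beta = 2.4 / (np.sqrt(2) * s)
a = np.linspace(-6, 6, 1001)
diff = np.abs(gauss_cdf(a, 0.0, s) - logistic(beta * a))
print(diff.max())  # below 0.02, as claimed on the slide
```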
Rules - choices

Simplicity vs. accuracy. Confidence vs. rejection rate.
p(true | predicted), with predicted class + or −, or rejected (r):

\[
\begin{pmatrix} p_{++} & p_{+-} & p_{+r} \\ p_{-+} & p_{--} & p_{-r} \end{pmatrix}
\]

Accuracy (overall): A(M) = p_{++} + p_{--}
Error rate: L(M) = p_{+-} + p_{-+}
Rejection rate: R(M) = p_{+r} + p_{-r} = 1 − L(M) − A(M)
Sensitivity: S_+(M) = p_{+|+} = p_{++}/p_+
Specificity: S_-(M) = p_{-|-} = p_{--}/p_-

p_{++} is a hit; p_{-+} a false alarm; p_{+-} a miss.
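A sketch computing these quantities from raw counts, with the reject option included (names are illustrative):

```python
def rates(n_pp, n_pm, n_pr, n_mp, n_mm, n_mr):
    """Joint probabilities p_{true,predicted} from counts; 'r' = rejected.

    Returns overall accuracy A, error rate L, rejection rate R,
    sensitivity S+ and specificity S-."""
    n = n_pp + n_pm + n_pr + n_mp + n_mm + n_mr
    A = (n_pp + n_mm) / n                   # A(M) = p++ + p--
    L = (n_pm + n_mp) / n                   # L(M) = p+- + p-+
    R = (n_pr + n_mr) / n                   # R(M) = 1 - L - A
    S_plus = n_pp / (n_pp + n_pm + n_pr)    # p++ / p+
    S_minus = n_mm / (n_mp + n_mm + n_mr)   # p-- / p-
    return A, L, R, S_plus, S_minus

print(rates(80, 10, 10, 5, 90, 5))
```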
Rules – error functions

The overall accuracy is equal to a combination of sensitivity and specificity weighted by the a priori probabilities:
A(M) = p_+ S_+(M) + p_- S_-(M)
Optimization of rules for the C_+ class:

\[
E(M; \gamma) = \gamma L(M) - A(M) = \gamma (p_{+-} + p_{-+}) - (p_{++} + p_{--})
\]

Since A(M) = 1 − L(M) − R(M), minimizing this is equivalent to:

\[
\min_M E(M; \gamma) \Leftrightarrow \min_M \{ (1+\gamma) L(M) + R(M) \}
\]

Large γ means no errors but a high rejection rate: optimization with different costs of errors.
ROC (Receiver Operating Characteristic) curve: p_{++}(p_{-+}), the hit rate as a function of the false-alarm rate.
Fuzzification of rules
Rule R_a(x) = {x > a} is fulfilled by G_x with probability:
\[
p(R_a | G_x) = \int_{a}^{\infty} G(y; x, s_x)\, dy \approx \sigma(\beta(x - a))
\]
The error function is approximated by the logistic function; assuming the error distribution σ(x)(1 − σ(x)), for s² = 1.7 it approximates a Gaussian to within 3.5%.
Rule R_ab(x) = {b > x ≥ a} is fulfilled by G_x with probability:
\[
p(R_{ab} | G_x) = \int_{a}^{b} G(y; x, s_x)\, dy \approx \sigma(\beta(x - a)) - \sigma(\beta(x - b))
\]
Soft trapezoids and NN

The difference between two sigmoids makes a soft trapezoidal membership function.
Conclusion: fuzzy logic with σ(x − a) − σ(x − b) membership functions is equivalent to crisp logic + Gaussian uncertainty of inputs.
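The soft trapezoidal membership function is a one-liner; a sketch (numpy assumed; β would be set from the assumed input uncertainty s_x as above):

```python
import numpy as np

def logistic(t):
    return 1.0 / (1.0 + np.exp(-t))

def soft_trapezoid(x, a, b, beta):
    """Difference of two sigmoids: ~1 inside [a, b], soft edges outside.

    For beta -> infinity this becomes the crisp rule {a <= x < b}."""
    return logistic(beta * (x - a)) - logistic(beta * (x - b))

x = np.linspace(0, 10, 5)
print(soft_trapezoid(x, a=3.0, b=7.0, beta=5.0))
```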
Optimization of rules
Fuzzy: large receptive fields, rough estimations. G_x – uncertainty of inputs, small receptive fields.
Minimization of the number of errors – difficult, non-gradient, but now Monte Carlo or analytical p(C|X;M).
\[
E(\{s_x\}; R) = \frac{1}{2} \sum_{X} \sum_{i} \big( p(C_i | X; M) - \delta(C_i, C(X)) \big)^2
\]
• Gradient optimization works for a large number of parameters.
• Parameters s_x are known for some features; use them as optimization parameters for the others!
• Probabilities instead of 0/1 rule outcomes.
• Vectors that were not classified by crisp rules now have non-zero probabilities (a sketch of this optimization follows below).
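A toy sketch of this idea for a single rule {x > a} in one dimension, tuning s_x by numerical gradient descent on the quadratic error above; everything here is illustrative, not the FSM/GhostMiner implementation:

```python
import numpy as np

def p_rule(x, a, s):
    """p(C+|x): probability that the fuzzy input G_x fulfills {x > a}."""
    beta = 2.4 / (np.sqrt(2) * s)
    return 1.0 / (1.0 + np.exp(-beta * (x - a)))

def error(s, x, t, a=0.0):
    """Quadratic error E = 1/2 * sum (p(C+|x) - target)^2."""
    return 0.5 * np.sum((p_rule(x, a, s) - t) ** 2)

# Toy data: class + above the threshold, class - below, with overlap
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 1, 50), rng.normal(1, 1, 50)])
t = np.concatenate([np.zeros(50), np.ones(50)])

s, lr, eps = 1.0, 0.05, 1e-5
for _ in range(200):  # gradient descent on the input uncertainty s_x
    g = (error(s + eps, x, t) - error(s - eps, x, t)) / (2 * eps)
    s = max(s - lr * g, 1e-3)  # keep s_x positive
print(f"optimized s_x = {s:.3f}, E = {error(s, x, t):.3f}")
```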
Mushrooms

The Mushroom Guide: no simple rule for mushrooms; no rule like 'leaflets three, let it be' for Poisonous Oak and Ivy.
8124 cases, 51.8% edible, the rest non-edible. 22 symbolic attributes, up to 12 values each, equivalent to 118 logical features, or 2^118 ≈ 3·10^35 possible input vectors.
Safe rule for edible mushrooms: odor = (almond ∨ anise ∨ none) ∧ spore-print-color ≠ green
48 errors, 99.41% correct
This is why animals have such a good sense of smell! What does it tell us about odor receptors?
Mushroom rules

To eat or not to eat, that is the question! Not any more ...
A mushroom is poisonous if:
R1) odor ∉ {almond, anise, none}; 120 errors, 98.52%
R2) spore-print-color = green; 48 errors, 99.41%
R3) odor = none ∧ stalk-surface-below-ring = scaly ∧ stalk-color-above-ring ≠ brown; 8 errors, 99.90%
R4) habitat = leaves ∧ cap-color = white; no errors!
R1 + R2 are quite stable, found even with 10% of data;
R3 and R4 may be replaced by other rules, e.g.:

R'3) gill-size = narrow ∧ stalk-surface-above-ring = (silky ∨ scaly)
R'4) gill-size = narrow ∧ population = clustered

Only 5 of 22 attributes used! Simplest possible rules? 100% in CV tests; the structure of this data is completely clear. (A direct coding of R1-R4 is sketched below.)
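The whole rule set fits in one short function; a sketch (attribute names as in the UCI Mushroom data, values as plain strings):

```python
def poisonous(m):
    """Apply rules R1-R4 in order; m is a dict of attribute values."""
    if m["odor"] not in ("almond", "anise", "none"):
        return True                                   # R1
    if m["spore-print-color"] == "green":
        return True                                   # R2
    if (m["odor"] == "none"
            and m["stalk-surface-below-ring"] == "scaly"
            and m["stalk-color-above-ring"] != "brown"):
        return True                                   # R3
    if m["habitat"] == "leaves" and m["cap-color"] == "white":
        return True                                   # R4
    return False
```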
Recurrence of breast cancer

Institute of Oncology, University Medical Center, Ljubljana.
286 cases: 201 no-recurrence (70.3%), 85 recurrence cases (29.7%).
Many systems tried, 65-78% accuracy reported. Single rule:
IF nodes-involved ∉ [0,2] ∧ degree-malignant = 3 THEN recurrence ELSE no-recurrence
77% accuracy, only trivial knowledge in the data: highly malignant cancer involving many nodes is likely to strike back.
Neurofuzzy system
Feature Space Mapping (FSM) neurofuzzy system. Neural adaptation, estimation of the probability density function (PDF) using a single-hidden-layer network (RBF-like) with nodes realizing separable functions:
\[
G(X; P) = \prod_{i=1}^{N} G_i(X_i; P_i)
\]
Fuzzy: the crisp yes/no membership is replaced by a degree of membership. Triangular, trapezoidal, Gaussian or other membership functions.
Membership functions in many dimensions:
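One FSM node as a product of per-dimension membership functions; a Gaussian-factor sketch (numpy assumed; rectangular or triangular factors would give crisp or fuzzy rules instead):

```python
import numpy as np

def fsm_node(X, centers, widths):
    """Separable transfer function G(X; P) = prod_i G_i(X_i; P_i).

    X: (n_samples, n_features); centers, widths: per-feature parameters."""
    factors = np.exp(-((X - centers) / widths) ** 2)  # Gaussian factors
    return factors.prod(axis=1)

X = np.array([[0.9, 2.1], [3.0, 0.0]])
print(fsm_node(X, centers=np.array([1.0, 2.0]), widths=np.array([0.5, 0.5])))
```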
FSM
Rectangular functions: simple rules are created, many nearly equivalent descriptions of this data exist.
If proline > 929.5 then class 1 (48 cases, 45 correct + 2 recovered by other rules).
If color < 3.79285 then class 2 (63 cases, 60 correct)
Interesting rules, but overall accuracy is only 88±9%
Initialize using clustering or decision trees. Triangular & Gaussian functions for fuzzy rules. Rectangular functions for crisp rules.
Between 9 and 14 rules with triangular membership functions are created; accuracy in 10×CV tests is about 96±4.5%.
Neural networks

• MLP – Multilayer Perceptrons, the most popular NN models. Use soft hyperplanes for discrimination. Results are difficult to interpret; complex decision borders. Prediction, approximation: infinite number of classes.
• RBF – Radial Basis Functions.
RBF with Gaussian functions are equivalent to fuzzy systems with Gaussian membership functions, but …
No feature selection => complex rules.
Other radial functions => not separable!
Use separable functions, not radial => FSM.
• Many methods to convert MLP NN to logical rules.
Rules from MLPs

Why is it difficult?
Learning dynamics

Decision regions shown every 200 training epochs in x3, x4 coordinates; borders are optimally placed with wide margins.
• There is no simple correlation between single values and final diagnosis.
• Results are displayed in the form of a histogram, called 'a psychogram'. Interpretation depends on the experience and skill of an expert and takes into account correlations between peaks.
Goal: an expert system providing evaluation and interpretation of MMPI tests at an expert level.
Problem: agreement between experts only about 70% of the time; alternative diagnosis and personality changes over time are important.
Psychometric data
1600 cases for women, the same number for men. 27 classes: norm, psychopathic, schizophrenia, paranoia, neurosis, mania, simulation, alcoholism, drug addiction, criminal tendencies, abnormal behavior due to ...
Extraction of logical rules: 14 scales = features.
Define linguistic variables and use FSM, MLP2LN, SSV - giving about 2-3 rules/class.
Probabilities for different classes. For greater uncertainties more classes are predicted.
Fitting the rules to the conditions: typically 3-5 conditions per rule; Gaussian distributions around measured values that fall into the rule interval are shown in green.
Verbal interpretation of each case, rule and scale dependent.
Visualization

Probability of classes versus input uncertainty.
Detailed input probabilities around the measured values vs. changes in a single scale; changes over time define the 'patient's trajectory'.
Interactive multidimensional scaling: zooming on the new case to inspect its similarity to other cases.
Summary

Computational intelligence methods: neural, decision trees, similarity-based & others help to understand the data. Understanding data is achieved by rules, prototypes, visualization.
Small is beautiful => simple is best!
• Simplest possible, but not simpler - regularization of models;
• accurate but not too accurate - handling of uncertainty;
• high confidence, but not paranoid - rejecting some cases.
• Challenges: hierarchical systems, discovery of theories rather than data models, integration with image/signal analysis, reasoning in complex domains/objects, applications in bioinformatics, text analysis ...
References
Many papers and comparisons of results for numerous datasets are kept at:
http://www.phys.uni.torun.pl/kmk
See also my homepage at:
http://www.phys.uni.torun.pl/~duch
for this and other presentations and some papers.
We are slowly getting there. All this and more is included in GhostMiner, data mining software (developed in collaboration with Fujitsu), just released …