Extreme Learning Machine for Multi-Categories Classification Applications
Hai-Jun Rong 1,2, Guang-Bin Huang 1 and Yew-Soon Ong 2
1 School of Electrical and Electronic Engineering
2 School of Computer Engineering
Nanyang Technological University, Nanyang Avenue, Singapore 639798
E-mail: {hjrong, egbhuang, asysong}@ntu.edu.sg
IEEE World Congress on Computational Intelligence, Hong Kong, June 1-6 2008
ELM Web Portal: www.ntu.edu.sg/home/egbhuang
Outline

1 Neural Networks
2 ELM
3 ELM for Multi-Categories Classification Problems
4 Performance Evaluations
5 Summary
Function Approximation of Neural Networks

Figure 3: Feedforward Network Architecture.

Learning Model

For N arbitrary distinct samples (x_i, t_i) ∈ R^n × R^m, SLFNs with L hidden nodes and activation function g(x) are mathematically modeled as

    f_L(x_j) = o_j, j = 1, ..., N    (5)

Cost function: E = Σ_{j=1}^{N} ‖o_j − t_j‖².

The target is to minimize the cost function E by adjusting the network parameters: β_i, a_i, b_i.
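For concreteness, the model and the cost above can be sketched in NumPy. This is an illustrative sketch only, assuming additive sigmoid hidden nodes; helper names such as slfn_output are ours, not from the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def slfn_output(X, a, b, beta):
    """Outputs o_j of an SLFN with L additive sigmoid hidden nodes.

    X: (N, n) inputs; a: (L, n) input weights; b: (L,) biases;
    beta: (L, m) output weights."""
    H = sigmoid(X @ a.T + b)   # (N, L) hidden layer outputs
    return H @ beta            # (N, m) network outputs o_j

def cost(O, T):
    """E = sum_j ||o_j - t_j||^2 over all N samples."""
    return np.sum((O - T) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))                              # N=5 samples, n=3
T = rng.standard_normal((5, 2))                              # m=2 targets
a, b = rng.standard_normal((4, 3)), rng.standard_normal(4)   # L=4 hidden nodes
beta = rng.standard_normal((4, 2))
print(cost(slfn_output(X, a, b, beta), T))
```

Conventional training would adjust a, b and beta jointly to drive E down; ELM, introduced below, fixes a and b at random and solves only for beta.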
Learning Algorithms of Neural Networks

Figure 4: Feedforward Network Architecture.

Learning Methods

Many learning methods, mainly based on gradient-descent/iterative approaches, have been developed over the past two decades.

Back-Propagation (BP) and its variants are the most popular.
Advantages and Disadvantages

Popularity
Widely used in various applications: regression, classification, etc.

Limitations
Usually different learning algorithms are used in different SLFN architectures.
Some parameters have to be tuned manually.
Overfitting.
Local minima.
Time-consuming.
Extreme Learning Machine (ELM)

Figure 5: Feedforward Network Architecture: any type of G(a_i, b_i, x).

New Learning Theory

If a continuous target function f(x) can be approximated by SLFNs with adjustable hidden nodes, then the hidden node parameters of such SLFNs need not be tuned. Instead, all these hidden node parameters can be randomly generated without the knowledge of the training data. Given any nonconstant piecewise continuous function g, for any continuous target function f and any randomly generated sequence {(a_i, b_i)}_{i=1}^{L},

    lim_{L→∞} ‖f(x) − f_L(x)‖ = 0

holds with probability one if β_i is chosen to minimize ‖f(x) − f_L(x)‖, i = 1, ..., L.

G.-B. Huang, et al., “Universal Approximation Using Incremental Constructive Feedforward Networks with Random Hidden Nodes,” IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879-892, 2006.
G.-B. Huang, et al., “Convex Incremental Extreme Learning Machine,” Neurocomputing, vol. 70, pp. 3056-3062, 2007.
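This theorem can be illustrated numerically: draw the hidden node parameters at random, never tune them, and fit only the output weights β by least squares. The sketch below assumes a sigmoid g and the target f(x) = sin(πx); it is a demonstration under those assumptions, not part of the theory itself:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)[:, None]     # dense samples of the input
f = np.sin(np.pi * x).ravel()                # continuous target f(x)

def random_fit_error(L):
    """Fit f with L randomly generated sigmoid hidden nodes.

    The (a_i, b_i) are random and never tuned, as the theorem allows;
    only the output weights beta are computed, by least squares."""
    a = rng.uniform(-4, 4, size=(L, 1))      # random hidden node parameters
    b = rng.uniform(-4, 4, size=L)
    H = 1.0 / (1.0 + np.exp(-(x @ a.T + b))) # hidden layer outputs
    beta, *_ = np.linalg.lstsq(H, f, rcond=None)
    return np.linalg.norm(f - H @ beta)

print(random_fit_error(5), random_fit_error(100))
```

The approximation error typically shrinks as L grows, mirroring lim_{L→∞} ‖f(x) − f_L(x)‖ = 0.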
Unified Learning Platform

Figure 6: Feedforward Network Architecture: any type of G(a_i, b_i, x).

Mathematical Model

For N arbitrary distinct samples (x_i, t_i) ∈ R^n × R^m, standard SLFNs with L hidden nodes and output function g(x) are mathematically modeled as

    Σ_{i=1}^{L} β_i G(a_i, b_i, x_j) = t_j, j = 1, ..., N    (6)

(a_i, b_i): hidden node parameters.
β_i: the weight vector connecting the i-th hidden node and the output node.
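The "any type of G(a_i, b_i, x)" point is what makes the platform unified: the same machinery works for different node types. A small sketch of the hidden layer output matrix H (names are illustrative; the two node types shown are common examples, additive sigmoid and RBF):

```python
import numpy as np

def hidden_output_matrix(X, a, b, G):
    """H[j, i] = G(a_i, b_i, x_j): the hidden layer output matrix.

    Any nonconstant piecewise continuous node function G can be
    plugged in unchanged."""
    N, L = X.shape[0], a.shape[0]
    H = np.empty((N, L))
    for i in range(L):
        H[:, i] = G(a[i], b[i], X)
    return H

# Two example node types covered by G(a_i, b_i, x):
sigmoid_node = lambda a, b, X: 1.0 / (1.0 + np.exp(-(X @ a + b)))
rbf_node = lambda a, b, X: np.exp(-b * np.sum((X - a) ** 2, axis=1))

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 3))                               # N=6, n=3
a, b = rng.standard_normal((4, 3)), rng.uniform(0.1, 1.0, 4)  # L=4 nodes
H = hidden_output_matrix(X, a, b, sigmoid_node)
print(H.shape)  # (6, 4)
```

Swapping sigmoid_node for rbf_node changes the network type but not the learning procedure.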
Extreme Learning Machine (ELM)

Mathematical Model

Σ_{i=1}^{L} β_i G(a_i, b_i, x_j) = t_j, j = 1, ..., N is equivalent to Hβ = T, where H is called the hidden layer output matrix of the neural network; the i-th column of H is the output of the i-th hidden node with respect to inputs x_1, x_2, ..., x_N.
The output weights are then computed as the least-squares solution β̂ = H†T, where H† is the Moore-Penrose generalized inverse of the hidden layer output matrix H.
Source Codes of ELM
http://www.ntu.edu.sg/home/egbhuang/
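The official source codes are available at the portal above; independently of those, the whole training and prediction procedure can be sketched in a few lines of NumPy (assuming additive sigmoid hidden nodes; function names are ours):

```python
import numpy as np

def elm_train(X, T, L, rng):
    """Three-step ELM training.

    1) Randomly generate hidden node parameters (a_i, b_i).
    2) Compute the hidden layer output matrix H.
    3) Output weights: beta = pinv(H) @ T (Moore-Penrose inverse)."""
    n = X.shape[1]
    a = rng.uniform(-1, 1, size=(L, n))
    b = rng.uniform(-1, 1, size=L)
    H = 1.0 / (1.0 + np.exp(-(X @ a.T + b)))
    beta = np.linalg.pinv(H) @ T
    return a, b, beta

def elm_predict(X, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ a.T + b)))
    return H @ beta

# Tiny sanity check: with L >= N the network can memorize a small
# random dataset, since Hβ = T then typically has an exact solution.
rng = np.random.default_rng(3)
X = rng.standard_normal((10, 2))
T = rng.standard_normal((10, 1))
a, b, beta = elm_train(X, T, L=20, rng=rng)
print(np.linalg.norm(elm_predict(X, a, b, beta) - T))
```

No iteration, no learning rate, no gradient: the only numerical work is one generalized inverse.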
ELM Learning Algorithm
Salient Features
“Simple Math is Enough.” ELM is a simple tuning-free three-step algorithm.
The learning speed of ELM is extremely fast.
Unlike conventional learning methods, which MUST see the training data before generating the hidden node parameters, ELM can generate the hidden node parameters before seeing the training data.
Unlike traditional gradient-based learning algorithms, which only work for differentiable activation functions, ELM works for all bounded nonconstant piecewise continuous activation functions, including non-differentiable ones.
Unlike traditional gradient-based learning algorithms, which face issues such as local minima, improper learning rates and overfitting, ELM tends to reach the solutions directly without such trivial issues.
The ELM learning algorithm is much simpler than many learning algorithms for neural networks and support vector machines.
ELM for Multi-Categories Classification Problems

Three basic methods

1 Single ELM classifier: m output nodes of ELM for m-class applications. We say x is in class l if output node l has the highest output value.
2 One-Against-All ELM (ELM-OAA): m-class classification problems are implemented by m binary ELM classifiers, each of which is trained independently to classify one of the m pattern classes.
3 One-Against-One ELM (ELM-OAO): the m pattern classes are pairwise decomposed into m(m − 1)/2 two-class problems, and each of them is trained by one binary ELM classifier.

An exponential-loss-based decoding approach is used in ELM-OAA and ELM-OAO.
E. L. Allwein, et al., “Reducing multiclass to binary: a unifying approach for margin classifiers,” Journal of Machine Learning Research, vol. 1, pp. 113-141, 2001.
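The OAA and OAO decompositions above are independent of the underlying binary learner. A minimal sketch of how the binary tasks are constructed (helper names are ours; any binary ELM classifier would then be trained on each task):

```python
from itertools import combinations
import numpy as np

def oaa_tasks(y, m):
    """One-Against-All: m binary tasks; task l is 'class l vs rest'."""
    return [np.where(y == l, 1, -1) for l in range(m)]

def oao_tasks(y, m):
    """One-Against-One: m(m-1)/2 binary tasks, one per class pair.

    Each task keeps only the samples of the two classes involved."""
    tasks = []
    for (p, q) in combinations(range(m), 2):
        idx = np.where((y == p) | (y == q))[0]
        tasks.append((p, q, idx, np.where(y[idx] == p, 1, -1)))
    return tasks

y = np.array([0, 1, 2, 3, 1, 2, 0, 3])             # labels for m = 4 classes
print(len(oaa_tasks(y, 4)), len(oao_tasks(y, 4)))  # 4 binary tasks vs 6
```

The trade-off discussed later follows directly from these counts: OAO trains more (m(m − 1)/2) but smaller classifiers, OAA fewer (m) but each on the full training set.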
Table 1: Specification of Real-World Classification Benchmark Problems

Type | Datasets | # Attributes | # Classes | # Observations (Training / Testing)
Summary

ELM, ELM-OAO and ELM-OAA obtain similar testing accuracies.
ELM-OAO usually requires a smaller number of hidden nodes than the single ELM classifier and ELM-OAA.
The training time required by ELM-OAO is similar to or less than that of ELM and ELM-OAA when the number of pattern classes is small (say, not larger than 10).
However, when the number of pattern classes is large (say, larger than 10), the training time of ELM-OAO is most likely higher than that of the single ELM classifier but still smaller than that of ELM-OAA.
References
G.-B. Huang, et al., “Universal Approximation Using Incremental Constructive Feedforward Networks with Random Hidden Nodes,” IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879-892, 2006.
G.-B. Huang, et al., “Extreme Learning Machine: Theory and Applications,” Neurocomputing, vol. 70, pp. 489-501, 2006.
M.-B. Li, et al., “Fully Complex Extreme Learning Machine,” Neurocomputing, vol. 68, pp. 306-314, 2005.
N.-Y. Liang, et al., “A Fast and Accurate Online Sequential Learning Algorithm for Feedforward Networks,” IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411-1423, 2006.
G.-B. Huang, et al., “Can Threshold Networks Be Trained Directly?” IEEE Transactions on Circuits and Systems II, vol. 53, no. 3, pp. 187-191, 2006.
G.-B. Huang, et al., “Real-Time Learning Capability of Neural Networks,” IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 863-878, 2006.