ANFIS unfolded in time for multivariate timeseries forecasting
N. Arzu Şişman-Yılmaz a,∗, Ferda N. Alpaslan b, Lakhmi Jain c
a Central Bank of the Republic of Turkey, Ankara, Turkey
b Middle East Technical University, Computer Engineering Department, Ankara 06531, Turkey
c University of South Australia, Adelaide, Australia
Abstract
This paper proposes a temporal neuro-fuzzy system named ANFIS unfolded in time, which
is designed to provide an environment that keeps temporal relationships between the variables
and to forecast the future behavior of data by using fuzzy rules. It is a modification of the ANFIS
neuro-fuzzy model. The rule base of ANFIS unfolded in time contains temporal TSK (Takagi–
Sugeno–Kang) fuzzy rules. In the training phase, a back-propagation learning algorithm is used.
The system takes the multivariate data and the number of lags needed to construct the unfolded
model in order to describe a variable, and predicts the future behavior. Computer simulations
are performed by using real multivariate data and a benchmark problem (Gas Furnace Data).
Experimental results show that the proposed model achieves online learning and prediction on
temporal data. The results are compared with the results of ANFIS.
© 2004 Elsevier B.V. All rights reserved.
Keywords: Neuro-fuzzy systems; Unfolding in time; Backpropagation
1. Introduction
In multivariate time series analysis, it is possible to define each time series in terms
of previous values of itself and previous values of other time series in the same system.
The definitions of each time series can be represented as rules which can be used in
a rule-based system. These rules can be utilized for forecasting the future behavior of
the system.
Neuro-fuzzy systems are widely used for combining the function approximation and
learning ability of neural networks and enhanced explanation capability of fuzzy sys-
tems. Recurrent Neural Network is a convenient structure for processing time series
data as stated in [10]. In Recurrent Neural Networks, if the input sequence is of a
∗ Corresponding author.
0925-2312/$ - see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2004.03.009
140 N.A. Şişman-Yılmaz et al. / Neurocomputing 61 (2004) 139–168
maximum length T, the recurrent network can be turned into an equivalent feed-forward
neural network defined over T time intervals. This idea is called unfolding in time [12].
The feed-forward neural network is duplicated T times so that each copy of the neural
network is kept for one time interval. In other words, each neural network represents a
state of the input sequence in time. The connection weights between the same nodes
of the duplicated neural networks are identical, so that the neural networks
at different time intervals behave identically. A modified version of the back-propagation
algorithm is used for training the neural network. The weight updates are summed up
and applied to the same weights at different time intervals. In the literature, there are
various examples of unfolding-in-time applications, such as [3,12]. These examples are
mostly knowledge-based systems.
In this paper, the unfolding-in-time approach is used to construct the knowledge base in
a neuro-fuzzy model. A neuro-fuzzy system is used, and it is duplicated for the number
of time intervals needed to forecast the system output accurately. A modified temporal
back-propagation algorithm is used as the learning algorithm. The aim is to present
the temporal data to the neuro-fuzzy system continuously; in other words, the learning
algorithm performs online learning. The neuro-fuzzy model unfolded in time is treated
as a single neural network, rather than a duplication of the basic neural network.
The connections between the neural networks at different time intervals are also taken into
account while the network parameters are updated.
During the experiments, each data set is processed by the Fuzzy multivariate auto-
regression (MAR) algorithm [13], which is a variable extraction method for multi-
variate time series data. Fuzzy MAR is based on fuzzy linear regression. It aims to
extract a set of temporal variables by solving a linear programming problem by means
of the Simplex method.
2. ANFIS
Adaptive neuro-fuzzy inference system (ANFIS) is a neuro-fuzzy system developed
by Roger Jang [6–9]. It has a feed-forward neural network structure where each layer
is a neuro-fuzzy system component (Fig. 1). It simulates a Takagi–Sugeno–Kang (TSK)
fuzzy rule [14] of type 3, where the consequent part of the rule is a linear combination
of the input variables and a constant. The final output of the system is the weighted average
of each rule's output. The form of the type-3 rule simulated in the system is as follows:

IF x1 is A1 AND x2 is A2 AND · · · AND xp is Ap
THEN y = c0 + c1 x1 + c2 x2 + · · · + cp xp.

The neural network structure contains five layers excluding the input layer.
• Layer 0 is the input layer. It has n nodes where n is the number of inputs to the
system.
• Layer 1 is the fuzzification layer, in which each node represents a membership value
to a linguistic term as a Gaussian function with the mean
(4) Error is computed for this epoch by using an error measure to compare the expected
output to the output of the system.
(5) Training is performed by updating the parameters in layer 1 (a, b, c) and in layer 4
(p̃_i). This is offline learning, because the whole data set is presented to the network at
once and the parameters are then updated.
(6) After a predetermined number of training epochs is reached, the training process
terminates.
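Steps (4)–(6) can be pictured as an offline training loop. The sketch below is illustrative only: the sum-of-squared-errors measure, the finite-difference gradient step, and the `model_output` callable are assumptions standing in for the paper's exact update rules, not its implementation.

```python
import numpy as np

def train_offline(model_output, params, data, epochs=500, lr=0.01, h=1e-6):
    """Offline training sketch: the whole data set is presented each
    epoch, a sum-of-squared-errors measure compares expected and actual
    outputs (step 4), and every parameter -- layer-1 (a, b, c) and
    layer-4 p alike -- is nudged by a finite-difference gradient step
    (step 5) until the epoch budget runs out (step 6).
    model_output(params, x) stands in for the ANFIS forward pass."""
    params = np.asarray(params, dtype=float)

    def sse(p):
        return sum((y - model_output(p, x)) ** 2 for x, y in data)

    for _ in range(epochs):
        grad = np.zeros_like(params)
        base = sse(params)
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += h
            grad[i] = (sse(bumped) - base) / h  # forward difference
        params -= lr * grad
    return params
```

The finite-difference step is only a stand-in; ANFIS derives the layer-1 and layer-4 gradients analytically.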
The fuzzy rules produced in terms of parameters are as follows.
Rule 1:
IF GasFlowRate(t − 4) is SMALL1 AND CO2Concentration(t − 1) is SMALL2
THEN CO2Concentration(t ) = p11 * GasFlowRate(t − 4)
+p12 * CO2Concentration(t − 1) + p13
Rule 2:
IF GasFlowRate(t − 4) is SMALL1 AND CO2Concentration(t − 1) is LARGE2
THEN CO2Concentration(t ) = p21 * GasFlowRate(t − 4)
+p22 * CO2Concentration(t − 1) + p23
Rule 3:
IF GasFlowRate(t − 4) is LARGE1 AND CO2Concentration(t − 1) is SMALL2
THEN CO2Concentration(t ) = p31 * GasFlowRate(t − 4)
+p32 * CO2Concentration(t − 1) + p33
Rule 4:
IF GasFlowRate(t − 4) is LARGE1 AND CO2Concentration(t − 1) is LARGE2
THEN CO2Concentration(t ) = p41 * GasFlowRate(t − 4)
+p42 * CO2Concentration(t − 1) + p43
In this example there are two fuzzy values, SMALLi and LARGEi, for both variables
(GasFlowRate(t − 4) and CO2Concentration(t − 1)), where i denotes the index of the
variable. Each fuzzy value such as SMALLi is denoted by the parameters in the first
layer (ai, bi, ci). pjk is the parameter in the fourth layer, where j denotes the rule and
k denotes the parameter index. It is used in computing the output of the system, which
is CO2Concentration(t).
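The four rules above can be evaluated in the TSK manner as a weighted average of rule consequents. The sketch below assumes a generalized-bell shape for the (ai, bi, ci) membership parameters (the text truncates before giving the exact Gaussian form), and all numeric values in the usage example are placeholders, not values from the paper.

```python
import numpy as np

def bell(x, a, b, c):
    # Membership of x in one linguistic term with parameters (a, b, c);
    # an assumed generalized-bell shape, since the formula is truncated.
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def co2_prediction(gas_flow_lag4, co2_lag1, mf, p):
    """Evaluate Rules 1-4: mf maps the four linguistic terms to their
    (a, b, c) triples; p[j] holds (p_j1, p_j2, p_j3) for rule j+1."""
    s1 = bell(gas_flow_lag4, *mf["SMALL1"])
    l1 = bell(gas_flow_lag4, *mf["LARGE1"])
    s2 = bell(co2_lag1, *mf["SMALL2"])
    l2 = bell(co2_lag1, *mf["LARGE2"])
    # Firing strengths of the four rules (product T-norm)
    w = np.array([s1 * s2, s1 * l2, l1 * s2, l1 * l2])
    # Type-3 consequents: p_j1 * GasFlowRate + p_j2 * CO2Concentration + p_j3
    f = np.array([pj[0] * gas_flow_lag4 + pj[1] * co2_lag1 + pj[2]
                  for pj in p])
    return float(np.dot(w, f) / w.sum())  # weighted average output
```

Note that when all four consequents agree, the weighted average simply returns that shared value, which is a quick sanity check on the normalization.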
3. ANFIS unfolded in time
The neuro-fuzzy systems in the literature are mostly multi-layer feed-forward neural
network structures. When temporal data is concerned, it is necessary to construct a neural
network structure which uses temporal relationships.
Recurrent neural network structures are more convenient for that purpose. Unfolding
in time is a method used for training the recurrent neural network structures. The
neuro-fuzzy approach in the study utilizes this method.
The unfolding-in-time approach is applied to the neuro-fuzzy system in order to construct
a temporal multi-layer feed-forward neural network. The feed-forward neural network
is duplicated T times, where T is the number of time intervals needed in the specific
problem. The resulting system is called ANFIS unfolded in time.
Fig. 2. ANFIS unfolded in time (four duplicated ANFIS blocks NN1–NN4; external inputs X0–X3 and chained outputs Y1–Y4).
The neural network structure enables us to define a problem where the input can be
a vector (x̃, y) and the system produces only one output, which is y.
A sample system can be seen in Fig. 2. Each of the boxes represents one ANFIS structure
defined in Fig. 1. In the problem given in Fig. 2, it is assumed that the output
of the system depends on four previous input values. In order to achieve the network
structure, ANFIS is duplicated for four time intervals. The input of the neuro-fuzzy
system is composed of two elements, which are X and Y. There is only one output of
the system, which is Y. Initially (at time t = 0), X0 and Y0 are input to NN1 (the
network component for time interval 1). The output of NN1 is obtained as Y1 (at time
t = 1). Then, it is input to NN2. Another input, X1, is needed; it is supplied externally,
since the system does not produce X1. The output for the second time interval is
obtained as Y2. The same process is performed for the rest of the time intervals (two
more time intervals). Finally, Y4 is obtained as the output of NN4. It is treated as the
output of the system ANFIS unfolded in time for t = 0. In other words, the input is
supplied at time 0, and the output for time 4 is obtained.
The same process is repeated for time t = 1, 2, ... until the end of the sample data
set is reached.
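The forward flow of Fig. 2 can be sketched as a chain in which each duplicated block consumes the external input for its interval and the previous block's output. In the sketch below, `anfis_step` is a placeholder for one ANFIS block, not the paper's implementation; the toy linear step in the usage example is an assumption for illustration.

```python
def unfolded_forward(anfis_step, xs, y0):
    """Chain T duplicated ANFIS blocks (NN1..NNT): block t receives the
    external input xs[t] and the previous block's output, and the last
    block's output is the output of ANFIS unfolded in time."""
    y = y0
    for x in xs:  # the same shared block is applied at every interval
        y = anfis_step(x, y)
    return y
```

With T = 4 and a toy block `lambda x, y: 0.5 * y + x`, feeding X0..X3 = 1 and Y0 = 0 chains Y1 = 1, Y2 = 1.5, Y3 = 1.75, Y4 = 1.875.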
3.1. Temporal back-propagation algorithm
The algorithm used in the neural network structure is a modification of the back-
propagation algorithm. Since the basic neuro-fuzzy system is a feed-forward neural
network, the back-propagation algorithm is convenient to use. Because the neural
network is duplicated T times, the basic back-propagation learning algorithm is modified
accordingly.
The neuro-fuzzy system is treated as a black box containing T neural networks.
The connections between the neural networks, which represent the temporal relationships,
are also taken into account. The parameters in the last network are updated according to the
error in the last interval. The error in one of the previous networks is computed by using
the error in the specified time interval and the errors propagated from the following
intervals. The errors coming from the following intervals are back-propagated to the
error in the specific interval. Unlike the conventional unfolding-in-time method, the
parameters are updated independently.
The algorithm in Fig. 3 describes the steps for processing the data for one specific
time interval. The data is presented to the neuro-fuzzy system at each time interval.
Part 1 (Forward Phase)
1. The data at time T = k is given as input to the system: (x_k, y_k).
2. The output y_{k+1} is computed for the network at time interval 1. If T is
greater than 1, the last element in the input vector, y_k, becomes y_{k+1}.
The output of each step becomes one of the elements in the input vector
until t = T.
3. The output of the last neural network is the output of
ANFIS_unfolded_in_time. In other words, Y_k = y_{T+k}.

Part 2 (Backward Phase)
1. Compute the error for the output response of the system Y_k and the
given input vector (x_k, y_k): error = (y_k − Y_k)².
2. Back-propagate the error to the parameters in network T.
3. Back-propagate the error to the error in network T − 1. Then update
the parameters in network T − 1 by using the propagated error.
4. Repeat the above step until t = 0.

Fig. 3. Forward and backward processing phases in ANFIS unfolded in time.
Fig. 4. Error computation for a time interval (the error arriving at NN(t − 1) combines E(t − 1) with the derivative of E(t) with respect to y(t − 1) propagated back from NN(t)).
The data is processed and the output is obtained for T time intervals ahead. The error
is computed and back-propagated through the network, updating the parameters of each
node (online learning).
The algorithm contains two phases: the Forward phase and the Backward phase. In the
Forward phase, the data at a specific time k is introduced to the system and the com-
putations are performed according to the input value. The important feature of the
temporal neuro-fuzzy system is that, at the end of the computation for the data at time
interval k, it yields the output of the system at time interval k + T.
In the Backward phase, the parameters in all networks are updated according to the
output produced by the neuro-fuzzy system. The error in network T is used to update
only the parameters in network T. But, for updating the parameters in network T − 1,
the error of the following network (which is network T) is back-propagated to network
T − 1. The same process is applied to all previous time intervals until all the parameters
in all networks are updated.
The output of the forward phase is accepted as the output of ANFIS unfolded in time.
At the end of the Backward Phase all parameters are updated and the data in the next
time interval is presented to the system.
The method of back-propagating the error is shown in Fig. 4. If the error computed
at time interval t is E(t), then the error is back-propagated through the neural network
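One way to picture the backward phase is a toy chain in which the squared error at the final interval is propagated block by block, each block's parameters being updated independently from the error that reaches its interval. The linear stand-in block and learning rate below are illustrative assumptions, not the paper's ANFIS derivatives.

```python
def backward_phase(params, xs, y0, target, lr=0.01):
    """Sketch: block t computes y_t = params[t] * y_{t-1} + xs[t] (a toy
    stand-in for one ANFIS block).  The error at the last interval is
    back-propagated, and each block's parameter is updated independently
    from the error arriving at its interval."""
    # Forward pass, remembering the state entering each block
    ys = [y0]
    for p, x in zip(params, xs):
        ys.append(p * ys[-1] + x)
    err = ys[-1] - target
    grad_y = 2.0 * err                 # d(err^2)/dy at the last interval
    new_params = list(params)
    for t in reversed(range(len(params))):
        new_params[t] -= lr * grad_y * ys[t]  # dE/dp_t = dE/dy_t * y_{t-1}
        grad_y *= params[t]                   # propagate dE/dy back to t-1
    return new_params
```

A single backward pass should reduce the final-interval error, which is the property the update is meant to have.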
Fig. 10. Output for Gas furnace data variable x0.
• Monthly US housing starts and sold data contains two variables, which are the prices
of housing starts (x0) and of housing sold (x1). The data is collected for the period
from January 1965 to December 1974.
• Monthly Interest Rate data contains three series, which are the Federal Funds Rate (x0),
the 90-Day Treasury Bill Rate (x1), and the 1-Year Treasury Bill Rate (x2).
• Quarterly, seasonally adjusted US fixed investment (x0) and changes in business
inventories (x1) are tested by the algorithm. The data is recorded between 1947 and
1971.
• Natural logarithms of the annual sales of mink furs (x0) and muskrat furs (x1) by the
Hudson's Bay Company are used in the experiments. The data is recorded between
1850 and 1911.
• Power station data is taken from a 50 MW turbo-alternator and contains in-phase
current deviations (x0), out-of-phase current deviations (x1) and frequency deviations
of the generated voltage (x2).
• The Production data set contains two variables, which are weekly production figures in
thousands of units (x0) and weekly billing figures in millions of dollars (x1) of a
company.
• The Unemployment data set contains two variables, which are unemployment (x0) and
gross domestic product (x1) in the UK between 1955 and 1969. The data is recorded
quarterly.
In the experiments, the first half of the observations is used in the training phase. The
entire data set is used in the recognition phase. The training and recognition results are
Fig. 13. (continued).
The recognition error is 0.201765. For the Grain Prices data set, the recognition
errors range between 0.075 and 0.17, which are very close results, as given in Table 3.
In Fig. 11, it can be seen that the output for x0 successfully simulates the behavior of
the original x0 data. The other variables also yield errors very close to zero.

The Housing data set has approximately the same recognition errors for both x0 and x1,
as seen in Table 3, which are 12.9466 and 11.561. Also, as seen in Fig. 12, for both
x0 and x1, the output of ANFIS unfolded in time follows the same pattern as the
original output but misses the local peak data points.

The Interest Rates data set variables x0, x1 and x2 yield promising recognition errors
in Table 3. Fig. 13 validates this situation. The outputs obtained are very close to the
expected output.

The recognition errors for x0 and x1 in the Investment and Inventories data set are
very close to each other in Table 3. The out-of-sample recognition error is slightly
higher than the error for the training sample for both variables, as given in Fig. 14.

The recognition errors are slightly worse for x1 than for x0 in the Mink–Muskrat
data set in Table 3. In Fig. 15, the outputs are close to the expected results for both
of the variables.

In Table 3, for the Power Station data, x0 and x2 have approximately the same recognition
Fig. 16. (continued).
5. Conclusion
A neuro-fuzzy system is constructed for the prediction of time series data by means of
a temporal learning algorithm and a temporal neuro-fuzzy model. The model, namely
ANFIS unfolded in time, provides an online environment which takes a time series
and forecasts its future behavior. Because the recurrent neural network
structure is convenient for time series analysis, the unfolding-in-time approach is
useful for representing a recurrent neural network as a feed-forward one. The neuro-fuzzy
system, which is basically a black box of feed-forward neural networks, is duplicated
for T time intervals in this approach. The number of time intervals is provided to
the neuro-fuzzy system as an argument. It is computed by using the Fuzzy-MAR
algorithm [13]. As an alternative, tests can be performed iteratively to find the best
number of time intervals for the given time series data.
Because the resulting model can be used for small time intervals T, it can be applied
to areas involving short-term prediction, such as:
• Financial or meteorological data forecasting can be an application area, since the
model is convenient for forecasting problems.
• Sequence detection given by Rumelhart [12] is a possible application of the unfolding-in-
time concept.
• Image processing, specifically motion detection, can be a different application area
for ANFIS unfolded in time because of the usage of temporal expert system rules.
[2] G.E. Box, G.M. Jenkins, Time Series Analysis: Forecasting and Control, Holden Day, San Francisco,
1970.
[3] F.N. Civelek-Alpaslan, K.M. Swigger, A temporal neural network model for constructing connectionist
expert system knowledge bases, J. Network Comput. Appl. 19 (1996) 19–133.
[4] W. Farag, A. Tawfik, On fuzzy model identification and the gas furnace data, Proceedings of the
IASTED International Conference, Hawaii, 2000.
[5] M.B. Gorzalczany, A. Gluszek, Neuro-fuzzy systems for rule-based modeling of dynamic processes,
Proceedings of ESIT, 2000, pp. 416–422.
[6] J.-S.R. Jang, Self-learning fuzzy controllers based on temporal back propagation, IEEE Trans. Neural