THE EFFECTS OF NEURAL NETWORKS TRAINING FACTORS ON STOCK PRICE PREDICTION ERRORS

Ahmed F. Aleroud (1), Izzat M. Alsmadi (2), Ahmad I. Alaiad (1), and Qasem A. Al-Radaideh (2)

(1) Department of Information Systems, University of Maryland, Baltimore County (UMBC), 1000 Hilltop Circle, Baltimore, MD 21250, USA
{Ahmed21, aalaiad1}@umbc.edu

(2) Department of Computer Information Systems, Yarmouk University, Irbid 21163, Jordan

{ialsmadi, qasemr}@yu.edu.jo

Proceedings of the IASTED International Conference on Communication, Internet, and Information Technology (CIIT 2012), May 14-16, 2012, Baltimore, USA. DOI: 10.2316/P.2012.773-017

ABSTRACT
Machine learning approaches have been widely used in many financial applications. The neural network is a computational approach that has proven effective in stock price prediction. This study evaluates the effects of changing two training factors on the error rates of neural networks in short-term and long-term stock price prediction. The training time and the length of the training period are increased in regular steps, and their effect on the prediction error rate is evaluated. The results show that the training time has a significant effect on minimizing the error rate of short-term prediction compared with its effect on the long-term prediction error rate. The results for the training-period factor indicate that increasing the training period reduces the error rate of long-term stock price prediction more than it reduces the short-term prediction error rate.

KEY WORDS
Neural networks, feed forward network, training time, stock price prediction

1. Introduction

One of the most important issues in the financial sector is forecasting stock exchange market index values. Stock markets are highly affected by many interrelated economic, political, and sentimental factors, which often interact with one another in a complex manner. The variation of stock prices is nonlinear by nature. As such, it has always been very difficult to forecast stock prices. Additionally, predicting the stock market has been an attractive research topic over the last decade, not only for researchers but also for investors. Finding a model that predicts stock prices accurately with the lowest possible error rate has been the essential goal of most research in this context. Several studies have predicted stock prices using data mining techniques such as decision trees and neural networks. Most of them focus on developing a model that improves prediction performance in stock price prediction; nonetheless, few research papers have discussed the algorithmic factors that may affect stock price prediction accuracy.

In this paper, we demonstrate the effect of two neural network (NN) training factors on the error rates of both short-term and long-term stock price prediction. Short-term stock price prediction refers to predicting stock prices over a short period of time (e.g., days), while long-term prediction refers to predicting stock prices over a long period of time (e.g., months). Predicting stock prices using NNs involves several factors that may affect the overall prediction outcome. Two factors are of primary interest in this research: the training dataset size (i.e., the length of the training period) and the neural network training time. We have evaluated the effect of these two factors on the error rates of short-term and long-term stock price prediction.

The rest of this paper is organized as follows: in Section 2, we describe related work pertaining to the use of neural networks in stock price prediction. In Section 3, we describe our research approach and the steps used to evaluate the effect of neural network training factors on prediction errors. In Section 4, we analyze the effect of each factor on the neural network's prediction error rate in both short-term and long-term stock price prediction. Finally, we conclude the paper and point out future work in Section 5.

2. Related Work

Predicting stock prices has attracted researchers for many decades. Several methods have been used to predict stock prices; each provides a different perspective and has different dimensions. However, the common goal has been to support better financial and business decisions. Generally speaking, the methods used to predict market prices fall broadly into three categories: fundamental analysis, traditional time series forecasting, and technical analysis [1]. The author of [2] pointed to the dependency of a stock's price on its intrinsic value and anticipated return on investment. Fundamental analysis is concerned with analyzing the company's operations and the market in which the company operates, and it is usually not appropriate for short- and medium-term speculation.


In statistics, traditional time series forecasting techniques have been applied to predict stock price fluctuations [3]. These statistical techniques were used to build models that analyze historical price data in order to extract meaningful statistics; techniques such as the autoregressive integrated moving average (ARIMA) and multivariate regression were used. Technical analysis [4] aims to predict future price variations using past stock prices and volume information. The research in [5] reviewed different applications of data mining techniques to the stock market; the authors discussed the applicability of decision trees, neural networks, clustering, and association rules to different stock market applications. The research in [6] studied the Istanbul Stock Exchange (ISE) and explored using artificial neural networks (ANNs) to predict the ISE market index value. In [7], the authors used ANNs to forecast Jordanian stock prices; the network model was developed using a feed-forward multi-layer neural network with two to three layers. The research in [1] predicted the Tehran Exchange Price Index (TEPIX) based on the Tehran Stock Exchange (TSE) database using two types of ANN models. The main objective was to find possible dependencies in TEPIX and to see which neural network model performs best in forecasting; the results showed that the Group Method of Data Handling (GMDH) type of neural network gives better results when the associated system is highly complex.

The research in [8] proposed a hybrid genetic algorithm (GA) optimized decision tree-support vector machine (SVM) approach. The proposed approach showed highly accurate stock prediction and can predict one-day-ahead trends in stock markets. The system performance was compared to that of ANN-based and Naive Bayes (NB) based systems, and the results showed that the trend prediction accuracy is highest for the hybrid system, which outperforms the ANN and NB based trend prediction systems. The authors of [9] also developed a hybrid trend prediction system based on a combined decision tree and rough set theory. It used 21 different technical indicators to extract information from the financial time series, and the results showed that the trend prediction accuracy is highest when a hybrid system is applied. The research in [10] presented a neuro-evolutionary method for short-term stock index prediction based on a combined NN and GA. The goal was to predict the change of the closing values of the GSE index for the next day; the standard back propagation algorithm with momentum was used for learning. The simulation results obtained when predicting the percentage of change of the closing values were very promising and competitive with those obtained by other heuristic models. The research in [11] aimed to develop a prediction application using computational intelligence methods, employing ANNs, that could assist investors in making financial decisions; multi-layer perceptron and radial basis function neural network architectures were modeled to forecast closing prices.

The research in [12] evaluated the prediction accuracy for TOPIX (Tokyo Stock Exchange Price Indexes) using multi-branch neural networks (MBNNs) and compared the results to conventional NNs; the results showed that MBNNs can achieve better accuracy and stability with fewer parameters than conventional NNs. The research in [13] focused on finding a way to pick the right stocks and the right time to buy in the stock market. It developed a trading method that combines the filter rule and the decision tree technique: the filter rule generates candidate trading points, and the decision tree clusters and screens these generated points. The results showed that the proposed trading method outperformed both the filter rule and previously evaluated methods. A hybrid decision tree (DT) and ANN model was used in [14] for the stock price prediction problem. The goal was to remove noisy data using the DT baseline model and to obtain a new training dataset, which is then passed to the ANN. Three types of experiments were conducted: the first used the DT baseline, where the achieved accuracy was 65%; the second was conducted on the ANN, where the accuracy was about 59%; and the best results were achieved when the ANN and DT were combined. The authors of [15] used genetic algorithms (GAs) to generate short-term trading rules in the stock market. They showed that the GA trading prediction accuracy was better than a "buy and hold" approach that takes into account company performance in the prediction model. In [16], the authors proposed a hybrid genetic-neural network algorithm to analyze and forecast the closing prices of stocks, utilizing the role of the GA in optimization and fitness. The experimental results showed that the correlation coefficient between the predicted price and the actual one was more than 0.96. Additionally, the daily forecasts were better than the weekly ones, which means that GA-ANN was better for short-term forecasting.

3. The Research Approach

The primary focus of this research paper is two-fold: first, to measure the impact of variation in the neural network training time on both short-term and long-term stock price prediction error rates; second, to evaluate the effect of the training data period length, represented by the number of days used as input to the neural network, on both short-term and long-term prediction error rates. Before describing our approach in detail, we describe one of the most widely used training approaches for neural networks, namely the propagation approach. The goal of the propagation approach is to reach an acceptable error rate after supplying the neural network with enough training data and training time to predict the expected output. Hence, the NN propagation approach is affected by two balancing factors: the first is the algorithm training time, and the second is the amount of historical data used in training, which represents the length of the training period. Several propagation training methods have been proposed and used with artificial neural networks.


Back propagation and resilient propagation are examples of propagation algorithms used in neural network training. The goal of the back propagation algorithm is to minimize the error function as much as possible to achieve the desired output. The problem with this approach is that it does not always guarantee the optimal solution; in practice, the results may converge to any local minimum on the error surface. Backpropagation NNs are created using supervised learning, which is appropriate for a multi-layer feed-forward network. The basic idea of the back propagation algorithm is to repeatedly apply the chain rule to compute the influence of each weight in the neural network with respect to an arbitrary error function. Backpropagation has several challenges and problems. First, it uses only weight-specific information (i.e., the partial derivative) to adapt weight-specific parameters. Further, on flat plateaus of the error surface, weight adaptation is slow and tends to fluctuate. Moreover, choosing an optimal learning rate and momentum value for backpropagation can be difficult. Finally, the convergence of backpropagation learning is very slow and is not always guaranteed. By contrast, the resilient algorithm uses a different approach that improves the probability of finding the best solution by adjusting the weights of the network based on local gradient information [17].

The goal of this research is to compare the effects of different training factors on long-term and short-term stock price prediction error rates. We selected the resilient algorithm to perform our experiments; this algorithm is designed for the feed-forward neural network paradigm, in which the nodes are arranged into layers. Resilient propagation is an effective propagation method that performs a direct adaptation of the weight step based on local gradient information, and it can be used for feed-forward neural networks and simple recurrent neural networks. Its major motivation is to eliminate the harmful influence of the size of the partial derivative on the weight step: only the sign of the derivative is considered, to indicate the direction of the weight update. Hence, its major difference from other techniques is that the adaptation process is not blurred by the unforeseeable influence of the size of the derivative, but depends only on the temporal behavior of its sign. This leads to an efficient and transparent adaptation process. Another important feature of the resilient algorithm is the simplicity of its implementation, in addition to the efficiency of its local learning scheme, which modifies the update value for each weight according to the sequence of signs of its partial derivatives. As a result, the number of learning steps is significantly reduced, and the computational expense remains small. A final important feature of the algorithm is its robustness with respect to the choice of its initial parameters. Our research approach uses resilient adaptation to measure the effects of changing training factors on the adaptation process.
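To make the sign-based adaptation concrete, the following is a minimal sketch of the resilient propagation update rule, in the variant without weight backtracking, written in Python. It is an illustration under commonly reported default parameter values (increase factor 1.2, decrease factor 0.5), not the Matlab implementation used in the experiments.

```python
import numpy as np

# Minimal sketch of the RPROP weight update (variant without weight
# backtracking). Illustrative only; the paper's experiments used Matlab.
ETA_PLUS, ETA_MINUS = 1.2, 0.5     # step-size increase / decrease factors
STEP_MIN, STEP_MAX = 1e-6, 50.0    # bounds on the per-weight step size

def rprop_step(w, grad, prev_grad, step):
    """One RPROP update: only the sign of each partial derivative matters."""
    sign_change = grad * prev_grad
    # Same gradient sign as last time -> grow the step; sign flipped -> shrink it.
    step = np.where(sign_change > 0.0, np.minimum(step * ETA_PLUS, STEP_MAX), step)
    step = np.where(sign_change < 0.0, np.maximum(step * ETA_MINUS, STEP_MIN), step)
    # Move each weight against the sign of its gradient by its own step size.
    w_new = w - np.sign(grad) * step
    return w_new, grad, step        # the returned grad becomes prev_grad next call
```

Each weight carries its own step size: the step grows while the sign of the partial derivative stays the same and shrinks when it flips, which is exactly the behavior described above.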

Figure 1 shows a high-level overview of the research approach followed in this paper. The major issues we address in this study are, first, the effect of raising the training time on both short-term and long-term price prediction, and second, the effect of the training data period length on short-term and long-term stock price prediction given the same neural network training time. We believe that addressing these two issues is very important when applying neural network models to financial applications. Although we found many research papers focusing on improving NN prediction performance in long-term and short-term stock price prediction, we did not find any work that compares the effects of these two training factors on the performance of neural networks. Addressing these factors is vital for several reasons. First, it is necessary to validate the results of previous research approaches that used neural networks for financial applications. Second, it is very important to show experimentally how to generate more accurate long-term predictions by understanding the best adaptation behavior of neural networks. Hence, the scope of this paper is to study the effects of raising the training time, or the training data period length, on the error rate of the prediction process. We ran different experiments to see the effects of these two factors on the prediction error rate for long-term and short-term price prediction. The steps of our approach, as shown in Figure 1, are summarized as follows. First, the neural network is trained on an initial set of inputs. Second, the performance of the neural network model is evaluated for short-term and long-term price prediction. Next, the NN training settings, represented by the training time and the length of the training period, are modified, and the neural network is re-trained with the new settings.

[Figure 1. The Research Approach (block diagram with components: Training Data, Neural Network Training, Short Term Price Prediction, Long Term Price Prediction, Neural Network Evaluation Results, Training Time Modification, Training Period Modification).]


The effect of the new settings on the error rate of short-term and long-term stock price prediction is measured, and the results are compared with those of the initial settings. We started by training the proposed neural network (see Section 4) using a training dataset with different training period lengths; the larger the date window, the larger the dataset used in training. The dataset contains several financial indicators used to predict the average adjusted closing price of the stock. In this study we are only interested in comparing the actual closing prices with the predicted ones: the purpose is not the prediction process itself, but to study the effects of different neural network training factors on the prediction error rate. Hence, we focused on a limited number of financial indicators. Table 1 shows a sample of the training instances that we provided as inputs to our neural network. The target variable in this dataset is the adjusted close price. The historical data are taken from the S&P 500 index dataset [18]; in our experiments we used this historical knowledge to predict the adjusted closing price. We used a sample of four years of training data, for the years 2008-2011. The dataset includes six attributes: the open, high, low, close, volume, and adjusted close price. Our major focus is on the average adjusted closing price.

3.1 The Neural Network Design

A neural network is an information processing mechanism that consists of many processing elements, called neurons, interconnected through unidirectional connections [16]. The information processing capabilities are stored in the neurons. Several learning techniques can be used to build the network's learning capabilities, and they are categorized as supervised or unsupervised. The learning process consists of adapting the weights in some manner to minimize the prediction error rate on new examples. Two types of training can be used to adapt the neural network weights: supervised and unsupervised learning. In supervised learning, a set of examples with known outputs is used in network training; it is expected that the network adaptation function will acquire a structure that can infer the outputs of new examples.

In unsupervised learning, the outputs of the training samples are not known; instead, the network must accumulate knowledge about the properties of the data and reflect this knowledge in its predicted outputs. When the output of a new input sample is to be predicted, the neural network uses some of the data properties it has learned to predict that output. Each processing unit in the network applies a sigmoid activation function, shown in formula (1); the learning algorithm then adjusts the weights to minimize the prediction error rate.

F(x) = \frac{1}{1 + e^{-x}} \qquad (1)

The goal of the different supervised learning algorithms is to optimize the error function. The value of this error function is half the sum of the squared differences between the actual values and the predicted values. The prediction error is shown in formula (2), where E_j, d_j, and a_j represent the error, the actual (target) value, and the predicted value of neuron j:

E_j = d_j - a_j \qquad (2)

The total error value is given in formula (3), where E is the total error and the outer sum runs over the p training patterns:

E = \frac{1}{2} \sum_{p} \sum_{j} (d_j - a_j)^2 \qquad (3)
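As an illustration, formulas (1)-(3) can be written directly in code. This is a hypothetical Python rendering (the paper's experiments used Matlab), with D and A standing for arrays of target and predicted outputs:

```python
import numpy as np

def sigmoid(x):
    """Formula (1): the sigmoid activation applied by each processing unit."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron_error(d_j, a_j):
    """Formula (2): error of neuron j, target value d_j minus predicted value a_j."""
    return d_j - a_j

def total_error(D, A):
    """Formula (3): half the sum of squared neuron errors over all patterns.

    D and A are arrays of shape (patterns, output_neurons) holding the target
    and predicted output values, respectively.
    """
    return 0.5 * float(np.sum((np.asarray(D) - np.asarray(A)) ** 2))
```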

Our initial step in the proposed approach is to build the structure of the neural network, which we use later for our short-term and long-term prediction experiments. For short-term prediction, we used a network with 5 inputs, 10 nodes in the hidden layer, and one output. The structure of the NN we created is shown in Figure 2, where each node in the input layer represents the input data of a single day. The inputs of this network are the closing prices of 5 days, used to predict the closing price for the next n days. The number of inputs is the same for long-term prediction, i.e., 5 inputs; in that case the network takes the input of 5 weeks and predicts the average closing price for the next n weeks.
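A minimal sketch of this 5-10-1 structure as a forward pass is given below. It is a hypothetical Python illustration (the study built and trained the network in Matlab); the weights are randomly initialized here only to show the shapes involved, and the five inputs are assumed to be normalized closing prices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))       # activation of formula (1)

rng = np.random.default_rng(0)

# 5 inputs (closing prices of 5 days or weeks), 10 sigmoid hidden units,
# 1 output (the predicted closing price). Weights are shown untrained.
W1, b1 = rng.normal(scale=0.1, size=(5, 10)), np.zeros(10)
W2, b2 = rng.normal(scale=0.1, size=(10, 1)), np.zeros(1)

def forward(five_prices):
    """Propagate five normalized closing prices through the 5-10-1 network."""
    hidden = sigmoid(five_prices @ W1 + b1)
    return (hidden @ W2 + b2).item()      # predicted closing price

example_prediction = forward(np.array([0.51, 0.52, 0.50, 0.53, 0.54]))
```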

Date        Open     High     Low      Close    Volume      Adjusted Close
01/04/2011  1329.48  1337.85  1328.89  1332.41  4223740000  1332.41
31/03/2011  1327.44  1329.77  1325.03  1325.83  3566270000  1325.83
30/03/2011  1321.89  1331.74  1321.89  1328.26  3809570000  1328.26
29/03/2011  1309.37  1319.45  1305.26  1319.44  3482580000  1319.44
28/03/2011  1315.45  1319.74  1310.19  1310.19  2465820000  1310.19
25/03/2011  1311.80  1319.18  1310.15  1313.80  4223740000  1313.80

Table 1. S&P 500 Training Dataset (sample)
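As a sketch of how rows such as those in Table 1 can be turned into training instances, the snippet below slides a 5-value window over the adjusted close series and pairs it with the price a chosen horizon ahead (one step for the daily, short-term case; a longer horizon for the weekly, long-term case). The file name, column labels, and normalization are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Hypothetical CSV export of the S&P 500 history (file name and column
# names are assumptions; compare the attribute layout in Table 1).
history = pd.read_csv("sp500_2008_2011.csv", parse_dates=["Date"])
history = history.sort_values("Date")

prices = history["Adjusted Close"].to_numpy(dtype=float)
prices = prices / prices.max()            # simple scaling for the sigmoid units

def make_windows(series, n_inputs=5, horizon=1):
    """Build (inputs, target) pairs: n_inputs consecutive prices predict
    the price `horizon` steps ahead."""
    X, y = [], []
    for start in range(len(series) - n_inputs - horizon + 1):
        X.append(series[start:start + n_inputs])
        y.append(series[start + n_inputs + horizon - 1])
    return np.array(X), np.array(y)

# Example: a 25-day training period for next-day (short-term) prediction.
X_train, y_train = make_windows(prices[:25], n_inputs=5, horizon=1)
```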


[Charts for Figures 3 and 4: prediction error (y-axis) versus NN training time in seconds (x-axis: 10, 15, 20, 25). Figure 3 plots the short-term prediction error for Day 1 through Day 5; Figure 4 plots the long-term prediction error for Week 1 through Week 5.]

Hence, the output of the network is the predicted closing price for a specific day or week. The major purpose of the evaluation phase is to measure the effects of increasing the NN training time and the training data period length on the prediction error rate for both long-term and short-term price prediction. We modified the training time for both short-term (next n days) and long-term (next n weeks) prediction, and we also modified the training data period. In the following section we show the experiment settings and results.

4. Experiments and Evaluation

In the experiments and evaluation phase, we performed two experiments on a machine with 512 MB of main memory running the Windows XP SP2 operating system. We used four years of historical data from the S&P 500 index, for the years 2008-2011. We did not use the whole history in each experiment; the reason for selecting this long history is to give some flexibility in selecting different parts of the data history for our experiments. The focus of the experiments is on the closing stock price prediction error rate. We used the resilient algorithm implementation in Matlab in both experiments, with supervised training of a multilayer feed-forward network. In the first experiment, we measured the effect of training time on the error rate of short-term prediction. We trained the neural network using data from 25 days in order to predict the closing price of the next 5 days. During this experiment, we increased the NN training time from 10 seconds to 25 seconds in regular steps, to minimize the expected prediction error, and for each training time we recorded the error rate for all predicted prices of the 5 days. Figure 3 depicts the effect of training time on the neural network's short-term prediction error rate. When the training time is 10 seconds, the average prediction error rate over all predicted prices of the 5 days is 0.052; this average declines to approximately 0.01 when the training time is raised to 25 seconds.
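A sketch of how such a time-budgeted run can be organized is shown below. It is a hypothetical Python outline (the actual experiments used the Matlab resilient training); `train_epoch_fn` and `predict_fn` stand for the training pass and prediction function of whatever network implementation is used, and the 10-25 second budgets mirror the settings of this experiment.

```python
import time
import numpy as np

def train_with_time_budget(train_epoch_fn, budget_seconds):
    """Repeat training passes until the wall-clock budget is spent."""
    deadline = time.time() + budget_seconds
    epochs = 0
    while time.time() < deadline:
        train_epoch_fn()                  # e.g., one resilient-propagation pass
        epochs += 1
    return epochs

def mean_abs_error(predict_fn, X_eval, y_eval):
    """Average absolute prediction error over the held-out days (or weeks)."""
    return float(np.mean([abs(predict_fn(x) - t) for x, t in zip(X_eval, y_eval)]))

# Hypothetical driver: one fresh run per training-time budget, recording the
# error over the next 5 days (train_epoch, predict, X_next5, y_next5 are
# assumed to come from the earlier network and windowing sketches).
# for budget in (10, 15, 20, 25):
#     train_with_time_budget(train_epoch, budget)
#     print(budget, mean_abs_error(predict, X_next5, y_next5))
```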

Figure 3. The Training Time Effect on the Short-Term Prediction Error Rate

An initial analysis of this sharp decline shows that the NN achieved about 0.042 better accuracy in short-term prediction when we increased the training time to approximately 2.5 times the initial training time.

Figure 4. The Training Time Effect on the Long-Term Prediction Error Rate

In the second part of the first experiment we used the same range of training times to measure the decline in the prediction error rate for long-term prediction. The same experimental setting was used, except that the 25 days of data were used to predict the average closing price for the next 5 weeks (i.e., long-term prediction). The long-term prediction error rates for the same range of training times are shown in Figure 4. The results demonstrate the average reduction in prediction error when the prediction is made for one week instead of one day: the error rate is reduced from an average of 0.0585 to 0.0366 when the neural network training time is raised from 10 to 25 seconds, a decline of about 0.0219 in the prediction error rate. This reduction in the long-term price prediction error rate is about 54% less than that of the short-term prediction under the same experimental setting. To illustrate the range of the reduction over all weeks, we also show error bars on the graph. The results again support the general reduction in error rates for the long-term case. Ultimately, the reduction in price prediction error for long-term prediction is smaller than that for short-term prediction.

[Figure 2. The Neural Network Design for the Close Stock Price Prediction: 5 inputs, one hidden layer, and 1 output.]


[Charts for Figures 5 and 6: prediction error (y-axis) versus training period length in days (x-axis: 25, 50, 75, 100). Figure 5 plots the short-term prediction error for Day 1 through Day 5; Figure 6 plots the long-term prediction error for Week 1 through Week 5.]

For instance, in the case of short-term prediction (see Figure 3), the reduction in the prediction error rate for day number 1 is 0.054 - 0.011 = 0.043 when the training time is raised from 10 to 25 seconds. By contrast, the reduction in the error rate in the case of long-term prediction is approximately 0.052 - 0.037 = 0.015 for week 1 (see Figure 4). The results in Figures 3 and 4 support the following conclusion: increasing the NN training time has a greater effect on the prediction accuracy of short-term stock price prediction than on that of long-term prediction.

Figure 5. The Training Period Length Effect on the Short-Term Prediction Error Rate

The second experiment evaluated the effect of the NN training data period length on the performance of both long-term and short-term stock price prediction. To measure the effect of the training data period length (i.e., the training history) on short-term price prediction, we used historical training data of 25, 50, 75, and 100 days to predict the average closing price for the next 5 days. We aimed to measure the effect of the training data size (the training period) on the error rate of short-term prediction, given the same training time. Figure 5 summarizes our findings. The graph shows that the average reduction in the prediction error rate is about 0.0060 when the training data period length is raised from 25 to 100 days. An initial analysis of this reduction indicates that the error reduction in short-term price prediction is very small when the training data period is increased from 25 days to 100 days. When we measure the effects of the training data period on individual days, the results are consistent with the overall average; for day number 1 the reduction in prediction error is only 0.054 - 0.049 = 0.005.

The effect of the training data period on the prediction error rate was also evaluated for long-term stock price prediction. The same experimental settings were used: training data of lengths 25, 50, 75, and 100 days were provided to the neural network, and the effect of the training data period on the price prediction error rates was measured when the prediction is performed over a long term (i.e., weeks). Figure 6 shows the effects of the different training periods on the long-term prediction error rate of the neural network. We trained the neural network created earlier on 25-100 days of data using the same training time (25 seconds), and then used the trained model to predict the average closing price for the next 5 weeks. The average error reduction value is 0.0109, which is better than the error reduction of short-term prediction.
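The second experiment can be sketched in the same style. The outline below is a hypothetical Python illustration that uses scikit-learn's MLPRegressor as a stand-in for the Matlab resilient-propagation training; `prices` and `make_windows()` are the assumed helpers from the sketch following Table 1, and `X_next5w`, `y_next5w` stand for held-out windows and targets for the next 5 weeks.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumes `prices` and make_windows() from the windowing sketch, plus
# held-out arrays X_next5w, y_next5w for the next 5 weeks (hypothetical).
period_results = {}
for period_days in (25, 50, 75, 100):          # training period lengths
    # Weekly (long-term) horizon: 5 past values predict a value 5 steps ahead.
    X, y = make_windows(prices[:period_days], n_inputs=5, horizon=5)
    net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                       solver="lbfgs", max_iter=2000, random_state=0)
    net.fit(X, y)
    period_results[period_days] = float(np.mean(np.abs(net.predict(X_next5w) - y_next5w)))
```

Replacing MLPRegressor with a resilient-propagation trainer under a fixed 25-second time budget would bring the sketch closer to the setting described above.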

Figure 6. The Training Period Length Effect on the Long-Term Prediction Error Rate

The difference in error reduction is 25% better for long-term prediction. The results in Figures 5 and 6 support the following conclusion: long-term stock price prediction accuracy is more positively affected by increasing the size of the training data when NNs are used in the prediction process. When the training data is increased to up to 4 times its initial size, the error reduction grows, resulting in better prediction accuracy. By contrast, we did not find much improvement in prediction accuracy (i.e., reduction in error rate) when we increased the training data period for short-term prediction.

5. Conclusion and Future Work

This paper evaluated the effects of increasing two training factors on the neural network prediction error rate when predicting stock prices over short and long periods. The first factor is the NN training time; our findings show that raising the training time is more effective in minimizing the prediction error rate in short-term stock price prediction, which is normally a prediction over a small amount of time. Although raising the training time also reduces the long-term prediction error rate, this reduction is minor compared with that of the short-term case. The second factor is the length of the NN training data period used as input to the neural network training model. The findings show that, when the training time is fixed, increasing the length of the training period (the training history) has a greater effect on minimizing the prediction error rate in long-term stock price prediction than in short-term prediction. The results of this study open many questions about the effects of different training factors on short-term and long-term price prediction when neural network models are used. The factors were analysed on a dataset of 4 features; however, utilizing more features might have some effects on the NN training factors.


We believe that our evaluation must take more features into account in order to generalize our results. Additionally, we plan to measure the effects of additional factors on the performance of neural networks for stock price prediction.

References

[1] M. Mehrara, A. Moeini, Ahrari, M. Ghafari, Using Technical Analysis with Neural Network for Forecasting Stock Price Index in Tehran Stock Exchange, Middle Eastern Finance and Economics Journal, 6(6), 2010, 51-61.
[2] J.C. Ritchie, Fundamental Analysis: A Back-to-the-Basics Investment Guide to Selecting Quality Stocks (Irwin Professional Publishing, 1996).
[3] G. Box, G. Jenkins, Time Series Analysis: Forecasting and Control (John Wiley & Sons, 2008).
[4] J.J. Murphy, Technical Analysis of the Financial Markets: A Comprehensive Guide to Trading Methods and Applications (New York Institute of Finance, 1999).
[5] E. Hajizadeh, H. Davari, and J. Shahrabi, Application of Data Mining Techniques in Stock Markets: A Survey, Journal of Economics and International Finance, 2(7), 2010, 109-118.
[6] B. Egeli, M. Ozturan, B. Badur, Stock Market Prediction Using Artificial Neural Networks, Neural Computing and Applications, 13(3), 2004, 255-260.
[7] A. Abu Hammad, S. Alhaj, E. Hall, Forecasting the Jordanian Stock Prices Using Artificial Neural Network, University of Cincinnati, Ohio, 2007.
[8] B. Binoy, V. Nair, N. Mohandas, A Genetic Algorithm Optimized Decision Tree-SVM Based Stock Market Trend Prediction System, International Journal on Computer Science and Engineering, 2(9), 2010, 2981-2988.
[9] B. Binoy, V. Mohandas, N. Sakthivel, A Decision Tree-Rough Set Hybrid System for Stock Market Trend Prediction, International Journal of Computer Applications, 6(9), 2010, 1-6.
[10] J. Mandziuk, M. Jaruszewicz, Neuro-Evolutionary Approach to Stock Market Prediction, Proc. International Joint Conference on Neural Networks, Orlando, Florida, USA, 2007, 2515-2520.
[11] P. Patel, T. Marwala, Forecasting Closing Price Indices Using Neural Networks, Proc. IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 2006, 2351-2356.
[12] T. Yamashita, K. Hirasawa, J. Hu, Application of Multi-Branch Neural Networks to Stock Market Prediction, Proc. International Joint Conference on Neural Networks, Montreal, Canada, 2005, 2544-2548.
[13] M.-C. Wu, S.-Y. Lin, C.-H. Lin, An Effective Application of Decision Tree to Stock Trading, Expert Systems with Applications, 31(2), 2006, 270-274.
[14] C. Tsai and S. Wang, Stock Price Forecasting by Hybrid Machine Learning Techniques, Proc. International MultiConference of Engineers and Computer Scientists, Hong Kong, 2009.
[15] J.-Y. Potvin, P. Soriano, and M. Vallée, Generating Trading Rules on the Stock Markets with Genetic Programming, Computers & Operations Research, 31(7), 2004, 1033-1047.
[16] H. Hao, Short-Term Forecasting of Stock Price Based on Genetic-Neural Network, Proc. 6th International Conference on Natural Computation, Yantai, China, 2010.
[17] M. Riedmiller and H. Braun, A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm, Proc. IEEE International Conference on Neural Networks, 1993, 586-591.
[18] Historical Data for S&P 500 Stocks, http://pages.swcp.com/stocks/
