A COMPARISON OF MARITIME TECHNICAL ANALYSIS AND CHAOTIC MODELING IN LONG TERM FORECASTING OF FREIGHT RATE INDICES
Alexandros M. GOULIELMOS (*), former Professor of Marine Economics, [email protected], [email protected]
Constantinos V. GIZIAKIS, Professor of Maritime Economics
Elias KAPOTHANASSIS, PhD candidate
University of Piraeus, Department of Maritime Studies, 80 Karaoli and Dimitriou St., Piraeus 18534, Greece
(*) corresponding author
ABSTRACT
The paper presents the main challenge facing the shipping industry: its inability to produce reliable forecasts. As a result of poor forecasting, the shipping industry has ordered 274 million dwt of dry cargo ships for delivery by 2014. First, we critically examine the available tools developed in academia to predict stock prices and freight rates. These have been developed in two main schools of market modeling: linear and non-linear. Linear tools include such methods as the random walk/martingale and the latest flagship, GARCH/co-integration analysis. Non-linear models include the non-linear deterministic/chaos theory models developed since 1963 on the basis of Mandelbrot's work. Of particular interest is Maritime Technical Analysis, which has its origins in Hampton's 1990 monograph. This method supports the existence of short (3-4 years) and long (16-24 years) shipping cycles, using a chartist approach. Twice since 1981, shipping has experienced dramatic changes in the parameters governing the market, which Mandelbrot has described as the "Joker in the pack". These two violent discontinuities were the energy crisis and the banking crisis. Running traditional and non-traditional econometric tests, such as BDS, on BPI daily and weekly rates for 1999-2011, we found non-normality, long-term correlation and chaotic characteristics (a fractal, low-dimensional chaos). The Hurst exponent is equal to 0.93, indicating black noise. The Lyapunov exponent implies that forecasting beyond 6 weeks is impossible, but we found a shortcut to long-term chaotic forecasting by calculating cycle durations using data from the previous 3 years, 4 to 6 years and 8 to 12 years, using the Vn statistic. Maritime technical analysis, however, provided us with valuable clues, not only for the short-term cycle, which will end in May 2011, but also for the long-term cycle, which will end by 2017.
KEYWORDS
Forecasting; freight rates; chaos; maritime technical analysis; BPI 1999-2011; Vn statistic; H exponent; maritime chartist approach; testing for normality; independence; long-term memory
(BM). In rigorous terms the RW is described as Pt = Pt-1 + at [1], where Pt is the observed price at a given time and at is an independent error term with zero mean. Let a price change be ΔPt = Pt - Pt-1 [2], which is then also an independent error. Moreover, Pt = Σ ai, i = 1…t [3]: prices are thus accumulations of purely random changes. Most of the works using RW were collected by Cootner (1964) and developed in detail by Granger and Morgenstern (1970). The RW model followed the doctoral thesis of Bachelier (1900) in Paris, entitled "The Mathematical Theory of Speculative Prices". He developed many of the mathematical properties of Brownian motion five years before Einstein (1905), who proved that the RW can be expressed as: distance = k√T. The RW model can be tested through the autocorrelation properties of the price changes (Fama, 1965), viewing equation [1] above as "a model of an ARIMA process"5 (Small, 2005; Box and Jenkins, 1976), i.e. a one-variable linear stochastic process. Of course there are also other approaches6 that use linear stochastic models, such as those determining the order of integration. Moreover, relaxing, or rather replacing, the RW hypothesis with that of a martingale (which rules out the dependence of the conditional expectations of changes in future values on the information available today), we can use the ARCH7 models due to Engle (1982), as well as other techniques. The modeling of integrated (financial) time series is very popular
2 A British statistician who took a long look at London shares, NY cotton and Chicago wheat, more than a century of data, in search of conventional patterns upon which an investor could turn an easy buck. 'On the whole', he laconically concluded after pages of fruitless regression analysis, 'I regard this experiment as a failure… There is no hope of being able to predict movements on the exchange'.
3 The term 'Random Walk' appeared in 1905 in a correspondence in Nature between Pearson and Rayleigh: 'The problem of the Random Walk', Nature 72, 294, 318 and 342.
4 This motion is 'the erratic movement of a small particle suspended in a fluid', caused by water molecules colliding with the particle. Einstein proved the relationship between Brownian motion and the random walk. Wiener modeled Brownian motion as a random walk with an underlying Gaussian statistical structure.
5 ARIMA stands for the 'autoregressive integrated moving average' process, i.e. a non-stationary stochastic process related to ARMA; it becomes stationary if differenced d times, where d is an integer. ARMA is a stationary stochastic process mixing AR and MA (moving average) parts. An AR process is a stationary stochastic process in which the current value of the time series is related to its past p values (p an integer): if p = 1 we have AR(1), with infinite memory; if p = 2 we have AR(2), related to the previous 2 values. Small noted that his financial time series exhibited deterministic components, which, however, ARIMA and GARCH assume to be stochastic.
6 One may distinguish between 'integrated' and 'non-integrated' time series. In the maritime economy many believe that one variable (say the freight rate) causes another (like returns/profits or the supply of ship services), and thus co-integration is a commonly tested hypothesis.
7 ARCH stands for the first-order 'autoregressive conditional heteroskedastic' process, very popular for dealing with financial time series. However, it requires the conditional variance to be always positive. 'Autoregressive conditional' means that the changes in variability are controlled by the data's past behavior. The variance is time-varying and conditional on its past values. It exhibits frequency distributions with high peaks at the mean and fat tails.
among maritime economists. Tests for co-integration, and the associated estimation, can be made with the so-called vector error correction model (VECM)8.
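As a purely illustrative aside (not part of the original study), the RW of equation [1] and its testable implication - serially uncorrelated price changes - can be sketched in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a random walk P_t = P_{t-1} + a_t with iid zero-mean shocks a_t.
a = rng.normal(0.0, 1.0, size=5000)
P = np.cumsum(a)                      # prices are accumulations of random changes

# Under the RW hypothesis the price *changes* should show no autocorrelation.
dP = np.diff(P)
lag1 = np.corrcoef(dP[:-1], dP[1:])[0, 1]
print(abs(lag1) < 0.05)               # True: lag-1 autocorrelation is near zero
```

This is exactly the Fama (1965) style of check referred to above: if the estimated autocorrelations of ΔPt differ significantly from zero, the pure RW is rejected.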
4.2 Forecasting efforts by maritime economists
4.2.1 Linear regression models
Forecasting maritime markets, despite its high importance (Stopford, 2009), accounts for few works, unlike maritime modeling. Tsolakis et al. (2003) presented a model to forecast second-hand ship prices from 1999 to 2001, using an AR specification, Xt+1 = μ + ρXt + ηt, where μ is the mean, ηt is a shock at time t, ρ is the autoregressive coefficient and X is the second-hand price. They compared the (in-sample) forecasts of this equation (not shown in their paper) with those of a structural error correction model (SEM), by calculating the mean squared, absolute and percentage errors. They found, strangely, that the two models split success between two different classes of ships: AR/VAR9 performed better for bulk carriers (Handy, Panamax and Cape, plus Handy tankers) and SEM performed better for larger tankers (Panamax, Aframax, Suezmax and VLCC), but they gave no explanation why. They argued that SEM is good for describing and forecasting cycles and for evaluating policies, while VAR is good for other jobs. Batchelor et al. (2007) tested10 the performance of popular time series models (ARIMA and VECM) in predicting (a) spot and (b) forward freight rates.
Their results may be summarized as follows.
Method and set-up: a general VECM and a restricted one, plus SURE, forecasting 10 and 20 steps ahead; best in-sample fit judged by root mean squared error (RMSE); natural logs used; stationarity established by the ADF, Phillips-Perron and KPSS (1992) tests.
Findings: forward rates converge strongly to spot rates; forwards are more volatile than spots and harder to forecast. Out of sample, the ECM fails to predict forward rates, but the spot market is efficient; all models outperform the Random Walk, and forward rates do help to forecast spots. The restricted VECM outperformed the RW, while the VECM11 itself was unhelpful for predicting forward rates. Spot and forward rates co-integrate, and ARIMA or VAR (Sims, 1980) forecast better. ARIMA was less accurate than the RW, and VAR more accurate than the VECM; S-VECM performed better than VAR for spots, but not for forward rates, and S-VECM outperformed ARIMA.
8 VARs containing integrated and co-integrated variables enable us to develop the VECM framework.
9 VAR = Vector AR.
10 Their results are questionable, as they reported significant serial correlation, heteroskedasticity and excess kurtosis in all series; excess skewness in most; and Jarque-Bera departures from normality in all routes, for both spots and forwards.
11 VECM forecasting implies dangers if the underlying market structure is evolving and… it is not robust to structural change, according to the above authors.
4.2.2 Non-linear regression models
Forecasting freight rate markets with nonlinear regression methods occurred only after 1997 (McConville and Rickaby, 1995)12, following the work of Li and Parsons. Li and Parsons (1997) used a nonlinear regression model, namely an artificial neural network. Between 1986 and 1993, considerable attention was paid to developing non-linear regression mathematical models. Li and Parsons used monthly tanker freight rates for 1980 to 1995. Their results were mixed over the forecast horizon: at one month, neural networks were either worse than or equivalent to ARMA; at five months ARMA was worse; at 12 months ARMA was better; and at 24 months the two were equivalent. Lyridis et al. (2004) tried to forecast VLCC spot rates from 1979 to 2002 using neural networks, believing them more suitable for non-stationary, non-linear time series. Their forecasts, though better than those of Li and Parsons, showed deviations from actual values of between 30 and 60 Worldscale units for one to 12 months ahead.
4.2.3 Chaotic time series
Chaotic13 methods were first applied to maritime forecasting by Goulielmos and Psifia (2009) and Goulielmos (2009, 2010, 2011), followed by Thalassinos et al. (2009). Goulielmos and Psifia (2009) tried to forecast the one-year weekly time charter of a 65,000 dwt bulk carrier, from 1989 to 2008, using the non-linear methods of principal components and kernel density estimation (KDE). Out-of-sample forecasts, tested against actual values after the rates were published, deviated by between $28 and $623 per week, i.e. 0.97% on a total of about $60,000. Moreover, Goulielmos (2009, p. 345) used 543 values of the BPI to predict 6 weeks ahead using the non-linear methods of OLS (Farmer and Sidorowich, 1987) and KDE. The in-sample deviations varied from 2.26% to 16.31% (i.e. $979) on a weekly freight rate of about $6,000, for August 26th, 2008 to September 29th, 2008. The prediction for September 29th, 2008 was $2,839 against an actual value of $2,376, compared with $4,885 a week previously, indicating that the crisis was coming. So the methods captured the fall of the freight market. In addition, Goulielmos (2010) used historic data on second-hand ship prices to predict prices for the next 12 months. Five non-linear methods were tested and three finally applied simultaneously. He aimed at in-sample deviations of less than 1%, and achieved them: predicted prices varied from $64.5 million to $70.5 million (2006-2007) and differed little from actual values (known afterwards, when the figures were published) of $64.4 million to $70.82 million, which is to say they were very close. Thalassinos et al. (2009) tried to predict Aframax tanker rates (105,000 dwt, double hull) on a 401-week charter rate index, using the False Nearest Neighbours (FNN) method for 30 steps ahead. Data ran from the end of March 2000 to the end of November 2007. The deviations achieved were from $4.2 to $1,789.2 per week for k = 10 and 20 time steps, on about $33,000, and $1,737 for k = 30, with a maximum relative error of about 5.11%. In absolute terms these predictions were almost twice as bad as those of Goulielmos (2009). Moreover, the method used belongs to the so-called geometrical methods, which serve to find cycles and system dimensions and to test for determinism. System dimensions must first be calculated accurately by other methods, such as the correlation dimension, and then confirmed by FNN.
12 Out of 1,750 authors, they found 9 who had studied forecasting demand between 1983 and 1993 and 12 who had studied forecasting international trade between 1971 and 1993, including M.Sc. and Ph.D. theses and studies by organizations like Drewry. It seems that none of these 21 studies used nonlinear techniques. Among the topics were: expectations (Wright, 1993), autoregressive modeling (1983), a PhD (1981) at the University of Liverpool on forecasting charter rates, and Hampton's 3rd edition (1991).
13 The case where stochastic behavior appears in deterministic systems (Royal Statistical Society conference, UK, 1986). Chaos has the properties of sensitivity to starting conditions and the existence of an attractor.
5. ANALYSIS I: THE ‘POKER’ AND THE ‘JOKER’ EFFECT
5.1 The poker effect
Stopford (2009, p. 738-742) described four sets of shipping forecasting problems: (1)
dealing with behavioral variables, (2) applying wrong model specifications and
assumptions, (3) difficulty in monitoring results, and (4) difficulty escaping from the
present. He wrote (p. 700): "Shipping investors know very well that they are not dealing with certainty. In fact they are in much the same position as a poker player making an educated guess about his opponent's cards. The poker player knows he cannot identify the hand exactly... But a professional uses every scrap of information to make an educated guess about the range of possible hands. Although he will often be wrong, over a period of time this information helps him to come out ahead. Shipping investors play the odds in much the same way - they know they will not win every hand - but they also know that the right information plays an essential part in narrowing the odds." There are truths and mistakes in this statement. In effect, Stopford (2009) recommends guessing based on the best available information. We believe that maritime forecasting is possible, but that it is not a poker game. It is a "joker game": an opponent's cards can be predicted, but the arrival of the joker cannot.
5.2 The ‘Joker’ effect
To describe the joker effect, we reproduce the process followed by Hurst (1951). First, simulate a random process using a deck of 52 cards containing the numbers ±1, ±3, ±5, ±7 and ±9. The deck is repeatedly cut, and shuffled after each cut. The number of the card appearing at each cut is written down, and a sequence of 1,000 is recorded. Hurst found that such sequences show an RW (established by a method he developed, called Rescaled Range Analysis - RRA14). Hurst, however, was interested in a biased RW. To simulate one, he shuffled the deck and cut it once, noting the number, which might be +3. He replaced the card, reshuffled, and dealt two decks of 26 cards (A and B). He then moved the 3 highest cards from A into B, and removed the 3 lowest cards from B. By this method he created a bias of +3 in deck B. He also placed a joker in B and reshuffled. Deck B is now a biased time-series generator, and is used until the joker is cut, whereupon the pack is re-biased. Hurst ran 1,000 trials of 100 hands. The resulting time series showed persistence15, i.e. a biased RW. The cuts of the deck were random, the generation of the time series was random and the appearance of the joker was random, but still the series showed persistence. It seemed that local randomness existed within a global structure. In terms of forecasting, the trends of the time series persist until an economic equivalent of the joker arises to change the persistence/bias in magnitude, in direction, or in both. We subscribe to Hurst's theory and believe that the above paragraph describes our industry more accurately, as the experience of shipping cycles has taught us. This means that forecasting maritime markets presents no special difficulty, invalidating Stopford's argument that it is a poker game; forecasting the arrival of the joker, however, has so far been impossible16.
14 A method developed by Hurst to determine long-memory effects and fractional Brownian motion.
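Hurst's card experiment can be simulated in a few lines. The exact composition of his 52-card deck and his re-biasing procedure are simplified here (our assumption, not Hurst's precise protocol); the point is only that random cuts of a biased deck, reset at random joker arrivals, generate a persistent series:

```python
import random

random.seed(1)
values = [v * s for v in (1, 3, 5, 7, 9) for s in (-1, +1)]
deck = values * 5 + [1, -1]          # 52 cards; the composition is our assumption

series, level = [], 0.0
bias = random.choice(deck)           # the first cut fixes the initial bias
hand = deck + ["JOKER"]              # deck B carries a joker
for _ in range(1000):
    card = random.choice(hand)       # cut the reshuffled deck
    if card == "JOKER":
        bias = random.choice(deck)   # joker cut: re-bias the generator at random
        continue
    level += card + bias             # biased increments -> locally persistent trends
    series.append(level)

print(len(series))                   # most of the 1,000 cuts are ordinary cards
```

Locally the cuts are pure chance; globally the running bias imposes the trends that RRA detects as persistence.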
6. ANALYSIS II: MODERN FINANCE
What alternative methods do we have to reduce risk? Besides those already described, there is modern finance, which came after FA and TA, based on the mathematics of chance and statistics: it accepts that prices are not predictable, although their fluctuations can be described by the mathematical laws of chance, so that risk is measurable and manageable. The discipline started in 1900 with the work of the French mathematician Louis Bachelier, who studied financial markets in his doctoral thesis. It drew upon Pascal (1623-1662) and Fermat (1601-1665), who invented probability theory. Bachelier (1900) passed over FA and established the RW model. It postulated that prices will go up or down with equal probability, and that their variation is measurable. Most changes fit a bell-shaped histogram: 68% of moves are small, within one standard deviation (1σ) of the mean; 95% are within 2σ and 99.7% within 3σ. Extremely few changes are very large. The numerous small changes cluster in the center of the bell; the rare big changes sit in the edges, or tails, of the distribution. Mathematicians call this bell curve 'normal'; it was first described by the German Gauss (1809). As shown by Goulielmos et al. (2010, p. 115), however, the dry cargo freight rate index between 1914 and 2008 exceeded changes of 3σ on many occasions: 4.57σ in 1914, 3.45σ in 1920, 3.08σ in 1960, -3.28σ in 1970, 4.9σ in 1972, -4.88σ and 7.04σ in 1973, 4σ and -3.55σ in 1974, 5.23σ in 2004 and -9.56σ in 2008. Such large changes are much more common than theory suggests: eleven times the maritime markets moved outside the bounds set by the normal distribution, where each such move has a probability of less than 0.3%. This was totally unexpected. The joker appeared unpredictably in 1973, in the form of the energy crisis, and in 2008, in the form of the banking crisis. Bachelier's 1900 doctoral thesis was translated into English in 1964. A more general statement of Bachelier's thinking goes by the name of the efficient market hypothesis (Fama, 1991): in an ideal market, all relevant information is already priced into a security today. Yesterday's change does not influence today's, nor today's tomorrow's, and each price change is independent of the last.
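The mismatch between the bell curve and the recorded moves is easy to quantify. Assuming one observation per year for 1914-2008 (an assumption for illustration; the index frequency is not restated here), normality would predict well under one move beyond 3σ, against the eleven listed above:

```python
from math import erf, sqrt

def p_within(k):
    # Probability that a normal draw lies within k standard deviations of the mean.
    return erf(k / sqrt(2))

print([round(p_within(k), 4) for k in (1, 2, 3)])  # [0.6827, 0.9545, 0.9973]

# Expected number of |moves| beyond 3 sigma among 95 yearly observations:
expected = 95 * (1 - p_within(3))
print(round(expected, 2))                          # about 0.26, versus 11 observed
```

On Gaussian assumptions an event like the -9.56σ move of 2008 should essentially never occur, which is the paper's point about fat tails.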
The critical assumptions of Bachelier's model were: (1) that price changes are statistically independent, and (2) that they are normally distributed. This, however, did not describe reality, especially during the 1990s. As Mandelbrot and Hudson (2004) argued, many financial price series have a memory, and today does in fact influence tomorrow. Large changes in prices today are likely to be followed by large changes the next day, and the reverse is also true. There is no well-behaved, predictable pattern in prices, nor a periodic up-and-down procession from boom to bust in business cycles. There is a long-term memory, and price time series have different degrees of memory. This makes the RW model inapplicable.
15 This is the tendency of a time series to follow trends: a rise yesterday raises the probability of a rise the next day, and vice versa. It is attributed to the memory existing in the series, which is long and is called black noise. This memory is also called long-term correlation, and is not taken into account in RW and AR models.
16 This further means that we need a forecasting model that may start as a deterministic model with persistence covering many decades, next incorporate the forecasting of the joker, which is a random phenomenon, and then continue with the same deterministic model. In fact, we believe, we need a model that forecasts life, which alternates between events of determinism and events of randomness. This is left for future research, perhaps based on the late Prigogine's work in physics.
7. ANALYSIS III: HERETIC FINANCE17 AND CHAOTIC TIME SERIES
7.1 Introduction
In this part we will examine the tools that chaos theory provides for solving the forecasting problem of maritime indices, especially in the long run.
7.2 Financial versus maritime time series
7.2.1 Stationarity.
We make here the assumption that maritime time series resemble financial time series (Jing et al., 2008; Chen et al., 2010). As financial time series are non-stationary, we assume the same for maritime time series, and for this reason we apply a filter to transform them into stationary series. Following the suggestion of Peters (1994), we use first logarithmic differences. In so doing we sacrifice one observation.
7.2.2 Normality.
Financial time series are usually tested for normality, as their mean and variance may be time-dependent. In effect, if the variance is time-varying, then risk, and the management of it, differ from the case where the variance is constant. In our case we calculated the Jarque-Bera (JB) test of normality18. The JB statistic was found to be 866 (rounded) > 5.99, the 5% critical value for 2 degrees of freedom. The data, therefore, are not normally distributed. This conclusion is in line19 with what we have described above (Goulielmos, 2010).
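A minimal version of the JB computation of footnote 18 can be sketched as follows (the BPI data themselves are not reproduced here, so a synthetic series of the same length stands in):

```python
import numpy as np

def jarque_bera(x):
    # JB = (n/6)*S^2 + (n/24)*(K - 3)^2, with S the sample skewness and K the
    # sample kurtosis; under normality JB ~ chi-square(2), 5% critical value 5.99.
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = x - x.mean()
    s2 = (z ** 2).mean()
    S = (z ** 3).mean() / s2 ** 1.5
    K = (z ** 4).mean() / s2 ** 2
    return n / 6.0 * S ** 2 + n / 24.0 * (K - 3.0) ** 2

# A platykurtic stand-in series of the same length as the BPI sample (2,815):
x = np.linspace(0.0, 1.0, 2815)
print(jarque_bera(x) > 5.99)    # True: normality is rejected, as for the BPI
```

Any pronounced skewness or departure of kurtosis from 3 inflates JB far beyond 5.99 at this sample size, which is why the BPI's JB of 866 is so decisive.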
7.2.3 BDS/IID test.
A further test is that of statistical independence, which indicates whether there are linear or non-linear dependencies in the data. We skip this stage and go directly to a test for non-linear independence, which tells us whether our data can be described by a non-linear model, using the BDS statistic (Brock, Hsieh and LeBaron, 1991) as a diagnostic test for non-linear dependence in time series. It is a powerful test of whether the iid hypothesis stands against a linear, chaotic or stochastic non-linear alternative. Running this test in MATLAB, we derived the four values of the BDS for embedding dimensions 2, 3, 4 and 5 with e/σ = 1.5 (suitable for low dimensions), where σ is the standard deviation of a normally distributed sample and e is the dimensional distance. The values are: 10.0113, 11.8654, 13.1226 and 13.8816. For a sample size n = 500 (near our sample of n = 573), the critical values at the 99.5% confidence level are 2.80, 2.86, 2.86 and 2.94, from the table due to Kanzler (1999). Consequently, our time series are not iid.
17 We call heretic finance the school of finance developed by Mandelbrot since 1951, based on the biased RW.
18 Kurtosis of the non-stationary BPI data (2,815 daily time charter rates) is 1.670856 and skewness is 1.404267. JB = (2815/6)(1.404267)^2 + (2815/24)(1.670856 - 3)^2.
19 Almost all maritime economists have found excess kurtosis and excess skewness in their models, but provided no remedies.
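The quantity underlying the BDS statistic is the correlation integral C_m(e): the fraction of pairs of m-dimensional histories lying within distance e of one another. For iid data C_m(e) ≈ C_1(e)^m, and BDS measures the standardized departure from that equality. The building block can be sketched as follows (a simplified illustration, not Kanzler's full implementation with its variance estimator):

```python
import numpy as np

def correlation_integral(x, m, eps):
    # C_m(eps): share of pairs of m-histories within max-norm distance eps.
    X = np.column_stack([x[i:len(x) - m + 1 + i] for i in range(m)])
    n = len(X)
    d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
    iu = np.triu_indices(n, k=1)
    return np.mean(d[iu] < eps)

rng = np.random.default_rng(0)
x = rng.normal(size=500)              # an iid stand-in series
eps = 1.5 * x.std()                   # the e/sigma = 1.5 choice used in the text
c1 = correlation_integral(x, 1, eps)
c2 = correlation_integral(x, 2, eps)
# For iid data c2 should be close to c1**2; a chaotic or dependent series
# would push c2 well above c1**2, which the BDS statistic standardizes.
print(round(c2 - c1 ** 2, 3))
```

For the BPI the standardized departures (10.01 to 13.88) vastly exceed the Kanzler critical values, hence the rejection of iid.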
7.2.4 Efficient market hypothesis.
The efficient market hypothesis states that all available information is directly and fully reflected in prices/time charters. Samuelson (1965) and Mandelbrot (1966) both demonstrated that if transaction costs are zero and investors have easy access to quickly diffused information, then we may use an RW model. Fama (1965) classified market efficiency into three levels: weak, strong and semi-strong. Moreover, Fama (1991), with other financial analysts, argues that returns are characterized by short-term memory or short-term autocorrelation (not long-term dependence). Among the tests for the existence of a memory, and for its length, is Rescaled Range Analysis (RRA), first described by Hurst (Hurst, 1951; Mandelbrot, 1972; Greene and Fielitz, 1977; Feder, 1988; Peters, 1994).
7.2.4.1. A memory test.
Einstein (1905) proved that the RW model obeys the law20 R = k√T [1], where R is the distance covered, k is a constant and T is the time index. Hurst (1951) proposed a generalization of Einstein's formula (Peters, 1994; Steeb, 2008), which can then be applied to a broader class of time series, including the RW: R/S = k·T^H [2], where R/S is the rescaled range21 (Mandelbrot, 1951-1977), i.e. the range (maximum less minimum value of the time series) divided by the (local) standard deviation, T is the number of observations, k is a constant of the time series and H is the Hurst exponent (a power law), with 0 ≤ H ≤ 1. If H = 1/2 the series is independent and follows an RW, white noise or BM.
Moreover, BM can now be generalized to fractional Brownian motion (FBM) using formula [2]. In this paper H was found (Mandelbrot, 1971; Peters, 1994)22 to be equal to 0.93 (rounded) at n ≥ 14, using 2815−1 daily time charter rates for the BPI, 1999 to 2011. These and other calculations were made using the computer program NLTSA V.2.0. H is estimated from the regression log(R/S) = log(k) + H·log(T) [3], which is equation [2] after taking logs. This H reveals black noise23 in the series, and a speed higher than that of the RW.
20 This law expresses the situation where the distance covered by a particle undergoing random collisions from all sides is directly related to the square root of time.
21 A dimensionless ratio that scales by H as time increases. Rescaling permits the comparison of distant observations and can be used for time series with no characteristic scale. It obeys fractal geometry.
22 Moreover, we applied the Hurst process to first differences, as suggested by Mandelbrot, to avoid obtaining values of H → 1; we found no differences from the values obtained with the first logarithmic differences suggested by Peters.
23 The characteristics of such a time series are: (a) it follows trends, (b) it has memory and (c) its observations are correlated no matter how distant they are from one another.
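The R/S procedure behind equations [2] and [3] can be sketched as follows (a simplified estimator, not the NLTSA implementation; the window sizes are illustrative):

```python
import numpy as np

def rescaled_range(x):
    # R/S for one window: range of the cumulative mean-adjusted sums,
    # divided by the window's standard deviation.
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std()

def hurst(x, sizes=(16, 32, 64, 128, 256)):
    # Average R/S over non-overlapping windows of each size T, then fit
    # log(R/S) = log(k) + H*log(T), equation [3] of the text.
    rs = [np.mean([rescaled_range(x[i:i + n])
                   for i in range(0, len(x) - n + 1, n)]) for n in sizes]
    H, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return H

rng = np.random.default_rng(0)
white = rng.normal(size=4096)             # independent increments: H near 0.5
levels = np.cumsum(rng.normal(size=4096)) # strongly persistent levels: H near 1
print(round(hurst(white), 2), round(hurst(levels), 2))
```

A value such as the BPI's H = 0.93 sits at the persistent end of the scale, far from the 0.5 of an independent series (small-sample R/S estimates are known to be biased slightly upward, one reason the first-difference check of footnote 22 matters).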
Moreover, there is a 93% probability that the series will increase in the immediately following period (2011+), the 'Joseph effect', invalidating the efficient market hypothesis. As shown in Figures 6 and 10, the series also has the potential for sudden catastrophes (the 'Noah effect'). The series is not independently distributed, but biased. This bias is due to the reactions of ship-owners to current market conditions, using information about the current order book and ship deliveries released by shipping statistical houses and shipyards.
7.2.4.2. Further information from H.
H gives us further information about the BPI data, 1999 to 2011. The fractal dimension of the probability space of the BPI is FD = 1/H = 1/0.93 = 1.075 (rounded) < 2 (2 being the value for an RW). Also, the fractal dimension of the time series equals FDTS = 2 − H = 2 − 0.93 = 1.07, which is < 1.50. The rate of decay of the Fourier series is RDFS = 2H + 1 = 2·0.93 + 1 = 2.86; the spectrum is denoted by 1/f^(2H+1). The autocorrelation function (measuring the covariance of a data series with itself) at some time lag τ is given, for a fractional Gaussian noise process with H ≠ 1/2 and large τ, by C(τ) ≈ H(2H−1)·τ^(2H−2) [4], or C = 2^(2H−1) − 1. A random walk requires C(τ) = 0 (except at τ = 0). Our result therefore indicates long-term memory effects, and the correlations are positive for H > 1/2 (persistence). Mandelbrot, Taqqu and Wallis (1969, 1979) showed the power of the nonparametric RRA for determining long-run memory and dependence, even when time series are not Gaussian and exhibit excess kurtosis and excess skewness, as is common in maritime time series (Jing et al., 2008). Moreover, there is a relationship between H and the integration coefficient d of ARFIMA models: d = H − 1/2; thus if 0 < d < 1/2, the ARFIMA process is stationary and possesses long-term memory. Here d = 0.93 − 0.50 = 0.43 < 0.50 (Siriopoulos, 1998)24. The mean orbital period25 is 14 weeks (3.5 months), and this indicates a non-periodic cycle.
7.3. Testing for the existence of chaotic dynamics
7.3.1. Introduction
We know that financial time series are good candidates for chaotic analysis, and we believe that the same is true of maritime time series. Even if a time series looks random (e.g. Figures 6 and 10), it might still be deterministic (Siriopoulos, 1998). The new element since 1985, and especially since 1987-89, however, is that chaotic signals must be analyzed in the state space of the system (phase space26). Moreover, simple non-linear models, though free of autocorrelation, can exhibit strong non-linear dependence (Granger and Andersen, 1978). This is not new: Mandelbrot (1963) argued that in uncorrelated returns, large variations tend to be followed by large variations. This
24 In the General Index of the Athens Stock Exchange, d = 0.30.
25 A full orbital period is equal to a cycle.
26 A graph that shows all possible states of a system.
led Engle (1982) and Bollerslev (1986) to the ARCH and GARCH27 models respectively (dealing with a time-varying variance), but certain anomalies still remain.
7.3.2 Chaotic systems defined
A chaotic system (or a discrete chaotic time series) is a system exhibiting sensitivity to initial (starting) conditions, while looking random or irregular. Depending on the dimension of the chaos, such a system can be predicted. What we need to know is: (a) whether there is sensitivity to starting conditions, and (b) the dimension of the system. To identify a chaotic time series, we must first construct a pseudo-phase space containing all possible states of the system, using the time-delay method (Packard et al., 1980).
7.3.3. The system’s dimension28
Using one of the five available methods, namely the correlation dimension, we find the system's dimension. The method is due to Grassberger and Procaccia (1983). There is an embedding dimension at which the correlation dimension stabilizes as the embedding29 dimension increases; this dimension also applies to the relevant attractor30 of the system. Here, as shown in Figure 4, the correlation dimension cd equals 1.27, which is less than 2 and fractal, at an embedding dimension of 18, calculated with the computer program NLTSA (2000). White noise does not permit stabilization anywhere. The value 1.27 also satisfies the Eckmann and Ruelle31 (1992) rule that cd ≤ 2·log10(2815) = 6.9. The time delay32 is set equal to 1, as suggested by Siriopoulos and Leontitsis (2000). So the BPI 1999-2011 daily time charter rate series is chaotic, with a dimension of (rounded up) 2, which is less than 10, i.e. a low-dimensional and predictable chaos.
Figure 4: System dimension: correlation dimension versus embedding dimension
Source: Excel and NLTSA V.2.0 (2000)
27 GARCH stands for Generalized (broadened to accommodate more circumstances than the 1982 ARCH) Autoregressive (AR) Conditional (the changes in variability are controlled by the data's own past behavior) Heteroskedasticity (the data's variability changes with time). It is a set of statistical tools for modeling data.
28 To analyze chaos in financial time series there are two approaches: (1) the dimension approach, used here, and (2) artificial neural networks.
29 The idea is to embed the time series in successive dimensions until we see (if the dimensions are ≤ 4) a clear picture of the object (the attractor) that is formed. This is done using as few equations as possible.
30 An object expressing the equilibrium level of the system.
31 'Fundamental limitations for estimating dimensions and Lyapunov exponents in dynamical systems'.
32 Time delay is a method of creating another variable for each dimension when we have only one variable.
[Figure 4 plot: correlation dimension (y-axis, 0–1.4) against embedding dimension (x-axis, 0–25)]
(a) The Lyapunov exponent [33]: the Lyapunov exponent can then be calculated. We use the method developed by Kantz (1994), which is considered robust and is not influenced by the embedding dimension chosen. The attractor has a dimension of 1.27. Siriopoulos and Leontitsis (2000) have argued that the embedding dimension must be twice that, i.e. 2.54, while according to Takens [34] (1981) it must be no greater than 4: m ≥ 2 × 1.27 + 1 = 3.54 → 4. NLTSA gives us the values of the time evolution S(a) and the evolution a for the first 5 values of a, including 0. The maximum Lyapunov exponent is given by the slope of the curve in Figure 5, which is determined by S(a) = λ·a + c, where λ is the exponent and c is a constant.
Figure 5: Lyapunov exponent given by the slope of S(a) = λ·a + c
Source: Excel and NLTSA
As shown in Figure 5, the Lyapunov exponent is equal to the slope of the curve, i.e. 0.1594, where the error [35], given by R² = 0.9902, is small in the region where a lies in the closed interval [0, 5]. The exponent is positive, and allows us to forecast 1/0.1594 = 6.27 → 6 days ahead, which is a very short period.
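The horizon computation above can be reproduced from the fitted line; the S(a) points below are hypothetical values placed exactly on the reported regression y = 0.1594a + 10.521, not the actual NLTSA output:

```python
import numpy as np

# Illustrative S(a) values on the paper's fitted line (hypothetical points).
a = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
s = 0.1594 * a + 10.521

def lyapunov_and_horizon(a, s):
    """Fit S(a) = lambda*a + c; the slope lambda is the maximum Lyapunov
    exponent and 1/lambda the forecasting horizon (in sampling units)."""
    lam, c = np.polyfit(a, s, 1)
    return lam, 1.0 / lam

lam, horizon = lyapunov_and_horizon(a, s)  # horizon ≈ 6.27 days
```

The reciprocal of the exponent, about 6.27, matches the 6-day forecasting limit reported in the text.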
There are three further options for extending the prediction period: (1) to turn to MTA, which we do in Analysis IV; (2) to transform the data into a lower frequency, i.e. into weeks; and (3) to calculate cycles. Turning the data into weeks, we obtain Figure 6. This exhibits the self-similarity property [36] of time series, as can be seen by comparing Figure 6 with Figure 10: the two figures are the same under scale.
[33] A measure of the dynamics of an attractor. If positive, it measures sensitive dependence on initial conditions.
[34] 'Detecting strange attractors in turbulence', Lecture Notes in Mathematics 898, Springer-Verlag.
[35] Where R² must be ≥98%.
[36] Frequency distributions are self-similar if, after an adjustment for scale, they are much the same shape.
[Figure 5 plot: S(a) (time evolution, y-axis 10.4–11.4) against evolution a (x-axis 0–6); fitted line y = 0.1594x + 10.521, R² = 0.9902]
Figure 6: BPI per week, 1999-2011, 573 weeks
Source: data and Excel, transforming 2815 days into weeks
Passing from days to weeks, the degree of persistence falls from H = 0.93 to H = 0.735 for n = 10 and for a total of n = 573 − 1 weeks. The mean orbital period is also reduced, to 2.5 months. Following the same steps as above, we found a dimension equal to 1.22 at an embedding dimension of 12. The dimension is again fractal, less than 2, and low. The new Lyapunov exponent is 0.1709 at R² = 97.3%. The prediction period is now equal to 1/0.1709 = 5.85 → 6 weeks, at a slightly higher error of +0.7%. This is an improvement, giving predictability for 6 weeks rather than 6 days, but the prediction period is still short. We turn now to the third option.
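The persistence values H quoted above (0.93 daily, 0.735 weekly) come from rescaled-range (R/S) analysis. A minimal sketch, assuming non-overlapping windows and a simple least-squares fit (not the NLTSA implementation):

```python
import numpy as np

def rescaled_range(x):
    """R/S of one window: range of the cumulative, mean-adjusted sum,
    divided by the window's standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    s = x.std()
    return (z.max() - z.min()) / s if s > 0 else 0.0

def hurst(x, window_sizes):
    """Estimate H as the slope of log (R/S)_n against log n, averaging
    R/S over non-overlapping windows of each size n (Peters, 1994)."""
    logs_n, logs_rs = [], []
    for n in window_sizes:
        k = len(x) // n
        if k < 1:
            continue
        rs = np.mean([rescaled_range(x[i * n:(i + 1) * n]) for i in range(k)])
        logs_n.append(np.log(n))
        logs_rs.append(np.log(rs))
    h, _ = np.polyfit(logs_n, logs_rs, 1)
    return h
```

For uncorrelated white noise this estimator returns H close to 0.5 (slightly above, owing to the well-known small-sample bias of R/S), whereas persistent black-noise series such as the BPI yield H well above 0.5.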
7.4. Non-periodic shipping cycles
To locate cycles we divide our weekly data of n = 572 weeks into groups, so that N (the number of groups) times A (the group length) equals n. We chose n = 560, for which we found 14 integer divisors that divide it exactly. We plotted log (R/S)n and log E(R/S)n against log n to detect cycles (Peters, 1994). The plot (not shown here) gave four non-periodic cycles: (1) at 16 weeks (4 months), (2) at 35 weeks (about 9 months), (3) at 80 weeks (20 months) and (4) at 140 weeks (35 months). This 35-month cycle is very close to the cycle of 3-4 years found by MTA (Analysis IV).
Peters (1994) suggested another, more accurate, method to calculate cycles: the statistic Vn = (R/S)n / √n (Figure 7).
[Figure 6 plot: BPI weekly values (y-axis, 0–100,000) against week number (x-axis, 0–700)]
Figure 7: Vn versus log n for BPI 1999-2011 (weekly rates)
Source: Excel and NLTSA
The Vn statistic shows clearly four cycles in the period 1999-2011: (1) at 16 weeks (4 months), (2) a new one at 35 weeks (about 9 months), (3) at 80 weeks (20 months) and (4) at 140 weeks (35 months). The criterion used for finding a cycle is that a cycle exists where the Vn curve flattens out, called a break (Peters, 1994, pp. 92-93). The data used does not allow us to identify [37] any 20-year cycles, as the data collection period is shorter than that. To overcome this difficulty, we examined 61 years, from 1941 to 2007, for the index and found H3 = 0.64 at n = 10, and three cycles (Stopford, 2009) [38], this time of 6, 12 and 30 years. Moreover, for 259 years (1741-2008), H4 was higher, equal to 0.696 at n = 10, and the cycles were 6 years and 129 years. These results indicate that shipping cycles differ from period to period and from index to index, making prediction even more difficult. The positive element from the above analysis is that certain cycle durations appear again and again. This is unlikely to be mere chance.
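The Vn statistic is straightforward to compute once R/S is available; a hedged sketch (the window sizes below are illustrative, chosen to match the cycle lengths reported above):

```python
import numpy as np

def rescaled_range(x):
    """R/S of one window: range of the cumulative, mean-adjusted sum,
    divided by the window's standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    s = x.std()
    return (z.max() - z.min()) / s if s > 0 else 0.0

def v_statistic(x, window_sizes):
    """Peters' V_n = (R/S)_n / sqrt(n), averaged over non-overlapping
    windows of length n. Plotted against log n, a flattening ('break')
    of the curve marks a non-periodic cycle of length n."""
    result = []
    for n in window_sizes:
        k = len(x) // n
        rs = np.mean([rescaled_range(x[i * n:(i + 1) * n]) for i in range(k)])
        result.append((n, rs / np.sqrt(n)))
    return result
```

For white noise the resulting Vn curve is roughly flat everywhere; for a persistent series it rises with log n and flattens only at a cycle length, which is the break criterion used in Figure 7.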
8. ANALYSIS IV: MARITIME TECHNICAL ANALYSIS
8.1 Introduction
Technical analysis (TA) is used to recognize patterns, real or spurious, by studying large quantities of data on prices, volumes, and indicator charts in search of clues to buy or sell. TA expanded in the 1990s and appeared also on the internet, where stocks were traded. All major forex (currency) houses use this analysis, and try to find support points and trading ranges. Chartists (1) can at times be correct, and (2) their models may work at times, but their theory is not a foundation on which to build a global risk-management system (Mandelbrot and Hudson, 2004). Miner (1999) gives the objective of TA as: "To identify those market conditions and the specific trading strategies that have a high
[37] Mrs Psifia E-M found, for the dry cargo charter index per trip over 1968-2003, 3 non-periodic cycles of 28, 60 and 105 months (2 years 4 months; 5 years; and 8 years 9 months). For dry cargo time charters 1971-2003 there was one cycle of 2 years and 8 months. For dry cargo per-trip charters 1971-2003 there were two cycles of 48 and 96 months (4 years and 8 years). This means that each category has its own cycle(s), but cycle duration is not very different from one index to another.
[38] Stopford (2009) concluded that a cycle of 7 years is supported by statistics. The last 50 years, however, support a cycle of 8 years. This is also the mean value of the 12 cycles (i.e. 7.8 years). Six cycles lasted over 9 years. Nine cycles have appeared in bulk shipping since 1947. One thing is certain: cycles are non-periodic.
[Figure 7 plot: Vn = (R/S)n/√n (y-axis, 1–1.5) against log n (x-axis, 1–3)]
probability of success". The basic assumption of TA (Siriopoulos, 1999; Miner, 1999) is that the price level is determined by demand and supply. In addition, past price time series shape the future levels of prices. There are two phases: an accumulation of shares, when prices are low, and distribution, when prices are high. TA passed through three historical developments: (a) the Dow Theory (1902, 1922 and 1932), (b) the Elliott theory of waves (1930 onwards) and (c) the use of technical indices (1975 onwards). We will concentrate on the second development, which has influenced shipping, especially the analysis due to Frost and Prechter (1978). Elliott (1930) and Prechter (1980) argued that markets follow 8 (steady in number) full waves, with a repeated (impulsive) formation of 5 waves up, which move according to a trend, and 3 corrective waves down [39]. These are followed by 34 medium waves and 144 small waves [40]. All these numbers obey the Fibonacci series. Here we also have the 'graphical TA' due to Gann [41] (1878-1956), which we meet in stock exchange index analysis in the financial columns of newspapers, referring to concepts like levels of support, resistance and retracement, and focusing on time and price (following the 'classical graphical TA'). Gann used the geometric angles of the trend (mainly based on the 45-degree 'equilibrium line' for the relation between price and time). As Peters (1994) argued, charts are important tools for day traders in all trading rooms and for short-term investors. TA is based on the belief that there are regular market cycles, hidden by noise or irregular perturbations, that drive the market's underlying clockwork mechanism. Spectral analysis finds only correlated noise, but nothing is certain. The information used is related to the momentum of a particular variable and to market dynamics and crowd behavior.
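The Fibonacci wave counts invoked by Elliott (5, 8, 13, 21, 34, 144) and their connection to the golden ratio can be verified in a few lines (illustrative only):

```python
# Successive ratios of Fibonacci numbers converge to the golden ratio
# phi = (1 + sqrt(5)) / 2 ≈ 1.618, the 'divine proportion' behind
# Elliott's wave counts.
def fib(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

seq = fib(15)
ratio = seq[-1] / seq[-2]       # ratio of successive terms
phi = (1 + 5 ** 0.5) / 2        # the golden ratio itself
```

After only fifteen terms the ratio of successive terms agrees with phi to better than one part in a thousand, which is why the 'golden mean' appears throughout Elliott-wave arithmetic.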
8.2 Maritime Technical Analysis (1990)
Hampton (1990) and Randers and Goluke [42] (2007) argue that it is possible to make accurate long-term forecasts of freight rates. They combine market psychology with cycles of various durations. The techniques are based on a detailed analysis of over 40 years of shipping statistics. In this historical analysis, there are long periods of low freight rates, representing an 8-to-12-year correction phase of a long cycle with a total duration of about 20 years. Long cycle corrections are preceded by 8 to 12 years of healthy freight rates, when over-optimism causes a surplus of ships to be built (Figure 8).
[39] Elliott's numbers 5 and 3 are connected with the series of Arabic numbers first introduced in Italy by Fibonacci (known as Leonardo of Pisa, 1180-1250) in his 'Book of the Abacus' (1202). The Fibonacci series 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, ... is related to the 'golden ratio or spiral logarithm' of the Ancient Greeks; this was known to Pythagoras, and is given by solving x² − x − 1 = 0, i.e. φ = (1 + √5)/2 ≈ 1.618. This is called the 'golden mean or divine proportion'. This number is present also in the works of Leonardo da Vinci and Salvador Dali. This is the way to construct the so-called 'golden square'.
[40] The 5 waves up create 21 waves (5+3+5+3+5), while the 3 waves down create 13 waves (5+3+5).
[41] 'Successful stock selecting methods in Wall Street', Institute of Economic Finance (undated).
[42] They argued that it is possible to explain much of the history of the world's shipping markets since 1950 as the interaction of two balancing feedback loops: a capacity adjustment loop, which creates a roughly 20-year wave, and a capacity utilization adjustment loop, which generates a roughly 4-year cycle.