RBF Based Responsive Stimulators to Control Epilepsy
by
Siniša Čolić
A thesis submitted in conformity with the requirements
where a1 is the upper triangular matrix introduced in the transformation to orthogonality and w is the weight matrix.
3.3 Application to Henon Map
The Henon map is a two-dimensional dynamical system that has been well studied for its ability to exhibit chaotic behaviour for certain parameter values. This makes it a good model for testing the learning techniques introduced. The dynamics of the Henon map are assumed to be significantly less complex than those of the EEG time series, making the Henon map a good starting place to verify the abilities of the RBF models. In what follows we applied the gradient descent training technique to the Henon map to see whether the gradient descent RBF model can capture its chaotic behaviour.
3.3.1 Henon Map
The Henon map is defined by,
x_{n+1} = 1 + y_n − a·x_n^2   (3.22)
y_{n+1} = b·x_n   (3.23)
where a and b are two parameters that can be preset to make the map exhibit chaotic
behaviour. Figure 3.2 below shows the difference between the Henon map running in
non-chaotic mode with parameters a = 1.25 and b = 0.3 and in chaotic mode with a = 1.4 and
b = 0.3. The non-chaotic mode shown in figure 3.2a is periodic and can easily be predicted well
in advance. The chaotic mode shown in figure 3.2b cannot be predicted in advance.
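As an illustration, the map can be iterated directly. The sketch below (plain Python, using the standard-form Henon equations above; function and variable names are ours, not from the thesis) generates both regimes:

```python
# Iterate the Henon map: x_{n+1} = 1 + y_n - a*x_n^2, y_{n+1} = b*x_n
def henon_series(a, b, n, x0=0.1, y0=0.1):
    x, y = x0, y0
    xs = []
    for _ in range(n):
        x, y = 1.0 + y - a * x * x, b * x
        xs.append(x)
    return xs

periodic = henon_series(1.25, 0.3, 200)  # a=1.25, b=0.3: periodic regime
chaotic = henon_series(1.4, 0.3, 200)    # a=1.4,  b=0.3: chaotic regime
```

Both orbits remain bounded on the attractor; only the chaotic one defies long-range prediction.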
Figure 3.2 – Comparison of non-chaotic and chaotic Henon map time series
a) Non-chaotic Henon map time series with a=1.25 and b=0.3 b) Chaotic Henon map time series with a=1.4 and b=0.3.
3.3.2 Data Preprocessing
In order to model the Henon map it was necessary to generate training data. Four thousand
time series samples were generated in Matlab for training the RBF to model the Henon map.
The data was then reorganized into samples of varying time embedding, where time embedding
refers to the number of time points used prior to a prediction; it can also be thought of as the
input length. Training samples were created with time embeddings of 1, 2, 5, 10 and 20.
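The reorganization into embedded samples can be sketched as follows (a minimal plain-Python illustration; `embed_series` is our name, not from the thesis):

```python
def embed_series(series, m):
    """Split a time series into (input, target) pairs where each input
    is the m previous points and the target is the next point."""
    samples = []
    for i in range(m, len(series)):
        samples.append((series[i - m:i], series[i]))
    return samples

data = [0.1 * k for k in range(10)]
pairs = embed_series(data, 2)  # time embedding of 2
```

With an embedding of m, a series of length D yields D − m training pairs.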
3.3.3 RBF Training of Henon map
The Henon map model was trained with the gradient descent method mentioned earlier.
The initial centre parameters c were selected from the training data, the variance r was set
to 0.1, and the weights were randomized from a Gaussian distribution. The training variables
used in the gradient descent method are shown in Table 3.1.
The error calculation was done in two steps. First, during training, the error was calculated
using the MSE in the regular non-recurrent mode. Afterwards the model was verified by
comparing the MSE of the recurrent-mode generation with the actual Henon map time series.
MSE = (1/(D − m)) · Σ_{n=m+1}^{D} (y_n − ĥ(x_n))^2   (3.24)
where m is the embedding of the model used to make the prediction ĥ(x_n), and D is the number
of sample points used in the training.
Since in recurrent mode the models diverge very rapidly (see figure 3.3a), it was decided that
MSE alone was not enough to validate the model, so complexity was also used to confirm the
model selection. To calculate the maximum Lyapunov exponent we used STLmax [22][23]. To
calculate the correlation dimension we used a Matlab program, based on Grassberger and
Procaccia [25], written by Zalay, a member of our group. The complexity of each model was
compared with the complexity of the training data, which was calculated using 8000 samples, a
time constant of 2 and an embedding dimension of 7. The result was 0.99 for the maximum
Lyapunov exponent and 1.33 for the correlation dimension.
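The Grassberger-Procaccia method rests on the correlation integral: the fraction of embedded-point pairs closer than a radius r. A minimal sketch of that quantity (ours, not the Matlab program referenced above):

```python
import math

def correlation_sum(points, r):
    """Fraction of point pairs closer than r (Euclidean distance):
    the correlation integral C(r) of Grassberger and Procaccia."""
    n = len(points)
    close = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                close += 1
    return 2.0 * close / (n * (n - 1))

# The correlation dimension is then estimated from the slope of
# log C(r) versus log r over a scaling range of radii.
```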
Table 3.1 – Henon map gradient descent training parameters and results

Model | Embedding (m) | RBFs (N) | Training Epochs | MSE (non-recurrent) | MSE (recurrent) | Max Lyapunov Exponent | Correlation Dimension
1  | 1  | 10 | 1000 | 2.00e-3 | 3.64e-2 | 0.20  | NaN*
2  | 2  | 10 | 1000 | 4.55e-7 | 1.12e-2 | 0.87  | 1.27
3  | 5  | 20 | 1000 | 4.47e-6 | 1.12e-2 | -0.12 | 0
4  | 10 | 20 | 1000 | 2.90e-3 | 7.90e-3 | -1.66 | 6.80e-3
5  | 20 | 20 | 1000 | 2.70e-3 | 1.12e-2 | -5.30 | 1.70e-3
6  | 1  | 20 | 1000 | 1.70e-3 | 1.69e-1 | 0.22  | NaN*
7  | 2  | 20 | 1000 | 5.95e-7 | 1.12e-2 | 1.10  | 1.37
8  | 5  | 40 | 1000 | 7.98e-5 | 1.12e-2 | 1.15  | 2.01
9  | 10 | 40 | 1000 | 4.31e-5 | 1.05e-1 | -0.44 | 0
10 | 20 | 40 | 1000 | 6.93e-2 | 1.45e-2 | 0.02  | 0
* NaN (Not a Number) commonly results when the correlation dimension cannot be calculated; in
this case it is because Models 1 and 6 produced a steady constant value in recurrent mode.
The trained RBFs were used in recurrent mode to produce time series of length 8000 as a
means to compare the effectiveness of the training. The results are shown in Table 3.1. The
lowest training MSE (non-recurrent) was 4.55e-7, for model 2 with an embedding of 2 and 10
RBFs. Model 2 also tied for the lowest recurrent MSE of 1.12e-2, along with models 3, 5, 7
and 8. Model 2 produced a maximum Lyapunov exponent of 0.87 and a correlation dimension of
1.27, which closely matched the values found for the original data, 0.99 and 1.33 respectively.
Model 7, also with an embedding of 2 but with 20 RBFs, produced similar complexity to the
training data, but it required more RBFs. Therefore model 2 was selected. By simple
observation of figure 3.3a it was further verified that model 2 matches the characteristics of
the Henon map. Figure 3.3b shows the training MSE over the first 200 epochs, illustrating how
the model slowly converged to the error of 4.55e-7. The RBF with the gradient descent learning
method was sufficient for modeling the Henon map; it was not necessary to use any of the other
training techniques.
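Recurrent-mode generation, as used above, seeds the model with m real samples and then feeds each prediction back as the newest input. A sketch with a Gaussian RBF (function names are ours; the thesis's exact basis function and parameters may differ):

```python
import math

def rbf_predict(x, centers, r, weights):
    """Gaussian RBF network output for the input vector x."""
    out = 0.0
    for c, w in zip(centers, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-d2 / (2.0 * r * r))
    return out

def generate_recurrent(seed, n_steps, centers, r, weights):
    """Recurrent mode: start from a real m-point seed, then feed each
    prediction back in as the newest input sample."""
    window = list(seed)
    series = []
    for _ in range(n_steps):
        y = rbf_predict(window, centers, r, weights)
        series.append(y)
        window = window[1:] + [y]  # slide the embedding window forward
    return series
```

In non-recurrent mode the window would instead always be filled with true past samples, which is why recurrent MSE is the harder test.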
Figure 3.3 – Comparing RBF Henon map model to chaotic time series
a) Comparison of the chaotic Henon map time series to RBF-generated data in recurrent mode, with RBF parameters m=2 and N=10. b) Mean squared error plot with respect to epoch of gradient descent training for the same model.
3.4 Application to Non-ictal Time Series Data
As mentioned earlier, the non-ictal data is highly complex and possibly chaotic (HPC). Moreover,
it is non-stationary and embedded in noise. Modeling this complex time series is far more
difficult than modeling the Henon map. In what follows we describe the process of
modeling the non-ictal time series data, starting with the acquisition of the data, followed by
the preprocessing, and concluding with the training results and verification of complexity.
3.4.1 Low Mg2+/High K+ Animal Data Acquisition
Training seizure data was collected independently by Eunji, a member of our group. Eight slices
from the hippocampus of male Wistar rats aged 17-25 days were obtained. The slices were
bathed in a low Mg2+/high K+ solution and electrodes were placed in the CA1 region of the
hippocampus. After roughly 20-40 minutes the slices began to exhibit spontaneous seizures due
to the presence of the low Mg2+/high K+ solution. The seizing activity was recorded by the
electrodes at a sampling frequency of 2 kHz. The whole process is described in further detail in
the paper by Chiu et al., with the exception that we sampled at 2 kHz whereas Chiu et al.
sampled at 10 kHz [3]. Once all the data had been collected it remained to separate the ictal
regions from the non-ictal regions. Separating the ictal and interictal regions was the most
difficult part, as there is a steady increase in spikes as a seizure develops. To avoid overlapping
the regions we selected interictal data as far away from the ictal region as possible. The
postictal region was selected after the last ictal spike was observed (see figure 1.1).
3.4.2 Data Preprocessing
The non-ictal time series data is susceptible to many noise sources, ranging from the external
environment, to electromyogram (EMG) interference from muscles, to simple artifacts in the
measuring instrumentation. Preprocessing therefore consisted of filtering out the noise,
trimming outliers from the time series recording and scaling the signal to lie in the range
-1 to 1.
The signal was low-pass filtered at 50 Hz, followed by a light high-pass filtering at 0.5 Hz to
remove some low-frequency oscillations which interfere with the training of the RBF. After
filtering, the training data was downsampled by a factor of 20.
The trimming of the signal was done in such a way that only the occasional outliers were
removed and the remainder of the signal fell just under the trimming threshold. Following the
trimming, the data was scaled such that the maximum and minimum values lay within -1 to 1.
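The preprocessing chain can be sketched as below. This is an illustrative stand-in only: a moving average replaces the actual 50 Hz low-pass/0.5 Hz high-pass filters, and the clip level is a placeholder, not the thesis's trimming threshold:

```python
def preprocess(signal, smooth=5, factor=20, clip=3.0):
    """Sketch of the preprocessing chain: smooth (stand-in for the
    low-pass filter), downsample by `factor`, trim outliers by
    clipping, and scale the result into [-1, 1]."""
    half = smooth // 2
    smoothed = []
    for i in range(len(signal)):            # crude low-pass: moving average
        window = signal[max(0, i - half):i + half + 1]
        smoothed.append(sum(window) / len(window))
    down = smoothed[::factor]               # downsample by `factor`
    trimmed = [max(-clip, min(clip, v)) for v in down]  # trim outliers
    peak = max(abs(v) for v in trimmed) or 1.0
    return [v / peak for v in trimmed]      # scale into [-1, 1]
```

A real pipeline would use proper Butterworth-style filters with anti-aliasing before decimation; the structure of the steps is what matters here.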
3.4.3 RBF Training of Non-Ictal Time Series
The training methodologies mentioned above were then applied in three different sequences
to train our model:
1. Gradient descent
2. Forward Selection
3. Regression Tree with Forward Selection
As with the Henon map, the training of the RBF models was done using the MSE defined by
equation 3.24. To verify that the models sufficiently resemble the properties of the non-ictal
extracellular time series, we tested each model by running it in recurrent mode initialized
with ictal data.
The final test of the models was to verify their complexity and how well it resembled that of
the actual non-ictal time series data used in training. The complexity was calculated by finding
the maximum Lyapunov exponent and the correlation dimension, which were introduced in
Sections 2.1.1 and 2.1.2 respectively. Table 3.2 below shows the complexities of the interictal
and postictal training data after downsampling to a 100 Hz sample rate. To calculate the
maximum Lyapunov exponent we used STLmax [22][23]. To calculate the correlation dimension
we used a program, based on Grassberger and Procaccia [16], written by Zalay, a member of
our group. The results were calculated using 6 different samples of length 8000, a time
constant of 2 and an embedding dimension of 7. The calculation yielded a maximum Lyapunov
exponent and correlation dimension of 1.67 and 5.66 respectively for the interictal time series
data. The postictal time series data yielded a maximum Lyapunov exponent and correlation
dimension of 1.64 and 6.33 respectively. These complexity results were later compared with
the complexity of the RBF models generating in recurrent mode.
Table 3.2 – Complexity of interictal and postictal training time series

Model | Maximum Lyapunov Exponent | Standard Error | Correlation Dimension | Standard Error
Interictal Time Series | 1.67 | 0.04 | 5.66 | 0.27
Postictal Time Series  | 1.64 | 0.06 | 6.33 | 0.07
3.4.2.1 Gradient Descent
Initially the training of the non-ictal data was done with the gradient descent method,
starting with the interictal model. The centres c were initially selected from the training
data, the variance r was set to 0.1 and the weights w were drawn from a normalized Gaussian
distribution. The embedding and the number of RBFs were swept through a variety of choices,
from fairly simple to very complex (see Table 3.3).
The gradient descent training succeeded in reducing the error on predictions of the training
data, but it failed to produce anything resembling the interictal extracellular time series
when operated in recurrent mode. The MSE results from both training (non-recurrent) and
recurrent modes are shown in Table 3.3 along with the complexity calculations. None of the
models succeeded in capturing the characteristics of the interictal time series. The best
result was achieved with model 2, which had an embedding of 10 and 20 RBFs. It produced an
MSE of 0.0777 after training and an MSE of 0.1784 in recurrent mode. The maximum Lyapunov
exponent was close to 0 and the correlation dimension was 0, thus lacking any sort of
complexity. Figure 3.4 shows the results of model 2. From figure 3.4c it can be seen that the
recurrent mode produced oscillations until it converged to a constant close to 0. From these
results it was determined that gradient descent based methods were not going to succeed in
modeling the chaotic interictal activity, so we proceeded to try the other learning methods.
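For reference, one epoch of gradient-descent training has the following shape (a simplified sketch that updates only the output weights; the thesis's training also adapts centres and variances):

```python
import math

def gaussian(x, c, r):
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-d2 / (2.0 * r * r))

def train_epoch(samples, centers, r, weights, lr=0.05):
    """One gradient-descent epoch on the squared prediction error,
    updating only the output weights for brevity; returns the MSE."""
    sse = 0.0
    for x, target in samples:
        phi = [gaussian(x, c, r) for c in centers]
        y = sum(w * p for w, p in zip(weights, phi))
        err = y - target
        sse += err * err
        for k in range(len(weights)):
            # gradient of (1/2)*err^2 w.r.t. w_k is err * phi_k
            weights[k] -= lr * err * phi[k]
        # (centre and width gradients would be applied here as well)
    return sse / len(samples)
```

Each epoch lowers the non-recurrent MSE, but, as the results above show, a low training error gives no guarantee of sensible recurrent-mode behaviour.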
Table 3.3 – Interictal gradient descent training parameters and results
* NaN (Not a Number) commonly results when the correlation dimension cannot be calculated; in
this case it is because Models 3-10 produced a steady constant value in recurrent mode.
Figure 3.4 – RBF Interictal model after gradient descent training
a) Interictal training data. b) Prediction of the RBF after gradient descent on training data; the
embedding of the model is 10 and 20 RBFs are used. c) Result of the RBF prediction in recurrent
mode. d) MSE curve with respect to the number of training epochs.
3.4.2.2 Forward Selection
The forward selection (FS) learning technique is a non-gradient based learning method, which
may avoid getting trapped in a local minimum. An advantage of training with FS is that it did
not require much parameter selection prior to training; the main parameter controlled was the
embedding of the time series. The embeddings used for training were 5, 10, 20, 30, 40, 50, 60,
80, 100, 120, 140 and 200. As before, we trained on the interictal training data to see if
forward selection learning could capture the features of the interictal region. After training,
the models were tested in recurrent mode. Figure 3.5 shows the MSE and complexity of the
different RBF models. The lowest error achieved was 0.19, for embedding 5, although that model
failed to produce any complexity. Only the models with embeddings 40 and 50 produced
complexity in both the Lyapunov exponent and the correlation dimension. Even so, the Lyapunov
exponent fell far short of the 1.67 goal for the interictal time series. The model with
embedding 50 seemed slightly superior to the other models and its results are further
decomposed in figure 3.6. With an embedding of 50 the model produced 3591 RBFs. The training
is shown in figure 3.6b, where the GCV error decreased as RBFs were added. The addition of
further RBFs stops once the GCV error has not changed significantly over the past 5 RBF
additions, at which point the selection process backtracks 5 RBFs and takes that to be the
model. This occurred after 3591 RBFs were included. With such a large number of RBFs it is
likely the training attempted to select one RBF for each of the training time series points,
which negates any real learning. In figure 3.6a we compare the recurrent RBF model time series
generation to the interictal training data. The result is significantly better than that of the
gradient descent training, yet simple visual observation shows the two waves are significantly
different: the model appears to be stuck in a rhythmic-like pattern with no real complexity.
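The selection-with-backtracking rule described above can be sketched as follows, with `gcv_error` standing in for the generalized cross-validation error of a model built from the currently selected centres (names and tie-breaking details are ours):

```python
def forward_select(candidates, gcv_error, patience=5):
    """Greedily add the candidate that most reduces the GCV error;
    stop when the error has not improved over the past `patience`
    additions, then backtrack `patience` centres as described above."""
    selected = []
    history = []  # GCV error after each addition
    remaining = list(candidates)
    while remaining:
        best = min(remaining, key=lambda c: gcv_error(selected + [c]))
        selected.append(best)
        remaining.remove(best)
        history.append(gcv_error(selected))
        if len(history) > patience and history[-1] >= history[-1 - patience] - 1e-12:
            selected = selected[:-patience]  # backtrack the stale additions
            break
    return selected
```

Nothing in this loop caps the model size, which is consistent with the 3591-RBF outcome reported above: when GCV keeps improving marginally, selection keeps adding centres.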
Figure 3.5 – Results of Interictal RBF training with forward selection
Comparing MSE and complexity of RBF models with different embeddings. Models were tested in recurrent mode. The lowest MSE occurred for embeddings of 5, 10, 20, 140 and 200; however, the only consistent complexity occurred at embeddings of 50 and 60. Preference was given to model complexity and thus the model with embedding of 50 was chosen. All the models fall short on the Lyapunov exponent, indicating that they were unable to match the complexity of the interictal time series.
Figure 3.6 – RBF interictal model after training with forward selection
a) Comparison of the interictal training data to the recurrent RBF model selected, with embedding of 50 and 3591 RBFs. There is significant improvement over the gradient descent training method; however, it still does not resemble the interictal data. b) The RBF selection process, showing the reduction in GCV error until the error flatlines and no more RBFs are added.
3.4.2.3 Tree Regression and Forward selection
To improve on the results of the FS we employed tree regression (TR) on the data. TR
sampled the training data to create viable centre c and variance r parameters. Then, using FS,
the best RBFs were selected, producing a much more compact model consisting of far fewer
RBFs.
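The TR step can be thought of as recursively partitioning the inputs and proposing one candidate (centre, radius) per node, which FS then prunes. A simplified stand-in (splitting on the widest dimension; not a full regression-tree implementation):

```python
def tree_candidates(points, min_size=4):
    """Recursively split the input points on their widest dimension,
    emitting a candidate RBF (centre, radius) for every node; a
    simplified stand-in for the regression-tree step."""
    dims = len(points[0])
    center = [sum(p[d] for p in points) / len(points) for d in range(dims)]
    spreads = [max(p[d] for p in points) - min(p[d] for p in points)
               for d in range(dims)]
    radius = max(spreads) / 2.0 or 1.0  # fall back to 1.0 for zero spread
    out = [(center, radius)]
    if len(points) >= 2 * min_size:
        d = spreads.index(max(spreads))         # widest dimension
        cut = center[d]
        left = [p for p in points if p[d] <= cut]
        right = [p for p in points if p[d] > cut]
        if len(left) >= min_size and len(right) >= min_size:
            out += tree_candidates(left, min_size)
            out += tree_candidates(right, min_size)
    return out
```

Because every node contributes one candidate at its own scale, FS can then pick a small mixture of broad and narrow basis functions instead of one RBF per training point.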
We applied the training with the same embeddings as in section 3.4.2.2, this time applying TR
before the FS. We first trained on the interictal data, and the results were very encouraging.
Figure 3.7 shows the MSE and complexity, with respect to embedding, of the trained models
operated in recurrent mode. The MSE was lowest for embeddings 5, 20 and 30. Even so, the MSE
was not very different from that in sections 3.4.2.2 and 3.4.2.1, whose models were not
successful in capturing the interictal time series. For that reason the complexity and the
resemblance to the training data were taken to be the more reliable estimates. The complexity
of many of the models closely matched the Lmax of 1.67 and the correlation dimension of 5.66
found for the actual interictal time series.
After training, the RBF models with embeddings of 20, 30 and 50 were found to closely match
the complexity of the interictal data and still managed to resemble the data fairly well in
visual comparisons. The embedding 20 model had the complexity that most closely matched that
of the interictal data, with an Lmax of 1.68 and a correlation dimension of 6.21. Embedding 30
was fairly close with an Lmax of 1.72 and a correlation dimension of 6.30, and embedding 50
was also fairly close with an Lmax of 1.58 and a correlation dimension of 6.22. The correlation
dimension results for the three models were far off from the interictal training data, but the
training data had a large variance, so preference was given to the Lmax estimate for the
interictal training case.
In figure 3.8 we compare the recurrent mode of the three top models to the interictal time
series. Even though the model with embedding of 20 had a lower MSE, it failed to match the
interictal time series as well as the embedding 50 model; in particular it failed to match the
amplitude characteristics. The embedding 50 model matched the time series best while
maintaining complexity that closely resembled the interictal time series, and in comparison to
the embedding 30 model it still had better amplitude characteristics. Therefore we chose the
embedding 50 model to represent the interictal time series.
Having successfully trained on the interictal time series using the TR technique, the same
training technique was applied to the postictal time series; the results are shown in figures
3.9 and 3.10. The embeddings of 5, 20 and 30 produced slightly lower MSE values, but having
shown that the MSE was not a reliable estimate, we focused more on the complexity of the
models. The embeddings of 20, 40 and 50 produced complexity results similar to the postictal
time series complexity of 1.64 for Lmax and 6.33 for correlation dimension. The embedding 50
model had the closest complexity, with an Lmax of 1.59 and a correlation dimension of 6.33.
The embedding 40 model had an Lmax of 1.73 and a correlation dimension of 6.39, and the
embedding 20 model had an Lmax of 1.68 and a correlation dimension of 6.21. Figure 3.10
compares the three top RBF models to the postictal time series. The embedding 20 and 40 models
lacked the ability to match the amplitude characteristics as well as the embedding 50 model
did, so the embedding 50 model was selected as the postictal model.
Figure 3.7 – Results of interictal RBF training with tree regression
Comparing MSE and complexity of RBF models with different embeddings. Models were tested in recurrent mode. The lowest MSE occurred for embeddings of 5, 20 and 30. Models with embeddings 20, 30 and 50 produced the closest complexity to the interictal training data.
Figure 3.8 – RBF interictal model after training with tree regression
a) The interictal time series data that the RBF model is striving to replicate. b) The chosen RBF model operated in recurrent mode, showing strong resemblance to the interictal time series; the model has an embedding of 50 and uses 99 RBFs. c) The RBF model with embedding 30 and 112 RBFs had slightly better complexity and lower MSE but lacked amplitude when compared to the interictal data. d) Similarly, the RBF model with embedding 20 and 139 RBFs had good complexity but also lacked amplitude characteristics.
Figure 3.9 – Results of postictal RBF training with tree regression
Comparing MSE and complexity of RBF models with different embeddings. Models were tested in recurrent mode. The lowest MSE occurred for embeddings of 5 and 20. Models with embeddings 20, 40 and 50 had the closest matching complexity to the postictal time series.
Figure 3.10 – RBF postictal training with tree regression
a) The postictal time series data that the RBF model is striving to replicate. b) The chosen RBF model operated in recurrent mode, showing strong resemblance to the postictal time series while maintaining the closest matching complexity; the model has an embedding of 50 and uses 128 RBFs. c) The RBF model with embedding 40 and 146 RBFs had slightly better complexity and lower MSE but lacked amplitude when compared to the postictal data. d) Similarly, the RBF model with embedding 20 and 156 RBFs had good complexity but also lacked amplitude characteristics.
CHAPTER 4
MODELING SPONTANEOUS SEIZURE LIKE EVENTS
At this point we have established an RBF model of both the interictal and postictal time
series. The RBF model was able to capture the necessary features of the biological system: it
was able to maintain a recurrent mode of time series generation while matching the amplitude
characteristics and the complexity of the biological system. In this chapter we introduce some
SLE models found in the literature and then summarize in detail the spontaneous SLE model that
will be used to test the RBF stimulators.
4.1 Literature Review
Modeling of spontaneous seizure-like episodes has been achieved using computational models
[8][36][37][38]. When modeling epilepsy one has to take into consideration that not all
epilepsy disorders are the same: epilepsy can occur in many different regions of the brain and
each region has its own unique characteristics. Here we review some of the different models in
the literature.
In 2002, Wendling et al. constructed a seizure model of human epilepsy using intracerebral
EEG recordings from the human hippocampus. They created a macroscopic model which
represented the neurodynamics of four populations of neurons using 2nd order differential
equations with a static nonlinearity [36]. The four clusters were divided into main cells
(pyramidal cells in the hippocampus or neocortex), two feedback subsets composed of local
interneurons (either excitatory or inhibitory), and a fourth subset representing inhibitory
interneurons with faster kinetics [36]. The model produced waveforms very similar to the
interictal and ictal activity found in human epilepsy.
In 2004, Suffczynski et al. used a bistable neural network model to create a macroscopic model
of rat absence epilepsy [37]. They modeled the thalamo-cortical circuits based on relevant
physiological data, with transitions between the ictal and interictal states determined
randomly with constant probabilities. They managed to model the seizure-like oscillations
fairly accurately; however, the model was never designed to produce the extracellular-type
signals recorded from intracerebral electrodes.
Recently, Zalay et al. modeled temporal lobe epilepsy of the rat hippocampus [8]. Like the
other models, this model represented the macroscopic neurodynamics of populations of neurons.
The model consisted of cognitive rhythm generators (CRGs) defined by four differential
equations, with the output of each CRG calculated using a static nonlinearity. Furthermore,
the model was able to produce an extracellular-like signal by taking the outputs of the CRGs
and summing them relative to a centre point based on a topological square relationship between
the four CRGs. The model closely matched the real extracellular recordings from the low Mg2+
spontaneous seizure setup in the rat hippocampus.
Since the training of the RBF stimulation models was based on temporal lobe epilepsy, the
Cognitive Rhythm Generator Seizure-Like Event (CRGSLE) model by Zalay et al. was an
appropriate choice to test our stimulation on. In the following section the CRGSLE model is
described further.
4.2 CRG Based Spontaneous Seizure-Like Event Model
The CRGSLE model was chosen for validation of our hypothesis that stimulating with an HPC
non-ictal signal (i.e. interictal or postictal) would produce successful suppression of an
ictal event. The strengths of the model are that it models temporal lobe epilepsy, produces
spontaneous seizure-like events (SLEs), and produces an extracellular signal that mimics the
extracellular recordings used in training.
As mentioned earlier, the CRGSLE model (see figure 4.1) creates four CRGs that represent
different populations of neurons through 2nd order limit cycle dynamics and a static
nonlinearity connecting the state variables to the output waveform [8]. The coupling between
the different CRGs is done with an exponential impulse response function, which is referred to
as an 'integrating mode' [8]. The combined dynamics of the nth CRG are defined by four
differential equations [8], where ε_n(y) is the mode input function, y_j are the CRG outputs,
c_jn are the directional coupling coefficients and u_n(t) is the optional external input. W(·)
is the intrinsic output waveform of the CRG normalized over (-π, π], with the 4-quadrant
arctangent function providing the instantaneous phase angle [8].
The phase and amplitude modulation functions are defined by,

M_{φ,n} = ω_n + k_n·ρ_n + u_n^(a)(t),   (4.7)
M_{ρ,n} = 0,   (4.8)

where k_n is a modulatory gain and u_n^(a)(t) is an optional additive input. The CRGSLE model
generation of spontaneous seizure events is shown in figure 4.2.
The extracellular field potential used to simulate the SLE time series was produced from the
outputs of the four CRGs (see figure 4.2b). It was created by treating each of the CRGs as a
point source, with the centre of the electrode placed above the centre of the square-like
arrangement of the CRGs [8]. The extracellular seizure shares many features with the actual
seizure data from the rat slice, as shown in figure 4.3. The comparison in figure 4.3 acts as
verification that the CRGSLE model is a good representation of the actual biological system we
are trying to stimulate.
Figure 4.1 – CRGSLE model
Diagram showing the configuration of the 4 CRGs used to create the SLEs. Each CRG is composed of three parts. First, the integrating mode takes all the inputs coming from the other CRGs and convolves them. The result is fed into the differential equations, which contain clock-like dynamics. The output of this clock portion is then fed to a mapper, which creates the output that can be fed to the other CRGs.
Figure 4.2 – CRGSLE model output waveforms produced
a) The extracellular recording of a spontaneous seizure-like event. b) The four CRG outputs. The CRGSLE produces spontaneous SLEs based on the outputs of the 4 CRGs, which are combined to produce an extracellular-type recording by treating each CRG output as a point source equidistant from the extracellular recording region.
Figure 4.3 – Comparison of the CRGSLE seizures to the actual seizures being modeled
a) Comparison of the seizures recorded from the rat hippocampus under low Mg2+ conditions and the seizures produced by the CRGSLE model. b) Close-up comparison of the actual biological and computational seizures.
[Plots omitted: a) Seizures Recorded From Hippocampus; CRGSLE Seizures; b) Closeup of Seizure Recorded From Hippocampus; Closeup of CRGSLE Seizure.]
CHAPTER 5
CONTROLLING SEIZURES
This chapter compares the standard DBS periodic stimulation to the HPC interictal and
postictal RBF stimulation models. We then describe how the stimulation techniques were
applied to the CRGSLE model, and finally quantify the stimulation efficacy using ROC curves
and the areas under them.
5.1 Application of Stimulation to CRGSLE Model
Section 4.2 showed that the CRGSLE model achieves an accurate representation of the
epilepsy found in the rat hippocampus. We now describe how the same model can receive
inputs from external stimulation.
Providing responsive external stimulation to the CRGSLE model required two issues to be
addressed. First, the stimulation had to be added in the appropriate place to mimic an
external stimulation. Second, to apply a responsive stimulation it was necessary to determine
when the system was seizing.
The external stimulus was added in equation 4.5 to the mode input function [8]. The
external stimulus is equal to the gain multiplied by the stimulus waveform being provided,
whether that was RBF interictal, RBF postictal or periodic. The gain made it possible to modify
the intensity of the input being applied.
The decision to apply the stimulus was made by comparing the complexity of the state
variable related to the instantaneous phase of CRG1 to the specified excitation threshold
(exThr) parameter. The basis for this method is that the complexity of the model is lower in
the ictal state than in the interictal or postictal states, so there is a reduction in complexity as
the system moves from interictal to ictal. The complexity was calculated by applying STLmax
(the short-term maximum Lyapunov exponent) to windowed data of length 5000 from this
state variable. The exThr was preset such that when the complexity reaches a certain value it
indicates that the system is in the ictal mode, at which point the system receives stimulation
[8]. Once the system entered the postictal region, the complexity would rise above the exThr
and the stimulation would be disengaged.
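The engage/disengage logic above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the window length of 5000, the exThr comparison and the gain scaling follow the text, while `windowed_complexity` is only a stand-in for the actual STLmax estimator, which is not reproduced here.

```python
import numpy as np

def windowed_complexity(window):
    # Stand-in for the STLmax complexity estimate used in the thesis:
    # here we simply take the log of the mean absolute first difference.
    return float(np.log(np.mean(np.abs(np.diff(window))) + 1e-12))

def responsive_stimulus(state_series, stimulus_waveform, ex_thr=0.2,
                        gain=0.01, window_len=5000):
    """Gate a stimulus waveform on the complexity of a state variable.

    Stimulation is engaged while the windowed complexity sits below
    ex_thr (the low-complexity ictal state) and disengaged once the
    complexity rises back above ex_thr (interictal/postictal).
    """
    out = np.zeros(len(state_series))
    for start in range(0, len(state_series) - window_len + 1, window_len):
        window = state_series[start:start + window_len]
        if windowed_complexity(window) < ex_thr:
            seg = stimulus_waveform[start:start + window_len]
            out[start:start + window_len] = gain * seg
    return out
```

The stimulus waveform here would be the RBF interictal, RBF postictal or periodic output, with the gain scaling its intensity as in the text.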
In figure 5.1 we show the feedback configuration of the stimulator and CRGSLE computer
model. Note that the dotted line going to the stimulator from the CRGSLE output represents
the previous m embedding points used to initialize and reinitialize the RBF stimulators.
Figure 5.1 – Stimulation Setup
The stimulation setup combining the CRGSLE model and the RBF and periodic stimulators. The excitability connection acts as feedback indicating whether stimulation should be applied. Stimulation is the stimulator output; Model output is the extracellular field created by the CRGSLE model.
5.2 Periodic Stimulator Frequency Selection
There is no clearly defined stimulation frequency that works best with DBS. Generally the
frequency is tuned between 0 and 300 Hz until the best result is achieved. In our case we
decided on the stimulation frequency based on the FFT of the RBF interictal and postictal
stimulation models.
The FFT of one RBF postictal prediction is shown in figure 5.2a. Although the FFTs vary from
one RBF prediction to another, one commonality was found across all the interictal and
postictal predictions: the 12Hz component had the strongest amplitude. Thus, to make a fair
comparison to the RBF, we used a 12Hz periodic stimulation frequency. The FFT of the 12Hz
periodic stimulation is shown in figure 5.2b.
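The dominant-frequency selection can be sketched as follows; the helper function and the 1 kHz sampling rate are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) with the largest single-sided FFT amplitude,
    skipping the DC bin."""
    amplitude = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(freqs[1 + np.argmax(amplitude[1:])])

# A 12 Hz tone sampled at an assumed 1 kHz over 2 s (an integer number
# of cycles, so there is no spectral leakage) is recovered exactly.
fs = 1000.0
t = np.arange(2000) / fs
peak = dominant_frequency(np.sin(2 * np.pi * 12.0 * t), fs)
```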
Figure 5.2 – FFT Comparison of RBF Stimulator and 12Hz Periodic Stimulator
a) Sample RBF prediction time signal and its associated FFT, showing that 12Hz is the highest-amplitude embedded frequency in the RBF prediction. The 12Hz component was common across all interictal and postictal RBF predictions after training. b) The 12Hz periodic stimulation signal, with the FFT showing the strong 12Hz component and its harmonics.
[Plots omitted: a) Postictal RBF Model (voltage vs. time) with its Single-Sided Amplitude Spectrum of Postictal Training Data; b) 12Hz Periodic Stimulation (voltage vs. time) with its Single-Sided Amplitude Spectrum.]
5.3 Results of RBF Stimulation
Having described how the CRGSLE model receives external stimulation, we now show the
results of the periodic, interictal and postictal stimulations. In figure 5.3 the results of the
three different stimulations are compared. There is a good reduction in the number of
seizures after the interictal stimulation (figure 5.3b) and an even better reduction after
postictal stimulation (figure 5.3c). The periodic stimulation produced only a slight reduction in
seizures (figure 5.3d). This result was achieved with an exThr of 0.2 and a gain of 0.01. To
better assess the dependence of the stimulation on these parameters, an ROC analysis was
performed over a sweep of gain and exThr values.
Figure 5.3 – Stimulation of the CRGSLE model with interictal, postictal and periodic stimulations
The results of applying the three stimulation techniques are compared with the unstimulated SLE model. a) Extracellular recording of the unstimulated CRGSLE model. b) The interictal RBF stimulation reduced the number of seizures by roughly two thirds. c) The postictal stimulation reduced the number of SLEs even further. d) The periodic stimulation also reduced the number of SLEs, but not nearly as much as the interictal and postictal stimulations.
[Plots omitted: a) Extracellular Recording of the Unstimulated CRGSLE Model; b) Stimulated CRGSLE Model by Interictal RBF, with the Interictal RBF Stimulation; c) Stimulated CRGSLE Model by Postictal RBF, with the Postictal RBF Stimulation; d) Stimulated CRGSLE Model by Periodic Stimulation of 12Hz, with the Periodic Stimulation of 12Hz. All panels show voltage (mV) versus time over 0-90 s.]
5.4 ROC Measurements
The Receiver Operating Characteristic (ROC) is a practical evaluation technique for comparing
the success of predictions [39]. It came about as a way to deal with cases where the
distribution of positive and negative classes is strongly skewed. For example, in the diagnosis
of cancer it is probabilistically more likely that a negative prediction will be correct than a
positive one. This bias tends to lead to procedures that favour a negative prediction rather
than an accurate prediction based on the evidence. ROC analysis compensates for this by
dividing predictions into four cases: true positive (TP), false negative (FN), false positive (FP)
and true negative (TN). From these cases we can calculate the true positive rate (TPR) and the
false positive rate (FPR).
TPR = TP / (TP + FN) (5.1)
FPR = FP / (FP + TN) (5.2)
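Equations 5.1 and 5.2 translate directly into code (a minimal sketch, guarding against empty classes):

```python
def roc_rates(tp, fn, fp, tn):
    """TPR (sensitivity) and FPR (1 - specificity), per eqs. 5.1-5.2."""
    tpr = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    fpr = fp / (fp + tn) if (fp + tn) > 0 else 0.0
    return tpr, fpr
```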
The TPR reflects the sensitivity of the prediction, that is, how well the positive cases are
captured. The FPR represents 1-specificity, which measures how well the two cases are
discriminated. The values of TPR and FPR are found by varying the detection threshold, and
the points are plotted on an ROC curve with the x-axis representing 1-specificity and the
y-axis representing sensitivity. Returning to the cancer prediction example, a high TPR means
that we are catching the positive cases very well. However, a high FPR means that we are also
incorrectly classifying many of the negative cases as positive. Ideally we want a system with a
high TPR and a low FPR, meaning that the system is both sensitive and specific.
Although ROC analysis is usually applied to prediction, we made a slight modification here to
apply it to the evaluation of seizure control efficacy. In our case, sensitivity measures how
effective the stimulation is, and specificity measures how accurately the stimulation is
applied. A low specificity means that stimulation is applied all the time, whether a seizure is
present or not; a high specificity means that stimulation is applied only to the strong seizures.
The same ROC curve profile is therefore obtained even though we applied it to a control
system.
5.4.1 ROC Curve Construction
As mentioned in the previous section, our application is not prediction but control, so we
needed to modify the general usage of ROC for our control situation. This required finding a
way to convert the success of seizure control into TP, FP, TN and FN subgroups. To do so we
tracked three variables over time: the SLE complexity without any stimulation, the SLE
complexity with stimulation, and the actual time series of the stimulation. Using a threshold
of 0.2 we went through the time series of the first two variables and placed a 1 at all times
the model was in the ictal state; for the third variable we placed a 1 at every time the
stimulation was being applied. Then using table 5.1 we defined the different cases (e.g. all
three variables equal to 1 corresponds to a false negative (FN)). The TP, FP, TN and FN counts
were tallied, and equations 5.1 and 5.2 were used to find the TPR and FPR so that we could
plot sensitivity vs 1-specificity (TPR vs FPR). We then repeated the process for randomly
modified CRGSLE parameters, creating a slightly different SLE model each time to reflect the
differences across patients. The process was repeated for the different stimulation threshold
(exThr) values and different gains, forming a representation that is not biased toward only
one good model.
Table 5.1 – Determination of ROC cases

Seizure Before Stimulation   0    1    0    1    0    1    0    1
Seizure After Stimulation    0    0    1    1    0    0    1    1
Stimulation Applied          0    0    0    0    1    1    1    1
Control Case                 TN   TP   FN   FN   FP   TP   FP   FN
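Table 5.1 amounts to a lookup keyed on the three binary indicators. A sketch of the tally (the variable names are ours, not the thesis code):

```python
# Map (seizure_before, seizure_after, stimulation_applied) -> ROC case,
# column by column from table 5.1.
CASE_TABLE = {
    (0, 0, 0): "TN", (1, 0, 0): "TP",
    (0, 1, 0): "FN", (1, 1, 0): "FN",
    (0, 0, 1): "FP", (1, 0, 1): "TP",
    (0, 1, 1): "FP", (1, 1, 1): "FN",
}

def tally_cases(before, after, stim):
    """Tally TP/FP/TN/FN counts over three aligned binary time series."""
    counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
    for b, a, s in zip(before, after, stim):
        counts[CASE_TABLE[(b, a, s)]] += 1
    return counts
```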
5.4.2 ROC Curve Comparison
To create the ROC curves we divided the stimulation models into three groups: periodic,
interictal and postictal. Each group was further divided into four subparts according to the
gain used in the stimulation: 0.01, 0.1, 1, and 10. The exThr was then swept over 25 values (0,
0.01:0.02:0.09, 0.1:0.1:0.9, 1:1:10) to produce 25 ROC points. To provide statistical
significance we created 32 replicate SLE models (samples) with slight changes in the coupling
parameters; the changes produced slightly different dynamics, reflecting the variability found
across a population. The ROC results were then averaged for each stimulation gain and
model. We found that the gain of 0.01 produced the best results, and we constructed the
ROC curve using the 0.01 gain to compare the different stimulator models. The ROC result is
shown in figure 5.4.
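The exThr sweep is given in MATLAB-style range notation; the same 25 values can be generated as:

```python
import numpy as np

# 0, 0.01:0.02:0.09, 0.1:0.1:0.9, 1:1:10 -> 1 + 5 + 9 + 10 = 25 values.
# Small epsilons keep the floating-point range endpoints inclusive.
ex_thr_values = np.concatenate([
    [0.0],
    np.arange(0.01, 0.09 + 1e-9, 0.02),
    np.arange(0.10, 0.90 + 1e-9, 0.10),
    np.arange(1.0, 10.0 + 1e-9, 1.0),
])
```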
Figure 5.4 – ROC comparison of the periodic, interictal and postictal stimulation
The sensitivity of the complex stimulations is significantly superior to that of the periodic stimulation, particularly as the specificity decreases and more stimulation occurs.
[Plot omitted: ROC Curve Comparison of Stimulation Models Under 0.01 Gain; Sensitivity vs. 1-Specificity; curves for Interictal, Postictal and Periodic stimulation.]
The ROC curve shows that both interictal and postictal stimulation produced better ROC
results than the periodic stimulation. At very high specificity all the models had low results
because only minor stimulation occurred when the exThr parameter was set too high. At low
specificity, when all models stimulate at low exThr, the best results were achieved by the
interictal and postictal stimulations. The complexity of the interictal and postictal stimulation
models allowed the CRGSLE model to better maintain its intrinsic complexity even during the
ictal events.
5.4.3 Area Under the ROC Curve
The success shown by the ROC curve can be further quantified using the area under the
curve: the larger the area, the better the model being evaluated. Here we used the
trapezoidal rule to calculate the area. In figure 5.5 we show the results of the three ROCs
from figure 5.4, where the three stimulator models were compared at the 0.01 gain. The
result indicates an improvement in seizure control using the HPC stimulation of the interictal
and postictal models, with the postictal performing slightly better than the interictal based on
the area under the ROC curve. We then applied Student's t-test and the Wilcoxon rank-sum
test to test the hypothesis that two samples are significantly different on the population of
32. The results are shown in table 5.2. There was no significant difference between the
interictal and postictal performance; however, both were significant when compared to the
periodic stimulation.
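The trapezoidal-rule area computation can be sketched in a few lines (points are sorted by FPR so the integration is well defined; the example points below are illustrative, not thesis data):

```python
def roc_auc(fpr, tpr):
    """Area under an ROC curve by the trapezoidal rule."""
    pts = sorted(zip(fpr, tpr))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += 0.5 * (y0 + y1) * (x1 - x0)  # trapezoid on each segment
    return area

# Chance-level diagonal gives 0.5; a curve bowed toward (0, 1) gives more.
chance = roc_auc([0.0, 1.0], [0.0, 1.0])
better = roc_auc([0.0, 0.5, 1.0], [0.0, 1.0, 1.0])
```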
In figure 5.6 we show how the three waveforms vary with the stimulation gain. The periodic
stimulation varied very little with changes in gain. However, the highly complex interictal and
postictal stimulation models had higher success with lower gains. This means that a strong
stimulation (high gain) was not necessary for the HPC interictal and postictal models to be
successful.
We also modified the RBF recurrent-mode prediction duration from 50 to 500 data points
(see figures 5.6 and 5.7). Since stimulation was applied in discrete steps, this meant longer
stimulation of the CRGSLE computer model. In addition, the reinitialization to match the
current state was not updated as often, allowing more divergence between the RBF model
and the CRGSLE model. We observed that the variation of the areas changed more than the
mean area for the low gains. This suggests that with longer duration stimulations we are at
times more successfully suppressing the seizures and at other times creating seizures by
over-stimulating. It is therefore important to use shorter duration stimulation to allow for
more consistent seizure suppression.
Lastly we verified that the periodic stimulation remained fairly consistent across different
frequencies. We compared the ROC areas using three different periodic frequency
stimulators: 12Hz, 60Hz and 200Hz. The population size was 16, but it was enough to show
the trend with frequency. The results are shown in figure 5.8. The ROC area does not vary
significantly across frequencies or gains. Granted, there was a large drop for the high gain in
the 200Hz stimulation, but for the remaining frequencies and gains the results were fairly
consistent. This showed that gain and frequency had little effect on the success of periodic
stimulation.
Figure 5.5 – ROC area under the curve comparison of the periodic, interictal and postictal stimulation models
Further verification of the area under the ROC curve shows that interictal and postictal achieve better results than the periodic stimulation. Postictal produced slightly better results than the interictal. The error bars represent standard error for a population size of 32.
[Plot omitted: ROC Area vs Stimulation at the 0.01 Gain of Each Model.]
Figure 5.6 – ROC area for different gains of the stimulation models (50-point reinitialization)
Comparison of the area under the ROC curve for different gains and stimulators, with reinitialization after 50 points of stimulation. The error bars represent the standard error, constructed from a sample size of 16.
[Plot omitted: ROC Area vs Stimulation Gain of Each Model (50 reinitialization); bars for Interictal, Postictal and 12Hz Periodic at gains 0.01, 0.1, 1 and 10.]
Figure 5.7 – ROC area for different gains of the stimulation models (500-point reinitialization)
Comparison of the area under the ROC curve for different gains and stimulators, with reinitialization after 500 points of stimulation. The error bars represent the standard error, produced from a sample size of 16. The variation of the results is greater than with the 50-point reinitialization.
[Plot omitted: ROC Area vs Stimulation Gain of Each Model (500 reinitialization); bars for Interictal, Postictal and 12Hz Periodic at gains 0.01, 0.1, 1 and 10.]
Figure 5.8 – ROC area for different gains and different periodic frequencies
The periodic stimulation achieved results similar to the complex models at high gain. The complex models outperformed the periodic stimulation as the gains were reduced. The error bars represent the standard error.
[Plot omitted: ROC Area vs Stimulation Gain for Different Periodic Signals; bars for 12Hz, 60Hz and 200Hz Periodic at gains 0.01, 0.1, 1 and 10.]
CHAPTER 6
DISCUSSION AND FUTURE WORK
In this chapter we discuss the three most notable results of this thesis, along with their
implications and what they mean for future work.
6.1 RBF Model Captures Complexity
In chapter 2 we showed that the trained RBF model successfully captured the shape and
complexity of the interictal and postictal regions of a seizure. This was verified by operating
the model in recurrent mode and showing that it sustained dynamics similar to the interictal
and postictal time series while maintaining similar complexity. The RBF models were very
robust to initialization conditions: for example, if the model was initialized with an ictal time
series it would still continue to produce the interictal and postictal dynamics. This ensured
that even though the stimulation produced would vary with initialization, it would never
diverge to a constant stimulation or become a DC stimulator. It is believed that by training
the model on multiple slices from different specimens, the RBF generalized, capturing the
characteristics common across all the groups.
6.2 Complex RBF Stimulation Outperforms Periodic
The hypothesis that stimulating with HPC biologically based stimulation would successfully
reduce seizure occurrence comes from the understanding that under normal conditions the
brain functions in a highly complex, possibly chaotic manner. This has been verified in the
literature on numerous occasions [2][3][20][28]. It is therefore reasonable to believe that to
achieve better results one needs to communicate with the brain in the same biologically
based language.
In this thesis we tested our HPC biologically based stimulators on the CRGSLE computational
model. The results in figure 5.3 show that interictal and postictal stimulation reduced the
number of seizures to a greater extent than the periodic stimulation. To quantitatively
compare the three stimulation methodologies we constructed an ROC curve based on the
success of control. The ROC was applied for multiple gains and the best gain for each model
was chosen for the final comparison, shown in figure 5.4: the performance of interictal and
postictal stimulation across different exThrs was significantly better than the periodic, while
the difference between interictal and postictal was minimal. The distinct difference between
the periodic and the RBF model stimulators was complexity; the hypothesis was therefore
satisfied.
6.3 Low Gain More Successful in Complex Stimulation
The final and most significant finding of this thesis is that the HPC RBF stimulators performed
better with a lower stimulation gain. The periodic stimulation gained little benefit from lower
gain; in fact it tended to favour higher gain, as shown in figure 5.6. The CRGSLE model
appears to capture the dynamics of the SLE very well, as many findings in DBS show that
higher-gain periodic stimulation performs best; generally the gain has to be increased to
achieve successful treatment. At larger gains the stimulation is not very specific to the
intended region, and the likelihood of affecting other regions of the brain increases. This was
not the case for the RBF interictal and postictal stimulation, both of which showed
improvement with lower gain. With lower gain stimulation they were able to focus the
stimulation on the regions that need it and avoid inducing undesired effects on surrounding
brain regions.
6.4 Future Work
The promising results achieved in this thesis are only the first steps. The model provided us with
a way to test the viability of our hypothesis in treating epilepsy. It has also left a lot of questions
to be answered. Can the success achieved on the CRGSLE model be replicated in-vivo?
Therefore the next logical step will be to reproduce these results in-vivo.
Another question that arose was whether training on higher-frequency content would
improve the results. In our preprocessing of the training data we removed the majority of the
noise by filtering out all frequencies above 50Hz. This made the model easier to train and less
computationally demanding to implement, but it also means that our stimulator may have
been lacking some key features that could help in suppressing seizures. Previous work by
Chiu et al. suggests that the higher frequencies are indicative of seizure onset and hold
valuable features for detecting seizures [3]. In the future we will train on the higher
frequencies to see if their inclusion yields better results. Due to the high noise content at
higher frequencies we may need other means to capture the higher-frequency features. The
Neural Rhythm Extractor (NRE) developed by Zalay et al. is one proposed method to capture
this frequency information [40]. The NRE, which at its heart uses a wavelet packet transform,
finds the main frequency bands of the interictal and postictal time series; the RBF can then be
selectively trained on those bands. By stimulating the CRGSLE with different RBFs trained on
different bands we can track down the frequencies responsible for successful seizure
suppression.
As successful as the RBF has been in capturing the low-frequency content, we feel that we
can do better. A good substitute training model being considered is the Restricted Boltzmann
Machine (RBM) developed by Hinton [41]. The RBM is a more complex model and, although
based on ANNs, it is trained highly effectively by an unsupervised, random, dream-like state.
The use of RBMs brings a computational issue that must be addressed to achieve real-time
stimulation: we will need to move from computer-based stimulation to hardware
stimulation, through hardware such as Field Programmable Gate Arrays (FPGAs). There is a
lot of work left to be done, but the goal of the future work will remain to achieve the same
success in-vivo as on the in-silico CRGSLE model.
CONCLUSION
With the aid of RBFs we have captured the highly complex, possibly chaotic (HPC)
neurodynamics of the interictal and postictal regions of seizure time series. We applied these
stimulation techniques to a CRGSLE model and showed that the HPC stimulation significantly
outperformed the low-complexity periodic stimulation. If the same results can be achieved in
an in-vivo rat model, this has serious potential to change the way we treat epilepsy and paves
the way toward new treatment opportunities for all those in need.
Bibliography
[1] A. Babloyantz, A. Destexhe, “Low-dimensional chaos in an instance of epilepsy”, Proc. Natl. Acad. Sci., Vol. 83, pp. 3513-3517, 1986
[2] S. J. Schiff, K. Jerger, D. H. Duong, T. Chang, M. L. Spano, W. L. Ditto, “Controlling chaos in the brain”, Nature, Vol. 350, pp. 615-620, 1994
[3] A. W.L. Chiu, E. E. Kang, M. Derchansky, Peter L. Carlen, B. L. Bardakjian, “Online Prediction of Onsets of Seizure-like Events in Hippocampal Neural Networks Using Wavelet Artificial Neural Networks”, Annals of Biomedical Engineering, Vol. 34, pp. 282-294, 2006
[4] A. W. L. Chiu, M. Derchansky, E. E. Kang, P. L. Carlen, B. L. Bardakjian, “Prevention of Spontaneous Seizure-like Events in Both in-silico and in-vitro Epilepsy Models”, Engineering in
Medicine and Biology 27th Annual Conference, pp.1-4, 2005
[5] M. Hodaie, R. A. Wennberg, J. O. Dostrovsky, and A. M. Lozano, “Chronic Anterior Thalamus Stimulation for Intractable Epilepsy”, Epilepsia, Vol. 34, pp. 603-608, 2002
[6] Y. F. Sun, Y. C. Liang, W. L. Zhang, H. P. Lee, W. Z. Lin, L. J. Cao, “Optimal partition algorithm of the RBF neural network and its application to financial time series forecasting”, Neural
Computation & Applications, Vol. 14, pp. 36-44, 2005
[7] X. Li and Z. Deng, “A Machine Learning Approach to Predict Turning Points for Chaotic Financial Time Series”, 19th IEEE International Conference on Tools with Artificial Intelligence, pp. 331-335, 2007
[8] O. C. Zalay, D. Serletis, P. L. Carlen, B. L. Bardakjian, “System characterization of neuronal excitability and its relevance to spontaneous seizure-like transitions in a hippocampal network model”, Submitted to J Neuroscience, pp. 1-29, 2009
[9] C. Hamani, D. Andrade, M. Hodaie, R. Wennberg, and A. Lozano, “Deep brain stimulation for
the treatment of epilepsy”, Int. J Neural Systems, Vol. 19, pp.213-226, 2009
[10] B. M. Uthman, B. J. Wilder, J. K. Penry, C. Dean, R. E. Ramsay, S. A. Reid, E. J. Hammond, W. B. Tarver, and J. F. Wernicke, “Treatment of epilepsy by stimulation of the vagus nerve”, Epilepsia, Vol. 34, pp. 1007-1016, 1993
[11] S. C. Schachter, “Vagus nerve stimulator therapy summary: five years after FDA approval”,
Neurology, Vol. 59, no. 6 Suppl. 4, pp. S15-20, 2002
[12] D.M. Andrade, D. Zumsteg, C. Hamani, M. Hodaie, S. Sarkissian, A.M. Lozano, and R.A.
Wennberg, “Long-term follow-up of patients with thalamic deep brain stimulation for epilepsy”,
Neurology, Vol. 66, pp. 1571-1573, 2006
[13] C. Pollo and J.G. Villemure, “Rationale, mechanisms of efficacy, anatomical targets and
future prospects of electrical deep brain stimulation for epilepsy”, Acta Neurochir. Suppl., Vol.
97, pp. 311-320, 2007
[14] K. Vonck, P. Boon, L. Goossens, S. Dedeurwaerdere, P. Claeys, F. Gossiaux, P. Van Hese, T.
De Smedt, R. Raedt, E. Achten, K. Deblaere, A. Thieleman, P. Vandemaele, E. Thiery, G.
Vingerhoets, M. Miatton, J. Caemaert, D. Van Roost, E. Baert, G. Michielsen, F. Dewaele, K. Van
Laere, V. Thadani, D. Robertson and P. Williamson, “Neurostimulation for refractory epilepsy",
Acta Neurol. Belg., Vol. 103, pp. 213-217, 2003
[15] J.F. Tellez-Zenteno, R.S. McLachlan, A. Parrent, C.S. Kubu and S. Wiebe, “Hippocampal
electrical stimulation in mesial temporal lobe epilepsy”, Neurology, Vol. 66, pp. 1490-1494,
2006
[16] K. N. Fountas and J. R. Smith, 'A novel closed-loop stimulation system in the control of
[17] K. N. Fountas, J. R. Smith, A. M. Murro, J. Politsky, Y. D. Park and P. D. Jenkins, “Implantation of a closed-loop stimulation in the management of medically refractory focal epilepsy: a technical note”, Stereotact Funct. Neurosurg, Vol. 83, pp. 153-158, 2005
[18] S. H. Strogatz, “Nonlinear Dynamics and Chaos”, Addison - Wesley Publishing Company, 1994
[19] A. Courville, “Chaosmakers for Epilepsy” M.A.Sc thesis, University of Toronto, 1998
[20] J. Gao, Y. Cao, W. Tung, J. Hu, “Multiscale Analysis of Complex Time Series”, Wiley, 2007
[21] A. Wolf, J. B. Swift, H. L. Swinney and J. A. Vastano, “Determining Lyapunov Exponents From A Time Series”, Physica, pp. 285-317, 1985
[22] L. D. Iasemidis, J. C. Sackellares, H. P. Zaveri, W. J. Williams, “Phase Space Topography and the Lyapunov Exponent of Electrocorticograms in Partial Seizures”, Brain Topography, Vol. 2, pp. 187-201, 1990
[23] S. P. Nair, D. Shiau, J. C. Principe, L. D. Iasemidis, P. M. Pardalos, W. M. Norman, P. R. Carney, K. M. Kelly, J. C. Sackellares, “An investigation on EEG dynamics in an animal model of
temporal lobe epilepsy using the maximum Lyapunov exponent”, Experimental Neurology, Vol. 216, pp. 115-121, 2009
[24] M.T. Rosenstein, J.J. Collins, and C.J. De Luca, “A practical method for calculating largest Lyapunov exponents from small data sets”, Physica D, Vol. 65, pp. 117-134, 1993
[25] P. Grassberger, and I. Procaccia, “Characterization of strange attractors”, Phys. Rev. Lett., Vol. 50, pp. 346-349, 1983
[26] A. Babloyantz, J.M. Salazar, “Evidence of chaotic dynamics of brain activity during the sleep cycle”, Phys. Letters, Vol. 111A, pp. 152-156, 1985
[27] J. Fell, J. Röschke, and P. Beckmann, “Deterministic chaos and the first positive Lyapunov exponent: a nonlinear analysis of the human electroencephalogram during sleep”, Biol. Cybern, Vol. 69, pp. 139–164, 1993
[28] A. Babloyantz, and A. Destexhe, “Low-dimensional chaos in an instance of epilepsy”, Proc.
Natl. Acad. Sci., Vol. 83, pp. 3513-3517, 1986
[29] M. Orr, J. Hallam, K. Takezawa, A. Murray, S. Ninomiya, M. Oide and T. Leonard, “Combining regression trees and radial basis function networks”, International Journal of
Neural Systems, pp. 1-17, 1999
[30] M.J.L. Orr, “Regularisation in the selection of radial basis function centres”, Neural
Computation, pp. 1-16, 1995
[31] R. Zemouri, D. Racoceanu, N. Zerhouni, “Recurrent radial basis function network for time-series prediction”, Eng. App. of Artificial Intelligence, Vol. 16, pp. 453-463, 2003
[32] H. Kantz, and T. Schreiber, “Nonlinear time series analysis”, Cambridge University Press, 1997
[33] J. R. Shewchuk, “An introduction to the conjugate gradient method without the agonizing pain”, Carnegie Mellon University, Ed. 1.25, 1994
[34] S. Chen, C.F.N. Cowan, and P.M. Grant, “Orthogonal least squares learning for radial basis function networks”, IEEE Transactions on Neural Networks, Vol. 2, pp. 302-309, 1991
[35] R.A. Horn, and C.R. Johnson, “Matrix Analysis”, Cambridge University Press, 1985
[36] F. Wendling, F. Bartolomei, J. J. Bellanger and P. Chauvel, “Epileptic fast activity can be
explained by a model of impaired GABAergic dendritic inhibition”, European Journal of
Neuroscience, Vol. 15, pp. 1499-1508, 2002
[37] P. Suffczynski, S. Kalitzin, Lopes Da Silva, “Dynamics of non-convulsive epileptic
phenomena modeled by a bistable neuronal network”, Neuroscience, Vol. 126, pp. 467−484,
2004
[38] F. Grimbert, O. Faugeras, “Bifurcation analysis of Jansen's neural mass model”, Neural
Comput, Vol. 18, pp. 3052−3068, 2006.
[39] T. Fawcett, “An introduction to ROC analysis”, Pattern Recognition Letters, Vol. 27, pp. 861-
874, 2006
[40] O. C. Zalay, E. E. Kang, M. Cotic, P. L. Carlen, and B. L. Bardakjian, “A Wavelet Packet-Based
Algorithm for the Extraction of Neural Rhythms”, Annals of Biomedical Engineering, Vol. 37 No.
3, pp. 595-613, 2009
[41] G. E. Hinton, S. Osindero, Y. Teh, “A fast learning algorithm for deep belief nets”, Neural