SELECTION OF A SIMULATION SOFTWARE TO MODEL A SMALL
SIGNALIZED SYSTEM OF A MULTILANE ARTERIAL IN
THE SOUTHEASTERN US
by
ELSA GEBRU TEDLA
A THESIS
Submitted in partial fulfillment of the requirements for the degree of Master of Science in the Department of Civil,
Environmental and Construction Engineering in the Graduate School of
The University of Alabama
TUSCALOOSA, ALABAMA
2009
Copyright Elsa Gebru Tedla 2009 ALL RIGHTS RESERVED
ABSTRACT
Employment of traffic simulation tools has become a popular practice in traffic
operations analyses as the transportation system has become more complex and more frequently
congested. Most of the commercially available traffic simulation models work best for free-flow
or unsaturated conditions. Depending on the type of traffic condition and type of analysis, the
performance of simulation models varies and there is little information available to help the
analyst to select the most appropriate and accurate model for a given analysis. To address this
need, two traffic simulation tools, SimTraffic and AIMSUN, were evaluated and compared for a
congested arterial segment. Both simulation packages are designed to model almost any
combination of surface street and freeway facilities. In this paper, an arterial segment in
Tuscaloosa, Alabama (McFarland Boulevard) between 13th street and 31st street was coded and
simulated for AM, Mid day, and PM peak periods. The network was simulated 10 times for each
peak period using both simulation models, and average values were taken for comparison. Then
the network was evaluated using output measures of effectiveness (MOE) such as Vehicle Hours
Travel (VHT), Vehicle Miles Travel (VMT), average speed, and flow rate at the network level,
along with delay, travel time, and average speed at the arterial level, and delay and traffic
volume at a link level. Using statistical methods and graphical plots for comparison, each
simulation model was evaluated for its capability to replicate existing field conditions using
default and calibrated traffic parameters. In addition to accuracy, the models were also compared
with respect to ease of coding, and quality/usefulness of output. This report documents relevant
results and calibration processes used for employing the models in future studies and practices
regarding congested arterials.
LIST OF ABBREVIATIONS
AIMSUN Advanced Interactive Microscopic Simulator for Urban and Non-urban
Networks
CORSIM CORridor-microscopic SIMulation program
CV Coefficient of Variation
DYNASMART DYnamic Network Assignment Simulation Model for Advanced Road
Telematics
GETRAM Generic Environment for TRaffic Analysis and Modeling
HCM Highway Capacity Manual
HCS Highway Capacity Software
ITS Intelligent Transportation Systems
LOS Level of Service
MOE Measures of effectiveness
NB North Bound
ODBC Open Data Base Connectivity
SB South Bound
St. Dev Standard Deviation
VHT Vehicle Hours Travel
VMS Variable Message Sign
VMT Vehicle Miles Travel
3-D 3 Dimensional
ACKNOWLEDGMENTS
I am pleased to have this opportunity to thank the many colleagues, friends, and faculty
members who have helped me with this research project. Without their support and
encouragement, the completion of this thesis would not have happened.
I would like to thank my committee members: Dr. Daniel Turner, Dr. Jay Lindly, Dr.
Steven Jones, and Dr. Daniel Fonseca for serving on my committee and for their invaluable
feedback. In particular, I would like to thank my advisor, Dr. Daniel Turner for his patience and
excellent guidance throughout this research and throughout my academic career. If it was not for
his support and encouragement, this would not have happened. Dr. Turner, I cannot thank you
enough.
I am grateful for the support I had from friends and UTCA staff during my stay at UA.
I wish to thank Ms. Connie Harris and Dr. Janet Norton for their help in various matters. I like
to thank Ayse Narci for lending me a hand whenever I needed.
I would like to express my sincere thanks to my parents and to my family. I like to
thank my parents, Keki and Ababa, for their love and care throughout my life. Their prayers and
thoughts have got me where I am today in my life. I like to thank my husband, Getiye Yene
Fikir, for his endless love, support, and continuous encouragement when I did not think I would
ever finish this thesis. I am thankful and most fortunate to have him in my life.
Finally and most importantly, I would like to thank God for everything he has given me
and everything he has done for me. God, I know that you always have the best for me. I thank
you a million times.
This chapter has provided information about microsimulation models and their
processing logic, including how simulated vehicles are generated. The three behavioral models
(car-following, lane-changing, and gap-acceptance) were discussed with respect to their
application in AIMSUN. Detailed descriptions of AIMSUN and SimTraffic were presented,
including their modeling features and default parameters. The next chapter
provides the methodology followed to accomplish the project.
CHAPTER FOUR
METHODOLOGY
After completion of the literature review and background study of the AIMSUN and
SimTraffic models, the next task in this project was to create the study network using the
selected simulation models. Developed in Europe and relatively new to North American users,
AIMSUN is a sophisticated simulation model that is attracting the attention of transportation
modelers. On the other hand, SimTraffic is a widely accepted simulation model that has been used
by American professionals for more than fifteen years.
This chapter describes the research methodology followed to compare the two models.
The chapter explains the network coding process and how the performance of each of the
selected models was evaluated using the default parameters. It includes descriptions of the
project area, traffic data source, and definitions of selected performance measures used for
comparing the simulation results. Finally, it discusses the model validation portion of the project.
4.1. Modeled Project Area
The study site was McFarland Boulevard (US 82) in Tuscaloosa, Alabama, between 13th
Street and 31st Street. McFarland Boulevard is a six-lane arterial facility that works under
coordinated traffic signal timings and mostly under saturated conditions during peak hours. The
segment considered for the study is approximately 2.2 miles long, and consists of three major
signalized intersections: two four-leg intersections and one T-intersection. This thesis project
was started in 2007 using data prepared in 2006 by a consulting engineering firm. While this
project was ongoing, the T-intersection was converted to a four-leg intersection as a result of a
major development in the area. The project continued using the older network layout, but without
considering the modified intersection due to the difficulty and time related to acquiring updated
data for that intersection. Although the amount of error introduced by the conversion of the
intersection is unknown, based on an analysis discussed in a later section of this report, the
author made the assumption that the amount of error would be minimal.
4.2. Project Data
Microscopic simulation is characterized by the high level of detail at which the system is
modeled. The quality of the model is highly dependent on the availability and accuracy of the
input data. Therefore, the user must be aware that to build a good network, a large amount of
data is required. The data are described in the following sections of this report.
4.2.1. Network Layout
A traffic network model is composed of a set of links connected to each other by nodes
(intersections) which may contain different traffic features. To build the network model, the
following input data is required:
• A map of the area, preferably digitized in .DXF or .bmp format.
• The number of lanes for every link and side lanes.
• Possible turning movements for every intersection, including details about the lanes from
which each turning movement is allowed.
• Speed limits for links and turning speeds for allowed turns at every intersection.
• Detectors including their position and measuring capabilities.
4.2.2. Traffic Demand Data
Traffic demand data can be defined in two ways: by the traffic flows at the sections or by
Origin/Destination (O/D) matrix. Depending on the type of model selected, the following input
data must be provided:
• Vehicle types and their attributes.
• Flows at the input links (entrances to the network) for each vehicle type.
• Turning proportions at all sections for each vehicle type.
4.2.3. Traffic Control
All simulation models take into account different types of traffic control: traffic signals,
stop signs, and yield signs. The input data required to define the traffic control follows:
• Signalized intersection: location of signals, the signal groups into which turning
movements are grouped, the sequence of phases, the offset for the junction and the
duration of each phase.
• Unsignalized junctions: definition of priority rules and location of yield and/or stop signs.
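As a sketch, the signal-control inputs listed above can be organized as a small data structure. The class and field names below are hypothetical (they are not part of either simulation package's API) and serve only to illustrate how phases, durations, and offsets fit together:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Phase:
    duration_s: float     # duration of this phase in seconds
    movements: List[str]  # turning movements served, e.g. ["NB-T", "SB-L"]

@dataclass
class SignalPlan:
    offset_s: float                                    # offset for the junction
    phases: List[Phase] = field(default_factory=list)  # sequence of phases

    @property
    def cycle_s(self) -> float:
        # Cycle length is the sum of the phase durations
        return sum(p.duration_s for p in self.phases)

# Illustrative two-phase plan for one coordinated intersection.
plan = SignalPlan(offset_s=12.0, phases=[
    Phase(45.0, ["NB-T", "SB-T"]),
    Phase(30.0, ["EB-T", "WB-T"]),
])
print(plan.cycle_s)  # 75.0
```

A real plan for the study corridor would also carry clearance intervals and the signal-group assignments described above; those are omitted here for brevity.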
The study segment in this project was analyzed based on one-hour traffic volumes
for the AM, Mid day, and PM peak periods. The traffic data was collected in the fall of 2006 by
a local transportation consulting firm. The same firm provided all input data for the Synchro network,
including turning movement counts for intersections, traffic signal timings, roadway geometry,
and speed limits. For coding the network in AIMSUN, input values were extracted from the
available Synchro file, and aerial photos were used to create the lane geometry. Figures 4-1 and
4-2 below show the study network in AIMSUN and SimTraffic, respectively. Figures 4-3 through
4-5 show the three peak hour traffic volumes used for coding the project network.
Figure 4 - 1 Study Network in AIMSUN Model
Figure 4 - 2 Study Network in SimTraffic Model
Figure 4 - 3 AM Peak Hour Traffic Volumes
Figure 4 - 4 Mid Day Peak Hour Traffic Volumes
Figure 4 - 5 PM Peak Hour Traffic Volumes
4.3. Comparison of Simulation Models
Both simulation models have default parameters embedded in their processing logic
that control traffic operations. Most of these consist of vehicle performance and driver
behavior parameters such as vehicle length, maximum acceleration/deceleration, speed factors,
and other factors that are quite difficult to measure in the field (Jones et al., 2004). Since all the
data required to perform a simulation may not be available or easily measured, professionals tend
to depend on the default parameters for their analysis.
After the network coding was completed, the next step was simulating existing conditions
using the model default parameters. This was done to evaluate the performance of each
simulation model before any adjustments or calibration measures were considered.
Since microsimulation models rely on random numbers, a single simulation run cannot be
expected to reflect exact field conditions. Results from individual runs can vary by up to 25%,
and higher standard deviations may be expected for facilities operating at or near capacity (Chu,
Liu, Oh, & Recker, 2004). The minimum number of simulation runs was determined based on
the guidelines published in the “Traffic Analysis Toolbox” (Dowling et al., 2004). The guideline
furnishes the minimum number of repetitions for various desired confidence intervals and
degrees of confidence using the Student’s t-statistic shown below:
CI(1-α)% = 2 × t(α/2, N-1) × S / √N          ( 1 )
Where:
CI = Confidence interval for the true mean
α (alpha) = Probability of the true mean not lying within the confidence interval
t(α/2, N-1) = Student’s t-statistic for the probability of a two-sided error summing to
alpha with N-1 degrees of freedom
N = Number of repetitions
S = Standard deviation
It is up to the analyst to decide the required length of confidence interval and desired
level of confidence based on the purpose of the analysis. For this study, a desired interval of two
standard deviations at 95% confidence level was selected as satisfactory. Based on the
guidelines, a minimum of eight repetitions were required to obtain the desired confidence
interval.
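Equation (1) can be rearranged to find the smallest number of repetitions N for which the confidence interval shrinks to a target width. Below is a minimal sketch, assuming SciPy is available; the sample standard deviation and target interval are illustrative values, not the study's data:

```python
from math import sqrt
from scipy import stats

def min_repetitions(s, target_ci, confidence=0.95, max_n=100):
    """Smallest N such that CI = 2 * t(alpha/2, N-1) * s / sqrt(N) <= target_ci."""
    alpha = 1.0 - confidence
    for n in range(2, max_n + 1):
        t = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)  # two-sided t quantile
        if 2.0 * t * s / sqrt(n) <= target_ci:
            return n
    return None

# Target interval of two standard deviations (target_ci = 2 * s) at 95% confidence.
print(min_repetitions(s=10.0, target_ci=20.0))
```

With a target interval of two standard deviations the condition reduces to t(α/2, N-1) ≤ √N, which yields a minimum of about seven to eight runs; small differences from the guideline's tabulated minimum of eight come from rounding in the published table.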
Therefore, each simulation model was run ten times for the entire network for each peak
period, using a different random number seed to create variation among the runs. The average
was taken for comparison.
Selection of performance measures is dependent on the objectives of a particular project
or work to be accomplished by the analyst. In this project, the main objective is to determine
whether one model is better than the other in reproducing actual field condition, so it was felt
that the two models should be compared at more than one aggregation level. Therefore, the
network was evaluated according to the output of representative measures of effectiveness
(MOE) at three aggregation levels.
At the total network or system level, the following MOEs were used:
• Vehicle Hours of Travel (VHT) in hours,
• Vehicle Miles of Travel (VMT) in miles,
• Average speed in mph, and
• Network flow rate in vph.
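These four system-level measures can be sketched from hypothetical per-vehicle records (each vehicle's distance in miles and time in hours). This is a simplification: as Table 4-1 later shows, each package has its own rules about which vehicles count toward each measure.

```python
def network_moes(vehicles, period_hours=1.0):
    """Aggregate network MOEs from (distance_mi, time_hr) per-vehicle records."""
    vmt = sum(d for d, _ in vehicles)         # Vehicle Miles of Travel
    vht = sum(t for _, t in vehicles)         # Vehicle Hours of Travel
    avg_speed = vmt / vht if vht else 0.0     # space-mean speed, mph
    flow_rate = len(vehicles) / period_hours  # network flow rate, vph
    return {"VMT": vmt, "VHT": vht, "speed_mph": avg_speed, "flow_vph": flow_rate}

# Two illustrative vehicles: 2.2 mi in 4 minutes and 1.0 mi in 3 minutes.
moes = network_moes([(2.2, 4 / 60), (1.0, 3 / 60)])
print(round(moes["speed_mph"], 1))
```

The average speed here is the space-mean speed VMT/VHT; as Table 4-1 notes, SimTraffic's reported average speed can differ from this ratio because of its denied-entry convention.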
At the arterial level, three MOEs were used:
• Delay in seconds,
• Travel time in seconds, and
• Average speed in mph.
At the link or segment level, two MOEs were used:
• Delay in seconds, and
• Volume in vph.
Comparisons of the two models at the three aggregation levels were performed for three
peak periods: AM, Mid day, and PM. A total of fifty-four paired MOE outputs were compared
based on visual inspection (using graphical plots) and statistical methods (Student’s t-test and
f-test). These methods were selected largely due to their common usage and application in
previous studies (Qureshi et al., 2003; Shaaban & Radwan, 2004; Xie et al., 2002).
All simulation pairs were compared graphically to see if the simulation results produced
values close to field data. Simulation outputs within the range of 5%-10% of field values were
assumed to be satisfactory in this study. Arterial MOEs were compared using t-statistics for the
difference of means and f-statistics for the difference of variances between model outputs and
field data. The t-statistic was used to test whether the sample means from the model and the
field came from equivalent or non-equivalent populations. The hypothesis test at the 95%
confidence level (α = 0.05) was:
• Ho (null hypothesis): mean of model MOE = mean of field MOE
• H1 (alternate hypothesis): mean of model MOE ≠ mean of field MOE
Failing to reject the null hypothesis would mean that the two samples are not significantly
different. The f-statistic was used to test whether the two samples have equal variances, that is,
whether the individual samples are equally spread around their sample means. The
differences of sample variance between model MOEs and field MOEs were tested as follows:
• Ho (null hypothesis): variance of model MOE = variance of field MOE
• H1 (alternate hypothesis): variance of model MOE ≠ variance of field MOE
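The two tests can be sketched as applied to one MOE pair. This is an illustration assuming SciPy is available; the two ten-run samples below are invented travel times, not the study's data:

```python
import numpy as np
from scipy import stats

def compare_samples(model, field, alpha=0.05):
    """t-test on the difference of means, f-test on the difference of variances."""
    model, field = np.asarray(model, float), np.asarray(field, float)
    # Two-sided t-test for equality of means (Welch's form, unequal variances)
    _, t_p = stats.ttest_ind(model, field, equal_var=False)
    # f-test: ratio of sample variances compared against an F distribution
    f_stat = np.var(model, ddof=1) / np.var(field, ddof=1)
    df1, df2 = len(model) - 1, len(field) - 1
    f_p = 2 * min(stats.f.cdf(f_stat, df1, df2), stats.f.sf(f_stat, df1, df2))
    return {"means_differ": bool(t_p < alpha), "variances_differ": bool(f_p < alpha)}

# Illustrative travel times (sec): ten model runs vs. ten floating-car field runs.
model = [150, 152, 149, 151, 150, 148, 153, 150, 149, 151]
field = [160, 158, 162, 159, 161, 160, 158, 163, 161, 159]
print(compare_samples(model, field))  # {'means_differ': True, 'variances_differ': False}
```

In this invented example the means differ significantly (rejecting the t-test's null hypothesis) while the variances do not, the same pattern the arterial comparisons in Chapter Five report.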
It should be noted that different simulation models tend to formulate their computations
of performance measures in different ways. In addition, each model has slightly different
definitions of MOEs and definitions of the vehicles considered in the computation of the MOEs,
making comparison of the two models indirect and difficult. For example, SimTraffic considers
only those vehicles that are entering a link when computing VMT, but it includes those vehicles
denied entry when computing VHT. In the case of AIMSUN, only the vehicles leaving a link or
system are considered. For clarification purposes, the definitions of the performance measures
used in this analysis are given in Table 4-1, taken from user’s manual of each model.
Table 4-1 Definition of MOEs

Vehicle Hours Travel
SimTraffic: Vehicle Hours Travel (VHT) is the total time each vehicle was present on the link. The travel time includes time spent by vehicles denied entry (waiting to enter the network), but does not include the time spent by vehicles on the upstream link waiting to enter the subject link.
AIMSUN: Total travel time experienced by all the vehicles that have exited the network during the simulation period. Vehicles remaining in the system are excluded from the total system travel time computation.

Vehicle Miles Travel
SimTraffic: Vehicle Miles Travel (VMT) is the total distance traveled by all vehicles on the link, including the curve distance within intersections. Vehicles that are denied entry are not included.
AIMSUN: Total distance travelled by all the vehicles that have crossed the network. Vehicles remaining in the system are excluded from the computation.

Average Speed
SimTraffic: VMT divided by total time spent on the network. The time used does not include time spent by denied-entry vehicles. Average speed may therefore be higher than VMT divided by VHT for the link.
AIMSUN: Average speed for all vehicles that have left the system. This is calculated using the mean journey speed for each vehicle.

Delay per Vehicle
SimTraffic: Delay per vehicle is the total delay divided by the number of vehicles. Total delay is defined as the travel time minus the time it would take the vehicle if traveling at the maximum permitted speed (the speed limit or the maximum safe turning speed, whichever is lesser). The delay accrued by vehicles denied entry is added to this total.
AIMSUN: Average delay time per vehicle. This is the difference between the expected travel time (the time it would take to traverse the section under ideal conditions) and the actual travel time, calculated as the average over all vehicles.

Arterial Vehicle Count
SimTraffic: Uses origin-destination data to count only vehicles on the current link that came from the arterial on the next upstream link. This is not the same as taking only those vehicles that travel the entire length of the arterial.
AIMSUN: At the stream level, the vehicle data gathered considers only the vehicles that followed the complete stream.
4.4. Validation Data
After the network was created and simulation runs were performed, validation of the
simulation outputs from AIMSUN and SimTraffic was performed. Validation is a process in
which simulation outputs are compared with field data to determine how closely the model
replicates the field conditions. In this project, field data used for validation of the outputs were
average arterial travel time, average arterial speed, average link speed, and link volume. The
author collected arterial and link travel times using the “floating car run” method described in
the Federal Highway Administration report “Traffic Analysis Toolbox” (Dowling et al., 2004).
In this method, a vehicle was driven the length of the facility ten times during the analysis
period, and the mean travel time was computed. The required number of repetitions to gather
field data was estimated by the criteria discussed in the previous section of this thesis.
Arterial speed was computed using the field collected travel time and using the link
distance from the network geometry. Link volumes were extracted from inputs into the starting
Synchro file provided by a private consulting firm. It should be noted that the original Synchro
file was created at the end of 2006 and the field data for validation was collected at the end of
2008. The author believes that the two-year difference between the beginning and completion of
this project could possibly affect the comparison result to some extent. However, the author
gathered one hour of traffic volume on one of the study links and compared it to the older data
set from 2006. The difference was found to be less than 1%. Therefore, for the links used in this
study the author made the assumption that the effect of the traffic growth over the two year
period would be minimal on the validation of the models and continued working with the older
data set.
In summary, this chapter presented a review of the research methodologies followed to
accomplish the objectives of the project. It included brief descriptions of the project area, traffic
data source, and definitions of selected performance measures. It also included the data
collection procedure followed for collecting validation data of the models. In addition, discussion
of the graphical and statistical methods used for comparisons of the simulation results was also
presented. The next chapter presents the comparison results of AIMSUN and SimTraffic.
CHAPTER FIVE
RESULTS OF COMPARISONS
This section presents an analysis of the two simulation models used in this project:
AIMSUN and SimTraffic. The models are compared three ways: ease of coding and data entry,
usefulness of simulation output, and accuracy of performance measures.
5.1. Data Entry and Ease of Coding
SimTraffic has a straightforward data entry process, which is probably one reason it is a
popular choice among professionals. For this project, it took approximately 25 hours to create a
new network using Synchro and to perform a SimTraffic simulation. Using AIMSUN, it took
more than 200 hours for the same person to complete coding for the same network, plus an
additional 20 hours to read and understand the user’s manual. For example, coding a traffic
signal was difficult, complicated, and more time consuming than in SimTraffic. For coding the
traffic signal, the author had to seek help from an experienced modeler and even so it took a
considerable amount of time.
Synchro serves as a platform to create a traffic signal and also to create the traffic
network for SimTraffic using a link-node system. An intersection is generated automatically at
the point where two links intersect and a simple data input window is displayed for the entry of
number of lanes and lane directions. AIMSUN uses a system of links and “joins” to create the
network elements. The user needs to create an intersection (called a junction in AIMSUN) by
joining every single lane and specifying lane directions based on permitted turning movements.
This process results in coding times that are longer than expected for a simple standard
intersection.
Entry of traffic volume for the AIMSUN network was another process found to be
cumbersome for the author. For SimTraffic, Synchro provided a window format for a direct input
of the volumes at the intersections. For AIMSUN, volumes need to be converted into percentages
based on their arrival from a previous intersection. The user must convert the data for each
turning movement and for each vehicle type separately before entering the traffic volume.
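The count-to-percentage conversion described above can be sketched as follows; the movement labels and counts are illustrative, and in practice the conversion is repeated for every approach and every vehicle type:

```python
def turning_percentages(counts):
    """Split one approach's turning-movement counts into percentages by movement."""
    total = sum(counts.values())
    return {mvt: round(100.0 * c / total, 1) for mvt, c in counts.items()}

# Illustrative NB-approach counts (vph) for a single vehicle type, as might be
# read from a Synchro-style turning-movement input.
print(turning_percentages({"left": 120, "through": 840, "right": 240}))
# {'left': 10.0, 'through': 70.0, 'right': 20.0}
```

Automating this step would remove much of the manual conversion effort the author describes.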
In addition, the geometric layout features in AIMSUN are less realistic at matching the
details of a roadway layout, and lack the flexibility to simulate some types of lane alignments.
For instance, use of two or more exclusive left turn lanes or a channelized right turn lane is not
supported in AIMSUN. For this project, these geometric elements were approximated by using
consecutive piecewise linear sections. This approximation could possibly impact the quality of
the simulation result but it is beyond the capacity of the author to quantify the extent of the
impact. On the other hand, AIMSUN has some positive aspects. For instance, the model
performs error checking while the data is being input and saves the modeler from the time
consuming task of identifying and debugging data errors at a later stage. Another advantage is
that the user can define different traffic streams, and can place a detector anywhere in the
network to collect statistical data. These features are not available in SimTraffic.
In summary, creating a simple three-signal network and conducting a simulation on
that network took approximately eight times longer in AIMSUN than in SimTraffic. The network
coding and simulation using AIMSUN was felt to be excessively detailed for a small network
with a standard type of intersection. In the author’s opinion, SimTraffic is significantly easier to
use. In coding the network each program had features that were desirable, and a user might
select which model to use based on which features were most important for modeling a particular
arterial.
5.2. Simulation Output
Both simulation models provide detailed output, and both provide animated graphics and
tabular format. Animation output is powerful in that it enables the user to quickly assess the
overall performance of the network qualitatively. It also provides detailed information at specific
locations. AIMSUN has some desirable features that are not available in SimTraffic. It allows a
single vehicle to be traced through the simulation, and it provides a time series view of the
vehicle trajectory. Performance measures such as flow, speed, density, and number of stops can
be displayed in time series viewer windows during the simulation for sections, vehicles, or
detectors.
AIMSUN is more flexible at storing simulation output. The user can choose to store the
output in either an Excel or Access database in an ODBC format. In addition, AIMSUN reports
more MOEs than SimTraffic. A report for an arterial by SimTraffic produces four MOEs, but
AIMSUN reports more than eight MOEs. Screen shots of arterial reports from both models are
given in Figures 5-1 and 5-2.
Figure 5 - 1 Screen Shot of Arterial Report from the AIMSUN Model
Figure 5 - 2 Screen Shot of Arterial Report from the SimTraffic Model
SimTraffic saves the report as a text file and the user must copy it into Excel for further
analysis. AIMSUN reports the mean and standard deviation for each output, whereas SimTraffic
reports only mean values. In AIMSUN, simulation results can be reported periodically in user-
defined time intervals or over the entire simulation time period. This is not possible in
SimTraffic.
In summary, AIMSUN appears to perform better in the output category. It provides better
graphical output, the tabular output is more versatile, and the user has better control of the type
and frequency of output.
5.3. Comparison of Simulation Results
In this section, two comparisons were performed. First, MOEs were compared at the
network level (VMT, VHT, average speed, and flow rate) and arterial and segment levels
(average delay) to evaluate whether the two models deliver similar analytical outputs. Second,
MOEs from simulation were compared to MOEs from field measurements to gauge if the models
replicated arterial field conditions. The MOEs used for the second comparison were travel time
and average speed at the arterial level, and volume at the segment level.
5.3.1. Model Comparison
5.3.1.1. Comparison of Network MOEs
Network MOEs are important for measuring systemwide performance. For each peak
hour, ten simulation runs of the network were performed using both models. Network MOEs
from AIMSUN and SimTraffic for the AM peak period are presented in Table 5-1 and Table 5-2.
The values are given for each of the ten runs, and the summary statistics of all ten were used for
In summary, the patterns observed for link delay were similar to the previously discussed
patterns found for arterial delay. Average delays from SimTraffic increased as traffic volume
increased whereas average delays from AIMSUN were almost identical for the three peak
periods regardless of the traffic increase. Therefore for the comparison of delay, SimTraffic
seems to be more realistic than AIMSUN.
5.3.2. Comparison of Model Output and Field Data
Using statistical methods and graphical plots, both simulation models were evaluated for
their capability to replicate the field data collected during the project. Several comparisons were
performed to determine the differences between model outputs and field data.
5.3.2.1. Comparison of Arterial MOE – Travel Time
Plots were prepared to compare travel time and speed profiles. They were helpful in
evaluating the ability of each model to replicate field data. Figures 5-6 and 5-7 show plots of
arterial travel times for the AM and PM peak traffic, respectively.
[Plot: Average Delay (sec) by Link No. for AIMSUN and SimTraffic]
Looking at the AM peak hour (Figure 5-6), SimTraffic produced travel times closer to
field values than AIMSUN. For AM traffic, AIMSUN overestimated travel time by 15% to 20%,
but SimTraffic underestimated it by less than 5%.
The same pattern observed for delay in the previous sections of this thesis was exhibited
for PM peak travel time. As traffic volume increased for the PM periods, travel time from
SimTraffic also increased. However, AIMSUN produced almost identical values for all three
peak periods regardless of the difference in traffic volumes, as shown in Figure 5-7. Both models
overestimated roadway capacities for the PM high traffic condition, but SimTraffic values were
again closer to field conditions than AIMSUN.
Figure 5 - 6 Simulated NB Arterial Travel Time vs. Field Travel Time - AM Peak
Figure 5 - 7 Simulated NB Arterial Travel Time vs. Field Travel Time - PM Peak
In addition to the graphical analysis discussed above, statistical tests were used to
compare mean values of simulated travel times with mean values of travel times from field
observation. The result of the t-test showed that there is a significant difference between the two
means whereas the f-test showed there is no significant difference among the individual runs of
both samples. Both t-test and f-test results for arterial travel time and average speed are presented
in Appendix D.
5.3.2.2. Comparison of Arterial MOE – Average Speed
Simulated arterial speed is compared to field speed using a statistical test and graphical
plots. Figures 5-8 and 5-9 show individual runs from each model and from field data for north
bound traffic for AM and PM periods. Since speed is computed from travel time, the patterns
observed for arterial speed were consistent with the patterns for arterial travel time for all peak
periods. Statistical tests suggested there is a significant difference between model outputs and
field data.
Based on the graphical plots, SimTraffic seemed to perform better than AIMSUN for the
arterial segment considered in the study. As traffic increased for the PM peak period, SimTraffic
speed decreased accordingly, whereas speeds from AIMSUN increased. For the lower volume
morning peak, both models estimated average speeds closer to observed field speeds. AIMSUN
slightly underestimated average speed, but SimTraffic overestimated it, as shown in Figure 5-8.
However, looking at Figure 5-9 for the PM traffic, both models overestimated average speed,
but SimTraffic performed closer to field speeds than AIMSUN.
Figure 5 - 8 Simulated NB Arterial Speed vs. Field Speed - AM Peak
Figure 5 - 9 Simulated NB Arterial Speed vs. Field Speed - PM Peak
In summary, arterial MOEs from AIMSUN and SimTraffic were compared by using
graphical plots and test statistics. For the three peak periods and for both northbound and southbound traffic directions, a total of 12 sample pairs (model sample vs. field sample) were tested. Based on the results of the t-statistics performed for the arterial travel times and average speeds, neither model was able to reproduce the selected field MOEs. Based on the results of the f-statistics for both models, two-thirds of the paired sample variances showed no significant difference.
According to the graphical analysis, SimTraffic MOEs were generally found to be closer to observed arterial values than AIMSUN MOEs. When traffic is congested, AIMSUN tends to overestimate arterial capacities as compared to SimTraffic. AIMSUN indicated higher average
speeds and shorter travel times than those collected in the field, implying that AIMSUN is
overestimating the available capacity of the arterial to a higher degree than SimTraffic.
The author found that parameters used in the car-following model of AIMSUN have a
tendency to overestimate capacity, as compared to SimTraffic. For example, the default reaction
time, headway, vehicle length, and vehicle space used in the car-following models of AIMSUN
are smaller than the ones used in the car-following model of SimTraffic. The common default
parameters used in AIMSUN and SimTraffic were discussed in previous chapters.
5.3.2.3. Comparison of Link MOE – Link Volume
Both models reproduced the field volumes reasonably well, within 5-10% of the field values, with the exception of one link with low traffic volume, for which both models overestimated volume by more than 10%. The SimTraffic estimates appeared to be more accurate than the AIMSUN estimates, as shown in Figure 5-10.
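The 5-10% screening amounts to a percent-difference check on each link. A sketch with hypothetical link counts, not the study's actual volumes:

```python
# Flag links whose simulated volume falls outside +/-10% of the field
# count. All volumes below are hypothetical stand-ins for illustration.
def pct_diff(simulated, field):
    return 100.0 * (simulated - field) / field

field_vph = {1: 1520, 2: 1610, 3: 980}   # hypothetical field counts, vph
model_vph = {1: 1480, 2: 1702, 3: 1105}  # hypothetical model output, vph

flagged = {link: round(pct_diff(model_vph[link], v), 1)
           for link, v in field_vph.items()
           if abs(pct_diff(model_vph[link], v)) > 10.0}
```

In this constructed example only the low-volume link exceeds the 10% band, mirroring the pattern described above.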
Figure 5 - 10 Simulated Volume vs. Field Volume - AM Peak
5.4 Summary of Results
The previous sections presented an analysis of simulation comparisons made between
AIMSUN and SimTraffic, using default parameters at different levels of aggregation. A total of
54 simulation comparisons were performed graphically and statistically. For arterial MOEs, t-
statistics were applied to test whether there is a significant difference between model mean values and field mean values.
Except for a few comparisons from the Mid day peak period, the test showed that the two
models delivered different estimations for the selected MOEs. Based on the graphical analysis
performed, SimTraffic appears to more closely simulate field observations for McFarland
Boulevard than AIMSUN for most comparisons. For example, compared to field data,
estimations of arterial MOEs from AIMSUN had almost twice as much variability as
SimTraffic’s estimations, as shown in Table 5-4.
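The variability claim can be quantified with the coefficient of variation (standard deviation over mean, in percent). A short sketch using the NB AM travel-time means and standard deviations reported in the appendix hypothesis-test tables:

```python
# Coefficient of variation (% CV) as a relative variability measure for
# the ten simulation runs. The means and standard deviations below are
# the NB AM travel-time values from the appendix tables.
def cv_percent(mean_val, std_dev):
    return 100.0 * std_dev / mean_val

aimsun_cv = cv_percent(168.7, 1.8)      # AIMSUN NB travel time, AM peak
simtraffic_cv = cv_percent(140.1, 1.0)  # SimTraffic NB travel time, AM peak
```

For this particular pair AIMSUN's relative spread is roughly 1.5 times SimTraffic's; some other MOE pairs in the tables show larger ratios.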
Table 5-4 Model Estimation of Arterial MOEs

MOE                        Model       AM peak      Mid day peak  PM peak
Arterial Travel Time, sec  AIMSUN      over 15-20%  under 1-5%    under 15-30%
                           SimTraffic  under 5-10%  over 1-5%     under 10-15%
Arterial Speed, mph        AIMSUN      under 5-10%  over 15-20%   over 40-55%
                           SimTraffic  over 5-15%   under 1-5%    over 10-25%
For the arterial segment used in this study, AIMSUN appeared to be insensitive to traffic
increases for the three peak periods. As traffic conditions changed from near free flow in the AM
peak to congested in the PM peak period, AIMSUN delivered almost identical values of MOEs.
SimTraffic MOEs changed in relation to traffic volume. For example, delay and travel time
from SimTraffic were higher during the PM peak period than the AM peak period, while they
were almost identical for all time periods in AIMSUN. Therefore, based on initial comparisons
of the models using default parameters, SimTraffic seemed to yield results more consistent with
field observations than AIMSUN.
In summary, differences between default parameters of each model, differences in
computation of MOEs, differences in definitions of variables, and differences in simulation
MOEs for the two models indicated that the two models may have more differences than
commonalities. In addition, the results suggest that neither AIMSUN nor SimTraffic could
replicate field conditions reasonably, and the results should not be trusted without further model
calibration. Therefore, to better replicate field conditions of the study network, model calibration is performed in the next chapter.
CHAPTER SIX
COMPARISON OF MODELS AFTER CALIBRATION
In the previous chapter, analyses of outputs from AIMSUN and SimTraffic showed that
the models delivered different MOEs in simulating McFarland Boulevard. Furthermore, without
calibration neither model was able to replicate field conditions accurately. This suggests the
importance of model calibration to provide a better representation of the traffic conditions. In the
process of calibration, default parameters are adjusted to reflect the local driving conditions of
the existing traffic conditions (Hourdakis, Michalopoulos, & Kottommannil, 2003). Calibration
is a time-consuming process because modification of the default parameters is done iteratively following a trial-and-error method to obtain a close match between model estimates and field measurements.
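The trial-and-error loop can be sketched as a search over candidate parameter values, scoring each by its distance from a field measurement. The model function and all numbers here are hypothetical stand-ins; a real iteration would invoke the simulator itself:

```python
# Schematic trial-and-error calibration: try candidate reaction times and
# keep the one whose simulated travel time best matches the field value.
# run_model is a toy stand-in for an actual simulation run.
def run_model(reaction_time):
    # Hypothetical response: travel time grows with driver reaction time.
    return 120.0 + 60.0 * reaction_time

field_travel_time = 165.0  # hypothetical field measurement, sec
candidates = [0.50, 0.60, 0.70, 0.75, 0.80]

best = min(candidates, key=lambda rt: abs(run_model(rt) - field_travel_time))
```

In practice each candidate costs several simulation runs and the parameters interact, which is why the manual process described above is so time consuming.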
Every model comes with a set of user-adjustable parameters for calibrating the model to
local traffic conditions. However, the difficulty, time, and cost associated with model calibration
often causes users to depend on model default parameters. For instance, most vehicle and driver
parameters such as vehicle length, vehicle spacing, and reaction time are difficult and expensive
to measure in the field. Another difficulty associated with calibration is that it is not clear which
parameter to modify to achieve the desired change. Adjustment of one link-specific parameter could have undesirable effects on simulation results of an adjacent link or somewhere else in the network, and therefore the modeler can end up in a never-ending process (Dowling et al., 2004).
The time and difficulty of acquiring the field data required for a full calibration were beyond the scope of this thesis. Therefore, the author performed only a minor calibration of the global parameters, as suggested in the user manuals of the models and as described in the following paragraphs. Although the adjustments made to the parameters were small, the
iterative process performed by the author to achieve the final values was very time consuming.
6.1 Model Calibration
Compared to SimTraffic, AIMSUN had a higher tendency to overestimate road capacity
for congested traffic conditions. As discussed in previous sections, driver behavioral parameters such as vehicle spacing and reaction time are smaller in AIMSUN than in SimTraffic, resulting in higher saturation flow rates.
For calibration of the AIMSUN model, reaction time and reaction time at stop were
adjusted to reproduce simulation MOEs similar to field MOEs. Reaction time is the time a driver
takes to react to speed changes in the preceding vehicle; it is used in the car-following model and ranges from 0.1 to 1.0 sec. Reaction time at stop is the time it takes for a stopped vehicle to react to the acceleration of the vehicle in front or to a traffic signal changing to green; it has a strong influence on queue discharge behavior (TSS, 2006). Table 6-1 shows the default parameter values in one column, followed by the calibrated values used for the three time periods simulated.
Table 6-1 Suggested Calibration Parameters for AIMSUN

Parameter                   Default (all peaks)  Calibrated AM  Calibrated Mid day  Calibrated PM
Reaction time, sec          0.75                 0.50           0.75                0.80
Reaction time at stop, sec  1.35                 1.00           1.40                1.60
As discussed in the previous sections, SimTraffic yielded Mid day MOEs close to
observed field conditions. For AM and PM peak periods, SimTraffic overestimated the capacity
of the system. For example, arterial travel time was underestimated by 5-15% and average speed
was overestimated by 10-25%. Therefore, an effort was made to calibrate the AM and PM traffic conditions to lower the simulated capacity of the system. Based on the SimTraffic user manual, the primary parameter suggested for system calibration is the headway factor. By default, SimTraffic is calibrated to a headway factor of 1.0 to give flow rates of about 1850 vehicles per hour per lane for speeds above 30 mph (Shaaban & Radwan, 2005). Adjusting the headway factor above 1.0 results in a lower saturation flow. Table 6-2 shows the adjusted parameters.
Table 6-2 Suggested Calibration Parameters for SimTraffic

Parameter       Default (all peaks)  Calibrated AM  Calibrated Mid day  Calibrated PM
Headway Factor  1.00                 1.05           1.00                1.10
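If saturation flow scales inversely with the headway factor, an assumption consistent with the manual's description above rather than a published SimTraffic formula, the calibrated factors imply roughly:

```python
# Approximate effect of the headway factor on saturation flow, assuming
# an inverse relationship (an illustrative assumption, not a SimTraffic
# formula). 1850 vphpl is the default flow at a headway factor of 1.0.
BASE_FLOW = 1850  # vehicles per hour per lane at headway factor 1.0

def approx_saturation_flow(headway_factor):
    return BASE_FLOW / headway_factor

am = round(approx_saturation_flow(1.05))  # AM factor from Table 6-2
pm = round(approx_saturation_flow(1.10))  # PM factor from Table 6-2
```

Under this assumption the AM and PM factors lower the effective saturation flow to roughly 1760 and 1680 vphpl, respectively, which is the direction of adjustment the calibration sought.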
6.2 Calibration Outputs
In this section, comparisons of simulation results after model calibration are discussed for
arterial MOEs. Small adjustments of the suggested parameters were found to provide output
closer to observed field conditions than the non-calibrated results.
6.2.1 Arterial Travel Time
As shown in Figures 6-1 and 6-2, after calibration both AIMSUN and SimTraffic estimated arterial travel times closer to the observed field conditions. Both models estimated the
AM and Mid day peak traffic conditions better than the PM congested traffic conditions. Even
though both models underestimated the PM arterial travel time, SimTraffic underestimated by
less than 5% whereas AIMSUN underestimated by 10 to 15%. For the three peak periods, a total
of six arterial travel time comparisons were made between model estimation and field data. Out
of the six comparisons, three of the SimTraffic estimations and one of the AIMSUN estimations
showed no significant difference from field data. Based on the analyses, SimTraffic estimated travel time slightly better than AIMSUN. In addition, both simulation models showed a significant improvement after model calibration.
Figure 6 - 1 Simulated vs. Observed NB Travel Time - AM Peak
Figure 6 - 2 Simulated vs. Observed NB Travel Time - PM Peak
6.2.2 Arterial Speed
After calibration of the simulation models, simulated arterial speeds fit the observed speeds better for the AM peak period than for the PM peak period (Figures 6-3 and 6-4). SimTraffic produced estimates of arterial speeds closer to field speeds than AIMSUN for all peak periods.
Figure 6 - 3 Simulated vs. Observed NB Average Speed - AM Peak
Figure 6 - 4 Simulated vs. Observed NB Average Speed - PM Peak
In summary, the graphical and statistical analyses of the calibrated model results
indicated that even small adjustments of a few default parameters can yield a better replication of
field conditions. For the AM and Mid day peak hours where traffic conditions range from light to
moderate, both models estimated arterial MOEs much closer to field conditions than they did
before model calibration. Even after calibration, AIMSUN was more likely to overestimate
roadway capacities for congested traffic conditions, thereby implying a better operating
condition than observed in the field. Overall, SimTraffic simulated almost all MOEs closer to
observed values than those generated by AIMSUN.
CHAPTER SEVEN
CONCLUSIONS AND RECOMMENDATIONS
This chapter focuses on conclusions drawn about AIMSUN and SimTraffic simulations
of a congested arterial segment. It summarizes the results of this study and also includes some
recommendations for future studies. These conclusions are not universal. They are based upon a
relatively small signalized network on a six-lane urban arterial in Tuscaloosa, Alabama.
They are limited by the quality of data provided in 2006 by an engineering firm and by validation data gathered in 2008 by the author for an analysis of simulation results. The time and funding
associated with this thesis did not allow collection of a more complete dataset or a more
extensive analysis.
7.1 Conclusions
AIMSUN and SimTraffic were compared three ways: ease of coding and data entry,
usefulness of simulation output, and accuracy of performance measures. SimTraffic had an easy-
to-use graphical interface and a straightforward data entry process as compared to AIMSUN.
Creating the study network and conducting a simulation on it took approximately eight times longer in AIMSUN than in SimTraffic. For example, coding a traffic signal in AIMSUN was more difficult and time consuming than coding the same signal in SimTraffic. In addition, some geometric network features, such as two or more exclusive left-turn lanes or channelized right turns, were not supported in AIMSUN. Although modeling of a simple
network using SimTraffic was found to be much easier for this project, AIMSUN has a number of powerful tools that were not explored in detail in this thesis. For instance, transit simulation and traffic incident simulation are features that are desirable for coding complex urban networks, and AIMSUN is far better equipped than SimTraffic to model them.
The models were also compared with respect to usefulness of simulation output. Both
simulation models provide detailed output, and both provide animated graphics and tabular
format. AIMSUN has a more flexible mechanism for storing simulation output and has desirable
features that are not available in SimTraffic. The user can choose to store the output in either
Excel or Access format. In comparison, SimTraffic saves the report as a text file, so the user must copy it into Excel for further analysis. In addition, AIMSUN allows a single vehicle to be
traced through the simulation and provides a time series view of the vehicle trajectory.
Overall, AIMSUN appears to be better in the output category. It provides better graphical
output, the tabular output is more versatile, and the user has better control of the type and
frequency of output.
Another category for comparison of AIMSUN and SimTraffic was accuracy of simulated
MOEs. The modeling undertaken for the McFarland Boulevard study area found that traffic
conditions can be reproduced by the models with more accuracy if the models are calibrated. The
study has identified differences between the model outputs and the actual field data collected.
Several comparisons of MOEs at different aggregation levels were performed in order to discern the differences between the model outputs and the field data. For most comparisons,
graphical plots were used to evaluate the ability of each of the two models to replicate the field
data and also to compare outputs between the two models. For arterial MOEs, t-tests and f-tests were employed for further analysis.
For AM and Mid day peak traffic periods, SimTraffic was found to be a better simulator
than AIMSUN, even before model calibration. For the PM congested traffic conditions,
AIMSUN had a tendency to overestimate arterial capacity more than SimTraffic. After
calibration of the models was performed, both models produced an improved output for all peak
periods, and SimTraffic simulated the field conditions more closely than AIMSUN.
Comparing the level of difficulty for creating a network, usefulness of simulation outputs,
and accuracy of performance measures experienced in this study, the author suggests that
SimTraffic is preferred for replicating congested traffic conditions on McFarland Boulevard.
7.2 Recommendations
The author recommends that updated modeling data such as traffic volumes and traffic
signal timings be used for further studies on the network. Field-collected validation and calibration data should be set aside before simulation begins so that analysis time can be saved.
This study used only a single day of field observed data for validating the model outputs.
It is recommended that the findings be confirmed by testing the models using multiple days of field data and more rigorous statistical analysis. It is also recommended to use other
MOEs such as delay and queue length to see if the simulations produce similar results.
A variety of global and local default parameters could be modified to replicate field
conditions to a higher degree. It is recommended that further research be carried out to improve
simulation, in particular in calibration of the models.
In this study, signalized intersections parallel to the study arterial were not included in the
simulation. The author suggests that parallel streets and intersections near the study network be
added to reflect the effect of surrounding traffic.
REFERENCES

Barcelo, J. (2001). Microscopic traffic simulation: A tool for the analysis and assessment of ITS systems. Highway Capacity Committee, half-year meeting, Lake Tahoe.

Barcelo, J., Ferrar, J., & Montero, L. (2001). Assessment of incident management strategies using AIMSUN. TSS-Transport Simulation Systems, S.L. Retrieved July 13, 2008, from http://www.aimsun.com/its_2001.pdf

Boxill, S.A., & Yu, L. (2000). An evaluation of traffic simulation models for supporting ITS development. Report no. SWUTC/00/167602-1, Southwest Region University Transportation Center, Texas.

Cheu, R.L., Tan, Y., & Lee, D. (2003). Comparison of PARAMICS and GETRAM/AIMSUN microscopic traffic simulation tools. Paper no. 04-2640, Transportation Research Board 83rd Annual Meeting, Transportation Research Board, Washington, DC.

Chu, L., Liu, H.X., Oh, J., & Recker, W. (2004). A calibration procedure for microscopic traffic simulation. Paper no. 04-4165, Transportation Research Board 83rd Annual Meeting, Transportation Research Board, Washington, DC.

Dowling, R., Skabardonis, A., & Alexiadis, V. (2004). Traffic analysis toolbox volume III: Guidelines for applying traffic microsimulation software. Report no. FHWA-HRT-04-040, Federal Highway Administration, Washington, DC.

Dowling, R. (2007). Definition, interpretation, and calculation of traffic analysis tools measures of effectiveness. Federal Highway Administration, Washington, DC.

Hass, C.P. (2001). Assessing developments using AIMSUN. National Transportation Library, TRIS Online. Retrieved July 20, 2008, from http://ntlsearch.bts.gov/tris/record/tris/01047260.html

Hourdakis, J., Michalopoulos, P.G., & Kottommannil, J. (2003). A practical procedure for calibrating microscopic traffic simulation models. Paper no. 03-4167, Transportation Research Board 83rd Annual Meeting, Transportation Research Board, Washington, DC.

Husch, D., & Albeck, J. (2004). SimTraffic 6 user guide. Version 6, Trafficware, Albany, CA.

Jones, S.L., Sullivan, A., Anderson, M., Malave, D., & Cheekoti, N. (2004). Traffic simulation software comparison study. UTCA Report no. 02217, University Transportation Center for Alabama, AL.

Jones, S.L., & Sullivan, A. (2005). U.S. Highway 280 alternatives analysis and visualization. UTCA Report no. 04408, University Transportation Center for Alabama, AL.

Kondyli, A., Duret, A., & Elefteriadou, L. (2007). Evaluation of CORSIM and AIMSUN for freeway merging segments under breakdown conditions. Paper no. 07-2416, Transportation Research Board 86th Annual Meeting, Transportation Research Board, Washington, DC.

Lieberman, E., & Rathi, A.K. (1975). Traffic simulation. Revised monograph on traffic flow theory (chapter 10). Retrieved October 7, 2008, from http://www.tfhrc.gov/its/tft/tft.htm

Middleton, M.D., & Cooner, S.A. (1999). Simulation of congested Dallas freeways: Model selection and calibration. Report no. TX-00/3943-1, Texas Transportation Institute, Texas.

Park, B., Park, I., & Choi, K. (2004). Evaluation of microscopic simulation tools for coordinated signal system deployment. KSCE Journal of Civil Engineering, Volume 8, No. 2.

Qureshi, M., Jitta, S.R., & Spring, G.S. (2003). A comparison of control delay estimated by SimTraffic and CORSIM for actuated signals versus observed control delay. Paper no. 04-2798, Transportation Research Board 83rd Annual Meeting, Transportation Research Board, Washington, DC.

model. Paper no. 07-2700, Transportation Research Board 86th Annual Meeting, Transportation Research Board, Washington, DC.

Selinger, M.J., Speth, S.B., & Trueblood, M.T. (2003). Apples and oranges… or splitting hairs? It depends. Kansas University Transportation Center. Retrieved March 4, 2008, from http://www.kutc.ku.edu/cgiwrap/kutc/pctrans/ezine/1/appleorange.php

Shaaban, K.S., & Radwan, E. (2005). A calibration and validation procedure for microscopic simulation model: A case study of SimTraffic for arterial streets. Paper no. 05-0274, Transportation Research Board 84th Annual Meeting, Transportation Research Board, Washington, DC.

Trueblood, M. (2001). CORSIM… SimTraffic… what is the difference? Kansas University Transportation Center. Retrieved March 2, 2008, from http://www.kutc.ku.edu/cgiwrap/kutc/pctrans/ezine/1/difference.php

TSS (2006). AIMSUN 5.1 microsimulator user's manual. Version 5.1, TSS-Transport Simulation Systems, S.L.

Xiao, H., Ambadipudi, R., Hourdakis, J., & Michalopoulos, P. (2005). Methodology for selecting microscopic simulators: Comparative evaluation of AIMSUN and VISSIM. Report no. CTS 05-05, Intelligent Transportation Systems Institute, Center for Transportation Studies, MN.

Xie, C., & Parkany, E. (2002). Signalized intersection simulation in CORSIM and SimTraffic. Paper no. 02-3716, Transportation Research Board 81st Annual Meeting, Transportation Research Board, Washington, DC.
Table 16 Hypothesis Tests for Arterial MOEs - AM Peak

                                        Travel Time, sec          Speed, mph
Model       Statistic                   NB          SB            NB          SB
AIMSUN      Sample mean (model)         168.7       155.4         26.7        24.1
            Sample mean (field)         146.9       130.6         28.5        26.1
            Std. deviation (model)      1.8         1.6           0.3         0.3
            Std. deviation (field)      7.5         5.2           1.5         1.1
            t-score                     8.975       14.297        -4.054      -5.851
            t-test decision on H0       Reject      Reject        Reject      Reject
            f-score                     0.056       0.099         0.031       0.067
            f-test decision on H0       Not Reject  Not Reject    Not Reject  Not Reject
SimTraffic  Sample mean (model)         140.1       118.3         30.0        29.5
            Sample mean (field)         146.9       130.6         28.5        26.1
            Std. deviation (model)      1.0         2.3           0.0         0.5
            Std. deviation (field)      7.5         5.2           1.5         1.1
            t-score                     -2.839      -6.805        3.155       8.887
            t-test decision on H0       Reject      Reject        Reject      Reject
            f-score                     0.017       0.197         0.000       0.240
            f-test decision on H0       Not Reject  Not Reject    Not Reject  Not Reject

For t-test: Null hypothesis: mean of simulated MOE = mean of field MOE. Alternative hypothesis: mean of simulated MOE ≠ mean of field MOE. Alpha = 0.05.
For f-test: Null hypothesis: variance of simulated MOE = variance of field MOE. Alternative hypothesis: variance of simulated MOE ≠ variance of field MOE. Alpha = 0.05.
Table 17 Hypothesis Tests for Arterial MOEs - Mid day Peak

                                        Travel Time, sec          Speed, mph
Model       Statistic                   NB          SB            NB          SB
AIMSUN      Sample mean (model)         165.3       146.8         28.5        26.7
            Sample mean (field)         165.8       153.1         25.2        22.3
            Std. deviation (model)      2.6         3.7           0.5         0.5
            Std. deviation (field)      4.4         6.0           0.7         0.9
            t-score                     -0.308      -2.828        12.278      13.402
            t-test decision on H0       Not Reject  Reject        Reject      Reject
            f-score                     0.338       0.391         0.438       0.378
            f-test decision on H0       Reject      Reject        Reject      Reject
SimTraffic  Sample mean (model)         172.6       157.1         24.4        22.3
            Sample mean (field)         165.8       153.1         25.2        22.3
            Std. deviation (model)      2.4         3.0           0.5         0.7
            Std. deviation (field)      4.4         6.0           0.7         0.9
            t-score                     4.268       1.873         -3.114      0.022
            t-test decision on H0       Reject      Not Reject    Reject      Not Reject
            f-score                     0.294       0.260         0.557       0.574
            f-test decision on H0       Not Reject  Not Reject    Reject      Reject

For t-test: Null hypothesis: mean of simulated MOE = mean of field MOE. Alternative hypothesis: mean of simulated MOE ≠ mean of field MOE. Alpha = 0.05.
For f-test: Null hypothesis: variance of simulated MOE = variance of field MOE. Alternative hypothesis: variance of simulated MOE ≠ variance of field MOE. Alpha = 0.05.
Table 3 Hypothesis Tests for Arterial MOEs - PM Peak

                                        Travel Time, sec          Speed, mph
Model       Statistic                   NB          SB            NB          SB
AIMSUN      Sample mean (model)         158.4       159.7         30.4        23.6
            Sample mean (field)         190.4       224.9         22.0        15.2
            Std. deviation (model)      2.9         3.2           0.5         0.4
            Std. deviation (field)      8.1         12.4          1.0         0.8
            t-score                     -11.774     -16.127       24.438      28.110
            t-test decision on H0       Reject      Reject        Reject      Reject
            f-score                     0.126       0.067         0.290       0.244
            f-test decision on H0       Not Reject  Not Reject    Not Reject  Not Reject
SimTraffic  Sample mean (model)         174.4       189.7         24.4        18.7
            Sample mean (field)         190.4       224.9         22.0        15.2
            Std. deviation (model)      4.5         7.2           0.5         0.8
            Std. deviation (field)      8.1         12.4          1.0         0.8
            t-score                     -5.474      -7.766        6.962       9.381
            t-test decision on H0       Reject      Reject        Reject      Reject
            f-score                     0.306       0.340         0.292       0.945
            f-test decision on H0       Not Reject  Reject        Not Reject  Reject

For t-test: Null hypothesis: mean of simulated MOE = mean of field MOE. Alternative hypothesis: mean of simulated MOE ≠ mean of field MOE. Alpha = 0.05.
For f-test: Null hypothesis: variance of simulated MOE = variance of field MOE. Alternative hypothesis: variance of simulated MOE ≠ variance of field MOE. Alpha = 0.05.
Appendix – E
Simulation Results after Model Calibration - Graphs
Figure 1 Simulated vs. Observed NB Travel Time - AM Peak
Figure 2 Simulated vs. Observed SB Travel Time - AM Peak
Figure 3 Simulated vs. Observed NB Average Speed - AM Peak
Figure 4 Simulated vs. Observed SB Average Speed - AM Peak
Figure 5 Simulated vs. Observed NB Travel Time-Mid day Peak
Figure 6 Simulated vs. Observed SB Travel Time-Mid day Peak
Figure 7 Simulated vs. Observed NB Average Speed-Mid day Peak
Figure 8 Simulated vs. Observed SB Average Speed- Mid day Peak
Figure 9 Simulated vs. Observed NB Travel Time -PM Peak
Figure 10 Simulated vs. Observed SB Travel Time - PM Peak
Figure 11 Simulated vs. Observed NB Average Speed - PM Peak
Figure 12 Simulated vs. Observed SB Average Speed - PM Peak
Appendix – F
Simulation Results after Model Calibration - Tables
Table 18 Simulated Arterial MOEs after Calibration - AM Peak Hour
Simulation Runs 1 2 3 4 5 6 7 8 9 10 Average St. Dev % CV
Table 1 Hypothesis Tests for Arterial MOEs - AM Peak

                                        Travel Time, sec          Speed, mph
Model       Statistic                   NB          SB            NB          SB
AIMSUN      Sample mean (model)         154.8       142.2         29.3        26.6
            Sample mean (field)         146.9       130.6         28.5        26.1
            Std. deviation (model)      2.6         1.4           0.5         0.3
            Std. deviation (field)      7.5         5.2           1.5         1.1
            t-score                     3.154       6.773         1.540       1.303
            t-test decision on H0       Reject      Reject        Not Reject  Not Reject
            f-score                     0.122       0.071         0.142       0.078
            f-test decision on H0       Not Reject  Not Reject    Not Reject  Not Reject
SimTraffic  Sample mean (model)         142.0       121.9         29.5        28.4
            Sample mean (field)         146.9       130.6         28.5        26.1
            Std. deviation (model)      0.5         0.7           0.5         0.5
            Std. deviation (field)      7.5         5.2           1.5         1.1
            t-score                     -2.090      -5.237        1.946       6.005
            t-test decision on H0       Not Reject  Reject        Not Reject  Reject
            f-score                     0.004       0.017         0.131       0.231
            f-test decision on H0       Not Reject  Not Reject    Not Reject  Not Reject

For t-test: Null hypothesis: mean of simulated MOE = mean of field MOE. Alternative hypothesis: mean of simulated MOE ≠ mean of field MOE. Alpha = 0.05.
For f-test: Null hypothesis: variance of simulated MOE = variance of field MOE. Alternative hypothesis: variance of simulated MOE ≠ variance of field MOE. Alpha = 0.05.
Table 2 Hypothesis Tests for Arterial MOEs - Mid day Peak

                                        Travel Time, sec          Speed, mph
Model       Statistic                   NB          SB            NB          SB
AIMSUN      Sample mean (model)         166.0       146.4         28.3        26.1
            Sample mean (field)         165.8       153.1         25.2        22.3
            Std. deviation (model)      2.6         4.5           0.5         1.7
            Std. deviation (field)      4.4         6.0           0.7         0.9
            t-score                     0.123       -2.845        11.762      6.113
            t-test decision on H0       Not Reject  Reject        Reject      Reject
            f-score                     0.349       0.556         0.421       3.840
            f-test decision on H0       Reject      Reject        Reject      Reject
SimTraffic  Sample mean (model)         172.6       157.1         24.4        22.3
            Sample mean (field)         165.8       153.1         25.2        22.3
            Std. deviation (model)      2.4         3.0           0.5         0.7
            Std. deviation (field)      4.4         6.0           0.7         0.9
            t-score                     4.268       1.873         -3.114      0.022
            t-test decision on H0       Reject      Not Reject    Reject      Not Reject
            f-score                     0.294       0.260         0.557       0.574
            f-test decision on H0       Not Reject  Not Reject    Reject      Reject

For t-test: Null hypothesis: mean of simulated MOE = mean of field MOE. Alternative hypothesis: mean of simulated MOE ≠ mean of field MOE. Alpha = 0.05.
For f-test: Null hypothesis: variance of simulated MOE = variance of field MOE. Alternative hypothesis: variance of simulated MOE ≠ variance of field MOE. Alpha = 0.05.
Table 3 Hypothesis Tests for Arterial MOEs - PM Peak

                                        Travel Time, sec          Speed, mph
Model       Statistic                   NB          SB            NB          SB
AIMSUN      Sample mean (model)         173.2       206.2         27.4        18.4
            Sample mean (field)         190.4       224.9         22.0        15.2
            Std. deviation (model)      4.0         9.8           0.7         1.7
            Std. deviation (field)      8.1         12.4          1.0         0.8
            t-score                     -6.023      -3.750        15.303      5.261
            t-test decision on H0       Reject      Reject        Reject      Reject
            f-score                     0.243       0.623         0.471       4.230
            f-test decision on H0       Not Reject  Reject        Reject      Reject
SimTraffic  Sample mean (model)         187.1       203.0         22.5        17.4
            Sample mean (field)         190.4       224.9         22.0        15.2
            Std. deviation (model)      9.4         7.1           1.1         0.7
            Std. deviation (field)      8.1         12.4          1.0         0.8
            t-score                     -0.836      -4.853        1.076       6.345
            t-test decision on H0       Not Reject  Reject        Not Reject  Reject
            f-score                     1.334       0.328         1.279       0.681
            f-test decision on H0       Not Reject  Reject        Not Reject  Reject

For t-test: Null hypothesis: mean of simulated MOE = mean of field MOE. Alternative hypothesis: mean of simulated MOE ≠ mean of field MOE. Alpha = 0.05.
For f-test: Null hypothesis: variance of simulated MOE = variance of field MOE. Alternative hypothesis: variance of simulated MOE ≠ variance of field MOE. Alpha = 0.05.