Contract No. H2020 – 881777
OPTIMA-WP6-D-ARD-001-06 Page 1 of 62 02/02/2021
OPTIMA
cOmmunication Platform for TraffIc ManAgement demonstrator
D6.1. Data Requirements for CDM definition and Database configuration
Due date of deliverable: 31/08/2020
Actual submission date: 02/02/2021
Leader of this Deliverable: Ardanuy Ingeniería SA – ARD
Responsible Author: Jerónimo Padilla - ARD
Reviewed: Yes
Document status
Revision Date Description
01 24.05.2020 1st Draft
02 30.10.2020 2nd Draft
03 16.11.2020 TMT 1st revision
04 17.12.2020 TMT 2nd revision
05 02/02/2021 Final Version
Project funded from the European Union’s Horizon 2020 research and innovation programme
Dissemination Level
PU Public X
CO Confidential, restricted under conditions set out in Model Grant Agreement
CI Classified, information as referred to in Commission Decision 2001/844/EC
This project has received funding from the Shift2Rail Joint Undertaking (JU) under grant
agreement No 881777. The JU receives support from the European Union’s Horizon 2020
research and innovation programme and the Shift2Rail JU members other than the Union. Any
dissemination of results reflects only the author’s view and the JU is not responsible for any use
that may be made of the information it contains.
Start date of project: 01/12/2019 Duration: 39 months
Ref. Ares(2021)1217377 - 12/02/2021
REPORT CONTRIBUTORS
Name Company Details of Contribution
Jerónimo Padilla and Anil
Shewani ARD
Contributions in all sections. Document
creation, structure organization and
coordination.
Jin Liu UNEW Contributions in sections 5, 6 and 7 and
general review.
Ángel García Luengo INECO Contributions in sections 5 and 6.
Airy Magnien UIC Contributions in sections 5 and 6.
Paul Hyde UNEW Contributions in all sections and corrections
and general review.
Cristian Ulianov UNEW Contributions in all sections and general
review.
Jerónimo Padilla and Anil
Shewani
ARD Final version document’s set-up, formatting
and corrections.
Sara Ferrari RINA-C Quality Check
Jose Bertolin UNIFE Final review
EXECUTIVE SUMMARY
WP6 aims at providing a usable and adaptable data management system for the TMS, in time for
system tests and validation under WP7. The data management aspects dealt with under WP6
are:
• the platform-independent Conceptual Data Model (CDM) and,
• the definition of the corresponding databases, here implemented as the persistence layer
associated with the Integration Layer (IL).
The second objective is to take the CDM as it was outlined in the course of the In2Rail project
(WP8 and 9, 2016), to complete its specification, and to seek alignment with the works under the
Linx4Rail project running in parallel.
First, a review of Traffic Management System (TMS) assets was conducted, based on the data
available in the In2Rail deliverable D9.1 “Asset status representation”. The considered railway
assets were switch, crossing, track (rail), catenary, bridge, tunnel, embankments, line sections
and level crossing. Each of them was described by characterising their attributes as being either
static or dynamic data. The analysis showed that none of the reviewed existing models were able
to completely represent both static and dynamic attributes independently. Therefore, a hybrid
approach was proposed using both railML (to describe static elements) and OGC/sensorML (to
describe dynamic elements).
Database requirements were reviewed. On the one hand, regarding general requirements, the
TAF TAP TSI programme was proposed for this purpose, since its aim is to define the data
exchange between individual IMs and RUs. On the other hand, specific requirements derived
from the X2Rail-2 deliverable D6.1 were reviewed as the source for the data structures needed
for the OPTIMA demonstrator. A list of requirements will be defined for the OPTIMA project, then
databases needed will be enumerated and a list of requirements for them will be proposed.
A complete set of data structures related to the OPTIMA databases (geographical, timetable and
vehicle) was analyzed and is included in Appendix A. Those, along with the data structures that
WP5 will provide related to the business clients and external services, will need to be addressed
by the CDM definition proposed in WP6.
Finally, Database QoS requirements were identified, based on the TAF TAP TSI programme,
mainly characterized by accuracy, completeness, consistency and timeliness. QoS requirements
were defined focusing on transport priority for the following railway assets: Interlocking, PIS, RBC,
Energy Management System, Maintenance Service Management System and Weather forecast.
The first objective of WP6 is to provide a usable and adaptable data management system for the
TMS, in time for system tests and validation under WP7. The data management aspects dealt
with under WP6 are:
• the platform-independent CDM and,
• the definition of the corresponding databases, here implemented as the persistence layer
associated with the Integration Layer (IL).
The second objective is to take the CDM as it was outlined in the course of the In2Rail project
(WP8 and 9, 2016), to complete its specification, and to seek alignment with the works under the
Linx4Rail project running in parallel.
This WP is linked to:
• WP2 (System architecture, technical coherence, and alignment with S2R; especially in
relation with Linx4Rail);
• WP4 (middleware configuration for the Integration Layer);
• WP5 (integration of business clients/services);
• WP7 (validation of the system).
T6.1 – Requirements analysis for the databases [M3 – M9] (Leader: ARD; contributors: RFI,
ADIF, SZDC, UIC, SSSA, NEXEYA, UNEW, INECO).
Under Task 6.1, data requirements will be established, as well as quality of service requirements.
The assets and asset states handled by the TMS will be reviewed in order to consolidate the
corresponding set of asset descriptions. The focus shall be on the operational status of assets,
hence on their time- or state-dependent attributes.
In2Rail deliverable 9.1 “Asset status representation” is the starting point for this task. The review
will include all documentation at hand, especially with regards to “external” models such as
EULYNX.
The quality of service requirements will focus on volumes, transaction throughputs, data lifetime,
data integrity checks, etc., and be based on In2Rail recommendations when available.
SCOPE OF THE WORK
The scope of this deliverable includes:
• A brief explanation of what a DBMS is and the purpose that it will have in OPTIMA;
• A review of assets handled by the TMS, with the focus on their operational status. The
In2Rail Deliverable 9.1 [In2Rail D9.1] “Asset status representation” was used as the guide
for this task;
• The establishment of data structures and database requirements, based on existing
specifications and examples from the railway domain such as [TAF TAP TSI];
• The establishment of QoS requirements based on X2Rail-2 and In2Rail recommendations;
• As an appendix, the complete list of data structures related to geographical, timetable and
vehicle data as stated in X2Rail-2 deliverables.
DATABASE MANAGEMENT SYSTEM (DBMS)
Introduction
A database-management system (DBMS) is a collection of interrelated data and a set of programs
to access that data. The database contains information relevant to an enterprise. The primary
goal of developing a DBMS is to provide a way to store and retrieve the information in the
database, while maintaining access for authorised users, and assuring a certain level of
confidentiality and integrity of the data.
Database systems are designed to manage large quantities of information. Data Management
involves both the definition of structures for the storage of information and providing the
mechanisms for the manipulation of information. In addition, the database system must ensure
the safety of the information stored, despite system crashes or attempts at unauthorized access.
If data is to be shared among several users with different levels of credentials, the system must
be able to distinguish between them and allow access only to the data that they have been granted
access to. Since information is so important in most organizations, computer scientists have
developed a large body of concepts and techniques for managing data. These concepts and
techniques form the focus of this deliverable.
The earliest database systems arose in the 1960s in response to the computerized management
of commercial data. Those earlier applications were relatively simple compared to modern
database applications. Modern applications include highly sophisticated, worldwide enterprises.
All database applications share important common elements. The central aspect of the
application is the data itself. Today, the value of the data is crucial for corporations, even beyond
the importance of physical assets owned by the firm. For example, a bank or a social network
without the data would lose all their business value.
Database systems are used to manage collections of data that:
• have value;
• are large;
• are accessed by multiple users;
• are accessed by multiple applications.
The first database applications were very simple and had structured and formatted data. Today,
database applications may include data with complex relationships and a more variable structure.
Modern database systems exploit commonalities in the structure of data to gain efficiency but
also allow for weakly structured data and for data whose formats are highly variable. As a result,
a database system is a large, complex software system whose task is to manage a large, complex
collection of data and objects.
The key technique for the management of complexity is the concept of abstraction.
Abstraction allows a person to use a complex device or system, without having to know the details
of how that device or system is constructed. By providing a high level of abstraction, a database
system makes it possible for an enterprise to combine data of various types into a unified
repository of the information needed to run daily operations.
As the support for programmer interaction with databases improved, and computer hardware
performance increased even as hardware costs decreased, more sophisticated applications
emerged that brought database data into more direct contact not only with end users within an
enterprise but also with the general public.
In general terms there exist two modes in which databases are used:
• Online transaction processing
• Data analytics
Online transaction processing is where a large number of users use the database, with each
user retrieving relatively small amounts of data and performing small updates. For example, an
online advertiser needs to gather data about a specific user, create a profile for that user, match
this data with similar users, and use the data from similar users to focus and personalize the
advertisement.
Data analytics is the processing of data to draw conclusions, and to infer rules or implement
decision procedures, which are then used to drive business decisions. For example, the field of
data mining combines knowledge-discovery techniques invented by artificial intelligence
researchers and statistical analysts with efficient implementation techniques that enable them to
be used on extremely large databases.
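As an illustrative contrast between the two modes, the sketch below runs both kinds of query against a toy SQLite table; the schema, table name and data are invented for the example and are not part of OPTIMA.

```python
import sqlite3

# Toy advertising table; schema and data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE impressions (user_id TEXT, ad_id TEXT, clicked INTEGER)")
conn.executemany("INSERT INTO impressions VALUES (?, ?, ?)",
                 [("u1", "a1", 1), ("u1", "a2", 0), ("u2", "a1", 1), ("u3", "a1", 0)])

# Online transaction processing: one user touches a small slice of the data.
profile = conn.execute(
    "SELECT ad_id, clicked FROM impressions WHERE user_id = ? ORDER BY ad_id",
    ("u1",)).fetchall()

# Data analytics: an aggregate query scans the whole table to drive decisions
# (here, the click-through rate per advertisement).
ctr = conn.execute(
    "SELECT ad_id, AVG(clicked) FROM impressions GROUP BY ad_id ORDER BY ad_id"
).fetchall()
print(profile)  # [('a1', 1), ('a2', 0)]
```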
Purpose of Database Systems
One way to keep the information on a computer is to store it in operating-system files. To allow
users to manipulate the information, the system has several application programs that manipulate
the files. This typical file-processing system is supported by a conventional operating system. The
system stores permanent records in various files, which need different application programs to
extract records from, and add records to, the appropriate files. Keeping organizational information
in a file-processing system has the following major disadvantages:
• Data Redundancy and inconsistency; Since different programmers create the files over
a long period of time, the various files created might have different structures. Also, the
programs which created the files may be written in different programming languages, so
the information could be duplicated in several places (files). This redundancy leads to
higher storage and access costs. In addition, it may lead to data inconsistency; that is, the
various copies of the same data may no longer agree.
• Difficulty in accessing data; Conventional file-processing environments do not allow
required data to be retrieved in a convenient and efficient manner. Such environments do not
respond well to requests not anticipated by the original programmers, or to large numbers of
simultaneous requests. More responsive data-retrieval systems are required for general use.
• Data isolation; Because data is scattered in different files with different languages and
formats, writing a new application to retrieve the appropriate data is difficult.
• Integrity problems; The data values stored in the database must satisfy certain types of
consistency constraints.
• Atomicity problems; A computer system may fail, and in many applications the reliability
and consistency of data upon a failure is crucial. In those cases, the data must be restored
to the consistent state prior to the failure, which is difficult to do with a file-processing system.
• Concurrent-access anomalies; The ability to support a large number of users and to
update the data simultaneously without inconsistencies or delays.
• Security problems; Each user or application shall only be allowed to access the data
that they have the appropriate credentials for.
These difficulties, among others, resulted in the initial development of database systems and the
transition of file-based applications to database systems, back in the 1960s and 1970s.
Data Models
A database system is a group of interrelated data and a set of programs that allow users to access
and modify this data. The purpose of the database system is to provide users an abstract view of
data. The data model of a database is a group of conceptual tools for describing data, data
relationships, data semantics and consistency constraints. Different data models are summarised
below:
• Relational Model, uses a collection of tables to represent both data and the relationships
between the data, and is the most widely used data model. Each table (or relation) has
several columns, and each column has a unique name. Each table also contains records
of a particular type, and each record type defines a fixed number of attributes and fields.
• Entity-Relationship Model, uses a collection of basic objects, called entities, and
relationships between these objects. An entity is an “object” in the real world that is
uniquely identifiable, and distinguishable from other objects.
• Semi-Structured Data Model, permits the specification of data where individual data
items of the same type may have different sets of attributes. JSON and XML are typical
examples of semi-structured data models.
• Object-Based Data Model, an object-oriented data model that represents real-life
scenarios as objects. Objects with similar functionalities are grouped together and
linked to other, different objects.
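As a small illustration of the semi-structured model listed above, the JSON fragment below holds two records of the same type carrying different attribute sets; the field names are invented for the example and a consumer must tolerate missing attributes rather than assume a fixed schema.

```python
import json

# Two "switch" records of the same type with different attribute sets —
# the defining property of the semi-structured model. Fields are illustrative.
records = json.loads("""
[
  {"id": "SW-01", "type": "switch", "heaterInstalled": true},
  {"id": "SW-02", "type": "switch", "lastInspection": "2020-11-16", "throwTimeMs": 3200}
]
""")

for rec in records:
    # .get() returns None when the attribute is absent, instead of failing.
    print(rec["id"], rec.get("throwTimeMs"))
```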
Database Languages
A database system provides a Data-Definition Language (DDL) to specify the database schema
and a Data-Manipulation Language (DML) to express database queries and updates. Both
languages usually form parts of a single database language, such as the Structured Query
Language (SQL).
The DDL uses a set of statements to specify a database schema including the storage structure
and access methods used by the database system. The processing of DDL statements generates
some output that is placed in the data dictionary, which contains metadata: a centralised
repository of information about data, such as meaning, relationships to other data, origin, usage,
and format. The data dictionary is a special type of table that can be accessed and updated only
by the database system itself (not by a regular user). The database system
consults the data dictionary for required metadata before reading or modifying actual data.
The DML is a language that enables users to access or manipulate data as organized by the
appropriate data model. The types of access are retrieval, insertion, modification and deletion of
data. There are basically two types of data-manipulation language:
• Procedural DMLs, which require a user to specify what data are needed and how to get
those data.
• Declarative DMLs, which require a user to specify what data are needed without
specifying how to get those data.
A query is a statement requesting the retrieval of information. The portion of a DML that involves
information retrieval is called a query language.
To access the database, DML statements need to be sent from the host to the database where
they will be executed. This is normally done by using an application-program interface (set of
procedures) that can be used to send DML and DDL statements to the database and retrieve the
results.
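The interaction described above can be sketched with Python's built-in sqlite3 interface, which plays the role of such an application-program interface: DDL and DML statements are sent to the database and results are retrieved. The table and values are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define the schema (an illustrative table for train paths).
conn.execute("CREATE TABLE path (path_id TEXT PRIMARY KEY, origin TEXT, destination TEXT)")

# DML: insert and then modify data.
conn.execute("INSERT INTO path VALUES ('P1', 'Madrid', 'Sevilla')")
conn.execute("UPDATE path SET destination = 'Toledo' WHERE path_id = 'P1'")

# A declarative query: state *what* is wanted, not *how* to retrieve it.
result = conn.execute("SELECT destination FROM path WHERE path_id = 'P1'").fetchone()
print(result[0])  # Toledo
```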
Database Design and Engine
Database systems are designed to manage large quantities of information. These quantities of
information do not exist in isolation. They are a part of the operation of some enterprise whose
product may be information from the database or may be some device or service for which the
database plays only a supporting role. Database design mainly involves the design of the
database schema.
The initial phase of database design is to characterize the data requirements of the users. The
designer then chooses a data model and translates these requirements into a conceptual
schema of the database; the conceptual-design process includes deciding which key attributes
are to be captured in the database and organising these attributes to form the various tables.
The functional components of a database system can be divided into the storage manager, the
query processor components, and the transaction management component.
The storage manager handles a large amount of storage space, and it is important that it is able
to retrieve its data as fast as possible. Data is moved between disk storage and main memory as
required. Since the movement of data to and from the disk is slow relative to the speed of the
CPU, it is imperative that the database system structure minimizes the need to move data
between disk and main memory. SSDs are used for database storage because they are faster
than traditional disks but also more costly.
The query processor simplifies and facilitates the access to data. It allows the database users
to obtain a good performance while being able to work without understanding the details of the
database.
The transaction manager allows developers of applications to treat a sequence of database
accesses as if they were a single unit, and allows them to think at a higher level of abstraction
without needing to be concerned with lower-level details.
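A minimal sketch of this "single unit" behaviour, using SQLite through Python's sqlite3 module (the account names and amounts are invented): when one step of the sequence fails, the steps already performed are rolled back and no partial update remains visible.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100), ("B", 0)])
conn.commit()

# A transfer is two updates treated as one unit: debit one account, credit
# another. The credit target does not exist, so the whole unit is undone.
try:
    with conn:  # sqlite3 wraps this block in a single transaction
        conn.execute("UPDATE account SET balance = balance - 60 WHERE name = 'A'")
        cur = conn.execute("UPDATE account SET balance = balance + 60 WHERE name = 'X'")
        if cur.rowcount == 0:          # no such account: abort the transaction
            raise ValueError("unknown credit account")
except ValueError:
    pass  # the context manager has already rolled back both updates

balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)  # {'A': 100, 'B': 0} — the debit was undone
```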
Database Architecture
In Figure 1, we represent the architecture of a database system that runs on a centralized server
machine. The figure summarizes how different types of users interact with a database, and how
different components of a database engine are connected to each other. This architecture is
applicable to shared-memory server architectures, which have multiple CPUs and can use
parallel processing, with all the CPUs accessing a common shared memory. To scale up to larger
data volumes and higher processing speeds, parallel databases are designed to run on a cluster
consisting of multiple machines. If those machines are geographically separated, they are called
distributed databases.
Figure 1. Architecture of a database system that runs on a centralized server machine.
Considering the architecture of applications that use a database as their backend, database
applications can be partitioned into two or three tiers. Earlier-generation database
applications used a two-tier architecture, with the application at the client machine that invokes
database system functionality at the server machine through query language statements. Modern
database applications use a three-tier architecture, where the client machine acts as a front end
and does not contain any direct database calls. The front end communicates with an application
server that, in turn, communicates with a database system to access data. The business logic of
the application is embedded in the application server, instead of being distributed across multiple
clients.
ASSET STATUS REPRESENTATION
A review of the assets and asset states that will be handled by the TMS was carried out, to
consolidate the corresponding set of asset descriptions. The focus shall be on the operational
status of assets, hence on their time- or state-dependent attributes.
As the starting point for this task, Deliverable 9.1 of In2Rail [In2Rail D9.1] was reviewed, which
aims to describe a data representation for the status of assets within the railway infrastructure.
The steps followed by In2Rail, as described in Deliverable 9.1 [In2Rail D9.1] were:
1. Identification of the attributes needed to represent the operational status of a set of nine
railway assets relevant to the TMS (defined in other work packages within In2Rail project).
The considered assets were:
o Switch
o Crossing
o Track (Rail)
o Catenary
o Bridge
o Tunnel
o Embankments
o Line sections
o Level crossing
Each of them was described by characterising their attributes as being either static or
dynamic data.
2. A review of existing modelling approaches to the problem and production of
recommendations for modelling of assets as described in previous step. Different models
have been considered for both static and dynamic attributes: railML, railML2,
RailTopoModel/railML3, Register of Infrastructure (RINF) model, Infrastructure for Spatial
Information in Europe (INSPIRE), the Open Geospatial Consortium’s (OGC) Sensor Web
Enablement (SWE) framework, the Semantic Web for Earth and Environmental Terminology
(SWEET), Semantic Sensor Network (SSN).
3. Production of proof-of-concept examples showing the use of the proposed approach (level
crossing and switch).
A conclusion of Deliverable 9.1 of In2Rail [In2Rail D9.1] was that none of the reviewed existing
models was able to represent completely both static and dynamic attributes independently.
Therefore, a hybrid approach was proposed:
• railML to describe static elements
• OGC/sensorML to describe dynamic elements
For this document we will perform the first step of the above approach, as the review of existing
models and proof-of-concept examples will be addressed by other deliverables in the OPTIMA
project.
Asset Data Classification
As mentioned before, the data will be classified into two categories:
• Static data: related to static characteristics of the asset under examination, with values
that never change or change infrequently;
• Dynamic data: with values that change frequently and are related to operational state.
They are further classified as:
o Internal: measurements collected internally from the asset;
o Asset-related: measurements collected by sensors attached or strictly related to
the asset;
o External: measurements from external sensors;
o Diagnostic: measurements collected by a passing diagnostic train and other
diagnostic devices;
o Maintenance: data related to maintenance operations / actions on the asset.
The data can also be characterised by their criticality when dealing with Traffic Management
System (TMS) decisions. As this task has already been done by the IMs in OPTIMA WP5, we will
focus on the static/dynamic classification and on what information will be stored in the database
for historic queries.
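The classification above can be sketched as a single asset record; every field name below is an illustrative assumption, not part of the OPTIMA CDM.

```python
# A sketch separating an asset's static characteristics from its dynamic
# (operational) state, following the five dynamic sub-categories above.
switch = {
    "id": "SW-01",
    "static": {                                        # rarely changes
        "type": "single slip",
        "installationYear": 2004,
    },
    "dynamic": {                                       # operational state
        "internal": {"position": "reverse"},           # from the asset itself
        "assetRelated": {"motorCurrentA": 3.1},        # attached sensors
        "external": {"railTempC": 21.5},               # external sensors
        "diagnostic": {"wearMm": 0.4},                 # diagnostic train
        "maintenance": {"lastGreased": "2020-10-01"},  # maintenance actions
    },
}

# Historic queries would store timestamped snapshots of dynamic data only.
snapshot = {"assetId": switch["id"], "at": "2020-11-16T10:00:00Z",
            "state": switch["dynamic"]["internal"]}
print(snapshot["state"]["position"])  # reverse
```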
Asset Status Representation
The asset status representation will enable the exchange of key asset parameters (as captured
from on-asset sensors) and also the status/availability of the asset. It is important to note that the
data from the IL, as with data from other asset data systems, is expected to be mapped to the
OPTIMA CDM.
As mentioned in Deliverable 9.1 of In2Rail [In2Rail D9.1], the selection of an appropriate
representation for asset status should start with a consideration of the various aspects of the
domain that need to be captured. The inter-relationship of static and dynamic data in a single
domain is usually avoided in data models, because models designed for dynamic data need to
result in compact but context-rich representations that can convey a specific sub-set of
information rapidly and efficiently (in terms of bandwidth usage), while models intended for use
with more static data can afford to be more verbose in exchange for greater flexibility in the range
of content that can be represented. In the rail industry, this distinction is most obvious in fields
such as operations, where more static data (e.g. seasonal timetables) is represented in XML
based formats, including railML, but more dynamic data, such as the movements of vehicles
between track circuit bays, is frequently streamed as JSON or similar, with the streaming data
referencing elements of the more complex static model, but not directly including the detailed
information.
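A minimal sketch of the pattern just described: a compact dynamic message, streamed as JSON, references an element of the verbose static model by identifier instead of repeating it. The XML fragment is simplified and only railML-like, and the identifiers are invented.

```python
import json
import xml.etree.ElementTree as ET

# A railML-like static fragment (simplified and illustrative, not real railML).
static_xml = """
<infrastructure>
  <trackCircuit id="tc_101" name="TC 101" length="850"/>
</infrastructure>
"""
circuits = {tc.get("id"): tc.attrib
            for tc in ET.fromstring(static_xml).iter("trackCircuit")}

# A dynamic occupancy message referencing the static element by id only.
event = json.loads(
    '{"trackCircuitRef": "tc_101", "occupied": true, "ts": "2020-11-16T10:00:00Z"}')
detail = circuits[event["trackCircuitRef"]]
print(detail["name"], event["occupied"])  # TC 101 True
```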
In the following sections, some of the key concepts involved in the description of asset status will
be introduced and presented.
5.2.1 Infrastructure Type
The type of infrastructure being studied is of vital importance to the asset status representation.
In Deliverable 5.1 of OPTIMA [OPTIMA D5.1], a detailed list is given of the infrastructure elements
needed for the rail business and external services within the OPTIMA demonstrator. Additional
data structures have been added in order to complete the railway system infrastructure:
• Track
• Route body
• Signal
• Switch/Point
• Bridge
• Tunnel
• Embankment
• Station
• Axle counter
• Track circuits
• Level crossing
• Gauge changer
This data will be static and should be stored in the Geographical Database of OPTIMA.
5.2.2 Physical Location
In the rail industry the description of a specific location has always been challenging, with several
systems in use, each based on a different reference. Generally speaking, these systems can be
broken down into two main groupings: absolute and relative positions.
Infrastructure-absolute positions are the easiest of the two groups to explain, and they represent
specific points on the Earth’s surface described by a coordinate system (e.g., an OS grid
reference, or a WGS-84 position), normally augmented with a specified projection to account for
differing height profiles of the ground and the non-spherical shape of the Earth. Absolute
geographical positions have, historically, been difficult to calculate, requiring either surveying
equipment or a map and line-of-sight to several visual references in the surroundings. Both of
these systems were inconvenient in the early days of the railways, particularly in cuttings or
tunnels, and so relative positioning systems (see below) were adopted by the industry. The arrival
of Global Navigation Satellite Systems (GNSS), and the subsequent inclusion of positioning
hardware in smartphones and tablets, has made absolute positioning a much more practical
system for use by the railways in recent years. Modern infrastructure management tools use
absolute positions provided by the United States’ GNSS, the Global Positioning System (GPS),
to locate maintenance teams on the lineside, and vehicles are increasingly equipped with GPS
alongside other detection / positioning technologies.
Infrastructure-relative positions were adopted by the early rail industry as a convenient means of
describing locations on the infrastructure and are normally given as a distance along a known
route or track. On linear assets, such as the railway, relative positioning is a quick way to establish
a location that can be easily determined based only on the distance a vehicle has moved along a
known track. When working with data from outside the railway however, or data tagged using
other positioning systems, relative positions quickly become difficult to work with as complex
conversions are needed to switch position references between one system and another.
All data structures of the geographical database should have both references, absolute and
relative position. For dynamic assets like trains, at least a relative position should be provided; if
not an exact one, then at least a position relative to other elements (track circuits, stations, etc.).
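As a sketch of the conversion problem mentioned above, the function below derives an absolute WGS-84 position from an infrastructure-relative position (a distance along a track) by linear interpolation between two surveyed reference points; all coordinates and mileages are invented, and a real conversion would also account for track curvature and projection.

```python
# Convert a relative position (metres along a track) to an approximate
# absolute WGS-84 position using two surveyed reference points.
def relative_to_absolute(offset_m, ref_a, ref_b):
    """ref_a / ref_b: (mileage_m, lat, lon) survey points bracketing the offset."""
    (m0, lat0, lon0), (m1, lat1, lon1) = ref_a, ref_b
    t = (offset_m - m0) / (m1 - m0)      # fraction of the way from A to B
    return (lat0 + t * (lat1 - lat0), lon0 + t * (lon1 - lon0))

a = (1000.0, 40.4168, -3.7038)           # invented survey point at km 1.0
b = (2000.0, 40.4258, -3.6938)           # invented survey point at km 2.0
lat, lon = relative_to_absolute(1500.0, a, b)
print(round(lat, 4), round(lon, 4))  # 40.4213 -3.6988
```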
5.2.3 Dynamic State
The dynamic state of an asset is the key contribution of the asset status representation to the
TMS and will rely on well described data from sensors in the field. As mentioned before in this
chapter, the range of formats for sensor data currently being used within the industry is
comparatively broad (XML, JSON, YAML), even for data that are considered “critical” to the traffic
management process. This means that the data model for the dynamic state data will need to be
flexible, capable of specifying a variety of encodings as the specific dataset requires, and ideally
able to handle those data in a decoupled way, to avoid passing potentially very sizable data
around the traffic management system unnecessarily.
In OPTIMA, the dynamic state of each asset (where necessary) will be stored in the database with
one or more attributes.
5.2.4 Actionable Status
The delivery of actionable status information from asset condition is an important element of the
integration of the asset status data with the TMS. Actionable data can be thought of as a “go / no
go” type message that describes whether the asset is currently available for use. Actionable data
will need to be derived based on the combination of sensor data, knowledge of the asset and its
behaviour, and the business rules of the owning IM.
As actionable state is a dynamic state, the availability of the asset should be stored as an asset
attribute.
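The derivation of the go / no-go flag from sensor data and the owning IM's business rules could be sketched as follows. The sensor fields, threshold and rule are purely hypothetical examples, not OPTIMA specifications:

```python
def derive_actionable_status(sensor_data, max_temp_c=60.0):
    """Combine raw sensor data with an owning-IM business rule into a
    go / no-go flag to be stored as an asset attribute. The field
    names and the temperature rule are hypothetical examples."""
    if sensor_data.get("fault", False):
        return "no-go"                 # sensor reports an asset fault
    if sensor_data.get("temperature_c", 0.0) > max_temp_c:
        return "no-go"                 # illustrative IM rule: overheated asset
    return "go"

assert derive_actionable_status({"temperature_c": 35.0}) == "go"
assert derive_actionable_status({"fault": True}) == "no-go"
```

In practice the rule set would be configurable per IM, since different infrastructure managers apply different availability criteria to the same asset condition.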
6 OPTIMA DATABASE REQUIREMENTS
First, we will offer a list of general requirements that all OPTIMA databases attached to the IL
should fulfil. Then we will enumerate the databases needed and propose a list of dedicated
requirements for each of them. It's important to note that we will be discussing requirements
related to the databases, which will be used mainly for storing static data and serving historic
queries, but will not be used to exchange critical data between TMS systems and services.
6.1 General Requirements
As a reference, we are going to use the “TAF TAP TSI” programme, a Technical Specification for
Interoperability (TSI) relating to Telematics Applications for Freight (TAF) and Passengers (TAP)
Services. Its aim is to define the data exchange between individual IMs and RUs.
The information that is contained in and underpins the messages set out in the TAF TAP TSI
regulations is used to support business processes for Path Management, Train Preparation and
Train Running. In rail, TAF and TAP systems comprise:
• TAF: including information systems (real-time monitoring of freight and trains), marshalling
and allocation systems, reservation, payment and invoicing systems, management of
connections with other modes of transport and production of electronic accompanying
documents
• TAP: including systems providing passengers with information before and during the
journey, reservation and payment systems, luggage management and management of
connections between trains and with other modes of transport
For the databases mentioned in the TSI regulations, a list of requirements is offered; we will adapt
those as general requirements for the various OPTIMA databases.
1. Authentication: must support the authentication of users of the systems before they can
gain access to the database.
2. Security: must support the security aspects in the context of controlling access to the
database. The possible encryption of the database contents itself is not required.
3. Consistency: shall support the Atomicity, Consistency, Isolation and Durability (ACID)
principle.
4. Access Control: must allow access to the data to users or systems that have been granted
permission. The access control shall be supported down to a single attribute of a data
record and support configurable, role-based access control for insertion, update or
deletion of data records.
5. Tracing: must support logging all actions applied to the database to allow for tracing the
details of the data entry (who, what and when did the contents change).
6. Lock strategy: must implement a locking strategy which allows access to the data even
when other users are currently editing records.
7. Multiple access: must support the accessing of data by several users and systems
simultaneously.
8. Reliability: the reliability of a database must support the required availability.
9. Availability: a database must have an availability on demand of at least 99,9 %.
10. Maintainability: the maintainability of the database must support the required availability.
11. Safety: databases themselves are not safety-related, hence safety aspects are not
relevant. This is not to be confused with the fact that the data (e.g. incorrect or outdated
data) may have an impact on the safe operation of a train.
12. Compatibility: must support a data manipulation language that is widely accepted, such
as SQL or XQL.
13. Import facility: shall provide a facility that allows the import of formatted data that can be
used to fill the database instead of manual input.
14. Export facility: shall provide a facility that allows the contents of the complete database,
or part of it, to be exported as formatted data.
15. Mandatory Fields: must support mandatory fields that are required to be filled before the
relevant record is accepted as input to the database.
16. Plausibility Checks: must support configurable plausibility checks before accepting the
insertion, update or deletion of data records.
17. Response times: must have response times that allow users to insert, update or delete
data records in a timely manner.
18. Performance aspects: the reference files and databases shall support in a cost-effective
manner the queries necessary to allow the effective operation of all applications that use
the OPTIMA demonstrator.
19. Capacity aspects: a database shall support the storage of the relevant data for OPTIMA.
It shall be possible to extend the capacity by simple means (i.e. by adding more storage
capacity and computers). The extension of the capacity shall not require replacement of
the subsystem.
20. Historical data: A database shall support the management of historical data, in the sense
of making available data that has already been transferred into an archive.
21. Back-up strategy: A back-up strategy shall be in place to ensure that the complete
database contents for up to a 24-hour period can be recovered.
22. Commercial aspects: A database system used shall be available commercially off-the-
shelf (COTS-product) or be available in the public domain (Open Source).
The above requirements must be handled by a standard DBMS. The general workflow is a
request/response mechanism, where an interested party requests information from the database
through a Common Interface (CI). The DBMS will respond to this request either by providing the
requested data or by responding that no data can be made available (no such data exists, or
access is refused due to access control).
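A minimal sketch of this request/response behaviour, including the role-based access control of requirement 4, might look as follows. The role names, dataset keys and response codes are illustrative only:

```python
# Illustrative role table: which roles may read which dataset.
ACCESS_CONTROL = {"timetable": {"dispatcher", "planner"}}

def handle_request(db, user_roles, query_key):
    """Request/response over the Common Interface: either return the
    requested data, or respond that no data can be made available
    (no such data exists, or access is refused by access control)."""
    allowed = ACCESS_CONTROL.get(query_key, set())
    if not user_roles & allowed:
        return {"status": "refused"}
    if query_key not in db:
        return {"status": "no-data"}
    return {"status": "ok", "data": db[query_key]}

response = handle_request({"timetable": ["IC-501"]}, {"dispatcher"}, "timetable")
assert response == {"status": "ok", "data": ["IC-501"]}
```

A production implementation would additionally log each request (requirement 5: who, what and when) and enforce access control down to single attributes of a record.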
6.2 Specific Requirements
We will be using mainly Deliverable 6.1 of X2Rail-2 [X2Rail-2 D6.1] as the source for the data
structures needed for the OPTIMA demonstrator. This will be analysed further in other OPTIMA
deliverables. The specific data structures to be used will depend on:
• The CDM definitions, understood as the data structures needed;
• The specific needs stated by collaborative projects through COLA agreements;
• The information made available by the IMs.
6.2.1 Geographical Database
The geographical database should include all the data related to the infrastructure of the proposed
track as well as the railway elements along that track. This includes the static information that
should be inserted into the database and some of the dynamic data uploaded to the IL interface
from the business clients/services for the purpose of historic queries.
The infrastructure section has been elaborated based on topology and positioning concepts from
RailTopoModel v1.0. Net elements are the basic elements that make up the infrastructure and
they can have relations with other elements. Some of the principal data structures to consider
are:
• Positioned relations: Consist of several variables which indicate the universal IDs of the
start and end points of a track and its potential movement direction.
• Linear locations: A linear location is defined based on partial or complete reference
object(s) – associated net elements, e.g. a route is defined based on complete or partial
track sections.
• Spot location: Instance with no dimensions, used for control points or location of signals.
• Positioning by intrinsic coordinate: Location along a chosen linear net element, value
between 0 and 1, corresponding to beginning and end of the element. To the intrinsic
coordinate, a horizontal and vertical offset can be added, forming the linear location.
• Line / track section: A line section or track section starts or ends at block section limits,
buffer stops and/or track joints. A line section or track section starts at intrinsic coordinate
0 and ends at 1.
• Route body: The trajectory resulting from the interlocking and signalling systems via the
associated movable point machines; a route ensures that a train can be moved from one
track to another at junction areas.
• Point: Includes points and other moveable elements allowing a train to move from one
track to another.
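For instance, positioning by intrinsic coordinate can be illustrated with a small helper that maps the 0..1 coordinate to a metric offset along the element. This is a sketch of the concept only, not a RailTopoModel API:

```python
def linear_location(intrinsic, element_length_m,
                    lateral_offset_m=0.0, vertical_offset_m=0.0):
    """Map an intrinsic coordinate (0 = start, 1 = end of a linear net
    element) to a metric offset along the element; horizontal and
    vertical offsets can be added, forming the linear location."""
    if not 0.0 <= intrinsic <= 1.0:
        raise ValueError("intrinsic coordinate must lie in [0, 1]")
    along_m = intrinsic * element_length_m
    return along_m, lateral_offset_m, vertical_offset_m

# A spot location (e.g. a signal) a quarter of the way along an 800 m
# track section, offset 3 m laterally from the track centre line:
assert linear_location(0.25, 800.0, lateral_offset_m=3.0) == (200.0, 3.0, 0.0)
```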
The data relating to the infrastructure should be defined with at least the following elements:
• Switch.
• Crossing.
• Track.
• Bridge.
• Tunnel.
• Embankments.
• Line sections.
• Stations.
• Level crossing.
• Depots/yards.
• Access points/nodes.
For OPTIMA, the geographical data will be mainly coming from RFI. Because some of this data
is not in a parsable format, UIC is developing an application allowing railway-related data to be
imported from OpenStreetMap. The automatic import allows the creation of a network
representation at track level, conforming to RailSystemModel 1.2. The network representation covers:
- Topology (elementary track sections and their connections);
- Track layout: geographic positioning in the WGS84 reference system: 2D alignment,
approximated by polylines;
- Synthesized functional information regarding turnouts: the import algorithm analyses the
geometry to determine turnout orientation (heel / toe, through track / diverted track).
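Since the track layout is approximated by polylines of WGS84 coordinates, the length of a track section can be estimated from its polyline, for example with the haversine great-circle formula. This is a simplified sketch assuming a spherical Earth, not part of the UIC import tool:

```python
import math

def polyline_length_m(points):
    """Approximate length of a track polyline given as (lat, lon) WGS84
    pairs, using the haversine great-circle distance (spherical Earth)."""
    R = 6371000.0  # mean Earth radius in metres
    total = 0.0
    for (lat1, lon1), (lat2, lon2) in zip(points, points[1:]):
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        total += 2 * R * math.asin(math.sqrt(a))
    return total

# One degree of latitude is roughly 111 km:
length = polyline_length_m([(0.0, 0.0), (1.0, 0.0)])
```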
As an example, a snapshot of a cross-over (in yellow), halfway between Vallecrosia and
Bordighera (double track line, in red), rendered in Google Earth is shown in Figure 2.
Figure 2. Snapshot of a cross-over halfway between Vallecrosia and Bordighera (double track line, in red, source: Google Earth).
Because we might be getting the data from different sources, the importance of consistency in the
general requirements must be emphasized. The data coming from RFI should be aligned with the
data provided by other sources.
6.2.2 Timetable Database
The timetable database will include the data defining all planned train and rolling-stock
movements which will take place on the relevant infrastructure during the period for which it is in
force. Ideally, the detail will include the timings at every major station, junction, or other significant
location along the train's journey (including additional minutes inserted to allow for engineering
work or train maintenance) and which platforms are used at certain stations.
In Deliverable 6.1 of X2Rail-2 [X2Rail-2 D6.1], the following entities related to the timetable of each
train are identified, according to the life cycle of a train during operation:
• Scheduled timetable: Defined timetable for the current day of operation, which is
established before the departure of the train. It is usually provided by an offline system in
charge of the long-term operation. It provides the established times of arrival and
departure for each control point defined for a train. This timetable information is provided
to third party systems such as Passenger Information Systems in stations. This static
information should be loaded from the log file provided by RFI prior to the running of the
demonstrator.
• Target timetable: During the train running, due to unscheduled events or traffic needs, it
may be necessary to modify the scheduled timetable. Re-planning actions will be applied
over the scheduled timetable by the user of the TMS with the aim of minimizing the
impact of these events on the traffic evolution. Thus, the target timetable indicates the
running the train is intended to accomplish. This timetable is used within the TMS to
minimize the deviation of the real timetable with respect to the scheduled timetable, so it will
be dynamic data.
• Audited timetable: It indicates the real times at which the arrivals and departures of a train
take place. This will be the dynamic data of the timetable database. Information coming
from the RFI interlocking system relates to track circuit occupation, whereas the
timetable audit contains information related to control points (station, platform,
relevant point over the track, etc.). This table is dynamic and has to be filled in by the TMS
after processing the data provided by the interlocking system.
• Forecasted timetable: According to the status of the traffic and the infrastructure, the
forecasted timetable is the estimation of arrival times for the control points of a train not
yet reached. This timetable evolves during the running of the train, will be calculated
by TMS business applications, and is dynamic data in the timetable database.
A timetable for a train running on a specific route will be formed by an ordered set of control
points with information related to the datetime when each event is
scheduled/targeted/audited/forecasted and information related to the event (start, end, stop or
pass of a train).
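Such an ordered set of control points could be represented as follows. The field names, station names and the delay helper are illustrative only:

```python
from datetime import datetime

# Illustrative control-point records: one row per (control point, event),
# with one datetime per timetable type (scheduled/target/audited/forecast).
timetable = [
    {"point": "Vallecrosia", "event": "pass",
     "scheduled": datetime(2021, 2, 2, 10, 0),
     "audited":   datetime(2021, 2, 2, 10, 3)},
    {"point": "Bordighera", "event": "stop",
     "scheduled": datetime(2021, 2, 2, 10, 10),
     "audited":   None},   # control point not yet reached: forecast instead
]

def delay_minutes(row):
    """Deviation of the audited time from the scheduled time, if audited."""
    if row["audited"] is None:
        return None
    return (row["audited"] - row["scheduled"]).total_seconds() / 60.0

assert delay_minutes(timetable[0]) == 3.0
assert delay_minutes(timetable[1]) is None
```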
6.2.3 Vehicle Database
The data related to trains included in this database have been conceptually classified according
to the following items.
• Train Identification: It includes the general data and the additional information regarding
the numbering and the service of the train, as well as its priority in rail operation. Initially,
a train includes within a single train identifier the information related not only to a
commercial train mission but also the additional movements of the same, such as shunting
movements or movements without passengers/load.
o The general data of a train should unequivocally define a train through a train
identifier. This includes the characterization of the train according to a commercial
number, and the structure allows the use of any kind of train identifier managed by
the TMS.
o Additional Features: This structure includes information additional to the timetable data which could be of interest for traffic management.
• Train Formation: Scheduled, target and audited formation data, including details of
vehicles and containers.
o Scheduled data: Defined rolling stock for the current day of operation established
before the departure of the train. If the information about the specific vehicles to
be used is not available yet, scheduled data may use only the quantity and type of
vehicles.
o Target data: During the train running, due to unscheduled events or traffic needs,
the scheduled rolling stock may need to be modified or completed with the identifiers
of the specific vehicles.
o Audited data: It indicates the specific rolling stock that has been used. The
intention here is to collect or confirm the rolling stock which was actually used.
• Train Staff: Scheduled, target and audited crew data.
• Train Path: Audited train path based on current location list data, and expected detailed
train path, including the effects on the train of Temporary Speed Restrictions (TSRs) and
Movement Authorities.
• Train Operation: Planned and energy consumption data, incidents and audited operation
modes.
As with the rest of the data tables in Appendix A, a further analysis will be needed in order to
specify what data will be available in the OPTIMA demonstrator.
7 QUALITY OF SERVICE (QOS)
7.1 General Concepts
IT services are widely employed for building loosely coupled distributed systems, such as e-
commerce, e-government, automotive systems, integration layers and multimedia services. QoS
is generally used to describe the non-functional characteristics of IT services. QoS management
of IT services refers to the activities in QoS specification, evaluation, prediction, aggregation, and
control of resources to meet the requirements from end-to-end users and applications. Different
IT service QoS properties can be divided into user independent QoS properties and user
dependent QoS properties. Values of the user independent QoS properties (e.g., latency,
bandwidth) are usually advertised by service providers and are identical for different users. On
the other hand, values of the user dependent QoS properties (e.g., failure probability, response
time, throughput) can vary widely for different users influenced by the unpredictable Internet
connections and the heterogeneous user environments.
The following list shows the most commonly used QoS properties:
Database QoS:
• Availability: the percentage of time that IT service is operating during a certain time
interval.
• Popularity: the number of received invocations of an IT service during a certain time
interval.
• Data size: the size of the dataset returned in response to a service invocation.
• Failure probability: the probability that a request has failed. Failure probability and failure
rate are interchangeable.
• Read and write performance: these are the finite amount of time that it takes to read or
write a finite amount of data.
Network QoS:
• Delay (or latency): This is the finite amount of time that it takes a packet to reach the
receiving endpoint after being sent from the sending endpoint.
• Jitter (or delay variation): This is the variation, or difference, in the end-to-end delay in
arrival between sequential packets.
• Packet drops: This is a comparative measure of the number of packets faithfully sent and
received to the total number sent, expressed as a percentage.
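These properties can be computed directly from measurements. The following sketch, with illustrative helper names, implements availability, jitter and packet drops as defined above:

```python
def availability_pct(up_seconds, interval_seconds):
    """Percentage of time the IT service was operating in the interval."""
    return 100.0 * up_seconds / interval_seconds

def jitter_ms(delays_ms):
    """Mean absolute variation in end-to-end delay between
    sequential packets, given per-packet delays in milliseconds."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

def packet_drop_pct(sent, received):
    """Packets lost as a percentage of the total number sent."""
    return 100.0 * (sent - received) / sent

assert abs(availability_pct(999.0, 1000.0) - 99.9) < 1e-9
assert jitter_ms([10.0, 12.0, 11.0]) == 1.5
assert packet_drop_pct(1000, 990) == 1.0
```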
7.1.1 Traffic Class and QoS Functions
Traffic classes are categories of traffic (packets) that are grouped on the basis of similarity. Such
groups of traffic are called class maps. Classifying network traffic allows a user to enable a QoS
strategy in a network, helping the user identify the kinds of traffic present and treat some
types of traffic differently than others. Identifying and categorizing network traffic into traffic
classes (that is, classifying packets) enables a user to handle different types of traffic by
separating network traffic into different categories and allocating network resources to deliver
each type of traffic with the level of performance most appropriate to that traffic, within the
constraints of the available resources.
A user can place network traffic with a specific Internet Protocol (IP) address precedence into one
traffic class, and place traffic with a specific Differentiated Services Code Point (DSCP) value into
another traffic class. Each traffic class can be given a different QoS treatment, which is configured
in a policy map.
Traffic classification and traffic marking are closely related and can be used together. Traffic
marking can be viewed as an additional action, specified in a policy map, to be taken on a traffic
class. Traffic classification allows a user to organize network traffic into traffic classes on the basis
of whether the traffic matches specific criteria.
After the traffic is organized into traffic classes, traffic marking allows you to mark (that is, set or
change) an attribute for the traffic belonging to that specific class.
The match criteria used by traffic classification are specified by configuring a match command in
a class map. The marking action taken by traffic marking is specified by configuring a set
command in a policy map. These class maps and policy maps are configured using the Modular QoS CLI (MQC).
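The classify-then-mark flow can be sketched as follows. The class names, match criteria and DSCP values (46 is the standard EF code point, 8 is CS1) are illustrative, and the dictionaries merely mimic class maps and policy maps; real devices configure this via the MQC, not code:

```python
# Class maps hold match criteria; the policy map attaches a set action
# to each class (names and thresholds are hypothetical examples).
CLASS_MAPS = {
    "VOICE": lambda pkt: pkt.get("cos") == 5,     # match cos 5
    "BULK":  lambda pkt: pkt.get("dscp", 0) < 8,  # match low DSCP
}
POLICY_MAP = {
    "VOICE": ("dscp", 46),   # set dscp ef
    "BULK":  ("dscp", 8),    # set dscp cs1
}

def classify_and_mark(pkt):
    """Assign the packet to the first matching class, then apply that
    class's marking action; unmatched traffic falls into class-default."""
    for name, match in CLASS_MAPS.items():
        if match(pkt):
            field, value = POLICY_MAP[name]
            pkt[field] = value
            return name, pkt
    return "class-default", pkt

assert classify_and_mark({"cos": 5}) == ("VOICE", {"cos": 5, "dscp": 46})
assert classify_and_mark({"dscp": 3}) == ("BULK", {"dscp": 8})
```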
Traffic marking is a method used to identify certain traffic types for unique handling, effectively
partitioning network traffic into different categories. After the network traffic is organized into
classes by traffic classification, traffic marking allows you to mark (that is, set or change) a value
(attribute) for the traffic belonging to a specific class. For instance, a user may want to change
the Class of Service (CoS) value from 2 to 1 in one class, or to change the DSCP value from 3 to
2 in another class. In this discussion, these values are referred to as attributes.
Traffic marking allows users to refine the attributes for traffic in their network. This increased
granularity helps single out traffic that requires special handling and, thus, helps to achieve
optimal application performance.
Traffic marking also enables a user to determine how traffic will be treated, based on how the
attributes for the network traffic are set, then to segment network traffic into multiple priority levels
or classes of service based on those attributes, as follows:
• Traffic marking is often used to set the IP precedence or IP DSCP values for traffic entering
a network. Networking devices within your network can then use the newly marked IP
precedence values to determine how traffic should be treated. For example, voice traffic
can be marked with a particular IP precedence or DSCP, and a queueing mechanism can
then be configured to put all packets of that mark into a priority queue.
• Traffic marking can be used to identify traffic for any class-based QoS feature (any feature
available in policy-map class configuration mode, although some restrictions exist).
• Traffic marking can be used to assign traffic to a QoS group within a device. The device
can use the QoS groups to determine how to prioritize traffic for transmission. The QoS
group value is usually used for one of the two following reasons:
1. To leverage a large range of traffic classes. The QoS group value has 100 different
individual markings, as opposed to DSCP and IP precedence, which have 64 and
8, respectively.
2. If changing the IP precedence or DSCP value is undesirable.
• If a packet (for instance, in a traffic flow) that needs to be marked to differentiate user-
defined QoS services is leaving a device and entering a switch, the device can set the
CoS value of the traffic, because the switch can process the Layer 2 CoS header marking.
Alternatively, the Layer 2 CoS value of the traffic leaving a switch can be mapped to the
Layer 3 IP or MPLS value.
• Weighted Random Early Detection (WRED) uses precedence values or DSCP values to
determine the probability that the traffic will be dropped. Therefore, the Precedence and
DSCP can be used in conjunction with WRED.
The differences between classification and marking are shown in Table 2:
Table 2. Comparison of traffic classification and traffic marking.
• Goal: Traffic classification groups network traffic into specific traffic classes on the basis
of whether the traffic matches the user-defined criterion; traffic marking, after the network
traffic is grouped into traffic classes, modifies the attributes for the traffic in a particular
traffic class.
• Configuration mechanism: Both use class maps and policy maps in the MQC.
• Command-line interface (CLI): In a class map, traffic classification uses match commands
(for example, match cos) to define the traffic matching criteria; traffic marking uses the
traffic classes and matching criteria specified by traffic classification and, in addition, uses
set commands (for example, set cos) in a policy map to modify the attributes for the
network traffic.
7.1.2 Marking and Mapping QoS Markings
DiffServ is a QoS model where network elements, such as routers and switches, are configured
to service multiple classes of traffic with different priorities. Network traffic must be divided into
classes based on a customer's requirements.
For example, voice traffic can be assigned a higher priority than other types of traffic. Packets are
assigned priorities using DSCP for classification. DiffServ also uses per-hop behaviour to apply
QoS techniques, such as queuing and prioritization, to packets.
Network architecture also affects how an organization implements QoS. An MPLS network
includes a private link that offers end-to-end QoS along a single path. SLAs for MPLS specify
bandwidth, QoS, latency and uptime. However, MPLS can be expensive for organizations.
Software-defined WAN (SD-WAN) uses multiple connectivity types, including MPLS and
broadband. SD-WAN monitors the state of current network connections for performance issues
and uses its multiple connectivity types to fail over based on state. For example, if packet loss
exceeds a certain level on one connection, SD-WAN capabilities will look for an alternative
connection.
Layer 2 ISL frame headers have a 1-byte User field that carries an IEEE 802.1p class of service
(CoS) value in the three least-significant bits. On ports configured as Layer 2 ISL trunks, all traffic
is in ISL frames. Layer 2 802.1Q frame headers have a 2-byte Tag Control Information field that
carries the CoS value in the three most-significant bits, which are called the User Priority bits. On
ports configured as Layer 2 802.1Q trunks, all traffic is in 802.1Q frames except for traffic in the
native VLAN. Other frame types cannot carry Layer 2 CoS values. Layer 2 CoS values range from
0 for low priority to 7 for high priority.
Layer 3 IP packets can carry either an IP precedence value or a DSCP. QoS supports the use of
either value because DSCP values are backward-compatible with IP precedence values. IP
precedence values range from 0 to 7. DSCP values range from 0 to 63.
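The backward compatibility follows from the encoding: IP precedence occupies the three most-significant bits of the DSCP field, so a precedence value can be recovered from any DSCP value with a simple shift:

```python
def dscp_to_precedence(dscp):
    """IP precedence is carried in the top three bits of the 6-bit DSCP
    field, which is why DSCP values are backward-compatible with
    IP precedence values."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be in 0..63")
    return dscp >> 3

assert dscp_to_precedence(46) == 5   # EF maps to precedence 5 (critical)
assert dscp_to_precedence(0) == 0    # best effort
```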
7.1.3 Policing and Shaping Traffic
To support QoS in a network, traffic entering the service provider network needs to be policed on
the network boundary routers to ensure that the traffic rate stays within the service limit. Even if
a few routers at the network boundary start sending more traffic than what the network core is
provisioned to handle, the increased traffic load leads to network congestion. The degraded
performance in the network makes it difficult to deliver QoS for all the network traffic.
Traffic policing functions (using the police algorithms) and shaping functions (using the traffic
shaping algorithms) manage the traffic rate but differ in how they treat traffic when tokens are
exhausted. The concept of tokens comes from the token bucket scheme, a traffic metering
function. Table 3 summarises the differences between policing and shaping.
Table 3. Differences between policing and shaping functions.
• Traffic handling: Policing sends conforming traffic up to the line rate and allows bursts;
shaping smooths traffic and sends it out at a constant rate.
• Token exhaustion: When tokens are exhausted, policing takes action immediately;
shaping buffers packets and sends them out later, when tokens are available (a class with
shaping has a queue associated with it which is used to buffer the packets).
• Units of configuration: Policing has multiple units of configuration (bits per second,
packets per second and cells per second); shaping has only one, bits per second.
• Actions: Policing has multiple possible actions associated with an event, marking and
dropping being examples of such actions; shaping does not have the provision to mark
packets that do not meet the profile.
• Direction: Policing works for both input and output traffic; shaping is implemented for
output traffic only.
• TCP behaviour: With policing, TCP sends at line speed but adapts to the configured rate
when a packet drop occurs by lowering its window size; with shaping, TCP can detect that
it has a lower-speed line and adapt its retransmission timer accordingly, which results in
less scope for retransmissions and is TCP-friendly.
The QoS policing feature can also be used with the priority feature to restrict priority traffic. If the
rate is exceeded, a specific action is taken as soon as the event occurs. The policing algorithms
are listed below:
1. Single-rate two-colour policing: this algorithm is implemented using a single rate and
single token bucket. The traffic is identified as one of two states (two colour), and the
policer will check whether there are enough tokens in the bucket to allow the packet to get
through.
2. Single-rate three-colour policing: this algorithm improves on the single-rate two-colour
policer by using two token buckets. The traffic is identified as one of three states (three
colours), which decide whether the traffic flow is conforming to, exceeding or violating the
committed information rate, and then which packets will be delivered.
3. Dual-rate three-colour policing: this policer addresses the peak information rate (PIR),
which is unpredictable in the single-rate three-colour model. Furthermore, the dual-rate
three-colour marker/policer allows for a sustainable excess burst (negating the need to
accumulate credits to accommodate temporary bursts) and allows for different actions for
the traffic exceeding the different burst values.
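The single-rate two-colour algorithm described in item 1 can be sketched with a single token bucket as follows; the parameter values are illustrative only:

```python
class TwoColourPolicer:
    """Single-rate two-colour policer sketch: one token bucket refilled
    at the committed rate; a packet conforms if enough tokens remain,
    otherwise the exceed action is taken immediately (e.g. drop)."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes    # bucket starts full
        self.last = 0.0

    def offer(self, now, size_bytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return "conform"
        return "exceed"              # action taken at once: drop or re-mark

policer = TwoColourPolicer(rate_bps=8000, burst_bytes=1500)  # 1 kB/s refill
assert policer.offer(0.0, 1500) == "conform"   # consumes the full burst
assert policer.offer(0.0, 1) == "exceed"       # bucket empty: dropped at once
assert policer.offer(1.0, 1000) == "conform"   # 1 s refill adds 1000 bytes
```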
Shaping is the process of imposing a maximum rate of traffic, while regulating the traffic rate in
such a way that the downstream switches and routers are not subject to congestion. Shaping in
the most common form is used to limit the traffic sent from a physical or logical interface.
Shaping has a buffer associated with it that ensures that packets which do not have enough
tokens are buffered as opposed to being immediately dropped. The number of buffers available
to the subset of traffic being shaped is limited and is computed based on a variety of factors. The
number of buffers available can also be tuned using specific QoS commands. Packets are
buffered as buffers are available, beyond which they are dropped. Main approaches for shaping
are listed below:
1. Average rate shaping: this traffic shaper typically delays excess traffic using a buffer, or
queueing mechanism, to hold packets and shape the flow when the data rate of the source
is higher than expected. It holds and shapes traffic to a particular bit rate by using the
token bucket mechanism.
2. Hierarchical shaping: the traffic is classified based on a framework that provides a clear
separation between a classification policy and the specification of other parameters that
act on the results of that applied classification policy. The shaper then modifies the
traffic according to these different features.
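The buffering behaviour that distinguishes shaping from policing can be sketched as follows; the class name, buffer limit and rates are illustrative only:

```python
from collections import deque

class AverageRateShaper:
    """Sketch of an average-rate shaper: packets that exceed the token
    bucket are buffered (up to a finite buffer count) rather than
    dropped immediately, and released later as tokens refill."""
    def __init__(self, rate_bytes_per_s, burst_bytes, max_buffers):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.queue = deque()
        self.max_buffers = max_buffers
        self.last = 0.0

    def _refill(self, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def _drain(self):
        # Release buffered packets for which tokens are now available.
        while self.queue and self.queue[0] <= self.tokens:
            self.tokens -= self.queue.popleft()

    def offer(self, now, size_bytes):
        self._refill(now)
        self._drain()
        if not self.queue and size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return "sent"
        if len(self.queue) < self.max_buffers:
            self.queue.append(size_bytes)
            return "buffered"
        return "dropped"  # buffers exhausted, as described above

shaper = AverageRateShaper(rate_bytes_per_s=1000, burst_bytes=1000, max_buffers=1)
assert shaper.offer(0.0, 1000) == "sent"      # consumes the whole bucket
assert shaper.offer(0.0, 500) == "buffered"   # no tokens: queued, not dropped
assert shaper.offer(0.0, 500) == "dropped"    # only one buffer available
assert shaper.offer(1.0, 100) == "sent"       # refilled: queue drained first
```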
7.1.4 Congestion Management
Congestion management mitigates congestion by determining the order in which packets are
sent out of an interface, based on the priorities assigned to those packets.
Congestion management entails the creation of queues, assignment of packets to those queues
based on the classification of the packet, and scheduling of the packets in a queue for
transmission. The congestion management QoS feature offers four types of queueing protocols,
each of which allows you to specify creation of a different number of queues, affording greater or
lesser degrees of differentiation of traffic, and to specify the order in which that traffic is sent.
During periods with light traffic, that is, when no congestion exists, packets are sent out the
interface as soon as they arrive. During periods of transmit congestion at the outgoing interface,
packets arrive faster than the interface can send them. If you use congestion management
features, packets accumulating at an interface are queued until the interface is free to send them;
they are then scheduled for transmission according to their assigned priority and the queueing
mechanism configured for the interface. The router determines the order of packet transmission
by controlling which packets are placed in which queue and how queues are serviced with respect
to each other.
Both queueing and scheduling can be used to help prevent traffic congestion; the scheduling and
queueing approach can vary according to the following features:
• Bandwidth;
• Weighted Tail Drop;
• Priority queues;
• Queue buffers.
Scheduling is the process of selecting packets from classified queues with different priorities
and deciding which packet to send next considering the available bandwidth. Typical scheduling
policies include:
• Strict priority: outputs queues are serviced in strict priority order; that is, packets waiting
in the highest-priority queue are serviced until that queue is empty, then packets waiting
in the second-highest priority queue are serviced, and so on. Under congestion, strict
priority policy allows the highest priority traffic to get through, at the expense of lower-
priority traffic.
• Round-robin: round-robin policies can operate in one of three modes: normal, strict, or
alternate:
o Normal mode treats the highest-priority queue like all other queues on a circuit. Each
queue receives its share of the circuit’s bandwidth according to the weight
assigned to the queue.
o Strict mode services the highest-priority queue ahead of all other queues configured
on a circuit.
o Alternate mode alternates between the highest-priority queue and the remaining
queues: the highest-priority queue is served, then the next queue, then the
highest-priority queue again, then the next queue in turn, and so on.
• Weighted fair: ensures that queues do not starve for bandwidth and that traffic obtains
predictable service. These policies operate in one of two modes: alternate and strict. In
either mode, the Asynchronous Transfer Mode (ATM) Segmentation and Reassembly
(SAR) uses a class-based weighted fair queuing (WFQ) algorithm to perform QoS priority
packet scheduling. In strict mode, the highest-priority queue is serviced immediately, and
the other queues are serviced in a round-robin fashion according to their configured
weights. In alternate mode, the servicing alternates between the highest-priority queue
and the remaining queues, according to their configured weights: the highest-priority
queue is served, then the next queue, then the highest-priority queue again, then the
next queue in turn, and so on.
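The strict priority and weighted round-robin policies described above can be sketched in a few lines. The sketch below is illustrative only; the class names and the representation of packets are assumptions for the example, not part of any router or OPTIMA interface:

```python
from collections import deque

class StrictPriorityScheduler:
    """Queues are serviced in strict priority order; queue 0 is highest.
    Lower-priority queues are served only when all higher ones are empty."""

    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        for q in self.queues:            # scan from highest priority down
            if q:
                return q.popleft()
        return None                      # all queues empty

class WeightedRoundRobinScheduler:
    """'Normal' round-robin mode: each queue receives service slots in
    proportion to its configured weight."""

    def __init__(self, weights):
        self.queues = [deque() for _ in weights]
        self.weights = weights
        self.current = 0                 # queue currently being served
        self.credit = weights[0]         # remaining slots for current queue

    def enqueue(self, packet, queue_index):
        self.queues[queue_index].append(packet)

    def dequeue(self):
        # At most one full cycle over all queues (with credit resets).
        for _ in range(2 * len(self.queues)):
            if self.credit > 0 and self.queues[self.current]:
                self.credit -= 1
                return self.queues[self.current].popleft()
            self.current = (self.current + 1) % len(self.queues)
            self.credit = self.weights[self.current]
        return None
```

Under congestion, the strict-priority variant lets the highest-priority traffic through at the expense of lower-priority traffic, exactly as described above, while the weighted variant shares the bandwidth according to the configured weights.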
After all traffic has been placed into QoS classes based on their QoS requirements, you need to
provide bandwidth guarantees and priority servicing through an intelligent output queueing
mechanism. The queuing approaches are listed below:
• First-in, first-out queuing (FIFO): Packets arrive and leave the queue in exactly the
same order.
• Priority queuing (PQ): Traffic is classified into high, medium, normal, and low priority
queues. The high priority traffic is serviced first, then medium priority traffic, followed by
normal and low priority traffic.
• Custom queuing (CQ): Traffic is classified into multiple queues with configurable queue
limits. The queue limits are calculated based on average packet size, Maximum
Transmission Unit (MTU), and the percentage of bandwidth to be allocated. Up to the
queue limit (in number of bytes) is dequeued from each queue in turn, thereby providing
the allocated bandwidth statistically.
• Weighted fair queuing (WFQ): A hashing algorithm places flows into separate queues
where weights are used to determine how many packets are serviced at a time. You define
weights by setting IP Precedence and DSCP values.
• IP RTP priority queuing (PQ-WFQ): A single interface command is used to provide
priority servicing to all UDP packets destined to even port numbers within a specified
port range.
Measurement-based CAC techniques (probes) compare with resource-based CAC (RSVP) as follows:
• Topology awareness:
o Measurement-based: Topology-independent. The probe travels to a destination IP
address; it has no knowledge of nodes, hops, and bandwidth availability on
individual links.
o RSVP: Topology aware. The bandwidth availability on every node and every link is
taken into account.
• Transparency:
o Measurement-based: Transparent. Probes are IP packets and can be sent over any
network, including SP backbones and the Internet.
o RSVP: To be the truly end-to-end method that reservation techniques are intended
to be, the feature must be configured on every interface along the path, which
means the customer owns the WAN backbone and all nodes run code that
implements the feature. Owning the entire backbone is impractical in some cases,
so hybrid topologies may be contemplated, with some compromise to the
end-to-end nature of the method.
• Postdial delay:
o Measurement-based: An increase in postdial delay exists for the first call only;
information on the destination is cached after that, and a periodic probe is sent to
the IP destination. Subsequent calls are allowed or denied based on the latest
cached information.
o RSVP: An increase in postdial delay exists for every call, as the Resource
Reservation Protocol (RSVP) reservation must be established before the call setup
can be completed.
• Industry parity:
o Measurement-based: Several vendors have "ping"-like CAC capabilities. For a
customer familiar with this operation, measurement-based techniques are a good fit.
o RSVP: not applicable.
• CAC accuracy:
o Measurement-based: The periodic sampling rate of probes can potentially admit
calls when bandwidth is insufficient. Measurement-based techniques perform well
in networks where traffic fluctuations are gradual.
o RSVP: When implemented on all nodes in the path, RSVP guarantees bandwidth for
the call along the entire path for the entire duration of the call. This is the only
technique that achieves this level of accuracy.
• Protecting voice QoS after admission:
o Measurement-based: The CAC decision is based on probe traffic statistics before
the call is admitted. After admission, the call quality is determined by the
effectiveness of other QoS mechanisms in the network.
o RSVP: A reservation is established per call before the call is admitted. The quality
of the call is therefore unaffected by changes in network traffic conditions.
• Network traffic overhead:
o Measurement-based: Periodic probe traffic overhead to a cached number of IP
destinations. Both the interval and the cache size can be controlled by the
configuration.
o RSVP: RSVP messaging traffic overhead for every call.
• Scalability:
o Measurement-based: Sending probes to thousands of individual IP destinations may
be impractical in a large network. However, probes can be sent to the WAN edge
devices, which proxy on behalf of many more destinations on a high-bandwidth
campus network behind the edge. This provides considerable scalability, because
the WAN is much more likely to be congested than the campus LAN.
o RSVP: Individual flow reservation is important on the small-bandwidth links around
the edge of the network. However, individual reservations per call flow may not
make sense on large-bandwidth links in the backbone such as an OC-12. Hybrid
network topologies can solve this need, and additional upcoming RSVP tools in this
space will provide further scalability.
The Resource Reservation Protocol (RSVP) is the only CAC mechanism that makes a bandwidth
reservation and does not base its call admission decision on a "best guess look-ahead"
before the call is set up. This gives RSVP the unique advantage of not only providing CAC for
voice, but also guaranteeing the QoS against changing network conditions for the duration of the
call.
QoS on Railway Domain Databases
To see examples of how QoS has been handled in critical operations related to the railway
domain, we can again review the “TAF TAP TSI” programme.
There is a direct connection between QoS for databases and data quality, so assuring high data
quality is necessary to achieve high QoS.
In both TSIs ([TAF_TSI] [TAP_TSI]) a chapter is dedicated to data quality, and it is stated that for
data quality assurance purposes, the originator of any TSI message is responsible for the
correctness of the data content of the message at the time the message is sent. To ensure data
quality, if the source data is available from the databases provided as part of the TSI, the data
contained in those databases should be used; for example, if the static data regarding rolling
stock is in the database, the database must ensure its quality. When the source data does not
come from those databases, the responsibility for data quality assurance lies with the originator
of the data; for example, the data regarding the temperature reported by a sensor should be
assured by the sensor.
Data quality assurance will include comparison with data from databases provided as part of this
TSI as described above plus, where applicable, logic checks to assure the timeliness and
continuity of data and messages.
Data are of high quality if they are fit for their intended uses, which means they are error-free and
possess desired features such as relevance, comprehensiveness, an appropriate level of detail,
and ease of reading and interpretation.
The data quality is mainly characterised by:
• Accuracy:
The required data needs to be captured as economically as possible. This is only feasible if the
primary data is recorded, where possible, on a single occasion for the whole transport.
Therefore, the primary data should be introduced into the system as close as possible to its
source, so that it can be fully integrated into any later processing operation.
• Completeness:
Before messages are sent out, their completeness and syntax must be checked using the
metadata. This also avoids unnecessary traffic on the network. All incoming messages must also
be checked for completeness using the metadata.
• Consistency:
Business rules must be implemented in order to guarantee consistency. Double entry should be
avoided, and the owner of the data should be clearly identified.
• Timeliness:
The provision of information within the required timeframe is an important point. If the triggering
of data storage or of the sending of a message is event-driven directly from the IT system, then
timeliness is not a problem, provided the system is well designed according to the needs of the
business processes. In most cases, however, the sending of a message is initiated by an
operator, or at least is based on additional input from an operator (for example, the sending of
the train composition or the updating of train- or wagon-related data). To fulfil the timeliness
requirements, the data must be updated as soon as possible, so as to guarantee that the
messages will have the current data content when being sent out automatically by the system.
Some of the data quality metrics that are offered are:
• For completeness: 100 percent of data fields having values entered for mandatory data
• For consistency of data: 100 percent of matching values across tables/files/records
• For timeliness of data: at least 98 percent of data available within a specified time frame
threshold
• For accuracy of data: at least 90 percent of stored values that are correct when compared
to the actual value
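As an illustration only, these metrics can be computed as simple ratios over a record set. The field names, record layout and function names below are assumptions made for the sketch, not part of the TSI:

```python
# Illustrative computation of the [TAF TAP TSI] data quality metrics.
# MANDATORY_FIELDS and the record keys are hypothetical examples.

MANDATORY_FIELDS = ["train_id", "location", "timestamp"]

def completeness(records):
    """Fraction of mandatory fields carrying a value (target: 100 percent)."""
    total = len(records) * len(MANDATORY_FIELDS)
    filled = sum(1 for r in records
                 for f in MANDATORY_FIELDS
                 if r.get(f) not in (None, ""))
    return filled / total if total else 1.0

def timeliness(records, deadline_s):
    """Fraction of records available within the threshold (target: >= 98 percent)."""
    on_time = sum(1 for r in records if r["delay_s"] <= deadline_s)
    return on_time / len(records) if records else 1.0

def accuracy(stored, actual):
    """Fraction of stored values matching the actual values (target: >= 90 percent)."""
    correct = sum(1 for s, a in zip(stored, actual) if s == a)
    return correct / len(stored) if stored else 1.0
```

A monitoring component could evaluate these ratios periodically against the thresholds above and raise a data-quality alarm when any of them falls below its target.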
In this same line of thought, another interesting document from the TAF TAP TSI is the Sector
Handbook [TAF_TAP_SH], which describes messages and elements used by the sector for RU/IM
communications for planning and operation in freight and passenger traffic. This document gives
an idea of the legal status of the messages and elements, and explains their usage, the overall
architecture, and the establishment and use of the reference data and relevant code lists.
Chapter 21 of the document addresses the importance of good data quality for the efficient use
of the RU/IM message exchange and the related applications. Data quality requirements are
shown in Figure 3.
Figure 3. Data quality requirements.
Data quality is the responsibility of the sender. Before being sent, messages must be checked by
the sender to ensure that they are well-formed, complete and valid (against the
messages/elements as defined in the message catalogue).
Data quality can and will be measured by the receiver of the message, and the receiver contacts
the sender in case the data quality needs to be improved.
Messages sent shall be secured by signing, encrypting and compressing the message:
o All IMs and RUs involved in the communication shall hold a certificate from the
Certification Authority in order to use the encryption and signing functionality.
o Encryption (using SSL/TLS) shall be provided, assuring privacy and authentication of
the sender and avoiding man-in-the-middle attacks.
o Signing shall be used to assure the integrity of messages.
o Two actors in communication should have certificates generated from the same
root certificate to use the encryption functionality, and a security alias of the certificate
should be provided.
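Purely to illustrate the sign-and-compress pattern described above, the sketch below substitutes a shared-secret HMAC for the certificate-based signature and omits transport encryption (which in the RU/IM architecture is provided by SSL/TLS). All names and the message layout are illustrative assumptions:

```python
import hashlib
import hmac
import json
import zlib

SHARED_KEY = b"demo-key"  # stand-in for the certificate-based signing material

def pack_message(payload: dict) -> bytes:
    """Serialise, sign and compress a message (integrity only; a real
    deployment would sign with a CA-issued certificate and send over TLS)."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    envelope = json.dumps({"body": body.decode(), "sig": signature}).encode()
    return zlib.compress(envelope)

def unpack_message(blob: bytes) -> dict:
    """Decompress and verify the signature before trusting the content."""
    envelope = json.loads(zlib.decompress(blob))
    body = envelope["body"].encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["sig"]):
        raise ValueError("message integrity check failed")
    return json.loads(body)
```

The receiver rejects any message whose signature does not match the decompressed body, which is the behaviour the signing requirement above is meant to guarantee.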
The following possibilities are supported by the RU/IM architecture to detect and increase data
quality:
• Syntactically incorrect messages are placed in temporary storage for analysis and
reporting (dead letter queues: storage of messages that could not be delivered, in a CI or
at application level). This would indicate problems in format data quality.
• In case a message is not delivered (no acknowledgment by the CI) the message can still
be found in sending queues. It is up to the sender to check the delivery.
• The Answer Not Possible (ANP) message in the Path Request process can be used to
report and measure semantic data quality (content level). The ANP message is sent from
the receiver of an earlier message to the sender of that earlier message. The message is
used when the information in the original message was not good enough to create an
answer. The message supports a list of elements that have errors, which may also be
business-related errors (e.g. a schedule that is not logical). The level of detail of this ANP
message will depend on the application that emits it. It should be as detailed as feasible
for the legacy system, with the minimum information being that the message was not
usable. Detailed information, if supported by the legacy system, shall use a subset or all
of the common error codes described for ANP.
• In case the same message with the same content is received several times (e.g. a train
running message for train X at location Y sent several times), a rule like “last message
wins” avoids the handling of ambiguous message information. This must be implemented
at application level.
• For the central reference database, specific Entities are responsible for the storage of
specific codes (e.g. a national entity for primary codes in its country). This Entity must
make sure that each location is coded only once, avoiding ambiguous codes.
• Timeliness of message exchange can be checked on application level, e.g. comparing the
actual timing of a train run and the timestamp of the message arriving. It depends on
contractual agreements.
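The “last message wins” rule mentioned above can be sketched at application level as a keyed overwrite: messages for the same key replace earlier ones, so consumers only ever see the most recent information. The (train, location) key and the field names are assumptions for the illustration:

```python
# Illustrative "last message wins" store; the message schema is hypothetical.

class LastMessageWinsStore:
    def __init__(self):
        self._latest = {}

    def receive(self, message):
        """A newer message for the same (train, location) replaces the older one."""
        key = (message["train"], message["location"])
        self._latest[key] = message

    def current(self, train, location):
        """Return the most recent message for the key, or None if none was received."""
        return self._latest.get((train, location))
```

With this rule, duplicate train running messages for the same train and location never accumulate, and queries always reflect the latest received state.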
Requirements for OPTIMA Database QoS
Given all the previous information regarding QoS, we can define the following requirements to
consider for OPTIMA development in relation to the database.
Transport priority
In case of conflicts between database operations due to limited bandwidth or similar, OPTIMA
databases should decide which message from which business services and external clients is
more important based on a defined transport priority requirement. Queuing policies will be
enabled on all nodes where there is a possibility of congestion, regardless of how rarely this in
fact may occur.
The services to be compliant with transport priority requirements are the following:
1. Interlocking
2. RBC
3. Energy Management System
4. Maintenance Service Management System
5. Passenger Information Services
6. Weather forecast
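A minimal sketch of such a transport-priority decision, using the service order listed above (the class name, message representation and priority mapping are illustrative assumptions, not an OPTIMA interface):

```python
import heapq
import itertools

# Priority order follows the service list in this section (1 = most important).
SERVICE_PRIORITY = {
    "Interlocking": 1,
    "RBC": 2,
    "Energy Management System": 3,
    "Maintenance Service Management System": 4,
    "Passenger Information Services": 5,
    "Weather forecast": 6,
}

class TransportPriorityQueue:
    """Dequeue messages by service priority; FIFO within the same service."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-break preserving arrival order

    def push(self, service, message):
        entry = (SERVICE_PRIORITY[service], next(self._counter), message)
        heapq.heappush(self._heap, entry)

    def pop(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```

Under bandwidth contention, an interlocking message queued after a weather-forecast message would still be transmitted first, which is the behaviour the transport-priority requirement asks for.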
Note that no databases are involved in train operations or safety-critical information. The
purpose of database information is to store data history for consultation. All communications are
done directly, precisely to avoid adding unnecessary risks raised by each business service. If
a database is present, it is for historic queries and as an auxiliary element, never for actual train
operations. This concerns QoS for the database, not for the IL, which should be detailed within
the WP4 scope.
Quality metrics
We will use [TAF TAP TSI] metrics as minimum values to evaluate the quality of the data offered
by the IMs that participate in the project through OPTIMA databases.
• For completeness: 100 percent of data fields having values entered for mandatory data
• For consistency of data: 100 percent of matching values across tables/files/records
• For timeliness of data: at least 98 percent of data available within a specified threshold
time frame
• For accuracy of data: at least 90 percent of stored values that are correct when compared
to the actual value.
As mentioned before, these metrics are acceptable given that the databases are used for
non-critical data and in a demonstrator platform. If, during the development of the demonstrator,
a different use for the DBMS is required (for example, storing any kind of safety-critical data),
the accuracy-of-data metric should be increased accordingly.
CONCLUSIONS AND RECOMMENDATIONS
Database Requirements
Main conclusions for the establishment of database requirements are:
• A DBMS will be implemented to provide programs with access to data, which is an
appropriate way to store and retrieve the information while ensuring access for
authorised users and data confidentiality and integrity.
• A database model will be defined in order to describe data relationships, data semantics
and consistency constraints, and a specific database language will be defined. This
will allow the use of a set of statements to specify a database schema, including the
storage structure and access methods used by the database system.
• A list of general requirements that all OPTIMA databases attached to the IL must fulfil
will be defined. Then, the databases needed will be enumerated and a list of dedicated
requirements for each of them will be proposed.
In the framework of OPTIMA’s demonstrator development, a list of general requirements was
proposed based on the [TAF TAP TSI] programme. For each of the OPTIMA databases a first
analysis is made, and specific requirements are mentioned. In further deliverables, a more
accurate analysis should be made considering the information provided by the IMs (WP5) and
the expected detail of the collaborative projects that will be using the OPTIMA demonstrator.
Data Structures
A complete set of data structures related to OPTIMA databases (geographical, timetable and
vehicle) is included in Appendix A. Those, along with the data structures related to the business
clients and external services that WP5 will provide, will need to be addressed by the CDM
proposed by WP6.
Quality of Service Requirements
Main conclusions for the establishment of QoS requirements are:
• Database and network QoS requirements were identified.
• An IL must provide high-availability and scalability by distributing data across multiple
machines. QoS represents the possibility for the application to provide the IL requirements
on communication aspects. Non-functional requirements for the IL were proposed: QoS
performance, data delivery and special needs.
• In case of conflicts due to limited bandwidth or similar, OPTIMA can decide which
message from which business services and external clients is more important, based on
a defined priority requirement:
o Interlocking
o RBC
o Energy Management System
o Maintenance Service Management System
o Passenger Information Services
o Weather forecast
• Enable queuing policies at every node where the potential for congestion exists,
regardless of how rarely this in fact may occur.
Quality metrics are proposed based on the information from [TAF TAP TSI] to evaluate the quality
of the data stored in the database offered by the IMs that participate in the project.
REFERENCES
CODE (ID) Description
[In2Rail D8.3] Description of the IL and Constituents, In2Rail, Grant agreement no. 635900
[X2Rail-2 D6.1] System requirements specification (SRS) for the IL, X2Rail-2, Grant agreement no. 777465.
DBMS_1 A. Silberschatz, H. F. Korth, S. Sudarshan, Database System Concepts, 7th edition
DBMS_2 T. Szigeti, C. Hattingh, End-to-End QoS Network Design: Quality of Service for Rich-Media & Cloud Networks, Cisco Press, 2013
DBMS_3 Z. Zheng, M. R. Lyu, QoS Management of Web Services, Advanced Topics in Science and Technology in China, Springer-Verlag, 2013
[In2Rail D9.1] Asset status representation, In2Rail, Grant agreement no. 635900
[OPTIMA D5.1] Railway Business services software and external services analysis and requirements.
Linear location UUID of linear location which constitutes the route
UUID of net element
Route constraints
Attribute Attribute description Admitted values
ID Universal identifier UUID of route to which the constraints apply
128-bit code
Automatic Interlocking automatically sets this route
Boolean
Electrified Whether route is fully electrified Boolean
Priority When more than one route is possible from entry to exit, there is a level of priority for the control system, 1 being the highest.
Integer
Temporary Speed Restriction (TSR)
Temporary speed restriction Positive number
Intermediate point speed reduction
Speed reduction no longer applies once the train has cleared the point
Positive number
Automated Route Setting (ARS)
Availability of ARS, a system external to the interlocking
Boolean
Delay for lock Time delay for guaranteeing route has been cleared by previous railway vehicle
Positive number
Reduced braking distance Braking distance below the normal values
Boolean
Max line speed Max line speed within the route Positive number
Point
Point
Attribute Attribute description Admitted values
ID Universal identifier UUID of point
128-bit code
Point mechanism Indicates the type of mechanism used for its operation (manual, electro-mechanical, etc.)
string
Point type Indicates the type of point - simple points, - double slip points, - single slip points, - moveable switch diamond crossings, - moveable crossing noses, - derailers.
Trailing detection Indicates whether point is equipped with trailing detection
Boolean
Normal Position Position of the point blades for most frequent traffic and with the highest speed
Left, right, straight
Reversed position Position of the point for the least frequently used route, which may also be the route having the lowest speed
Left, right, straight
Length Length of the point panels in meters
Positive number
Geometric coordinates Coordinates in a chosen schematic, geographic or geodetic reference system. Z coordinate is optional.
X, y, z coordinates
Linear coordinate Measure from the beginning of the track. Optional: lateral and vertical (height) offset
Level crossing
State Operational status of installation Unknown, open, closed, opening, closing
Local control Whether it allows local control Boolean
Model Model of the level crossing, describing its functioning
Unprotected, protected manned, automatic
Type Type of automatic level crossing SAL0 (no barriers), SAL2 (two half barriers), SAL4 (four half barriers), SAL FC (low cost used for freight lines and low speeds)
Barrier length Length of the barriers Positive number
Nominal drop time and rise time
Time necessary to lower and raise the barriers in nominal conditions
Two positive numbers
Nominal max speed per track and direction
Maximum speed which trains may use for traversing the track at the level crossing in different directions
Positive numbers, lower than or equal to maximum line speed in the track section
Striking distance Distance of train detector sensor from the level crossing to trigger closure
Positive number
Minimal warning time
Time to allow for the response of the equipment
Positive number
Station
Station
Attribute Attribute description Admitted values
ID Universal identifier UUID of station 128-bit code
Superstation ID Refers to the superstation in case this macroscopic node is only part of what is commercially referred to as one station (e.g. "Frankfurt (M) Hbf lower level" as part of "Frankfurt (M) Hbf")
UUID of net element
Linear location UUID of linear locations which constitute the station
UUID of net elements
Halt A halt is similar to a normal station but without any points
Boolean
Stop point
Attribute Attribute description Admitted values
ID Universal identifier UUID of stop point 128-bit code
Associated net element
UUID of track section to which the stop point is topologically related
UUID of net element
Linear coordinate Intrinsic coordinate along the associated net element. Optional: lateral and vertical (height) offset
Whether the signal is used in the same direction of the net element to which it is associated.
Normal, reverse, both
Facility
Category ID Categories are for example "dirty water disposal", "fresh water supply", "brake air"
ID
Mobile flag Whether the facility is stationary or mobile, like a fuel truck or a dirty water truck
Boolean
Opening hours List of time ranges of opening time for facility
Standard validity expression
DATA RELATED TO TRAIN IDENTIFICATION AND TIMETABLE
Train
General Data
Attribute Attribute description Admitted values
Train Identifier Internal identification of a train with the aim to collect under a single identifier all the movements of the train. Typically it is composed of a number and the date of the commercial departure of the train.
String
Train Number Main commercial number of the train. String
Commercial Service Start Time
Departure Datetime of the first movement of the Train in commercial service.
Timestamp
Train Type Type of train, according to whether it is scheduled in the long-term plan or created during operation according to the operational needs.
Listed [Scheduled, Unscheduled]
Train Category Type of elements carried on the train. Listed [Passengers, Freight, Mixed]
Train SubCategory Subtype according to the elements carried and the type of service.
Listed [Fast-Freight, Slow-Freight, Local Passengers, Sub-Urban Passengers, High Speed Passengers, etc.]
Train Status Commercial status of the train. Listed [Scheduled, Close to Run, Running, Finished, Cancelled]
Railway Undertaking Company allowed to interface with the Infrastructure Manager to get allocation of slots.
String
Train Operating Company
Company allowed to carry persons or goods in the railway network.
String
N x Additional Feature Additional information regarding numbering, parking and movements.
Array [Complex [Additional Feature]]
Additional Feature
Attribute Attribute description Admitted values
Feature Type Type of information provided in the feature of the train.
Listed [Numbering, Stabled, Movement]
Specific Attributes according to the Type value
Set of attributes according to the value of the Type of the Additional Feature.
Array [Complex [Specific Attributes]]
Specific Attributes for Type: Numbering
Attribute Attribute description Admitted values
Number Commercial number or any other identifier of the train during shunting, parking, etc. assigned to the train in the control point.
String
Control Point Reference to a control point where the stop of the train is allowed.
String
Scheduled Timetable Control Point Reference
Order of Timetable Control Point in Scheduled Timetable
Numeric
Specific Attributes for Type: Stabled
Attribute Attribute description Admitted values
Control Point Reference to a control point where the stop of the train is allowed.
String
Scheduled Timetable Control Point Reference
Order of Timetable Control Point in Scheduled Timetable
Numeric
Start Time Timestamp when the parking in the Control Point starts.
Timestamp
End Time Timestamp when the parking in the Control Point ends.
Timestamp
Specific Attributes for Type: Moving
Attribute Attribute description Admitted values
Moving Type Type of movement according to its commercial or internal characteristic.
Listed [Commercial, Empty, Shunting]
Control Point Start Reference to an identified location (node) where a datetime value may be defined for a train movement, at which this type of movement starts.
String
Scheduled Timetable Control Point Reference Start
Order of the Timetable Control Point in Scheduled Timetable where the type of movements starts.
Numeric
Control Point End Reference to an identified location (node) where a datetime value may be defined for a train movement, at which this type of movement ends.
String
Scheduled Timetable Control Point Reference End
Order of the Timetable Control Point in Scheduled Timetable where the type of movements ends.
Numeric
Train timetable
Timetable Control Point
Attribute Attribute description Admitted values
Control Point Reference to an identified location where a datetime value may be defined for a train movement (stop or pass).
String
Arrival DateTime DateTime at which the train is planned to arrive at the control point.
Timestamp
Commercial Departure DateTime
DateTime defined officially for the departure time of the train from a control point.
Timestamp
Technical Departure DateTime
DateTime contemplating the reserved time for the departure of a train in case additional dwell time is required for technical purposes (such as extra time for passenger exchange in peak hours). This time is reserved during the train scheduling but may not necessarily be respected during the train operation.
Timestamp
Movement Type Defines if the train stops or passes at the control point.
Listed [Stop, Pass, Start, End]
Arrival Track Identification of the track within the node where the arrival of the train is defined. In case of a stop movement, it is the track where the train waits for passengers' exchange.
String
Departure Track Identification of the track within the node where the train is located before departure. Usually, it is the same track defined as the arrival track.
String
Arrival Circulation Track
Identification of the circulation track defined to be used by the train to reach a node (outside the node).
String
Departure Circulation Track
Identification of the circulation track defined to be used by the train to leave a node (outside the node)
String
Stop Type Scheduled kind of stop defined for a train. Commercial stops need to be respected during operation but other types may not be required.
Listed [Commercial Stop, Technical Stop]
Minimum Commercial Dwell Time
Minimum defined stop duration of the train for commercial purposes such as passengers boarding
Numeric
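As an illustrative sketch only, the Timetable Control Point record described in the table above could be represented as follows. The type choices and field names are assumptions derived from the "Admitted values" column, not an OPTIMA definition:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record type mirroring the Timetable Control Point table.

@dataclass
class TimetableControlPoint:
    control_point: str                           # reference to an identified location
    arrival: Optional[datetime]                  # planned arrival datetime
    commercial_departure: Optional[datetime]     # official departure datetime
    technical_departure: Optional[datetime]      # reserved technical departure datetime
    movement_type: str                           # one of: Stop, Pass, Start, End
    arrival_track: Optional[str] = None
    departure_track: Optional[str] = None
    arrival_circulation_track: Optional[str] = None
    departure_circulation_track: Optional[str] = None
    stop_type: Optional[str] = None              # Commercial Stop or Technical Stop
    min_commercial_dwell_s: Optional[int] = None # minimum commercial dwell time
```

A scheduled timetable would then simply be an ordered list of such records, matching the "N x Timetable Control Point" structure defined below.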
Scheduled Timetable
Attribute Attribute description Admitted values
N x Timetable Control Point
Set of ordered Timetable Control Points defining the complete scheduled timetable for train running on D-day.
Array [Complex [Timetable Control Point]]
Target Timetable
Attribute Attribute description Admitted values
N x Timetable Control Point
Set of ordered Timetable Control Points defining the complete target timetable for train running on D-day.
Array [Complex [Timetable Control Point]]
Audited Timetable Control Point
Attribute Attribute description Admitted values
Control Point Reference to an identified location where a datetime value may be defined for a train movement (stop or pass).
String
Target Timetable Control Point Reference
Order of the referred Timetable Control Point in Target Timetable.
Numeric
Audited Arrival DateTime
Date and time when the train reached the control point.
Timestamp
Audited Departure DateTime
Date and time when the train left the control point.
Timestamp
Audited Movement Type
Defines if the train has stopped at the control point or only has passed.
Listed [Stop, Pass, Start, End]
Audited Arrival Track
Identification of the track within the node where the arrival of the train is performed. In case of a stop movement, it is the track where the train waits for passengers' exchange.
String
Audited Departure Track
Identification of the track within the node where the train performs its departure. Usually, it is the same track as the arrival track.
String
Audited Arrival Circulation Track
Identification of the circulation track used by the train to reach a node (outside the node).
String
Audited Departure Circulation Track
Identification of the circulation track used by the train to leave a node (outside the node).
String
Audited Stop Type Kind of stop performed by the train. Listed [Commercial Stop, Technical Stop]
Audit Type Defines whether the audited values have been received automatically or set manually by the TMS user.
Listed [Manual, Automatic]
Audited Timetable
Attribute Attribute description Admitted values
N x Audited Timetable Control Point
Set of ordered Audited Timetable Control Points defining the complete audited timetable for a train running on D-day.
Array [Complex [Audited Timetable Control Point]]
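Each Audited Timetable Control Point carries the order of the corresponding entry in the Target Timetable, which lets audited records be matched back to the plan, e.g. to compute delays. A hedged sketch of that linkage (field names and the delay helper are assumptions, not part of the CDM):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import List, Optional


class AuditType(Enum):
    MANUAL = "Manual"        # value set by the TMS user
    AUTOMATIC = "Automatic"  # value received automatically from field systems


@dataclass
class AuditedControlPoint:
    control_point: str
    target_reference: int                        # order of the matching Target Timetable entry
    audited_arrival: Optional[datetime] = None
    audited_departure: Optional[datetime] = None
    audit_type: AuditType = AuditType.AUTOMATIC


def arrival_delay(audited: AuditedControlPoint,
                  target_arrivals: List[datetime]) -> Optional[timedelta]:
    """Delay of the audited arrival against the target entry it references."""
    if audited.audited_arrival is None:
        return None
    return audited.audited_arrival - target_arrivals[audited.target_reference]
```

Keeping the reference as an order index rather than a location name disambiguates trains that pass the same control point more than once.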
Forecasted Timetable Control Point
Attribute Attribute description Admitted values
Control Point Reference to an identified location where a datetime value may be defined for a train movement (stop or pass).
String
Target Timetable Control Point Reference
Order of the referred Timetable Control Point in Target Timetable.
Numeric
Forecasted Arrival DateTime
Date and time when the train is expected to arrive at the control point according to the current traffic situation.
Timestamp
Forecasted Departure DateTime
Date and time when the train is expected to leave the control point according to the current traffic situation.
Timestamp
Forecasted Movement Type
Defines if the train stops or passes at the control point.
Listed [Stop, Pass, Start, End]
Forecasted Arrival Track Identification of the track within the node where the arrival of the train is defined. In the case of a stop movement, it is the track where the train waits for passenger exchange.
String
Forecasted Departure Track
Identification of the track within the node where the train is located before its departure. Usually, it is the same track as the defined arrival track.
String
Forecasted Arrival Circulation Track
Identification of the circulation track defined to be used by the train to reach a node (outside the node).
String
Forecasted Departure Circulation Track
Identification of the circulation track defined to be used by the train to leave a node (outside the node).
String
Forecasted Stop Type Kind of stop expected to be carried out by the train.
Listed [Commercial Stop, Technical Stop]
Forecasted Timetable
Attribute Attribute description Admitted values
N x Forecasted Timetable Control Point
Set of ordered Forecasted Timetable Control Points defining the expected timetable for a train running on D-day.
Array [Complex [Forecasted Timetable Control Point]]
DATA RELATED TO ROLLING STOCK
Scheduled Formation
Scheduled Formation Control Point
Attribute Attribute description Admitted values
Control Point Reference to an identified location where a datetime value may be defined for a train movement (stop or pass).
String
Scheduled Timetable Control Point Reference
Order of the referred Timetable Control Point in Scheduled Timetable.
Numeric
Formation Identifier of the formation leaving the Control Point.
String
Weight Total weight of the formation during the section.
Numeric
Length Total length of the formation during the section.
Numeric
Loading Gauge Maximum loading gauge of the formation in use during the section.
Listed [GC, GB+, GB, GA, Universal], according to UIC loading gauge values (EN 15273)
Track Gauge Type of the gauge of the track used by the formation for the section delimited by Control Point Start and Control Point End.
Listed [UIC gauge, Iberian gauge, Metric gauge, Swedish gauge]
Features and Amenities Main features of the formation for the section.
Array [String]
N x Vehicle Ordered set of vehicles to be used during the section.
Array [Complex [Vehicle]]
Scheduled Formation
Attribute Attribute description Admitted values
N x Scheduled Formation Control Point
Ordered set of data related to the formation of a train scheduled for each section of the train route where the formation does not change.
Array [Complex [Scheduled Formation Control Point]]
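A Formation Control Point ties the physical characteristics of the train (weight, length, gauges) to each route section. A minimal sketch, treating the listed gauge values as enumerations (names and units are illustrative assumptions):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class LoadingGauge(Enum):
    # UIC loading gauge values per EN 15273
    GC = "GC"
    GB_PLUS = "GB+"
    GB = "GB"
    GA = "GA"
    UNIVERSAL = "Universal"


class TrackGauge(Enum):
    UIC = "UIC gauge"          # 1435 mm standard gauge
    IBERIAN = "Iberian gauge"  # 1668 mm
    METRIC = "Metric gauge"    # 1000 mm
    SWEDISH = "Swedish gauge"


@dataclass
class FormationControlPoint:
    control_point: str
    formation_id: str           # identifier of the formation leaving the control point
    weight_t: float             # total weight of the formation during the section
    length_m: float             # total length of the formation during the section
    loading_gauge: LoadingGauge
    track_gauge: TrackGauge
    features: List[str] = field(default_factory=list)  # features and amenities
```

Using enumerations for the "Listed" columns keeps the admitted values closed, so an invalid gauge is rejected at construction time rather than discovered downstream.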
Target Formation
Target Formation Control Point
Attribute Attribute description
Admitted values
Control Point Reference to an identified location where a datetime value may be defined for a train movement (stop or pass).
String
Target Timetable Control Point Reference
Order of the referred Timetable Control Point in Target Timetable.
Numeric
Formation Identifier of the formation leaving the Control Point.
String
Weight Total weight of the formation during the section.
Numeric
Length Total length of the formation during the section.
Numeric
Loading Gauge Maximum loading gauge of the formation in use during the section.
Listed [GC, GB+, GB, GA, Universal], according to UIC loading gauge values (EN 15273)
Track Gauge Type of the gauge of the track used by the formation for the section delimited by Control Point Start and Control Point End.
Listed [UIC gauge, Iberian gauge, Metric gauge, Swedish gauge…]
Features and Amenities Main features of the formation for the section.
Array [String]
N x Vehicle Ordered set of vehicles to be used during the section.
Array [Complex [Vehicle]]
Target Formation
Attribute Attribute description Admitted values
N x Target Formation Control Point
Ordered set of data related to the formation of a train assigned for each section of the train route where the formation does not change.
Array [Complex [Target Formation Control Point]]
Audited Formation
Audited Formation Control Point
Attribute Attribute description Admitted values
Control Point Reference to an identified location where a datetime value may be defined for a train movement (stop or pass).
String
Target Timetable Control Point Reference
Order of the referred Timetable Control Point in Target Timetable.
Numeric
Audited Formation Identifier of the formation.
String
Audited Weight Total weight of the formation during the section.
Numeric
Audited Length Total length of the formation during the section.
Numeric
Audited Loading Gauge Maximum loading gauge of the formation in use during the section.
Listed [GC, GB+, GB, GA, Universal], according to UIC loading gauge values (EN 15273)
Audited Track Gauge Type of the gauge of the track used by the formation for the section.
Listed [UIC gauge, Iberian gauge, Metric gauge, Swedish gauge…]
Audited Features and Amenities
Main features of the formation for the section.
Array [String]
N x Vehicle Ordered set of vehicles already used for the section.
Array [Complex [Vehicle]]
Audit Type Type of audit according to the system or user providing the information.
Listed [Manual, Automatic]
Audited Formation
Attribute Attribute description Admitted values
N x Audited Formation Control Point
Ordered set of data related to the formation used for a train during each section of the train route where the formation has not been changed.
Array [Complex [Audited Formation Control Point]]
Vehicle
Vehicle
Attribute Attribute description Admitted values
Vehicle Identifier Identification of the specific vehicle (UIC)
String
Vehicle Type Identifier of the type of vehicle Listed [Vehicle Types]
Traction Whether the vehicle is pulling during the section.
Boolean
Loaded Whether the vehicle is loaded, in the case of wagons.
Boolean
Goods Code Code related to the type of load transported by the vehicle, in case of wagons.
Listed [Type of goods]
Goods Weight Total weight related to the load transported by the vehicle, in case of wagons.
Numeric
Seals Number Number of seals used to close the wagon during the section.
Numeric
Dangerous Goods Whether dangerous goods are included in the vehicle, in the case of wagons.
Boolean
Dangerous Goods Code Code related to the dangerous goods transported by the wagon.
Listed [Type of dangerous goods]
Passengers Number of passengers on board, by category.
Array [Complex [Class, Number of passengers]]
Line Permit Holder Lines allowed for the circulation of the specific vehicle.
Array [String]
Status Alarm Active alarm affecting the vehicle. It is included only for vehicles in the Audited Formation.
Listed [Alarm Status]
N x Container Ordered set of containers carried on a vehicle, in the case of wagons.
Array [Complex [Container]]
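A Vehicle record mixes general attributes with wagon-only ones (loading, goods, containers), and dangerous goods can be declared on the wagon itself or on a container it carries. A minimal sketch of that nesting, with an illustrative helper that aggregates the dangerous-goods flags (all names are assumptions, not part of the CDM):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Container:
    container_id: str              # UIC container identification
    loaded: bool = False
    dangerous_goods: bool = False


@dataclass
class Vehicle:
    vehicle_id: str                # UIC vehicle identification
    vehicle_type: str
    traction: bool = False         # whether the vehicle is pulling during the section
    loaded: bool = False           # wagons only
    dangerous_goods: bool = False  # wagons only
    containers: List[Container] = field(default_factory=list)  # wagons only

    def carries_dangerous_goods(self) -> bool:
        """True if the wagon itself, or any container it carries, holds dangerous goods."""
        return self.dangerous_goods or any(c.dangerous_goods for c in self.containers)
```

Modelling wagon-only attributes as optional fields with defaults keeps a single Vehicle type usable for both passenger coaches and freight wagons.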
Container
Attribute Attribute description Admitted values
Container Identifier Identification of the specific container (UIC)
String
Container Type Identifier of the type of container.
Listed [Container Type]
Loaded Whether the container is loaded.
Boolean
Goods Code Code related to the type of load included on the container.
Listed [Type of goods]
Goods Weight Total weight related to the load transported on the container.
Numeric
Seals Number Number of seals used to close the container.
Numeric
Dangerous Goods Whether dangerous goods are included in the container.
Boolean
Dangerous Goods Code Code related to the dangerous goods included on the container.