D5.2 Service Platform Final Release

Project Acronym: 5GTANGO
Project Title: 5G Development and Validation Platform for Global Industry-Specific Network Services and Apps
Project Number: 761493 (co-funded by the European Commission through Horizon 2020)
Instrument: Collaborative Innovation Action
Start Date: 01/06/2017
Duration: 30 months
Thematic Priority: H2020-ICT-2016-2017 – ICT-08-2017 – 5G PPP Convergent Technologies

Deliverable: D5.2 Service Platform Final Release
Workpackage: WP5
Due Date: M24
Submission Date: 03/06/2019
Version: 1.0
Status: To be approved by EC
Editor: Carlos Parada (ALB)
Contributors: José Bonnet (ALB), Carlos Marques (ALB), Felipe Vicens (ATOS), Ignacio Dominguez (ATOS), Eleni Fotopoulou (UBITECH), Anastasios Zafeiropoulos (UBITECH), Pol Alemany (CTTC), Ramon Casellas (CTTC), Ricard Vilalta (CTTC), Raul Muñoz (CTTC), Juan L. de la Cruz (CTTC), Evgenia Kapassa (UPRC), Marios Touloupou (UPRC), Konstantinos Lambrinoudakis (UPRC), George Xilouris (NCSRD), Stavros Kolometsos (NCSRD), Panos Trakadas (SYN), Panos Karkazis (SYN), Thomas Soenen (IMEC), Antón Román Portabales (QUO), Ana Pol González (QUO)
Reviewer(s): José Bonnet (ALB), Ricard Vilalta (CTTC), George Xilouris (NCSRD), Sonia Castro (ATOS)

Keywords: SONATA, 5GTANGO, Service Platform, MANO, Network Slice, SLA, Policy, Kubernetes, Openstack, NFV
Document: 5GTANGO/D5.2 | Date: June 3, 2019 | Security: Public | Status: To be approved by EC | Version: 1.0
Deliverable Type
R: Document
DEM: Demonstrator, pilot, prototype (X)
DEC: Websites, patent filings, videos, etc.
OTHER

Dissemination Level
PU: Public (X)
CO: Confidential, only for members of the consortium (including the Commission Services)
Disclaimer: This document has been produced in the context of the 5GTANGO Project. The research leading to these results has received funding from the European Community's 5G-PPP under grant agreement no. 761493. All information in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. For the avoidance of all doubts, the European Commission has no liability in respect of this document, which is merely representing the authors' view.
Executive Summary:
This Deliverable describes the final architecture and developments of the 5GTANGO Service Platform (SP) for the final release (M24, May 2019), corresponding to Release 5.0 of the SONATA open source software. This document concludes the work initiated in Deliverable D5.1 [16], which described the 5GTANGO first release (SONATA Release 4.0), following the requirements and scenarios collected in D2.1 [7] and the high-level architecture designed in D2.2 [8]. Similarly, Deliverables D3.3 [20] and D4.2 [21] describe the final versions of the Verification and Validation (V&V) platform and the Software Development Kit (SDK), respectively, the other two components of the SONATA integrated platform developed under the scope of 5GTANGO.

The 5GTANGO project did not create the Service Platform software from scratch, but extended the work initiated by the SONATA project [1]. 5GTANGO has extended and refactored some internal components created in the SONATA project. In other cases, to cope with new requirements, new internal components have been developed by 5GTANGO from scratch, such as Service Level Agreements (SLAs), Policies, and Network Slicing.

This document reports the architecture of the 5GTANGO final release, briefly describing its internal components as well as the APIs exposed to the outside. The main focus of the document, however, is the detailed description of the new features developed for this final release (SONATA Release 5.0). These new features were largely driven by requirements coming from the three pilots of the project (Communications, Immersive Media and Smart Manufacturing). For that reason, this document also indicates how the different pilots needed these features and why they are used. More details on that will be provided in D7.3 (M33), the last Deliverable dedicated to pilots.
Contents

List of Figures
List of Tables

1 Introduction
  1.1 Document Scope
  1.2 Overview
  1.3 Document Structure
2 Architecture
  2.1 Components
    2.1.1 Gatekeeper
    2.1.2 Catalogue
    2.1.3 Repository
    2.1.4 MANO Framework
    2.1.5 Infrastructure Abstraction
    2.1.6 Monitoring Manager
    2.1.7 Network Slice Manager
    2.1.8 Policy Manager
    2.1.9 SLA Manager
    2.1.10 Portal
  2.2 External Interfaces
    2.2.1 Authentication and Authorization Management
    2.2.2 Packages Management
    2.2.3 Functions Catalogue
    2.2.4 Services Catalogue
    2.2.5 Slices Catalogue
    2.2.6 LCM Requests
    2.2.7 Repository
    2.2.8 Policy Management
    2.2.9 SLA Management
    2.2.10 Infrastructure Management
    2.2.11 Monitoring Management
3 Final Release Features
  3.1 Deployment Flavours
    3.1.1 ETSI NFV
    3.1.2 5GTANGO Approach
  3.2 Network Slicing
    3.2.1 Network Service Composition within a Network Slice
    3.2.2 Network Service Sharing
    3.2.3 Quality of "Slice"
    3.2.4 Contributions to OSM
  3.3 Attaching Ingress and Egress Service Endpoints
  3.4 Quality of Service
    3.4.1 QoS Specification
    3.4.2 QoS Enforcement
  3.5 WAN Infrastructure
    3.5.1 Endpoint population
    3.5.2 Virtual Link Configuration
    3.5.3 Virtual Link Deconfiguration
  3.6 Kubernetes VIM
    3.6.1 5GTANGO Kubernetes Deployment Model
    3.6.2 Mapping ETSI to Kubernetes
    3.6.3 Kubernetes Deployment Flow
    3.6.4 Kubernetes Features
    3.6.5 Hybrid Network Services
  3.7 Monitoring Cloud Native Functions
    3.7.1 Kubernetes Monitoring Architecture
    3.7.2 Sidecar Monitoring
  3.8 Orchestration on the Emulator
  3.9 Dynamic Network Allocation
  3.10 Licensing
    3.10.1 Licensing Model
    3.10.2 License Service Level Objective
    3.10.3 Workflow
  3.11 VNF Migration
  3.12 Advanced Policy Management
  3.13 Improved and On-demand Scaling
  3.14 Portal Enhancements
    3.14.1 Interface Update
    3.14.2 User Management
    3.14.3 Monitoring
    3.14.4 Service Platform Features
    3.14.5 Service Management Features
4 Pilots Requirements
  4.1 Communications Pilot
  4.2 Immersive Media Pilot
  4.3 Smart Manufacturing Pilot
5 Next Evolutions
  5.1 ETSI Alignment
  5.2 Network Slicing
  5.3 SLAs
  5.4 Policies
  5.5 Infrastructure
  5.6 High-availability
  5.7 Portal
6 Source Code
7 Conclusions
A Appendices
  A.1 Appendix A: Deployment Flavour Examples
    A.1.1 Appendix A.1: Traditional VNFD
    A.1.2 Appendix A.2: DF-aware VNFD
    A.1.3 Appendix A.3: Traditional NSD
    A.1.4 Appendix A.4: DF-aware NSD
  A.2 Appendix B: SLA Template Example
  A.3 Appendix C: Service Platform External APIs
  A.4 Appendix D: Slice Management
B Bibliography
List of Figures
2.1 Service Platform Final Architecture
3.1 ETSI NFV Deployment Flavour General [43]
3.2 ETSI NFV Deployment Flavour Concept [43]
3.3 5GTANGO VNF Deployment Flavour Implementation
3.4 5GTANGO NS Deployment Flavour Implementation
3.5 Diagram of the previously presented Network Slice Template
3.6 Network Slice Instantiation process
3.7 Diagram with two Network Slice Instantiations sharing a Network Service
3.8 Communications Suite topology
3.9 Database Population flow diagram
3.10 Virtual Link creation process from the wrapper's point of view
3.11 Virtual Link deconfiguration process from the wrapper's point of view
3.12 Mapping ETSI CNFs
3.13 Kubernetes Deployment Flow
3.14 Smart Manufacturing pilot deployment
3.15 Hybrid Network Service
3.16 Prometheus-based Kubernetes Monitoring Architecture
3.17 Prometheus Monitoring Targets for Kubernetes
3.18 Sidecar Monitoring Flow
3.19 Roles involved in dynamic network allocation
3.20 License-based internal architecture
3.21 Instantiation with public license
3.22 Instantiation with private license
3.23 Instantiation with trial license
3.24 Specific managers for state migration and configuration
3.25 Portal WIM edition
3.26 Portal Slice instance details (upper page)
3.27 Portal Slice instance details (lower page)
3.28 Network service instance (upper page)
3.29 Network service instance (lower page)
4.1 Diagram of Communication Pilot deployment in Kubernetes
4.2 Architecture of immersive media pilot
4.3 Diagram of the monitoring integration in the immersive media pilot
4.4 Network slicing applied in the immersive media pilot
4.5 Architecture of smart manufacturing pilot
List of Tables
2.1 Authentication and Authorization external APIs
2.2 Packages Management external APIs
2.3 Functions Catalogue external APIs
2.4 Services Catalogue external APIs
2.5 Slices Catalogue external APIs
2.6 LCM Requests Management external APIs
2.7 Repository external APIs
2.8 Policy Management external APIs
2.9 SLA and Licensing Management external APIs
2.10 Infrastructure Management external APIs
2.11 Monitoring Management external APIs

3.1 Openstack QoS constraints

4.1 Pilot Requirements vs. 5GTANGO Final Release Features
1 Introduction
This section introduces Deliverable D5.2, providing its scope, an overview, and an explanation of the document structure.
1.1 Document Scope
The objective of this Deliverable D5.2 is to describe the design and implementation details of the last release (SONATA Release 5.0) of the 5GTANGO Service Platform, due in project month 24 (M24, May 2019), focusing on the major features of the Service Platform release. This Deliverable provides an overview of the new features made available. It also describes the final Service Platform architecture, summarizing the components as well as the external interfaces exposed to the outside.
The description contained in this document is an update of Deliverable D5.1, released in month 10 (M10, March 2018). Deliverable D5.1, focused on the first release (SONATA Release 4.0), already provided a good overview of the architecture, including components and the interactions among them. This final release brings some updates and extensions, but the architecture has not changed dramatically. The purpose of summarizing the architecture in D5.2 is mostly to make this document more self-contained for the reader and to document the exact final version of the components and interfaces.
1.2 Overview
This Deliverable describes the Service Platform final release (SONATA Release 5.0), the last version released under the scope of the 5GTANGO project. This version is a follow-up of the first release (SONATA Release 4.0), made available in July 2018 (M11), developed under the scope of 5GTANGO and already described in Deliverable D5.1 [16]. The development of the Service Platform started earlier, in the SONATA project [1], which released versions up to SONATA Release 3.1. As decided from the very beginning, this final version is released as open source, making it publicly available to a large community (Github) [11].
The main focus of this Deliverable is the documentation of the new features available in the final release. Some topics have evolved significantly in this release. One of the most significant evolutions was the support of Kubernetes infrastructures. Kubernetes is an emerging container-based lightweight technology that follows the cloud-native computing paradigm and is especially well suited for edge computing environments, another relevant trend today. The final release of the Service Platform is able to deploy Virtual Network Functions (VNFs) and Network Services (NSs) designed to be managed and executed within Kubernetes infrastructures, enabling their management in a smooth and transparent manner in comparison to other technologies, such as Openstack. Furthermore, it allows the design of hybrid VNFs and NSs, partially running on Kubernetes and partially on Openstack.
Another significant evolution has been achieved in the Quality of Service (QoS) area. Pushed by the pilots' requirements, the Service Platform is now capable of providing different levels of end-to-end service quality. In this area, special mention must be made of the support of
Deployment Flavours (DFs), enabling the creation of VNFs and NSs with multiple flavours. This makes it possible to describe multiple ways to deploy a VNF/NS and to allocate more resources (computational or networking) according to the level of performance required. This feature, together with the SLAs already supported in the first release and the support of the Transport API (T-API) interface for implementing networking control mechanisms (i.e. T-API enables the dynamic allocation of transport resources using software-defined networking (SDN) technology), closes the loop for a more controlled and comprehensive business model.
The Network Slicing features, a very hot topic nowadays, have also been significantly improved, enabling the creation of Network Slices by composing them out of multiple NSs. The interconnections among NSs can be defined in the Network Slice Template (NST), and they can be local within a single PoP, or remote and requiring the establishment of a Wide Area Network (WAN) link among NSs. In addition, different Network Slices can now share one or more NSs, thus meeting one of the 3GPP requirements for 5G technology: the ability to share some functional components of the 5G Core among multiple slices.
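The sharing mechanism described above can be sketched as a toy model: a slice instantiation reuses an NS instance already created for another slice when the template marks it as shared. The data model below is a simplified assumption for illustration, not the actual 5GTANGO NST schema.

```python
# Toy model of Network Slice instantiation with NS sharing: when a slice
# references a shared NS that is already running, the existing instance
# is reused instead of deploying a new one.
# (Field names "network_services", "shared" are illustrative assumptions.)

running_ns: dict[str, str] = {}   # shared NS name -> instance id
counter = 0

def instantiate_slice(nst: dict) -> dict:
    """Return a slice instance mapping each referenced NS to an NS instance id."""
    global counter
    instance = {}
    for ns in nst["network_services"]:
        if ns["shared"]:
            if ns["name"] not in running_ns:      # first slice deploys it
                counter += 1
                running_ns[ns["name"]] = f"nsi-{counter}"
            instance[ns["name"]] = running_ns[ns["name"]]  # later slices reuse it
        else:
            counter += 1                          # dedicated NS: always new
            instance[ns["name"]] = f"nsi-{counter}"
    return instance

nst = {"network_services": [{"name": "5g-core", "shared": True},
                            {"name": "media-app", "shared": False}]}

slice_a = instantiate_slice(nst)
slice_b = instantiate_slice(nst)
print(slice_a["5g-core"] == slice_b["5g-core"])       # the 5G Core NS is shared
print(slice_a["media-app"] == slice_b["media-app"])   # the media NS is not
```

Two slice instances from the same template thus end up pointing at the same 5G Core NS instance while keeping their own dedicated NSs, which is the behavior the 3GPP sharing requirement calls for.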
Another important improvement of this final release is Licensing. This is an important feature that allows the enforcement of multiple business models associated with VNFs and NSs, making the Service Platform an industry-grade tool. Licensing covers several models, such as public, trial and private licenses, permitting license acquisition and defining limits to the number of VNF/NS instantiations. Apart from licensing, other miscellaneous features were also improved: increasing the networking dynamics, extending the NSs outside the PoPs, increasing the Portal capabilities, improving the policy control, supporting migration, and allowing on-demand scaling operations, to name the most relevant.
This Deliverable also describes the final Service Platform architecture, briefly detailing all components and interfaces. Although there are no significant changes from the first release (described in D5.1 [16]), there are a few changes that deserve some remarks; moreover, this makes the document more self-contained and readable. This Deliverable also maps the final release to the requirements coming from the pilots, showing that they were the main influence on this release. An important final remark goes to the description of the public repositories of the software (open source) released, as this is the main output of the project and will last far beyond the project scope.
1.3 Document Structure
This Deliverable is structured as follows. Sec. 2 briefly describes the SP final architecture, namely the main components and external interfaces, referencing detailed descriptions already written in Deliverable D5.1 and publicly available in the SONATA Github [11]. Sec. 3 elaborates on the new features of the SP final release, describing the details of the final release developed under the scope of the 5GTANGO project. Sec. 4 relates the requirements coming from the three pilots, which were the main triggers of the new features released in the last release. Sec. 5 identifies new features suitable to be developed in the future for SONATA, even though that will not be possible within the 5GTANGO project lifetime. Sec. 6 details the open source software repositories that comprise the final SP release, publicly available for the community on Github. Finally, Sec. 7 presents the conclusions, highlighting the main achievements and lessons learnt.
2 Architecture
The SONATA Service Platform consists of several components running as microservices and interacting with each other in order to manage the lifecycle of Virtual Network Functions (VNFs), Network Services (NSs) and Network Slices. The Gatekeeper is the front-end component, responsible for securing and forwarding the incoming requests to the platform (OSS/BSS). The MANO Framework is the core of the Service Platform and implements the ETSI NFV Orchestrator (NFVO) and VNF Manager (VNFM) functions, responsible for on-boarding and managing the lifecycle of VNFs and NSs. The Slice Manager is responsible for the Network Slice layer, on-boarding Network Slice Templates (NSTs) and managing the lifecycle of Network Slice Instances (NSIs). The Infrastructure Abstraction implements the interaction with the infrastructures, providing an abstraction to support multiple Virtualized Infrastructure Managers (VIMs) and WAN Infrastructure Managers (WIMs). The Catalogue and Repository are databases that store descriptors (and other artifacts) and records, respectively. The Policy Manager centralizes the policies of the whole system, while the SLA Manager defines and manages SLAs, checking whether violations occur. Finally, the Monitoring Manager is responsible for collecting and delivering monitoring data related to VNFs, NSs and Network Slices, while the Portal provides a user-friendly Graphical User Interface (GUI) for the different roles to manage the entire ecosystem.
This section describes the final Service Platform (SP) architecture, an evolution of the architecture described in [16]. As some components remain untouched and the interfaces are in essence the same but with some extensions, this section summarizes the architecture already described in [16], pointing to the documentation already produced and focusing on the new updates and extensions. Fig. 2.1 depicts the final Service Platform architecture. The next sub-sections describe the Service Platform internal components, as well as the external interfaces exposed to the outside.
The overall SONATA Framework is composed of three parts: the Service Platform (SP) fits together with the Software Development Kit (SDK), which helps in the creation of Descriptors and Packages and the on-boarding of artifacts into the system Catalogue, and the Verification and Validation (V&V) Platform, which tests, validates and benchmarks the artifacts before they go into a production environment.
2.1 Components
This section briefly describes the components that comprise the Service Platform. More details on how the components work can be found in [16], Section 2.
2.1.1 Gatekeeper
The Gatekeeper, as shown in Fig. 2.1, is the component responsible for exposing the Service Platform's APIs to the outside, ensuring that every access to the Service Platform is made by authenticated and authorized users, filtering out invalid requests before they are processed by the relevant internal component (micro-service). The Gatekeeper is also responsible for retrieving Descriptors from the Catalogue, enforcing licensing models (through the SLA Manager) and enriching the requests with other data.
Figure 2.1: Service Platform Final Architecture.
The Gatekeeper interacts internally with the components of the
Service Platform, namely:
• Catalogue, whenever packages, services, functions, slices, SLAs and Policies are requested;

• MANO Framework, whenever a service instantiation, service instance termination or service instance scaling is requested;

• Repository, whenever records for instantiated services and functions are requested;

• Slice Manager, whenever a slice template is created or requested and a slice template instantiation or termination is requested;

• SLA Manager, whenever an SLA is created or associated with a service;

• Policy Manager, whenever a policy is created or requested;

• Infrastructure Abstraction, whenever there is a need to configure it with VIMs and WIMs, to know which infrastructure is available, or to create or delete networks between services of the same slice;

• Monitoring Manager, whenever a developer wants to extract monitoring data related to the services and functions they have developed and that were instantiated.
Further details can be found in [16], Section 2.1, and in the main Gatekeeper's Github repository wiki page [3]. Note that this component is shared between the Service Platform and the Verification and Validation platform, being deployed with different configurations, as defined in the DevOps scripts (see [10]).
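The Gatekeeper's filter-then-forward behavior can be illustrated with a minimal sketch. The route table and token check below are assumptions for illustration only; they do not reproduce the actual tng-gtk routes or the real authentication service.

```python
# Illustrative sketch of Gatekeeper request handling: authenticate the
# caller, then forward valid requests to the internal component that owns
# the resource. Routes and tokens are hypothetical, not the real API.

ROUTES = {
    "packages": "Catalogue",
    "services": "Catalogue",
    "instantiations": "MANO Framework",
    "records": "Repository",
    "slices": "Slice Manager",
    "slas": "SLA Manager",
    "policies": "Policy Manager",
}

VALID_TOKENS = {"token-abc": "developer"}  # stand-in for the auth component

def handle_request(path: str, token: str) -> tuple[int, str]:
    """Filter out invalid requests, then route to the owning micro-service."""
    if token not in VALID_TOKENS:
        return 401, "unauthenticated request rejected"
    resource = path.strip("/").split("/")[0]
    target = ROUTES.get(resource)
    if target is None:
        return 404, f"no internal component owns '{resource}'"
    return 200, f"forwarded to {target}"

print(handle_request("/packages/1234", "token-abc"))
print(handle_request("/records/ns", "bad-token"))
```

The point of the sketch is the ordering: authentication happens first, so internal components only ever see requests from authorized users, as the text above describes.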
2.1.2 Catalogue
The Catalogue is an instrumental component of the 5GTANGO environment, present in different parts of it (a common building block with the V&V). Primarily, it hosts the different descriptors of the 5GTANGO packages. Since 5GTANGO aims at a multi-platform environment, it enables NS developers to orchestrate and deploy their NSs using different Service Platforms. In this context, the 5GTANGO Catalogue has been adapted to support the storing and retrieval of new packages that are "marked" as ONAP [45], OSM [36] or 5GTANGO packages. Moreover, the Catalogue is aligned with the principle of persistent storage by extending the hosted VNFDs and NSDs with valuable fields for successful data integration, accuracy of the document format, confirmed time of creation, etc. In this way, it enables enhanced operations for Creating, Retrieving, Updating and Deleting (CRUD) descriptors, while ensuring the correct data format of the stored documents. Going beyond conventional data storage, the 5GTANGO Catalogue provides intelligent functionalities in the 5G environment. Since the types of information vary, one of the requirements satisfied by the Catalogue is full-text search over structure-agnostic documents. Since the schema of the diverse documents (i.e. NS descriptors, SLA descriptors, Slice descriptors, etc.) is variable, the Catalogue provides search capabilities without the need for indexes. Thus, it provides seamless retrieval in deeply hierarchical machine-readable document structures. Furthermore, besides the plain NoSQL document store for the diverse descriptors, the Catalogue provides a scalable file system for hosting the artifact files required for the instantiation lifecycle of the NSs. Finally, the Catalogue provides a set of endpoints where the CRUD methods are supported for the different descriptors of the project. Moreover, as previously stated, the Catalogue is responsible for storing different objects regarding the management of the NSs' lifecycle and more. Thus, the following object categories are defined:

1. Packages

2. Virtual Network Functions (VNFs)

3. Network Services (NSs)

4. Network Slice Templates (NSTs)

5. Service Level Agreements (SLAs)

6. Policies
The main interaction of the Catalogue is the one with the Gatekeeper. Since the Portal requests all the available descriptors for visualization purposes, all requests to the Catalogue go through the Gatekeeper. Also, the SLA Manager and the Policy Manager access the Catalogue in order to retrieve the NSDs and VNFDs for their internal processes. The MANO Framework also accesses the Catalogue when it needs information regarding the NSs and VNFs that are going to be instantiated. The same occurs for the Slice Manager, which gets from the Catalogue the NSTs to be instantiated.
Further details and related work are attached in the
corresponding Github repository [9] and in[16].
2.1.3 Repository
The Repository, as shown in Fig. 2.1, is the component responsible for storing the runtime data (i.e. records) of the Network Services and Slices. This component is widely used internally
in the Service Platform by the Gatekeeper, MANO, Monitoring, among others, to keep track of deployed instances of services and slices. The Repository exposes a northbound REST API that routes external requests to a backend database engine. The REST API was developed in Ruby 2.4.3 and is distributed as a microservice inside a container. The database engine used by the API is MongoDB. Moreover, the Repository API supports CRUD functions to handle the information of VNF Records (VNFRs), Network Service Records (NSRs) and Network Slice Instance Records (NSIRs). Additionally, the REST API validates each record being added to the database against the 5GTANGO information model located in the Github repository sonata-nfv/tng-schema [6].
The northbound interface of the Repository interacts with the following components:
• Gatekeeper, operations: READ vnfr, nsr, nsir
• MANO Framework, operations: READ, WRITE vnfr, nsr
• Slice Manager, operations: READ, WRITE nsir
• Monitoring Manager, operations: READ nsr, vnfr
• Policy Manager, operations: READ nsr, vnfr
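The validation step described above can be sketched as a minimal structural check before a write; the required field names are assumptions for illustration, the authoritative model being the schemas in sonata-nfv/tng-schema [6].

```python
# Minimal sketch of the record validation done before a write: reject records
# missing mandatory fields. The field names below are illustrative
# assumptions; the real model is defined in sonata-nfv/tng-schema [6].
REQUIRED_NSR_FIELDS = {"id", "status", "descriptor_reference"}

def validate_nsr(record):
    """Raise ValueError if the (assumed) mandatory NSR fields are missing."""
    missing = REQUIRED_NSR_FIELDS - set(record)
    if missing:
        raise ValueError(f"invalid NSR, missing fields: {sorted(missing)}")
    return record
```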
More details on the Repository, namely with regards to architecture and APIs, can be found in [16], section 2.3, and in the tng-rep Github repository wiki page [27].
2.1.4 MANO Framework
The Management and Orchestration (MANO) Framework is one of the core components of the Service Platform. It manages the lifecycle of all active network service instances (instantiate, scale, migrate, configure, etc.) by orchestrating the available infrastructure, both compute and networking. It sits on top of the Infrastructure Adapter (IA), which provides a uniform API towards the available infrastructures. The MANO Framework consumes all the requests to change the lifecycle of a network service, coming from various entities (Gatekeeper, Policy Manager, Specific Monitoring Managers, etc.), and translates them into IA instructions. This translation is done by considering the descriptors, records, current state of the infrastructure, etc.
The MANO Framework exposes an internal API through the RabbitMQ message bus, on which it can be requested to instantiate, scale, migrate or terminate a network service. The appropriate topics and message payloads are described in the MANO Framework related GitHub repository wiki [47]. As of now, this API is consumed by the Gatekeeper (which exposes it to the Slice Manager, Portal and external users) and the Policy Manager.
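A request on that bus can be sketched as a topic plus a JSON payload. The topic name and payload keys below are assumptions for illustration; the authoritative topics and message schemas are those documented in the MANO Framework wiki [47].

```python
import json
import uuid

# Sketch of an instantiation request for the MANO message bus. The topic name
# and payload keys are assumptions for illustration, not the documented API.
def build_instantiation_request(nsd_uuid):
    topic = "service.instances.create"       # assumed topic name
    payload = {
        "NSD": nsd_uuid,                     # descriptor to instantiate
        "request_id": str(uuid.uuid4()),     # correlates the async reply
    }
    return topic, json.dumps(payload)
```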
To perform its tasks, the MANO Framework interacts with several other SP components. It uses the Catalogue to obtain descriptors, the Repository to store runtime information related to services and VNFs, the Monitoring Manager to configure monitoring for running services, and the Infrastructure Abstraction to make requests to the controlled infrastructure. A description of the consumed APIs can be found in the section of the respective component.
More details on the MANO Framework architecture can be found in [16] and in the MANO Framework related GitHub repository [49].
2.1.5 Infrastructure Abstraction
The Infrastructure Abstraction (IA) component is responsible for interacting with the multiple Virtualized Infrastructure Managers (VIMs) and WAN Infrastructure Managers (WIMs), providing
a unified and abstracted NorthBound Interface (NBI) API to components that require the management of resources (mainly the MANO Framework, but also the Slice Manager). This way, the management of different types of technologies (e.g. VMs, containers) becomes unified even though the original resource management APIs are significantly different. By using the IA, other components can be agnostic to the details of a particular technology (like Openstack, Kubernetes, etc.), as those details are embedded in IA plug-ins, leading to easy utilization and flexible VIM/WIM extensibility. Currently, the IA supports three VIMs: Openstack (heat), Kubernetes (k8s) and the VIM Emulator (vim-emu); and two WIMs: Virtual Tenant Network (VTN) (an OpenDayLight app) and Transport API (T-API).
The IA interacts internally with the different IA plug-ins (or wrappers) in order to request resources from the different supported technologies. The IA-NBI acts as a proxy, identifying and forwarding management messages to the appropriate plug-in. Externally, the IA mainly interacts with the MANO Framework, accepting requests to manage resources for VIMs and WIMs. In the final release, the Slice Manager also interacts with the IA (via the Gatekeeper) to create the networks required to combine NSs and form end-to-end Network Slices. All the above interactions use message-based communication (RabbitMQ). The IA also provides a REST API to manage VIMs and WIMs, enabling this configuration through the Portal, via the Gatekeeper.
More details on the IA, namely with regards to architecture and APIs, can be found in [16], section 2.5, and in the IA Github repository wiki page [30].
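The proxy behaviour of the IA-NBI can be sketched as a simple dispatch on the VIM type of an incoming request; the class and key names below are illustrative, not the actual plug-in code.

```python
# Sketch of the IA-NBI proxy behaviour: identify the VIM type of an incoming
# resource-management request and forward it to the matching plug-in
# (wrapper). Names are illustrative, not the actual tng-sp code.
class HeatWrapper:            # Openstack (heat) plug-in
    def deploy(self, req):
        return f"heat stack for {req['instance']}"

class KubernetesWrapper:      # Kubernetes (k8s) plug-in
    def deploy(self, req):
        return f"k8s deployment for {req['instance']}"

PLUGINS = {"heat": HeatWrapper(), "k8s": KubernetesWrapper()}

def dispatch(request):
    """Forward a deployment request to the plug-in for its VIM type."""
    wrapper = PLUGINS.get(request["vim_type"])
    if wrapper is None:
        raise KeyError(f"no plug-in for VIM type {request['vim_type']!r}")
    return wrapper.deploy(request)
```

This is why other components can stay agnostic to the underlying technology: only the wrapper behind the dispatch changes.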
2.1.6 Monitoring Manager
The Monitoring Manager is responsible for collecting and processing data from several sources, providing the ability to activate metrics and thresholds to capture generic or service-specific NS and VNF behaviour. Moreover, the Monitoring Manager provides interfaces to define rules based on metrics gathered from one or more VNFs deployed in one or more NFVIs, and creates notifications in real-time. Monitoring data and alerts are also accessible through a RESTful API, following a standard format.
5GTANGO adopted the Prometheus monitoring tool [46] as the basis for its monitoring solution, before Prometheus became a Cloud Native Computing Foundation (CNCF) project. To this end, within the work of 5GTANGO, Prometheus capabilities have been adapted and properly enhanced to address additional requirements coming from the 5G-related use cases of the project.
As discussed in detail below, in the current implementation the following types of sources for collecting data are available:
• Network infrastructure monitoring: data accessible from physical or virtual network elements or through the Network Management System (NMS);
• NFV infrastructure monitoring: the data collection methods go beyond the native Ceilometer service and, for better granularity, the open-source software Libvirt has been integrated;
• Kubernetes infrastructure monitoring: monitoring data are collected per cluster node, per pod and per container using cloud native tools already available;
• VNF monitoring: data collected with monitoring probes from the VNF-specific monitoring metrics, exposed by each VNF according to the developer-provided descriptor;
• SDN monitoring: data collected from the OpenDayLight (ODL) controller, with respect to the control and data plane of the network.
Note that the time interval for collecting monitoring data can be set as low as one second, according to use case requirements.
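In Prometheus terms, such an interval corresponds to the scrape_interval setting; a minimal configuration fragment could look as follows (the job name and target address are placeholders, not actual 5GTANGO values).

```yaml
# Minimal Prometheus configuration fragment illustrating a 1 s collection
# interval; job name and target address are placeholders.
global:
  scrape_interval: 1s        # per the use-case requirement above
scrape_configs:
  - job_name: vnf-probe      # placeholder job
    static_configs:
      - targets: ['vnf-probe:9100']   # placeholder exporter endpoint
```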
The Monitoring Manager mainly interacts with the following
components:
1. MANO Framework: provides notifications regarding the instantiation and termination of NSs;
2. SLA Manager: defines monitoring rules per service based on SLA contracts and consumes the generated alerts via message-based communication (RabbitMQ);
3. Policy Manager: defines monitoring rules per service and consumes the generated alerts via message-based communication (RabbitMQ);
4. Gatekeeper: receives graphs based on monitoring data regarding the performance of the SP and the running NSs (requested by the Portal).
More information can be found in the Monitoring Manager Github repository wiki pages [25], [23] and [24].
2.1.7 Network Slice Manager
The Network Slice Manager (NetSlicer) is the component responsible for managing any Network Slice Template (NST), i.e. a set of interconnected Network Services, and its related Network Slice Instances (NSIs). By adding one layer over the Network Services (NSs) and creating Network Slices, this component allows the SONATA Service Platform (SP) to create and offer better, isolated services to the final user. Thus, it is able to deploy the different services composing a NST, in single- and multi-VIM scenarios, distributing the services among the different VIMs based on the resource availability of each VIM and the requirements of each service. In addition, the NetSlicer is able to manage the sharing of a Network Service among different Network Slices, allowing the SONATA SP to achieve a more efficient resource allocation.
The network operator is responsible for describing the Network Slice type, which defines the QoS possibilities for the Network Slice; this is relevant for the configuration of the RAN part and the 5G core. The NetSlicer must handle the deployment of a network slice that contains multiple network services, each one with its specified SLA level.
The NetSlicer has two main interaction groups. On the one hand, the actions in charge of sending or receiving the data objects, i.e. Network Slice Template descriptors (NSTds) and Network Slice Instance records (NSIrs), to and from the databases (Catalogue for NSTs, Repository for NSIs); these interactions are direct between the NetSlicer and the other two components. On the other hand, the main actions in charge of creating/removing the data objects (NSTd, NSIr) go through the Gatekeeper, as there are internal interactions (i.e. network creation done by the Infrastructure Abstraction component, or network service deployment made by the MANO Framework component) that must be kept under control, as there might be many at the same time.
Similarly to other components within the SONATA Service Platform environment, the NetSlicer exposes to the Gatekeeper a set of RESTful APIs. Among these exposed APIs, two different groups can be considered: the first group exposes the RESTful APIs through the Gatekeeper to the BSS/OSS user (i.e. Portal, CLI) to trigger the main NetSlicer actions (instantiate/terminate/get); the second group of APIs is only available to the Gatekeeper and allows the NetSlicer to complete internal procedures when certain conditions are fulfilled regarding the main operations in other SONATA components (i.e. network creation/removal managed by the IA component, or network service instantiation/termination managed by the MANO Framework component).
More details on the Network Slice Manager, namely with regards to architecture and APIs, can be found in [16], section 2.8, and in the Network Slice Manager Github repository wiki page [48].
2.1.8 Policy Manager
The Policy Manager is responsible for the activation and management of runtime policies over the deployed Network Services, aiming at injecting intelligence into the orchestration mechanisms. Focus is mainly given to elasticity policies (e.g. scenarios for scaling VNFs in and out). However, further actions are also supported, related to the triggering of alerts and security aspects (e.g. consumption of an alert through an IDS and enforcement of a security incident handling action). The Policy Manager is implemented based on Drools, an open-source rule-based management system. In the second version of the Policy Manager, extensions have been realized in terms of the performance of the supported rule-based management system as well as dynamic management of the policy rules.
Dynamic management of the policy rules regards the capacity to adapt the enforced policy rule-set during runtime, without impacting the overall enforced policy efficiency. Such an adaptation may be required when an inefficient operation is identified (e.g. oscillating scaling in and out actions due to an improper specification of the inertia period) or when the desired QoS level to be guaranteed for the NS changes (e.g. changing the threshold for triggering a scaling action). With regards to performance, updates have been realized in the way the input data of the rule-based management system are handled. An event-driven management approach has been introduced, significantly reducing the required data processing compared to the previous approach, where the working memory of the Policy Manager was periodically scanned.
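The interplay between an event-driven trigger and the inertia period can be sketched as follows; the metric, threshold and inertia values are illustrative, not 5GTANGO defaults.

```python
import time

# Sketch of an event-driven elasticity decision with an inertia period: a
# scaling action fires when the metric crosses the threshold, but not again
# until the inertia window has elapsed, damping the oscillating
# scale-in/scale-out behaviour mentioned above. Values are illustrative.
class ScalingPolicy:
    def __init__(self, threshold=80.0, inertia_s=60.0):
        self.threshold = threshold      # e.g. CPU load (%)
        self.inertia_s = inertia_s      # minimum gap between actions (s)
        self._last_action = float("-inf")

    def on_metric_event(self, value, now=None):
        """Return 'scale_out' or None for a single monitoring event."""
        now = time.monotonic() if now is None else now
        if value > self.threshold and now - self._last_action >= self.inertia_s:
            self._last_action = now
            return "scale_out"
        return None
```

Because the decision is taken per incoming event, no periodic scan of the working memory is needed.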
In order to support runtime policy enforcement within the Service Platform, a set of interactions with other components takes place. The Policy Manager interacts with the Gatekeeper through a set of REST APIs in order to fetch all the declared runtime policies for a specific NS; get, create, update and delete a specific runtime policy descriptor; bind a runtime policy descriptor to an SLA; define a runtime policy as the default policy; enforce or deactivate a runtime policy upon the instantiation of a network service; activate the associated policy monitoring probes; and request the termination of a policy's enforcement along with the termination of the related monitoring probes.
The Policy Manager also consumes REST APIs provided by the Catalogue for retrieving NS descriptors in the process of defining a policy descriptor for a specific NS, while it also stores and manages (CRUD operations) the produced policy descriptors in the Catalogue. There is also interaction between the Policy Manager and the MANO through RabbitMQ, where the latter provides information about the instantiation and deletion of a NS as well as the result of any horizontal scaling action. The Policy Manager passes information to the MANO with regards to scaling actions that have to be enforced (add/remove workers), while the MANO informs the Policy Manager about the status of the requested scaling actions.
The Policy Manager interacts with the Repository in order to get deployment information related to the deployed VNFs and NSs, based on the relevant records in the Repository. Finally, the Policy Manager sends the Monitoring Framework a request for the creation of monitoring alert rules upon the instantiation of a NS, based on the relevant information in the policy descriptor. This interaction is asynchronous and is supported via a REST API exposed by the Monitoring Framework. The Policy Manager also interacts with the Monitoring Framework asynchronously via RabbitMQ, by consuming the monitoring alert messages when the activated monitoring alert rules are satisfied by the monitored NS metric values.
More details on the Policy Manager, namely with regards to the architecture, interactions with other components and APIs, can be found in [16], section 2.9, and in the Policy Manager Github repository wiki page [18].
2.1.9 SLA Manager
The SLA Manager (SLAM) component is responsible for interacting with a set of 5GTANGO Service Platform components in order to support the life-cycle management of Service Level Agreements, from template formulation to agreement violation, considering also the whole NS lifecycle. The SLAM is a plugin-based component that can be adapted and extended to work on different service platforms, supporting a variety of RESTful APIs. The implementation of the SLA Manager aims at governing the interaction between a distributed set of users and resources, benefiting both service providers and customers.
The SLAM was introduced in the 5GTANGO first release and initially described in [16] and [8], section 5.2.12. The SLAM's architecture has been slightly updated in this new release. It is currently partitioned into two phases, namely a) SLA Template Management and b) Information Management.
The SLA Template Management phase takes place prior to the NS deployment and includes the template formulation. This phase received key updates in the final release (Sec. 3). The first update refers to a Mapping Mechanism between QoS deployment flavours and a specified SLA; the second refers to the introduction of License-Based SLAs, allowing the NS instantiation to carry license information, as will be described in detail in Sec. 3.10.
The Information Management phase starts during the NS deployment, incorporating the SLA instance creation along with the violations management. The SLA Manager interacts internally with the Catalogue, Gatekeeper and Monitoring Framework.
For the 5GTANGO final release, the interactions between the SLA Manager and the Catalogue are based on RESTful APIs exposed by the Catalogue and consumed by the SLAM. The main interaction between the SLA Manager and the Catalogue is the retrieval of the NS descriptor chosen by the customer, in order to generate the SLA template. In a next step, after an agreement between the customer and the provider is achieved, the SLA Template is stored in the Catalogue for future use. The SLA Manager can also retrieve the SLA Templates via an interface exposed by the Catalogue, as well as delete one or more of them.
The SLA Manager acts as a microservice in a multi-component environment, providing a set of RESTful APIs. Thus, any API request to the SLA Manager first passes through the Gatekeeper. Also, APIs of the SLA Manager that need the user's credentials (e.g. user name, user email) are adapted to process the authentication token provided by the Gatekeeper and extract that information. Additionally, the SLAM interacts with the GK in order to be aware of the NS lifecycle (i.e. instantiation and termination of a network service). Specifically, the Gatekeeper exposes an API where other components, like the SLAM, can request the NS life-cycle state, and also provides messages through a message bus system, through which the SLAM can access the NS instantiation/termination information. Moreover, prior to the instantiation request, the GK interacts with the SLAM through a REST API in order to get a mapping between the NS that is going to be deployed and the selected deployment flavour linked to the selected SLA. More info on the deployment flavours can be found in Sec. 3.1. Furthermore, during the NS instantiation, the Gatekeeper requests from the SLAM information regarding the licensing records. Specifically, the GK asks whether the customer who requested the instantiation is permitted to do so, based on the selected NS, SLA and the corresponding license type.
The interaction between the SLAM and the Monitoring Framework includes the creation of alert rules based on particular monitored metrics. The monitoring of these metrics is activated upon the instantiation of a NS. The aforementioned rules provide information towards the Monitoring Framework regarding the guaranteed Service Level Objectives (SLOs). This interaction is asynchronous and is supported via a REST API exposed by the Monitoring Framework. The SLAM also interacts with the Monitoring Framework in an asynchronous way via
a message bus system, by consuming the monitoring alert messages when the activated monitoring alert rules are satisfied by the monitored NS metric values (i.e. a violation of the specified SLO occurred).
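The violation check driven by these alerts can be sketched as a comparison of monitored values against the guaranteed SLOs; the SLO names, operators and thresholds below are illustrative, not actual 5GTANGO agreement contents.

```python
# Sketch of the SLO-violation check: each SLO in the agreement carries a
# guaranteed threshold, and a monitored value outside it is reported as a
# violation. SLO names, operators and thresholds are illustrative.
SLOS = [
    {"metric": "availability_pct", "operator": ">=", "threshold": 99.9},
    {"metric": "latency_ms",       "operator": "<=", "threshold": 20.0},
]

OPS = {">=": lambda v, t: v >= t, "<=": lambda v, t: v <= t}

def violations(measurements, slos=SLOS):
    """Return the SLOs violated by the given metric measurements."""
    return [
        slo for slo in slos
        if slo["metric"] in measurements
        and not OPS[slo["operator"]](measurements[slo["metric"]],
                                     slo["threshold"])
    ]
```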
2.1.10 Portal
The Portal is an Angular 7 application responsible for offering users the different features developed in the scope of the project in a graphical and friendly way. The Portal is a transversal tool, used by the three main blocks of the SONATA integrated platform: SDK, V&V and SP. With regards to the SP, it includes the lifecycle management of Network Slices, Network Services and VNFs, from on-boarding and instantiation to termination. The Portal also manages SLAs, Policies and Licenses, allowing their CRUD operations and association to NSs. The Portal is responsible for performing some configurations, such as VIMs, WIMs and Endpoints, in order to define the infrastructure to be used to support the artifacts. In addition, the Portal displays relevant information (records) related to Packages, NSs, VNFs, Requests, SLAs, Policies and Instances. Finally, the Portal displays a useful Dashboard comprising the key performance indicators (KPIs) of the integrated SONATA platform, allowing operators, developers and customers to see the big picture of the platform operations in a single page.
The Portal interacts with the Gatekeeper through its set of APIs exposed towards the outside. This communication provides all the information required to be displayed in the web app, and it also allows managing the different elements developed in the scope of the project, such as network slices or network services. From this perspective, the Portal is seen as an external entity whose only entry point to operate with the different features is the Gatekeeper.
More details on the Portal can be found in [16] and in the Portal Github repository wiki page [26].
2.2 External Interfaces
This section briefly describes the external interfaces exposed by the Service Platform final release to the outside. Some details about the exposed interfaces can be found in D5.1 [16], Section 3.2 (related to the first release), and full details can be consulted in [3]. Appendix C (Sec. A.3) shows the configuration file of the exposed APIs in the Gatekeeper.
2.2.1 Authentication and Authorization Management
The Authentication and Authorization Management capabilities allow the different users of the Service Platform to use the platform in a secure manner. Users are authenticated to validate their identity and authorized to get access to the available features, either using the APIs directly or indirectly through the Portal. Users can be associated with certain roles, and authorization rules can be set based on those roles. Tbl. 2.1 depicts the Authentication and Authorization APIs.
Table 2.1: Authentication and Authorization external APIs.
Feature | Base API | Methods | Description
Manage Users | /api/v3/users | GET, DELETE, OPTIONS, PATCH, POST | Query, add, update and delete users
Manage Roles | /api/v3/users/roles | POST, GET, PATCH | Query, add, update and delete roles
Authenticate and Authorize Users | /api/v3/users/sessions | POST | Authenticate and authorize users
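A login call against the session endpoint of Tbl. 2.1 could be prepared as below; the request is only constructed, not sent, and the credential field names in the body are assumptions for illustration.

```python
import json
import urllib.request

# Sketch of authenticating against the Service Platform via the session
# endpoint of Tbl. 2.1. The credential field names are assumptions.
def build_login_request(base_url, username, password):
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/v3/users/sessions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_login_request("https://sp.example.com", "operator", "secret")
# The token returned by the platform would then be sent as a bearer token
# on subsequent API calls.
```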
2.2.2 Packages Management
The Packages Management capabilities allow the management of
packages in the Service Plat-form. Packages are used in on-boarding
operations and carry ETSI NFV artifacts (VNFs andNSs). During the
on-boarding operation, packages’ contents are stored in the
Catalogue. Whenon-boarded, VNFs and NSs can be instantiated in the
virtual infrastructure. Packages can alsocarry tests for the sake
of the Validation and Verification (V&V) Platform (see [20] for
more details).The Tbl. 2.1 depicts the Packages management
APIs.
Table 2.2: Packages Management external APIs.
Feature | Base API | Methods | Description
Manage Packages | /api/v3/packages | GET, DELETE, OPTIONS, POST | Query, add and delete Packages on the Catalogue
2.2.3 Functions Catalogue
The Functions Catalogue capabilities allow querying the Catalogue for Functions (VNF) information. This information is stored on package on-boarding and is required during Service (and, iteratively, VNF) deployment and other Function lifecycle management operations. Tbl. 2.3 depicts the Functions Catalogue APIs.
Table 2.3: Functions Catalogue external APIs.
Feature | Base API | Methods | Description
Functions Catalogue | /api/v3/functions | GET, OPTIONS | Query information about Functions in the Catalogue
2.2.4 Services Catalogue
The Services Catalogue capabilities allow the query of Catalogue
to get Services (NS) informa-tion. This information is stored on
package on-boarding and is required during NS deployment(and other
Service lifecycle management operations). The Tbl. 2.4 depicts the
Services CatalogueAPIs.
Table 2.4: Services Catalogue external APIs.
Feature | Base API | Methods | Description
Services Catalogue | /api/v3/services | GET, OPTIONS | Query information about Services in the Catalogue
2.2.5 Slices Catalogue
The Slices Catalogue capabilities allow the query of Catalogue
to get Slices information. Thisinformation is stored during the
Network Slice Template (NST) on-boarding and is required
duringSlice deployment (and other Slice lifecycle management
operations). The Tbl. 2.5 depicts the SlicesCatalogue APIs.
Table 2.5: Slices Catalogue external APIs.
Feature | Base API | Methods | Description
Slices Catalogue | /api/v3/slices | GET, DELETE, OPTIONS, POST | Query information about Slice Templates in the Catalogue
2.2.6 LCM Requests
The LCM Requests capabilities allow issuing requests for lifecycle management operations (e.g. NS/Slice deployment) in the Service Platform. Requests can also be queried and can be in different statuses, such as ongoing, completed, error, etc. They can be related to historical or present data, although the query of ongoing requests is usually more useful. Tbl. 2.6 depicts the Requests Management APIs.
Table 2.6: LCM Requests Management external APIs.
Feature | Base API | Methods | Description
LCM Requests | /api/v3/requests | GET, POST, OPTIONS | Issue and query Requests for LCM operations on NSs and Slices
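The body of such a POST on /api/v3/requests might be built as below; the field names and the request-type tag are assumptions for illustration, not the Gatekeeper's actual schema.

```python
import json

# Sketch of the body of a POST /api/v3/requests call asking for a NS
# instantiation. Field names and the request-type tag are assumptions.
def lcm_request(service_uuid, request_type="CREATE_SERVICE", sla_id=None):
    payload = {
        "request_type": request_type,   # assumed type tag
        "service_uuid": service_uuid,   # NS to instantiate
        "ingresses": [],
        "egresses": [],
    }
    if sla_id:
        payload["sla_id"] = sla_id      # link the instance to a selected SLA
    return json.dumps(payload)
```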
2.2.7 Repository
The Repository capabilities allow querying the Repository for information about the Records of VNFs (VNFRs) and NSs (NSRs). The Repository contains Records about VNF and NS instances with all the details, such as IDs, IPs, ports, VMs, etc. Records indicate the status of the instance, either for VNFs/NSs still running or for historical data. Tbl. 2.7 depicts the Repository APIs.
Table 2.7: Repository external APIs.
Feature | Base API | Methods | Description
Functions Repository | /api/v3/records/functions | GET, OPTIONS | Query VNF records (VNFRs) on the Repository
Services Repository | /api/v3/records/services | GET, OPTIONS | Query NS records (NSRs) on the Repository
Slices Repository | /api/v3/records/slices | GET, OPTIONS | Query NSI records (NSIRs) on the Repository
2.2.8 Policy Management
The Policy Management capabilities allow the management of the Policies that rule the behaviour of Service instances. The Service Platform supports two types of policies: placement policies, which decide where the NSs and VNFs are deployed, and runtime policies, which, based on monitoring, decide the actions to be enforced on VNFs and NSs, e.g. scaling in/out, migration, etc. Policies can be created and associated to NSDs at design time and can be changed at runtime. Tbl. 2.8 depicts the Policy Management APIs.
Table 2.8: Policy Management external APIs.
Feature | Base API | Methods | Description
Runtime Policy Management | /api/v3/policies | GET, DELETE, OPTIONS, PATCH, POST | Query, add, update and delete Runtime Policies
Placement Policy Management | /api/v3/policies/placement | GET, DELETE, OPTIONS, PATCH, POST | Query, add, update and delete Placement Policies
2.2.9 SLA Management
The SLA Management capabilities allow the management of Service Level Agreements (SLAs) and the Licensing of Service instances. SLAs define a set of key parameters (KPIs) that can be used to check whether a certain quality of service (QoS) level is delivered. SLA Templates can be defined with a set of KPIs and respective thresholds (e.g. reliability, bandwidth, latency, jitter, etc.), and associated with a Service (NSD). Any time a service is instantiated, an SLA Agreement is automatically created and attached to the NS instance. The thresholds are continuously verified (via monitoring), raising violation alarms in case the limits are reached. Associated to SLAs are also licensing models (e.g. public, trial, private), which rule the utilization of the service. Tbl. 2.9 depicts the SLA and Licensing Management APIs.
Table 2.9: SLA and Licensing Management external APIs.
Feature | Base API | Methods | Description
SLA Template Management | /api/v3/slas/templates | GET, DELETE, OPTIONS, POST | Query, add and delete SLA Templates
SLA Agreements Management | /api/v3/slas/agreements | GET, DELETE, OPTIONS, POST | Query, add, update and delete SLA Agreements
SLA Violations Query | /api/v3/slas/violations | GET, OPTIONS | Query Violations occurred for a given NS and SLA Agreement
Licensing Management | /api/v3/slas/licenses | GET, DELETE, OPTIONS, POST | Query, add, update and delete SLA-based Licenses
SLA Configurations Query | /api/v3/slas/configurations | GET, OPTIONS | Query SLA miscellaneous configurations (flavours, SLOs)
2.2.10 Infrastructure Management
The Infrastructure Management capabilities allow the management of infrastructure, either VIMs (Virtualized Infrastructure Managers), within the scope of a PoP (Point of Presence), or WIMs (WAN Infrastructure Managers), on the WAN network segments outside PoPs. VIMs and WIMs can be configured in order to make the Service Platform use this infrastructure. Tbl. 2.10 depicts the Infrastructure Management APIs.
Table 2.10: Infrastructure Management external APIs.
Feature | Base API | Methods | Description
VIM Management | /api/v3/settings/vims | GET, DELETE, PATCH, OPTIONS, POST | Query, add, update and delete VIMs
WIM Management | /api/v3/settings/wims | GET, DELETE, PATCH, OPTIONS, POST | Query, add, update and delete WIMs
2.2.11 Monitoring Management
The Monitoring Management capabilities allow the management of monitoring, setting and getting data related to VNFs and NSs. This monitoring data is collected by the Monitoring component and can be delivered by the Service Platform to external parties. In addition to raw data, the Service Platform can also deliver some data in a graphical view (charts). Tbl. 2.11 depicts the Monitoring Management APIs.
Table 2.11: Monitoring Management external APIs.
Feature | Base API | Methods | Description
Data Management | /api/v3/monitoring/data | GET, OPTIONS, POST | Query data from Monitoring
Graphics Management | /api/v3/monitoring/graphics | GET, OPTIONS, POST | Query graphics (charts) from Monitoring
3 Final Release Features
This section describes in detail the new features available in the Service Platform final release (SONATA Release 5.0), compared to the previous release (SONATA Release 4.0), the first produced in the scope of the 5GTANGO project. The final release will be the last version produced by the 5GTANGO project. Nevertheless, minor developments may be required until the end of the project, like minor bug-fixing or some customization work, in order to support the development of the three pilots by the end of the project.
The list of new features developed by 5GTANGO represents a significant evolution of the SONATA Service Platform. In particular, the introduction of the Kubernetes technology as a VIM is a huge step forward, considering its momentum in the industry, especially for edge ecosystems. The support of the Deployment Flavours (DF) concept is another significant advancement, in order to deliver service instances with different performance and Quality of Service (QoS) levels. QoS is in fact another relevant evolution, enabling the description and enforcement of different quality levels, both within the datacenter and in WAN segments. The Network Slicing support is also a key aspect, enabling the creation of Network Slices by combining ETSI NFV artifacts (NSs). Concerning infrastructures, new plug-ins were developed to deal with new infrastructures (T-API) and emulators (VIM-EMU), apart from the Kubernetes (k8s) support described above. Other improvements in the agility of the management and orchestration of resources with regards to Policies and SLAs, and the introduction of Licensing, are also relevant. Finally, the Portal has been extended, covering the management of the most relevant features.
3.1 Deployment Flavours
The flexible and on-demand creation of virtual resources is one of the most relevant advantages of virtualization. Today, cloud technologies like Openstack permit the creation of tailored virtual resources (e.g. virtual machines), customizing parameters such as the number of CPUs, memory, storage and network interfaces, to enable the adjustment to certain requirements. For example, when a virtual machine is created in Openstack, a flavour needs to be associated with it, indicating the above-mentioned resources. Other cloud platforms, like Azure or Amazon, have similar concepts.
The flavour concept can similarly be applied to ETSI NFV artifacts, namely Virtual Network Functions (VNFs) and Network Services (NSs). Flavours basically define different configurations used to deploy a VNF or a NS. The support of VNF/NS flavours can be useful, for example, to deploy VNFs/NSs with different levels of performance.
3.1.1 ETSI NFV
The flavouring concept has already been introduced by ETSI NFV under the name Deployment Flavours (DFs) in many documents, namely MAN001 [39], IFA011 [38], IFA014 [37] or SOL001 [43], just to name a few, which provide guidelines about how to implement it.
According to ETSI NFV IFA014 [37], a NS/VNF Deployment Flavour shall meet the following requirements:
Figure 3.1: ETSI NFV Deployment Flavour General [43].
• “Describe how many instances of each constituent VNF are
required.”
• “Reference a VNF flavour to be used for each constituent
VNF.”
• “Enable describing affinity and anti-affinity rules between the different instances of a constituent VNF.”
• “Enable describing affinity and anti-affinity rules between
the constituent VNFs.”
• “Enable referencing a VL flavour to be used for each VL
connected to its constituent VNFs.”
• “Enable describing affinity and anti-affinity rules between the different instances of a constituent VL.”
• “Enable describing affinity and anti-affinity rules between
the constituent VLs.”
• “Support the capability to describe dependencies.”
Using DFs, the deployment of a VNF requires the indication of the flavour ID, in addition to the VNFD, in order to select the right resources and topology to deploy, as Fig. 3.1 from [43] depicts. The same applies to the deployment of NSs.
According to ETSI NFV SOL001 [43], A.2, “The idea of VNF deployment flavour (. . . ) is that each deployment flavour describes the required vduProfiles and virtualLinkProfiles (. . . )” and “(. . . ) describe each deployment flavour as a standalone implementable TOSCA service template (. . . )”. In addition, “(. . . ) different deployment flavours can define different topologies of the same VNF, with different scaling aspects, different VDUs and different internal connectivity.”.
Fig. 3.2 depicts this model defined in [43], where “the templates describe two variants of the VNF each corresponding to a deployment flavour: a simple one and a complex one. The simple VNF consists of one server: a DB backend whereas, the complex VNF variant consists of minimum three DB backend servers and one serviceNode, which may be scaled out in one-size increments.”
ETSI NFV describes a DF-aware VNF as a single top-level VNFD and multiple lower-level VNFDs. The top-level VNFD is an abstracted view of the VNF, containing all the common information, such as general VNF information and, eventually, parts of the lifecycle management interface definition. The lower-level VNFD part is a list of concrete VNF descriptions of the entire set of VDUs and the network topology. When it comes to a VNF deployment, the VNF Manager (VNFM) needs to look up the lower-level information and the flavour id of the request, and then deploy it normally, as if it was a regular VNF.
Figure 3.2: ETSI NFV Deployment Flavour Concept [43].
3.1.2 5GTANGO Approach
The implementation of DFs developed in 5GTANGO was mainly triggered by the pilots under development in WP7. Those pilots require different levels of performance and Quality of Service (QoS), which can only be provided by deploying different sets of VDUs, different network topologies and the respective QoS parameters (bandwidth, latency, etc.).
The 5GTANGO DF model is aligned with the ETSI NFV standards and applies both to VNFs and NSs, allowing for a flexible and comprehensive service design. For compatibility reasons, VNFDs and NSDs still support old descriptors, describing a flavour with a single infrastructure, topology, monitoring rules, scaling rules, etc. This flavour is the default, and is used when no particular flavour id is requested on instantiation or when the requested flavour id does not exist. In addition, VNFDs and NSDs can define a list of flavours.
3.1.2.1 VNFs
Traditional VNFDs follow a well-known format (see Appendix A (Sec. A.1.1) for a full example).
Using DF-aware VNFDs, the YAML root nodes remain untouched, which ensures backward compatibility. In addition, a new YAML deployment flavour node is created with the list of DFs inside, using exactly the traditional syntax and semantics.
Figure 3.3: 5GTANGO VNF Deployment Flavour Implementation.
# existing
# existing, repeated for each *DU
# existing
# existing
# new, repeated for each flavour to be defined
- name: "flavour_name" # new
# new, but similar to existing, repeat for each *DU
# new, but similar to existing
# new, but similar to existing
With this feature, VNFDs can be described in multiple ways: (1) DF-unaware VNFD, the traditional way, not using the deployment flavour node; (2) DF-aware VNFD, defining a list of flavours using the deployment flavour node; and (3) Mixed VNFD, defining a VNF traditionally and DF-aware at the same time, where the traditional description is considered the default one, used when no flavour id is passed or the passed one does not exist.
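A mixed VNFD of type (3) could then be sketched as below. The node and field names (virtual_deployment_units, resource_requirements, deployment_flavours) are illustrative assumptions here; the exact 5GTANGO syntax is the one shown in Appendix A (Sec. A.1.2).

```yaml
# Minimal sketch of a mixed (default + flavoured) VNFD.
# Field names are assumptions for illustration; see Appendix A for the real schema.
name: "example-vnf"                      # hypothetical VNF name
virtual_deployment_units:                # traditional (default) description
  - id: "vdu01"
    resource_requirements:
      cpu: {vcpus: 1}
      memory: {size: 2, size_unit: "GB"}
deployment_flavours:                     # new node: list of DFs
  - name: "high"                         # selected by passing this flavour id
    virtual_deployment_units:
      - id: "vdu01"
        resource_requirements:
          cpu: {vcpus: 4}
          memory: {size: 8, size_unit: "GB"}
```

Instantiating this VNF without a flavour id (or with an unknown one) deploys the default description; passing "high" deploys the larger VDU instead.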
Fig. 3.3 depicts this model, where a VNFD is described by generic information (name, version, author, description) and the traditional resources and topology are specified by the deployment units, virtual links and connection points nodes. In addition to that, 3 other DFs are described (A, B and C) and associated with different flavour ids. This VNFD is stored in the Catalogue as usual during on-boarding and, when an instantiation occurs, the MANO Framework will determine which of the 3 flavours (or the default) will be deployed. Appendix A (Sec. A.1.2) shows a full example of a DF-aware VNFD with two different flavours (High, Low) and the default one.
The definition of different flavours for different levels of performance and QoS is the main driver for the utilization of DFs in 5GTANGO. However, using 5GTANGO DFs, other use cases can be approached. As an example, a VNF can define two flavours for different VIM technologies, e.g. Openstack and Kubernetes, and be deployed in different PoPs by indicating the flavour to be used. We can even think of VNF migration scenarios, e.g. from Core to Edge, where a VNF is moved from a VM-based environment to a container-based one in a smooth manner. This example shows the potential of the DF technology.
3.1.2.2 NSs
Traditional NSDs have the following format (see Appendix A (Sec.
A.1.3) for a full example).
# existing
- vnf_id: "id_VNF"
...
Using DF-aware NSDs, the YAML root nodes remain untouched, which ensures backward compatibility. In addition, a new YAML deployment flavour node appears to specify the list of Deployment Flavours inside, using exactly the same syntax and semantics, but just within the deployment flavour node and not in the root. Furthermore, it becomes possible to indicate, for each VNF, the flavour to be deployed. For example, to create a Gold NS, it probably makes sense to use Gold VNFs, and so on. Note that the names do not need to match, as the VNF and NS developers can be different entities and have different scales of values; only a mapping between them is needed.
# existing
# existing
- vnf_id: "id_VNF" # existing
... # existing
vnf_flavour: "VNF_Flavour_name" # new, optional, the VNF flavour to use
# existing
# existing
# existing
# new, repeated for each flavour to be defined
- name: "name" # new
# new, but similar to existing
- vnf_id: "VNF_id" # new, but similar to existing
... # new, to indicate the VNF flavour to deploy
vnf_flavour: "VNF_Flavour_name" # new, optional, the VNF flavour to use
# new, but similar to existing
# new, but similar to existing
# new, but similar to existing
Using a similar approach as for VNFs, different NS flavours can be built by using a different number of VNFs (and respective flavours), topologies, forwarding graphs, etc. Fig. 3.4 shows this model graphically. Like for VNFs, there is a default NS description and 3 flavoured ones, ensuring backward compatibility. Appendix A (Sec. A.1.4) shows a full example of a DF-aware NSD with two different flavours (Gold, Silver) and a default one.
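Putting the skeleton above together, a DF-aware NSD might look like the following minimal sketch. The vnf_flavour field is the one introduced above; the remaining node names (network_functions, etc.) are illustrative assumptions, with the exact syntax given in Appendix A (Sec. A.1.4).

```yaml
# Minimal sketch of a DF-aware NSD (illustrative field names).
name: "example-ns"
network_functions:            # default description, used when no flavour id matches
  - vnf_id: "vnf_firewall"    # hypothetical constituent VNF
deployment_flavours:
  - name: "gold"
    network_functions:
      - vnf_id: "vnf_firewall"
        vnf_flavour: "high"   # maps to a flavour defined in that VNF's VNFD
```

Note how the NS flavour name ("gold") and the referenced VNF flavour name ("high") need not match; the NSD only needs to map one to the other.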
3.2 Network Slicing
While in the 5GTANGO first release the concept of Network Slice was introduced, in this final release the Network Slice Manager component manages a more complete concept of a Network Slice, with the following new features:
Figure 3.4: 5GTANGO NS Deployment Flavour Implementation.
• Network Service Composition within a Network Slice
• Network Service Sharing
• Quality of “Slice” (Quality of Service within a Network
Slice)
While developing the new features, some updates were necessary in order to enhance the Network Slice management. The two main changes were:
• Communications with the Gatekeeper component - In the first release, when a user requested a Network Slice instance, the internal communications were synchronous, which obliged the user to wait until each one of the NS instantiations composing the slice was done. Given the above mode of operation, the wait time blocked the rest of the Portal operations until the composition was concluded and, secondly, there were no updates available in the GUI until the process ended. In order to address this situation, the internal communications between the Network Slice Manager and the Gatekeeper were made asynchronous, allowing the user to create multiple slices in parallel, without the necessity of waiting for any previous slice to be ready before instantiating another slice. Thus, the user can now monitor the current status of the Network Slice instantiation/termination procedure.
• Network Slice Objects Data Models - Due to the previously described communication model, and together with the new features to be described, the other important internal improvement is the update of the data models defining the Network Slice Template/Instance descriptors/records (NSTd and NSIr) of the objects created. In the first release, the NSTd contained only the basic Network Slice information and the Network Services description list called "slice_ns_subnets". From now on, as Appendix D (Sec. A.4) shows, the new data models contain not only this information, but also the networks description list called "slice_vld", used by the "Network Service Composition" feature. On the other hand, new parameters were added to the NSIr to provide more detailed information and a better management of this kind of Network Slice objects: for example, the "sliceCallback", to manage the updates coming from the Gatekeeper due to the asynchronous communications, or the "datacenter", to know where the slice is deployed. Moreover, and similarly to the NSTd, the network service instances list has more detailed information, and a new list with the Virtual Links (networks created to link the network services among them) was added.
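The shape of the updated NSIr can be pictured with the following sketch. Only "sliceCallback" and "datacenter" are quoted from the data model above; the remaining field names and values are illustrative assumptions, with the real record defined in Appendix D (Sec. A.4).

```yaml
# Hypothetical NSIr excerpt, for illustration only.
nsir:
  id: "nsir-0001"
  nst_ref: "nstd-0001"                        # NSTd this instance was created from
  status: "INSTANTIATING"                     # becomes "INSTANTIATED" once all NSs are ready
  sliceCallback: "<gatekeeper-callback-url>"  # receives the asynchronous status updates
  datacenter: "vim-pop-1"                     # where the slice is deployed
  nsr_list:                                   # per-NS status, tracked individually
    - nsr_id: "nsr-a"
      status: "INSTANTIATED"
  vld_list:                                   # Virtual Links created for the slice
    - name: "mgmt"
      vim_net_id: "net-1234"
```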
Figure 3.5: Diagram of the previously presented Network Slice Template.
3.2.1 Network Service Composition within a Network Slice
One of the main objectives of a Network Slice Manager is to offer better services to the OSS/BSS, by creating a more complete and efficient service deployment through a Network Slice composed of individual NSs (similar to what is done with VNFs within a NS), as an end-to-end Network Slice. To compose the Network Slice, it is necessary to link and allow traffic between the Virtual Machines (VMs) or containers hosting each one of the VNFs composing the multiple NSs of a Network Slice. These links are defined in the NSTd as "slice-vld" and they follow the same concept as the VLDs within a NSD. Each "slice-vld" within the NSTd defines a link to which all the necessary NSs (and so their VNFs) might be connected. When deploying a Network Slice, each "slice-vld" becomes a link that must be created in a Virtualized Infrastructure Manager (VIM), in case no multiple VIMs are involved. Otherwise, a WIM and multiple links might be needed.
If the NSTd presented in the previous JSON structure is deployed, the result would be the creation of the Network Slice presented in Fig. 3.5. This Network Slice is composed of three NSs and three different virtual links (VIM networks) called: mgmt, data west and data east.
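The composition in Fig. 3.5 can be sketched as the following NSTd fragment. The NS and NSD identifiers are hypothetical and the field names are simplified assumptions; the complete data model is the one in Appendix D (Sec. A.4).

```yaml
# Hypothetical NSTd excerpt matching Fig. 3.5 (three NSs, three slice links).
nstd:
  name: "example-slice"
  slice_ns_subnets:
    - id: "ns-1"
      nsd_ref: "nsd-uuid-1"     # hypothetical NSD reference
    - id: "ns-2"
      nsd_ref: "nsd-uuid-2"
    - id: "ns-3"
      nsd_ref: "nsd-uuid-3"
  slice_vld:                    # link names taken from Fig. 3.5
    - name: "mgmt"
      ns_refs: ["ns-1", "ns-2", "ns-3"]
    - name: "data west"
      ns_refs: ["ns-1", "ns-2"]
    - name: "data east"
      ns_refs: ["ns-2", "ns-3"]
```

Each entry of slice_vld becomes a network created in the VIM, and the referenced NSs are attached to it during deployment.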
3.2.1.1 Instantiation Flow
In order to reach the full deployment of any Network Slice, multiple actions are performed within the SONATA SP environment. Fig. 3.6 shows the whole process, from the moment a request is made in the Portal until everything is ready and the slice-mngr notifies the GTK and updates the databases.
The whole flow is divided into 6 steps:
• Step 1: Network Slice Request. The user sends a request (action 1 in Fig. 3.6) to instantiate a slice (based on an available NSTd) from the Portal, which is received and forwarded (2) by the Gatekeeper towards the Network Slice Manager (from now on called "slicer"). When the slicer receives the request, it processes it to create a NSIr object based on the NSTd referenced in the incoming information (at this moment, the next phases start in parallel to the current one). Once the object is saved in the database, the slicer answers back to the Gatekeeper (and this one, to the Portal) that the request has been accepted and is being carried on (4, 5).
• Step 2: Network Services Placement. While the first phase is in process, the slicer (main processor) starts a parallel sub-processor (from now on, lcm-slicer), in which the whole instantiation lifecycle management of the slice is handled. Initially, the lcm-slicer has to check if the VIM (or VIMs) registered in the Service Platform have enough resources to instantiate all the services within the desired Network Slice. To do this, the lcm-slicer requests
the information from the Gatekeeper, and this one from the Infrastructure Abstraction (IA) component (6, 7). The IA sends back (forwarded by the Gatekeeper) all the current information (8, 9) and the lcm-slicer decides (10) whether it is possible to deploy the slice (enough available resources) and where (all in one VIM, or different VIMs if one does not have enough resources). If there are no resources at all, then the lcm-slicer notifies the GTK with an error message and updates the NSIr with an error status and the corresponding information.
• Step 3: Network Slice Networks Creation. If there are enough resources, the lcm-slicer assigns an id to each one of the networks defined within the NSIr (11) and requests the IA (through the Gatekeeper) to create all the necessary networks in the right VIM (12 - 15).
• Step 4: Network Slice Services Deployment. Once the networks are all completed and ready to be used, the lcm-slicer starts sending a request to the Gatekeeper to deploy (into the right VIM) each of the network services defined in the NSIr. While there is only a single request to create all the networks, here each NS has its own request (16 - 24), to allow an easier tracking of each service status, due to their individual deployment times. Then, the Gatekeeper forwards (18) each request to the MANO component, which is in charge of the service lifecycle management and, together with the IA, of stitching the NSs into the right network (19 - 22). As previously said, each NS has its own deployment time and so, updated information (regarding the status) might come at any moment. While this is happening, the lcm-slicer (the sub-processor in charge of the slice lifecycle) waits until all the services have the status "Instantiated". For this condition to be fulfilled, update messages for each NS will come (25 - 29) and, once all NSs have their status as "Instantiated", the Network Slice is ready.
• Step 5: WIM Enforcement. In case the placement is done in different VIMs, this step becomes necessary. The idea is basically to create a WAN link between PoPs, in order to keep control of the data flow within the slice and to allow the deployment of Network Services in PoPs placed near the final user. Sec. 3.5 describes in detail how a WAN segment can be created using the T-API plug-in.
• Step 6: Network Slice Notification. Once the whole process is done (networks created, services deployed) and the status of the NSIr is "Instantiated", the Network Slice can be considered completed and so, the slicer can finally notify the Gatekeeper (30) about it, so this component can finish its internal procedures (closing connections, etc.).
3.2.2 Network Service Sharing
Even though each slice is independent from the other slices, we cannot forget that one of the objectives of SDN/NFV is to improve resource efficiency. In this context, and in alignment with the previous feature, the Network Slice Manager must have the ability to manage NSs that can be shared among multiple Network Slices.
For example, let us imagine there are two different NSTds (NSTd 1 and NSTd 2), both of them using the exact same NS, as Fig. 3.7 shows. Inside any NSTd, each description of the selected NSs (subnets) composing the Network Slice has a field (called "is-shared") that determines whether that NS can be shared or not. A NS can only be shared between Network Slices if this field is defined as "True" in both NSTds; otherwise, the two resulting NSIrs will not share any NS.
Following the previous example, the two different NSTds (NSTd 1 & NSTd 2) are composed of three NSs each: NS A, NS shared and NS B for NSTd 1, and NS C, NS shared and NS D for NSTd 2.
Figure 3.6: Network Slice Instantiation process.
Figure 3.7: Diagram with two Network Slice Instantiations
sharing a Network Service.
They both have in common the usage of one NS (NS shared), and in each NSTd this NS is defined to be shared ("is-shared" = True):
• When NSTd 1 is instantiated (resulting in NSIr 1), the instantiation process follows the service composition procedure described in the previous section and all networks and NSs are deployed into the VIM.
• Similarly, when NSTd 2 is instantiated (resulting in NSIr 2), the Network Slice Manager follows the exact same instantiation procedure, but with one difference. Once it finds out that NS shared can be shared, it checks the NSIr database for any NSIr (in the example, NSIr 1) with the same shared NS (NS shared) already instantiated. If so, the Network Slice Manager requests the instantiation of only the "not shared" NSs and uses the information of the already instantiated shared NS to fill in the section regarding the shared NS within NSIr 2.
Regarding the termination procedure, only the last remaining active NSIr will request the termination of the shared NS, while the others will simply not request anything and mark this NS inside their NSIr data objects as "TERMINATED".
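The two templates from the example above can be sketched as follows. Only the "is-shared" field is taken from the data model described in this section; the subnet list layout and identifiers are illustrative assumptions.

```yaml
# Hypothetical excerpts of the two NSTds from the example (subnet lists only).
nstd_1:
  slice_ns_subnets:
    - id: "NS_A"
      is-shared: false
    - id: "NS_shared"
      is-shared: true    # sharable with any other slice that also marks it "True"
    - id: "NS_B"
      is-shared: false
nstd_2:
  slice_ns_subnets:
    - id: "NS_C"
      is-shared: false
    - id: "NS_shared"
      is-shared: true    # reuses the NS instance created for nstd_1's slice
    - id: "NS_D"
      is-shared: false
```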
3.2.3 Quality of “Slice”
It is well known that Quality of Service (QoS) is an important concept that cannot be forgotten, as it defines the performance of any service. As Network Slices are basically a set of Network Services, it is mandatory to keep the QoS concept in mind and to check how it can be applied at the Network Slice level.
Each Network Slice contains a Network Slice type, which specifies the type of service that is supported. These are the available Network Slice types:
• Enhanced Mobile Broadband slice (eMBB)
• Ultra Reliable Low Latency Communications slice (URLLC)
• Massive Machine Type Communications slice (mMTC)
Each Network Slice will include the list of necessary NSs (which might deploy a 5G core) and the SLA level per network service.
3.2.4 Contributions to OSM
The Network Slicing concept, design and implementation described in this section have recently been accepted by ETSI OSM as contributions to RELEASE FIVE (Network Slice basic concepts and NS composition feature) and RELEASE SIX (sharing NSs among Network Slices feature). Because of the particular OSM software organization, the code was not used "as is": additional developments were required for this adaptation (performed by 5GTANGO project partners). This clearly shows the quality and acceptance of the Network Slicing work developed by 5GTANGO.
3.3 Attaching Ingress and Egress Service Endpoints
In the final release, the 5GTANGO Service Platform provides full support for the attachment of a network service to its service endpoints, if defined. While not relevant for all network services, this