GENERALISED ARCHITECTURE FOR DYNAMIC INFRASTRUCTURE SERVICES
Large Scale Integrated Project Co‐funded by the European Commission within the Seventh Framework Programme Grant Agreement no. 248657 Strategic objective: The Network of the Future (ICT‐2009.1.1)
Start date of project: January 1st, 2010 (36 months duration)
D5.1.1 Test‐bed implementation update
Version 1.0
Due date: 30/06/12
Submission date: 15/08/12
Deliverable leader: TID
Author list: José Ignacio Aznar, Luis Miguel Contreras, Juan Rodríguez, Amanda Azañón, Sergio Martínez de Tejada (TID), Bartosz Belter, Damian Parniewicz, Łukasz Łopatowski, Jakub Gutkowski, Artur Binczewski (PSNC), Eduard Escalona (UESSEX), Monika Antoniak‐Lewandowska, Łukasz Drzewiecki (TP), Giada Landi, Giacomo Bernini, Nicola Ciulli, Gino Carrozo, Roberto Monno (NXW), Attilio Borello, Sébastien Soudan (Lyatiss), Jens Buysse, Chris Develder (IBBT), Jordi Ferrer Riera, Joan A. Garcia‐Espin, Steluta Gheorghiu, Sergi Figuerola (i2CAT).
Dissemination Level
PU: Public
PP: Restricted to other programme participants (including the Commission Services)
RE: Restricted to a group specified by the consortium (including the Commission Services)
CO: Confidential, only for members of the consortium (including the Commission Services)
Abstract
This document describes the current status of the testing environment deployed in GEYSERS, whose high‐level description was provided in deliverable D5.1 [REF 7], and which comprises the validated architecture and the prototypes developed within the project. It updates the status of the local test‐beds deployed by each involved partner and of the integration of the prototypes developed in WP3 and WP4, which aim to validate the whole architecture through tests that evaluate its features and operability. This document also provides the specification of the GEYSERS Demonstrators and matches them to the test‐cases described in deliverable D1.5 [REF 1]; this matching covers most of the test‐cases demonstrating the benefits of the GEYSERS architecture. Moreover, this document describes the development plan for each of the GEYSERS Demonstrators, along with a detailed description of their physical environment and of the virtualization planning for the physical resources available at the end of M30.
Table of Contents
1 INTRODUCTION
2 GEYSERS GLOBAL TEST‐BED DESIGN
2.1 Introduction
2.2 GEYSERS local test‐beds updates
2.2.1 Lyatiss local test‐bed
2.2.2 University of Essex local test‐bed
2.2.3 IRT/ALU‐I local test‐bed
2.2.4 PSNC local test‐bed
2.2.5 TID local test‐bed
2.2.6 i2CAT local test‐bed
2.2.7 IBBT local test‐bed
2.2.8 UVA local test‐bed
2.3 Interconnection capabilities between local test‐beds update
2.3.1 Data plane inter‐connections update
2.3.2 Control Plane/Management Plane Update
3 REFERENCE SCENARIO FOR JOINT PROVISIONING OF NETWORK AND IT RESOURCES
3.1 Reference test‐bed
3.2 Network and IT resource virtualization
3.2.1 IT Resource Virtualization
3.2.2 Network Resource Virtualization
3.2.3 Connecting Virtual IT Resources and Virtual Networks in a Virtual Infrastructure
3.3 Virtual infrastructure provisioning
4 GEYSERS TEST‐BED DEMONSTRATORS
4.1 Introduction
4.2 DEMONSTRATOR 1
4.2.1 Introduction and description
4.2.2 Physical test‐bed provided for this Demonstrator
4.2.3 Physical resources planning and virtualization
4.2.4 Virtualized infrastructure of this Demonstrator
4.3 DEMONSTRATOR 2
4.3.1 Introduction and description
4.3.2 Physical test‐bed provided for this Demonstrator
4.3.3 Physical resources planning and virtualization
4.3.4 Virtualized infrastructure of this Demonstrator
4.4 DEMONSTRATOR 3
4.4.1 Introduction and description
4.4.2 Physical test‐bed provided for this Demonstrator
4.4.3 Physical resources planning and virtualization
4.4.4 Virtualized infrastructure of this Demonstrator
4.5 DEMONSTRATOR 4
4.5.1 Introduction and description
4.5.2 Physical test‐bed provided for this Demonstrator
4.5.3 Physical resources planning and virtualization
4.5.4 Virtualized infrastructure of this Demonstrator
5 CONCLUSIONS AND NEXT STEPS
6 REFERENCES
Figure Summary
Figure 1: General local test‐beds overview
Figure 2: Lyatiss local test‐bed details
Figure 3: University of Essex local test‐bed details
Figure 4: Interoute and ALU‐I local test‐bed details
Figure 5: PSNC's local test‐bed details
Figure 6: Calient DiamondWave FiberConnect partitioning
Figure 7: ADVA FSP 3000 R7 DWDM ring
Figure 8: ADVA FSP 3000 RE‐II DWDM ring
Figure 9: TID local test‐bed details
Figure 10: i2CAT local test‐bed details
Figure 11: IBBT local test‐bed details
Figure 12: UVA local test‐bed details
Figure 13: The Control/Management Plane topology in the GEYSERS test‐bed
Figure 14: Reference test‐bed relative to GEYSERS' adopted roles
Figure 15: GEYSERS reference test‐bed implementation
Figure 16: A Virtualization‐enabled PIP with a Datacentre Virtualization Management (DCVM) solution
Acronyms and Abbreviations
AAI Authentication and Authorization Infrastructure
API Application Program Interface
C‐VLAN Client VLAN
CAPEX Capital Expenditure
CCI Connection Controller Interface
CPE Customer Premises Equipment
CPU Central Processing Unit
DCVM Datacentre Virtualization Management
DWDM Dense Wavelength Division Multiplexing
EIS Enterprise Information System
FS Fibre Switching
FSC Fibre Switching Capable
GLIF Global Lambda Infrastructure Facility
GMPLS Generalized Multi‐Protocol Label Switching
GRE Generic Routing Encapsulation
IP Internet Protocol
L2VPN Layer 2 VPN
LAN Local Area Network
LICL Logical Infrastructure Composition Layer
LSC Lambda Switching Capable
LSP Label Switched Path
MEMS Micro Electro‐Mechanical System
MLI Management to LICL Interface
MPLS Multiprotocol Label Switching
NAT Network Address Translation
NCP Network Control Plane
NFS Network File System
NIPS Network + IT Provisioning System
OPEX Operational Expenditure
OSGi OSGi Alliance (formerly Open Services Gateway Initiative)
PCE Path Computation Engine
PIP Physical Infrastructure Provider
PIP‐IT Physical Infrastructure Provider – IT
PIP‐N Physical Infrastructure Provider – Network
Q‐in‐Q Ethernet networking standard formally known as IEEE 802.1ad
RAM Random Access Memory
ROADM Reconfigurable Optical Add Drop Multiplexer
ROI Return on Investment
S‐VLAN Service VLAN
SCN Signalling Communication Network
SDH Synchronous Digital Hierarchy
SLI SML to LICL Interface
SML Service Middleware Layer
TDM Time Division Multiplexing
UNI User to Network Interface
VI Virtual Infrastructure
VIO Virtual Infrastructure Operator
VIO‐IT Virtual Infrastructure Operator – IT
VIO‐N Virtual Infrastructure Operator – Network
VIP Virtual Infrastructure Provider
VLAN Virtual Local Area Network
VM Virtual Machine
vNIC Virtual Network Interface Card
VPN Virtual Private Network
VR Virtual Resource
WSDL Web Services Description Language
1 INTRODUCTION
This document describes the current status of the test‐bed deployed in GEYSERS, as introduced in deliverable D5.1 [REF 7], by presenting the status of the local test‐beds deployed by each involved partner and the integration of the prototypes developed in WP3 and WP4. The focus of these two work packages is on validating the entire architecture through specific tests used to evaluate its features and operability.
The main goal of this deliverable is to provide a complete specification of the test‐bed facilities available to GEYSERS partners for implementing the GEYSERS architecture in the test environment and validating it with a set of well‐defined procedures, as part of the GEYSERS Demonstrators. The deliverable reports how the test‐bed resources have been allocated and partitioned to satisfy all Demonstrators' needs.
The document is composed of the following sections:
Section 2 presents the details of local test‐beds provided by each partner. This section includes the deployment of local
control planes for test‐bed infrastructures, which are responsible for the configuration and provisioning of particular
infrastructure layers. Local test‐bed infrastructures include optical layer 1 hardware, layer 2 switching capabilities, and
various IT hardware, including computational and storage facilities.
Section 3 describes a reference test‐bed and explains how joint provisioning of network and IT resources is performed. It shows conceptually how PIP resources are virtualized through the collaboration of GEYSERS‐specific modules and the agreed virtualization tools (e.g. OpenNebula, KVM). The technology offered by each telecom service provider, the tools used by each IT provider, and the role of the SML/LICL and NCP+ in the provisioning of IT and network services are also explained in this theoretical use case. The joint virtualization of resources is a key step from the lower layer (PIP) to the upper layers of the value chain (VIP and VIOs), with the requested services exercising the features and operations of the GEYSERS prototypes.
Section 4 provides information on the GEYSERS Demonstrators, which have been selected to validate and demonstrate the key features of the GEYSERS architecture. The matching of the GEYSERS Demonstrators to the test‐cases described in deliverable D1.5 [REF 1] covers most of the test‐cases demonstrating the benefits of the GEYSERS architecture. This section also provides a detailed description of each Demonstrator's physical environment and of how the project plans to use these physical resources. The description of each of the four identified Demonstrators includes details of the physical test‐bed provided, the joint virtualization planning carried out for the IT and network resources, and the resulting virtualized scenarios.
2 GEYSERS GLOBAL TEST‐BED DESIGN
2.1 Introduction
This section describes the physical local test‐bed infrastructure (including both network and IT resources) deployed by each partner in order to perform the validation procedures on the GEYSERS prototypes developed in WP3 and WP4. The local test‐bed facilities have been distributed among the GEYSERS Demonstrators, which match the use cases and scenarios presented in deliverable D1.5 [REF 1]. Local test‐beds include optical layer 1 hardware, layer 2 switching capabilities, and various IT hardware, including computational and storage facilities. This section updates the test‐bed implementations described in deliverable D5.1 [REF 7], providing a higher level of detail on the specific devices, ports, connectivity, etc.
Figure 1 shows the global picture of the local test‐beds integration. A more detailed description of the infrastructure in
each local test‐bed is given in the remainder of this section. This section concludes with a description of the
interconnections between the local test‐beds which includes both data plane and control/management plane
capabilities.
Figure 1: General local test‐beds overview
2.2 GEYSERS local test‐beds updates
2.2.1 Lyatiss local test‐bed
Figure 2: Lyatiss local test‐bed details
The Lyatiss test‐bed is composed of IT resources: two Dell R210 servers installed in the IN2P3 (Lyon) datacentre, with KVM as the hypervisor. Demonstrators can deploy Virtual Machines (VMs) and the required GEYSERS software stack on these servers.
The technical details of each server are as follows:
The two following figures show the LSC domain of the PSNC test‐bed. Figure 7 presents four interconnected ADVA FSP 3000 R7 devices. Three of them are equipped with client cards, which are connected through the Calient and MLX‐8 boxes to the remote test‐beds. The fourth box, located in the middle, consists of three 8ROADM modules (one for each direction). This configuration enables up to three 1 Gbit unprotected tunnels using two lambdas: one for the WCA2G5 card and one for the 4TCA (TDM) card.
Figure 8 introduces the remaining part of the LSC domain at the PSNC test‐bed. The three legacy ADVA FSP 3000 RE‐II devices are connected to the rest of the GEYSERS test‐bed in the same manner as described above. The available resources allow for the creation of a DWDM ring with three lambdas, as shown in the picture.
The former technology will be used mainly by Demonstrator 2, while the latter will be used by Demonstrator 4.
Figure 7: ADVA FSP 3000 R7 DWDM ring
Figure 8: ADVA FSP 3000 RE‐II DWDM ring
2.2.5 TID local test‐bed
The TID local test‐bed comprises both IT and network resources. The IT resources consist of two servers: a Dell PowerEdge and an HP DL380. The first is dedicated to IT resources, while the second hosts the Parent‐PCE and Child‐PCE servers, the GMPLS+ controllers, the NIPS server, and the Lower‐LICL modules. OpenNebula will be installed on the front‐end, and there is also an image repository where VM templates are stored. The network resources consist of four ADVA FSP 3000 ROADMs, among which several 1 Gbit LSPs have been established in the data plane. The Lower‐LICL module will manage these network resources to provide virtualized network resources to the global test‐bed. Juniper routers give clients access to the ROADMs, while Riverstone switch‐routers constitute the gateways to the global GEYSERS test‐bed for both the data and signalling planes.
Figure 9: TID local test‐bed details
2.2.6 i2CAT local test‐bed
The test‐bed in i2CAT is composed of both IT and network resources. One SuperMicro Server will be used for deploying
the LICL software, and a SunFire x2200 server will be used for installing OpenNebula. An additional SunFire x2200
together with a Dell PowerEdge 850 server will act as IT resources controlled by the LICL. The network resources consist
of three W‐Onesys Proteus devices, which make use of two Transmode TM‐301 boxes as supporting equipment to
provide connectivity to the test‐bed.
Several network devices (two Cisco Catalyst switches, one Alcatel 6850 and two SMC aggregator devices) provide the data plane connectivity and the connectivity to other GEYSERS test‐beds, while the SCN connectivity is ensured by an Allied Telesis switch.
Figure 10: i2CAT local test‐bed details
2.2.7 IBBT local test‐bed
IBBT provides six nodes with the following specifications:
‐ 12 GB memory,
‐ 2 x Intel Xeon E5620 CPUs @ 2.40 GHz: a total of 8 cores and hence 16 threads per machine,
‐ 160 GB local storage,
‐ a shared 500 GB NFS volume between the nodes, located on a 1 Gb linked iSCSI server.
One of the servers will host the OpenNebula installation and act as the NFS server, as sketched below. All other nodes will have a KVM installation.
Figure 11: IBBT local test‐bed details
2.2.8 UVA local test‐bed
The UvA test‐bed (Figure 12) provides a cloud machine (Dell R810: 48 cores, 128 GB RAM, 600 GB storage) for on‐demand computation and storage resources. These resources are provided by means of an OpenNebula installation and the software necessary to deploy the Lower‐LICL. The test‐bed provides access via two GLIF lightpaths, to i2CAT and to UEssex. The Dell R810 will be used to provide IT resources (computing and storage) for Demonstrators 1, 3 and 4.
Figure 12: UVA local test‐bed details
2.3 Interconnection capabilities between local test‐beds update
2.3.1 Data plane inter‐connections update
This sub‐section presents the updates to the data plane inter‐connections. Only the changes in connectivity between the partners' local test‐beds with respect to deliverable D5.1 [REF 7] have been included; the remaining data plane inter‐connections can be found in that document.
Local test‐bed IRT/ALU – Local test‐bed Lyatiss
Technology: Gigabit Ethernet over SDH
Infrastructure: Interoute private fibre
Status: Operative
Capacity: 1 Gbps
VLAN transmission: Yes, 1 VLAN (Q‐in‐Q available on the Lyatiss site)
Notes: Waiting for end‐to‐end test between IT resources.
Table 1: IRT/ALU – Lyatiss
Local test‐bed i2CAT – Local test‐bed TID
Technology: 10GbE
Infrastructure: RedIRIS
Status: Operative
Capacity: 10 Gbps
VLAN transmission: Yes, 1 VLAN (ID 688)
Notes: ‐
Table 2: i2CAT – TID
Local test‐bed UESSEX – Local test‐bed TID
Technology: GEANT Plus Service
Infrastructure: UESSEX – Janet – GEANT – IRIS – TID
4.4 DEMONSTRATOR 3
4.4.1 Introduction and description
The objective of this Demonstrator is to dynamically adjust the available compute, storage and network resources for an Enterprise Information System (EIS) based on the demand from the EIS users. The intention is to motivate methods of dynamically synchronising the scaling of IT infrastructure and network capacity. The number of EIS users can vary up and down, as can the frequency and size of their requests. SAP leads this Demonstrator and has implemented a workload and payload generator to emulate changing user and transaction loads on a virtual infrastructure. Two alternatives are to be compared: in the first, the infrastructure performs the scaling on behalf of the EIS, synchronising the scaling of the virtual infrastructure (compute and network) against the user load; in the second, the EIS performs and coordinates the scaling itself using the SML's Request API. In both cases, the benefits in terms of cost and the overall "satisfaction" of the EIS provider's operational objectives, achieved without disrupting the application users' service level objectives and experience, are assessed.
An Enterprise Information System (EIS) is a query‐intensive application‐server and database system that serves numerous concurrent users. In a dynamic EIS, the number of users and the frequency of queries change, such that the network and resource demand changes. The capacity of the network and the computational power of the servers involved should reflect this demand without being over‐provisioned. Since the objective is to scale the virtual resources of a virtual infrastructure serving an EIS up and down based on the user load, the required input data is that needed to generate load on the IT resources as well as on the network. Moreover, it is necessary to state the expectations of the users in order to validate the response of the architecture to the changing user loads. This data can be obtained from empirical analysis of an existing EIS or from existing benchmarks for enterprise information systems, including analytics and business information warehouses. There are also commercial and open‐source load generation tools that can be used to simulate changing user loads and to generate payloads to pass over the network. This can be randomised data or test data from a real‐world system; the latter might be difficult given the sensitivity of sample data from real systems. The experiment should show scaling under different types of load conditions. Load can be created from the client of the system under study or from the busyness of the servers dealing with requests from other clients in parallel.
The EIS application consists of multiple components, each of which is an OSGi bundle. They can be deployed in one container or distributed over multiple containers; in this report, a completely distributed approach where each component runs in its own container is assumed. Each component offers multiple services that are transparently discovered by the underlying OSGi framework. Furthermore, a WSDL is generated for each component so that external applications can communicate with the components as well. A minimal sketch of this component model follows; the workflow for the EIS scaling experiments is shown in Figure 38 below.
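The following is a minimal, hypothetical sketch of this pattern using the standard OSGi framework API. The QueryService interface and its in‐line implementation are invented for illustration and are not part of the actual EIS code.

```java
// Minimal sketch of how one EIS component could expose a service in OSGi.
// The QueryService contract and its trivial implementation are hypothetical.
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class EisComponentActivator implements BundleActivator {

    /** Hypothetical service contract offered by this EIS component. */
    public interface QueryService {
        String execute(String query);
    }

    @Override
    public void start(BundleContext context) {
        // Register the service; other bundles (or a remote-services layer
        // exporting it via the generated WSDL) can look it up by interface.
        context.registerService(QueryService.class,
                query -> "result-for:" + query, null);
    }

    @Override
    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered automatically.
    }
}
```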
Figure 38: Experimental Workflow for Demonstrator 3
The first stage is to initialise and set up the virtual infrastructure required for the EIS scaling, including the monitoring probes and controllers required in the infrastructure to implement dynamic scaling. The experiment then continues as a series of iterations:
(1) Starting with an initial request size and a simulation of concurrent instances, the first step is the creation of the EIS requests from the client machines to the VM hosting the EIS compute instance.
(2) The request send frequency varies over time, as do the size and nature of the payload. The EIS server is implemented to support these different requests and seeks to handle them with a constant response time.
(3) The locally generated loads at VM1 are varied in order to emulate shared infrastructure conditions at the server.
(4) Each request includes a query to be submitted to the EIS database. These are expected to execute within a response time threshold, although the size and frequency of requests varies. A response is generated to show that the request has been handled and the time at which it was received.
(5) The response generated at the server is returned to the client machine. The response should contain its send time, its receive time at the server, the processing time and the return time recorded when stored at the client.
(6) The client aggregates and stores all responses so that they can be traced back to the original requests; a minimal sketch of such a client follows this list.
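The following is a minimal Java sketch of the client‐side behaviour described in steps (1)–(6). The endpoint URL and the size/frequency schedules are illustrative only; the real experiments use the SAP workload and payload generator instead.

```java
// Minimal sketch of the experiment client: send requests of varying size at a
// varying frequency and record round-trip timings for later aggregation.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;

public class EisLoadClient {
    record Sample(long sendTime, long receiveTime, int payloadBytes) {}

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        List<Sample> samples = new ArrayList<>();

        for (int i = 0; i < 100; i++) {
            int size = 1024 * (1 + i % 8);              // (1) vary request size
            String payload = "x".repeat(size);          // randomised stand-in data
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://eis-vm.example/query")) // illustrative endpoint
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

            long sent = System.currentTimeMillis();
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            samples.add(new Sample(sent, System.currentTimeMillis(), size)); // (6)

            Thread.sleep(50L + (i % 10) * 20L);         // (2) vary send frequency
        }

        // Aggregate stored samples into a simple metric.
        double avgRtt = samples.stream()
            .mapToLong(s -> s.receiveTime() - s.sendTime()).average().orElse(0);
        System.out.println("Average round-trip time: " + avgRtt + " ms");
    }
}
```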
The experiment is repeated for a number of iterations in three different scenarios, using infrastructures that are
increasingly complex:
‐ Scenario 1: reliance on a best‐effort network and a static infrastructure configuration that does not vary with load. It might also be possible to implement an application‐layer load balancer that starts up new instances of the EIS server on new VMs when increased load is detected.
‐ Scenario 2: over‐ or median‐ provisioned resources based on a calculation of the maximum set of resources that
would be required. The load should hence never exceed the available resources, although there is excess
available during periods of low load.
‐ Scenario 3: dynamic scaling of the network infrastructure according to the load on the EIS. This is the main
scenario of the Demonstrator, showing the capabilities of the GEYSERS architecture.
It is predicted that Scenario 3 will show advantages over Scenarios 1 and 2 in terms of cost and effectiveness. However, Scenario 3 is the most difficult of the three configurations to implement, since it is based on the novel mechanisms to be developed within GEYSERS. The local physical test‐beds required do not differ from those described for Demonstrators 1 and 2.
4.4.2 Physical test‐bed provided for this Demonstrator
The same physical test‐bed will be used for all scenarios in the Demonstrator, since the aim is to compare the results
across scenarios. The physical test‐bed uses resources from three IT resource providers (Lyatiss, UvA and TP) and one
network resource provider (TID). The three IT resource providers are used to emulate a multi‐cloud, where multiple
providers are used to make physical hosts available for virtual workloads. In the following, the details about the physical
resources allocated for this Demonstrator are presented:
Lyatiss plays the role of a primary cloud provider in the Demonstrator, enabling the creation of VMs for EIS instances.
Their CloudWeaver appliance is used behind the Lower‐LICL to manage the creation of the initial VMs, as well as
instances required dynamically in Scenario 3 of the Demonstrator.
UvA is also a cloud provider in the Demonstrator, but referred to as a “secondary provider”, whereby their IT resources
are engaged when there is a need for load balancing or redirection. UvA uses OpenNebula as the resource manager
behind the Lower‐LICL.
TP acts as the composite or multi‐cloud provider in this Demonstrator, playing the role of the VIP. TP's resources are hence used to install the Upper‐LICL, which requires connectivity to the Lower‐LICLs at Lyatiss and UvA. TP will also host the NCP+ and SML for the scenario, as it is reasonable to assume that the same organisation will act as both VIP and VIO.
TID is the network provider used to connect all three sites. This enables control over both the scaling of IT servers and
the network resources used in the Demonstrator.
Figure 39: Physical Infrastructure for Demonstrator 3
4.4.3 Physical resources planning and virtualization
Table 12 lists the usage of the physical infrastructure for the three scenarios, while Figure 40 illustrates the topology and usage of the GEYSERS components in the scenarios.
NET PHY Infrastructure:
‐ Scenario 1 "Best Effort": TID provides access to its optical infrastructure, consisting of four ADVA FSP 3000 F7 switches, configured only with VLAN tagging.
‐ Scenario 2 "Median Provisioning": TID is still used, but the bandwidth of the VLAN is capped at a predetermined median limit.
‐ Scenario 3 "Dynamic Scaling": TID configures its switches to support the dynamic scaling scenario.
IT PHY Infrastructure and Lower‐LICL Controllers:
‐ Scenario 1: Lyatiss: two Dell R210 servers, one used for the load balancer and the other for the EIS application instance(s); the availability of probes for CPU, RAM and power usage is also assumed.
‐ Scenario 2: Lyatiss: the same resources as in Scenario 1, but with more EIS instances started initially. TP: the same traffic load is generated.
‐ Scenario 3: Lyatiss: the same resources as in Scenarios 1 and 2. UvA: DAS‐3 blade (UvA acts as the secondary resource provider). TP: in addition to the traffic generator, the test‐bed server, an HP Proliant DL580 G7, will be used.
Machines to deploy NCP+ controllers:
‐ Scenarios 1 and 2: N.A.
‐ Scenario 3: TP: the current plan is to use TP's server for the NCP+, as TP implements the VIP; if this proves inefficient during trials, UvA's servers will be used as an alternative.
Machines to deploy Upper LICL (all scenarios): TP is used as the primary site for deploying the Upper‐LICL; UvA is used in case the planned capacity is unavailable.
Machines to deploy SML (all scenarios): TP is used as the primary site for deploying the SML; UvA is used in case the planned capacity is unavailable.
Table 12: Demonstrator 3 – overview of physical resources
The distribution and topology of the resources described in Table 12 are shown in Figure 40. SAP also provides a VM representing the application provider and end‐user perspective on the Virtual Infrastructure.
Figure 40: Topology of Infrastructure for Demonstrator 3
4.4.4 Virtualized infrastructure of this Demonstrator
The Virtual Infrastructure should appear as a single domain to the application provider (Figure 41). The IT resources for
the EIS and load generators are deployed on VMs in different physical domains but connected by the same network
provider (TID).
Figure 41: Virtual infrastructure for Demonstrator 3 (release date 30‐05‐2012). The figure shows the virtual network nodes N1–N4 provided by TID, and the virtual IT nodes IT1/TP (load generator), IT2/Lyatiss (load balancer and primary EIS instances), IT3/UvA (secondary EIS instances) and IT1/SAP (experiment control).
4.5 DEMONSTRATOR 4
4.5.1 Introduction and description
Demonstrator 4 is aimed at the demonstration of advanced network and IT management functionalities, with a specific
focus on network infrastructure re‐planning. The virtual infrastructure re‐planning service is offered by the Virtual
Infrastructure Provider (VIP) to the Virtual Infrastructure Operator (VIO), allowing the VIO to request the modification,
up‐scaling or down‐scaling (e.g. upgrade of link capabilities, modification of network topology) of the leased virtual
infrastructure in order to optimise network resource utilisation. The Demonstrator will evaluate and compare manual and dynamic methods for VI re‐planning, providing usage examples for both.
Two main GEYSERS architecture components, the LICL and the NCP+, take part in this Demonstrator. The LICL provides the virtual infrastructure re‐planning capabilities, whereas the NCP+ is responsible for automatically or manually triggering the re‐planning actions and for adapting to the dynamic infrastructure changes.
The physical infrastructure provided for Demonstrator 4 should allow for testing and demonstration of all re‐planning capabilities defined for this Demonstrator (a hypothetical sketch of such an interface follows the list):
‐ Link bandwidth re‐planning: the VIO increases or decreases the bandwidth of a virtual link,
‐ Connectivity re‐planning: the VIO adds virtual connectivity between any two nodes,
‐ Link re‐planning: the VIO removes a virtual link,
‐ Node re‐planning: the VIO removes a virtual node.
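The actual re‐planning interfaces are defined in the GEYSERS architecture specifications (see [REF 2], [REF 3]). Purely as an illustration of the four capabilities above, a hypothetical Java interface could look as follows; the names and signatures are invented for this sketch.

```java
// Hypothetical sketch (NOT the actual GEYSERS interface specification) of the
// four re-planning operations a VIO could invoke against the VIP.
public interface ViReplanningService {

    /** Link bandwidth re-planning: scale a virtual link up or down. */
    void setLinkBandwidth(String virtualLinkId, long bandwidthMbps);

    /** Connectivity re-planning: add a virtual link between two nodes. */
    String addVirtualLink(String nodeA, String nodeB, long bandwidthMbps);

    /** Link re-planning: remove an existing virtual link. */
    void removeVirtualLink(String virtualLinkId);

    /** Node re-planning: remove a virtual node and its attached links. */
    void removeVirtualNode(String virtualNodeId);
}
```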
4.5.2 Physical test‐bed provided for this Demonstrator
Demonstrator 4 will be deployed over a subset of the overall GEYSERS test‐bed, composed of five different sites (PSNC, Lyatiss, IRT/ALU‐I, UvA, TP). Most of the sites are interconnected via GÉANT and local NRENs. In this section, the local test‐beds are briefly described and the overall physical infrastructure for GEYSERS Demonstrator 4 is presented.
PSNC local test‐bed resources – The PSNC local test‐bed (Figure 42) is composed of three ADVA FSP 3000 RE‐II devices and one Calient DiamondWave FiberConnect switch, which are managed by the GEYSERS software. All tributary ports of the ADVA DWDM system are connected to the Calient switch, which performs both supporting actions (five ports configured manually) and operational actions (four ports configured by the GEYSERS software). Additionally, the Calient switch connects the PSNC optical equipment to a supporting Brocade NetIron MLX 8000 switch responsible for establishing VLAN Q‐in‐Q interconnections with the Lyatiss, UvA and TP test‐beds. The last piece of supporting equipment within the PSNC test‐bed is an IBM System x3550 M3 server, which will host the Lower‐LICL and NCP+ VMs.
Figure 42: PSNC test‐bed for Demonstrator 4
Lyatiss local test‐bed resources – Lyatiss deploys an IT‐only local test‐bed (Figure 43), composed of two Dell R210 servers installed in the IN2P3 (Lyon) datacentre. The hypervisor adopted is KVM; VMs and the GEYSERS software stack can be deployed as needed. Additionally, a Cisco ME 3600‐X is used for establishing VLAN dot1Q interconnections with the IRT/ALU‐I test‐bed, and VLAN Q‐in‐Q interconnections with the PSNC test‐bed.
Figure 43: Lyatiss test‐bed for Demonstrator 4
IRT/ALU‐I local test‐bed resources – IRT and ALU‐I jointly deploy a local test‐bed for Demonstrator 4 (Figure 44), composed of an optical network node and an IT server. IRT essentially provides connectivity towards the Lyatiss test‐bed in Lyon over its optical network, and offers public IP Internet access and co‐location services for the ALU‐I equipment; all the equipment is installed at IRT's premises in Milan Caldera, Building C. The Cisco 1841 router acts as a CPE and is deployed as an IRT standard solution for managed public IP Internet access. The remaining equipment is provided by ALU‐I. The optical network node is an Alcatel‐Lucent 1850 TSS‐160, a packet‐optical transport switch of the Alcatel‐Lucent TSS product family. The IT server is an HP Z600 workstation, which will run several VMs plus a Lower‐LICL instance. A Cisco 3600 router is also deployed by ALU‐I for the remote management of its equipment.
Figure 44: IRT/ALU‐I test‐bed for Demonstrator 4
UvA local test‐bed resources – The UvA test‐bed (Figure 22) provides a cloud machine (Dell R810: 48 cores, 128 GB RAM, 600 GB storage) for on‐demand computation and storage resources. These resources are provided by means of an OpenNebula installation and the software necessary to deploy the Lower‐LICL. The test‐bed provides access via two GLIF lightpaths, to i2CAT and to UEssex.
TP local test‐bed resources – The TP local test‐bed (Figure 45) includes an HP Proliant DL580 G7 that will be used to provide a set of physical IT resources. These resources will be considered, in combination with the physical network resources of the PSNC test‐bed, as a physical domain owned by a single PIP. This means that the overall set of resources located in the PSNC and TP test‐beds will be managed by a single instance of a Lower‐LICL (instantiated on the PSNC servers).
Figure 45: TP test‐bed for Demonstrator 4
The overall test‐bed for GEYSERS Demonstrator 4 is depicted in Figure 46, where the interconnections between local test‐beds are annotated with their VLAN tag values. A single VLAN value implies the use of the 802.1Q standard with the given tag value. A double VLAN value (i.e. S‐VLAN, C‐VLAN) implies the use of the 802.1ad (Q‐in‐Q) standard with the given s‐tag and c‐tag values; a sketch of the tag stacking is given below. The PSNC–UvA data link is established using UEssex as an intermediate hop. Similarly, the PSNC–IRT data links are established via a statically configured hop at Lyatiss.
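As a side note on the tagging formats mentioned above, the following sketch shows how a single 802.1Q tag differs from an 802.1ad (Q‐in‐Q) double tag at the frame level; the VLAN IDs used are illustrative, not the actual tag values of Figure 46.

```java
// Minimal sketch of 802.1Q vs. Q-in-Q (IEEE 802.1ad) tagging: a single tag
// uses TPID 0x8100, while a double tag stacks an S-VLAN tag (TPID 0x88A8)
// in front of the C-VLAN tag. Tag values below are illustrative only.
import java.nio.ByteBuffer;

public class VlanTags {
    /** Writes a 4-byte tag: 16-bit TPID followed by the TCI (PCP/DEI/VID). */
    static void putTag(ByteBuffer frame, int tpid, int vlanId) {
        frame.putShort((short) tpid);
        frame.putShort((short) (vlanId & 0x0FFF)); // PCP=0, DEI=0, VID=vlanId
    }

    public static void main(String[] args) {
        ByteBuffer header = ByteBuffer.allocate(64);
        // ... 6-byte destination and 6-byte source MAC addresses go here ...
        header.position(12);

        putTag(header, 0x88A8, 100);     // outer S-VLAN tag (802.1ad)
        putTag(header, 0x8100, 200);     // inner C-VLAN tag (802.1Q)
        header.putShort((short) 0x0800); // EtherType of the payload (IPv4)
    }
}
```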
Figure 46: Overall test‐bed for Demonstrator 4
4.5.3 Physical resources planning and virtualization
Physical resources provided for the Demonstrator 4 are provided by five independent organisations: PSNC, Lyatiss,
IRT/ALU‐I, TP and UvA. For this reason, five Physical Infrastructure Providers (PIPs) have been defined, which deploy
Lower‐LICL software. PSNC and IRT/ALU‐I are managing the network infrastructure as PIP‐N. Lyatiss, TP and UvA are
managing the IT infrastructure as PIP‐IT. Lower‐LICL software offers part of the managed physical infrastructure to the
Virtual Infrastructure Provider (VIP). In this Demonstrator, TP is performing the role of the VIP and deploys Upper‐LICL
software, responsible for creating a single virtual infrastructure composed of logical resources offered by all five PIPs.
Finally, PSNC is using a given virtual infrastructure as VIO and deploys enhanced Network Control Plane (NCP+). It is
important to note that during these scenarios two actors (PSNC and TP) play two roles at the same time: PSNC is a PIP‐N
and VIO, TP acts as a PIP‐IT and VIP.
Figure 47 presents the organisational structure of the GEYSERS‐defined roles performed by the organisations participating in Demonstrator 4, together with the GEYSERS software elements installed at each organisation and the physical infrastructure (hardware equipment and data links) controlled by the GEYSERS software. Physical resources are shown in logical form: the supporting equipment (such as switches) is not shown. It is used to interconnect the IT resources (servers), but does not play an active role in the GEYSERS environment. This equipment can therefore be treated as part of a physical link: from the point of view of the users (the VIP, as the entity that actually creates a VI, and the VIO, as the organisation that actually uses it), the physical connection method is not important, since it is not visible to them.
IT resource virtualization refers primarily to segmenting and isolating portions of the CPU, memory and storage of a physical server, such that a different guest operating system can be run on each portion.
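As an illustration of this kind of segmentation, the following is a minimal sketch using the libvirt Java bindings [REF 10] against a KVM hypervisor [REF 11]; the domain XML is heavily trimmed and illustrative only.

```java
// Minimal sketch: carving out a CPU/memory slice of a physical server for a
// guest OS on a local KVM hypervisor, via the libvirt Java bindings.
import org.libvirt.Connect;
import org.libvirt.Domain;

public class DefineGuest {
    public static void main(String[] args) throws Exception {
        Connect conn = new Connect("qemu:///system"); // local KVM hypervisor

        // Heavily trimmed, illustrative domain definition; a real guest would
        // also declare disks, network interfaces, etc.
        String domainXml =
              "<domain type='kvm'>"
            + "  <name>geysers-guest</name>"
            + "  <memory unit='MiB'>1024</memory>"   // memory slice
            + "  <vcpu>2</vcpu>"                     // CPU slice
            + "  <os><type arch='x86_64'>hvm</type></os>"
            + "</domain>";

        Domain guest = conn.domainDefineXML(domainXml); // persist definition
        guest.create();                                 // boot the guest
    }
}
```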
Figure 47: Demonstrator 4 – Business role assignment and GEYSERS components deployment
Each organisation taking part in Demonstrator 4 in a PIP‐N, PIP‐IT, VIP or VIO role has to deploy some components of the GEYSERS software. Table 13 presents the supporting infrastructure requirements for the software deployment. Each software component is provided in the form of a VM, which has to be installed on a physical server located at the particular organisation. Each physical infrastructure is based in a different geographical location.
Node | Equipment | Site | Role
Network node 5 | Alcatel‐Lucent 1850 TSS‐160 | IRT | Port switching
IT node 1 | Dell R210 (Intel X3430) | Lyatiss | Multimedia server
IT node 2 | ? | UvA | Multimedia client
IT node 3 | HP Proliant DL580 G7 | TP | Multimedia server
Table 14: Demonstrator 4 virtual nodes
5 CONCLUSIONS AND NEXT STEPS
This document is an update of the test‐bed high‐level specification provided in deliverable D5.1 [REF 7]. This deliverable, D5.1.1, offers a detailed description of the local test‐beds deployed by each involved partner, for both the data and control planes, and of the interconnections between them. The specification of the local test‐bed capabilities enables the precise identification of the devices, capabilities and roles that each partner contributes to each of the Demonstrators, with part or all of its local test‐bed infrastructure. These local test‐beds constitute the basis for the definition and deployment of the Demonstrators.
The provisioning of IT and network resources constitutes one of the key outcomes shown in this document. Physical resource providers (PIPs) integrate GEYSERS modules and virtualization tools into their local test‐beds. The virtualization planning is carried out by segmenting the physical resources, resulting in better utilisation of these resources by means of the GEYSERS modules and tools. These virtualized infrastructures are then ready to be managed by VIPs. Openness to different virtualization technologies, achieved through the approach of physical resource adapters, is one of the emergent capabilities of the GEYSERS framework: it permits the lower levels of the stack that implement the virtualization functions to be selected from the wide range of commercial and open‐source technologies available today, further extending the value‐chain market. On the other hand, the virtualization of optical nodes can be achieved by partitioning or aggregation, while optical links (i.e. network resources) are virtualized taking into account the bandwidth granularity or the switching technology to be used.
Demonstrators constitute the focus of WP5. A key outcome of the document is the exhaustive definition of each of the Demonstrators, matching the use cases and scenarios described in deliverable D1.5 [REF 1], which show the added value of the GEYSERS capabilities. Each Demonstrator focuses on different technical items showing the potential applicability of the GEYSERS architecture and technology:
‐ exploitation of physical devices through virtualization mechanisms, taking into account scalability and optimal resource utilisation;
‐ the benefits of the LICL and NCP+ for managing and operating the virtualized resources and infrastructures in the network;
‐ SML and NCP+ interactions to provide the VIOs with enhanced mechanisms to offer dynamic services (unicast, restricted anycast, etc.) specifically tailored to application requirements upon on‐demand requests.
For each Demonstrator, the physical resources corresponding to the local test‐beds, the virtualization planning and the resulting virtualized infrastructures have been defined. By means of the Demonstrator scenarios, the document shows both the physical and the virtualized infrastructure, together with the management processes carried out by PIPs, VIPs and VIOs.
Future steps in WP5 will focus on finalising the deployment of the local test‐beds and inter‐connecting them to other partners' local test‐beds in order to validate the final development of the Demonstrators. The story‐line to show the GEYSERS architecture functionalities and performance within each Demonstrator is currently being refined; it will guide the process of showing all of GEYSERS' capabilities, technical solutions and modules developed in the context of the project.
In parallel, the software and hardware integration activities will continue. The results of the integration activities will feed the GEYSERS Demonstrators with the integrated software stack and enable experiments in a real networking environment. At least two face‐to‐face meetings are planned in the forthcoming period to finalise the integration work between the software components being developed in WP3 and WP4, taking into account the specification of the cross‐layer interfaces detailed in WP2.
Further steps in WP5 will build on the achievements of WP2, WP3 and WP4, and feedback based on the validation results will be provided to the technical WPs to refine both the architecture design and the prototype implementations.
In the context of the cross WP5‐WP6 interaction, new dissemination opportunities related to the development carried out in WP5 will be studied. In the forthcoming months, closer collaboration is expected with the industrial partners involved in both work packages, to identify and detail the exploitation plan and to study the impact of implementing potential use cases in the local test‐beds of selected industrial GEYSERS stakeholders.
The final version of the integrated GEYSERS software stack will be prepared in WP5 and shared with external users for
download and dissemination. WP5 has been supporting the development work packages, i.e. WP3 and WP4, in making a
decision on the software licensing scheme to be used in the project. The shared stack placed on the public GEYSERS web
site in collaboration with WP6 will reflect the actual decision on the software licensing scheme made by the software
developers of each module of the GEYSERS project.
6 REFERENCES
[REF 1] GEYSERS‐D1.5: Collected Use Cases and Scenarios. Available at: http://wiki.geysers.eu/images/8/8b/GEYSERS_D1.5_final.pdf
[REF 2] GEYSERS‐D2.2 – Update: GEYSERS overall architecture & interfaces specification and service provisioning workflow, May 2011. Available at: http://wiki.geysers.eu/images/3/38/WP2_GEYSERS_D2.2_update.doc
[REF 3] GEYSERS‐D2.6: Refined GEYSERS architecture, interface specification and service provisioning workflow. Available at: http://wiki.geysers.eu/images/7/7f/D2.6‐final.pdf
[REF 4] GEYSERS‐D3.2: Preliminary LICL Software release, January 2012. Available at: http://wiki.geysers.eu/index.php/D3.2
[REF 5] GEYSERS‐D3.3: LICL Sub‐systems release, April 2012. Available at: http://wiki.geysers.eu/images/b/b2/GEYSERS_D3.3_v1.0.docx
[REF 6] GEYSERS‐D4.1: GMPLS+/PCE+ Control Plane architecture, November 2010. Available at: http://wiki.geysers.eu/images/2/29/GEYSERS_WP4_D4.1_v1.0.doc
[REF 7] GEYSERS‐D5.1: GEYSERS test‐bed Implementation, July 2011. Available at: http://wiki.geysers.eu/images/b/b4/GEYSERS‐D5.1_v1.6‐final.pdf
[REF 8] OpenNebula: http://OpenNebula.org/
[REF 9] CloudWeaver: http://www.lyatiss.com/resources/
[REF 10] Libvirt: http://libvirt.org/
[REF 11] KVM: http://www.linux‐kvm.org/page/Main_Page
[REF 12] XEN: http://xen.org/
[REF 13] VMware: http://www.vmware.com/
[REF 14] IEEE 802.1Q: IEEE Standard for Local and metropolitan area networks – Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks. Available at: http://standards.ieee.org/findstds/standard/802.1Q‐2011.html