Deliverable D2.52 Planning of trials and evaluation - Final. Editor: G. Xilouris (NCSRD). Contributors: E. Trouva (NCSRD), E. Markakis, G. Alexiou (TEIC), P. Comi, P. Paglierani (ITALTEL), J. Ferrer Riera (i2CAT), D. Christofy, G. Dimosthenous (PTL), J. Carapinha (PTIN), P. Harsh (ZHAW), Z. Bozakov, D. Dietrich, P. Papadimitriou (LUH), G. Gardikis (SPH). Version 1.0. Date: Oct 30th, 2015. Distribution: PUBLIC (PU). Ref. Ares(2016)35932 - 05/01/2016
The validation, assessment and demonstration of the T-NOVA architecture as a complete end-to-end VNFaaS platform is critical for the success of T-NOVA as an Integrating Project. The aim is not only to present technical advances in individual components, but mainly to demonstrate the added value of the integrated T-NOVA system as a whole. To this end, the overall plan for the validation and assessment of the T-NOVA system, to take place in WP7, is mostly concentrated on end-to-end system-wide use cases.
The first step is the assembly of a testing toolbox, taking into account standards and trends in benchmarking methodology as well as industry-based platforms and tools for testing of network infrastructures. Another valuable input is the current set of guidelines drafted by ETSI for NFV performance benchmarking.
The next step is the definition of the overall T-NOVA evaluation strategy. The challenges in NFV environment validation are first identified, namely a) the functional and performance testing of VNFs, b) the reliability of the network service, c) the portability and stability of NFV environments, as well as d) the monitoring of the virtual network service. Then, a set of evaluation metrics is proposed, including system-level metrics (with focus on the physical system, e.g. VM deployment/scaling/migration delay, data plane performance, isolation etc.) as well as service-level metrics (with focus on the network service, e.g. service setup time, re-configuration delay, network service performance).
The specification of the experimental infrastructure is another necessary step in the validation planning. A reference pilot architecture is defined, comprising NFVI-PoPs with compute and storage resources, each one controlled by the VIM. NFVI-PoPs are interconnected over an (emulated) WAN (Transport Network), while overall management units (Orchestration and Marketplace) interface with the entire infrastructure. This reference architecture will be instantiated (with specific variations) in three integrated pilots (in Athens/Heraklion, Aveiro and Hannover, supported by NCSRD/TEIC, PTIN and LUH respectively), which will assess and showcase the entire set of T-NOVA system features. Other labs participating in the evaluation procedure (Milan/ITALTEL, Dublin/INTEL, Zurich/ZHAW, Rome/CRAT and Limassol/PTL) will focus on testing specific components/functionalities.
The validation plan is further refined by recalling the system use cases defined in D2.1 and specifying a step-by-step methodology, including pre-conditions and test procedure, for validating each of them. Apart from verifying the expected functional behaviour via well-defined fit criteria, a set of non-functional (performance) metrics, both system- and service-level, is defined for assessing the system behaviour under each UC. This constitutes a detailed plan for end-to-end validation of all system use cases, while at the same time measuring and assessing the efficiency and effectiveness of the T-NOVA architecture.
Last, in addition to use-case-oriented testing, a plan is drafted for testing each of the VNFs developed in the project (vSBC, vTC, vSA, vHG, vTU, vPXaaS). For each VNF, specific measurement tools are selected, mostly involving L3-L7 traffic generators, producing application-specific traffic patterns for feeding the VNFs. A set of test procedures is then described, defining the tools and parameters to be adjusted during the test, as well as the metrics to be collected.
In this context, the validation, assessment and demonstration of the T-NOVA solution on an end-to-end basis becomes critical for the success of T-NOVA as an Integrating Project. The aim is not only to present technical advances in individual components, but mainly to demonstrate the added value of the integrated T-NOVA architecture as a whole. To this end, the overall plan for the validation and assessment of the T-NOVA system, to take place in WP7, is mostly concentrated on end-to-end system-wide use cases, rather than on unit tests of individual components or sub-components, which are expected to take place within the respective implementation WPs (WP3-WP6).
The present deliverable describes the planning of the validation/experimentation campaign of T-NOVA, covering the assets to be involved, the tools to be used and the methodology to be followed. It is an evolved version of the initial report (D2.51), containing updates on the methodology to be adopted and the infrastructure to be used, as well as some amendments to the test cases. It is structured as follows: Chapter 2 overviews at a high level the overall validation and evaluation methodology framework, highlighting some generic frameworks and recommendations for testing network and IT infrastructures. Chapter 3 discusses the challenges associated with NFV environment validation and identifies candidate system- and service-level metrics. Chapter 4 describes the pilot infrastructures (on which the entire T-NOVA system will be deployed) as well as the testbeds which will be used for focused experimentation. Chapter 5 defines the validation procedures (steps, metrics and fit criteria) to be used for validating each of the T-NOVA Use Cases. Moreover, the procedures for assessing Virtual Network Function (VNF) specific scenarios are described. Finally, Chapter 6 concludes the document.
This section surveys the related standards- and industry-based methodologies available, as well as recommendations from the ETSI NFV ISG.
2.1. Standards-Based Methodologies Review
2.1.1. IETF
Within the IETF, the Benchmarking Methodology WG (bmwg) [BMWG] is devoted to proposing the necessary methodologies and performance metrics to be measured in a lab environment, so that they closely relate to the actual observed performance on production networks.
The group has proposed benchmarking methodologies for various types of interconnect devices. Although these tests are focused on physical devices, the main methodologies may as well be applied in virtualised environments for performance testing and benchmarking of VNFs. The most relevant identified RFCs are:
Additionally, the IETF IP Performance Metrics (ippm) WG [IPPM] has released a series of RFCs related to standard metrics that can be applied to measure the quality, performance, and reliability of Internet data delivery services and applications running over IP. Related RFCs are:
In addition to the above, the IRTF WG on NFV (NFVRG) has recently addressed the issue of NFV benchmarking, focusing mostly on online, ad-hoc VNF benchmarking and highlighting the problems arising from the deviation of the performance parameters defined as part of the VNF description (i.e. the VNFD [ETSI-NFV-1]) from the actual VNF behaviour while running. This is the topic of a recently proposed Internet-Draft [ID-ROSA15]. The authors propose an architecture for the provision of VNF Benchmarking as-a-Service integrated with the NFV architecture. From the T-NOVA point of view, related work items are the workload characterisation framework that has been developed in WP4, Task 4.1, which would allow the creation of the VNF profiles anticipated by the framework, as well as the monitoring framework that would be able to monitor in real time the performance metrics defined by the developer for the VNF. In addition, this framework has been proposed upstream to the OPNFV project Yardstick, along with the vTC VNF to be used as a proof of concept. OPNFV has accepted and will include the framework in the next OPNFV release (i.e. Brahmaputra) in February 2016.
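The throughput benchmark at the core of the bmwg methodologies (RFC 2544 style) determines the highest offered load at which the device under test forwards all frames without loss, typically via a binary search over the offered rate. The sketch below illustrates that search in plain Python; the `trial` callable is a hypothetical stand-in for one fixed-duration traffic trial (e.g. driven by a hardware or software traffic generator), not an interface from any specific tool.

```python
def find_throughput(trial, line_rate, resolution=0.01):
    """Binary-search the highest offered rate at which a trial reports
    zero frame loss (RFC 2544-style throughput).

    `trial(rate)` runs one fixed-duration test at `rate` (frames/s) and
    returns the observed loss fraction. Illustrative sketch only.
    """
    lo, hi = 0.0, 1.0          # search window, as fractions of line_rate
    best = 0.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if trial(mid * line_rate) == 0:   # no loss: try a higher rate
            best, lo = mid, mid
        else:                              # loss observed: back off
            hi = mid
    return best * line_rate


# Toy DUT model (assumption for the demo): drops frames above 600,000 fps.
capacity = 600_000
result = find_throughput(lambda rate: 0 if rate <= capacity else 0.1,
                         line_rate=1_000_000)
print(round(result))
```

Real benchmarks additionally fix frame sizes and trial durations per the RFC; the search logic itself is unchanged.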
2.1.2. ETSI NFV ISG
The ETSI NFV Industry Specification Group (ISG) completed Phase 1 of its work at the end of 2014 with the publication of 11 specifications. One of those specifications (ETSI GS NFV-PER 001 V1.1.1) is focused on NFV performance and on methodologies for the testing of VNFs [NFVPERF]. The aim is to unify the testing and benchmarking of various heterogeneous VNFs under a common methodology. For the sake of performance analysis, the following workload types are distinguished:
§ Control-plane workloads, which cover any other communication between Network Functions (NFs) that is not directly related to the end-to-end data communication between edge applications.
§ Signal processing workloads, which cover all NF tasks related to digital processing, such as the FFT decoding and encoding in a C-RAN BaseBand Unit (BBU).
ETSI NFV ISG Phase 2, spanning the period 2015/16, has continued the work of ETSI NFV Phase 1. In particular, the responsibilities of the TST working group (Testing, Experimentation and Open Source) include, among others, the development of specifications on testing and test methodologies. Two TST Work Items, currently under development, should be taken into account by WP7:
• “Testing Methodology; Report on NFV interoperability test methodology”: covers the analysis of the NFV interoperability methodology landscape and suggests a framework to be addressed [ETSI-NFV-TST002].
2.2. Industry benchmarking solutions
For the testing and validation of networks and network applications, several vendors have developed solutions for automated stress testing with a variety of network technologies and protocols ranging from L2 to L7. Among these, the most prominent are IXIA [IXIA] and Spirent [SPIRENT]. They both adopt standardised methodologies, benchmarks and metrics for the performance evaluation and validation of a variety of physical systems. Lately, due to the ever-increasing need for testing in the frame of NFV, they have also developed methodologies that address the need for benchmarking in virtualised environments.
2.2.1. Spirent
Spirent supports standards-based methodologies for NFV validation. In general, the methodologies used are similar to those applied to physical Devices Under Test (DUT). The functionalities and protocols offered by standard hardware devices also have to be validated in a virtual environment. VNF performance is tested against various data plane and control plane metrics, including:
§ Control plane metrics:
o States and state transitions for various control plane protocols;
o Control plane frames sent and received on each session;
o Control plane error notifications;
o Validation of control-plane protocols at high scale;
o Scaling up on one protocol and validating protocol state machines and data plane;
o Scaling up on multiple protocols at the same time and validating protocol state machines and data plane.
The above is a sample of a comprehensive set of control-plane and data-plane statistics, states and error conditions that are measured for a thorough validation of NFV functions.
2.2.2. IXIA
Ixia’s BreakingPoint Resiliency Score [IXIABRC] and the Data Center Resiliency Score set standards against which network performance and security (physical or virtual) can be measured. Each score provides an automated, standardized, and deterministic method for evaluating and ensuring resiliency and performance.
The Resiliency Score is presented as a numeric grade from 1 to 100. Networks and devices may receive no score if they fail to pass traffic at any point or if they degrade to an unacceptable performance level. The Data Center Resiliency Score is presented as a numeric grade reflecting how many typical concurrent users a data center can support without degrading to an unacceptable quality of experience (QoE) level. Both scores allow quick understanding of the degree to which infrastructure performance, security, and stability will be impacted by user load, new configurations, and the latest security attacks.
Functional and performance testing of network functions - In the general case where performance testing results are provided for end-user consumed network services, the primary concern is their application performance and the exhibited quality of experience. The view in this case is more macroscopic and does not delve into the protocol level or into the operation of e.g. BGP, routing or CDN functionalities. However, for the Operators, additional concerns exist regarding specific control plane and data plane behaviour; whether, for example, the number of PPPoE sessions, throughput and forwarding rates, and the number of MPLS tunnels and routes supported are broadly similar between physical and virtual environments. Testing must ensure that the performance of virtual environments is equivalent to that of the corresponding physical environment, and must provide the appropriate quantified metrics to support it.
Validating reliability of network service - Operators and users are accustomed to 99.999 percent availability of physical network services and will have the same expectations for virtual environments. It is important to ensure that node, link and service failures are detected within milliseconds and that corrective action is taken promptly without degradation of services. In the event that virtual machines are migrated between servers, it is important to ensure that any loss of packets or services is within the acceptable limits set by the relevant SLAs.
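The 99.999 percent ("five nines") availability target translates into a very tight downtime budget, which is what makes millisecond-level failure detection necessary. The short arithmetic below (plain Python, purely illustrative) makes the budget concrete:

```python
def downtime_budget(availability, period_hours):
    """Maximum allowed downtime (in minutes) for a given availability
    target over a period expressed in hours."""
    return period_hours * 60 * (1 - availability)

# "Five nines" over one year (365.25 days) allows only ~5.26 minutes of
# total downtime; over a 30-day month, roughly 26 seconds.
year = downtime_budget(0.99999, 365.25 * 24)
month = downtime_budget(0.99999, 30 * 24)
print(f"{year:.2f} min/year, {month * 60:.1f} s/month")
```

Any failover mechanism whose detection plus recovery time is measured in seconds consumes a noticeable fraction of this monthly budget per event.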
Ensuring portability of VMs and stability of NFV environments - The ability to load and run virtual functions in a variety of hypervisor and server environments must also be tested. Unlike physical environments, instantiating or deleting VMs can affect the performance of existing VMs as well as services on the server. In accordance with established policies, new VMs should be assigned the appropriate number of compute cores and storage without degrading existing services. It is also critically important to test the virtual environment (i.e. NFVI and VNFs), including the orchestrator and Virtual Infrastructure Management (VIM) system.
Active and passive monitoring of virtual networks - In addition to pre-deployment and turn-up testing, it is also important to monitor services and network functions on either an on-going, passive basis or an as-needed, active basis. Monitoring virtual environments is more complex than their physical equivalents because operators need to tap into either an entire service chain or just a subset of that service chain. For active monitoring, a connection between the monitoring end-points must also be created on an on-demand basis, again without degrading the performance of other functions that are not being monitored in that environment.
tenancy, e.g. time to deploy a new VNF instance, time to scale out/in, isolation between tenants, etc. On the other hand, validation objectives should be defined both from system and service perspectives, which are considered separately in the following sub-sections.
3.2.1. System level metrics
The system level metrics address the performance of the system and its several parts, without associating them to a specific NFV service. The following is a preliminary list of system level metrics to be checked for validation purposes. Although the overall system behaviour (e.g., performance, availability, security, etc.) depends on the several sub-systems or components, for evaluation purposes we are only interested in service high-level goals and the performance of the system as a whole.
• Performance under transient conditions
o Stall under transient conditions (e.g. VM migration, VM scale-out/in)
o Time to modify an existing virtual network (e.g. insertion of a new node, ...)
o Variability of data plane performance with the number of tenants sharing the same infrastructure resources
o Variability of control plane performance with the number of tenants sharing the same infrastructure resources
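The "variability with the number of tenants" metrics above need a numeric summary to be comparable across runs. One common choice is the mean and the coefficient of variation per tenant count, sketched below in Python; the sample figures are hypothetical and only illustrate the shape of the report, not measurements from the pilots.

```python
from statistics import mean, stdev

def variability_report(samples):
    """Summarise per-tenant-count throughput measurements.

    `samples` maps a tenant count to a list of throughput readings
    (e.g. Mbps). Returns, per tenant count, the mean throughput and the
    coefficient of variation (stdev/mean) - one way to express the
    'variability with the number of tenants' metric. Illustrative only.
    """
    report = {}
    for tenants, readings in sorted(samples.items()):
        m = mean(readings)
        cv = stdev(readings) / m if len(readings) > 1 else 0.0
        report[tenants] = (m, cv)
    return report

# Hypothetical measurements: throughput degrades and jitters as tenants
# sharing the same infrastructure resource increase.
data = {1: [940, 945, 950], 4: [890, 905, 870], 16: [700, 640, 760]}
for tenants, (m, cv) in variability_report(data).items():
    print(f"{tenants:>2} tenants: mean={m:.1f} Mbps, cv={cv:.3f}")
```

A rising coefficient of variation with tenant count is then direct evidence of weak performance isolation.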
3.2.2. Service level metrics
Service level metrics are supposed to reflect the service quality experienced by end users. Often, this kind of metric is used as the basis for SLA contracts between service providers and their customers.
• Time related metrics
o Time to start a new VNF instance (interval between submission of the request through the customer portal and the time when the VNF becomes up and running).
o Time to modify/reconfigure a running VNF (interval between submission of the reconfiguration request through the customer portal and the time when the modification is enforced).
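Both time metrics above can be captured with the same generic pattern: timestamp the request, then poll the VIM/orchestrator until the target state is reached. A minimal sketch follows; the `get_status` callable is a stand-in assumption for whatever status query the deployment exposes (e.g. polling the server state through the VIM), not T-NOVA's actual API.

```python
import time

def time_to_active(get_status, timeout=300.0, interval=1.0):
    """Return seconds elapsed from submission until `get_status()` first
    reports 'ACTIVE', or raise TimeoutError. `get_status` is a stand-in
    for the real VIM query used in a given deployment."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if get_status() == "ACTIVE":
            return time.monotonic() - start
        time.sleep(interval)
    raise TimeoutError("VNF instance did not become ACTIVE in time")

# Stub VIM for illustration: the instance becomes ACTIVE on the third poll.
states = iter(["BUILD", "BUILD", "ACTIVE"])
elapsed = time_to_active(lambda: next(states), interval=0.01)
print(f"VNF up and running after {elapsed:.2f} s")
```

The polling interval bounds the measurement resolution, so it should be chosen well below the delays being measured.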
Testing is a key phase in the software development and deployment lifecycle, which, if done properly, can minimize service disruptions upon deployment and release to the end users in a production environment. With an increasing number of services being deployed on the cloud, traditional testing mechanisms are quickly becoming inadequate. The reasons are obvious: traditional test environments are typically based on a highly controlled, single-tenant setup, while the cloud offers its benefits, albeit in a multi-tenant environment. Multi-tenancy does not only mean multiple users using the services in a shared resource model; it can also mean multiple applications belonging to the same user being executed within a shared resource model.
The situation becomes more challenging when network functions are transformed from the bundled hardware+software model to NFV deployment models over a virtualised infrastructure. In the T-NOVA project, in order to test NFV deployments in a provider's cloud environment, having a formal test methodology oriented to cloud environments assumes significant importance.
This section describes how a cloud-oriented testing approach can be applied in T-NOVA, focused on the cloud infrastructure and the deployed workloads, in addition to the system and service metrics described in the previous section.
The tests are to be conducted in two modes: one unconstrained, with no OpenStack scheduling hints allowed, and the other run with specific placement hints associated with the VM deployment Heat scripts.
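The two modes differ only in the placement properties handed to the scheduler. The sketch below builds such a property structure for the constrained mode; the group name is hypothetical, and the exact hint keys accepted by Nova/Heat depend on the deployed OpenStack release, so they should be checked against its documentation rather than taken from this example.

```python
def placement_properties(constrained, group="tnova-vnf-group",
                         policy="anti-affinity"):
    """Build the extra deployment properties for the two test modes:
    unconstrained (no hints - the scheduler is free to choose) versus
    constrained (explicit placement hints, here an anti-affinity server
    group). Key names are illustrative, not release-verified."""
    if not constrained:
        return {}                     # mode 1: no scheduling hints allowed
    return {"scheduler_hints": {"group": group, "policy": policy}}

print(placement_properties(False))
print(placement_properties(True))
```

Comparing the same workload metrics across the two property sets isolates the effect of placement from the effect of the workload itself.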
This chapter contains the description of the different test-beds involved in the T-NOVA project, as well as the description of the different pilots which will be used to perform all the testing and validation activities. The final deployment and infrastructure of the T-NOVA Pilots will be refined and presented during the WP7 activity, and more specifically in Task 7.1.
4.1. Reference Pilot architecture

In order to guide the integration activities, a reference pilot architecture is elaborated. A preliminary view of the reference architecture is illustrated in Figure 2. It corresponds to a complete implementation of the T-NOVA system architecture described in D2.22, including a single instance of the Orchestration and Marketplace layers and one or more NFVI-PoPs, each one managed by a VIM instance, interconnected by a (real or emulated) WAN infrastructure (core, edge and access).

The reference pilot architecture will be enriched as the T-NOVA implementations progress, and will be detailed and refined in order to finally present all the building blocks and components of the Pilot deployment. The architecture will be implemented in several pilot deployments, as detailed in the next section. However, in each pilot deployment, given the equipment availability and the specific requirements for Use Case validation, the reference architecture will be adapted appropriately. Starting the description from the bottom up, the Infrastructure Virtualisation and Management Layer includes:

• an execution environment that provides IT resources (computing, memory and storage) for the VNFs. This environment comprises i) Compute Nodes (CNs) based on x86 architecture commodity hardware without particular platform capabilities, and ii) enhanced CNs (eCNs) that are similarly based on x86 architecture commodity hardware, enhanced with particular data processing acceleration capabilities (i.e. DPDK, AES-NI, GPU acceleration).

• a Cloud Controller node (one per NFVI-PoP) for the management and control of the aforementioned IT resources, based on the OpenStack platform. The Liberty release is planned to be adopted for T-NOVA experimentation.

• an SDN Controller (one per NFVI-PoP), based on the recent version of the OpenDaylight platform, for the control of the virtualised network resources. The interaction and integration of the SDN controller with the OpenStack platform is achieved via the ML2 Plugin component provided by the Neutron service.

The integration of ODL with the OpenStack in-cloud network controller (Neutron) is achieved via the ML2 plugin. In this way, OpenStack is able to control the DC network through this plugin, while ODL controls the OVS instances via the OpenFlow protocol. An alternative is the OpenStack Provider Network deployment mode, with the caveat that network provisioning and tenant network support then need to be completely delegated to the NMS used by the NFVI-PoP, somewhat limiting the elasticity with respect to network provisioning.

The connectivity of this infrastructure with other deployed NFVI-PoPs is realised via an L3 gateway. As can be observed, in addition to the NFVI-PoP equipment, it is anticipated that an auxiliary infrastructure exists to facilitate the deployment of centralised components, such as the Orchestrator and the Marketplace modules.

4.2. T-NOVA Pilots
The Athens-Heraklion pilot will be based on a distributed infrastructure between Athens (NCSRD premises) and Heraklion (TEIC premises). The interconnection will be provided by the Greek NREN (GRNET). This facility is freely available to academic institutes, supporting certain levels of QoS. The idea behind this Pilot is to be able to demonstrate T-NOVA capabilities over a distributed topology with at least two NFVI-PoPs interconnected by pre-configured links. The setup is ideal for experimentation with NS and VNF deployment issues and performance, taking into account possible delays and losses in the interconnecting links. Additionally, this Pilot will offer to the rest of the WPs a continuous integration environment in order to allow verification and validation of the proper operation of all developed and integrated software modules.
In addition to a full-blown deployment of an NFVI-PoP (backbone DC), the accommodation of a legacy network domain (non-SDN) is also considered in the pilot architecture. This network domain will act as a Transport network, providing connectivity to other, simpler NFVI-PoPs. These PoPs will be deployed using an all-in-one logic, where the actual IT resources are implemented around a single commodity server (with limited capabilities, of course). However, the selection of the above topology is justified by the need to be able to validate and evaluate the Service Mapping components and to experiment with VNF scaling scenarios. With the NCSRD and TEIC infrastructures already interconnected via the Greek NREN (GRNET), it is fairly easy to interconnect them into a distributed Pilot for T-NOVA experimentation. This will provide the opportunity to evaluate NS composition and Service Function Chaining issues over a larger-than-laboratory testbed deployment, under close to 100% controllable conditions (depending on the SLA with our NREN).
TEIC plans to have a full T-NOVA deployment (i.e. including all the T-NOVA stack components), to be able to run local testing campaigns but also to participate in distributed evaluation campaigns along with the federated Athens Pilot.
4.2.2. Aveiro Pilot
4.2.2.1. Infrastructure and topology
PTIN's testbed facility is targeted at experimentation in the fields of Cloud Networking, network virtualization and SDN. It is distributed across two sites, PTIN headquarters and the Institute of Telecommunications (IT), both located in Aveiro, as shown in the figure below (Figure 5). The infrastructure includes OpenStack-based IT virtualized environments, an OpenDaylight-controlled OpenFlow testbed and a legacy IP/MPLS network domain based on Cisco equipment (7200, 3700, 2800). This facility has hosted multiple experimentation and demonstration activities in the scope of internal and collaborative R&D projects.
PTIN will be able to host all components of the NFV infrastructure. Distributed scenarios involving multiple NFVI-PoPs separated by a legacy WAN domain will also be easily deployed, taking advantage of the IP/MPLS infrastructure available at the lab.
4.2.3. Hannover Pilot
4.2.3.1. Infrastructure and topology
Future Internet Lab (FILab), illustrated in Figure 6, is a medium-scale experimental facility owned by the Institute of Communications Technology at LUH. FILab provides a controlled environment in which experiments can be performed on arbitrary, user-defined network topologies, using the Emulab management software.
o Intel Xeon X5675 six-core CPU at 2.66 GHz
o 6 GB DDR3 RAM at 1333 MHz
o 1 NIC with 2x 10 Gbps ports
o Interconnected by a Cisco NEXUS 5596 switch with 48x 10G ports
• 22 programmable NetFPGA cards
• 20 wireless nodes, and high-precision packet capture cards
• Various software packages for server virtualization (e.g., Xen, KVM), flow/packet processing (e.g., Open vSwitch, FlowVisor, Click Modular Router, Snort) and routing control (e.g., NOX, POX, XORP) have been deployed into FILab, allowing the development of powerful platforms for NFV and flow processing.
4.2.3.2. Deployment of T-NOVA components
The Hannover Pilot will be set up as an NFV PoP for the evaluation of selected components of the T-NOVA orchestrator. More specifically, evaluation tests will be conducted for service mapping (i.e., assessing the efficiency of the T-NOVA service mapping methods) and for service chaining. In terms of service chaining, we will validate the correctness of NF chaining (i.e., that traffic traverses the NFs in the order prescribed by the client) and quantify any benefits in terms of state reduction using our SDN-based port switching approach.
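The chaining-correctness check above reduces to comparing the prescribed chain against the sequence of NF hops a test flow was actually observed to traverse (e.g. extracted from per-NF packet captures). A minimal sketch; how the observed sequence is collected is deployment-specific, and the NF names are just the T-NOVA VNF labels used for illustration.

```python
def chain_order_ok(prescribed, observed):
    """Check that `observed` (the sequence of NF names a test flow
    actually traversed) visits the NFs of `prescribed` in order.
    Extra or repeated hops (e.g. retransmissions) are tolerated, as
    long as the prescribed order is preserved."""
    hops = iter(observed)
    # Subsequence test: for each prescribed NF, consume observed hops
    # until it is found; fail if the iterator runs out first.
    return all(any(hop == nf for hop in hops) for nf in prescribed)

chain = ["vTC", "vSA", "vHG"]
print(chain_order_ok(chain, ["vTC", "vTC", "vSA", "vHG"]))   # True: in order
print(chain_order_ok(chain, ["vSA", "vTC", "vHG"]))          # False: vSA too early
```

Running this check per test flow, over many flows, gives a simple pass rate for the chaining validation.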
The ITALTEL testing labs are composed of interconnected test plants (located in Milan and Palermo, Italy) based on proprietary or third-party equipment, to emulate real-life communication networks and carry out experiments on any type of voice and/or video over IP service. The experimental testbed will be based on the available hardware platforms in the Italtel test plants. This test plant will be used to verify the behavior of the virtual SBC and the virtual TU VNFs.
The virtual SBC, which represents the Device Under Test, will be connected to Site B. By exploiting the capabilities offered by the Italtel test lab, a number of experiments will be designed in order to verify the DUT behavior under a wide variety of test conditions.
The SBC in Site A is the current commercial solution of Italtel, namely the Italtel Netmatch-S. Netmatch-S is a proprietary SBC, based on bespoke hardware, which can sustain a high number of concurrent sessions and provide various services, such as NAT and transcoding of both audio and video sessions. A variety of end-user terminals are present in the test plant and can be used in order to perform testing on any type of service. High-definition video communication and tele-presence solutions are also present in the lab and can be used for testing activities. Traffic generators are available to verify the correct behavior of the proposed solutions under load. Finally, different types of Measurement Probes can be used, which can evaluate different Quality of Service parameters, both intrusively and non-intrusively.
The scheme in Figure 8 represents the virtual Transcoder Unit (TU) VNF. It provides the transcoding function for the benefit of many other VNFs in order to create enhanced services.
The Intel Labs Europe test-bed is a medium-scale data centre facility comprising 35+ HP servers of generations ranging from G4 to G9. The CPUs are Xeon-based, with differing configurations of RAM and on-board storage. Additional external storage options in the form of Backblaze storage servers are also available. This heterogeneous infrastructure is available as required by the T-NOVA project. However, for the initial experimental protocols a dedicated lab-based configuration will be implemented, as outlined in Figure 9. This testbed is dedicated to the initial research activities for Task 3.2 (resource repository) and Task 4.1 (virtualised infrastructure).

The nodes will be a mixture of Intel i7 4770 3.40 GHz CPUs with 32 GB of RAM, and one node with 2 Xeon E5 2680 v2 2.8 GHz CPUs and 64 GB of RAM. The latter provides 10 cores per processor (the compute node has in total 20 cores) and offers a set of platform features of interest to Tasks 4.1 and 3.2 (e.g. VT-x, VT-d, Extended Page Tables, TSX-NI, Trusted Execution Technology (TXT) and 8 GT/s QuickPath Interconnects for fast inter-socket communications). Each compute node features an X540-T2 network interface card. The X540 has dual 10 Gb Ethernet ports which are DPDK-compatible, and is SR-IOV capable with support for up to 64 virtual functions. In the testbed configuration, one port on the NIC is connected to a Management Network and the other is connected to a Data Network. Inter-Virtual-Machine traffic on different compute nodes is facilitated via an Extreme Networks G670 48-port SDN switch with OpenFlow support. The management network is implemented with a 10 Gb 12-port Netgear ProSafe switch.

From a software perspective, the testbed is running the Liberty version of OpenStack and the Helium version of OpenDaylight. Once the initial configuration has been functionally validated, the testbed will be upgraded to later releases. Integration between the OpenStack Neutron module and OpenDaylight is implemented using the ML2 plugin. Virtualisation of the compute resources is based on the use of KVM hypervisors.
• Technology Characterisation - Evaluation of the candidate technologies for the IVM (e.g. Open vSwitch, DPDK, SR-IOV etc.) and identification of the most appropriate configurations etc.
The Institute of Applied Information Technology (InIT) cloud computing lab (ICCLab) at Zurich University of Applied Sciences (ZHAW) runs multiple cloud testbeds for research and experimentation purposes. Below is a summary of the various experimentation cloud testbeds maintained by the lab.
Table 5 ZHAW Testbed availability
4.3.3.1. Description
ICCLab's bart OpenStack cloud is generally used for various R&D projects. However, due to commitments in other projects, bart (as originally planned) will not be available for testing purposes until mid-2016.
Disk: 4x 1 TB Enterprise SATA-3 Hard Disk, 7200 U/min, 6 Gb/s (Seagate ST1000NM0011)
Each of the nodes of this testbed is connected through 1 Gbps Ethernet links to an HP ProCurve 2910AL switch, and a 1 Gb/s link connects the testbed to the ZHAW university network. This testbed has been allocated 32 public IPs in the 160.85.4.0/24 block, which allows collaborative work to be conducted over this testbed. ICCLab currently has 3 OpenFlow switches that can be provisioned for use in T-NOVA at a later point. The characteristics of these switches are:
Model: Pica8 P-3290
Processor: MPC8541
Packet Memory Buffer: 4 MB
Memory: 512 MB System / 2 GB SD/CF
OS: PicOS, stock version
Testbed Name | No. of vCPUs | RAM (GB) | Storage (TB) | Purpose
Lisa | 200 | 840 | 14.5 | Used for education and by the external community
This testbed can be easily modified to add more capacity if needed. Initially, this testbed will be used to support ZHAW's development work in T3.4 Service Provisioning, Management and Monitoring, and Task 4.3 SDK for SDN. Later, this testbed can be used for T-NOVA consortium-wide validation tests as a Zurich point-of-presence (PoP) site for the overall T-NOVA demonstrator. For inter-site tests, our testbed can be connected to remote sites through a VPN setup.
Specifically, in the frame of Task 4.3, where ZHAW is developing an SDK for SDN based on the concrete implementations of agreed reference applications, a small SDN testbed has been set up with OpenStack and OpenDaylight Helium as the SDN controller. The description and characteristics of this testbed are illustrated in Figure 11:
Figure 12 ZHAW SDN testbed network diagram
As the figure describes, this testbed is made of 2 compute nodes, 1 controller node and 1 SDN controller node. All nodes are connected using a physical switch with Open vSwitch. The controller node is co-hosted with the switch. OpenStack (Juno release) is set up and configured with the OpenDaylight ML2 plugin, and the SDN controller is the OpenDaylight Helium release. Some of the work being tested in this environment includes achieving tenant isolation without using tunnels, flow redundancy to achieve resilience, service function chaining strategies, etc. The physical characteristics of this testbed are summarized in Table 7:
Table 7 SDN Test-bed specifications
vCores: 12
RAM: 109 GB
Nova Storage: 147 GB
Cinder Storage (distributed): 450 GB
Glance Storage (distributed): 450 GB
Uplink: 100 Mbps
4.3.3.2. Test Planning
The ZHAW testbeds will be used to validate the T-NOVA stack and will be configured as a PoP for deploying the NFs through the orchestrator. Furthermore, the SDK for SDN tool that will be developed in T4.3 will undergo functional testing using the ZHAW testbed. The tests will be categorized under four broad categories:
• Testbed validation - This set of tests will be planned to evaluate the general characteristics of the OpenStack testbed itself, VM provisioning latency studies, etc.
• Marketplace testing and integration - ZHAW is adapting their Cyclops billing framework to incorporate the T-NOVA marketplace requirements of end-user billing as well as revenue share reports for the FPs. The Lisa OpenStack testbed is being used to deploy marketplace modules that interact with Cyclops to aid in the development phase. The integration tests with the rest of the marketplace modules will also be carried out after the development phase is over. These tests will be carried out in conjunction with ATOS and TEIC, who are the main contributors to the dashboard module.
4.3.4. Rome (CRAT)
4.3.4.1. Description
The Consortium for the Research in Automation and Telecommunication (CRAT) developed a small SDN testbed at the Network Control Laboratory (NCLAB), with the purpose of performing academic analysis and research focused on network control and optimization.
The CRAT testbed will be used to validate the functionalities of the SDN control plane under development in Task 4.2. Experimental plans will be developed to compare performance in different scenarios (single controller, multiple controllers). Moreover, research activities focused on the virtualization of the SDN control plane, in terms of elastic deployment and load balancing of control plane instances, will also benefit from the testbed described above.
4.3.5. Limassol (PTL)
4.3.5.1. Description
PrimeTel's Triple-play platform, called Twister, is a converged end-to-end telecom platform, capable of supporting an integrated multi-play network for various media, services and devices. The platform encompasses all elements of voice, video and data in a highly customisable and upgradeable package. The IPTV streamers receive content from satellite, off-air terrestrial and studios and convert it to MPEG-2/MPEG-4 over UDP multicast, while Video on Demand services are delivered over UDP unicast. The Twister telephony platform uses Voice over IP (VoIP) technology. The solution is based on the open SIP protocol and provides all the essential features expected from Class 5 IP Centrex softswitches. Media Gateways are used for protocol conversion between VoIP and traditional SS7/ISDN telephone networks. IP interconnections with international carriers are provided through international PoPs. The platform also includes components that provide centralized and distributed traffic policy enforcement, monitoring and analytics in an integrated management system. The Twister Converged Billing System provides mediation, rating and bill generation for multiple services; it also maintains a profile for each subscriber. The customer premises equipment (CPE) provides customers with Internet, telephony and IPTV connectivity. It behaves as an integrated ADSL modem, IP router, Ethernet switch and VoIP media gateway. The STB receives multicast/unicast MPEG-2/MPEG-4 UDP streams and shows them on TV. Through a Sonus interface and IP connectivity, the platform is linked to a partner's 3G Mobile Network for offering IP services to mobile customers.
R&D Testbed
PrimeTel’s R&D testbed facilities can connect to the company’s network backbone and utilize the network accordingly. Through the R&D testbed, research engineers can connect to parts of interest on the real network. In collaboration with the Networks Department, R&D can conduct network analysis, traffic monitoring, power measurements etc., and also allow for testing and validation of newly introduced components as part of its research projects and activities. A number of beta testers can be connected to the testbed to support validation, acting as real users and providing the necessary feedback on any proposed system, component or application developed.
PrimeTel’s testbed is ideal for running the virtual home-box use case, and in particular for testing it with real end users. PrimeTel currently has around 12,000 TV subscribers, a number of whom have expressed interest in participating in testing and evaluation activities. It is foreseen to allow real end-user testing of the T-NOVA platform, specifically for testing the HG VNF. More specifically, PrimeTel's beta testers (around 100) will be invited to participate in the T-NOVA trials during Y3.
This section approaches the system-level validation needs by providing a step-by-step approach for the validation of the T-NOVA Use Cases as they have been laid out in Deliverable D2.1 [D2.1] and later updated in D2.22 [D2.22]. For each UC, the test description includes preconditions, methodology, metrics and expected results.
Test Methodology: The SP will perform the service description procedure, involving the SLA template fulfilment, by means of the connection to the SLA management module for different kinds of services.
• Time between the moment the service description is completed by the SP and the notification that the service information is available in the Business Service catalogue and SLA module.
• Time from the moment the customer has accepted the applicable conditions until the SLA contract is stored in the SLA module (including the SLA parameters that will need to be monitored by the orchestrator monitoring system).
Test Methodology: Measure the time between the service request and the moment the service is fully operational (how to verify that the service is operational depends on the specific VNF).
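As an illustrative sketch of this measurement, the service setup time can be taken by polling a readiness probe until it reports success. The callables `request_service` and `is_operational` are placeholders, standing in for the actual orchestrator deployment request and a VNF-specific readiness check (e.g. a SIP OPTIONS ping for a vSBC); they are assumptions, not T-NOVA APIs.

```python
import time

def measure_setup_time(request_service, is_operational, timeout=60.0, poll=0.5):
    """Measure the time from service request until the service is fully
    operational, by repeatedly polling a VNF-specific readiness probe."""
    t0 = time.monotonic()
    request_service()                       # issue the deployment request
    while time.monotonic() - t0 < timeout:
        if is_operational():                # VNF-specific operational check
            return time.monotonic() - t0    # service setup time in seconds
        time.sleep(poll)
    raise TimeoutError("service did not become operational in time")
```

The monotonic clock is used so that the measurement is immune to wall-clock adjustments during long deployments.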
Metrics: Observe the updates in the Orchestrator monitoring repositories and measure accuracy and response time, i.e. the time interval from the change in resource usage until the Orchestrator records the change.
Expected Results: • Metrics are properly propagated and correspond to the
Test Methodology: Follow a procedure similar to UC5.2 (artificially consume and drain VNF resources) and/or UC4.3 (disrupt service operation). Validate that the SLA status is affected.
Metrics: Measure SLA monitoring accuracy, especially SLA violation alarms. Measure response time (from the incident to the display of the updated SLA status on the Dashboard).
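A minimal sketch of the violation check behind such alarms is shown below. The parameter names and the upper-bound threshold semantics are illustrative assumptions, not the T-NOVA SLA model; real SLA parameters may also have lower bounds or composite conditions.

```python
def check_sla(metrics, sla):
    """Compare monitored metrics against SLA thresholds (treated here as
    upper bounds) and return the list of violated parameters.
    An empty list means the SLA is respected."""
    violations = []
    for name, limit in sla.items():
        value = metrics.get(name)           # missing metrics are skipped
        if value is not None and value > limit:
            violations.append((name, value, limit))
    return violations
```

Each returned tuple (parameter, measured value, threshold) is what a dashboard alarm would display; response time is then the interval from the incident to that display.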
1. Response time to tear down the service
2. Update of associated information (duration of service, billing info, SLA data).
Expected Results: The resources used by this service will be released. Billing information must be sent. In the customer’s marketplace view, this service will be shown as stopped.
The resources used by the services must be released. Billing information must be sent. In the customer’s marketplace view, these services will be shown as stopped. In the SP’s portal view, all services must be stopped.
The resources used by this service will be released. Billing information must be sent. In the customer’s marketplace view, this service will be shown as stopped. In the SP’s portal view, the service must be stopped.
The pktgen software package for Linux [PKTGEN] is a popular tool in the networking community for generating traffic loads for network experiments. Pktgen is a high-speed packet generator running in the Linux kernel, very close to the hardware, thereby making it possible to generate packets with very little processing overhead. The packet generation can be controlled through a user interface with respect to packet size, IP and MAC addresses, port numbers, inter-packet delay, and so on. Pktgen is used to test network equipment for stress, throughput and stability behaviour. A high-performance traffic generator/analyzer can thus be created using a Linux PC.
At the transport layer, D-ITG currently supports TCP (Transmission Control Protocol), UDP (User Datagram Protocol), SCTP (Stream Control Transmission Protocol), and DCCP (Datagram Congestion Control Protocol). It also supports ICMP (Internet Control Message Protocol). Among the several features described below, an FTP-like passive mode is also supported to conduct experiments in the presence of NATs, and it is possible to set the TOS (DS) and TTL IP header fields. The user simply chooses one of the supported protocols and the distribution of both IDT (inter-departure time) and PS (packet size) will be automatically set.
netmap/VALE is a framework for high-speed packet I/O. Implemented as a kernel module for FreeBSD and Linux, it supports access to network cards (NICs), the host stack, virtual ports (the "VALE" switch), and "netmap pipes". netmap can easily achieve line rate on 10G NICs (14.88 Mpps), moves over 20 Mpps on VALE ports, and over 100 Mpps on netmap pipes. netmap/VALE can be used to build extremely fast traffic generators, monitors, software switches and network middleboxes, to interconnect virtual machines or processes, and to do performance testing of high-speed networking applications without the need for expensive hardware. Full support for libpcap is provided, so most pcap clients can use it with no modifications. netmap, VALE and netmap pipes are implemented as a single, non-intrusive kernel module. Native netmap support is available for several NICs through slightly modified drivers; for all other NICs, an emulated mode on top of standard drivers is provided. netmap/VALE is part of standard FreeBSD distributions, and is available in source format for Linux too.
The Multi-Generator (MGEN) [MGEN] is open-source software by the Naval Research Laboratory (NRL) PROTocol Engineering Advanced Networking (PROTEAN) group that provides the ability to perform IP network performance tests and measurements using UDP and TCP IP traffic. The toolset generates real-time traffic patterns so that the network can be loaded in a variety of ways. The generated traffic can also be received and logged for analysis. Script files are used to drive the generated loading patterns over the course of time. These script files can be used to emulate the traffic patterns of unicast and/or multicast UDP and TCP IP applications. The toolset can be scripted to dynamically join and leave IP multicast groups. MGEN log data can be used to calculate performance statistics on throughput, packet loss rates, communication delay, and more. MGEN currently runs on various Unix-based (including MacOS X) and WIN32 platforms. The principal tool is the mgen program, which can generate, receive, and log test traffic. Additional tools are available to facilitate automated script file creation and log file analyses.
IPERF
IPERF [IPERF] is a commonly used network-testing tool that can create Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) data streams and measure the throughput of the network that is carrying them. IPERF is a tool for network performance measurement, and specifically for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, protocols, and buffers. For each test it reports the bandwidth, delay jitter, datagram loss and other parameters. IPERF is written in C.
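For illustration only, the following toy sketch performs a memory-to-memory TCP throughput measurement over the loopback interface, in the spirit of what IPERF automates. It is not a benchmark and makes no claim about IPERF internals; real measurements should of course use IPERF itself.

```python
import socket
import threading
import time

def tcp_throughput(total_bytes=1_000_000, chunk=64 * 1024):
    """Send total_bytes over a loopback TCP connection and return
    (bytes received, achieved throughput in bytes/s)."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))              # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    received = []

    def sink():
        # Receiver side: drain the connection and count the bytes.
        conn, _ = srv.accept()
        got = 0
        while True:
            data = conn.recv(chunk)
            if not data:
                break
            got += len(data)
        conn.close()
        received.append(got)

    t = threading.Thread(target=sink)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    payload = b"\x00" * chunk
    start = time.monotonic()
    sent = 0
    while sent < total_bytes:               # sender side: blast fixed-size chunks
        cli.sendall(payload)
        sent += len(payload)
    cli.close()
    t.join()
    srv.close()
    elapsed = time.monotonic() - start
    return received[0], sent / elapsed
```

Since TCP is reliable, the byte count received always equals the byte count sent; the interesting output is the achieved rate, which a tool like IPERF reports together with jitter and loss for UDP streams.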
Ostinato [OSTINATO] is an open-source, cross-platform network packet and traffic generator and analyzer with a friendly GUI. It aims to be "Wireshark in reverse" and thus become complementary to Wireshark. It features custom packet crafting with editing of any field for several protocols: Ethernet, 802.3, LLC SNAP, VLAN (with Q-in-Q), ARP, IPv4, IPv6, IP-in-IP a.k.a. IP tunneling, TCP, UDP, ICMPv4, ICMPv6, IGMP, MLD, HTTP, SIP, RTSP, NNTP, etc. It can import and export PCAP capture files. Ostinato is useful for both functional and performance testing.
The following list summarizes some of the most widely used traffic generators for L2-L4 assessment.
• SIPp [SIPp]: a free open-source test tool / traffic generator for the SIP protocol. It includes a few basic SipStone user agent scenarios (UAC and UAS) and establishes and releases multiple calls with the INVITE and BYE methods. It can also read custom XML scenario files describing anything from very simple to complex call flows. It features dynamic display of statistics about running tests (call rate, round-trip delay, and message statistics), periodic CSV statistics dumps, TCP and UDP over multiple sockets or multiplexed with retransmission management, and dynamically adjustable call rates.
• Seagull [SEAGULL]: a free, open-source (GPL) multi-protocol traffic generator test tool. Primarily aimed at IMS (3GPP, TISPAN, CableLabs) protocols (and thus a perfect complement to SIPp for IMS testing), Seagull is a powerful traffic generator for functional, load, endurance, stress and performance/benchmark tests for almost any kind of protocol. In addition, its openness allows adding support for a brand-new protocol in less than two hours, with no programming knowledge. For that purpose, Seagull comes with several protocol families embedded in the source code: binary/TLV (Diameter, Radius and many 3GPP and IETF protocols), external library (TCAP, SCTP), and text (XCAP, HTTP, H.248 ASCII).
• TCPReplay [TCPREP]: a suite of GPLv3-licensed utilities for UNIX (and Win32 under Cygwin) operating systems for editing and replaying network traffic previously captured by tools like tcpdump and Ethereal/Wireshark. It allows classifying traffic as client or server, rewriting Layer 2, 3 and 4 packets and finally replaying the traffic back onto the
A measurement framework for the evaluation of OpenFlow switches and controllers has been developed in OFLOPS [OFLOPS], an open framework for OpenFlow switch evaluation. The software suite consists of two modules, OFLOPS and Cbench. OFLOPS (OpenFLow Operations Per Second) is a dummy controller used to stress and measure the control logic of OpenFlow switches. Cbench, on the other hand, emulates a collection of substrate switches by generating large numbers of packet-in messages and evaluating the rates of the corresponding flow-modification messages generated by the controller. As the source code of the framework is distributed under an open license, it can be adapted to evaluate performance within the T-NOVA project.
5.2.1.3. Service/Resource mapping evaluation tools
AutoEmbed [DIETRICH13] was originally developed for the evaluation of various aspects of multi-provider VN embedding, such as the efficiency and scalability of embedding algorithms, the impact of different levels of information disclosure on VN embedding efficiency, and the suitability of VN request descriptions. The AutoEmbed framework supports different business roles and stores topology and request information, as well as the network state, in order to evaluate mapping efficiency. AutoEmbed includes an extendable library which supports the integration of additional embedding algorithms, which can be compared against a reference embedding, e.g. by using linear program optimization to find optimal solutions for different objectives, or by using a different resource visibility level. Request and topology information is exchanged using an XML schema, which simplifies intercommunication with existing components. The evaluation can either be done online using the GUI, or by further processing of the meta-statistics (.csv files) computed by the AutoEmbed library.
Alevin
ALgorithms for Embedding of VIrtual Networks (ALEVIN) is a framework to develop, compare, and analyze virtual network embedding algorithms [ALEVIN]. The focus in the development of ALEVIN has been on modularity and efficient handling of arbitrary parameters for resources and demands, as well as on supporting the integration of new and existing algorithms and evaluation metrics. ALEVIN is fully modular regarding the addition of new parameters to the virtual network model.

For platform independence, ALEVIN is written in Java. ALEVIN’s GUI and multi-layer visualization component is based on MuLaViTo [MULATIVO], which enables visualizing and handling the SN and an arbitrary number of VNs as directed graphs.
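To make the embedding problem such frameworks evaluate concrete, the following is a deliberately simple greedy node-mapping heuristic of the kind that can be compared against optimal (e.g. linear-programming) solutions. It considers CPU only and ignores link mapping; all names are illustrative and it is not an algorithm from AutoEmbed or ALEVIN.

```python
def greedy_node_mapping(substrate_cpu, vn_demands):
    """Greedy virtual-node embedding: map each virtual node, in order of
    decreasing CPU demand, onto the substrate node with the most remaining
    CPU. Returns {virtual_node: substrate_node}, or None if the virtual
    network request cannot be embedded."""
    free = dict(substrate_cpu)              # remaining CPU per substrate node
    mapping = {}
    for vnode, demand in sorted(vn_demands.items(), key=lambda kv: -kv[1]):
        best = max(free, key=free.get)      # most lightly loaded substrate node
        if free[best] < demand:
            return None                     # embedding request rejected
        free[best] -= demand
        mapping[vnode] = best
    return mapping
```

Metrics such as acceptance ratio and revenue/cost can then be computed over a stream of requests, which is exactly the kind of comparison these frameworks automate.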
5.2.2. VNF-specific validation tools
5.2.2.1. Traffic Classifier (vTC)
In T-NOVA, the vTC shares common properties with its hardware-based counterpart. Activities in the frame of the IETF Benchmarking Methodology WG have proposed benchmarking methodologies for such devices, e.g. [Hamilton07] (more specific to media-aware types of classification). The goal of that document is to generate performance metrics in a lab
The above metrics are independent of the Device Under Test (DUT) implementation. The DUT should be configured as when used in a real deployment, or as is typical for the use case for which the device is intended. The selected configuration should be made available along with the benchmarking results. In order to increase and guarantee repeatability of the tests, the configuration scripts and all the information relating to the testbed setup should be made available. A very important issue for the benchmarking of content-aware devices is the traffic profile that will be utilized during the experiments. Since the explicit purposes of these devices vary widely, but they all inspect deep into the packet payload in order to support their functionalities, the tests should utilize traffic flows that resemble real application traffic. It is important for the testing procedure to define the following application-flow-specific characteristics:
1. Maximum application session establishment rate - Traffic pattern generation should begin at 10% of the expected maximum and increase through 110% of the expected maximum. The duration of each test should be at least 30 seconds. The following metrics should be observed:
• Maximum application flow rate – the maximum rate at which the application is served
• Application flow duration – min/max/avg application duration as defined by
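The stepped load pattern described in item 1 can be sketched as follows; the 10% step size is an assumption for illustration (the methodology only fixes the 10%-110% range).

```python
def load_sweep(expected_max, start=0.10, stop=1.10, step=0.10):
    """Offered session-establishment rates from 10% to 110% of the
    expected maximum, in 10% steps. Each returned rate would be
    sustained for at least 30 seconds during the test."""
    n = int(round((stop - start) / step)) + 1
    return [round(expected_max * (start + i * step), 6) for i in range(n)]
```

For an expected maximum of 1000 sessions/s, this yields the eleven offered rates 100, 200, ..., 1100 sessions/s; observing where the served rate stops tracking the offered rate locates the maximum application flow rate.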
The vSBC incorporates two separate functions within a single device: the Interconnection Border Control Function (IBCF) for the signalling procedures and the Border Gateway Function (BGF), focused on the user data plane. Signalling procedures are implemented using the Session Initiation Protocol (SIP), while the data or user plane usually adopts the Real-time Transport Protocol (RTP) for multimedia content delivery.
The provided quality of service is usually verified by analyzing a set of parameters evaluated in each active session. The basic parameters are related to network jitter, packet loss and end-to-end delay [RFC3550]. However, instrumental measurements of ad hoc objective parameters should also be performed. In particular, objective assessment of speech and video quality should be achieved using, for instance, the techniques described in Rec. ITU-T P.862 (Perceptual Evaluation of Speech Quality, PESQ) for audio, or following the guidelines given in ITU-T J.247 (Objective perceptual multimedia video quality measurement in the presence of a full reference) for video.
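The interarrival jitter estimator referenced above is defined in RFC 3550 (Appendix A.8) and can be written directly from its definition; the list-based interface below is of course a simplification of per-packet processing in a real monitor.

```python
def rtp_jitter(send_times, recv_times):
    """Interarrival jitter estimate as defined in RFC 3550:
    J(i) = J(i-1) + (|D(i-1,i)| - J(i-1)) / 16, where
    D(i,j) = (Rj - Ri) - (Sj - Si) is the difference in
    relative transit time between two packets."""
    j = 0.0
    for i in range(1, len(send_times)):
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        j += (abs(d) - j) / 16.0            # exponential smoothing, gain 1/16
    return j
```

A perfectly regular stream (constant transit delay) yields zero jitter, while any variation in transit time raises the estimate; the 1/16 gain gives the smoothed, noise-tolerant behaviour the RFC specifies.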
The metrics summarized above are strictly correlated. In fact, it must be verified that the maximum number of concurrent sessions and the maximum session rate can be achieved simultaneously. Moreover, the quality of service must be continuously monitored under loading conditions, to verify that the end-user perception is not affected. To this end, ad hoc experiments must be designed, for instance by analysing a few sample sessions kept always active during loading tests.
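One simple consistency check between the two metrics follows from Little's law (L = λ·W), assuming steady-state load; the helper below is an illustrative sketch, not part of any vSBC tooling.

```python
def expected_concurrent_sessions(session_rate, mean_duration):
    """Little's law (L = lambda * W): the steady-state number of
    concurrent sessions when sessions arrive at `session_rate` per
    second and last `mean_duration` seconds on average."""
    return session_rate * mean_duration
```

For example, 50 new sessions/s with a mean call duration of 120 s implies about 6000 concurrent sessions, so a test plan claiming both maxima must dimension the load generator accordingly.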
• Low Orbit Ion Cannon (LOIC): an open-source application that can be used for stress testing and denial-of-service attack generation. It is written in C# and is currently hosted on SourceForge (http://sourceforge.net/projects/loic/) and GitHub (https://github.com/NewEraCracker/LOIC/).
• hping (http://www.hping.org/): a command-line-oriented TCP/IP packet assembler/analyzer. The interface is inspired by the ping(8) Unix command, but hping isn't only able to send ICMP echo requests. It supports the TCP, UDP, ICMP and RAW-IP protocols, has a traceroute mode, the ability to send files over a covert channel, and many other features, including firewall testing and port scanning. Hping works on
• SendIP (http://www.earth.li/projectpurple/progs/sendip.html): a command-line tool to send arbitrary IP packets. It has a large number of options to specify the content of every header of a RIP, RIPng, BGP, TCP, UDP, ICMP, or raw IPv4/IPv6 packet. It also allows any data to be added to the packet. Checksums can be calculated automatically, but sending deliberately wrong checksums is supported too.
• Internet Control Message Protocol (ICMP): this protocol can be used to report problems occurring during the delivery of IP datagrams within an IP network. It can be utilized, for instance, when a particular End System (ES) is not responding, when an IP network is not reachable, or when a node is overloaded.
• Ping: the "ping" application can be used to check whether an end-to-end Internet path is operational. Ping operates by sending Internet Control Message Protocol (ICMP) echo request packets to the target host and waiting for an ICMP response. In the process, it measures the time from transmission to reception (Round-Trip Time, RTT) and records any packet loss. This application can be used to detect whether a service is under attack or not. As an example, if a service is running in a virtual machine, checking the performance of the virtual machine through the RTT variation might show whether the service is under attack or not.
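The statistics ping reports can be sketched from a list of per-probe RTT samples, with `None` standing in for a lost echo reply; how the probes are actually sent (raw ICMP sockets need root privileges) is left to the real tool.

```python
def rtt_stats(samples):
    """Summarize per-probe RTTs (in seconds). None marks a lost probe,
    i.e. an ICMP echo request that received no reply."""
    replies = [s for s in samples if s is not None]
    loss = 1.0 - len(replies) / len(samples)
    if not replies:
        return {"loss": loss, "min": None, "avg": None, "max": None}
    return {"loss": loss,
            "min": min(replies),
            "avg": sum(replies) / len(replies),
            "max": max(replies)}
```

Tracking how the average and maximum RTT drift over successive windows is the kind of RTT-variation signal the text suggests as a coarse attack indicator.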
For testing the video quality at the user side, some standardized approaches exist. They will be used as performance metrics for validating video encoding/decoding and QoS/QoE estimation tools.
The NTIA Video Quality Metric (VQM) [ALICANTED8.1] is a standardized method of objectively measuring video quality by making a comparison between the original and the distorted video sequences, based only on a set of features extracted independently from each video. The method takes perceptual effects of various video impairments into account (e.g., blurring, jerky/unnatural motion, global noise, block distortion, colour distortion) and generates a single metric which predicts the overall quality of the video.
Subjective Quality Evaluations
As the user environment is dedicated to the quality of the service as perceived by the user, there is the need to perform subjective quality evaluations to effectively assess the quality of a system [ITU-RBT50013]. In this case, one can use a vast number of different evaluation methods, such as the Double Stimulus Continuous Quality Scale [PINSON04]. The DSCQS provides means for comparing two sequences subjectively: the user evaluates once a reference version (i.e., a version not processed by the system under investigation) and once a processed version (i.e., a version processed by the system under investigation). The resulting rating gives feedback on how well the system under investigation performs and whether there is a need to adjust parameters.
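A minimal sketch of the DSCQS scoring step is given below, with per-viewer ratings on the 0-100 continuous quality scale; the plain mean shown is a simplification of the full ITU-R BT.500 screening and aggregation procedure.

```python
def dscqs_scores(ref_ratings, proc_ratings):
    """Per-viewer DSCQS difference scores (reference minus processed,
    each rated on the 0-100 continuous quality scale) and their mean.
    A mean near zero indicates the processed sequence is perceptually
    indistinguishable from the reference."""
    diffs = [r - p for r, p in zip(ref_ratings, proc_ratings)]
    return diffs, sum(diffs) / len(diffs)
```

A consistently positive mean difference signals that the system under investigation degrades perceived quality and that its parameters may need adjusting.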
• Object set size – the size of the object set cached by the proxy. The proxy cache must be able to quickly determine whether a requested object is cached, to reduce response latency. The proxy must also efficiently update its state on a cache hit, miss or replacement.
• Object size – the issue for the proxy cache is to decide whether to cache a large number of small objects (which could potentially increase the hit rate) or to cache a few large objects (possibly increasing the byte hit rate).
• Recency of reference – most web proxies use the Least Recently Used (LRU) replacement policy. Recency is a characteristic of web proxy workloads.
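The three workload characteristics above interact; a toy LRU proxy cache makes the distinction between hit rate and byte hit rate concrete. This is an illustrative sketch, not a real proxy (no expiry, no validation, whole-object granularity).

```python
from collections import OrderedDict

class LRUProxyCache:
    """Toy proxy cache with LRU replacement, tracking both the hit rate
    (fraction of requests served from cache) and the byte hit rate
    (fraction of bytes served from cache)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = OrderedDict()          # url -> object size (LRU order)
        self.hits = self.misses = 0
        self.hit_bytes = self.miss_bytes = 0

    def request(self, url, size):
        if url in self.store:
            self.store.move_to_end(url)     # refresh recency on a hit
            self.hits += 1
            self.hit_bytes += size
            return True
        self.misses += 1
        self.miss_bytes += size
        while self.used + size > self.capacity and self.store:
            _, evicted = self.store.popitem(last=False)   # evict LRU object
            self.used -= evicted
        if size <= self.capacity:
            self.store[url] = size
            self.used += size
        return False

    def hit_rate(self):
        return self.hits / (self.hits + self.misses)

    def byte_hit_rate(self):
        return self.hit_bytes / (self.hit_bytes + self.miss_bytes)
```

Replaying a request trace with mixed object sizes through such a cache shows how favouring many small objects pushes up the hit rate while leaving the byte hit rate behind, which is exactly the trade-off the bullet on object size describes.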
• Web Polygraph [WEBPOLY] is a freely available benchmarking tool for web caching proxies. The Polygraph distribution includes high-performance web client and server simulators. Polygraph is capable of modeling a variety of workloads for micro- and macro-benchmarking. Poly has been used to test and tune most leading caching proxies and is the benchmark used for TMF cache-offs.
• httperf [HTTPERF] is a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance. The focus of httperf is not on implementing one particular benchmark but on providing a robust, high-performance tool that facilitates the construction of both micro- and macro-level benchmarks.
The virtual Transcoding Unit evaluation involves a performance comparison between accelerated and non-accelerated versions of the same VNF. The accelerated version exploits a multicore GPU card installed in the host machine. The vTU performance evaluation will employ methodologies and tools already analysed in the above sections, especially the ones used for the vSBC and vHG. The main performance metrics considered are:
Deliverable D2.52 presented a revised plan for the validation/experimentation campaign of T-NOVA. The target of experimentation has been the entire integrated T-NOVA system as a whole, rather than individual components. Taking into account the challenges in NFV evaluation, a set of system- and service-level metrics were defined, as well as the experimentation procedures for the validation of each of the T-NOVA use cases. The testbeds already available at the partners’ sites, as well as the pilots to be integrated, constitute an adequate foundation for the assessment and evaluation of the T-NOVA solution, under various diverse setups and configurations.
It can be deduced that the planning for the T-NOVA experimentation campaign to be carried out in the frame of WP7 is complete with regard to infrastructure allocation as well as methodology. It is, however, expected that some fine-tuning of this plan will be necessary during the roll-out of the tests. The actual sequence and description of the steps applied will be reflected in the final WP7 deliverables.
[DIETRICH13] David Dietrich, Amr Rizk, and Panagiotis Papadimitriou. 2013. AutoEmbed: automated multi-provider virtual network embedding. In Proceedings of the ACM SIGCOMM 2013 conference on SIGCOMM (SIGCOMM '13). ACM, New York, NY, USA, 465-466. DOI=10.1145/2486001.2491690, http://doi.acm.org/10.1145/2486001.2491690
[ID-ROSA15] R. Rosa, C. Rothenberg, R. Szabo, "VNF Benchmark-as-a-Service", Working Draft, Internet-Draft, draft-rorosz-nfvrg-vbaas-00.txt, Oct 19, 2015, IETF Secretariat. <https://www.ietf.org/id/draft-rorosz-nfvrg-vbaas-00.txt>
[OFLOPS] Charalampos Rotsos, Nadi Sarrar, Steve Uhlig, Rob Sherwood, and Andrew W. Moore. 2012. OFLOPS: an open framework for OpenFlow switch evaluation. In Proceedings of the 13th international conference on Passive and Active Measurement (PAM '12), Nina Taft and Fabio Ricciato (Eds.). Springer-Verlag, Berlin, Heidelberg, 85-95. DOI=10.1007/978-3-642-28537-0_9, http://dx.doi.org/10.1007/978-3-642-28537-0_9
[RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip Delay Metric for IPPM", RFC 2681, September 1999, <http://www.rfc-editor.org/info/rfc2681>.
[RFC2889] Mandeville, R. and J. Perser, "Benchmarking Methodology for LAN Switching Devices", RFC 2889, August 2000, <http://www.rfc-editor.org/info/rfc2889>.
[RFC3511] Hickman, B., Newman, D., Tadjudin, S., and T. Martin, "Benchmarking Methodology for Firewall Performance", RFC 3511, April 2003, <http://www.rfc-editor.org/info/rfc3511>.
[RFC6349] Constantine, B., Forget, G., Geib, R., and R. Schrage, "Framework for TCP Throughput Testing", RFC 6349, August 2011, <http://www.rfc-editor.org/info/rfc6349>.