50 GbE, 100 GbE and 200 GbE PMD Requirements
Ali Ghiasi, Ghiasi Quantum LLC
NGOATH Meeting, Atlanta
January 20, 2015
Observation on 50 GbE, 200 GbE, and NG 100 GbE PMDs

❑ 50 GbE and 200 GbE are a complementary set of standards, just as the marketplace has shown with the complementary nature of 25 GbE/100 GbE
  – The current generation of switch ASICs offers 4x25 GbE breakout for a small incremental cost
  – The next generation of switch ASICs will offer 4x50 GbE breakout at the same economics
  – A complete ecosystem requires backplane, Cu cable, 100 m MMF, possibly 500 m PSM4, 2000 m, and 10,000 m PMDs, and should follow the 25/100 GbE PMDs
❑ NG 100 GbE PMD attributes and requirements
  – With the increase in volume, the market is enjoying significant cost reduction for 100 GbE PMDs such as 100GBASE-SR4, PSM4, and CWDM4/CLR4
  – Cost may not be the main driver to define NG 100 GbE PMDs, with the exception of CAUI-2
  – Currently defined 100 GbE PMDs will require an inverse mux with the introduction of 50G ASIC I/O
    • A PMA-PMA device could address any I/O mismatch
    • The simplest PMA/PMD implementation occurs when the number of electrical lanes equals the number of optical lanes/λ
  – Do we need to introduce, with every generation of electrical I/O (25G, 50G, 100G), new 100 GbE PMDs that are optimized for a given generation of ASIC but not optically backward compatible?
  – The decision to define a new optical PMD should not be taken lightly just to save a PMA-PMA mux!
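The inverse-mux point can be made concrete with a little lane arithmetic. A minimal sketch (function names are illustrative, not from any standard):

```python
# Illustrative sketch: when does a 100 GbE link need a PMA-PMA
# (inverse mux / gearbox) stage? The simplest PMA/PMD occurs when the
# number of electrical lanes equals the number of optical lanes.
def lanes(total_gbps, per_lane_gbps):
    assert total_gbps % per_lane_gbps == 0
    return total_gbps // per_lane_gbps

def needs_gearbox(elec_lane_gbps, opt_lane_gbps, total_gbps=100):
    return lanes(total_gbps, elec_lane_gbps) != lanes(total_gbps, opt_lane_gbps)

# CAUI-4 host (4x25G) into 100GBASE-SR4 optics (4x25G): lanes match
print(needs_gearbox(25, 25))  # False
# CAUI-2-class host (2x50G) into the same 4x25G optics: inverse mux needed
print(needs_gearbox(50, 25))  # True
```

This is exactly the mismatch the slide flags: the optics are unchanged, but a 50G-I/O ASIC no longer lines up with them one-to-one.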
A. Ghiasi — IEEE 802.3 NGOATH Study Group
Today's Ethernet Market Isn't Just About Enterprise

❑ Router/OTN
  – Leads deployment with the fastest network interfaces
  – Drives bleeding-edge technology at higher cost and lower density
❑ Cloud data centers
  – Clos fabrics typically operate at a lower port speed to achieve a switch ASIC radix of 32 or 64
  – Drives cost, power, and density to enable massive data center build-out
  – Forklift upgrades double capacity every ~2.5 years with the doubling of switch ASIC capacity
❑ Enterprise
  – Enjoys the volume-cost benefit of deploying the previous generation of cloud data center technology
  – More corporate IT services are now hosted by cloud operators
  – According to Goldman Sachs research, from 2013 to 2018 cloud will grow at a 30% CAGR compared to 5% for enterprise IT
    • http://www.forbes.com/sites/louiscolumbus/2015/01/24/roundup-of-cloud-computing-forecasts-and-market-estimates-2015/
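Compounded over the five-year window, those two rates diverge sharply. A quick arithmetic check of the quoted figures:

```python
# Back-of-envelope check of the cited growth rates over 2013-2018:
# 30% CAGR (cloud) vs 5% CAGR (enterprise IT), compounded for 5 years.
def total_growth(cagr, years):
    return (1 + cagr) ** years

cloud = total_growth(0.30, 5)       # ~3.71x over five years
enterprise = total_growth(0.05, 5)  # ~1.28x over five years
print(f"cloud {cloud:.2f}x vs enterprise {enterprise:.2f}x")
```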
Ethernet Serial Bitrate and Port Speed Evolution

❑ Router/OTN, Cloud, vs. Enterprise applications
  – The NGOATH project addresses the next generation of Cloud and Enterprise
  – 50 GbE is not only an interface for cloud servers but also a replacement for 40 GbE in the Enterprise.
[Figure: serial bitrate (Gb/s, 1-1000 log scale) vs. year (1995-2025) in relation to each Ethernet standard, plotting 1 GbE, 10 GbE, 25 GbE, 50 GbE, 100 GbE, 200 GbE, 400 GbE, and 800 GbE; series: serial bitrate and standard port speed.]
Evolution of Ethernet Speed-Feed

❑ The NGOATH project is addressing the needs of the next-generation Cloud track
  – P802.3bs is addressing the needs of the next-generation Router/OTN track
  – The 25G SMF project is addressing the needs of the next-generation Enterprise/campus track
[Timeline figure: approximate year of introduction, 2008-2019*]

Cloud track:
  – 480G switch, 48 SFP+ ports: 48x10GbE
  – 1440G switch, 36 QSFP10 ports: 36x40GbE / 144x10GbE
  – 3200G switch, 32 QSFP28 ports: 32x100GbE / 64x50GbE / 128x25GbE
  – 6400G switch, 32 QSFP56 ports: 32x200GbE / 64x100GbE / 128x50GbE

Router/OTN track:
  – 400G linecard, 4 CFP ports: 4x100GbE
  – 800G linecard, 8 CFP2 ports: 8x100GbE
  – 3200G linecard, 8 CFP8 ports: 8x400GbE

Enterprise track:
  – 480G switch, 48 SFP+ ports: 48x10/1GbE
  – 1280G switch, 32 QSFP10 ports: 32x40GbE / 128x10/1GbE
  – 3200G switch, 32 QSFP28 ports: 32x100GbE / 128x25/10GbE

*Not all possible configurations are listed.
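The port counts in these generations follow from simple breakout arithmetic: switch capacity = ports x port speed, and each port can break out into lower-speed interfaces that share its capacity. A hedged sketch (illustrative helper, not vendor tooling):

```python
# Illustrative breakout arithmetic for the switch generations above.
def breakouts(ports, port_gbe, speeds):
    capacity = ports * port_gbe  # aggregate switch capacity in GbE
    return {s: capacity // s for s in speeds}

# 3200G Cloud switch, 32 x QSFP28 ports
print(breakouts(32, 100, (100, 50, 25)))   # {100: 32, 50: 64, 25: 128}
# 6400G Cloud switch, 32 x QSFP56 ports
print(breakouts(32, 200, (200, 100, 50)))  # {200: 32, 100: 64, 50: 128}
```

This is why 4x50 GbE breakout at QSFP56 economics mirrors today's 4x25 GbE breakout at QSFP28.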
Current and Next NGOATH PMDs
[Block diagram: current vs. next-generation NGOATH PMD stacks]

Current 25/50/100 GbE stack:
  – 25G/50G(I)/100G MAC → Reconciliation (RS) → CGMII → PCS → RS(528,514) FEC (all PMDs except LR4) → PMA → CAUI-4 / 4x25G AUI → PMA → PMD
  – PMDs: 100GBASE-LR4, 100GBASE-SR4, 100GBASE-CR4, 100GBASE-KR4, PSM4, CWDM4, CLR4 (grouped in the figure as 100 GbE, 25/100 GbE, and 25/50/100 GbE variants)

Next-generation 50/100/200 GbE stack:
  – 50G/100G/200G MAC → Reconciliation (RS) → CCMII → PCS → RS(528,514) or RS(544,514) FEC → PMA → CCAUI-4 / 4xLAUI → PMA → PMD
  – PMDs: 50G-LR/200G-LR4, 200G-SR4/100G-SR2/50G-SR, 50G-CR/200G-CR4, 50G-KR/200G-KR4, 100G-DR2/200G-DR4, 50G-FR/200G-FR4, and 100GBASE-FR2 or 100GBASE-FR (grouped in the figure as 50/200 GbE, 50/100/200 GbE, and 100 GbE variants)

(I) 50 GbE is included in http://25gethernet.org; the application is to increase fabric radix.
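The two FEC options in the next-generation stack, RS(528,514) and RS(544,514), differ mainly in overhead. A back-of-envelope sketch of that arithmetic (the code-rate figures follow directly from the RS block sizes; the per-lane baud figures are the commonly quoted values, shown here as an illustration rather than taken from the slide):

```python
from fractions import Fraction

# Overhead of a Reed-Solomon (n, k) code: n/k - 1.
def rs_overhead(n, k):
    return Fraction(n, k) - 1

print(float(rs_overhead(528, 514)))  # ~0.027 ("KR4" FEC)
print(float(rs_overhead(544, 514)))  # ~0.058 ("KP4" FEC)

# Carrying the same payload, moving from RS(528,514) to RS(544,514)
# scales the per-lane signaling rate by 544/528:
print(25.78125 * 544 / 528)  # 26.5625
```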
The Challenge with 100 GbE Next-Gen PMDs

❑ Approach: support existing 100 GbE PMDs
  [Diagram: legacy 100 GbE host with KR4 FEC — CAUI-4 — PMA/PMD in a QSFP28, linked to a next-gen 100 GbE host (KP4?) — CAUI-2 — PMA/PMD in ½ of a QSFP56, with a KR4-to-KP4 translation; TP2/TP3 reference points at the module interfaces]
❑ Approach: support new 100 GbE PMDs
  [Diagram: the same topology, but the link runs the new PMD's FEC (KP4?), so the legacy KR4-FEC host behind the QSFP28 requires FEC translation]
❑ The simplest approach, if feasible, is to define the new 100 GbE PMDs based on the KR4 FEC
  [Diagram: legacy 100 GbE host (KR4 FEC, CAUI-4, QSFP28) and next-gen 100 GbE host (KR4 FEC, CAUI-2, ½ of a QSFP56) interoperate with no FEC translation]
Observation: 50/200 GbE Are the Enablers, While 100 GbE Is a Nice Addition

❑ 200 GbE PMDs (application: next-generation cloud data centers)
  – 200GBASE-LR4 based on CWDM4 enables a next-generation uncooled low-cost PMD
    • Does 200GBASE-FR4 offer a significantly lower-cost solution to justify defining a separate PMD?
  – 200GBASE-DR4 offers 200 GbE as well as 50/100 GbE breakout
  – 200GBASE-SR4 offers 200 GbE as well as 50/100 GbE breakout
  – 200GBASE-KR4 with 30+ dB loss support is required to meet a 1 m backplane
    • Backplane loss will determine the exact cable reach of 3 to 5 m
❑ 100 GbE PMDs (application: doubling radix in the cloud; could be a better match for next-generation 50G ASIC I/O)
  – 100GBASE-LR2 to enable a next-generation uncooled low-cost PMD
    • Does 100GBASE-FR2 offer a significantly lower-cost solution to justify defining a separate PMD?
  – 100GBASE-DR2: reuse ½ of the 200GBASE-DR4
  – 100GBASE-SR2 options: reuse ½ of the 200GBASE-SR4, or define a dual-λ duplex PMD
  – Too early to define serial 100 Gb/s; no need to define 2-lane Cu KR2/CR2
❑ 50 GbE PMDs (next-generation servers and next-generation Enterprise/campus)
  – 50GBASE-LR is required for campus and access applications
  – Do we need to define both 50GBASE-FR and 50GBASE-DR?
  – 50GBASE-SR
  – 50GBASE-KR with 30+ dB loss support is required to meet a 1 m backplane
    • Backplane loss should determine the exact cable reach of 3 to 5 m.
50 Gb/s/lane Interconnect Space

❑ OIF has been defining USR, XSR, VSR, MR, and LR reaches
  – OIF-56G-LR is a good starting point, but its 27.5 dB budget does not support a practical 1 m backplane implementation!
| Application | Standard | Modulation | Reach | Coupling | Loss |
|---|---|---|---|---|---|
| Chip-to-OE (MCM) | OIF-56G-USR | NRZ | <1 cm | DC | 2 dB @ 28 GHz |
| Chip-to-nearby OE (no connector) | OIF-56G-XSR | NRZ/PAM4 | <5 cm | DC | ? dB @ 28 GHz, ? dB @ 14 GHz |
| Chip-to-module (one connector) | OIF-56G-VSR | NRZ/PAM4 | <10 cm | AC | 18 dB @ 28 GHz, 10 dB @ 14 GHz |
| Chip-to-module (one connector) | IEEE CDAUI-8 | PAM4 | <10 cm | AC | ? |
| Chip-to-chip (one connector) | OIF-56G-MR | NRZ/PAM4 | <50 cm | AC | 35.8 dB @ 28 GHz, 20 dB @ 14 GHz |
| Chip-to-chip (one connector) | IEEE CDAUI-8 | PAM4 | <50 cm | AC | ? |
| Backplane (two connectors) | OIF-56G-LR | PAM4 | <100 cm | AC | 27.5 dB @ 14 GHz |
| Backplane (two connectors) | IEEE | PAM4 | 100 cm | AC | ? |

(?: value illegible in the source)
TE Whisper 40" Backplane: "The Gold Standard"
See: http://www.ieee802.org/3/bj/public/jul13/tracy_3bj_01_0713.pdf
Response of the 40" TE Whisper Backplane with Megtron 6

– 30" backplane: Megtron 6 HVLP with 6 mil traces
  • For dense applications, a more practical trace width would be 4.5-5 mils
– Daughtercards: 5" each, Megtron 6 VLP with 6 mil traces
– The loss is ~30 dB at 12.87 GHz
– Actual implementations may need to use narrower traces, e.g. 4-5 mils, increasing the loss further
– With the backplane not shrinking, 30-32 dB of loss support is required for practical linecards.
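As a sanity check, the quoted geometry and loss imply an average loss density (an illustrative back-of-envelope calculation, not a figure from the measurement):

```python
# Average loss density implied above: a 40" channel (30" backplane plus
# two 5" daughtercards) with ~30 dB total loss at 12.87 GHz.
backplane_in = 30
daughtercard_in = 5
total_in = backplane_in + 2 * daughtercard_in  # 40 inches
loss_db = 30.0

per_inch = loss_db / total_in
print(per_inch)  # 0.75 dB/inch
# A full 1 m (~39.4") channel at this density lands near the same budget:
print(f"{per_inch * 39.4:.1f} dB")
```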
25G/50G Channel Summary Results for the TE Whisper 1 m Backplane
| Test case | Channel IL (dB) | Channel+PKG IL (dB) | ILD | ICN (mV) | PSXT (mV) | COM (dB) |
|---|---|---|---|---|---|---|
| 25G NRZ, IEEE 12 mm package | 28.4 | 30.4 | 0.37 | 1.60 | 4.0 | 5.5 |
| 25G NRZ, IEEE 30 mm package | 28.4 | 32.5 | 0.37 | 1.63 | 3.3 | 4.8 |
| 25G PAM4, IEEE 12 mm package | 16.4 | 17.1 | 0.05 | 0.98 | 2.0 | 5.7 |
| 25G PAM4, IEEE 30 mm package | 16.4 | 18.1 | 0.05 | 0.98 | 1.8 | 5.7 |
| 50G PAM4, IEEE 12 mm package | 29.7 | 34.7 | 0.41 | 1.65 | — | 3.1 |
| 50G PAM4, IEEE 30 mm package | 29.7 | 36.7 | 0.41 | 1.64 | — | 2.66 |
❑ Closing the link budget on a 30 dB channel with 2 dB of COM margin is not trivial
  – A loss reduction is also not an option, given that the TE backplane trace width is rather wide at 6 mils, where a typical linecard trace would be 4.5-5 mils!
More Advanced PCB Material Only Modestly Improves the Backplane Loss

❑ Even moving from Megtron 6 (DF ~0.005) to Tachyon (DF ~0.0021), the loss only improves by ~20%
  – With DF ≤ 0.005, the loss is dominated by conductor size and roughness.
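Under the assumption that dielectric loss scales linearly with DF while conductor loss stays fixed, the ~20% figure implies how the total loss splits (a back-of-envelope sketch, not measured data):

```python
# If swapping Megtron 6 (DF ~0.005) for Tachyon (DF ~0.0021) improves
# total loss by only ~20%, and dielectric loss scales with DF, then the
# dielectric share of the original loss was:
df_old, df_new = 0.005, 0.0021
improvement = 0.20

diel_share = improvement / (1 - df_new / df_old)
print(f"{diel_share:.0%}")  # ~34% -> conductor loss dominates
```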
Source: Lee Ritchey, DesignCon 2015
Summary

❑ In 802.3 today we need to address three markets:
  – Router/OTN: bleeding-edge technology and speed
  – Cloud: driven by forklift upgrades as a result of switch BW doubling every ~2.5 years
  – Enterprise: leveraging the last generation of cloud technology
❑ 50/200 GbE offer the optimum solution set for the next-generation cloud, with 50 GbE for servers and 200 GbE for fabrics
  – In current data center build-outs, 50 GbE (25G MSA) is deployed to double radix and fabric capacity
    • In next-generation data centers, high-density 100 GbE will likely be deployed to build ultra-scale fabrics
❑ Next-gen 100 GbE PMDs can be based on the 200 GbE PCS/FEC, or can be defined to be backward compatible using the Clause 82 PCS and KR4 FEC
  – The advantage of a common FEC for 100 GbE and 200 GbE is identical performance for a PMD operating in full-rate or breakout mode
  – Considering the investment made in current 100 GbE PMDs, backward compatibility should be an important consideration
❑ To enable next-generation 6.4 Tb linecards, backplanes based on improved FR4 material must operate at 50 Gb/s/lane
  – A minimum loss budget of 30 dB is required for construction of a 1 m conventional backplane
❑ 802.3 needs to balance cloud applications driven by forklift upgrades with synergy and compatibility across the Ethernet ecosystem.