Silicon Photonics and the Future of Optical Connectivity in the Data Center
ECOC MARKET FOCUS
September 19, 2016
Robert Blum, Director, Strategic Marketing and Business Development
Data Center Traffic is doubling every 12 months
Source: Estimates based on Facebook and Google publications
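The doubling claim above compounds quickly; a minimal sketch of the implied growth, taking the ~5 ZB of worldwide annual data center traffic quoted later in the deck as an illustrative starting point (the projection horizon is my assumption, not the slide's):

```python
# Sketch: project data center traffic assuming it doubles every 12 months,
# per the estimate on this slide. The 5 ZB base comes from the later
# "Worldwide Annual Data Center Traffic" slide; the horizon is illustrative.
def projected_traffic_zb(base_zb: float, years: int) -> float:
    """Traffic after `years` years, doubling once per year."""
    return base_zb * 2 ** years

print(projected_traffic_zb(5.0, 4))  # 5 ZB doubling yearly for 4 years -> 80.0 ZB
```

At this rate, a four-year planning window implies a 16x capacity build-out, which is the pressure behind the interconnect roadmap in the rest of the deck.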
Waves of Compute
1960s: Mainframes (Terminals)
1990s: Client + Server (Personal Computers)
2000s: Web (Mobile)
2010+: Cloud (Things & Machines)
Data is the next disrupter
• Smartphone: 30 MB / day
• PC: 90 MB / day
• Connected Car: 4 TB / day
• Connected Plane: 40 TB / day
• Connected Factory: 1 PB / day
Machine-to-Machine (M2M) connections will be 47% of total devices & connections
Source: http://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/Cloud_Index_White_Paper.html and https://datafloq.com/read/self-driving-cars-create-2-petabytes-data-annually/172
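To make the scale of these per-device figures concrete, a small sketch normalizing them to a common unit; the daily volumes are from the slide, while the decimal (SI) unit conversions are my assumption:

```python
# Sketch: the daily data volumes quoted on this slide, normalized to
# gigabytes using decimal (SI) prefixes (an assumption on my part).
DAILY_DATA_GB = {
    "smartphone": 30 / 1000,           # 30 MB
    "pc": 90 / 1000,                   # 90 MB
    "connected car": 4 * 1000,         # 4 TB
    "connected plane": 40 * 1000,      # 40 TB
    "connected factory": 1 * 1000**2,  # 1 PB
}

# One connected factory generates 25x the daily data of a connected plane...
ratio = DAILY_DATA_GB["connected factory"] / DAILY_DATA_GB["connected plane"]
print(ratio)  # -> 25.0

# ...and more than 30 million times that of a smartphone.
print(DAILY_DATA_GB["connected factory"] / DAILY_DATA_GB["smartphone"])
```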
Emergence of Hyper-Scale Data Centers
Data center networks are struggling to keep up with exponential data growth
• Optical connectivity as % of networking spend: 45%+
• Facebook data center network design: >200K servers, 10K+ switches, >$1B
• Worldwide annual data center traffic: 5 ZB (~5X global internet traffic)
[Photos: Facebook Data Center, Fort Worth, Texas; Google Data Center]
Source: Estimates from Facebook, Google, Cisco publications, and Intel network model; 2015 Intel market model based in part on Dell'Oro, Crehan and Lightcounting 2015 reports
Other names and brands may be claimed as the property of others
Data Center 100G+ TAM
[Chart: data center connectivity TAM by reach (in rack, across row, between DC, across DC); total spend on 100G and 400G interconnects grows from $1.2B in 2016 to $2.1B in 2018 and $5.1B in 2020]
• Connected world, machine-to-machine traffic, data analytics & machine learning driving exponential data growth
• Continual innovation needed to support data growth
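The TAM figures imply a steep compound growth rate. A quick sketch of the arithmetic, assuming the chart values map to $1.2B in 2016 rising to $5.1B in 2020 (the extraction shows the figures and years without explicit pairing):

```python
# Sketch: implied compound annual growth rate of the 100G+ data center
# interconnect TAM, assuming $1.2B (2016) grows to $5.1B (2020).
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

print(round(cagr(1.2, 5.1, 4), 3))  # -> 0.436, i.e. roughly 44% per year
```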
The Data Center Network

Link (reach)                          Deployed today      Upgrading now
Inter data center (10km-metro)        10G/40G/100G DWDM   100/200/400G DWDM
Spine-Core (500m-2km)                 40G SMF             100G SMF
Leaf-Spine (300m-2km)                 40G MMF or SMF
TOR-Leaf (100m-500m)                  40G MMF or SMF
Server-Top of Rack (TOR) (1m-30m)     10G Cu or AOC       25G Cu or AOC
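The tier/media pairings above can be treated as a reach-based lookup. A minimal sketch; the tier boundaries and media come from the slide, while the function itself is illustrative, not a real network-planning tool:

```python
# Sketch: the deployed-today vs. upgrading-now table as a lookup, mapping a
# link reach in meters to its interconnect tier. Entries follow the slide;
# where the extraction shows no "upgrading now" value, None is used.
TIERS = [
    # (max reach in m, tier, deployed today, upgrading now)
    (30,   "Server-TOR", "10G Cu or AOC",  "25G Cu or AOC"),
    (500,  "TOR-Leaf",   "40G MMF or SMF", None),
    (2000, "Leaf-Spine", "40G MMF or SMF", None),
    (2000, "Spine-Core", "40G SMF",        "100G SMF"),
]

def media_for_reach(meters: float):
    """Return the first tier whose reach covers the requested distance."""
    for max_reach, tier, deployed, upgrading in TIERS:
        if meters <= max_reach:
            return tier, deployed, upgrading
    # Anything beyond 2 km falls to the inter-data-center DWDM links.
    return "Inter-DC", "10G/40G/100G DWDM", "100/200/400G DWDM"

print(media_for_reach(10))     # in-rack copper/AOC range
print(media_for_reach(15000))  # beyond 2 km -> inter-data-center DWDM
```

Note that Leaf-Spine and Spine-Core overlap in reach; a first-match rule is used here purely for illustration.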
Network Architecture Today
[Diagram: Core Network / Inter Data Center → Super Spine/Core → Spine → Leaf → ToR]
Worldwide Server Market - Network Metrics (Dell'Oro Group, July 2016)
[Charts: Worldwide Enterprise Server Market Segment and Worldwide Cloud Server Market Segment; share of server connections by speed (1GbE, 10GbE, 25GbE, 40GbE, 50GbE, 100GbE), 2008-2020]
Source: Dell'Oro worldwide server market, July 2016
Other names and brands may be claimed as the property of others
Disaggregation of Resources
Source: M. Kumar, M. Nachimuthu, Intel, Next Generation Rack Scale Design, IDF 2016
Microsoft View: Intra-DC Generation 2
[Diagram: servers attach to the TOR over <3m DAC at 50/100Gb; the TOR uplinks X by 100Gb over <20m connections; Tier 1, Tier 2, and Tier 3 switches interconnect X by 400Gb via optical fan-out, with <20m AOC and <2,000m 400Gb links between tiers; inter-DC links run at 100/200Gb for <80km connections and 100/150/200Gb for >80km connections to the intercity metro, both over 400G FlexE]
• 400Gb ecosystem
• Primarily 50Gb-attached servers with initial 100Gb deployments
• Optics to the server likely with 100Gb deployments
• All links above the TOR are 400Gb
• 500m DR optics used for all links
• FlexEthernet allows 400Gb inter-DC links
• 400Gb ecosystem looks like a good time to transition to on-board optics
Tom Issenhuth, Microsoft. Panel discussion at IDF16 CLDPN01, Silicon Photonics and the Future of Optical Connectivity in the Data Center. http://myeventagenda.com/sessions/0B9F4191-1C29-408A-8B61-65D7520025A8/14/5#sessionID=1486
Other names and brands may be claimed as the property of others
Microsoft View: Optics to the Server
• Today copper is only used intra-rack, so a TOR is required for every server rack
• TORs keep getting larger, but the number of servers per rack keeps shrinking due to increased power consumption, so the architecture is getting increasingly inefficient
• As server NICs keep increasing in speed (25Gb to 50Gb to 100Gb to ?), at some point copper will not be suitable for intra-rack connections, so optics will be required
• It is not a question of if optics to the server will be required, but when
• The use of optics would dramatically increase, but at a much lower $/Gb
[Diagram: today, servers connect to ToRs over <3m DAC cables, with <20m links from the ToR up to Tier 1; in the future, servers connect over <20m optical links directly to Tier 1 switches via optical fan-out]
Tom Issenhuth, Microsoft. Panel discussion at IDF16 CLDPN01, Silicon Photonics and the Future of Optical Connectivity in the Data Center. http://myeventagenda.com/sessions/0B9F4191-1C29-408A-8B61-65D7520025A8/14/5#sessionID=1486
Other names and brands may be claimed as the property of others
Beyond 100G: Next-Generation Interfaces for 400G+

QSFP-DD: New module similar to the current QSFP, but with an additional row of contacts providing an eight-lane electrical interface. http://www.qsfp-dd.com/

COBO (Consortium for On-Board Optics): Developing specifications for interchangeable and interoperable optical modules that can be mounted onto printed circuit boards. http://cobo.azurewebsites.net/

[Images: QSFP-DD module with Intel optical engine; COBO founders/promoters and founding members logos]
QSFP-DD image from: http://www.qsfp-dd.com/
Other names and brands may be claimed as the property of others
Ethernet Switch History and Package Limitations
• I/O bandwidth is becoming a network constraint
• Continued bandwidth growth requires increasing the number of physical I/O ports on the switch to support radix and bandwidth requirements
• This drives the need for embedded and integrated optics
Source: Rob Stone, http://www.ethernetalliance.org/wp-content/uploads/2014/10/Rate-Debate-TEF-v6.pdf; OFC 2016 Executive Forum Panel 4; http://ieee802.org/3/cfi/1115_1/CFI_01_1115.pdf
Other names and brands may be claimed as the property of others
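The port-count pressure behind these bullets is simple arithmetic. A sketch using the 12.8Tb/s and 50G I/O figures quoted later in the deck and the eight-lane electrical interface of QSFP-DD; the helper function is illustrative:

```python
# Sketch of the radix arithmetic behind the I/O-constraint bullets: a
# 12.8 Tb/s switch ASIC with 50G electrical lanes needs 256 lanes, which
# an 8-lane 400G module such as QSFP-DD packs into 32 front-panel ports.
def front_panel_ports(switch_tbps: float, lane_gbps: int, lanes_per_port: int) -> int:
    lanes = int(switch_tbps * 1000 / lane_gbps)  # total electrical lanes
    return lanes // lanes_per_port

print(front_panel_ports(12.8, 50, 8))  # -> 32 ports of 400G
```

Every further doubling of switch bandwidth must come from faster lanes, more front-panel ports, or denser modules, which is why embedded and integrated optics follow.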
Future of Optical Connectivity in Data Centers: Expected Form Factor Evolution
• 2016: 100G MSA pluggable, enabling the transition to 100G switch-to-switch connectivity
• 2018/2019: 400G MSA pluggable, supporting next-generation 12.8Tb/s switches with 50G I/O; technology development for miniaturization, high-temperature operation, and low-power processes
• 2020+: 400/800G embedded; embedded optics for density, signal integrity, and power; high-density integration addressing electrical I/O constraints for the highest density and lowest system power
QSFP-DD image from: http://www.qsfp-dd.com/
Other names and brands may be claimed as the property of others
Requirements for Integrated Optical I/O
• Size: 100x bandwidth density (Gbps/mm²)
• Power: <10 pJ/bit power per bit
• Temperature: high-temperature laser operation, >100C
• Cost: <$1/Gbps cost per bit
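To see what the power and cost targets mean in practice, a short sketch for a hypothetical 400G embedded optical engine; the 4 W power draw and $350 price are illustrative assumptions of mine, not figures from the deck:

```python
# Sketch: translating the slide's pJ/bit and $/Gbps targets for a
# hypothetical 400G optical engine. The 4 W and $350 inputs are
# illustrative assumptions, not numbers from the presentation.
def pj_per_bit(power_w: float, rate_gbps: float) -> float:
    # 1 W at 1 Gb/s is 1000 pJ/bit, so scale by the line rate.
    return power_w / rate_gbps * 1000

def dollars_per_gbps(price_usd: float, rate_gbps: float) -> float:
    return price_usd / rate_gbps

print(pj_per_bit(4.0, 400))        # -> 10.0, right at the <10 pJ/bit target
print(dollars_per_gbps(350, 400))  # -> 0.875, under the <$1/Gbps target
```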
Intel® Silicon Photonics
• New high-speed I/O connectivity based on photonics
• Intel wafer-scale manufacturing with hybrid laser
• Platform for future high-density switch integration
Silicon integration. Silicon scale. Silicon manufacturing.
Intel® Silicon Photonics
• Wafer-scale processing; lithographically defined laser in a CMOS process
• High-speed modulators and receivers for 25G and 50G
• Scalable to multiple frequencies for high bandwidth and high density
High-Temperature Reliability and Performance
• 5,000 hours HTOL
• 70C performance (>35% MM)
See product data sheet for complete performance specifications
High-Volume Manufacturing
[Charts: statistical process control across lot IDs (8 wafers each); design-of-experiments results by DOE ID# versus design target]
Enabling Bandwidth Growth with Intel® Silicon Photonics
Future requirements for optical interconnects:
• Bandwidth density (Gbps/mm²): >30, a 100x improvement
• Power per bit (pJ/bit): <10
• Cost per bit ($/Gbps): <$1
• True wafer-scale integration and manufacturing with hybrid laser
• Most flexible optical integration platform, including WDM
• Shipping to customers now for the 100G data center ramp
• Supporting open-standard 100G optics and beyond
Driving industry form factor evolution:
• Today: pluggable 100G
• Tomorrow: pluggable/embedded 400G/800G
• Next: integrated optics to switch & server
Other names and brands may be claimed as the property of others
Thank you!