Networking for Virtual Environments – What’s New and What’s Next for vSphere Networking
Sudhanshu (Suds) Jain and Pushkar Patil
SER1729BE
#VMworld2017 #SER1729BE
Disclaimer
• This presentation may contain product features that are currently under development.
• This overview of new technology represents no commitment from VMware to deliver these features in any generally available product.
• Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind.
• Technical feasibility and market demand will affect final delivery.
• Pricing and packaging for any new technologies or features discussed or presented have not been determined.
vSphere Networking
Addressing Varied Use Cases – Transforming the World of Enterprise-Scale, HPC-like, and Highly Distributed Workloads
• Telco & HPC: ultra-low latency, packet-intensive workloads, scale-out
• Cloud: multi-tenancy, zero-trust, specialized hardware and customization
• Storage: low latency, high bandwidth, and predictable performance
• NSX: high packet rate, fast packet manipulation, hardware offloads, all-IP networks
Agenda
1 vSphere Networking 101
2 vSphere Networking – What’s New?
3 Innovations Moving Forward
vSphere Networking 101
Basic Concepts • NIOC v3.0 • L3 vMotion with multiple TCP/IP stacks • Network Monitoring and Troubleshooting • Guest Introspection • Multicast
Virtual Switch Concepts
• Each host (Host A … Host N) runs a vSphere Virtual Switch
• VMs (VM1, VM2) connect to the switch through virtual NICs (vNICs), which attach to virtual ports
• Virtual ports are grouped into PortGroups (PGs, e.g., PG-A and PG-B) that share a common configuration
• The switch reaches the physical network through physical NICs/uplinks (vmnic/dvUplink), which carry VLAN-tagged traffic
vSphere Distributed Switch
• A vSphere Distributed Switch (VDS) spans all hosts (Host A … Host N) in the data center and is managed centrally from vCenter
• VMs attach to dvPorts, which are grouped into dvPortGroups
• Physical NICs map to dvUplinks, grouped into the dvUplink PortGroup
VDS Architecture: Local Components
• A local component of the VDS is instantiated on each host (Host A, Host B, Host C)
• The dvPortGroup and dvUplink PortGroup configuration on each host is pushed from vCenter
• The VDS in vCenter is a representation of the physical data center: VMs keep their dvPortGroup membership (VM1 and VM2 on dvPG-A, VM3 on dvPG-B) wherever they run
VMkernel Interfaces
• Several services create dedicated interfaces, called VMkernel interfaces or vmknics
• Examples: Management, High Availability, vMotion, NSX VTEP, etc.
• Each service typically gets its own dvPortGroup on every host (dvPG-MGMT, dvPG-HA, dvPG-vMotion, dvPG-VTEP), alongside the VM dvPortGroups
VLAN-Backed VDS
• One dvPortGroup maps to one VLAN on the physical infrastructure (e.g., dvPG-A backed by VLAN 1, dvPG-B backed by VLAN 2)
• Coupling of virtual and physical is achieved with ToR integration: the physical infrastructure needs to be configured for each VLAN in use (here, VLAN 2)
• Requires an end-to-end Layer 2 network; frames leaving the uplinks carry the 802.1Q tag of the backing VLAN, as the sketch below illustrates
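A minimal illustration of what “VLAN-backed” means on the wire: the 4-byte 802.1Q tag carried in frames for a dvPortGroup’s backing VLAN. This is a conceptual Python sketch only; the tagging itself is performed by the VDS, not by guest code.

```python
import struct

def dot1q_tag(vlan_id: int, pcp: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID 0x8100 followed by the TCI
    (priority code point + VLAN ID)."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", 0x8100, tci)

# dvPG-B backed by VLAN 2: frames on the uplink carry this tag.
assert dot1q_tag(2).hex() == "81000002"
```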
Host Uplink Connectivity Recommendations
• Avoid single points of failure
– Connect to separate network devices
– Up to 32 uplinks are possible
– Recommend 2x10G rather than N x 1G
• When using a limited number of physical uplinks
– Don’t dedicate physical uplinks to vmknics
– Move all uplinks to the VDS and share them between infrastructure and data traffic
– Enable NIOC so different classes of service can share the uplinks
• VXLAN-hardware-capable NICs are recommended (e.g., Emulex, Intel)
• Configure PortFast and BPDU guard on the physical switch ports
– No STP runs on the VDS
– The VDS never bridges traffic between its uplinks, so it cannot loop traffic
VDS Uplink Connectivity
A host is typically connected via several uplinks, for redundancy and added bandwidth.
Redundancy
– The host can sustain the loss of an uplink
– Link failure results in degraded bandwidth (and this might have an impact on operations)
Added Bandwidth
– First, remember that 2x10Gbps uplinks are not equivalent to one 20Gbps uplink: depending on how traffic hashes across the two ToR switches, you may get the full 20Gbps (lucky) or contend for a single 10Gbps link (unlucky)
– 2x10Gbps uplinks provide between a theoretical 20Gbps down to 10Gbps
– Efficiency depends on even packet load balancing; the VDS provides several options (detailed later)
Load Balancing Options in a Traditional VLAN-Backed VDS
• Teaming options differ only in how they spread traffic across the uplinks (vmnic0, vmnic1); a toy model of the three policies follows this slide
• Explicit Failover – granularity: port group
– All traffic to/from a given port group is pinned to an uplink
• Source port hash – granularity: VM
– All traffic to/from a particular VM is pinned to an uplink
• IP Hash/LACP – granularity: flow
– A particular flow (e.g., Flow A vs. Flow B from the same VM) is pinned to an uplink
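A minimal sketch of the granularity of each teaming policy. This is illustrative Python only; the actual hashing is internal to the ESXi vmkernel and not published here, so the hash functions below are assumptions.

```python
import zlib

UPLINKS = ["vmnic0", "vmnic1"]

def explicit_failover(active: int = 0) -> str:
    # Port-group granularity: the whole port group uses the first active uplink.
    return UPLINKS[active]

def source_port_hash(virtual_port_id: int) -> str:
    # VM granularity: every packet of one vNIC/virtual port picks the same uplink.
    return UPLINKS[virtual_port_id % len(UPLINKS)]

def ip_hash(src_ip: str, dst_ip: str) -> str:
    # Flow granularity: different IP pairs can land on different uplinks.
    return UPLINKS[zlib.crc32(f"{src_ip}-{dst_ip}".encode()) % len(UPLINKS)]

# One VM talking to two peers: pinned with source-port hash, spread with IP hash.
print(source_port_hash(7))
print(ip_hash("10.0.0.5", "10.0.0.9"), ip_hash("10.0.0.5", "10.0.0.42"))
```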
vSphere Multicast
Up to vSphere 5.5, multicast support is achieved based on MAC address filtering:
• The VM registers for a certain IP multicast group
• The guest programs its vNIC to receive the corresponding L2 multicast MAC address
• The VDS only delivers frames matching those MAC addresses
Limitations:
• MAC address collisions cause inefficiency: e.g., groups 224.11.1.2 and 230.11.1.2 both map to MAC 0100.5e0b.0102, so a VM subscribed to either group receives traffic for both (see the sketch below)
• The implementation allows for 32 MAC address filters per vNIC; beyond that, multicast is delivered to uninterested VMs
• No source-specific filtering is possible
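The collision comes from the RFC 1112 IP-to-MAC mapping, which copies only the low 23 bits of the group address behind the fixed 01:00:5e prefix. A short Python check of the slide’s example:

```python
def mcast_mac(group_ip: str) -> str:
    """Map an IPv4 multicast group to its L2 MAC (RFC 1112): only the low
    23 bits of the address survive behind the 01:00:5e prefix."""
    o = [int(x) for x in group_ip.split(".")]
    return "01:00:5e:{:02x}:{:02x}:{:02x}".format(o[1] & 0x7F, o[2], o[3])

# 32 group addresses share every multicast MAC, so MAC-only filtering over-delivers:
assert mcast_mac("224.11.1.2") == mcast_mac("230.11.1.2") == "01:00:5e:0b:01:02"
```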
vSphere Multicast
vSphere 6.0 introduces:
• IGMP v1-3 snooping for IPv4 multicast
• MLD v1-2 snooping for IPv6 multicast
• Precise and efficient multicast delivery within the host* (*IGMP/MLD snooping information is not used to prune traffic between hypervisors on a VXLAN-backed logical switch)
• No collisions (the implementation does not rely on MAC addresses)
• Capable of source-specific filtering, e.g., distinguishing (S1, 224.11.1.2) from (S2, 224.11.1.2)
• Up to 256 multicast groups can be handled per port
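Snooping works because joining a group makes the guest’s IP stack emit an IGMP membership report, which the switch can observe per port. A minimal guest-side receiver using standard Python sockets (the group and port are arbitrary examples):

```python
import socket
import struct

GROUP, PORT = "230.11.1.2", 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP triggers an IGMP membership report; with snooping enabled,
# the VDS forwards the group's traffic only to ports that reported membership.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, src = sock.recvfrom(2048)  # blocks until a datagram for the group arrives
print(f"received {len(data)} bytes from {src}")
```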
Goal for NIOC
• Protect and prioritize important traffic when there is contention on the uplinks: e.g., a Gold VM (Gold dvPG) and a Bronze VM (Bronze dvPG) competing for the same uplink
• Configured under VDS/Manage/Resource Allocation (a conceptual model of the arbitration follows below)
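A conceptual sketch of reservation-plus-shares arbitration under contention. This is a simplified model to explain the knobs, not the actual ESXi scheduler algorithm; the class names and numbers are made up.

```python
def allocate(uplink_mbps: int, classes: dict) -> dict:
    """classes: name -> (reservation_mbps, shares).
    Honor reservations first, then split the remainder by shares."""
    alloc = {name: res for name, (res, _) in classes.items()}
    remainder = uplink_mbps - sum(alloc.values())
    total_shares = sum(shares for _, shares in classes.values())
    for name, (_, shares) in classes.items():
        alloc[name] += remainder * shares / total_shares
    return alloc

# A congested 10G uplink: the Gold class keeps its floor and wins the split.
print(allocate(10_000, {"gold-vm": (2_000, 100),
                        "bronze-vm": (0, 25),
                        "vmotion": (1_000, 50)}))
```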
NIOCv3: Reservation Appears (NEW)
• A minimum bandwidth can now be reserved for system traffic, under VDS/Manage/Resource Allocation/System Traffic
System Traffic Rules Are Enforced at the Uplink
• The settings under VDS/Manage/Resource Allocation/System Traffic are enforced where the traffic exits the host: at the uplink (e.g., for Gold VM1 and Gold VM2 on the Gold dvPG)
VM Configuration Is Enforced at the vNIC!
• Per-VM bandwidth settings (VM Hardware Configuration) are enforced at the vNIC rather than at the uplink
Integration with the Distributed Resource Scheduler (DRS)
NIOCv3 allows DRS to take network requirements into account:
• Newly created VMs are placed on hosts that can accommodate their bandwidth reservation
• If the network cannot meet a VM’s constraints, that VM cannot be powered on
• A VM can be moved dynamically to a different host by DRS if its bandwidth constraints are no longer met (e.g., following an uplink failure)
Example: DRS in Action with NIOCv3
• In the demo cluster, each uplink on Host1 (10.114.221.196) and Host2 (10.114.221.199) is configured to reserve 500 Mbps for VM traffic
• Two VMs (Gold VM2 and Gold VM3), each reserving 300 Mbps, are up on Host1; Gold VM1 runs on Host2
• An uplink on Host1 fails: only 500 Mbps of reserved capacity remains, but the VMs need 300 + 300 = 600 Mbps, so a reservation is no longer met
• DRS then automatically evacuates one VM (Gold VM3) to Host2, whose reserved pool can accommodate the extra 300 Mbps; the single remaining 300 Mbps reservation on Host1 fits the surviving 500 Mbps uplink
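A toy admission/evacuation check mirroring this scenario. Illustrative only; the real DRS placement logic considers far more than this.

```python
def fits(host: dict, extra_mbps: int) -> bool:
    """Does the host's reserved VM-traffic pool cover current + extra reservations?"""
    return host["reserved_used"] + extra_mbps <= host["pool_mbps"]

# After Host1 loses one of its two uplinks, its pool drops from 1000 to 500 Mbps.
hosts = {
    "Host1": {"pool_mbps": 500, "reserved_used": 600},    # violated: 600 > 500
    "Host2": {"pool_mbps": 1000, "reserved_used": 300},   # Gold VM1
}

vm_reservation = 300  # Gold VM3
if hosts["Host1"]["reserved_used"] > hosts["Host1"]["pool_mbps"]:
    target = next(name for name, st in hosts.items()
                  if name != "Host1" and fits(st, vm_reservation))
    print(f"evacuate one 300 Mbps VM to {target}")  # -> Host2
```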
What Do We Call L3 vMotion and Why Would I Want It?
• L3 vMotion: a vMotion achieved leveraging Layer 3 connectivity only; no Layer 2 is required between the ESXi hosts (Host1 and Host2), which are connected across an L3 network infrastructure
Why?
• Because I am expanding my network with hosts in a different subnet (with no L2 connectivity)
• Because I don’t want to stretch L2 in my network
vMotion over a Layer 3 Network
Two kinds of traffic are involved during vMotion:
• The guest traffic (VM1, VM2):
– There needs to be L2 adjacency between source and destination
– This is achieved by running L2 over VXLAN across the L3 network infrastructure
• The vMotion infrastructure traffic, used to sync the VM state:
– This traffic is IP and already routed, from a vmkernel interface (vmk1) on each host
So, What is the Problem?
• We want vmkernel interfaces on different networks, for security reasons
• But (most) vmkernel interfaces share the same default IP stack
• The default gateway for this stack is configured for the management traffic (e.g., default 10.0.0.1 on one host and 11.0.0.1 on the other, while the vMotion vmknics sit at 20.0.0.10 and 20.0.0.20)
• An easy solution is to have all the vMotion vmknics in the same subnet: vMotion vmknics don’t need a default gateway if all their peers are in the same L2 subnet
How About Putting vMotion vmknics in Different Subnets?
• Because the default gateway is already used by the management vmknic, we need to add more-specific static routes for the remote networks, e.g.:
– Host1 (mgmt vmk0 10.0.0.10, vMotion vmk1 20.0.0.10): default via 10.0.0.1, plus routes to the 30.0.0.x, 40.0.0.x, and 50.0.0.x vMotion networks via 20.0.0.1
– Host2 (mgmt vmk0 11.0.0.10, vMotion vmk1 30.0.0.10): default via 11.0.0.1, plus routes to the 20.0.0.x, 40.0.0.x, and 50.0.0.x vMotion networks via 30.0.0.1
• If there are many remote networks, several static routes must be entered
– On all the hosts
– Error prone
– Not supported by VMware, unless the customer goes through the RPQ process
Solution: Multiple TCP/IP Stacks
• Provide several IP stacks: this way, the vMotion vmknics can have their own default gateway (default IP stack: 10.0.0.1 / 11.0.0.1 for management; vMotion IP stack: 20.0.0.1 / 30.0.0.1), as modeled below
• Other benefits: better isolation (memory heap, ARP tables, etc.)
• VXLAN already had its own TCP/IP stack
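A conceptual model of why separate netstacks remove the static-route problem: each stack carries its own routing table and its own default gateway, so vMotion lookups never consult the management gateway. Illustrative Python only; the addresses are the slide’s examples.

```python
import ipaddress

# Per-netstack routing: specific routes (prefix -> gateway) plus a per-stack default.
STACKS = {
    "default": {"routes": {}, "default_gw": "10.0.0.1"},   # management
    "vmotion": {"routes": {}, "default_gw": "20.0.0.1"},   # vMotion vmknics
}

def next_hop(stack: str, dst: str) -> str:
    table = STACKS[stack]
    for prefix, gw in table["routes"].items():
        if ipaddress.ip_address(dst) in ipaddress.ip_network(prefix):
            return gw
    return table["default_gw"]  # fall back to this stack's own default

# vMotion to a peer vmknic in another subnet: routed via 20.0.0.1,
# with no static routes and no dependency on the management gateway.
print(next_hop("vmotion", "30.0.0.10"))
```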
Configuration
Network Operation & Visibility
Gain Valuable Insight for Efficient Operation
Integration options and use cases:
• SNMP MIBs – poll data and receive events; monitor interface and service statistics; discover network topologies
• IPFIX/NetFlow – collect network and application flows; application and network performance; capacity planning
• Port Mirroring/Traffic Copy – capture packet data; packet-level troubleshooting and root-cause analysis
• Logs and APIs – collect logs; log analysis
Host-Level Packet Capture
▪ A CLI-based, lowest-level troubleshooting tool
▪ Can operate at any stage in the packet’s life, e.g.:
▪ dvPort
▪ vmknic
▪ Uplink
▪ Extensive range of filters, such as source/destination MAC, IP, protocol, VLAN, VXLAN, ports, etc.
▪ pcap-format output for use with protocol analyzers such as Wireshark (see the post-processing sketch below)
▪ (Diagram: VM1 on Logical SW A, encapsulated into VXLAN by the VTEP vmknic on dvPG-VTEP, out the dvUplink-PG onto a generic IP fabric)
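Since the capture tool writes standard pcap, the file can be post-processed offline. A minimal sketch using the third-party scapy library; the file name vmk-capture.pcap is a made-up example.

```python
# pip install scapy
from scapy.all import rdpcap, UDP

packets = rdpcap("vmk-capture.pcap")   # hypothetical capture taken on the host
# Keep only frames encapsulated to the IANA-assigned VXLAN port.
vxlan = [p for p in packets if UDP in p and p[UDP].dport == 4789]
print(f"{len(vxlan)} VXLAN-encapsulated packets out of {len(packets)} captured")
```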
Leverage vSphere Data for Operation & Visibility
• e.g., vRealize Network Insight correlating VDS, IPFIX, vCenter, and NSX Manager data
• (Diagram: a three-tier application – Web, App, DB – spread across many VMs)
Guest Introspection – Windows 10 Support
Overview
• Guest Introspection support for varied guest OSes, leveraging an agentless approach to security
• Get all VM asset inventory and context – file, process, registry key – via Guest Introspection
• Aligned with NSX EPSec partner support
Benefits
• Real-time security without administrative and memory overhead, using the agentless approach
• Increased security for VDI use cases
• Rich context from the virtualization platform: file, user identity, process (application), network connections, registry keys, etc.
• Simplified deployment: automated deployment of the 3rd-party appliance to all selected clusters in the data center
End-point Security: Leverage NSX Manager
• Available as part of the vSphere Download Group
End-point Security: Third-party Solutions
• Available on the VMware Compatibility Guide
Agenda
1 vSphere Networking 101
2 vSphere Networking – What’s New?
3 Innovations Moving Forward
vSphere 6.5 – What’s New?
• 25/50/100G
• Performance Improvements
• Default Gateway per vmkNIC
• RDMA & pvRDMA
• SR-IOV Enhancements
• Scaling Limits
25/50/100G
From 1G and 10G toward 40G, 50G, and 100G:
• The datacenter is getting ready for the next infrastructure forklift upgrade
• Good market traction for 25G; the volume and cost curves are in its favor
• Many drivers – analytical workloads, large data-sets, all-flash storage, HPC, NFV
• Many challenges – hardware acceleration and stack optimization are needed to deliver on the promise
• OCP Project Olympus is designed with 50G standard connectivity for servers
100G Host & 100G VM
Bring your networking appetite over vSphere
❖ vSphere I/O is continuously evolving along with NIC speeds
❖ In vSphere 6.5, support for 100G bandwidths is added
❖ Unlocking new possibilities for vSphere services and apps
❖ ESXi can now achieve close to 100G with a single VMK interface
❖ A single VM (over a VMXNET3 vNIC) can achieve 100G on ESXi
vMotion on Steroids (Tech Preview: vSphere vMotion over 100G Ethernet)
Faster Host Evacuations; Reduced Maintenance Windows!
Challenges
• VM size is growing; monster VMs are a common deployment on vSphere
• vMotion can be challenging to perform for large VMs
• Customers are engineering networks with multiple vmkNICs: complex networking, additional port and cabling cost
Approach
• Accelerate vMotion using higher speeds/feeds: 50/100G Ethernet
• Performance-optimize the vSphere networking stack and kernel to leverage a single vmkNIC for higher throughput
• No more multiple Ethernet ports just for vMotion: simplified networking, lower TCO
Demo: 100G vMotion (Tech Preview, 25/50/100G)
Faster Host Evacuations; Reduced Maintenance Windows!
Upping the Stakes in the Networking Stack
Strengthening the core stack for 25/50/100G (items marked in green on the slide are in vSphere 6.5):
• Improved vmxnet3 data path
• A single thread per VM is a bottleneck
• Virtual switch optimization – improved caching mechanisms, lock enhancements
• Build data-path options to address workload needs
Demo: 100G VM (25/50/100G)
Throughput benchmark between two VMs running NetPerf (the sketch below shows the type of run demonstrated)
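For reference, a run of the kind the demo showed can be scripted. A minimal sketch assuming netperf is installed in the guest and netserver is listening on the peer VM; the peer address is a made-up example.

```python
import subprocess

# TCP bulk-transfer test against the peer's netserver for 30 seconds.
result = subprocess.run(
    ["netperf", "-H", "10.0.0.2", "-t", "TCP_STREAM", "-l", "30"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # netperf reports the achieved throughput in 10^6 bits/s
```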
GENEVE Offloads – Addressing NFV and NSX Needs
Build logical flows with confidence (data source: FMS, 2016):
• A single tunnel mechanism for all your flows (the base header is sketched below)
• Build complex networking flows for multi-service traffic
• Software-defined, on commodity Ethernet, with no need for specialized hardware – lower TCO
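For orientation, the fixed part of the GENEVE encapsulation that such offloads operate on (RFC 8926, still a draft at the time of this session). A Python sketch packing the 8-byte base header; the VNI value is an arbitrary example.

```python
import struct

GENEVE_UDP_PORT = 6081   # IANA-assigned destination port
ETH_BRIDGED = 0x6558     # protocol type: Trans-Ether bridging (inner Ethernet)

def geneve_base_header(vni: int, opt_len_words: int = 0) -> bytes:
    """Pack the 8-byte GENEVE base header: Ver=0, O/C flags clear, no options."""
    b0 = (0 << 6) | (opt_len_words & 0x3F)   # version | option length (4-byte words)
    return struct.pack("!BBH", b0, 0, ETH_BRIDGED) + vni.to_bytes(3, "big") + b"\x00"

hdr = geneve_base_header(vni=5001)
assert len(hdr) == 8
```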
Default Gateway per vmkNIC
Eliminate the need for static routes!
• The gateway setting can be overridden at the VMkernel port level (modeled below)
• No need for a separate netstack instance, no need for static routes
• Independent L3 connectivity for services using VMkernel ports
• Initial support for vMotion and provisioning services
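A conceptual model of the override order: a vmkNIC-level gateway, when set, takes precedence over the stack default for traffic sourced from that vmkNIC. Illustrative Python only; the addresses reuse the earlier examples.

```python
STACK_DEFAULT_GW = "10.0.0.1"            # the management/default-stack gateway

VMKNICS = {
    "vmk0": {"gateway": None},           # inherits the stack default
    "vmk1": {"gateway": "20.0.0.1"},     # per-vmkNIC override (vSphere 6.5)
}

def egress_gateway(vmknic: str) -> str:
    # The per-port override wins; otherwise fall back to the stack default.
    return VMKNICS[vmknic]["gateway"] or STACK_DEFAULT_GW

print(egress_gateway("vmk1"))  # 20.0.0.1 – no extra netstack, no static routes
```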
Introducing RDMA over Converged Ethernet (RoCE)
Brings true convergence over a single fabric
• RoCE provides the benefits of RDMA on existing Ethernet data center infrastructure (classical NIC vs. RDMA NIC)
Why RoCE?
• Deployment: the most widely deployed RDMA solution over Ethernet
• Link speeds: available for all Ethernet speeds, including 25/50/100G
• OS ubiquity: drivers available in Red Hat, SUSE, Microsoft Windows, and other common operating systems
• Low latency: the lowest latency for Ethernet in the industry
• Tremendous ecosystem support: IBTA standard, major NIC vendors, all major OEMs, all major OSs
Build a single Ethernet fabric to deliver all your datacenter needs!
Para-virtualized RDMA (PVRDMA)
Industry’s First Virtualized RDMA Solution (2015+)
Architecture (per guest OS): RDMA app and buffers → libvrdma/libibverbs → PVRDMA driver → PVRDMA device emulation on ESXi → ESXi RDMA stack → HCA device driver → NIC, with RoCE (RDMA over Converged Ethernet) between hosts and separate control and data paths.
• Ultra-low latency – near bare-metal RDMA performance
• Auto-detection of peer end-points and connection management
• Best of both worlds – ultra-low latency with live vMotion and DRS (future)
• Improved application performance and infrastructure efficiency == $$$
• Works without underlying RDMA hardware – a perfect tool for developer environments
SR-IOV – Addressing the Market Needs
Bring SR-IOV mainstream for networking and other use cases
• Today, configuring a VM requires specifying the exact PCI slot location (Bus:Device.Function) of the physical function (see the parsing sketch below)
• Flexible provisioning: automated assignment of physical functions, enabling automation of provisioning
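For context, the Bus:Device.Function notation referenced above. A small Python helper splitting a PCI address of the usual domain:bus:device.function form; the address string is an arbitrary example.

```python
import re

BDF_RE = re.compile(r"(?:(?P<dom>[0-9a-f]{4}):)?(?P<bus>[0-9a-f]{2}):"
                    r"(?P<dev>[0-9a-f]{2})\.(?P<fn>[0-7])", re.IGNORECASE)

def parse_bdf(addr: str):
    """Split '0000:03:00.1' into (domain, bus, device, function) integers."""
    m = BDF_RE.fullmatch(addr)
    if not m:
        raise ValueError(f"not a PCI address: {addr!r}")
    return (int(m["dom"] or "0", 16), int(m["bus"], 16),
            int(m["dev"], 16), int(m["fn"], 16))

print(parse_bdf("0000:03:00.1"))  # (0, 3, 0, 1) – function 1 of the device
```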
Enable I/O Technologies via SR-IOV (Tech Preview)
Faster route to market via generic SR-IOV certification
• The landscape of connectivity and I/O technologies keeps evolving: NICs, storage controllers, PCIe/NVMe SSDs, interconnects like IB and OPA, GPUs/GPGPUs, FPGAs, and hardware accelerations (e.g., QAT)
• We need to deliver a faster route to market so workloads can take advantage of them
• SR-IOV is one such enabling technology
• Building a program for such technologies to be enabled as generic SR-IOV-based I/O technologies
Scaling NIC Hardware Limits (Tech Preview)
Enabled through server vendor partnerships
• vSphere publishes maximum configuration limits, so only limited configurations are available today
• Extending the published limits using a server vendor certification program
• Aligned with server vendors’ GTM plans to support specific configurations on the vSphere platform
Agenda
1 vSphere Networking 101
2 vSphere Networking – What’s New?
3 Innovations Moving Forward
Key Infrastructure Challenges
• Explosive growth of data – build “capacity”
• Near real-time – need “efficient” access to that capacity
• Highly distributed – need that capacity “everywhere”
Explosive Growth of Data
Stressing your storage and network benchmarks (example shown: a Cloudera/ZooKeeper cluster; source: Intel IDF 2016)
Near Real-time
Need to act fast: what we do with data is changing, and acting fast is how businesses gain value (source: http://jtonedm.com/2012/11/21/decision-latecy-revisited/)
Highly Distributed
Build, deploy, and manage interconnected, collaborative workloads and infrastructure (big data, cloud-native apps, multi-tier apps, distributed storage such as Virtual SAN):
• Deploy multiple workloads with strong demand for inter-VM traffic
• Optimize data delivery to applications
• Policy-driven application management for network and security policy
Moore’s Law is Transforming
Hardware market shift: from traditional processors to multi-core processors, and now to convergence and consolidation – FPGA alongside the CPU, smart NICs, NVM, GPGPU, and fast IO – serving workloads like compression, crypto, remoting, containers, security, and HPC.
Why the change?
– Keep the buying power that came from consolidating
– Workload innovations demand more value from hardware
Industry impacts:
– No more CPU-focused platforms; need to invest in hardware offloads and convergence
• Requires more vertical integration
– Opportunity to expand and differentiate the SDDC offering with hardware innovation and integration
Storage Industry Under Transformation
The evolving space of compute, storage, and interconnect (source: http://www.theregister.co.uk/2016/09/05/wikibon_server_san_takeover/)
Technology trends:
• CPU densities continue to increase
• High-density flash and NVDIMM will dominate enterprise storage
• High-speed interconnects must keep up with fast storage
• Higher CPU density, faster data access, and high-speed interconnects are all changing the paradigm of IT infrastructure design
• These trends are already changing how enterprise storage solutions are designed and deployed, disrupting the complete ecosystem of the SAN as well as the DAS market
What Constitutes a Modern Network Fabric?
• True convergence, with the datacenter as a computer
• Easy to scale and manage
• Performant, with various accelerations
• Inherently secure
Smart Ethernet Fabric (Concept)
Build the data-centric infrastructure (data source: FMS, 2016): a 10G/25G/50G/100G Ethernet fabric with rNICs connecting pools of compute (ESXi hosts) to storage, GPUs, FPGAs, and other accelerators.
An Ethernet fabric provides multiple benefits:
• Ubiquity: Ethernet is everywhere
• High performance: 25/50/100G Ethernet and low latency
• High scalability: non-blocking topologies and multiple Ethernet connections
• Deterministic: QoS and advanced congestion control
• Cost/performance optimized: a single fabric for all your scale-out needs
Accelerating the Virtual Switch Using DPDK (Tech Preview)
High-performance stack:
• Increasing network speeds need better handling of resources in the stack – cache, memory, interrupts, etc.
• Today’s DPDK solutions have limited support
• Build a performant stack without sacrificing the agility and flexibility of virtualization
• Transparent to applications: make DPDK-like concepts easier to consume without the need to re-engineer your application
Application-Aware DRS and vMotion (Tech Preview)
Build network intelligence at every layer, across on-premise and stretched or public cloud infrastructure:
• DRS and NetFlow automatically identify and co-locate cooperating VMs (e.g., Service A1/A2, B1/B2, C1/C2)
• Workloads are migrated as a single unit to reduce cross-cloud traffic
Secure Fabric (Concept)
Zero-trust is key to success, spanning compute, network, and data security with simplified security management:
• Compute security: Secure Boot, vTPM, forensics, App Defense, remote host attestation
• Network security: data-in-flight encryption, micro-segmentation
• Data security: data-at-rest encryption (VM Encryption, vSAN encryption), vMotion encryption
vSphere Host Fabric
Build your infrastructure with confidence: the vSphere host* combines networking services (L2 switch, L3 router, firewall, load balancer, VPN to the Internet) with storage services (local storage/VMFS, shared storage/VMFS/NFS, vVol datastores, vSAN datastores).
* A logical representation; it may vary for different services.
• Software defined
• Secure and converged
• Best of breed
• Performant at scale
vSphere Networking: Summary and Takeaways
• The foundation for deploying software-defined infrastructure
• Best-of-breed innovations and architecture to deliver value
• Rich integration and optimization to address varied needs