
4th SDN Interest Group Seminar-Session 2-2(130313)




This is the presentation material from the 4th SDN Interest Group Seminar held on March 13, 2013.
Transcript
Page 1: 4th SDN Interest Group Seminar-Session 2-2(130313)

2013 OpenFlow Korea All Rights Reserved

SDN for Cloud Datacenter

March, 2013

넷맨 - 김창민

Technical Manager @ OpenFlow Korea | Worldwide 9th Quintuple CCIE #12303 | [email protected]

Page 2: 4th SDN Interest Group Seminar-Session 2-2(130313)


Agenda  

1. Overview of SDN and Cloud Datacenter

2. Considerations for Provisioning and Automation Functions for OpenFlow-Enabled Switches

3. Moore's Law and Networking

4. Low-Latency and Non-Blocking 2-Tier Leaf-Spine Design for an OpenFlow-Enabled Cloud Datacenter

Page 3: 4th SDN Interest Group Seminar-Session 2-2(130313)


What is SDN

•  In the SDN architecture, the control and data planes are decoupled, network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the applications. - Open Networking Foundation white paper

•  Let's call whatever we can ship today SDN. - Vendor X

•  SDN is the magic buzzword that will bring us VC funding. - Startup Y

Page 4: 4th SDN Interest Group Seminar-Session 2-2(130313)


SDN Use Cases - Let's focus on Intra Cloud Datacenter only

[Diagram: seven SDN use-case panels, each driven by an application on top of an SDN controller]
1. DC Network Virtualization: DC Virtualization App & SDN Controller over the DC network fabric (VMs and physical hosts)
2. Application Delivery: ADP App & SDN Controller with an ADC, spanning the WAN and data center for Customers 1-3
3. SDN Cloud Gateway: SDN Orchestration & SDN Controller between the data center and the L2/L3VPN WAN
4. Packet-Optical Integration: Packet-Optical Integration App & SDN Controller tying DC1/DC2 SDN and cloud orchestration to the MPLS/IP and OTN optical layers
5. Network Analytics: Network Analytics App & SDN Controller tapping the production 10/100G WAN into an analytics network (Tool 1, Tool 2, Tool 3)
6. Services Creation & Insertion: Services Insertion App & SDN Controller chaining ADC, FW, Cache, and AAA
7. WAN Network Virtualization: WAN Virtualization App & SDN Controller connecting DC 1 and DC 2 over a 10/100G WAN for Customer 1 and Customer 2
?

Page 5: 4th SDN Interest Group Seminar-Session 2-2(130313)


Where I'm focusing …

Page 6: 4th SDN Interest Group Seminar-Session 2-2(130313)


Real  Datacenters  

•  Physical plant
•  Power
•  Cooling
•  Isolation
•  Lots of servers
•  Lots of storage
•  Lots of cables, networks
•  Lots of complexity

Page 7: 4th SDN Interest Group Seminar-Session 2-2(130313)


Definition of Cloud Computing by NIST

National Institute of Standards and Technology, U.S. Department of Commerce

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Characteristics
•  On-demand self-service
•  Broad network access
•  Resource pooling
•  Rapid elasticity
•  Measured service

Service models
•  Infrastructure as a Service (IaaS)
•  Platform as a Service (PaaS)
•  Software as a Service (SaaS)

Deployment models
•  Private cloud
•  Public cloud
•  Hybrid cloud
•  Community cloud

csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf

Page 8: 4th SDN Interest Group Seminar-Session 2-2(130313)


Why Cloud Computing?

•  Cloud computing is the future
   -  Regardless of personal opinions and foggy definitions

•  Cloud computing requires large-scale elastic data centers
   -  Hard to build them using the old tricks

•  Modern applications generate lots of east-west (inter-server) traffic
   -  Existing DC designs are focused on north-south (server-to-user) traffic

Page 9: 4th SDN Interest Group Seminar-Session 2-2(130313)


All about SDN for Cloud Datacenter

•  Network Programmability (see the sketch after this list)
   -  API interaction with network elements
   -  Local and remote programmability via structured APIs
   -  Open operating systems

•  Separation of Control Plane and Forwarding Plane
   -  Infrastructure-agnostic, with the broadest array of controller support and freedom of choice in architecture and protocols
   -  The forwarding plane can be software or hardware

•  Strong integration with leading Cloud Management (Orchestration) Platforms
   -  OpenStack, CloudStack, vCloud Director, etc.
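To make the "API interaction with network elements" bullet concrete, here is a minimal sketch of an orchestration script pushing a forwarding rule through a controller's northbound REST API. The controller address, the /flows endpoint, and the JSON field names are illustrative assumptions, not any specific controller's API.

```python
# Hypothetical sketch: programming a forwarding rule through an SDN
# controller's northbound REST API. URL and payload schema are placeholders.
import json
import urllib.request

CONTROLLER = "http://sdn-controller.example.net:8080"   # placeholder address

def push_flow(dpid, in_port, out_port, priority=100):
    """Ask the controller to install a simple port-forwarding flow."""
    flow = {
        "switch": dpid,                       # datapath ID of the target switch
        "priority": priority,
        "match": {"in_port": in_port},
        "actions": [{"type": "output", "port": out_port}],
    }
    req = urllib.request.Request(
        CONTROLLER + "/flows",                # hypothetical endpoint
        data=json.dumps(flow).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(push_flow("00:00:00:00:00:00:00:01", in_port=1, out_port=2))
```

A cloud management platform would drive the same kind of calls from its networking plugin rather than from a standalone script.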

Page 10: 4th SDN Interest Group Seminar-Session 2-2(130313)


Software-Defined Network Architecture

Open Networking Foundation white paper

Page 11: 4th SDN Interest Group Seminar-Session 2-2(130313)


SDN Framework for Cloud Datacenter

“SDN is a software-to-infrastructure interface that allows applications to drive infrastructure actions.”

Page 12: 4th SDN Interest Group Seminar-Session 2-2(130313)


OpenFlow Specifications

•  OpenFlow 1.0
   -  Released at the end of 2009, targeted at "campus research"
   -  The first stable and most widely deployed version at the moment
   -  If a packet matches an entry in the flow table => perform the actions

•  OpenFlow 1.1
   -  Released in March 2011, targeted at "WAN research"
   -  If a packet matches an entry in the flow table => look at the instructions
   -  Instructions = apply actions, OR set actions in the action set, OR change pipeline processing (see the toy model below)
   -  Allows multiple flow tables

•  OpenFlow 1.2
   -  Approved in December 2011, described as an "extensible protocol"
   -  Support for IPv6 and support for multiple controllers

•  OpenFlow 1.3
   -  Adds a "meter table" in support of QoS
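As a purely illustrative aid (it does not speak the OpenFlow wire protocol), the toy Python model below contrasts the two pipelines described above: in 1.0 a match maps directly to a list of actions in a single table, while from 1.1 onward a match yields instructions that can apply actions or send the packet on to another table.

```python
# Toy model of the OpenFlow 1.0 vs 1.1+ pipelines described on this slide.
# Packets are plain dicts; this is a teaching aid, not the wire protocol.

# OpenFlow 1.0 style: single table, match -> list of actions
table_v10 = [
    ({"in_port": 1}, ["output:2"]),
    ({"in_port": 2}, ["output:1"]),
]

def process_v10(pkt):
    for match, actions in table_v10:
        if all(pkt.get(k) == v for k, v in match.items()):
            return actions
    return ["drop"]                     # simplified table-miss behaviour

# OpenFlow 1.1+ style: multiple tables, match -> instructions.
# An instruction can apply actions immediately or jump to another table.
tables_v11 = {
    0: [({"vlan": 10}, [("apply_actions", ["set_queue:1"]), ("goto_table", 1)])],
    1: [({"in_port": 1}, [("apply_actions", ["output:2"])])],
}

def process_v11(pkt, table_id=0):
    actions = []
    for match, instructions in tables_v11.get(table_id, []):
        if all(pkt.get(k) == v for k, v in match.items()):
            for kind, arg in instructions:
                if kind == "apply_actions":
                    actions += arg
                elif kind == "goto_table":
                    actions += process_v11(pkt, arg)   # continue the pipeline
            return actions
    return ["drop"]

print(process_v10({"in_port": 1}))               # ['output:2']
print(process_v11({"vlan": 10, "in_port": 1}))   # ['set_queue:1', 'output:2']
```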

Page 13: 4th SDN Interest Group Seminar-Session 2-2(130313)


(Almost) Shipping OpenFlow Products

Switches - Commercial
•  Arista 7000 Family
•  Cisco (roadmapped)
•  Brocade MLX/NetIron products
•  Extreme BlackDiamond X8
•  HP ProCurve
•  IBM BNT G8264
•  NEC ProgrammableFlow switches
•  Juniper MX-Series (SDK)
•  Smaller vendors

Controllers - Commercial
•  Big Switch Networks (EFT?)
•  NEC ProgrammableFlow Controller
•  Nicira NVP

Switches - Open Source
•  Open vSwitch (Xen, KVM)
•  NetFPGA reference implementation
•  OpenWRT
•  Mininet (emulation)

Controllers - Open Source
•  NOX (C++/Python)
•  Beacon (Java)
•  Floodlight (Java)
•  Maestro (Java)
•  RouteFlow (NOX, Quagga, ...)

More at http://www.sdncentral.com/shipping-sdn-products/ and http://www.sdncentral.com/comprehensive-list-of-open-source-sdn-projects

Page 14: 4th SDN Interest Group Seminar-Session 2-2(130313)


Current SDN Offerings in Silos

Page 15: 4th SDN Interest Group Seminar-Session 2-2(130313)


SDN Strategy for Cloud Datacenter

Page 16: 4th SDN Interest Group Seminar-Session 2-2(130313)


OpenFlow Switch Architecture for Cloud Datacenter

•  In a pure "OpenFlow" device, the OS is minimal: only chip firmware and simple device management functions are included.

•  Complexity moves to the controller/SDN layer.

•  But a device could also maintain its own protocols AND have OpenFlow support.

•  An x86 64-bit Linux/Unix platform can be used as the OpenFlow switch.

•  Support for adding our own agents to the network OS for the cloud datacenter.

[Basic OpenFlow-enabled switch]

[OpenFlow-enabled switch for cloud datacenter]

Page 17: 4th SDN Interest Group Seminar-Session 2-2(130313)


Why the Network Operating System Needs Intelligence for the Cloud Datacenter

•  The device operating system handles all device operations such as boot, flash, memory management, TCAM, the OpenFlow protocol handler, the SNMP agent, and so on.

•  Consider a device with no OSPF, multicast, BGP, STP, MAC address tables, VLAN tagging, LDP… or a device without code bloat, with only what you need.

•  Smaller code = fewer bugs, fewer resources, lower cost

•  A cloud datacenter needs some more intelligent functions in the device operating system for provisioning and automation purposes

•  A pure Linux/Unix platform for this purpose, not a modified one
•  All Linux/Unix distributions can be …
•  Running our own code on the OpenFlow-enabled switch (a minimal agent sketch follows below)
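A minimal sketch of the kind of "our own agent" mentioned above, assuming a stock Linux-based switch OS: a tiny HTTP endpoint that brings interfaces up or down with the standard `ip` tool. The port number, URL scheme, and the choice of plain HTTP are illustrative assumptions, not any vendor's agent.

```python
# Hypothetical provisioning agent running on a Linux-based OpenFlow switch.
# It exposes a single endpoint, e.g.  POST /interface/eth1/up
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProvisioningHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expected path: /interface/<name>/<up|down>
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "interface" and parts[2] in ("up", "down"):
            iface, state = parts[1], parts[2]
            # Delegate to the standard Linux tooling available on the switch OS.
            result = subprocess.run(["ip", "link", "set", iface, state],
                                    capture_output=True)
            code = 200 if result.returncode == 0 else 500
        else:
            code = 400
        self.send_response(code)
        self.end_headers()

if __name__ == "__main__":
    # An orchestration or automation system would call this agent remotely.
    HTTPServer(("0.0.0.0", 8181), ProvisioningHandler).serve_forever()
```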

 

Page 18: 4th SDN Interest Group Seminar-Session 2-2(130313)


OpenFlow Is Not the Only SDN Tool

Vendor APIs
•  Cisco: Open Networking Environment (ONE), EEM (Tcl), Python scripting
•  Juniper: JUNOS XML API and SLAX (human-readable XSLT)
•  Arista: XMPP, Linux scripting (including Python and Perl)
•  Dell Force10: Open Automation Framework (Perl, Python, NetBSD shell)
•  F5: iRules (Tcl-based scripts)

Page 19: 4th SDN Interest Group Seminar-Session 2-2(130313)


OpenFlow Config

•  OpenFlow Configuration Protocol (OF-Config)
•  Configuration of OpenFlow switch operation (currently v1.1)
•  Main purpose is remote management (cf. OpenFlow itself is for control)
•  RFC 6241 NETCONF is the mandatory protocol
•  The data model is based on XML & YANG (see the NETCONF sketch below)
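For illustration, a hedged sketch of talking NETCONF (RFC 6241) to a device with the open-source ncclient Python library; ncclient is a general NETCONF client, not part of OF-Config itself, and the hostname, credentials, and the assumption that the switch exposes NETCONF over SSH on port 830 are placeholders.

```python
# Sketch: reading configuration over NETCONF with ncclient.
# The XML returned is described by the device's YANG data models.
from ncclient import manager

with manager.connect(host="switch.example.net", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    running = m.get_config(source="running")       # <get-config> RPC
    print(running)
    # m.edit_config(target="running", config=some_xml)  # pushing changes needs vendor-specific XML
```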

Page 20: 4th SDN Interest Group Seminar-Session 2-2(130313)


Comparing SNMP and NETCONF

                         | SNMP                                        | NETCONF
Data Models              | Defined in MIBs                             | Defined in YANG modules (or XML schema documents)
Data Modeling Language   | Structure of Management Information (SMI)   | YANG (and XML schema)
Management Operations    | SNMP                                        | NETCONF
RPC Encapsulation        | Basic Encoding Rules (BER)                  | XML
Transport Protocol       | UDP                                         | TCP (reliable transport)

•  NETCONF may look very similar to SNMP, but…

Page 21: 4th SDN Interest Group Seminar-Session 2-2(130313)


Current limitations of NETCONF

•  Schemas are not part of the NETCONF standard, so it is not possible to reuse a schema from one vendor/platform/product to another (or even between different platforms from the same vendor), and schemas end up convoluted and non-intuitive

•  Only covers 'config' commands and a subset of 'show' commands

•  Do you believe NETCONF can do everything?

•  We definitely need some fancier tools for provisioning and automation in our cloud datacenter

 

Page 22: 4th SDN Interest Group Seminar-Session 2-2(130313)


Current management protocols, but …

•  We need fancier agents or interfaces within the management-protocol area

Page 23: 4th SDN Interest Group Seminar-Session 2-2(130313)


Moore's Law 1971-2011

[Chart: roughly 1,000,000X improvement over 40 years, i.e. 2X every 2 years]

Page 24: 4th SDN Interest Group Seminar-Session 2-2(130313)


Semiconductor Technology Roadmap

[Chart: about 100X improvement over the next 12 years]

Page 25: 4th SDN Interest Group Seminar-Session 2-2(130313)


100X Performance by 2022

[Chart: 64-bit CPU cores over time]

Page 26: 4th SDN Interest Group Seminar-Session 2-2(130313)


Moore's Law and Networking

[Chart: performance over time]
•  CPU: 2X every 2 years = 64X over 12 years (2^6 = 64)
•  1GigE to 10GigE: only 10X over the same 12 years
•  What happened???

Page 27: 4th SDN Interest Group Seminar-Session 2-2(130313)


64-port 10G Switch: Custom vs ASIC

[Diagram: a custom design uses 1 chip; an ASIC design uses 10 chips (8 × 8-port chips plus 2 crossbar/XBAR chips)]

Page 28: 4th SDN Interest Group Seminar-Session 2-2(130313)


Technology" 130nm" 65nm" 40nm"

10G ports" 24" 64" 128"

40 ports" ---" 16" 32"

Throughput" 360MPPS" 960MPPS" 2BPPS"

Buffer Size" 2MB" 8MB" 12MB"

Table Size" 16K" 128K" 256K"

Availability" 2008" 2011" 2013"

Improvement" N/A" 3X/3Y" 2X/2Y"

Single Chip Switch Silicon Roadmap  
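A quick arithmetic check of the "Improvement" row, using the 10G port counts and availability years from the table above:

```python
# Rough check of the roadmap's "Improvement" row via the 10G port counts.
ports = {2008: 24, 2011: 64, 2013: 128}

print(ports[2011] / ports[2008], 2011 - 2008)   # ~2.7x over 3 years  -> about 3X/3Y
print(ports[2013] / ports[2011], 2013 - 2011)   #  2.0x over 2 years  ->       2X/2Y
```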

Page 29: 4th SDN Interest Group Seminar-Session 2-2(130313)


Moore's Law and Networking

•  The next two generations follow Moore's Law
   -  Table sizes double every process generation
   -  The industry is catching up on the process roadmap

•  I/O speed scales slower than Moore's Law
   -  I/O doubles about every four years
   -  The next step is 25 Gbps SERDES

•  Moore's Law requires custom designs
   -  An ASIC flow wastes silicon potential

Page 30: 4th SDN Interest Group Seminar-Session 2-2(130313)


Lower Latency, Lower Oversubscription

Benefits of a 2-tier architecture
•  Lower oversubscription, lower latency
•  Reduced hierarchy, fewer management points
•  Enabled by high-density core switches

Crucial questions remain, but OpenFlow can address them
•  Positioning of the services infrastructure (FW, LB)
•  Routing or bridging (N/S and E/W)

Page 31: 4th SDN Interest Group Seminar-Session 2-2(130313)


Cost of Interconnecting Nodes
•  Network cost per node = (switches + power + optics + fiber) / (total nodes × oversubscription)

•  2-tier designs provide a better cost basis than 3-tier
•  Each tier adds significant cost due to the optics/fiber of its interconnects

•  Costs go up with scale (see the sketch below)

Single Tier: N usable ports (1 switch of N ports)

Two Tier: 2N usable ports, 3X cost per usable port (6 switches for a 2x increase in usable ports compared to a single switch)

Three Tier: 4N usable ports, 3.5X cost per usable port (14 switches for a 4x increase in usable ports compared to a single switch)

[Diagram: single tier is one N-port switch; the two- and three-tier fabrics are built from ½N-port switches]
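Counting switch boxes only (the slide's full formula also adds power, optics, and fiber to the numerator), the 3X and 3.5X figures above follow directly. A small sketch, with N chosen arbitrarily since the ratios do not depend on it:

```python
# Relative cost per usable port, counting switch chassis only.
def cost_per_usable_port(switches, usable_ports):
    return switches / usable_ports

N = 64                                            # example port count per switch
single     = cost_per_usable_port(1, N)
two_tier   = cost_per_usable_port(6, 2 * N)       # 6 switches  -> 2N usable ports
three_tier = cost_per_usable_port(14, 4 * N)      # 14 switches -> 4N usable ports

print(two_tier / single)     # 3.0  -> "3X cost per usable port"
print(three_tier / single)   # 3.5  -> "3.5X cost per usable port"
```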

Page 32: 4th SDN Interest Group Seminar-Session 2-2(130313)


Cloud Spine-Leaf Network Design (1)

[Diagram] 2 spines, 72 leaves, 32 × 10G server ports per leaf: scales to 2,304 × 10G nodes, non-oversubscribed

[Diagram] 4 spines, 144 leaves, 32 × 10G server ports per leaf: scales to 4,608 × 10G nodes, non-oversubscribed

Page 33: 4th SDN Interest Group Seminar-Session 2-2(130313)


Cloud Spine-Leaf Network Design (2)

[Diagram] 8 spines, 288 leaves, 32 × 10G server ports per leaf: scales to 9,216 × 10G nodes, non-oversubscribed

[Diagram] 16 spines, 576 leaves, 32 × 10G server ports per leaf: scales to 18,432 × 10G nodes, non-oversubscribed

Page 34: 4th SDN Interest Group Seminar-Session 2-2(130313)


Cloud Spine-Leaf Network Design (3)

2-Tier Leaf, 16-Way Spine, 3:1 oversubscription

[Diagram] 16 spines, 1,152 leaves, 48 × 10G server ports and 16 uplinks per leaf: scales to 55,296 × 10G nodes at 3:1 oversubscription
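The node counts on the three design slides above all come from the same arithmetic: nodes = leaves × server-facing ports per leaf, and oversubscription = server ports / uplink ports per leaf. A sketch, assuming 32 uplinks per leaf in the non-oversubscribed designs and 16 uplinks (one per spine) in the 3:1 design:

```python
# Leaf-spine arithmetic behind the node counts on the three design slides.
def fabric_scale(leaves, server_ports_per_leaf, uplinks_per_leaf):
    nodes = leaves * server_ports_per_leaf
    oversubscription = server_ports_per_leaf / uplinks_per_leaf
    return nodes, oversubscription

# Non-oversubscribed designs: 32 x 10G server ports and 32 x 10G uplinks per leaf.
print(fabric_scale(72, 32, 32))     # (2304, 1.0)
print(fabric_scale(144, 32, 32))    # (4608, 1.0)
print(fabric_scale(288, 32, 32))    # (9216, 1.0)
print(fabric_scale(576, 32, 32))    # (18432, 1.0)

# 16-way spine at 3:1: 48 server ports and 16 uplinks per leaf.
print(fabric_scale(1152, 48, 16))   # (55296, 3.0)
```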

Page 35: 4th SDN Interest Group Seminar-Session 2-2(130313)


 OpenFlow  Korea  

 

(www.OPENFLOW.or.kr)