
19 January 2015

Report DR141231

NETGEAR, Inc.

M6100 Managed Switch

Report DR141231B

Copyright 2015 Miercom

Contents

i. Executive Summary ................................................................................................... 3

ii. About the NETGEAR M6100 Managed Switch ........................................................ 4

iii. Test Bed Setup ......................................................................................................... 6

iv. How We Did It ........................................................................................................... 7

1. Port-Pair Throughput, per RFC 2544 ....................................................................... 8

Port-Pair Throughput on Blade 1: XCM8948, 48 x 1GE Ports ................................................ 8
Port-Pair Throughput on Blade 2: XCM8944, 40 x 1GE Ports ................................................ 9
Port-Pair Throughput on Blade 3: XCM8924, 24 x 10GE Ports .............................................. 9

2. Full-Mesh Throughput, per RFC 2889 .................................................................... 11

Full-Mesh Throughput on Blade 1: XCM8948, 48 x 1GE Ports .............................................. 12
Full-Mesh Throughput on Blade 2: XCM8944, 40 x 1GE Ports .............................................. 12
Full-Mesh Throughput on Blade 3: XCM8924, 24 x 10GE Ports ............................................ 13

3. Cross-Fabric Throughput ....................................................................................... 15

Cross-Fabric Throughput, Blades 1 (XCM8948) and 2 (XCM8944) ....................................... 15
Cross-Fabric Throughput, Blades 2 (XCM8944) and 3 (XCM8924) ....................................... 16

4. Layer 3 Multicast Throughput, per RFC 3918 ....................................................... 17

RFC 3918 Multicast Throughput Configuration ...................................................................... 17
Multicast Throughput, Blade 1, 48 x 1GE ports ..................................................................... 18
Multicast Throughput, Blade 2, 40 x 1GE ports ..................................................................... 18
Multicast Throughput, Blade 2, 4 x 10GE ports ..................................................................... 19
Multicast Throughput, Blade 3, 24 x 10GE ports ................................................................... 20

5. Port-Pair Latency, per RFC 2544 ............................................................................ 21

Port-Pair Latency, Blade 1, 48 x 1GE ports ........................................................................... 21
Port-Pair Latency, Blade 2, 40 x 1GE ports ........................................................................... 22
Port-Pair Latency, Blade 3, 24 x 10GE ports ......................................................................... 23

6. Full-Mesh Latency, per RFC 2889 .......................................................................... 24

Full-Mesh Latency, Blade 2, 40 x 1GE ports .......................................................................... 24
Full-Mesh Latency, Blade 3, 24 x 10GE ports ........................................................................ 24

7. Multicast Latency, per RFC 3918 ........................................................................... 27

Multicast Latency for Blade 1, One to 47 x 1GE Ports ........................................................... 27
Multicast Latency for Blade 2: One to 39 x 1GE Ports ........................................................... 27
Multicast Latency for Blade 2: One to 3 x 10GE Ports ........................................................... 28
Multicast Latency for Blade 3: One to 23 x 10GE Ports ......................................................... 29

8. High Availability Testing ......................................................................................... 30

Link Aggregation .................................................................................................................... 30
Hot-swapping .......................................................................................................................... 31
Power Supply Failover ............................................................................................................ 31

About Miercom ............................................................................................................ 33

Use of This Report ...................................................................................................... 33


i. Executive Summary

Miercom was engaged by NETGEAR to independently assess the performance and key features of its latest switching system, the M6100 chassis (XCM8903), containing three of the vendor's high-capacity switching blades. The system was shipped to and tested at Miercom's main New Jersey test lab in the fall of 2014.

The switching blades deliver a high density of 1- and 10-GE ports. The testing focused on the ability of the system to handle high data volumes with minimal loss and low latency. Key high availability features of the multi-slot system were also exercised and assessed.

Each blade was first tested on its own – that is, throughput and latency were measured between on-board ports. Then traffic was passed between switching blades, across the chassis backplane. The system features a passive backplane and distributed switching fabric.

Key Findings and Observations

Wire-speed. In all cases tested, L2 and L3 traffic between ports on the same switching blade is supported at wire-speed. Likewise, traffic between blades across the chassis switching fabric occurred at wire-speed for all the scenarios tested.

Low latency. Traffic between ports on the same blade experiences impressively low latency. Traffic across the chassis backplane exhibits slightly higher latency, as expected. All scenarios tested yielded average latency within normal limits.

High availability. Several scenarios were tested, all showing that the NETGEAR M6100 is well designed to provide high reliability and continued availability. A hot blade-swap showed no impact on active traffic when an adjoining blade was removed and replaced. A redundant-power-supply failure produced no impact on any active data flows. And failure of an active link in a Link Aggregation Group (LAG) yielded minimal data loss as traffic was rerouted to the surviving link.

With results in all tested areas – throughput, latency, survivability – demonstrating superior performance, Miercom is proud to award its Performance Verified certification to NETGEAR for the M6100 Chassis with assorted high-density switching blades.

Robert Smithers

CEO

Miercom


ii. About the NETGEAR M6100 Managed Switch

The NETGEAR M6100 is a multislot chassis that offers three slots for line-card/switching blades, plus a PSU bay that accommodates up to four power supplies. In addition, optional daughter cards can be added to deliver Power over Ethernet (PoE) via 1-Gbps RJ45 copper links. PoE was not tested.

Conceptually, the flexibility of the M6100's multislot chassis enables the system to be configured and deployed as an access switch, or as a consolidation switch accepting uplink data from multiple access switches. The system can even serve as a core switch in SMB networks. The system can be configured to support up to 144 x 1GE RJ45 copper ports, up to 72 x 10GE ports, or combinations of both.

The 4U-high (7-inch) M6100 system we tested was configured with the following mix of port/switching blades:

The XCM8948, shown in the top slot, yields 48 x 1GE RJ45 copper ports.

The XCM8924, shown in the middle slot, yields 24 x 10GE RJ45 copper ports. Sixteen of these are so-called combo ports, which can use either an SFP+ fiber connection or an RJ45 copper link (one or the other, not both).

The XCM8944, shown in the third slot, delivers 40 x 1GE RJ45 copper ports plus four 10GE ports – two RJ45 copper and two SFP+ fiber. Another version offers SFP fiber instead of RJ45 copper connections for the 1GE ports.


The M6100 employs a distributed switching fabric – each blade has its own portion of the system's overall switching capacity. The system offers full Layer 2 and Layer 3 switching support – able to forward based on MAC-layer frame attributes as well as on Layer 3 IP (v4 and v6) packet information. Support for all the appropriate routing and control protocols (RIP, OSPF, BGP, PIM, PBR, and so on) is included.

Redundancy and high-availability features that were exercised include:

The ability to hot-swap a line/switching blade without impacting live traffic on the other cards.

Failure and replacement of a redundant power supply.

Failure of a LAG connection and automatic rerouting of traffic onto surviving links.

The M6100 system also supports other high-availability features, such as dual supervisory modules, which deliver a Non-Stop Forwarding (NSF) capability.

The fully managed M6100 system can be accessed via either a 1GE RJ45 out-of-band Ethernet service port or an RJ45 (RS-232 straight-through) serial console port.


iii. Test Bed Setup

Testing of the NETGEAR M6100 was largely accomplished using the Ixia XG12 (see www.ixiacom.com), which is the highest-port-density test system available for Ethernet today. The XG12 can be used for Layer 1-to-Layer 7 testing and features a 12-slot modular chassis.

A single Ixia XG12 chassis can support up to 384 x 1GE ports, 192 x 10GE ports, or 20 x 100GE ports, as well as 48 x 40GE HSE (High Speed Ethernet) QSFP+ ports, 24 Packet over SONET (POS) ports, or 24 Asynchronous Transfer Mode (ATM) ports.

The same test bed was used for all the throughput and latency testing, although re-cabling was periodically required. We first tested throughput and latency blade by blade, and then across the M6100 switching fabric between two blades at a time: Between blades in slots 1 and 2, and then between blades in slots 2 and 3.

Each of the tests conducted is discussed individually in this report. The NETGEAR M6100 is often referred to throughout this report as the device under test, or DUT.

In testing the fail-over capability of the NETGEAR M6100's LAG (Link Aggregation Group), a second NETGEAR switch was required, since the M6100 behaves as a single switch. A NETGEAR M7100 was therefore acquired and used for the LAG-failover testing.

[Diagram: Test bed setup. Source: Miercom, January 2015]


iv. How We Did It

Performance testing was done with Layer 2 (MAC-layer) and Layer 3 (IP) traffic; the M6100 switching blades support either with equal facility.

Besides testing with packets at conventional sizes (from minimum-sized packets of 64 bytes to the maximum legal Ethernet packet size of 1,518 bytes), our testing also included Jumbo frames, which the NETGEAR M6100 switching system, like many commercial switches and routers today, supports. We tested with Jumbo packet sizes of 9,216 bytes and 12,288 bytes.

Most test traffic streams consisted of a single packet size, such as 64 bytes or 512 bytes. In many tests, however, we also applied IMIX streams, which consist of packets of multiple fixed sizes sent in recurring sequences of pre-specified frequency. Many systems perform differently with streams of mixed packet sizes than with streams of a single packet size, and we noted some performance differences between same-size packet streams and IMIX streams with the M6100 as well.
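For readers who want a concrete picture of such a stream, the short Python sketch below shows one way an IMIX-style profile can be represented as a weighted mix of fixed frame sizes. The 7:4:1 mix of 64-, 570- and 1,518-byte frames is a commonly cited "simple IMIX" convention and is only an assumption here; the report does not specify the exact profile used on the Ixia.

```python
# Hedged sketch: one way to represent an IMIX-style stream as a weighted mix of
# fixed frame sizes. The 7:4:1 mix of 64/570/1518-byte frames is an assumption
# (a common "simple IMIX"); the exact profile used in this testing is not stated.

imix_profile = [          # (frame_size_bytes, relative_frequency)
    (64, 7),
    (570, 4),
    (1518, 1),
]

total_weight = sum(weight for _, weight in imix_profile)
avg_frame_size = sum(size * weight for size, weight in imix_profile) / total_weight
print(f"Weighted average IMIX frame size: {avg_frame_size:.1f} bytes")
```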

Most of the throughput and latency performance testing was done in compliance with Internet RFCs developed for this purpose. The pertinent RFCs that were applied are detailed in the separate test sections that follow.

The testing for throughput and latency entailed three primary RFCs:

RFC 2544: Measures throughput based on port pairs, where bi-directional traffic is delivered between pairs of ports. Latency of packets is also measured with RFC 2544, for the same port-to-port traffic flows.

RFC 2889: Measures full-mesh throughput and latency, based on round-robin forwarding of packets from each input port to every other port of the same speed. A limitation of the Ixia test system is that all streams in such a full-mesh test need to run at the same data rate. For this reason, 1GE and 10GE ports and streams were not measured for aggregate throughput at the same time.

RFC 3918: Measures throughput and latency for multicast packet delivery. Layer 3 IP-based multicast entails the replication of packets by the switch and the forwarding of each packet to two or more outbound ports, governed by the multicast 'group' definition. Our testing applies a maximum multicast processing load: uni-directional traffic is delivered to one switch port and is then replicated and delivered to every other switch port.
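As background for the "line rate" results that follow, the sketch below (Python) works out the standard Ethernet line-rate arithmetic: each frame occupies its own size plus 20 bytes of preamble and inter-frame gap on the wire, which sets the maximum frame rate and Layer 2 throughput at a given frame size. This is generic arithmetic, not test-tool output.

```python
# Sketch of the standard Ethernet line-rate arithmetic behind "100 percent line
# rate": each frame occupies frame_size + 20 bytes on the wire (8-byte
# preamble/SFD plus 12-byte inter-frame gap).

PER_FRAME_OVERHEAD = 20  # bytes: preamble/SFD + inter-frame gap

def max_frame_rate(line_rate_bps: float, frame_size: int) -> float:
    """Maximum frames per second on one port, one direction, at full line rate."""
    return line_rate_bps / ((frame_size + PER_FRAME_OVERHEAD) * 8)

def l2_throughput_gbps(line_rate_bps: float, frame_size: int) -> float:
    """Layer 2 data throughput (frame bits only) at full line rate, in Gbps."""
    return max_frame_rate(line_rate_bps, frame_size) * frame_size * 8 / 1e9

for size in (64, 512, 1518, 9216):
    print(f"{size:>5}-byte frames on 1GE: {max_frame_rate(1e9, size):>12,.0f} fps, "
          f"{l2_throughput_gbps(1e9, size):.3f} Gbps of L2 data")
```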


1. Port-Pair Throughput, per RFC 2544

Testing per RFC 2544 runs pre-defined throughput and latency tests, exercising switch performance with bidirectional line-rate traffic, 100-percent capacity, on all ports.

We first tested the throughput for each blade separately by applying bidirectional traffic through port pairs – that is, traffic from Port A was always forwarded to Port B and vice versa for the return path. The Ixia paired its traffic-generation ports with ports on the blade being tested. For testing Blades 1 and 2 (the switching modules in slots 1 and 2), the Ixia sent line-rate traffic through all the 1GE ports on each blade, one blade at a time.

The Ixia traffic-generation ports are also all bi-directional, and the test system carefully compares received traffic with the delivered traffic stream, noting any dropped or missing packets. At the same time, the Ixia system calculates latency for each packet – the delta between when a packet is sent to the DUT and when it is received back – and then reports minimum, average and maximum latency for each traffic stream.
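The sketch below illustrates the statistic being reported: per-stream minimum, average and maximum latency computed from per-frame transmit and receive timestamps. It is illustrative only and is not the Ixia implementation; the timestamp values are hypothetical.

```python
# Illustrative only (not the Ixia implementation): per-stream min/avg/max
# latency computed from per-frame transmit and receive timestamps.

def latency_stats(tx_times_us, rx_times_us):
    """Return (min, avg, max) latency in microseconds for one traffic stream."""
    deltas = [rx - tx for tx, rx in zip(tx_times_us, rx_times_us)]
    return min(deltas), sum(deltas) / len(deltas), max(deltas)

# Hypothetical timestamps, in microseconds:
tx = [0.0, 10.0, 20.0, 30.0]
rx = [3.4, 13.5, 23.3, 33.6]
lo, avg, hi = latency_stats(tx, rx)
print(f"min {lo:.2f} µsec, avg {avg:.2f} µsec, max {hi:.2f} µsec")
```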

Port-Pair Throughput on Blade 1: XCM8948, 48 x 1GE Ports

Wire-Rate, Port-to-Port. On a port-pair basis, per RFC 2544, the XCM8948 blade forwards bi-directional L2 and L3 traffic at the maximum rate it can be delivered, on all 48 ports, for all packet sizes, including the IMIX assortment. No frame loss occurred.

[Chart: NETGEAR M6100 - RFC 2544 Throughput, Layer 2, Blade 1 - 48 x 1GE ports]

Frame Size (bytes)   64    128   256   512   1024  1280  1518  9216  12288  IMIX
Line Rate (%)        100   100   100   100   100   100   100   100   100    100

Source: Miercom, January 2015


For the XCM8948 blade, maximum throughput was verified for the full range of packet sizes, from 64 to 12,288 bytes, and including the IMIX traffic load, which is composed of a mix of packet sizes and better emulates a real-world environment. The actual throughput achieved in this test was slightly less than 96 Gbps for large-sized packets.

Port-Pair Throughput on Blade 2: XCM8944, 40 x 1GE Ports

For the XCM8944 blade, as with the 48-port blade in Slot 1, maximum throughput was verified for the full range of packet sizes, ranging from 64 to 12,288 bytes and including the IMIX traffic. The actual throughput performance achieved in this test was slightly under 80 Gbps, for large-sized packets.

Port-Pair Throughput on Blade 3: XCM8924, 24 x 10GE Ports

As the following chart shows, the high-capacity XCM8924 blade, in Slot 3 of our M6100 chassis, likewise delivers wire-speed throughput on its 24 x 10GE ports, based on port-pair testing. As with the other two switching blades, maximum throughput was verified for the full range of packet sizes, from 64 to 12,288 bytes and including the IMIX traffic. The actual throughput achieved in this test was slightly under 480 Gbps for large-sized packets.

[Chart: NETGEAR M6100 - RFC 2544 Throughput, Layer 2, Blade 2 (XCM8944) - 40 x 1GE ports]

Frame Size (bytes)   64    128   256   512   1024  1280  1518  9216  12288  IMIX
Line Rate (%)        100   100   100   100   100   100   100   100   100    100

Source: Miercom, January 2015


Using RFC 2544-based testing, then, the results show that the on-board switching fabric of each of the three switching blades tested can handle maximum line-speed, bi-directional throughput between all ports on the switch blade – on a port-pair basis – for all packet sizes, including the IMIX stream.

[Chart: NETGEAR M6100 - RFC 2544 Throughput, Layer 2, Blade 3 (XCM8924) - 24 x 10GE ports]

Frame Size (bytes)   64    128   256   512   1024  1280  1518  9216  12288  IMIX
Line Rate (%)        100   100   100   100   100   100   100   100   100    100

Source: Miercom, January 2015
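The aggregate figures quoted in this section (slightly under 96, 80 and 480 Gbps for large frames) can be sanity-checked with the sketch below, which counts every full-duplex port at twice its line rate, as the quoted figures do, and subtracts the 20-byte per-frame preamble and inter-frame-gap overhead. This is a back-of-the-envelope check, not measured data.

```python
# Back-of-the-envelope check of the per-blade aggregate throughput figures.
# Each full-duplex port is counted at twice its line rate (the convention used
# for the figures quoted above); the 20 bytes of per-frame preamble and
# inter-frame gap reduce the L2 data rate at a given frame size.

def aggregate_l2_gbps(ports: int, port_gbps: float, frame_size: int) -> float:
    wire_gbps = ports * port_gbps * 2          # full-duplex, both directions
    return wire_gbps * frame_size / (frame_size + 20)

blades = [("Blade 1 (XCM8948), 48 x 1GE", 48, 1),
          ("Blade 2 (XCM8944), 40 x 1GE", 40, 1),
          ("Blade 3 (XCM8924), 24 x 10GE", 24, 10)]
for name, ports, speed in blades:
    print(f"{name}: {ports * speed * 2} Gbps wire capacity, "
          f"~{aggregate_l2_gbps(ports, speed, 1518):.1f} Gbps of L2 data at 1,518 bytes")
```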


2. Full-Mesh Throughput, per RFC 2889

Similar to Port-Pair testing, testing per RFC 2889 runs pre-defined throughput and latency tests, exercising switch performance with bidirectional line-rate traffic, 100-percent capacity, on all ports.

With RFC 2889, however, testing is based on the round-robin forwarding of packets from each input port to every other port of the same speed, as shown in the diagram below. This testing is more demanding than port-pair throughput testing, because the switch has to manage the forwarding of every input packet to a different output port.

As noted, full-mesh testing is applied to all ports of the same speed. So for Blades 1 and 2, full-mesh test streams were applied to 1-GE ports. For Blade 3, full-mesh test streams are delivered to all 10-GE ports and forwarded to all 10-GE ports.
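The sketch below illustrates the round-robin destination pattern described above: every port sends, in turn, to every other same-speed port, so the switch must steer each successive input frame to a different output. It is a simplified illustration of the pattern, not the Ixia scheduler.

```python
# Simplified illustration of an RFC 2889 full-mesh (round-robin) pattern:
# each source port cycles through every other port as its destination.
# This is not the Ixia scheduler, just a sketch of the traffic pattern.

def full_mesh_schedule(ports: int, rounds: int):
    """Yield (src, dst) pairs; each source visits every other port once per cycle."""
    for i in range(rounds):
        for src in range(ports):
            dst = (src + 1 + i % (ports - 1)) % ports   # never equals src
            yield src, dst

# Example: one complete cycle on a 4-port mesh
for n, (src, dst) in enumerate(full_mesh_schedule(4, 3)):
    print(f"frame {n}: port {src} -> port {dst}")
```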

As the results show, one notable difference in throughput performance surfaced from the full-mesh testing.

[Diagram: RFC 2889 full-mesh traffic pattern. Source: Ixia, IxNetwork]


Full-Mesh Throughput on Blade 1: XCM8948, 48 x 1GE Ports

As with port-pair testing, full-mesh testing for Blade 1, the 48-port XCM8948 switch blade, showed that on-board switching could handle round-robin forwarding of full-rate bi-directional data streams, on all 48 x 1GE ports, for all packet sizes, including IMIX.

Full-Mesh Throughput on Blade 2: XCM8944, 40 x 1GE Ports

Full-mesh testing on the XCM8944 blade turned up an anomaly: with full load on all 40 x 1GE ports, some loss of Jumbo frames occurred. The loss, about 2.5 percent of the traffic, occurred in both directions and to the same extent for both Jumbo frame sizes.

[Chart: NETGEAR M6100 - RFC 2889 Full-Mesh Throughput, Layer 2, Blade 1 - 48 x 1GE ports]

Frame Size (bytes)   64    128   256   512   1024  1280  1518  9216  12288  IMIX
Line Rate (%)        100   100   100   100   100   100   100   100   100    100

Source: Miercom, January 2015

[Chart: NETGEAR M6100 - RFC 2889 Full-Mesh Percent of Max Throughput - Blade 2, 40 x 1GE ports. Aggregate Rx throughput (% line rate) by frame size, 74 to 12,288 bytes; only the two Jumbo frame sizes fall below 100 percent (about 97.5 percent), per the loss noted above.]

Source: Miercom, January 2015


Full-Mesh Throughput on Blade 3: XCM8924, 24 x 10GE Ports

It was decided to apply load to the XCM8924 switch blade in increments, starting at 50 percent. As the chart below shows, handling of the 50-percent full-mesh traffic load went without issue: all of the traffic was handled in both directions.

Then 75 percent load was applied. Again, all of the traffic was handled in both directions without a problem (see chart below).

[Chart: NETGEAR M6100 - RFC 2889 Full-Mesh, 50% load, Aggregate Rx Throughput - Blade 3, 24 x 10GE ports. Aggregate Rx throughput (% line rate) by frame size, 74 to 12,288 bytes; all offered traffic was forwarded.]

Source: Miercom, January 2015

[Chart: NETGEAR M6100 - RFC 2889 Full-Mesh, 75% load, Aggregate Rx Throughput - Blade 3, 24 x 10GE ports. Aggregate Rx throughput (% line rate) by frame size, 74 to 12,288 bytes; all offered traffic was forwarded.]

Source: Miercom, January 2015


Then a 100-percent load was applied. Again, all full-mesh traffic was handled without any problems. The results, excluding Jumbo frames, are shown in the chart below.

[Chart: NETGEAR M6100 - RFC 2889 Full-Mesh, Layer 3, 100% load, Aggregate Rx Throughput - Blade 3, 24 x 10GE ports, no Jumbo frames. Aggregate Rx throughput (% line rate) by frame size, 74 to 1,518 bytes; all offered traffic was forwarded.]

Source: Miercom, January 2015


3. Cross-Fabric Throughput

In the cross-fabric tests, the ports of one blade were connected with the ports of another blade, across the switch backplane. Then bi-directional traffic was passed between the corresponding ports, using the various packet sizes as before.

Cross-Fabric Throughput, Blades 1 (XCM8948) and 2 (XCM8944)

The first test applied bi-directional traffic between Blade 1 and Blade 2 over the switch's backplane. Eighty Ixia XG12 test-traffic-generation ports were connected to 40 x 1GE ports on each blade, and full-rate Layer 2 traffic was applied and forwarded between Blades 1 and 2.

The chart below shows the throughput results. About 79 Gbps of throughput was achieved, based on 1,518-byte packets, over the backplane and switching fabric between the 80 x 1GE ports on Blades 1 and 2. No traffic was dropped or lost and all traffic was delivered at line rate.

Nearly identical results were achieved when connecting the four 10GE ports of Blade 2 (the XCM8944 switch blade) with four 10GE ports of Blade 3 across the M6100's backplane and switching fabric. The eight ports conveyed a total of 79 Gbps, based on 1,518-byte packets. As before, no traffic was dropped or lost and all traffic was delivered at line rate.

[Chart: NETGEAR M6100, Blades 1 <-> 2 Cross-Fabric Aggregate Switching Throughput (Gbps), 40 x 1GE ports each blade]

Frame Size (bytes)   64    128   256   512   1024  1280  1518  IMIX
Throughput (Gbps)    61.0  69.2  74.2  77.0  78.5  78.8  79.0  70.6

Source: Miercom, January 2015


Cross-Fabric Throughput, Blades 2 (XCM8944) and 3 (XCM8924)

The results show wire-speed, line-rate, bi-directional throughput between the four 10GE ports on Blade 2 and the four 10GE ports on Blade 3, achieving 79 Gbps of the 80-Gbps total wire-speed capacity. There was no data loss.

[Chart: NETGEAR M6100, Blades 2 <-> 3 Cross-Fabric, Layer 2 Aggregate Switching Throughput (Gbps), 4 x 10GE ports each blade]

Frame Size (bytes)   64    128   256   512   1024  1280  1518  IMIX
Throughput (Gbps)    61.0  69.2  74.2  77.0  78.5  78.8  79.0  70.6

Source: Miercom, January 2015


4. Layer 3 Multicast Throughput, per RFC 3918

The Layer 3 Multicast Throughput test, defined by RFC 3918, validates the maximum rate that a device can process Layer 3 IPv4 multicast: one-to-many traffic.

The Ixia test system conducts this test using a binary or linear search to detect the maximum loss-free load at which the switch can handle one-to-many packet replication and traffic mapping. IGMP snooping, based on the IGMPv2 protocol, was enabled on the NETGEAR M6100 switch so that it would learn multicast groups and their members (the ports to which replicated multicast traffic is forwarded).

The IxAutomate application running on the Ixia XG12 then injected IGMPv2 multicast traffic. Following this, Ixia delivered traffic to one port on a switch blade of the M6100, which replicated and routed it to all other ports of that blade. The same tests also produced multicast latency measurements.
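The sketch below illustrates the binary-search procedure mentioned above for finding the highest loss-free load. The send_at_load() hook is hypothetical, standing in for a real trial run on the test set and DUT; it is not an Ixia API.

```python
# Illustrative binary search for the highest loss-free load (the RFC 2544/3918
# "throughput" definition). send_at_load(load) is a hypothetical hook that runs
# one trial at the given load (% of line rate) and returns the frames lost;
# it is not an Ixia API.

def find_max_lossless_load(send_at_load, lo=0.0, hi=100.0, resolution=0.1):
    """Return the highest offered load (% line rate) that shows zero frame loss."""
    best = lo
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if send_at_load(mid) == 0:     # no loss at this load
            best, lo = mid, mid        # try higher
        else:
            hi = mid                   # loss seen: back off
    return best

# Example with a fake DUT that starts dropping frames above 97.5 percent load:
fake_dut = lambda load: 0 if load <= 97.5 else 1
print(f"Maximum loss-free load: {find_max_lossless_load(fake_dut):.1f}%")
```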

RFC 3918 Multicast Throughput Configuration

As with the other throughput metrics, multicast throughput was tested on each of the individual blades. With Blade 2, with 40 x 1GE and 4 x 10GE ports, the 1GE and 10GE ports were tested separately for multicast throughput.

[Diagram: RFC 3918 multicast throughput test configuration. Source: Ixia, IxNetwork]


Multicast Throughput, Blade 1, 48 x 1GE ports

The graph shows multicast throughput of 46.4 Gbps, resulting from one unidirectional 1GE traffic stream, replicated and forwarded to the other 47 x 1GE ports on the switch blade (XCM8948). Wire speed throughput was achieved; there was no packet loss.

[Chart: Blade 1 (XCM8948) 48 x 1GE Multicast Aggregate Rx Throughput]

Frame Size (bytes)        74    128   256   512   1024  1280  1518
Agg Rx Throughput (Gbps)  37.0  40.6  43.6  45.2  46.1  46.3  46.4

Source: Miercom, January 2015

Multicast Throughput, Blade 2, 40 x 1GE ports

[Chart: Blade 2 (XCM8944) 40 x 1GE Multicast Aggregate Rx Throughput]

Frame Size (bytes)        74    128   256   512   1024  1280  1518
Agg Rx Throughput (Gbps)  30.7  33.7  36.2  37.5  38.3  38.4  38.5

Source: Miercom, January 2015


The graph shows multicast throughput of 38.5 Gbps, resulting from one unidirectional 1GE traffic stream, replicated and forwarded to the other 39 x 1GE ports on the switch blade (XCM8944). Wire speed throughput was achieved; there was no packet loss.

Multicast Throughput, Blade 2, 4 x 10GE ports

The graph below shows multicast throughput of 29.6 Gbps, resulting from one unidirectional 10GE traffic stream, replicated and forwarded to the other three 10GE ports on the switch blade (XCM8944). Wire-speed throughput was achieved; there was no packet loss.

[Chart: Blade 2 (XCM8944) 4 x 10GE Multicast Aggregate Rx Throughput]

Frame Size (bytes)        74    128   256   512   1024  1280  1518
Agg Rx Throughput (Gbps)  23.6  25.9  27.8  28.9  29.4  29.5  29.6

Source: Miercom, January 2015


Multicast Throughput, Blade 3, 24 x 10GE ports

The graph below shows multicast throughput of 227 Gbps, resulting from one unidirectional 10GE traffic stream, replicated and forwarded to the other 23 x 10GE ports on the switch blade (XCM8924). Wire-speed throughput was achieved; there was no packet loss.

In conclusion, the NETGEAR M6100 successfully transmitted traffic to all multicast member ports at 100-percent line rate for frame sizes from 68 to 1518 bytes.

Testing verified that the M6100 switch successfully learns the multicast groups and then properly transmits multicast traffic at 100-percent line rate with zero loss to each multicast group member.

[Chart: Blade 3 (XCM8924) 24 x 10GE Multicast Aggregate Rx Throughput]

Frame Size (bytes)        74     128    256    512    1024   1280   1518
Agg Rx Throughput (Gbps)  181.1  198.9  213.3  221.4  225.6  226.5  227.0

Source: Miercom, January 2015
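The aggregate multicast rates quoted in this section follow directly from the number of member ports and the per-frame wire overhead, as the short Python check below shows; at 1,518 bytes it reproduces the 46.4, 38.5, 29.6 and 227 Gbps figures. This is arithmetic only, not measured data.

```python
# Arithmetic check of the multicast aggregate Rx figures: one ingress stream is
# replicated to N member ports, so the aggregate Rx rate is N x port rate,
# reduced by the 20-byte per-frame preamble/inter-frame-gap overhead.

def multicast_agg_rx_gbps(member_ports: int, port_gbps: float, frame_size: int) -> float:
    return member_ports * port_gbps * frame_size / (frame_size + 20)

cases = [("Blade 1: 47 x 1GE members", 47, 1),
         ("Blade 2: 39 x 1GE members", 39, 1),
         ("Blade 2: 3 x 10GE members", 3, 10),
         ("Blade 3: 23 x 10GE members", 23, 10)]
for name, members, speed in cases:
    print(f"{name}: ~{multicast_agg_rx_gbps(members, speed, 1518):.1f} Gbps at 1,518 bytes")
```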


5. Port-Pair Latency, per RFC 2544

Latency adds delay to data, and excessive latency can be troublesome. Besides delaying the timeliness of data and slowing response time, the major complication when latency becomes too high is that packets can end up being delivered out of order. This can confound many applications, resulting in delays and re-transmissions.

As noted, the throughput testing we performed – per RFC 2544 (port-pairs), RFC 2889 (full-mesh) and RFC 3918 (multicast) – also produced precise latency measurements. These latency measurements are reported in this section.

Generally, an average latency of 5 microseconds (µsec) or less for packets traversing a switch, whether at Layer 2 or Layer 3, is excellent. This represents the performance delivered by about the top third of switches now commercially available.

Port-Pair Latency, Blade 1, 48 x 1GE ports

The graph below shows the latency experienced by packets during wire-speed, bi-directional test traffic loads on all ports. The average latency for all same-size packet streams, even Jumbo frames, is excellent – under 5 microseconds (µsec). The average and maximum latencies experienced by packets in IMIX (mixed-packet-size) streams tend to be notably higher.

[Chart: NETGEAR M6100 - RFC 2544 Latency, Blade 1, 48 x 1GE ports]

Frame Size (bytes)   64    128   256   512   1024  1280  1518  9216  12288  IMIX
Max Latency (µsec)   7.86  6.9   8.2   7.48  7.66  8.22  7.34  7.68  7.54   20.14
Avg Latency (µsec)   3.42  3.43  4.00  4.58  4.87  4.59  4.54  4.54  4.60   10.86
Min Latency (µsec)   2.82  2.76  3.24  3.84  4.1   3.88  3.78  3.82  3.88   2.64

Source: Miercom, January 2015


Port-Pair Latency, Blade 2, 40 x 1GE ports

The graph below shows the latency experienced by Layer 2 packets through Blade 2 during wire-speed, bi-directional test traffic loads on all 1GE ports. These were recorded during port-pair throughput testing, when there was no packet loss. The average latency tends to be 15 to 20 µsec, though it does climb for the IMIX stream.

[Chart: NETGEAR M6100 - RFC 2544 Layer 2 Latency, Blade 2, 40 x 1GE ports]

Frame Size (bytes)   64    128   256   512   1024  1280  1518  9216  12288  IMIX
Max Latency (µsec)   33.3  28.0  36.4  37.0  37.3  29.5  29.4  34.4  34.5   46.2
Avg Latency (µsec)   16.7  14.3  16.7  17.7  18.9  15.7  14.9  17.7  18.3   23.8
Min Latency (µsec)   1.6   1.6   1.7   1.7   1.8   1.8   1.7   1.7   1.8    1.6

Source: Miercom, January 2015


Port-Pair Latency, Blade 3, 24 x 10GE ports

The latency applied to packets across Blade 3 – the XCM8924, with 24 x 10GE ports – is superb, even with wire-speed, bi-directional test traffic loads on all 10GE ports. Even the maximum latency of IMIX-stream packets was under 5 µsec.

[Chart: NETGEAR M6100 - RFC 2544 Layer 2 Latency, Blade 3, 24 x 10GE ports]

Frame Size (bytes)   64    128   256   512   1024  1280  1518  9216  12288  IMIX
Max Latency (µsec)   3.59  3.59  3.7   3.69  3.74  3.68  3.63  3.68  3.67   4.84
Avg Latency (µsec)   3.38  3.38  3.40  3.40  3.41  3.39  3.39  3.38  3.38   4.02
Min Latency (µsec)   3.26  3.28  3.28  3.28  3.29  3.31  3.3   3.29  3.29   3.27

Source: Miercom, January 2015


6. Full-Mesh Latency, per RFC 2889

The effect of packet loss is seen in the below graph, showing latency measured during full-mesh, Layer 2 throughput testing of Blade 2 – the XCM8944 switching blade.

Full-Mesh Latency, Blade 2, 40 x 1GE ports

In this case there was minor loss, 2.5 percent, of the Jumbo packet streams. This tends to hike the average latency – to about 50 µsec in this case – and the maximum latency climbs precipitously, in this case to nearly 100 µsec.

[Chart: NETGEAR M6100 - RFC 2889 Full-Mesh Latency, 100% load, Blade 2, 40 x 1GE ports]

Frame Size (bytes)   74     128    256    512    1024   1280   1518   9216   12288
Max Latency (µsec)   23.42  23.58  31.98  32.68  35.8   25.28  25.24  96.48  98.28
Avg Latency (µsec)   12.6   13.9   20.1   19.7   20.6   14.9   14.3   47.1   47.9
Min Latency (µsec)   2.52   2.52   3.14   3.7    4.04   3.72   3.7    3.76   3.76

Source: Miercom, January 2015

Full-Mesh Latency, Blade 3, 24 x 10GE ports

Full-mesh throughput testing of Blade 3, the XCM8924, with 24 x 10GE ports, was done in increments, starting with a 50-percent bi-directional traffic load on all ports, then 75 percent and finally 100 percent.

The graph below shows the latency experienced by Layer 2 packets traversing Blade 3 during full-mesh throughput testing, with the initial 50-percent traffic load. As seen with the port-pair latency results, Blade 3 applies very impressive, very brief latency to passing packets; in all cases, even worst-case maximum latency is under 4 µsec.


[Chart: NETGEAR M6100 - RFC 2889 Full-Mesh Latency, 50% load, Blade 3, 24 x 10GE ports]

Frame Size (bytes)   74    128   256   512   1024  1280  1518  9216  12288
Max Latency (µsec)   3.62  3.48  3.65  3.59  3.63  3.57  3.49  3.6   3.43
Avg Latency (µsec)   3.3   3.3   3.3   3.3   3.4   3.3   3.3   3.3   3.3
Min Latency (µsec)   3.24  3.22  3.23  3.23  3.23  3.22  3.22  3.23  3.22

Source: Miercom, January 2015

As it turned out, the latency experienced by passing packets was basically identical when the full-mesh, Layer 2 throughput load was upped to 75 percent (see graph below). That is normal, as long as there are no lost or dropped packets. As before, this latency performance is superb.

[Chart: NETGEAR M6100 - RFC 2889 Full-Mesh Latency, 75% load, Blade 3, 24 x 10GE ports]

Frame Size (bytes)   74    128   256   512   1024  1280  1518  9216  12288
Max Latency (µsec)   3.64  3.54  3.64  3.6   3.61  3.53  3.46  3.59  3.43
Avg Latency (µsec)   3.3   3.3   3.3   3.3   3.4   3.3   3.3   3.3   3.3
Min Latency (µsec)   3.26  3.23  3.22  3.23  3.23  3.22  3.22  3.22  3.22

Source: Miercom, January 2015


At full traffic load during full-mesh throughput testing with Layer 3 traffic, the latency applied to packets traversing Blade 3 increases slightly (see graph below). Still, the average latency for all packet sizes is at or below 5 µsec, which is excellent. Only the worst-case maximum latency edges up beyond 5 µsec, and it never exceeds 10 µsec.

[Chart: NETGEAR M6100 - RFC 2889 Full-Mesh Latency, Layer 3, 100% load, Blade 3, 24 x 10GE ports, no Jumbo frames]

Frame Size (bytes)   74    128   256   512   1024  1280  1518
Max Latency (µsec)   6.62  5.86  5.94  7.36  9.18  7.44  6.5
Avg Latency (µsec)   3.6   3.6   4.1   4.7   5.0   4.7   4.7
Min Latency (µsec)   2.54  2.54  3.12  3.68  4     3.74  3.64

Source: Miercom, January 2015


7. Multicast Latency, per RFC 3918

An average latency value was calculated for each frame size applied in the RFC 3918-based, Layer 3 throughput testing. With the additional switch processing required for handling multicast traffic, the latency incurred by multicast traffic is typically higher than for other traffic streams (port-pair or full-mesh).

Multicast Latency for Blade 1, One to 47 x 1GE Ports

The Blade-1 XCM8948 switch exhibited low latency for all multicast frame sizes tested (see graph below). Average latency was under 5 µsec for all frame sizes, which is excellent. The maximum latency, 6 to 7 µsec, is inconsequential.

[Chart: Blade 1 (XCM8948) 48 x 1GE Multicast Latency]

Frame Size (bytes)   74    128   256   512   1024  1280  1518
Max Latency (µsec)   5.76  5.76  6.22  6.6   6.9   6.62  6.56
Avg Latency (µsec)   3.4   3.4   4.0   4.6   4.9   4.6   4.5
Min Latency (µsec)   2.66  3.06  3.48  4     4.06  4.02  3.9

Source: Miercom, January 2015

Multicast Latency for Blade 2: One to 39 x 1GE Ports

The Blade-2 XCM8944 switch exhibited moderately high average latency for all multicast frame sizes tested (see graph below). Average latency ranged from 33 to about 36 µsec. The maximum latency of nearly 70 µsec is on the high side. Both the average and maximum latencies are fairly consistent across all frame sizes, which means most multicast traffic will experience roughly the same latency.


[Chart: Blade 2 (XCM8944) 40 x 1GE Multicast Latency]

Frame Size (bytes)   74    128   256   512   1024  1280  1518
Max Latency (µsec)   63.6  65.6  66.1  67.0  67.1  66.2  66.1
Avg Latency (µsec)   33.1  34.8  35.5  35.7  35.9  34.5  34.1
Min Latency (µsec)   2.5   2.9   3.6   3.8   4.1   3.7   3.9

Source: Miercom, January 2015

Multicast Latency for Blade 2: One to 3 x 10GE Ports

Another aspect of the Blade 2 multicast-latency performance is seen for traffic delivered onto, and distributed to, the switch blade's 10GE ports. As the graph below shows, average multicast latency is just 2 µsec, which is superb.

[Chart: Blade 2 (XCM8944) 4 x 10GE Multicast Latency]

Frame Size (bytes)   74    128   256   512   1024  1280  1518
Max Latency (µsec)   2.2   2.1   2.2   2.2   2.3   2.2   2.2
Avg Latency (µsec)   1.8   1.8   2.0   2.0   2.0   2.0   2.0
Min Latency (µsec)   1.7   1.7   1.8   1.8   1.9   1.9   1.8

Source: Miercom, January 2015


Multicast Latency for Blade 3: One to 23 x 10GE Ports

The latency applied to multicast traffic by Blade 3, the XCM8924 switch blade, is remarkably low and consistent. The graph below shows that multicast traffic of all frame sizes through the 10GE switch blade incurs just 3.4 to 3.7 µsec of latency, which is superb.

[Chart: Blade 3 (XCM8924) 24 x 10GE Multicast Latency]

Frame Size (bytes)   74    128   256   512   1024  1280  1518
Max Latency (µsec)   3.7   3.6   3.7   3.7   3.6   3.6   3.6
Avg Latency (µsec)   3.5   3.4   3.5   3.5   3.5   3.5   3.4
Min Latency (µsec)   3.4   3.4   3.4   3.4   3.4   3.4   3.4

Source: Miercom, January 2015


8. High Availability Testing

Various high-availability features of the NETGEAR M6100 were tested and verified. Among them:

Link Aggregation: NETGEAR switches support LAG (link-aggregation group) and MLAG (multi-chassis LAG), where two or more links between the same two switches or chassis can act as a single logical link, providing for automatic load sharing and fail-over.

Blade Hot Swapping: Blades fail, but it isn't always necessary to shut down the entire network to replace a blade in a multislot chassis. The NETGEAR M6100 supports blade hot swapping. The user can extract and replace a blade without affecting live traffic passing through adjoining or other blades in the same chassis.

Power Supply Fail-over: The NETGEAR M6100 supports up to four power supplies. If a power supply is provisioned as a hot-standby spare – that is, it is not required to contribute to the ongoing powering of the system – the spare can automatically take over for a power supply that fails.

Link Aggregation

Testing the NETGEAR M6100's link aggregation group (LAG) support was done according to RFC 2544. A separate NETGEAR switch, an M7100, was acquired, and the LAG – consisting of two 10GE links – was set up between the two switches (see figure below). The M7100 and M6100 switches were peer-linked together via two Cat 6A copper cables.

[Figure: LAG test setup - M7100 Switch connected to M6100 Chassis by a Link Aggregation Group (LAG) of two 10GE RJ45 copper links. In the test, 19.74 Gbps of bidirectional throughput (50% of the combined links' capacity) was sent continuously; one of the links was then failed. Source: Miercom, January 2015]


Load was applied by the Ixia test system to verify connectivity and bi-directional 10-Gbps throughput on each link. With bidirectional traffic from the Ixia test system flowing through the LAG, we verified that the switches were capable of forwarding 100-percent line-rate traffic with zero loss using 1,518-byte frames.

Then a full 10-Gbps load was applied. The aggregate throughput sent through the two load-shared LAG ports was then 19.74 Gbps – 50 percent of the combined links' bandwidth.

One of the LAG's two links was then disconnected, simulating a link failure, and the results were carefully monitored.

The Ixia system reported that 1,210 frames (1,518-byte) were lost during the interruption, before all traffic was rerouted over onto the remaining LAG link. Using this data it was calculated that the peer-link cutoff-recovery time was 744.3 microseconds.

Data loss in LAG-link-failover situations is unavoidable, since some packet traffic is in transit when the link fails, before link-recovery takes effect. An outage of less than 1 millisecond (744 microseconds) with loss of just 1,200 frames is impressive.
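The recovery-time figure follows from the frame-loss count and the offered frame rate, as the Python sketch below shows; it reproduces the roughly 744-microsecond interruption reported above. This is the arithmetic only, under the stated 19.74 Gbps load of 1,518-byte frames.

```python
# Arithmetic behind the LAG failover figure: 1,210 lost 1,518-byte frames at an
# aggregate offered load of 19.74 Gbps correspond to roughly 744 microseconds
# of interruption.

frame_size_bytes = 1518
offered_load_bps = 19.74e9          # aggregate bidirectional load on the LAG
frames_lost = 1210

aggregate_fps = offered_load_bps / (frame_size_bytes * 8)
interruption_us = frames_lost / aggregate_fps * 1e6

print(f"Aggregate frame rate: {aggregate_fps:,.0f} frames/s")
print(f"Estimated interruption: {interruption_us:.1f} microseconds")
```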

Hot-swapping

All switch blades in the M6100 chassis are hot-swappable. They can be inserted into or removed from the chassis without disrupting existing network traffic on the other blades, even traffic passing between other active blades across the chassis backplane.

Using the Ixia test system we applied 100-percent, bi-directional, line-rate throughput between four 1GE ports of the XCM8948 blade and four 1GE ports of the XCM8944 blade – a total throughput of 7.9 Gbps. We then extracted, and re-inserted, the XCM8924 blade in the third slot. The Ixia test system reported no frame loss resulting from the removal or insertion of the third switch blade.

Power Supply Failover

The NETGEAR M6100 chassis can accommodate up to four power supplies, one or more of which can be redundant – ensuring that if a power supply fails, the switch keeps running. We tested power failover using two power supplies – unplugging each and then plugging it back in, one at a time, while traffic was being applied by the Ixia test system.

For this test we sent full-load, bi-directional traffic between four 1GE ports on the XCM8948 and XCM8944 switch blades – a total of 7.9 Gbps was being sent during the power-supply disruption testing. The result: There was no frame loss or any throughput variation as a result of the power-supply failover.


[Chart: High Availability - Duration and Effect of Failover. LAG peer-link interruption: 1,210 frames dropped in 744.3 microseconds; Hot blade swap: 0% loss; Power interruption: 0% loss. Source: Miercom, January 2015]

Brief interruption: Shown above are the data-loss results of the High Availability tests conducted on the NETGEAR M6100 chassis with switching blades in three slots. There was no data loss or system disruption from either power-supply failover or hot blade swap. Disconnecting a 10-Gbps LAG link between switches resulted in a one-time loss of 1,210 frames (1,518 bytes each). Traffic rerouted and the interruption lasted less than a millisecond.


About Miercom

Miercom has published hundreds of network-product-comparison analyses in leading trade periodicals and other publications. Miercom’s reputation as the leading, independent product test center is undisputed.

Miercom’s private test services include competitive product analyses, as well as individual product evaluations. Miercom features comprehensive certification and test programs including: Certified Interoperable, Certified Reliable, Certified Secure and Certified Green. Products may also be evaluated under the Performance Verified program, the industry’s most thorough and trusted assessment for product usability, feature validation and performance.

Use of This Report

Every effort was made to ensure the accuracy of the data contained in this report, but errors and/or oversights can occur. The information documented in this report may also rely on various test tools, the accuracy of which is beyond our control. Furthermore, the document relies on certain representations by the vendors that were reasonably verified by Miercom but are beyond our control to verify with 100 percent certainty.

This document is provided "as is" by Miercom, which gives no warranty, representation or undertaking, whether express or implied, and accepts no legal responsibility, whether direct or indirect, for the accuracy, completeness, usefulness or suitability of any information contained in this report.

No part of any document may be reproduced, in whole or in part, without the specific written permission of Miercom or NETGEAR, Inc. All trademarks used in the document are owned by their respective owners. You agree not to use any trademark in or as the whole or part of your own trademarks in connection with any activities, products or services which are not ours, or in a manner which may be confusing, misleading or deceptive, or in a manner that disparages us or our information, projects or developments.