Q. What is a Best Practice?
A. An approach, validated through specific testing scenarios or observed frequently in customers' environments, that produces an optimal outcome to a particular technical challenge.
Q. Are Best Practices the same as Standards?
A. No. They simply represent an approach that is widely understood to produce the best possible outcome.
Q. What are the benefits of observing Best Practices?
A. Reduced risk, faster root-cause analysis of issues, faster response times from support organizations, and the best chance of achieving optimal performance.
General Best Practices: For All Isilon-Based Datastores
• Use network segmentation (e.g. VLANs) to separate VM network traffic from VMkernel storage traffic
  o Best practice for optimal performance
  o For optimal security, use an isolated (or trusted) network for all storage traffic
• Test Jumbo frame (MTU=9000) performance in your environment
  o Fully supported by both VMware and EMC
  o Overall performance results depend on multiple variables
  o Use whichever configuration produces the best overall performance
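A quick way to validate jumbo-frame support end to end is to ping the storage interface from the ESXi shell with the don't-fragment bit set. A minimal sketch; the IP address below is a placeholder for your own Isilon pool address:

```shell
# Verify that a 9000-byte MTU path exists between the ESXi host and the
# Isilon cluster. 8972 = 9000 bytes minus 20 (IP header) and 8 (ICMP header).
# -d sets the "don't fragment" bit so an undersized link fails loudly.
vmkping -d -s 8972 10.16.156.25

# If the jumbo ping fails while a normal vmkping succeeds, an MTU mismatch
# exists somewhere on the path (vSwitch, VMkernel port, physical switch, or node).
vmkping 10.16.156.25
```

If the large ping fails, check the MTU on every hop before concluding that jumbo frames hurt performance in your environment.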
For optimal datastore performance and availability:
• Use 10Gb/s Ethernet connections if possible for best performance
• Use vSphere Storage I/O Control to manage VM storage utilization
• Use Network I/O control to manage network bandwidth for storage traffic under heavy workloads
For optimal datastore performance and availability (continued):
• Size your storage cluster first for performance, and then for capacity
• Minimize the number of network hops between vSphere hosts and Isilon storage:
  o EMC recommends using the same subnet
  o Use the same switch, if possible
• Ensure redundant network links exist between vSphere hosts and Isilon nodes for all datastores
  o HA path configuration and administration differs for each datastore type
• Different workloads may require different storage configuration settings
  o Higher data protection levels vs. higher performance requirements
  o Analyze workload patterns
NFS Best Practices: Optimal Configuration for High Availability
Network Redundancy Options
• Static Link Aggregation using 802.3ad LAG
  o Requires compatible switch and NIC hardware
  o Protects against NIC/path failures
  o Does not increase performance
• SmartConnect Dynamic IP Address Pools
  o Automatically assigns IP addresses to member interfaces on each node
  o Interface or node failure causes SmartConnect to reassign IP address(es) to remaining nodes in the cluster
  o Datastore mapping can be IP-address based, or use DNS round-robin
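With a SmartConnect zone delegated in DNS, each new NFS mount resolves to a different node IP from the dynamic pool, spreading datastores across the cluster. A sketch using the standard esxcli NFS namespace; the zone name, IP, and export paths are made-up examples:

```shell
# Mount a datastore through the SmartConnect zone name; DNS round-robin
# hands each mount request a different node IP from the dynamic pool.
esxcli storage nfs add --host isilon.example.com --share /ifs/vmware/ds01 --volume-name ds01

# Alternatively, pin a datastore to a specific pool IP address.
esxcli storage nfs add --host 10.16.156.25 --share /ifs/vmware/ds02 --volume-name ds02

# Confirm which server each datastore actually resolved to.
esxcli storage nfs list
```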
NFS Best Practices: Optimal Configuration for Performance (continued)
Creating multiple datastores increases throughput
• Best design uses a mesh topology
• Every vSphere host connects to every datastore
• VMs can be created on any datastore to balance the I/O workload across datastores
NFS Configuration Gotcha #1
ESXi supports NFS, but more specifically:
– NFS version 3 only, no support for v2 or v4.
– Over TCP only, no support for UDP.
The UI and ESXi logs will inform you if you attempt to use a version or protocol other than version 3 over TCP:

NasVsi: 107: Command: (mount) Server: (madpat) IP: (10.16.156.25) Path: (/cormac) Label: (demo) Options: (None)
WARNING: NFS: 1007: Server (10.16.156.25) does not support NFS Program (100003) Version (3) Protocol (TCP)
Increasing the Maximum Number of NFS Mounts
The default configuration only allows 8 NFS mounts per ESXi server. To enable more, start the vSphere Client, select the host from the inventory, and click Advanced Settings on the Configuration tab. In the Advanced Settings dialog box, Net.TcpipHeapSize needs to be adjusted if NFS.MaxVolumes is increased, or you may deplete the heap.
Symptoms of running out of heap are documented here: http://kb.vmware.com/kb/1007332
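The same settings can also be changed from the ESXi shell. The values below follow the general guidance for ESXi 5.x, but should be checked against the KB article for your exact build:

```shell
# Raise the per-host NFS mount limit (default is 8 on ESXi 5.0).
esxcli system settings advanced set -o /NFS/MaxVolumes -i 64

# Grow the TCP/IP heap to match, or mounts may fail once the heap is depleted.
esxcli system settings advanced set -o /Net/TcpipHeapSize -i 32
esxcli system settings advanced set -o /Net/TcpipHeapMax -i 128

# A reboot is required for the heap changes to take effect.
```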
NIC Teaming – Failover, not Load Balancing
There is only one active connection between the ESXi server and a single storage target (mount point). This means that although there may be alternate connections available for failover, the bandwidth for a single datastore and the underlying storage is limited to what a single connection can provide.
To leverage more available bandwidth, there must be multiple connections from the ESXi server to the storage targets. One would need to configure multiple datastores, with each datastore using separate connections between the server and the storage, i.e. NFS shares presented on different IP addresses.
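In practice, this means presenting each export on its own IP address and mounting each one separately, so the vmknic teaming policy can place the sessions on different physical uplinks. The addresses and paths here are illustrative only:

```shell
# Two datastores, each mounted against a different Isilon pool IP.
# Each NFS mount is a separate TCP connection, so with an appropriate
# teaming policy the sessions can ride different physical NICs.
esxcli storage nfs add --host 10.16.156.25 --share /ifs/vmware/ds01 --volume-name ds01
esxcli storage nfs add --host 10.16.156.26 --share /ifs/vmware/ds02 --volume-name ds02
```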
VLANs for Isolation & Security of NFS Traffic
Storage traffic is transmitted as clear text across the LAN. Since ESXi 5.0 continues to use NFS v3, there is no built-in encryption mechanism for the traffic. A best practice is to use trusted networks for NFS.
– This may entail using separate physical switches or leveraging a private VLAN.
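If a dedicated VLAN is used rather than separate physical switches, it can be applied to the NFS port group from the ESXi shell. The port group name and VLAN ID below are examples for your own environment:

```shell
# Tag the VMkernel port group carrying NFS traffic with a dedicated VLAN.
esxcli network vswitch standard portgroup set --portgroup-name "NFS-VMkernel" --vlan-id 100

# Verify the VLAN assignment.
esxcli network vswitch standard portgroup list
```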
iSCSI Datastore Overview
• iSCSI LUNs are constructed and treated as files within OneFS
• Mounted over an Ethernet network using iSCSI initiators
• EMC supports both thin and thick provisioning

Advantages of iSCSI datastores:
• Raw-device mapping supported for VMs that require it
• May provide better throughput performance for some workload types
• iSCSI LUNs can be cloned for certain VM management scenarios
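Connecting an ESXi 5.x host to iSCSI targets typically goes through the software iSCSI adapter. A minimal sketch, assuming the adapter enumerates as vmhba33 and using a placeholder target address:

```shell
# Enable the software iSCSI initiator on the host.
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the Isilon target address.
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.16.156.25:3260

# Rescan so newly discovered LUNs show up as devices.
esxcli storage core adapter rescan --adapter vmhba33
```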
Gotcha – Improper Device Removal
Improper removal of a physical device containing a VMFS datastore or RDM could result in an APD (All Paths Down) state. Improvements have been made in ESX 4.x & 5.0. Follow the steps outlined in http://kb.vmware.com/kb/1015084 for ESX 4.x and http://kb.vmware.com/kb/2004605 for ESXi 5.0 when removing a datastore.
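For ESXi 5.0, the KB procedure boils down to unmounting the datastore and detaching the device before unpresenting it from the array. The datastore label and NAA identifier below are placeholders:

```shell
# 1. Unmount the VMFS datastore (ensure no VMs, templates, or heartbeats use it).
esxcli storage filesystem unmount --volume-label ds01

# 2. Detach the underlying device so ESXi stops issuing I/O to it.
esxcli storage core device set --device naa.600a0b80001234 --state=off

# 3. Only then remove/unpresent the device from the array side.
```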