
Architecture Guide

Hitachi Data Systems

Hitachi High-performance NAS Platform, Powered by BlueArc



    Executive Summary

Hitachi High-performance NAS Platform's SiliconServer Architecture, provided by BlueArc, enables a revolutionary step in file servers by creating a hardware-accelerated file system that can scale throughput, IOPS, and capacity well beyond conventional software-based file servers. With its ability to virtualize a large storage pool of up to 512TB of multitiered storage, High-performance NAS Platform can scale with growing storage requirements and become a competitive advantage for business processes. This document sets forth the technical details of the architecture to better help technical readers understand the unique hardware-accelerated design and the object-based file system.

Introduction

With the massive increase in the number of desktop users, high-end workstations, application servers, high-performance computing (HPC), and nodes in compute clusters over the last decade, conventional network attached storage (NAS) solutions have been challenged to meet the resulting acceleration in customer requirements. While file server vendors have offered systems with faster off-the-shelf components and CPUs as they became available, storage demands have far outpaced the ability of these CPU-based appliances to keep up. To meet the increasing performance and capacity requirements, companies have been forced to deploy multiple NAS appliances concurrently, reducing the benefit of NAS, decentralizing data, and complicating storage management.

Many organizations looking for a solution to this performance deficit turned to storage area network (SAN) implementations, but there are challenges with SANs that they did not experience with NAS. The first is high infrastructure cost. Adding one or two expensive Fibre Channel host bus adapters (HBAs) to each high-end workstation, each application and database server, and each cluster node is an expensive proposition compared to using existing Ethernet network interface cards (NICs). Expensive file system license fees and maintenance costs and complexity add to the burden. But by far the biggest challenge to customers is that a SAN alone does not provide the standards-based shared file access needed for simple data management.

Another solution that some organizations are beginning to look at is the concept of storage clusters or grids. Both will be referred to as storage-grids for the remainder of this discussion. For conventional NAS appliance vendors that cannot scale performance or capacity, this strategy is not an option, but rather a necessity. Although storage-grids are interesting, they are far from ready for prime time. Consider the rise of compute-clusters as an allegory. In the summer of 1994, Thomas Sterling and Don Becker, working as contractors to NASA, built a clustered computer consisting of 16 DX4 processors connected by channel bonded Ethernet.

They called their machine Beowulf. Now, years later, compute-clusters are commonly used in research and are gaining wider acceptance in commercial enterprises. The key to this acceptance is that the complex software that ties compute-clusters together and distributes tasks to the nodes has finally begun to mature to a point where companies can rely upon it for stable services.

Some aspects of compute-clusters will translate directly to storage-grids; however, enormous complexities are introduced as well. Locking, cache coherency, client-side caching, and many other aspects of sharing a file system make it a daunting task.



This will be solved over time, but as with compute-clusters, it will take a significant amount of time to mature. The Internet Engineering Task Force (IETF) is proposing a new standard called pNFS. pNFS is an extension to NFSv4 and will help focus the industry toward a standards-based solution. The SiliconServer Architecture design illustrates a commitment to standards-based protocols and methods while taking advantage of its unique hardware architecture for acceleration of data flow. Moreover, the SiliconServer Architecture delivers a significantly less complex solution by keeping the node count low, while still achieving the desired performance. It provides the fastest nodes for storage grids, ensuring reduced complexity and cost, while delivering best-in-class performance and scalability.

The SiliconServer Architecture was developed in 1998 to overcome the limitations of scaling an individual CPU-based NAS server. It was a fresh approach to the problem, with the fundamental belief that file services could be accelerated using hardware-based state machines and massive parallelization, in the same way Ethernet and TCP/IP had been accelerated by switch and router vendors. The network vendors moved from software-based solutions to hardware-accelerated solutions to accelerate packet flow processing. It seemed logical that file services should follow this same evolution, and that an evolutionary new design would be required to attain the same benefits as experienced in networking.

SiliconServer's Founding Design Principles

SiliconServer adheres to key design principles: keeping it simple, maintaining a standards-based approach, and ensuring data availability, while enabling significant increases in performance and capacity.

High-performance NAS Platform uses the fourth-generation SiliconServer, which delivers significant performance, throughput, and capacity gains, as well as an excellent return on investment. This Architecture Guide discusses the SiliconServer Architecture in detail, including its hardware architecture, unique object-based file system structure, and methods of acceleration. It also looks at how this architecture enables innovative product designs to achieve performance and capacity levels dramatically higher than competitors in the network storage market and, in fact, allows the High-performance NAS Platform to be a NAS and iSCSI hybrid product that delivers the benefits of both SAN and NAS.

The High-performance NAS Platform is the fastest filer node to date, and Hitachi Data Systems will continue to enhance offerings at the node level, as well as ensure a path to storage-grids for organizations that exceed the throughput of the High-performance NAS Platform. The High-performance NAS Platform supports all lines of storage offered by Hitachi Data Systems. Organizations can have the industry's most powerful storage and the fastest NAS in one solution.



    Architecting a Better Network Storage Solution

It Starts with the Hardware Design

Looking at the network sector's technology advancements, networking progressed from routing functionality on standard UNIX servers, to specialized CPU-based appliances, to hardware-based router and switch solutions, where all processing is handled in the hardware, utilizing custom firmware and operating systems to drive the chipsets. SiliconServer Architecture was an evolutionary design that applied these proven market fundamentals to the NAS market.

The goals were to solve the many pain points experienced by organizations currently employing NAS and SAN in the areas of performance, scalability, backup, management, and ease of use. The requirements were to build the best possible single or clustered NAS server that could track and even exceed the requirements of most customers, and could also be leveraged to create a simpler storage grid to scale even further as the data crunch continued to grow.

To create the next-generation NAS server, the architecture required:

• Significantly higher throughput and IOPS than CPU-based appliances and servers
• Highly scalable storage capacity that would not reduce system performance
• The ability to virtualize storage in order to extract maximum performance
• Adherence to traditional protocol standards
• Flexibility to add new innovative features and new protocol standards

Unlike conventional CPU-based NAS architectures, where you never really know how fast they are going to go until you try them, the SiliconServer Architecture is designed from the start to achieve a certain level of performance. The design engineers decide up front how fast they want it to go, based on what they think is achievable at acceptable cost with the appropriate technologies. This is the beauty of a silicon-based design: the on-paper goals translate directly into the final product capabilities. SiliconServer Architecture has consistently produced the sought-after performance anticipated in the design process and has met or exceeded customer expectations for a network storage solution.

The Modular Chassis

The High-performance NAS Platform chassis design was the first critical design consideration, as it would need to scale through newer generations of modules supporting increased throughput, IOPS, and scalability. The modular chassis design therefore needed to scale to 40Gbit/sec total throughput. A passive backplane design was chosen to support these requirements. The backplane has no active components and creates the foundation for a high-availability design, which includes dual redundant hot-pluggable power supplies and fans, as well as dual battery backup for NVRAM.



The passive backplane incorporates pathways upon which Low Voltage Differential Signaling (LVDS) guarantees low noise and very high throughput. The ANSI EIA/TIA-644 standard for LVDS is well suited for a variety of applications, including clock distribution and point-to-point and point-to-multipoint signal distribution. Further discussion of LVDS is beyond the scope of this paper; however, a simple Internet search will return significant information if you wish to understand LVDS technology. The hardware-based logic, or Field Programmable Gate Arrays (FPGAs), connects directly to these high-speed LVDS Pipelines (also known as the FastPath Pipeline), meeting the high-throughput requirement of current and future product designs.

A key advantage of this design is the point-to-point relationship between the FPGAs along the pipelines. While traditional computers are filled with shared buses requiring arbitration between processes, this pipeline architecture allows data to transfer between logical blocks in a point-to-point fashion, ensuring no conflicts or bottlenecks. For example, data being processed and transferred from a network process to a file system process is completely independent of all other data transfers; it would have no impact on data moving to the storage interface. This is vastly different from conventional file servers, where all I/O must navigate through shared buses and memory, which can cause significant performance reductions and fluctuations. The backplane provides separate pipelines to transmit and receive data, meeting only on the storage module, in order to guarantee full-duplex performance. The convergence at the storage module allows a read directly from cache after a write, and it is discussed further in this Architecture Guide.

The Modules

Four physical modules are inserted into the rear of the chassis. These are the Network Interface Module (NIM), two File System Modules (FSA and FSB), and the Storage Interface Module (SIM). Those who need to accelerate performance in a CIFS-intensive environment have the option to choose the new File System Module X (FSX) instead of the File System A module. Each module has clear responsibilities and typically operates completely independently from the others, although the FSA/FSX and FSB modules do have a cooperative relationship.

Next-generation modules will continue the advancement of performance, port count, updated memory, and FPGA speeds.

Network Interface Module (NIM)

Responsible for:

• High-performance Gigabit Ethernet (GigE) connectivity
• Hardware processing of protocols, OSI Layers 1-4
• Out-of-band management access

The NIM is responsible for handling all Ethernet-facing I/O functions corresponding to OSI Layers 1-4. The functions implemented on the NIM include handling Ethernet and Jumbo Ethernet frames up to 9000 bytes, ARP, the IP protocol and routing, and of course the TCP and UDP protocols. The NIM works as an independent unit within the architecture. It has its own parallel state machines and memory banks. Like the overall architecture design, the TCP/IP stack is serviced in hardware on the NIM module. This design allows it to handle 64,000 sessions concurrently. Multiple hardware state machines, programmed into FPGAs and running in a massively parallel architecture, ensure that there are no wait-states. This results in nearly instantaneous network response, the highest performance, and the lowest latency. In fact, the predecessor to the NIM was one of the world's first TCP Offload Engines (TOE), similar to the ones used in some PC-based appliances today.



The purpose-built NIM provides an ideal network interface to the High-performance NAS Platform. A key difference between the NIM and an off-the-shelf TOE card is the substantial amount of resources available. While most TOE cards have no more than 64MB of buffer memory, the NIM has more than 2.75GB of buffer memory supporting the parallel state machines in the FPGAs. This allows the NIM to handle significantly higher throughput and more simultaneous connections. TOE cards used in PC-based architectures are also limited by PCI and memory bus contention in the server, whereas pipelines in the High-performance NAS Platform are contention free. Also, TOE cards used on NAS filers usually only handle certain protocols, putting the burden on the central CPU to handle other protocols, which affects overall performance and functions, whereas in the High-performance NAS Platform FPGAs handle virtually all protocols.

The current NIM module of the High-performance NAS Platform offers six GigE ports with SFP (Small Form-factor Pluggable) media to allow for either optical or copper physical interconnects. The NIM supports link aggregation (IEEE 802.3ad), including the Link Aggregation Control Protocol (LACP), thus supporting dynamic changes to the aggregation and enabling higher availability and higher throughput to the data, which is critical for high-performance shared data environments. Future NIM modules will scale to higher numbers of GigE and 10GigE ports, allowing for increased throughput and connectivity. The NIM card also has four shared Fast Ethernet ports for out-of-band management, which allow for direct access and/or connection to the System Management Unit (SMU) and the other devices that make up the total solution.

File System Modules (FSA/FSX and FSB)

Responsible for:

• Advanced features
• OSI Layer 5, 6, and 7 protocols
  o NFS, CIFS, iSCSI, NDMP
• Security and authentication
• SiliconFS (hardware file system)
• Object store layer
• File system attribute caching (FSX)
• Metadata cache management
• NVRAM logging

The two File System Modules work collaboratively to deliver the advanced features of the High-performance NAS Platform. The FSB board handles data movement and the FSA handles data management. The FSA is not in line with the pipeline. Rather, this module controls the advanced management and exception processing functions of the file system, much like the supervisor module of a high-end network switch controls the higher order features of the switch. Snapshot, quotas, and file and directory locking are a few examples of processes managed by the FSA module. It accomplishes these tasks by sending instructions to the FSB module, which actually handles the data control and movement associated with these tasks. The FSA module has dedicated resources in support of its supervisory role, including 4GB of memory. Administrators have the option to use FSX instead of FSA to accelerate performance in a CIFS-intensive environment.

As mentioned, the FSB module handles all data movement and sits directly on the FastPath pipeline, transforming, sending, and receiving data to and from the NIM and the Storage Interface Module (SIM). The FSB module contains the highest population of FPGAs in the system and also contains 19.5GB of memory distributed across different functions.



It is the FSB module that moves and organizes the data via the object-based SiliconFS file system. The file system is discussed in detail later in this Architecture Guide.

When the FSB module receives a request from the NIM module, it will inspect the request to determine what is required to fulfill it, notify the FSA module of the arrival of the new request, and take any action that the FSA may deem necessary, if any. The protocol request is decoded and transformed into the Object Store API for further processing. This critical point is an example of where the parallel state-machine architecture really shows its benefit. Several functions will execute simultaneously:

• The data is pushed into NVRAM to guarantee the data is captured
• The data is pushed across the High Speed Cluster Interconnect to update the cluster partner NVRAM, if it exists
• The data is sent over the FastPath pipeline to the SIM for further processing
• A response packet is formed

Upon successful completion of all of these elements, the response packet can be transmitted by the FSB back to the NIM, which will in turn send the response back to the client. Thus, what would be four serial steps in a traditional file server are collapsed into a single atomic parallel step. This kind of parallelization occurs throughout the entire system whenever possible.

Storage Interface Module (SIM)

Responsible for:

• Fibre Channel processing
• SCSI command processing
• Sector cache management
• Parallel striping
• Cluster interconnect
• NVRAM mirroring

The SIM module has two distinct responsibilities. The first role is the handling and management of raw data on the SAN storage back end. The second responsibility is for the high-availability features of the SAN and the cluster interconnect (when configured in a cluster).

The SIM provides the redundant back-end SAN connection to the storage pool using four Gigabit Fibre Channel ports. The SIM logically organizes LUNs on the SAN into a virtualized pool of storage so that data is striped across an adequate number of drives in order to provide the high-speed throughput required for the NAS head. This parallel striping is a key advantage, as it allows more drive spindles to be involved in all data transfers, ensuring the best storage performance. (The virtualized storage is covered in detail later in this Architecture Guide.)

The SIM provides the high-availability failover capabilities for clustered systems. The SIM has two HSCI (High Speed Cluster Interconnect) ports, used both for cluster communications and as the avenue for mirroring NVRAM between nodes for the highest degree of data protection. The SIM card uses two 10GigE ports for clustering, which provide an extremely fast HSCI connection. These connections are required for additional N-way cluster performance and can handle the increased inter-node communication required to support a high-performance storage-grid.



Memory (Buffers and Caches)

In order to achieve the performance design goal, there are a number of considerations to take into account. In particular, the minimum memory bandwidth requirements throughout the system are critical. High-performance NAS Platform has a robust set of memory pools, each dedicated to certain tasks, and these pools must operate within certain tolerances in order to achieve the desired performance.

The amount of memory in each module is summarized below. This memory is distributed across various tasks on each module. By segregating memory pools (there are several dozen in the entire system) and ensuring that each has adequate bandwidth, the SiliconServer Architecture ensures that memory access will never be a bottleneck. This is critical to sustaining the High-performance NAS Platform's high-throughput performance.

High-performance NAS Platform model 2200 has 32GB of distributed memory, cache, and NVRAM, distributed as follows:

• NIM module: 2.75GB, network processing
• FSA: 4GB, protocol handshaking and file system management
• FSB: 15.5GB, metadata, NVRAM, and control memory
• SIM: 9.75GB, sector cache and control memory
• Total memory: 32GB

In designing memory requirements for a high-speed system, two key elements must be taken into consideration. First, peak transfer rates on an SDRAM interface cannot be sustained, due to various per-transfer overheads. Second, for the various memories contained in each of the modules, the memory bandwidth must be doubled to support simultaneous high-performance reads and writes, as data is written into memory and pulled out. Thus, memory bandwidth in the architecture is designed to have approximately 2.5 times (2.5x) the bandwidth required to sustain throughput. There are also areas of the architecture where memory bandwidth is even greater. The SIM, for example, has 8GB of raw block sector cache. On this module, the transmit and receive FastPath pipelines intersect, as data that is written into the sector cache must be immediately available for reading by other users even though the data may not yet have made it to disk. In this scenario, four simultaneous types of access into the memory must be considered:

• Writes coming from the FSB
• Reads being returned to the FSB
• Updates of the cache from data coming from the SAN
• Reads from the cache in order to flush data to the SAN

As a result, the SIM's sector cache must deliver 5x the bandwidth of the desired throughput of the overall system, which it does.
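To make the 2.5x and 5x sizing rules concrete, the short sketch below applies them to the 40Gbit/sec backplane design goal cited earlier in this guide. The 1.25 overhead factor and the function itself are illustrative assumptions used only to show the arithmetic, not a published specification.

```python
# Sketch of the memory-bandwidth sizing rule described above (assumptions noted).
TARGET_THROUGHPUT_GBPS = 40.0   # full-duplex design goal of the backplane
SDRAM_OVERHEAD_FACTOR = 1.25    # assumed derating for per-transfer overheads

def required_bandwidth_gbps(concurrent_accesses: int) -> float:
    """Bandwidth a memory pool must sustain, in Gbit/sec.

    concurrent_accesses is how many streams touch the pool at once:
    2 for a simple write-in/read-out buffer, 4 for the SIM sector cache.
    """
    return TARGET_THROUGHPUT_GBPS * concurrent_accesses * SDRAM_OVERHEAD_FACTOR

print(required_bandwidth_gbps(2))  # 100.0 Gbit/sec, i.e. 2.5x the target
print(required_bandwidth_gbps(4))  # 200.0 Gbit/sec, i.e. 5x the target
```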

    Field Programmable Gate Arrays (FPGA)

At the heart of the SiliconServer Architecture is a unique implementation of parallel state machines in FPGAs. An FPGA is an integrated circuit that can be reprogrammed in the field, giving it the flexibility to perform new or updated tasks, support new protocols, or resolve issues. Upgrades are accomplished via a simple update, as performed on switches or routers today, which can change the FPGA configuration to perform new functions or protocols.

Today's FPGAs are high-performance hardware components with their own memory, input/output buffers, and clock distribution, all embedded within the chip.



FPGAs are similar to ASICs (Application Specific Integrated Circuits), used in high-speed switches and routers, but ASICs are not reprogrammable in the field and are generally used in a high-volume, non-changing product.

Hardware developers sometimes do their initial designs and releases on an FPGA, as FPGAs allow for quick ad hoc changes during the design phase and short production runs. Once the logic is locked down, they move the logic to an ASIC as product volumes ramp and all features are locked in, to get to a fixed, lower cost design. Yet in the SiliconServer Architecture, the FPGA serves as the final design implementation in order to provide the flexibility to add new features and support new protocols in hardware as they are introduced to the market. High-performance switches and routers use FPGAs and ASICs to pump network data for obvious reasons. Now, with the High-performance NAS Platform, the same capability exists for network storage.

For an analogy of how the FPGAs work, think of them as little factories. There are a number of loading docks called Input/Output blocks, workers called logic blocks, and, connecting everything up, assembly lines called Programmable Interconnects. Data enters through an input block, much like a receiving dock. The data is examined by a logic block and routed along the Programmable Interconnect to another logic block. Each logic block is capable of doing its task unfettered by whatever else is happening inside the FPGA. These are individual tasks, such as looking for a particular pattern in a data stream or performing a math function. The logic blocks perform their tasks within strict time constraints so that all finish at virtually the same time. This period of activity is gated by the clock cycle of the FPGA. High-performance NAS Platform FPGAs operate at 50 million cycles per second.

[Figure: Parallel FPGA state machines, each paired with dedicated memory and driven by a common clock, concurrently servicing TCP/IP, NFS, CIFS, iSCSI, NDMP, metadata, block allocation, block retrieval, snapshots, Virtual Volumes, NVRAM, and Fibre Channel within each clock cycle.]

Given the 750,000+ logic blocks inside the High-performance NAS Platform modules, this yields a peak processing capability of approximately 50 trillion tasks per second, over 10,000 times more tasks than the fastest general-purpose CPU. (NOTE: As of this writing, Intel's fastest microprocessor was rated at 3.8 billion tasks per second.)
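As a rough order-of-magnitude check, the small calculation below uses only the figures quoted above; since 750,000 is a lower bound on the logic block count, the true peak is somewhat higher than the estimate it produces.

```python
# Order-of-magnitude check using the figures quoted above.
LOGIC_BLOCKS = 750_000          # lower bound on logic blocks across the modules
FPGA_CLOCK_HZ = 50_000_000      # 50 million cycles per second
CPU_TASKS_PER_SEC = 3.8e9       # the general-purpose CPU figure cited above

peak_tasks = LOGIC_BLOCKS * FPGA_CLOCK_HZ      # 3.75e13, i.e. tens of trillions
advantage = peak_tasks / CPU_TASKS_PER_SEC     # roughly 10,000x

print(f"{peak_tasks:.2e} tasks/sec, ~{advantage:,.0f}x a general-purpose CPU")
```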

This massive parallel processing capability is what drives the architecture design and allows throughput improvement of nearly 100 percent per product generation. This contrasts sharply with conventional network storage servers, which rely on general-purpose CPUs that have only been able to scale at approximately 30 percent per product generation.



The fundamental failing of the CPU is that with each atomic step, a software delay is introduced as tasks, which are serially queued up to be processed, demand system resources in order to execute. When a client machine makes a request to a software appliance, every attempt is made to fulfill that request as far through the compute process as possible. The steps range from the device driver of the network card initially receiving the request, through error checking, to translation of the request into the file system interface.

However, this is a best-effort strategy. In fact, it can only be a best-effort strategy, because the CPU at the heart of software appliances is limited to performing only one task at a time and must, by definition, time-share.

[Figure: In a conventional CPU-based filer, metadata lookups and fetches, RAID rebuilds, block allocation and retrieval, NVRAM writes, and OS operations are all serialized through a single CPU and shared main memory, one task per clock cycle.]

This issue is exacerbated when advanced features or processes such as snapshot, mirroring, clustering, NDMP backup, and in some cases even RAID protection must be handled by the CPU. Each of these processes causes variations and slowdowns that adversely impact the throughput of traditional architectures, as the CPU's processing capability is diluted by having to time-share among these various tasks.

Virtualized Storage, Object-Store, and File System

Now that the exclusive hardware advantage of the SiliconServer Architecture has been discussed, it is equally important to examine and understand the external storage system and the unique file system that take advantage of the high-performance and highly scalable architecture. As data passes through layers of FPGAs, the data itself is organized and managed via several layers within the Silicon File System. The best way to understand these layers is to work from the physical storage up to the actual file system seen by the end users.


[Figure: Virtualization layers of the NAS cluster, bottom to top: virtual tiered storage with parallel RAID striping across hundreds of spindles per span; virtual storage pools with multiple file systems per storage pool; virtual file systems with multiple dynamic Virtual Volumes per file system; and up to 32 Virtual Servers per system under a cluster name space with a single root of up to 512TB.]



Parallel Striping

The product is designed to meet two requirements: first, protect the data, and second, provide the high throughput and performance needed to feed the High-performance NAS Platform quickly enough to keep up with its throughput potential and each customer's requirements. The SAN back end usually consists of two or more redundant Fibre Channel switches, which allows for a scalable back end and high availability. The switches are cross-connected to the storage systems and the High-performance NAS Platform SIM module, providing the high-availability failover paths.

The LUNs are usually configured for RAID-5 protection, providing failed-disk protection and reconstruction through rotating parity, again ensuring both high availability and good read/write performance. The SIM module then stripes up to 32 LUNs into a larger logical unit called a Stripe. Stripes are organized into a higher-level entity known as a Span. New Stripes may be added to a Span at any time, without requiring any downtime, allowing dynamically scalable volumes and thin provisioning. This design allows the High-performance NAS Platform to scale in both capacity and back-end performance. Customers can optionally scale performance by adding more storage systems, or scale capacity by simply adding more disks to existing storage systems.

I/O is issued to the Span, which in turn sends it to an underlying Stripe, and all of the disk drives within that Stripe are brought to bear to achieve the enhanced performance required to sustain throughput. This feature, called SiliconStack, allows the High-performance NAS Platform to scale storage without a reduction in performance. In fact, adding more storage actually increases the performance of the High-performance NAS Platform, as it provides more spindles and controllers to feed the platform's high-throughput capability. This, combined with the SiliconFS, allows scalability up to 512TB.
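A minimal sketch of the LUN/Stripe/Span layering described above follows. The chunk size, the fixed per-Stripe capacity, and the placement policy are illustrative assumptions; the guide does not publish the actual on-disk layout.

```python
# Sketch of the Span -> Stripe -> LUN layering (layout details are assumptions).
def locate(span_block: int, stripes: list[list[str]],
           stripe_blocks: int = 1_000_000, chunk_blocks: int = 64):
    """Map a logical block within a Span to (stripe index, LUN, block in LUN).

    Each Stripe contributes stripe_blocks blocks to the Span and spreads them
    chunk by chunk across its LUNs (up to 32 per Stripe), so every spindle in
    the Stripe participates in large transfers.
    """
    stripe_idx, offset = divmod(span_block, stripe_blocks)
    luns = stripes[stripe_idx]
    chunk, within = divmod(offset, chunk_blocks)
    lun = luns[chunk % len(luns)]
    lun_block = (chunk // len(luns)) * chunk_blocks + within
    return stripe_idx, lun, lun_block

# A Span of two Stripes, each striped across four RAID-5 LUNs.
span = [["lun0", "lun1", "lun2", "lun3"], ["lun4", "lun5", "lun6", "lun7"]]
print(locate(0, span))    # (0, 'lun0', 0)
print(locate(70, span))   # (0, 'lun1', 6)
```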

Virtualization and Tiered Storage

The High-performance NAS Platform delivers advanced virtualization features. Its Virtual Servers feature enables partitioning of storage resources, allows server consolidation, and provides multiprotocol support. When use patterns change or spikes in I/O demand occur, administrators can balance workloads and respond rapidly. They can also create up to 32 virtual servers per node or cluster within the same management framework, easily coordinating throughput by dedicating ports and separate IP addresses to virtual servers.

In addition to presenting an entire file system, the High-performance NAS Platform delivers flexible partitioning, called Virtual Volumes. Administrators may not wish to expose the entirety of the file system to everyone, and through the use of Virtual Volumes, they can present a subset of the file system space to a specific group or user. Virtual Volumes are logical containers, which can be grown and contracted with a simple size control implementation. Client machines see changes to the size of Virtual Volumes instantly. When shared as an NFS export or CIFS share, the user or application sees only the available space assigned to the Virtual Volume. Administrators can use them to granularly control directory, project, or user space.

The sum of the space controlled by the Virtual Volumes may be greater than the size of the entire file system. This over-subscription approach, sometimes referred to as thin provisioning, provides additional flexibility when a project's growth rate is indeterminate. It allows administrators to present the appearance of a larger volume to their users and purchase additional storage as needed, while showing a much larger storage pool than is actually available.



Further granular control can be realized by assigning user and group quotas to a Virtual Volume. Each Virtual Volume may have its own set of user and group quotas, and default quota values can be assigned to undefined users and groups. Of course, both hard and soft quotas are supported by the system, as well as quotas by file count.
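The sketch below illustrates, in simplified form, how per-Virtual Volume quotas of this kind might be evaluated. The field names and the three-way allow/warn/deny outcome are illustrative assumptions, not the platform's actual interface.

```python
# Illustrative model of per-Virtual Volume quotas (names and fields are assumptions).
from dataclasses import dataclass, field

@dataclass
class Quota:
    soft_bytes: int    # warn when usage crosses this
    hard_bytes: int    # refuse writes beyond this
    hard_files: int    # quota by file count

@dataclass
class VirtualVolume:
    size_bytes: int                      # logical size presented to clients
    default_quota: Quota                 # applied to undefined users and groups
    user_quotas: dict[str, Quota] = field(default_factory=dict)

    def check_write(self, user: str, used_bytes: int, used_files: int,
                    new_bytes: int) -> str:
        q = self.user_quotas.get(user, self.default_quota)
        if used_files >= q.hard_files or used_bytes + new_bytes > q.hard_bytes:
            return "deny"          # hard quota exceeded
        if used_bytes + new_bytes > q.soft_bytes:
            return "allow-warn"    # soft quota exceeded; write still accepted
        return "allow"

vv = VirtualVolume(10**12, Quota(soft_bytes=10**9, hard_bytes=2 * 10**9, hard_files=10**6))
print(vv.check_write("alice", used_bytes=1_500_000_000, used_files=10, new_bytes=4096))
```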

The High-performance NAS Platform has the intrinsic property of allowing administrators to control their storage expenditure more granularly. Its unique Multi-Tiered Storage (MTS) feature allows administrators to choose the right Fibre Channel or Serial ATA disk for the specific application and customer requirements. High-performance Fibre Channel drives can be used for the highest throughput and I/O requirements, while lower cost, higher capacity Serial ATA drives can be used for lower throughput applications or nearline storage. As storage technology continues to get faster and achieve higher capacity, the High-performance NAS Platform will continue to accommodate and enhance the value of these mixed media types, as well as reduce the cost of storage management via its ability to migrate data between the storage tiers.

It also has a policy-based engine that allows administrators to classify data based on predefined rules, such as data type or last access date. Data can be migrated transparently across storage tiers. The High-performance NAS Platform complements the powerful Hitachi HiCommand Tiered Storage Manager software. This integration allows organizations to combine the advanced file-based virtualization framework with the industry-leading block-based virtualization provided by the Hitachi Universal Storage Platform and Hitachi Network Storage Controller.

[Figure: Policy-based, transparent migration of files (for example, DOC, XLS, and PPT data) between file systems and storage tiers on the SAN behind the High-performance NAS Platform.]

Unique Object Store

The Object Store is a layer between the normal presentation of a file system view to the user and the raw blocks of storage managed by the SIM. An object is an organization of one or more raw blocks into a tree structure. Each element of the object is called an Onode. Objects are manipulated by logic residing in the FPGAs located on the FSB module.



[Figure: An object as a tree of Onodes: the Left and Right Root Onodes point to Direct and Indirect Onodes, which ultimately lead to the Data Onodes holding the object's content.]

The primary element at the base of an object is called the Root Onode. Each Root Onode contains a unique 64-bit identifier called the Object Identifier (OID), as well as the metadata information relevant to the object. Root Onodes point either directly to Data Onodes, to Direct Pointer Onodes, or to Indirect Pointer Onodes, depending on the amount of content to be stored. These pointer Onodes are simply the connectors that ultimately lead to the Data Onodes. Via this extensibility, High-performance NAS Platform can support a single object as large as the entire file system, or billions of smaller files, in a very efficient manner.
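The sketch below models this Onode tree in software. Only the 64-bit OID on the Root Onode and the Direct/Indirect/Data roles come from this guide; the field names and fan-out are illustrative assumptions.

```python
# Illustrative model of the Onode tree (field names and fan-out are assumptions).
from dataclasses import dataclass, field
from typing import Union

@dataclass
class DataOnode:
    blocks: bytes                                       # raw content of this leaf

@dataclass
class DirectOnode:
    children: list[DataOnode] = field(default_factory=list)

@dataclass
class IndirectOnode:
    children: list[Union["IndirectOnode", DirectOnode]] = field(default_factory=list)

@dataclass
class RootOnode:
    oid: int                                            # unique 64-bit Object Identifier
    metadata: dict                                      # attributes of the object
    children: list[Union[DataOnode, DirectOnode, IndirectOnode]] = field(default_factory=list)

def read_object(node) -> bytes:
    """Walk the tree and concatenate the data; small objects need no pointer Onodes."""
    if isinstance(node, DataOnode):
        return node.blocks
    return b"".join(read_object(child) for child in node.children)

small = RootOnode(oid=0x1, metadata={"type": "file_object"},
                  children=[DataOnode(b"hello")])
print(read_object(small))   # b'hello'
```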

For each object, two versions of the Root Onode are maintained. They are referred to as the Left and Right Root Onodes. At any given moment, one of these Root Onodes is atomically correct while its partner is subject to updates and changes. In combination with the NVRAM implementation, this ensures that data integrity is preserved even in the case of a system failure. NVRAM recovery is discussed later in this Architecture Guide. Finally, Root Onodes are versioned when snapshots are taken so that previous incarnations of the object can be accessed.

Different kinds of objects serve different purposes. User data is contained in a file_object. A directory_name_table_object contains file and directory names in various formats (DOS short names, POSIX, etc.), file handles, a crc32 hash value, and the associated OID that points to the location of another object, such as a subdirectory (another directory_name_table_object) or a file (file_object). Directory and file manipulation, Snapshot, and other features benefit from this object implementation versus a more traditional file-level structure. One key example of this is delivered via a unique object called a directory_tree_object.

For each directory_name_table_object, there exists a peer called the directory_tree_object. This is a sorted binary search tree (BST) of Onodes containing numeric values (hashes). These hashes are derived by first converting the directory or file name to lowercase and then applying a crc32 algorithm to it.

The payoff comes when it is time to find a directory or a file.



When a user request asks for a particular file or directory by name, that value is again converted to lowercase, the crc32 algorithm is applied, and then an FPGA on the FSB module executes a binary search of numeric values (as opposed to having to do string compares of names) to locate the position within the directory_name_table_object at which to begin the search for the required name. The result is a quantum improvement in lookup speed. Where all other network storage servers break down, High-performance NAS Platform maintains its performance even with very densely populated directories.
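A software approximation of this lookup is sketched below, with Python's bisect standing in for the sorted BST of hashes and a plain dictionary standing in for the directory_name_table_object. The collision handling is an illustrative assumption.

```python
# Software approximation of the directory_tree_object lookup (simplified).
import zlib
from bisect import bisect_left

def name_hash(name: str) -> int:
    """Hash as described above: lowercase the name, then apply crc32."""
    return zlib.crc32(name.lower().encode("utf-8"))

class DirectoryIndex:
    def __init__(self, entries: dict[str, int]):
        # entries maps names in the directory_name_table_object to OIDs.
        self._sorted = sorted((name_hash(n), n, oid) for n, oid in entries.items())
        self._hashes = [h for h, _, _ in self._sorted]

    def lookup(self, name: str) -> int | None:
        """Binary-search the numeric hashes, then compare names only near the hit."""
        h = name_hash(name)
        i = bisect_left(self._hashes, h)
        while i < len(self._sorted) and self._sorted[i][0] == h:
            if self._sorted[i][1] == name:
                return self._sorted[i][2]    # the OID of the matching entry
            i += 1
        return None

index = DirectoryIndex({"Report.DOC": 101, "data.csv": 102})
print(index.lookup("Report.DOC"))   # 101
```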

High-performance NAS Platform can support over four million files in a single directory, while keeping directory search times to a minimum and sustaining overall system performance. This is one of the reasons that High-performance NAS Platform is ideal for Internet services companies: they have millions and millions of files, and fewer directories allow for a simplified data structure. In addition to a large number of files within a directory, the High-performance NAS Platform's Object Store allows the file system itself to be significantly larger, currently supported up to 256TB. Compare this to other file systems that theoretically support 16TB to 32TB but are often limited to less than half this size due to performance penalties. This capability, combined with the Cluster Name Space feature, allows High-performance NAS Platform to support up to 512TB today, and even more in the future, as these are not architectural limits.

Client machines have no concept of objects, but rather see only the standards-based representation of files. Via the NFS or CIFS protocols they expect to work with string names and file handles; thus, High-performance NAS Platform presents what is expected by these clients and handles all the conversion to objects transparently to ensure perfect compatibility. This view of what the client expects is the job of yet another FPGA, also located on the FSB module, which presents the Virtual File System layer to the clients.

For those clients that require or prefer block-level access, High-performance NAS Platform supports iSCSI. iSCSI requires a view of raw blocks of storage; the client formats it and lays down its own file system structure upon this view. To make this happen, High-performance NAS Platform simply creates a single large object, up to 2TB in size (an iSCSI limitation), within the Object Store, which is presented as a run of logical blocks to the client. Since the iSCSI volume is just another object, features like Snapshot or dynamic growth of the object are possible.

By implementing an Object Store, High-performance NAS Platform delivers many outstanding file system characteristics beyond just performance:

• Maximum supported volume size: currently 256TB, architected for 2PB
• Maximum supported object size: currently 256TB, architected for 2PB
• Maximum supported capacity: currently 512TB, architected for over 2PB
• Maximum objects per directory: approximately four million
  o Depending on the number of attributes the objects contain and the file name lengths themselves
  o Note that the High-performance NAS Platform can perform at its maximum capability even with this kind of directory population, as long as the back-end physical storage can sustain the throughput
• Maximum number of snapshots per volume: 1024



NVRAM Protection

The current FSB module contains 2GB of NVRAM for storing writes and returning fast acknowledgements to clients. The NVRAM is partitioned in half, so that one half is receiving data while the other is flushed to disk (check-pointed). The NVRAM halves are organized into smaller pages, which are dynamically assigned to the various file systems based on how heavily they are being accessed.

[Figure: The NVRAM sits alongside the FPGA on the FastPath pipeline, with the High Speed Cluster interconnect mirroring its contents to a cluster partner.]

Check-pointing is the process of flushing writes to disk. At check-point time, either the Left or Right Root Onode is written to while the other Onode is frozen, becoming the atomically correct version. This process cycles back and forth every few seconds. In the event a file system recovery is needed later, this frozen version of the Root Onode is used to restore the file system quickly to a consistent check-pointed state. For example, in the case of a power outage, that atomically correct version of the Root Onode becomes critical. First, the alternate Root Onode is made to look exactly the same as the atomically correct version. This process is called a rollback. Then, the contents of NVRAM are replayed against the objects. In this way, customer data is guaranteed to be complete and intact at the end of the recovery. In a High-performance NAS cluster, the NVRAM is further partitioned: half for storing local write data and half for storing the cluster partner's write data. In this way, even if one of the nodes fails completely, the remaining partner node can complete the recovery process.
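The sketch below captures the alternating-halves check-point cycle and the rollback-then-replay recovery flow in simplified software form. The data structures and function names are illustrative assumptions; only the alternating halves, rollback, and replay steps come from this guide.

```python
# Simplified model of NVRAM check-pointing and recovery (structures are assumptions).
class NvramLog:
    def __init__(self):
        self.halves = [[], []]   # one half receives writes, the other is flushed
        self.active = 0          # index of the half currently receiving writes

    def record(self, write_op):
        """Log a write so it can be acknowledged before it reaches disk."""
        self.halves[self.active].append(write_op)

    def checkpoint(self, flush_to_disk):
        """Freeze the active half, flush its contents to disk, then swap roles."""
        frozen, self.active = self.active, 1 - self.active
        for op in self.halves[frozen]:
            flush_to_disk(op)
        self.halves[frozen].clear()

def recover(frozen_root_onode: dict, working_root_onode: dict,
            nvram: NvramLog, apply_op):
    """Roll back to the atomically correct Root Onode, then replay NVRAM."""
    working_root_onode.clear()
    working_root_onode.update(frozen_root_onode)    # rollback
    for half in nvram.halves:
        for op in half:                             # replay the logged writes
            apply_op(op)
```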

Life of a Packet

To tie the hardware and software architecture together, it is a good exercise to understand how a typical read or write operation is handled through the system. The following steps walk through both a write and a read operation. The description is a simplification of the design, but it highlights the major blocks of the architecture. For simplicity, the inbound and outbound FPGAs are treated as a single block, but they are actually separate.



Write Example

1. A network packet is received from one of the GigE interfaces on the NIM.
2. The incoming packet is saved into memory on the NIM by the FPGA.
3. If the incoming packet is a network-only request, such as a TCP session setup, it is processed to completion and sent back out to the requesting client.
4. Otherwise, the FPGA will gather additional related incoming packets.
5. The complete request is passed over the LVDS FastPath to the FSB module.
6. The first FPGA on the FSB module stores the message in its own memory and then attempts to decode the incoming message, simultaneously notifying the FSA of the arrival in case exception processing will be required. While most requests are handled directly by this FPGA, the FSA module processor handles exception cases; however, only the header information required for decoding the request is sent to the FSA for processing.
7. Once the request is decoded, the Object Store takes over and this FPGA will send the data in parallel to the NVRAM, update the Meta-Data Cache, send an additional copy of the write request over the cluster interconnect pipeline if there is a cluster partner, begin the formulation of a response packet, and pass the request to the SIM module via the FastPath pipeline (see the sketch after these steps).
8. Once the NVRAM acknowledges that the data is safely stored, the response packet is shipped back to the NIM, letting it know that the data has been received and is protected (see Steps 12 and 13 below). This allows the client to go on processing without having to wait for the data to actually be put on disk.
9. In parallel with the above operations, an FPGA on the SIM receives the write request and updates the Sector Cache with the data. At a specified timed interval of just a few seconds, or when half of the NVRAM becomes full, the SIM will be told by the FSB module to flush any outstanding data it has to disk. This is done in such a way as to maximize large I/Os whenever possible in order to achieve the highest throughput to the storage.

Read Example

Steps 1 through 6 are virtually the same as in the previous example.
7. Since this is a read request, NVRAM is not involved.
8. Certain kinds of read requests have to do with metadata lookups. In this case the Object Store on the FSB module will attempt to find the relevant data in its Meta-Data Cache and, if successful, will respond rapidly without having to retrieve the metadata from disk. Otherwise the lookup mechanism kicks in as described earlier, taking advantage of the directory_tree_object BST method. At various points in this process the request will be passed on to the SIM module for any data necessary to the lookup. Once the OID of the target object is found, processing moves to the SIM.
9. The SIM module has an ample sector cache. The FPGA on the SIM will look to see if it can satisfy read requests from there; otherwise, it will formulate a Fibre Channel request to retrieve the data from disk and store it in the sector cache. Both metadata and data requests are handled this way (see the sketch after these steps).
10. Once the relevant data has been retrieved, the SIM will pass the data back to the FSB module.
11. The Object Store will update the Meta-Data Cache as necessary and re-apply the RPC layers in order to create a well-formed response.
12. The response packet is passed to the NIM module.
13. The FPGA on the NIM will organize the response into segments that comply with TCP/IP or UDP/IP and, of course, Ethernet formatting.
14. Finally, the NIM transmits the response out the Gigabit Ethernet interface.
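The caching hierarchy in steps 8 and 9 can be summarized with the small sketch below: try the FSB metadata cache, then the SIM sector cache, and only then issue the Fibre Channel fetch. Class and method names are illustrative assumptions.

```python
# Sketch of the tiered read path in steps 8-9 (names are assumptions).
class ReadPath:
    def __init__(self, metadata_cache: dict, sector_cache: dict, fetch_from_disk):
        self.metadata_cache = metadata_cache    # FSB: object metadata by OID
        self.sector_cache = sector_cache        # SIM: raw blocks by (OID, block)
        self.fetch_from_disk = fetch_from_disk  # issues the Fibre Channel request

    def read(self, oid: int, block: int) -> bytes:
        # Step 8: metadata already cached on the FSB avoids a trip to disk.
        self.metadata_cache.setdefault(oid, {"last_block_read": block})
        # Step 9: serve from the sector cache when possible; otherwise pull the
        # block from disk and keep it cached for subsequent readers.
        key = (oid, block)
        if key not in self.sector_cache:
            self.sector_cache[key] = self.fetch_from_disk(oid, block)
        return self.sector_cache[key]

path = ReadPath({}, {}, lambda oid, block: b"\x00" * 4096)
print(len(path.read(oid=42, block=0)))   # 4096; a second call hits the cache
```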

This packet walk-through should help tie together the hardware and software architecture and illustrate how data flows through the High-performance NAS Platform.



Benefits of the SiliconServer Architecture: Industry-leading Performance, Throughput, and Capacity

First and foremost, the benefits of the SiliconServer Architecture are high transaction performance, high throughput, and high capacity. High-performance NAS Platform delivers the highest performance of any single filer on the market, and it will continue to increase this advantage through its modular design. High-performance NAS Platform has achieved SPECsfs97_R1.v3 benchmark results exceeding those of any network storage solution using a single NAS head to present a single file system. Published results for the High-performance NAS Platform are 98,131 ops/sec (SPECsfs97_R1.v3)* and an overall response time (ORT) of 2.34 ms. This is more than 272 percent higher throughput than high-end system results from other leading NAS vendors. A dual clustered High-performance NAS Platform presenting a single file system using cluster name space achieved 195,502 ops/sec* and an ORT of 2.37 ms, 286 percent higher than other dual clustered systems. The highly efficient High-performance NAS Platform clustering provided unheard-of linear scaling over the single-node results, with less than 1 percent loss in efficiency. These results are available at the SPEC.org website, http://www.spec.org/sfs97r1.

*SPECsfs benchmarks published under High-performance NAS Corporation using third-party storage systems.

These tests clearly demonstrate that the High-performance NAS Platform can sustain responsiveness in both high-concurrency user and cluster computing environments, speeding applications such as life sciences research, 3D computer-generated imagery, visualization, Internet services, and other compute-intensive environments. While the test results are a proof point for transactional performance, raw throughput is also critical for many of today's large digital content applications. Here, High-performance NAS Platform also excels, providing 3.2Gb/sec in throughput. The combination of high transaction rates and high throughput allows High-performance NAS Platform to excel in mixed environments where users and batch processes demand both aspects of performance with minimal wait times. High-performance NAS Platform leads network storage performance in both dimensions.

High-performance NAS Platform currently supports up to 512TB under the cluster namespace, with file systems of up to 256TB; however, it is architected to support up to 2PB. The actual amount of storage supported will continue to scale as testing, memory, and requirements continue to grow. Although the unique object-store-based file system is the enabler for this capability, it is the hardware that allows these large file systems to continue to be accessed at the highest performance levels even as the file system begins to fill with a high number of files. The combined hardware and software also enable the support of over a million files per directory, critical for larger file systems. Since these types of inquiries are converted to a binary search, the system delivers exceptional responsiveness to users, regardless of the type of directory structures and files. This benefit allows storage administrators to consolidate their filers to reduce hardware, software license, and support costs. It also allows customers to purchase an initial system and grow it without buying additional filers and having to manage the migration or separation of data between multiple filers or tiers of storage.

Scalability

High-performance NAS Platform can scale to meet an organization's future requirements in any of the three key storage dimensions. First is the need to support higher performance in terms of IOPS, to feed high-speed applications and compute clusters. Second is the need to support higher bandwidth throughput in terms of Gbit/sec, as file sizes and the number of users of these large data sets continue to grow.



Third is the need to store significantly greater amounts of data in terms of terabytes, driven by growing file sizes, increased data sets, and changing regulatory requirements to retain data for longer periods of time.

[Figure: Three Dimensions of Scalability: throughput (Gbit/sec), capacity (TB), and performance (IOPS).]

To cope with all three dimensions of scalability, most customers have been forced to do forklift upgrades or deploy many individual or HA-clustered filers. However, these approaches only lead to filer proliferation, increased complexity, and higher support costs. Filer proliferation causes many of the same challenges as DAS, such as unused space on some volumes, not enough on others, excessive headroom across all the volumes, and management issues. Data sets that once had to be separated between different filers to support greater aggregate capacity, performance, and throughput demands no longer need to be split with High-performance NAS Platform and, because of this, clients no longer need to determine which filer to access. Clustered storage is another solution; however, with the lower performance of other systems this often requires as many as 8-10 cluster storage nodes or more. With clustered storage software in its infancy, and with larger node counts increasing complexity and losing efficiency, fewer nodes are clearly an advantage.

High-performance NAS Platform was designed to address these three dimensions of scalability today as well as into the future. First, High-performance NAS Platform was designed with the highest performance, capacity, and throughput, all of which meet or exceed most organizations' current requirements. These requirements will continue to grow as data sets grow and as compute clusters become more prevalent and put more demands on the storage system. The design anticipates the potential for business or organizational requirements to nearly double year over year in the very near future, and the SiliconServer Architecture was designed with this in mind.

From a design perspective, the long-term benefit of the architecture is that it allows the engineering team to increase the performance and throughput of each product generation by approximately 100 percent. This compares with the approximately 30 percent achieved by conventional CPU-based appliance vendors. This equates to the architecture having an ever-increasing advantage at the atomic (single filer node) level, doubling the advantage each product cycle at current rates. This is critical both for organizations that wish to stay on single and high-availability clustered systems and for organizations that want to eventually deploy a storage-grid.

Both the current High-performance NAS Platform and future module upgrades will continue to provide the fastest file serving nodes, and they are designed to be the most capable and simplest building blocks for a storage-grid.



For those who require scalability beyond the capabilities of a single node, High-performance NAS Platform is the ideal foundation for larger storage-grids, in that fewer nodes will be required to achieve the desired performance. As explained in the previous sections, High-performance NAS Platform can be clustered in 2-way, 3-way, or 4-way configurations with almost no loss of efficiency, providing the highest SPECsfs IOPS of any single-name-space solution. Other configurations take 6-way, 8-way, or larger clustered file servers to achieve similar results, and this is often theoretical, as many clusters do not support these higher node counts and lose efficiency as they scale. Building a storage-grid from larger, faster nodes such as High-performance NAS Platform reduces the hardware, software, and support costs, as well as significantly reducing the complexity required to achieve the desired capacity and performance. Fewer storage cluster nodes reduce the back-end inter-cluster communications and data transfers. Smaller storage clusters or grids with faster nodes will provide a more efficient and cost-effective scalable storage solution. The engineering team is continuing to drive toward higher node-count cluster storage solutions; however, with High-performance NAS Platform's significant advantage in per-node performance, the task is significantly less daunting than that of competitors. Competitive engineering efforts must cluster more nodes with standard PC-based architectures, which is significantly more complex and less efficient.

    Features

The SiliconServer Architecture delivers advanced features, layered onto the hardware file system, without significantly impacting performance, as is often the case with a CPU-based, shared-memory appliance. Features such as snapshot, policy-based migration, mirroring, and other data mover features are executed in hardware, operating on objects within the Object Store, allowing them to be handled at a binary level within the hardware. This significantly reduces the overhead of these features. Combined with the virtualized storage and back-end SAN implementation of the architecture, this capability enables many storage functions to be handled without affecting the performance and throughput of the system. This is especially true in scenarios such as a drive failure, where a CPU-based filer would have to get involved in rebuilding the hot spare, while High-performance NAS Platform off-loads this function to the hardware storage systems.

The SiliconServer Architecture allows Fibre Channel and SATA disk drives to be used. This feature, called Multi-Tiered Storage (MTS), departs from other vendors' offerings, which require a separate filer to handle different tiers of storage and disk, causing a proliferation of filers. High-performance NAS Platform further delivers the capability to do data migration between these storage tiers, simplifying storage management and providing a more cost-controlled environment, as administrators can provide the right type of disk, performance, and throughput based on the application requirements. To preserve the user or application experience, migration between tiers can also be accomplished transparently using NAS Data Migrator, allowing simplified data management functionality that does not affect the end users or applications.

High-performance NAS Platform delivers multiprotocol access into its MTS, including NFS, CIFS, and even block-level iSCSI. Block-level access is enabled through the object design of the file system, allowing even a block-level partition to be viewed as an object. For management, High-performance NAS Platform supports SSL, HTTP, SSH, SMTP, and SNMP, as well as remote scripting capability via utilities provided with no additional license fees to the customer.

Going forward, Hitachi Data Systems and BlueArc will continue to innovate in both the hardware and software areas. The modular design of the High-performance NAS Platform will allow organizations to have the purchase protection of an upgradeable system, increasing throughput, IOPS, and capacity with simple blade changes.




In terms of software, the foundation of the SiliconFS and the virtualized storage allows advanced features such as Virtual Servers, NAS Data Migrator, and remote block-level replication.

Conclusion

The fastest, most reliable, and longest-lived technology in the data center is typically the network switch, whether it is Ethernet or Fibre Channel. A switch purchased five years ago is still fast and useful because it was built with scalability in mind. The speed, reliability, and scalability of the network switch are directly attributable to the parallelism inherent in the hardware-accelerated implementation, the high-speed backplane, and the replaceable blade design. The High-performance NAS Platform has delivered on the promise of hardware-accelerated file services and will capitalize on this unique capability to enable its customers to continually scale their storage infrastructure as their requirements grow.


Corporate Headquarters: 750 Central Expressway, Santa Clara, California 95050-2627 USA. Contact Information: +1 408 970 1000, www.hds.com / [email protected]

Asia Pacific and Americas: 750 Central Expressway, Santa Clara, California 95050-2627 USA. Contact Information: +1 408 970 1000, www.hds.com / [email protected]

Europe Headquarters: Sefton Park, Stoke Poges, Buckinghamshire SL2 4HD United Kingdom. Contact Information: +44 (0) 1753 618000, www.hds.com / [email protected]

Hitachi is a registered trademark of Hitachi, Ltd., and/or its affiliates in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries.

    HiCommand is a registered trademark of Hitachi, Ltd.

    BlueArc is a registered trademark of BlueArc Corporation in the United States and/or other countries.

    All other trademarks, service marks, and company names are properties of their respective owners.

Notice: This document is for informational purposes only, and does not set forth any warranty, express or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data Systems being in effect, and that may be configuration-dependent, and features that may not be currently available. Contact your local Hitachi Data Systems sales office for information on feature and product availability.

Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited warranties. To see a copy of these terms and conditions prior to purchase or license, please go to http://www.hds.com/products_services/support/warranty.html or call your local sales representative to obtain a printed copy. If you purchase or license the product, you are deemed to have accepted these terms and conditions.