Transcript
Page 1: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

IMPACT modules consist of focused, in-depth training content that can be consumed in about 1-2 hours

Welcome to Symmetrix Foundations

© 2004 EMC Corporation. All rights reserved.

EMC Global Education – IMPACT

For questions or support please contact Global Education

Complete Course – Directions on how to update your online transcript to reflect a complete status for this course.

Course Description

Start Training – Run/Download the PowerPoint presentation

Student Resource Guide – Training slides with notes

Assessment – Must be completed online (Note: Completed Assessments will be reflected online within 24-48 hrs.)

Home

Page 2: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

1. Logon to Knowledgelink (EMC Learning management system).

2. Click on 'My Development'.

3. Locate the entry for the learning event you wish to complete.

4. Click on the complete icon [ ].

Link to Knowledgelink to update your transcript and indicate that you have completed the course.

Symmetrix Foundations Course Completion Steps:

© 2004 EMC Corporation. All rights reserved.

EMC Global Education – IMPACT

For questions or support please contact Global Education

Back to Home

Note: The Mark Complete button does not apply to items with the Type: Class, Downloadable (AICC Compliant) or Assessment Test. Any item you cancel from your Enrollments will automatically be deleted from your Development Plan.

Course Completion

Click here to link to Knowledgelink

Page 3: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

If you have any questions, please contact us by email at [email protected].

EMC Global Education

Symmetrix Foundations - IMPACT

Course Description

e-Learning

This foundation level course provides participants with an understanding of the Symmetrix Architecture and how it is an integral component of EMC’s offering. This course is part of the EMC Technology Foundations curriculum and is a pre-requisite to other learning paths.

Course Number: MR-5WP-SYMMFD

Method: IMPACT

Duration: 3 hours

Audience

This course is intended for anyone who currently does, or plans to do, any of the following:

• Educate partners and/or customers on the value of EMC’s Symmetrix-based storage infrastructure

• Provide technical consulting skills and support for EMC products

• Analyze a Customer’s business technology requirements

• Qualify the value of EMC’s products

• Collaborate with customers as a storage solutions advisor

Prerequisites

Prior to taking this course, participants should have a strong understanding of IT concepts and a basic knowledge of storage concepts.

Course Objectives

Upon successful completion of this course, participants should be able to:

• Draw and describe the basic architecture of a Symmetrix Integrated Cached Disk Array (ICDA)

• Write a detailed list of host connectivity options for Symmetrix

• Explain how Symmetrix functionally handles I/O requests from the host environment

• Illustrate the relationship between Symmetrix physical disk drives and Symmetrix Logical Volumes

• Describe the media protection options available on the Symmetrix

• Referencing a diagram, explain some of the high availability features of Symmetrix and how they potentially impact data availability

• Describe the front-end, back-end, cache, and physical drive configurations of a DMX and other Symmetrix models

Modules Covered

• This course includes a single module on Symmetrix Architecture

Labs

Labs reinforce the information you have been taught. The labs for this course include:

• None

Assessments

• Assessments validate that you have learned the knowledge or skills presented during a learning experience. This course includes a self-assessment quiz, to be conducted on-line via KnowledgeLink, EMC’s Learning Management System.

Page 4: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Copyright © 2004 EMC Corporation. All Rights Reserved.

Symmetrix Foundations, 1

EMC Global Education © 2004 EMC Corporation. All rights reserved.

Symmetrix Foundations

Welcome to Symmetrix Foundations. EMC offers a full range of storage platforms, from the CLARiiON CX200 at the low end, to the unsurpassed DMX 3000 at the high end. This training provides an architectural introduction to the Symmetrix family of products. The focus will be on DMX, but prior generations of Symmetrix will also be discussed.

Copyright © 2004 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

Page 5: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Symmetrix Foundations

After completing this course, you will be able to:
• Draw and describe the basic architecture of a Symmetrix Integrated Cached Disk Array (ICDA)
• Write a detailed list of host connectivity options for Symmetrix
• Explain how Symmetrix functionally handles I/O requests from the host environment
• Illustrate the relationship between Symmetrix physical disk drives and Symmetrix Logical Volumes
• Describe the media protection options available on the Symmetrix
• Explain some of the high availability features of Symmetrix and how they impact data availability
• Describe the front-end, back-end, cache, and physical drive configurations of various Symmetrix models

These are the learning objectives for this training. Please take a moment to read them.

Page 6: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

High-End Storage: The New Definition

High-End Then
– Simple redundancy
• Automated fail-over
– Benchmark performance (IOPs and MB/s)
• Single and/or simple workloads
– Basic local and remote data replication
• Backup windows, testing, and disaster recovery
– Scalability
• Capacity
– Manage the storage array
• Easy configuration, simple operation, minimal tuning

High-End Today
– Non-disruptive everything
• Upgrades, operation, and service
– Predictable performance… unpredictable world
• Complex, dynamic workloads
– Replicate any amount, any time, anywhere
• Replicate any amount of data, across any distance, without impact to service levels
– Flexibility
• Capacity, performance, connectivity, workloads, etc.
– Manage service levels
• Centralized management of the storage environment
– Both Open Systems and Mainframe connectivity

Both of EMC’s storage platforms, the CLARiiON and the DMX Symmetrix, raised the bar. What was once considered high-end is provided in the CLARiiON today. The Symmetrix has provided higher levels of capability that were never before available.

Availability - It used to be that high-end availability meant simple redundancy: use two of everything. Two buses, mirrored cache boards, dual power supplies; use the second one if the first one breaks. But today, that’s a mid-tier feature. High-end needs to be always online, which means “non-disruptive everything”: non-disruptive upgrades, non-disruptive reconfigurations, and non-disruptive serviceability.

Performance - It used to be all about low-level benchmarks: how many, and how fast. IOPs and megabytes per second. Today, simple benchmarks are used to measure mid-tier arrays, not high-end. High-end customers want predictable performance in an unpredictable world. High service levels mean being able to guarantee great application response, even if there’s a surprise like an unpredictable workload. And you can’t measure that with a simple benchmark.

Replication - Today, just about every mid-tier array can do replication. High-end means being able to copy any amount of data, at any time during the day, and send it any distance if need be, while delivering high application performance, all at the same time.

Scalability - In today’s world, SANs give you lots of ports. If you want large capacities, a 50 TB CLARiiON is the better deal, or Centera, which can handle up to a petabyte.

Flexibility - Being able to handle requirements with just the right mix of performance and capacity. It means supporting the right connections, like iSCSI and GigE. And it means being able to handle different requirements, cost-effectively, if things change. One of the things that sets high-end apart is its ability to handle change.

Management - It wasn’t so long ago that management at the high end meant a nice, easy-to-use GUI that helped you configure the array. But today, that’s what mid-tier arrays do. There’s a new requirement. It’s not just the array; it’s the switch, and the server, and the applications. It’s the whole end-to-end stack.

Page 7: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Symmetrix Integrated Cached Disk Array

Highest level of performance and availability in the industry

Consolidation
– Capacities to 84 TB
– Up to 64 host ports
– SAN or NAS

Advanced functionality
– Parallel processing architecture
– Intelligent prefetch
– Auto cache destage
– Dynamic mirror service policy
– Multi-region internal memory
– Predictive failure analysis and call home
– Back-end optimization

Enginuity Operating Environment
– Base services for data integrity, optimization, security, and Quality of Service
– Core services for data mobility, sharing, repurposing, and recovery

There are basically three categories of storage architectures: cache-centric, storage-processor-centric, and JBOD (Just a Bunch Of Disks). The Symmetrix falls under the category of cache-centric storage. We call it an ICDA, or Integrated Cached Disk Array. It is not a RAID box; it is an Integrated Cached Disk Array! As we go through this presentation, you will understand the differences.

Page 8: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Enginuity Operating Environment

Enginuity Operating Environment is the Symmetrix software that:
– Manages all operations
– Ensures data integrity
– Optimizes performance

Enginuity is often referred to as “the microcode”

WideSky middleware provides a common API and CLI interface for managing Symmetrix and the entire storage infrastructure

EMC and ISVs develop management software supporting heterogeneous platforms using the WideSky API and CLIs

[Diagram: Symmetrix Hardware → Enginuity Operating Environment → WideSky Management Middleware → Symmetrix-based applications, host-based management software, and ISV software]

Before we get into the hardware, let’s briefly introduce the software components, as most functionality is based in software and supported by the hardware.

Enginuity is the operating environment for the Symmetrix storage systems. Enginuity manages all Symmetrix operations, from monitoring and optimizing internal data flow, to ensuring the fastest response to the user’s requests for information, to protecting and replicating data. Enginuity is often referred to as “the Microcode”.

WideSky is storage management middleware that provides a common access mechanism for managing multivendor environments, including the Symmetrix, storage, switches, and host storage resources. It enables the creation of powerful storage management applications that don’t have to understand the management details of each piece within an EMC user’s environment.

In addition to being middleware, WideSky is a development initiative (that is, a program available to ISVs and developers through the EMC Developers Program™) and provides a set of storage application programming interfaces (APIs) that shield the management applications from the details beneath. It provides a common set of interfaces to manage all aspects of storage. With WideSky providing building blocks for integrating layered software applications, ISVs and third-party software developers (through the EMC Developers Program), and EMC software developers are given wide-scale access to Enginuity functionality.
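The middleware idea described above can be sketched as a single common interface with per-platform adapters behind it. This is purely an illustration of the design pattern; all class and method names here are hypothetical, not the real WideSky API.

```python
# Hypothetical sketch: management applications program against one common
# interface, while platform-specific details live in adapters beneath it.
# Names are illustrative only, not the real WideSky API.

class StorageAdapter:
    """Common interface a management application codes against."""
    def list_volumes(self):
        raise NotImplementedError

class SymmetrixAdapter(StorageAdapter):
    def list_volumes(self):
        # Platform-specific discovery would happen here.
        return ["001", "002", "003"]

class ThirdPartyAdapter(StorageAdapter):
    def list_volumes(self):
        return ["lun0", "lun1"]

def report_volumes(adapters):
    # The application never sees platform details, only the common API.
    return {type(a).__name__: a.list_volumes() for a in adapters}

print(report_volumes([SymmetrixAdapter(), ThirdPartyAdapter()]))
```

The point of the pattern is that `report_volumes` needs no changes when a new platform adapter is added, which is the "shield the management applications from the details beneath" idea in the notes.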

Page 9: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Symmetrix Architecture

All Symmetrix share the same basic architecture

[Diagram: Front End (Channel Director), Shared Global Memory (“Cache”), Back End (Disk Director)]

All members of the Symmetrix family share the same fundamental architecture. This architecture was initially called MOSAIC 2000 and is the architecture that continues to drive the Symmetrix through the year 2000 and beyond. This modular hardware framework allows rapid development of new storage technology, while supporting existing configurations. There are three functional areas:

• Shared Global Memory - provides cache memory and link between independent front end and back end (intelligent boards comprised of memory chips)

• Front End - how the Symmetrix connects to the host (server) environment, referred to as Channel Directors (multi-processor circuit boards)

• Back End - how the Symmetrix controls and manages its physical disk drives, referred to as Disk Directors or Disk Adapters (multi-processor circuit boards)

What differentiates the different generations and models is the number, type, and speed of the various processors, and the technology used to interconnect the front-end and back-end with cache.

Page 10: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Symmetrix 4.8 Architecture

Dual “X” and “Y” Buses

[Diagram: Front End (Channel Director) and Back End (Disk Director) boards, each with two 68060 processors at 75 MHz (a and b) and ports C and D, connected to shared global memory (cache) over the X and Y buses. Each internal bus runs at 360 MB/s (720 MB/s total); the back end uses a 40 MB/s UWD SCSI bus. Models: 3630 (OS) / 5630 (MF) in a ½-bay cabinet, 3830 (OS) / 5830 (MF) in a 1-bay cabinet, 3930 (OS) / 5930 (MF) in a 3-bay cabinet.]

The Symm 4.x architecture includes:
• Dual “X” and “Y” buses
• Odd-numbered directors connect to the X bus; even-numbered directors connect to the Y bus
• Memory boards connect to both the X and Y buses
• Motorola processors
• 40 MB/second Ultra SCSI back end

The Symm 4.X family is based on a dual system bus design. Each director is connected to either the X bus (odd-numbered directors) or the Y bus (even-numbered directors). Each director card has two sides: the b processor (top half) and the a processor (bottom half). The processors for the Symm 4.X are Motorola 68000 series (Symm 4 core frequency of 66 MHz | Symm 4.8 = 75 MHz). The a & b processors have their own dedicated circuitry, except for SDRAM (Control Store, where the microcode lives) and the logic to arbitrate for and control the internal system buses. Data is transferred throughout the Symm (from Channel Director to Memory to Disk Director) in a serial fashion along the system buses. For every 64 bits of data, the Symm creates a 72-bit “Memory Word” (64 bits of data + 8 bits of parity). These Memory Words are then sent in a serial fashion across the internal buses, from director to cache or from cache to director.
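The 64-plus-8-bit memory word can be sketched in a few lines. The notes do not say how the 8 parity bits are laid out, so this sketch assumes the common scheme of one even-parity bit per data byte:

```python
# Sketch of building a 72-bit "Memory Word" (64 data bits + 8 parity bits),
# assuming one even-parity bit per data byte. The exact parity layout is an
# assumption; the source only states the 64 + 8 split.

def even_parity_bit(byte):
    # 1 if the byte has an odd number of 1-bits, so the total becomes even.
    return bin(byte).count("1") % 2

def memory_word(data64):
    """Append 8 parity bits (one per byte) to a 64-bit integer."""
    parity = 0
    for i in range(8):
        byte = (data64 >> (8 * i)) & 0xFF
        parity |= even_parity_bit(byte) << i
    return (data64 << 8) | parity   # 72 bits total

word = memory_word(0x0123456789ABCDEF)
assert word.bit_length() <= 72
```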

Page 11: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Symmetrix 5.0 Architecture

New Memory and Quad Bus Architecture

[Diagram: Front End (Channel Director) and Back End (Disk Director) boards, each with two PowerPC 750 processors at 266 MHz (a and b), connected to shared global memory (cache, divided into High Memory and Low Memory) over four internal buses (Top High, Top Low, Bottom High, Bottom Low), each at 360 MB/s. The back end uses a 40 MB/s SCSI bus. Models: 8430 in a 1-bay cabinet, 8730 in a 3-bay cabinet.]

The Symmetrix 5 is a prime example of MOSAIC 2000. The basic architecture has not changed from Symm 4 to Symm 5, but has been enhanced. Here is what has changed:

• Addition of 2 internal system buses (total of 4); each bus is still 360 MB/s, for an aggregate of 1440 MB/s
• Odd-numbered directors connect to both the Top High and Bottom Low buses. Even-numbered directors connect to both the Top Low and Bottom High buses. Memory boards connect to either the Top High and Bottom High buses (High Memory) or the Top Low and Bottom Low buses (Low Memory).
• The director processors are IBM/Motorola (jointly developed) PowerPC 750s (RISC-based processors).
• This processor switch required the Symm microcode to be translated from Motorola assembly language to C++. To further enable the processor swap, each director has an additional chip (called “the Gumba”) that makes the PowerPC “look like” a 68060 to the CPU Control Gate Array, handles Control Store mirroring functions, and is responsible for SDRAM control.
• The M3 generation of memory boards introduced the concept of 4 addressable regions per board (High Memory = board connected to Top High & Bottom High | Low Memory = board connected to Top Low & Bottom Low).
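The director and memory-board bus pairings can be captured as a small lookup. Bus names are taken from the notes; the functions themselves are purely illustrative. A useful property falls out: every director shares exactly one bus with every memory board.

```python
# Sketch of the quad-bus pairing rules described in the notes (bus names as
# given in the source; the helper functions are illustrative only).

def director_buses(director_number):
    """Odd directors pair with Top High & Bottom Low; even with Top Low & Bottom High."""
    if director_number % 2:                      # odd-numbered director
        return ("Top High", "Bottom Low")
    return ("Top Low", "Bottom High")

def memory_board_buses(region):
    """High memory boards sit on both High buses; Low memory on both Low buses."""
    return {"high": ("Top High", "Bottom High"),
            "low":  ("Top Low", "Bottom Low")}[region]

# Any director reaches any memory board over exactly one shared bus:
shared = set(director_buses(1)) & set(memory_board_buses("high"))
print(shared)  # {'Top High'}
```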

Page 12: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Symmetrix 5.X LVD Architecture

Faster Processors, Faster Bus, Faster Back End

[Diagram: Front End (Channel Director) and Back End (Disk Director) boards, each with two PowerPC 750 processors at 333 MHz (a and b), connected to shared global memory (cache, High Memory and Low Memory) over four internal buses (Top High, Top Low, Bottom High, Bottom Low), each at 400 MB/s. The back end uses an 80 MB/s SCSI LVD bus. Models: 8230 in a ½-bay cabinet, 8530 in a 1-bay cabinet, 8830 in a 3-bay cabinet.]

Again, here is another example of the MOSAIC 2000 architecture. The basic architecture hasn’t changed, but has been enhanced to improve performance by eliminating bottlenecks. Here is what has changed for the Symm 5.X LVD architecture:

• Increased bus speed to 400 MB/s, for an aggregate of 1600 MB/s
• Back-end directors and drives support Ultra 2 SCSI LVD (Low Voltage Differential), and the bus speed has increased to 80 MB/s
• The director processors are now 333 MHz; ESCON directors are 400 MHz
• Each director connects to 2 internal system buses (Top High & Bottom Low for odd directors | Bottom High & Top Low for even directors)
• The M4 generation of memory boards supports LVD (Low Voltage Differential, or Ultra 2 SCSI) with Enginuity 5567 or greater

Page 13: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Symmetrix DMX Architecture

Direct Matrix, Quad Processor Directors, Faster Processors, 2 Gb Fibre Channel Back End, and Communications Matrix

[Diagram: Front End (Channel Director) and Back End (Disk Director) boards, each with four PowerPC processors at 500 MHz (a through d), connected to Shared Global Memory, which holds the cache slots, the track table, and the status and communications mailboxes. Direct Matrix: each director gets its own 500 MB/s point-to-point connection to each cache board; the back end is 2 Gb Fibre Channel.]

A testament to EMC’s Symmetrix architecture is the DMX. While Symmetrix Direct Matrix (DMX) is a radical redesign, it contains the same functional blocks, with a significant advantage beyond yesterday’s bus and switch architectures. The result is even greater performance and availability.

Performance – The Symmetrix DMX dramatically reset performance expectations in a broad range of demanding transactional, decision support, and consolidated environments. More importantly, when coupled with the Enginuity storage operating system, Symmetrix DMX has a unique ability to react effectively to bursts of unexpected activity, while continuing to deliver high service levels.

Availability – The Symmetrix DMX goes beyond yesterday’s design to set a new standard in availability, including the elimination of buses and switches, and the incorporation of triple-module-voting for key components. Power systems, and the ability to do on-line upgrades, have been dramatically improved.

Page 14: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Symmetrix DMX Architecture

[Diagram: servers attach to the front end and disks to the back end; each board gets its own direct connection to cache!]

The Symmetrix DMX features a high-performance Direct Matrix Architecture (DMX) supporting up to 128 point-to-point serial connections in the DMX2000/3000 (up to 64 in the DMX1000).

Symmetrix DMX technology is distributed across all channel directors, disk directors, and global memory directors in Symmetrix DMX systems. Enhanced global memory technology supports multiple regions and 16 connections on each global memory director. In the Direct Matrix Architecture, contention is minimized because control information and commands are transferred across a separate and dedicated message matrix. The major components of Symmetrix DMX architecture are the front-end channel directors (and their interface adapters), global memory directors, and back-end disk directors (and their interface adapters).

In a fully configured Symmetrix DMX1000 system, each of the eight director ports on the eight directors connects to one of the sixteen memory ports on each of the four global memory directors. That is, there are two connections between each director and each global memory director. These 64 individual point-to-point connections facilitate up to 64 concurrent global memory operations in the system.

In a fully configured Symmetrix DMX2000/3000 system, each of the eight director ports on the sixteen directors connects to one of the sixteen memory ports on each of the eight global memory directors. These 128 individual point-to-point connections facilitate up to 128 concurrent global memory operations in the system.
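The connection arithmetic above can be checked directly; the function below simply restates the port counts given in the text.

```python
# Working the Direct Matrix arithmetic: every director port gets a
# point-to-point link to one global memory director port.

def matrix_connections(directors, ports_per_director, memory_directors,
                       ports_per_memory_director):
    director_ports = directors * ports_per_director
    memory_ports = memory_directors * ports_per_memory_director
    assert director_ports == memory_ports   # one-to-one point-to-point links
    # Links between each (director, global memory director) pair:
    per_pair = ports_per_director // memory_directors
    return director_ports, per_pair

print(matrix_connections(8, 8, 4, 16))    # DMX1000:      (64, 2)
print(matrix_connections(16, 8, 8, 16))   # DMX2000/3000: (128, 1)
```

This matches the text: 64 connections with two per director/memory-director pair in the DMX1000, and 128 connections with one per pair in the DMX2000/3000.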

Page 15: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Symmetrix DMX Architecture

[Diagram: servers attach to the front end and disks to the back end; a separate control and communications message matrix links the directors.]

Another major performance improvement with the DMX is the separate control and communications matrix. This enables communication between the directors without consuming cache bandwidth. This will become more apparent as we discuss read and write operations and information flow through the Symmetrix later in this module.

Page 16: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Read Operation - Cache Hit

[Diagram: Front End (Channel Director, ports C and D), Shared Global Memory (cache slots, track table, status and communications area), Back End (Disk Director, ports C and D).]

1. Host sends READ request
2. Channel Director checks Track Table
3. Requested data located in cache - Cache Hit!
4. CD retrieves data and sends to host

Read operation completed at memory speed!

The host sends a read request (requesting to read some number of blocks from a physical disk). The host sees storage on the Symm as an entire physical drive (actually a logical volume on the Symmetrix, a piece of a physical drive). Through the configuration file (bin file), logical volumes are given a channel address for the 1) Channel Director, 2) Processor, and 3) Port that will be accessing that volume (for example, logical volume 001 gets channel address (1,0) on SA # 3, Processor a, port A). Open systems hosts view disk drives using the SCSI target and LUN addressing scheme (target ranging from 0-16 | LUN ranging from 0-16).

Channel Director receives the request to read some number of blocks for target 1, LUN 0 (continuing from the previous example). By looking in the bin file (stored within the director’s EPROM), it translates the blocks requested for (1,0) as blocks requested for logical volume 001. The Channel Director then scans the track table to discover if the requested blocks on 001 are already resident in cache.

In this case (read-hit), the data that is being requested is resident within cache.

The channel director reads the requested data from cache. At this point, the Age-Link-Chain is updated to reflect the access (data moved to top of LRU queue - now most recently used). The data is sent from the Channel Director back to the host. Total I/O response time would be something on the order of 1 millisecond.
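The read-hit path above can be sketched with a plain dictionary standing in for the track table. The data structures are illustrative, not the real Enginuity layout.

```python
# Minimal sketch of the read-hit path: the channel director consults the
# track table, finds the track already in cache, promotes it in the LRU
# age-link-chain, and returns the data at memory speed. Illustrative only.

track_table = {("001", 0): "slot_17"}          # (logical volume, track) -> cache slot
cache_slots = {"slot_17": b"blocks 0..7 of LV 001 track 0"}
lru_chain = ["slot_17"]                        # front = most recently used

def read(volume, track):
    slot = track_table.get((volume, track))
    if slot is None:
        raise LookupError("cache miss - disk director must stage the track")
    # Promote to most recently used before returning (age-link-chain update).
    lru_chain.remove(slot)
    lru_chain.insert(0, slot)
    return cache_slots[slot]

print(read("001", 0))   # served from cache, no disk access
```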

Page 17: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Read Operation - Cache Miss

[Diagram: Front End (Channel Director, ports C and D), Shared Global Memory (cache slots, track table, status and communications area), Back End (Disk Director, ports C and D).]

1. Host sends READ request
2. CD checks Track Table - data not in cache
3. CD notifies DA using Message Matrix
4. DA retrieves data from disk (updates track table)
5. CD is notified that data is in cache
6. CD retrieves data and sends to host

The host sends a read request. If the data being requested is not in the process of prefetch, the Channel Director will disconnect from the channel (known as a “long miss”). This enables the host to perform other operations. If the requested data is in the process of prefetch (known as a “short miss”), the Channel Director will not disconnect from the channel.

On Symm 4 and Symm 5 architectures, directors communicate through “mailboxes”. Basically, all directors monitor the mailbox area in cache to see if there is work for them. With the DMX, directors communicate with each other through the communications matrix. This eliminates the added burden on cache of continuously polling the mailbox.

The DA retrieves the data from the physical disk and places it in an available cache slot. The Channel Director is notified by the Disk Director (via the Status & Communications mailboxes, or in a DMX through the communications matrix) to check the track table once more. From this point, the operation that occurs is exactly the same as a read cache hit. If the Channel Director has disconnected from the channel, it must now reconnect.

It may seem that in the case of a read request not being in cache, it would simply be faster to bypass cache and retrieve the information directly from the physical storage on the back end. While this is certainly true, the important thing to keep in mind is that if cache is bypassed, the requested read would not be placed in cache for future access. Additionally, you would also lose the integrity checking that occurs as data is placed within cache. Again, with DMX architecture and the efficiency gained through director communications via the communications matrix, the faster quad processor directors, up to 128GB of cache, and 2GB fibre channel back-end drive, the impact of a cache miss is reduced greatly.

Page 18: Symmetrix Foundations MR-5WP-SYMMFD Wrapper

Write Operation - Cache Hit

[Diagram: Front End (Channel Director, ports C and D), Shared Global Memory (cache slots, track table, status and communications area), Back End (Disk Director, ports C and D).]

1. Host sends WRITE request to CD
2. CD places data in an available cache slot
3. Write Complete sent to host
4. Tracks marked as Write Pending - DA will de-stage at earliest convenience

Data remains in cache until replaced by LRU algorithm

The host sends a write request to the Channel Director. The Channel Director locates an available cache slot and places the data in cache. If the track(s) already exist in cache as write pending (waiting to be written to disk), the Channel Director will write the data in question to the existing slot in cache. For example, I/O # 1 consists of a write to the last block on the first track on the first cylinder of Logical Volume 001. I/O # 2 then consists of a write to the first block on that same track. When the Channel Director checks the track table for an available slot in cache, it will see that the track in question is already flagged as write pending. Therefore, the Channel Director will write the first block (I/O # 2) to the same slot in cache where the last block (I/O # 1) on that track already resides.

The host is notified that the write is complete. As soon as the Disk Director(s) managing the physical copy(ies) of the data are available, the data is read from cache and written to a buffer on the Disk Director. The data is then written to the physical disk.

Note: Even if the host only writes/updates one block, the entire track (8 blocks in a sector, 8 sectors in a track) is marked as write pending. When the track is marked as write pending, all four mirrored positions are flagged write pending for that track.

Remember that the data remains in cache until 1) it is committed to disk, AND 2) it becomes the Least Recently Used data in cache. Write pending tracks are not subject to the LRU algorithm. When the data is destaged to disk (removal of the write pending flag), it then enters the LRU Age-Link-Chain as the most recently used data. If the data is frequently accessed, it will remain towards the front of the chain. If the data is not accessed, it will move to the end of the chain and subsequently be cycled out of cache (its slot made available for other use).

The effect of a write cache hit is that the host is immediately freed up to process more I/O as soon as the write is received in cache. This greatly enhances the performance of the host itself. Fewer cycles are spent awaiting acknowledgement, freeing the host to process application data.
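The write-pending and LRU interplay described above can be sketched with an ordered dictionary. This is a toy model, not the real cache manager; the capacity and track names are illustrative.

```python
# Sketch of the rule above: write-pending tracks are exempt from LRU
# eviction; destaging clears the flag and re-enters the track at the
# most-recently-used end of the chain. Toy model, illustrative only.

from collections import OrderedDict

class WriteCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = OrderedDict()      # track -> data; front = LRU end
        self.write_pending = set()

    def write(self, track, data):
        self.slots[track] = data
        self.slots.move_to_end(track)   # now most recently used
        self.write_pending.add(track)
        self._evict_if_needed()

    def destage(self, track):
        # Disk director commits the track; it becomes eligible for LRU,
        # entering the age-link-chain as most recently used.
        self.write_pending.discard(track)
        self.slots.move_to_end(track)

    def _evict_if_needed(self):
        for track in list(self.slots):  # oldest first
            if len(self.slots) <= self.capacity:
                break
            if track not in self.write_pending:   # pendings are protected
                del self.slots[track]

cache = WriteCache(capacity=2)
cache.write("t1", b"a")
cache.write("t2", b"b")
cache.write("t3", b"c")       # over capacity, but every track is pending
assert len(cache.slots) == 3  # nothing is evictable yet
cache.destage("t1")
cache.write("t4", b"d")       # now t1 is the LRU candidate
assert "t1" not in cache.slots
```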

Page 19: Symmetrix Foundations Mr 5wp Symmfd Wrapper

Copyright © 2004 EMC Corporation. All Rights Reserved.

Symmetrix Foundations, 16

EMC Global Education © 2004 EMC Corporation. All rights reserved.

Fast Write Ceiling

Cache algorithms are designed to optimize cache utilization and “fairness” for all Symmetrix Volumes

Cache allocation is dynamically adjusted based on current usage
– Symmetrix constantly monitors system utilization (including individual volume activity)
– “More active” volumes are dynamically allocated additional cache resources from relatively “less active” volumes
– Each volume has a minimum and maximum number of cache slots for write operations based on configuration (known as the “Fast Write Ceiling” or “Write Pending Ceiling”)

During a write operation, a Delayed Write occurs when the Write Pending Ceiling is reached at either:
– the Logical Volume level (not a fixed percentage; dynamically determined by Symmetrix)
– the Symmetrix system level (80% of Symmetrix cache slots contain “write pendings”)

When a Symmetrix is IMPL’ed (Initial Microcode Program Load), the available cache resources are automatically distributed to all of the logical volumes in the configuration. For example, if a Symmetrix were configured with 100 logical volumes of the same size and emulation, then at IMPL each one would receive 1% of available cache resources. As soon as reads and writes to volumes begin, the Symmetrix Operating Environment (Enginuity) dynamically adjusts the allocation of cache. If only 1 of the 100 volumes were active, it would incrementally receive more cache, and the remaining amount would be redistributed among the other 99 volumes.

It is important to remember that there will always be cache resources available for reads. By default, the 80% fast write ceiling ensures that at least 20% of cache resources will be free for read requests.

Managing each individual volume’s write activity (via the dynamic fast write ceiling) typically enables Enginuity to prevent system-wide delayed write situations.
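The two ceiling checks above can be sketched as a single decision function. This is a hypothetical illustration: only the 80% system-level ceiling is stated in the text; the function shape, names, and example counts are assumptions.

```python
# Sketch of the fast write vs. delayed write decision (names hypothetical).
SYSTEM_CEILING = 0.80  # fraction of cache slots that may hold write pendings

def write_mode(vol_pendings, vol_ceiling, system_pendings, total_slots):
    """Return 'fast' if the write can be absorbed in cache immediately,
    'delayed' if write pendings must be destaged to free slots first."""
    if system_pendings >= SYSTEM_CEILING * total_slots:
        return 'delayed'   # system-level ceiling: whole Symm impacted
    if vol_pendings >= vol_ceiling:
        return 'delayed'   # per-volume ceiling: only this volume impacted
    return 'fast'

print(write_mode(10, 100, 500, 1000))    # fast
print(write_mode(100, 100, 500, 1000))   # delayed (volume ceiling)
print(write_mode(10, 100, 800, 1000))    # delayed (system ceiling)
```

The per-volume ceiling is a moving target in a real Symmetrix (dynamically determined by Enginuity); here it is simply passed in as a number.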

Page 20: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Write Operation - Delayed Fast Write

(Diagram: host-facing Channel Director, Shared Global Memory — cache slots, track table, status and communications — and a back-end Disk Director with ports C and D.)

1. Host sends WRITE request to CD
2. CD cannot locate a free cache slot and signals DA to destage
3. DA does a forced de-stage of Write Pendings to free cache slots
4. DA signals CD of available slots
5. CD places data in an available cache slot
6. Write complete sent to host

Host sends a write request. The Channel Director cannot find an available cache slot for writing, either because the volume has reached its Fast Write Ceiling or because 80% of the entire Symm’s cache slots contain “write pendings”. When the volume Fast Write Ceiling is reached, only that volume’s performance is impacted; when the Symm system Fast Write Ceiling is reached, the entire Symm’s performance is impacted. The Disk Director frees up cache slots and signals the Channel Director through the Mailbox (or the Communication Matrix on DMX). The rest of the operation is similar to a fast write.

Again, this operation takes significantly longer than a fast write, but it ensures that the I/O flows through cache. Information just written by a host is likely to be read in the near future; if cache were bypassed and the data written directly to disk, that data would not be available from cache for the next request.
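The six-step delayed fast write above can be simulated in miniature. This is a sketch under stated assumptions: a one-deque "cache", a list standing in for disk, and made-up step labels — none of this reflects actual Enginuity structures.

```python
# Minimal simulation of the delayed fast write sequence (names hypothetical).
from collections import deque

def delayed_fast_write(cache, capacity, disks, data):
    steps = ["1. host sends write to CD"]
    if len(cache) >= capacity:
        steps.append("2. CD finds no free slot, signals DA")
        destaged = cache.popleft()   # 3. DA force-destages a write pending
        disks.append(destaged)
        steps.append("3. DA destages write pending to disk")
        steps.append("4. DA signals CD of available slots")
    cache.append(data)
    steps.append("5. CD places data in an available cache slot")
    steps.append("6. write complete sent to host")
    return steps

cache, disks = deque(["old write pending"]), []
steps = delayed_fast_write(cache, 1, disks, "new write")
```

After the call, the old write pending sits on "disk" and the new write occupies the freed cache slot — the host still sees a cache write, just a slower one.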

Page 21: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Symmetrix Front End

(Diagram: Channel Director with host-facing ports A and B.)

Channel Directors allow Symmetrix to connect to the host environment
– Minimum of 2 directors per frame (redundancy)
– Maximum of 4, 6, or 8 directors per frame (depending upon model and configuration)
– Type(s) of Channel Director cards are determined by the type of host and the selected protocol for communication with Symmetrix
– Cards are Field Replaceable Units (FRUs) and “hot swappable”
– Open Systems and Windows hosts connect to Symmetrix using either:
  • SCSI (Small Computer System Interface)
  • Fibre Channel (allows the SCSI protocol to be sent over greater distances via the Fibre Channel protocol and fiber optic cable)
– Mainframe hosts typically connect to Symmetrix using ESCON or FICON (IBM-based protocols that allow mainframe hosts to connect to storage using fiber optic cables)

Normally, Channel Directors are installed in pairs, providing redundancy and continuous availability in the event of repair or replacement to any one Channel Director. Each Channel Director has multiple microprocessors and supports multiple independent data paths to the global memory to and from the host system.

Page 22: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Open Systems Connectivity Options

DMX supports eight-port, four-processor Fibre Channel Directors
– 2 Gb/sec (can be configured for 1 Gb/sec)
– Single-mode and multi-mode configurations:
  • Eight multi-mode ports
  • Seven multi-mode ports and one single-mode port
  • Six multi-mode ports and two single-mode ports
– 8,192 Logical Volumes per director (2,048 per port)

SCSI Channel Directors supported on Symm 8000
– 4 ports, 4 concurrent I/Os (Ultra 40 MB/sec)
– 4 ports, 4 concurrent I/Os (Ultra LVD 80 MB/sec)

iSCSI support using Multi-Protocol Channel Director
– Low-cost connectivity using existing IP network infrastructure

Depending upon the model, from 2 to 8 front-end Channel Directors are supported per system. Today, networked storage (SAN or NAS) is the preferred method to connect hosts with storage. For SAN connectivity, Fibre Channel is the interface of choice. Legacy systems often use parallel SCSI, and SCSI Front-end directors are supported on non-DMX systems.

Fibre Channel — The DMX supports an eight-port, four-processor Fibre Channel Director. Earlier Symmetrix systems offered 2-port and 4-port Fibre Channel directors, and a 12-port director with an embedded switch. The standard Fibre Channel connection uses short-wave laser optics and multi-mode fiber optic cables for distances of up to 500 meters over 50 micron cable. The optional long-wave laser uses 9 micron single-mode optics for distances of 10 km and greater. Both switched fabric and arbitrated loop SANs are supported.

SCSI Channel Directors support HVD and LVD and speeds to 80MB/sec. SCSI Channel Directors are not supported in DMX.

iSCSI allows block level access over IP networks. It is supported on the DMX using the new Multi-Protocol Channel Director. This director can be configured to support FICON, 1Gb Ethernet for SRDF attach, and 1Gb Ethernet for iSCSI host attach. iSCSI is ideal for storage and server consolidation environments that require low cost connectivity that leverages existing IP networks. Note: The 4 Port Multi-Protocol Channel Director is supported on the DMX1000, DMX2000, and DMX3000. The 2-port director is supported on all DMX systems.

Page 23: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Mainframe Connectivity Options

ESCON eight-port, four-processor Director
– Supports data transfer rates up to 17 MB/s per port
– Single-mode and multi-mode configurations:
  • Eight multi-mode ports
  • Seven multi-mode ports and one single-mode port
  • Six multi-mode ports and two single-mode ports
– 8,192 Logical Volumes per director (2,048 per port)

FICON support using Multi-Protocol Channel Director
– 2 Gb/sec
– Point-to-point
– Switched point-to-point
  • Single FICON Fibre Channel Director between server and storage
  • No mixing of FICON and FC Open Systems on the same switch

Today, mainframe connectivity is through either ESCON or FICON serial channels. The original mainframe connectivity was through parallel interfaces with bus and tag cables. Except for a few legacy systems, this bus and tag has been replaced with ESCON because of increased speed and flexibility. ESCON uses multimode fiber optics and supports distances of up to 3 kilometers. Greater distances are supported using media converters.

FICON is Fibre Channel for mainframes. It offers superior performance and extended distance compared to its predecessor, ESCON. As such, most mainframe customers will adopt FICON as their primary mainframe channel connectivity over the next few years. FICON uses multi-mode fiber optics and supports distances of up to 500 meters; it may also use single-mode fiber optics for distances of up to 10 km and beyond.

Page 24: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Symmetrix Back End

Disk Director (also called Disk Adapter or DA) writes and reads data to/from physical disk drives
– DA is also responsible for disk and cache “scrubbing” and assists in parity-based data rebuilding
– DAs are Field Replaceable Units (FRUs) and “hot swappable”
– DAs are installed in pairs on adjacent slots within the card cage of Symmetrix
– SYMM 4 and 5 architectures use 40/80 MB/s SCSI to connect physical drives, with a maximum of 12 drives per port
– DMX architecture uses 2 Gb Fibre Channel drives
  • Eight ports per DA
  • Maximum of 18 dual-ported drives per port
  • In addition to the Direct Matrix connections to cache, each director has a separate message matrix for the transfer of control information

(Diagram: a DMX Disk Director with four 500 MHz PowerPC processors, a through d, serving back-end ports C and D.)

The primary purpose of the Back End director is to read and write data to the physical disks. However, when it is not staging data in cache or destaging data to disk, the disk director is responsible for proactive monitoring of physical drives and cache memory. This is referred to as disk and cache “scrubbing”.

“Disk Scrubbing” or Disk Error Correction and Error Verification: The disk directors use idle time to read data and check the polynomial correction bits for validity. If a disk read error occurs, the disk director reads all data on that track into Symmetrix cache memory, then writes several worst-case patterns to that track searching for media errors. When the test completes, the disk director rewrites the data from cache to the disk device, verifying the write operation.

The disk microprocessor maps around any bad block (or blocks) detected during the worst-case write operation, thus skipping defects in the media. If necessary, the disk microprocessor can reallocate up to 32 blocks of data on that track. To further safeguard the data, each disk device has several spare cylinders available. If the number of bad blocks per track exceeds 32, the disk director rewrites the data to an available spare cylinder. This entire process is called “error verification.”

The disk director increments a soft error counter with each bad block detected. When the internal soft error threshold is reached, the Symmetrix service processor automatically dials the EMC Customer Support Center and notifies the host system of errors via sense data. It also invokes dynamic sparing (if the Dynamic Sparing option is enabled). This feature maximizes data availability by diagnosing marginal media errors before data becomes unreadable.
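The error-verification bookkeeping described above can be condensed into a small decision function. The 32-block limit, spare-cylinder fallback, and dial-home threshold come from the text; the function shape and names are hypothetical.

```python
# Sketch of per-track error verification outcomes (illustrative only).
MAX_REMAPPED_BLOCKS = 32

def verify_track(bad_blocks, soft_error_count, threshold):
    """Return the recovery action and the updated soft error count."""
    soft_error_count += len(bad_blocks)
    if len(bad_blocks) == 0:
        action = "ok"
    elif len(bad_blocks) <= MAX_REMAPPED_BLOCKS:
        action = "remap blocks on track"     # skip defects in the media
    else:
        action = "rewrite track to spare cylinder"
    if soft_error_count >= threshold:
        action += "; dial home to EMC Customer Support"
    return action, soft_error_count
```

The real threshold is an internal Enginuity value; here it is passed in by the caller purely for illustration.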

“Cache Scrubbing” or Cache Error Correction and Error Verification: The disk directors use idle time to periodically read cache, correct errors, and write the corrected data back to cache. This process is called “error verification or scrubbing.” When the directors detect an uncorrectable error in cache, Symmetrix reads the data from disk and takes the defective cache memory block offline until an EMC Customer Engineer can repair it. Error verification maximizes data availability by significantly reducing the probability of encountering an uncorrectable error by preventing bit errors from accumulating in cache.

Page 25: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Disk Performance Basics

Three components of disk performance:
– Time to reposition the actuator (seek time)
– Rotational latency
– Transfer rate

Disk I/O time = seek time + rotational delay + transfer time

With a Symmetrix, I/Os are serviced from cache, not from the physical HDA
– Minimizes the inherent latencies of physical disk I/O
– Disk I/O at memory speeds

When you look at a physical disk drive, a read or write operation has three components that add up to the overall response time.

Actuator positioning is the time it takes to move the read/write heads over the desired cylinder. This is mechanical movement and is typically measured in milliseconds. The actual repositioning time depends on how far the heads must move, and it contributes the greatest share of the overall response time.

Rotational delay is the time it takes for the desired information to come under the read/write head. This time is a function of revolutions per second, or drive RPM: the faster the drive turns, the lower the rotational latency. A 10,000 RPM drive has an average rotational latency of approximately 3.0 milliseconds, which is half the time it takes to make one revolution.

Transfer rate is the smallest time component and is the time it takes to actually read or write the data. It is a function of drive RPM and data density, and is often measured as an internal transfer rate or an external transfer rate. The external rate is the speed at which the drive transfers data to the controller; it is limited by the internal transfer rate, but buffers on the drive modules themselves allow faster transfer rates.

The design objective of a Symmetrix is to avoid limiting the performance of host applications to the performance of the physical disk. This is accomplished using cache: write operations go to cache and are asynchronously destaged to disk, while read operations are served from cache, using the Least Recently Used algorithm and prefetching to keep the information most likely to be accessed in memory.
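The formula above (disk I/O time = seek + rotational delay + transfer) is easy to put numbers to. The 10,000 RPM / ~3 ms latency figure comes from the notes; the 5 ms seek and 0.5 ms transfer values are illustrative assumptions.

```python
# Back-of-the-envelope disk response time from the three components above.
def avg_rotational_delay_ms(rpm):
    # Average latency is half a revolution: (60,000 ms per minute / RPM) / 2.
    return (60_000 / rpm) / 2

def disk_io_ms(seek_ms, rpm, transfer_ms):
    return seek_ms + avg_rotational_delay_ms(rpm) + transfer_ms

print(avg_rotational_delay_ms(10_000))   # 3.0 ms, as stated in the notes
print(disk_io_ms(5.0, 10_000, 0.5))      # 8.5 ms with a 5 ms seek
```

Even optimistically, a purely mechanical I/O costs milliseconds — which is why servicing I/O from cache at memory speeds is the whole point of the design.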

Page 26: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Symmetrix Disk Comparisons

Drive    Interface      Architecture  Spindle Speed  Formatted Capacity (Mkt GB)  Formatted Capacity (Eng GB)
36 GB    Ultra SCSI     Sym 4.8       7,200          35.80 MF / 36.20 OS          33.72 OS
18 GB    Ultra SCSI     Sym 5         10,000         17.90 MF / 18.10 OS          16.86 OS
36 GB    Ultra SCSI     Sym 5         10,000         35.8 MF / 36 OS              33.72 OS
73 GB    Ultra SCSI     Sym 5         10,000         72.17 MF / 73.10 OS          68.38 OS
146 GB   Ultra SCSI     Sym 5         10,000         136 MF / 146 OS              135.97 OS
181 GB   Ultra SCSI     Sym 5         7,200          178.7 MF / 181 OS            169.31 OS
73 GB    Fibre Channel  DMX           10,000         72.17 MF / 73.10 OS          68.38 OS
146 GB   Fibre Channel  DMX           10,000         136 MF / 146 OS              135.97 OS

Symmetrix physical drives are manufactured by our suppliers (Seagate) to meet EMC’s rigorous quality standards and unique product specifications. These specifications include dedicated microprocessors (which can be XOR capable), the most functionally robust microcode available, and large onboard buffer memory (4 MB – 32 MB).

Again, while the physical speed of disk drives does contribute to overall performance, the design is for most read or write operations to be handled from cache.

Note: Marketing defines a GB as 1000 x 1000 x 1000 bytes, while Engineering defines a GB as 1024 x 1024 x 1024 bytes.
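The note's two gigabyte definitions explain the capacity columns in the table above. A quick conversion shows why a "marketing" 146 GB drive reports roughly 135.97 "engineering" GB:

```python
# Marketing vs. engineering gigabytes, per the note above.
MKT_GB = 1000**3   # marketing: 10^9 bytes
ENG_GB = 1024**3   # engineering: 2^30 bytes

def mkt_to_eng(gb):
    return gb * MKT_GB / ENG_GB

print(round(mkt_to_eng(146), 2))   # 135.97
```

The result matches the 146 GB row of the comparison table; other rows differ slightly because formatted capacity also subtracts formatting overhead.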

Page 27: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Symmetrix Global Cache Directors

Memory boards are now referred to as Global Cache Directors and contain global shared memory
– Symmetrix has a minimum of 2 memory boards and a maximum of 8
– Individual cache directors are available in 2 GB, 4 GB, 8 GB, and 16 GB sizes
– Boards are comprised of memory chips and divided into four addressable regions
– Generally installed in pairs
– Memory boards are FRUs and “hot swappable” (does not require a Symm power down or “reboot”)

Model     Number of Cache Boards  Maximum Cache Size
8230      2                       32 GB
8530      2/4                     32 GB / 64 GB
8830      4                       64 GB
DMX800    2                       32 GB
DMX1000   4                       64 GB
DMX2000   8                       128 GB
DMX3000   8                       128 GB

Cache boards are designed for each family of Symm. Symm 4.8 uses the M2 generation of memory boards. Symm 5 uses the M3/M4 generation of memory boards. The DMX uses M5. Because these boards have different designs, they cannot be swapped between families of Symm. Memory boards in the DMX are referred to as Global Cache Directors with CacheStorm technology.

On Symm 5, memory boards that connect to the Top High and Bottom High internal system busses are referred to as “High Memory”; conversely, boards that connect to Top Low and Bottom Low are known as “Low Memory”. It is important to note that even on the Symm 4.x, cache connects to both the X and Y internal busses. DMX uses direct connections between directors and cache.

“Hot swappable” means that a Customer Engineer, following documented procedure, can remove and replace the board without powering down the Symm. The CE procedure includes destaging all remaining data in cache and fencing off the board in order to prevent loss of data.

When configuring cache for the Symmetrix DMX systems, follow these guidelines:
• A minimum of four and a maximum of eight cache director boards is required for the DMX2000 system configuration; a minimum of two and a maximum of four cache director boards is required for the DMX1000 system configuration.
• Two-board cache director configurations require boards of equal size.
• Cache directors can be added one at a time to configurations of two boards and greater.
• A maximum of two different cache director sizes is supported, and the smallest cache director must be at least one-half the size of the largest cache director.
• In cache director configurations with more than two boards, no more than one-half of the boards can be smaller than the largest cache director.

Page 28: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Symmetrix Shared Global Memory

Shared Global Memory contains three types of information:
– Cache Slots: temporary repository for frequently accessed data (staging area between host and physical drive)
– Track Table: directory of the data residing in cache and of the location and condition of the data residing on Symmetrix physical disk(s)
– Communications and Mailboxes: contains performance and diagnostic information concerning Symmetrix and allows the independent front end and back end to communicate
– DMX also uses a message matrix for control and communications

(Diagram: global memory regions — Cache Slots, Track Table, Status and Communications, Mailboxes.)

The actual size requirements for cache depend on the configuration. The general rule that “more is better” also applies to cache, but again, the actual requirement is a function of the configuration and application access patterns. The CQS system provides sizing guidelines based on the actual configuration.

The primary use for cache is staging and destaging data between the host and the disk drives. Cache is allocated in tracks, referred to as cache slots, which are 32 KB in size (47 or 57 KB for mainframe). If the Symm is supporting both FBA and CKD emulation within the same frame, the cache slots will be the size of the largest track, either the 47K (3380) or 57K (3390) track size.

The Track Table keeps track of the status of each track of each logical volume. Approximately 16 bytes of cache space are used for each track, so a 2 GB volume would use approximately 1 MB of cache for track table space ((2 GB / 32 KB) x 16 B). You can see that cache requirements depend on the actual configuration.

Cache is also used to maintain all diagnostic and short-term performance information, as well as to provide the facility for Channel Directors to communicate with Disk Directors. The Symm maintains diagnostic information for every component within the architecture. Performance data includes I/Os per second, cache hit rate, and read/write percentage for the entire system, individual directors, and individual devices (logical volumes). This information is accumulated and stored as part of the Symm’s normal operations, whether or not someone (CE or customer) is referencing it. The Mailbox is used for communications between the directors. With DMX, while the Mailboxes still exist, a Communications and Control Matrix allows direct communication between directors.
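The track-table arithmetic above checks out numerically: at 16 bytes per 32 KB track, a 2 GB volume needs about 1 MB of track table.

```python
# Track table sizing per the note above: ~16 bytes of cache per track.
TRACK = 32 * 1024      # 32 KB open-systems (FBA) track
BYTES_PER_ENTRY = 16

def track_table_bytes(volume_bytes):
    return (volume_bytes // TRACK) * BYTES_PER_ENTRY

print(track_table_bytes(2 * 1024**3))   # 1048576 bytes = exactly 1 MB
```

A 2 GB volume holds 65,536 tracks, and 65,536 x 16 B is 1 MB — so track-table overhead scales with configured capacity, not with activity.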

Page 29: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Symmetrix Cache Management

Symmetrix cache management is based upon the following principles:
– Locality of Reference
  • If a data block has been recently used, adjacent data will be needed soon
  • Data is staged from disk to cache at a minimum of 4K, or blocks to the end of the track, or a full track
  • A prefetch algorithm detects sequential data access patterns
– Data re-use
  • Accessed data will probably be used again
– Least Recently Used (LRU) data is flushed from the cache first
  • Only keep active data in the cache
  • Free up cache slots that are inactive to make room for more active data

(Diagram: global memory regions — Cache Slots, Track Table, Status and Communications, Mailboxes.)

Prefetching - Once sequential access is detected, prefetch is turned on for that logical volume; prefetch is initiated by 2 sequential accesses to a volume. Once turned on, for every sequential access the Symm pulls the next two successive tracks into cache (access to track 1 on cylinder 1 will prompt the prefetch of tracks 2 and 3 on cylinder 1). After 100 sequential accesses to that volume, the next sequential access will initiate the prefetching of the next 5 tracks on that volume (access to track 1 on cylinder 10 will prompt the prefetch of tracks 2, 3, 4, 5, and 6 on cylinder 10). After the next 100 sequential accesses to that volume, the prefetch track value is increased to 8 (access to track 1 on cylinder 100 will prompt the prefetch of tracks 2, 3, 4, 5, 6, 7, 8, and 9 on cylinder 100). Any non-sequential access to that volume will turn the prefetch capability off.

As data is placed into cache or accessed within cache, it is given a pseudo-timestamp. This allows the Symm to keep only the most frequently accessed data in cache memory. The data residing in cache is ordered through an Age-Link-Chain: as data is touched (by a read operation, for example), it moves to the top of the chain. Every time a director performs a cache operation, it must take control of the LRU algorithm, which forces the director to mark the least recently used data in cache as available (to be overwritten by the next cache operation).
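The prefetch escalation described above (2 tracks, then 5 after 100 sequential accesses, then 8 after another 100) can be sketched as a lookup. The threshold boundaries are taken from the notes; the function itself is an illustrative simplification of per-volume prefetch state.

```python
# Sketch of the per-volume prefetch escalation from the notes above.
def prefetch_tracks(sequential_accesses):
    """Tracks to prefetch on the next sequential access to a volume.
    Any non-sequential access would reset the count to 0 (prefetch off)."""
    if sequential_accesses < 2:
        return 0      # prefetch not yet triggered
    if sequential_accesses < 100:
        return 2      # pull the next two successive tracks
    if sequential_accesses < 200:
        return 5
    return 8

print([prefetch_tracks(n) for n in (1, 2, 50, 150, 250)])   # [0, 2, 2, 5, 8]
```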

Page 30: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Symmetrix Card Cage

(Diagram: the Symmetrix card cage — sixteen side-by-side slots holding director cards (DA, SA, FA, EA, MM), with memory boards M1 and M2 in the center slots.)

Model      Maximum Front End Directors  Maximum Back End Directors  Maximum Cache Directors  Maximum Cache  Maximum Disk Drives
8230       2                            2                           2                        32 GB          48
8530       4                            4                           4                        64 GB          96
8830       8                            8                           4                        64 GB          384
DMX800     2                            2                           2                        32 GB          120
DMX1000    6                            2                           4                        128 GB         144
DMX1000P   4                            4                           4                        64 GB          144
DMX2000    12                           4                           8                        256 GB         288
DMX2000P   8                            8                           8                        256 GB         288
DMX3000    8                            8                           8                        256 GB         576

Though we logically divide the architecture of the Symm into Front End, Back End, and Shared Global Memory, these director and memory cards physically reside side-by-side within the card cage of the Symm. The DMX “P” models are configured for maximum performance rather than connectivity.

Page 31: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Enginuity Overview

Operating Environment for Symmetrix
– Each processor in each director is loaded with Enginuity
  • Downloaded from the service processor to the directors over the internal LAN
  • Zipped code is loaded from EEPROM to SDRAM (the control store of the director)
– Enginuity is what allows the independent director processors to act as one Integrated Cached Disk Array
  • Also provides the framework (coding) for advanced functionality like SRDF, TimeFinder, etc.
– All DMX shipped with the latest Enginuity, 5670, as of Sept. 2003

Example microcode level: 5568.34.22
– 55 = microcode “family” (major release level), indicating the supported Symmetrix hardware: 50 = Symm3, 52 = Symm4, 55 = Symm5, 56 = DMX
– 68 = field release level of the Symmetrix microcode (minor release level)
– 34.22 = field release level of the service processor code (minor release level)

Enginuity automatically reserves 12 GB (raw) for internal use as a Symmetrix File System (SFS). This space is automatically allocated while initially loading the Enginuity Operating Environment on Symmetrix systems and is not visible to the host environment. This 12 GB of raw SFS space is translated into 6 GB of usable space (mirrored configuration) and is spread equally across two 3 GB volumes.

The SFS stores statistical data that is generated and used to provide a number of benefits:
• Dynamically adjusting performance algorithms
• Enhancement of the dynamic mirror service policy
• Enhancement of Symmetrix Optimizer
• More rapid recovery from problems
• Enhanced system audit and investigation

Enginuity also allows Quality of Service (QoS), giving the ability to set varying priority levels to applications residing within a Symmetrix to meet varying customer needs or agreements.

Page 32: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Symmetrix Configuration Information

Symmetrix configuration information includes the following:
– Physical hardware that is installed: number and type of directors, memory, and physical drives
– Mapping of physical disks to logical volumes
– Mapping of SCSI addresses to volumes and volumes to front-end directors
– Operational parameters for front-end directors

Configuration information is referred to as the IMPL.bin file, or simply “the bin file”. It is stored in two places:
– On the hard disk of the Symmetrix Service Processor
– In the EEPROM of each Symmetrix Director

Configuration changes can also be made using the EMC ControlCenter Configuration Manager GUI and the WideSky CLI.

(Diagram: the IMPL.BIN configuration file edited in PC memory, loaded from the system, from disk, or from a default director, and saved to the PC hard disk.)

Two very important concepts:

Each director (both Channel and Disk) has a local copy of the configuration file (stored in EEPROM). This enables Channel Directors to be aware of the Disk Directors that are managing the physical copy(ies) of Symmetrix Logical Volumes, and vice versa. The bin file also allows Channel Directors to map host requests — a channel address, or target and LUN — to the Symmetrix Logical Volume.

Changes made to the bin file (non-SDR changes) must first be made to the IMPL.BIN on the Service Processor and then downloaded to the directors over the internal Ethernet LAN. Though Customer Service has the capability to do remote bin file updates (using the SymmRemote application), their standard operating procedure mandates the CE be physically present for all configuration changes. In addition, CS requires that all CEs do a comparison analysis prior to committing changes (read out existing IMPL.BIN and compare to proposed IMPL.BIN.)

Page 33: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Mapping Physical Volumes to Logical Volumes

Symmetrix Physical Drives are split into Hyper Volume Extensions

Hyper Volume Extensions (disk slices) are then defined as Symmetrix Logical Volumes
– Symmetrix Logical Volumes are internally labeled with a hexadecimal identifier (0000-FFFF)
– Maximum number of Logical Volumes per Symmetrix configuration = 8,192

(Diagram: an 18 GB physical drive split into four 4.5 GB logical volumes.)

While “hyper-volume” and “split” refer to the same thing (a portion of a Symmetrix physical drive), a “logical volume” is a slightly different concept. A logical volume is the disk entity presented to a host via a Symmetrix channel director port. As far as the host is concerned, the Symmetrix Logical Volume (SLV) is a physical drive. As we will see, an SLV physically resides on at least one hyper-volume, but may be mirrored to more than one hyper-volume on the back end.

Do not confuse Symmetrix Logical Volumes with host-based logical volumes. Symmetrix Logical Volumes are defined by the Symmetrix Configuration (BIN File). Host-based logical volumes are configured (by customers) through Logical Volume Manager software (Veritas LVM, NT Disk Administrator, ...etc.).

Note: This is a very simplistic example of hyper-volume extensions on a physical drive. In actuality, the true useable capacity of the drive would be less than 18GB due to disk formatting and overhead (track tables, etc.). This would result in each of the 4 splits in this example being approximately 4.21GB in size (open systems).
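The note's ~4.21 GB figure follows from the formatted capacity. Using the 16.86 engineering GB formatted capacity of an "18 GB" open-systems drive (from the disk comparison table earlier), four equal splits land near 4.21 GB rather than 4.5 GB:

```python
# Splitting a drive's formatted capacity into equal hyper volumes.
FORMATTED_GB = 16.86   # formatted open-systems capacity of an "18 GB" drive

def hyper_size_gb(formatted_gb, splits):
    return formatted_gb / splits

print(hyper_size_gb(FORMATTED_GB, 4))   # ~4.21 GB per hyper volume
```

The exact figure depends on per-drive overhead (track tables, formatting), so treat this as an approximation consistent with the note.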

Page 34: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Symmetrix Logical Volume Specifications

Volume Specifications
– Enginuity allows up to 128 Hyper Volumes to be configured from a single Physical Drive
– The size of a volume is defined as a number of cylinders (FBA cylinder = 15 tracks x 32 KB), with a maximum size of ~16 GB
– All Hyper Volumes on a physical disk do not have to be the same size; however, a consistent size makes planning and ongoing management easier
– Hyper Volume(s) are the physical disk partitions that comprise Symmetrix Logical Volumes
  • One mirrored Symmetrix Logical Volume = two Hyper Volumes

(Diagram: hyper volumes spread across five physical disks.)

Volume specifications are illustrated here.

Page 35: Symmetrix Foundations Mr 5wp Symmfd Wrapper


Defining Symmetrix Logical Volumes

Symmetrix Logical Volumes are configured using the service processor and the SymmWin interface/application
– The EMC Configuration Group uses information gathered during the pre-site survey to create the initial configuration
  • Subsequent changes to the configuration must be approved by the Configuration Group through their standard change control process (expected turnaround is 5 days)
– Generates the configuration file (IMPL.BIN) that is downloaded from the service processor to each director

Most configuration changes can be performed online at the discretion of the EMC Customer Engineer. Configuration changes can also be performed online using the EMC ControlCenter Configuration Manager and the WideSky Command Line Interface.

(Diagram: Symmetrix Service Processor running the SymmWin application, managing the configuration of the physical disks)

The C4 group (Configuration and Change Control Committee) is the division of Global Services responsible for initial Symm configuration and any subsequent changes to the configuration. They use time-honored and extensive best practices and tools to configure Symms. There is also much manual review to be done to ensure that BIN files are valid. For planning purposes, allow at least 5 days to produce a BIN file or make major changes to a configuration.

An important misperception to correct is that only the CE can change the bin-file. While this might have been true at one time, today the customer may make configuration changes using EMC ControlCenter GUI or the WideSky GUI.

Prior to 5x66 Enginuity, BIN file configuration was performed using a DOS-based program called AnatMain.


Symmetrix Logical Volume Types

Open Systems hosts use Fixed Block Architecture (FBA)
– Each block is a fixed size of 512 bytes
– Sector = 8 blocks (4,096 bytes)
– Track = 8 sectors (32,768 bytes)
– Cylinder = 15 tracks (491,520 bytes)
– Volume size is referred to by the number of cylinders
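The FBA hierarchy above composes by simple multiplication, which can be sketched as follows. Treating the "~16 GB" maximum as 16 × 2³⁰ bytes is an assumption made here for illustration.

```python
# FBA geometry per the slide: block -> sector -> track -> cylinder.
BLOCK = 512                 # bytes
SECTOR = 8 * BLOCK          # 4,096 bytes
TRACK = 8 * SECTOR          # 32,768 bytes
CYLINDER = 15 * TRACK       # 491,520 bytes

print(CYLINDER)             # 491520

# Volume sizes are expressed in cylinders; the ~16 GB hyper-volume
# maximum works out to roughly this many cylinders:
max_bytes = 16 * 2**30      # assumption: 16 GB read as 16 GiB
print(max_bytes // CYLINDER)   # 34952
```

So a maximum-size hyper volume is on the order of 35,000 FBA cylinders.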

Mainframes use Count Key Data (CKD)
– Variable block size specified in "count"
– Emulate standard IBM volumes
• 3380 D, E, K, K+, K++ (max. track size 47,476 bytes)
• 3390-1, -2, -3, -9 (max. track size ~56,664 bytes)
• Volume size defined as a number of cylinders

Symmetrix stores data in cache in FBA and CKD formats and on physical disk in FBA format (32 KB tracks)
– Emulates the "expected" disk geometry to the host OS through the Channel Directors

(Diagram: FBA 512-byte data block vs. CKD count–key–data record)

CKD and FBA physicals can be mixed in a Symmetrix if the ESP license is purchased for that Symm. ESP allows the Symmetrix to deal with the 2 fundamentally different types of low-level formats.

A notable exception to the “512-byte” Open Systems rule is AS/400. It uses 520 bytes per block. The extra 8 bytes are for host system overhead. Enginuity, prior to 5566 on the Symmetrix 5, only supports a single type of FBA format on Open Systems drives. If you connect an AS/400 to a pre-5566 Symmetrix, all FBA devices must be formatted 520. Open Systems hosts other than the AS/400 must be configured to use 520-formatted volumes. BE AWARE THAT CHANGING THE LOW-LEVEL FORMAT OF PHYSICAL DEVICES TYPICALLY REQUIRES SYMMETRIX DOWNTIME. Also, reformatting existing 512 devices will erase them, requiring a potentially complex backup and restore of all Open Systems data (VTOC the drives). With 5566+ on Symm 5 +, Enginuity has SLLF (Selective Low-Level Format) capabilities. This allows some drives to be formatted 512 and others 520, avoiding the complications mentioned above.


Media Protection

Data protection options are configured at the volume level, and the same system can employ a variety of protection schemes
– Mirroring (RAID 1)
• Highest performance, availability, and functionality
• Two mirrors of one Symmetrix Logical Volume located on separate physical drives
– Parity RAID
• 3+1 (3 data and 1 parity volume) or 7+1 (7 data and 1 parity volume)
• Known as RAID S or RAID R in Symm 5 and earlier
– RAID 1/0 – Mirrored Striped Mainframe Volumes
– Dynamic Sparing
• One or more HDAs that are used when Symmetrix detects a potentially failing (or failed) device
• Can be utilized to augment the data protection scheme
• Minimizes exposure after a drive failure and before drive replacement
– SRDF (Symmetrix Remote Data Facility)
• Mirror of a Symmetrix Logical Volume maintained in a separate Symmetrix frame

The RAID Advisory Board has rated configurations with both SRDF and Parity RAID or RAID 1 Mirroring with the highest availability and protection classification: Disaster Tolerant Disk System Plus (DTDS+)

RAID - Redundant Array of Independent Disks

See http://www.raid-advisory.com/emc.html for the ratings.


Mirror Positions

Internally, each Symmetrix Logical Volume is represented by four mirror positions – M1, M2, M3, and M4
Mirror positions are actually data structures that point to the physical location of a mirror of the data and the status of each track
Each mirror position represents a mirror copy of the volume or is unused

(Diagram: Symmetrix Logical Volume 001 with mirror positions M1–M4; here M1 holds an unprotected volume, M3 a remote replica, and M4 a local replica)

Before getting too far into volume configuration, understanding the concept of mirror positions is very important. Within the Symmetrix, each logical volume is represented by four mirror positions – M1, M2, M3, M4. These mirror positions are actually data structures that point to the physical location of a data mirror and the status of each track. Each position either represents a mirror or is unused. For example, an unprotected volume will only use the M1 position to point to the only data copy. A RAID-1 protected volume will use the M1 and M2 positions. If this volume was also protected with SRDF, three mirror positions would be used, and if we add a BCV to this SRDF-protected RAID-1 volume, all four mirror positions would be used.

Note that the order in which mirror positions are assigned is not important: a BCV or SRDF mirror is assigned the next available unused mirror position. For example, if a BCV was established to a RAID-1 protected volume, it would assume the M3 mirror position.

Another thing to keep in mind is that mirror positions are logical pointers. With local mirrors, the pointer is to the physical hyper volume (Disk Director, Drive, and Split). In the case of SRDF, the mirror position actually points to a Logical Volume in the remote Symmetrix.
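The "next available unused position" behavior described above can be sketched as a small data structure. This is a hedged illustration only — the class and names here are hypothetical and do not reflect actual Enginuity internals.

```python
# Hedged sketch: a logical volume with four mirror positions, where each
# new mirror (RAID-1 copy, SRDF remote, BCV) takes the next unused slot.
class LogicalVolume:
    def __init__(self, number):
        self.number = number
        self.positions = {"M1": None, "M2": None, "M3": None, "M4": None}

    def attach_mirror(self, kind):
        """Assign the next available unused mirror position."""
        for slot, holder in self.positions.items():
            if holder is None:
                self.positions[slot] = kind
                return slot
        raise RuntimeError("all four mirror positions are in use")

lv = LogicalVolume("001")
lv.attach_mirror("local hyper")    # M1 - the primary data copy
lv.attach_mirror("local hyper")    # M2 - RAID-1 mirror
lv.attach_mirror("SRDF remote")    # M3 - remote replica
print(lv.attach_mirror("BCV"))     # M4 - a BCV takes the last free slot
```

This mirrors the slide's example: RAID-1 plus SRDF plus a BCV consumes all four positions.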


Mirroring: RAID-1

Two physical “copies” or mirrors of the data
Host is unaware of the data protection being applied

(Diagram: Logical Volume 001 presented to the host at SCSI Target 1, LUN 0; its M1 hyper resides on a physical drive behind Disk Director 2 and its M2 hyper on a physical drive behind Disk Director 15)

Mirroring provides the highest level of performance and availability for all applications. Mirroring maintains a duplicate copy of a logical volume on two physical drives. The Symmetrix maintains these copies internally by writing all modified data to both devices. The mirroring function is transparent to attached hosts, as the host views the mirrored volumes as a single logical volume.

In the example shown, Hyper 3 on Physical Drive 0 on DA 2 is the M1 for Logical Volume 001. Hyper 0 on Physical Drive 0 on DA 15 is the M2 for Logical Volume 001. Two physical mirrors of one logical volume are being presented to the host (using SCSI address 1,0) as if the volume were an entire physical drive. Notice that if the director numbers of the DAs are added together (2+15), they equal 17. This is what is known as the “rule of 17”. Because of where within the card cage the DA pairs reside (1/2, 3/4, 13/14, 15/16), as long as the sum of the DA director numbers equals 17 (1/16, 2/15, 3/14, 4/13), the mirrors will always be on different internal system busses (for the highest availability and maximum use of Symm resources).
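The "rule of 17" above reduces to a one-line check:

```python
# "Rule of 17": a mirrored pair's hypers should sit behind DA directors
# whose numbers sum to 17 (1/16, 2/15, 3/14, 4/13), which places the two
# mirrors on different internal system busses.
def on_different_busses(da_m1, da_m2):
    return da_m1 + da_m2 == 17

print(on_different_busses(2, 15))   # True  - the pairing in this example
print(on_different_busses(2, 3))    # False - a same-bus pairing to avoid
```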


(Diagram: Logical Volumes 000, 004, 008, and 00C, each with its M1 hyper on one physical drive and its M2 hyper on a second physical drive)

Mirrored Service Policy

Symmetrix leverages either or both mirrors of a Logical Volume to fulfill read requests as quickly and efficiently as possible
Two options for mirror reads: Interleave and Split
– Interleave maximizes throughput by using both Hyper Volumes for reads alternately
– Split minimizes head movement by targeting reads for specific volumes to either the M1 or M2 mirror
With Dynamic Mirror Service Policy (DMSP), the policy is dynamically adjusted based on I/O patterns
– Adjusted approximately every 5 minutes
– Set at a logical volume level

During a read operation, if data is not available in cache memory, the Symmetrix reads the data from the volume chosen for best overall system performance. Performance algorithms within Enginuity track path-busy information, as well as the actuator location, and which sector is currently under the disk head in each device. Symmetrix performance algorithms for a read operation choose the best volume in the mirrored pair based upon these service policies.

• Interleave Service Policy – Share the read operations of a mirror pair by reading tracks from both logical volumes in an alternating method: a number of tracks from the primary volume (M1) and a number of tracks from the secondary volume (M2). The Interleave Service Policy is designed to achieve maximum throughput.

• Split Service Policy – Different from the Interleave Service Policy because read operations are assigned to either the M1 or the M2 logical volumes, but not both. Split Service policy is designed to minimize head movement.

• Dynamic Mirror Service Policy (DMSP) – DMSP dynamically chooses between the Interleave and Split policies at the logical volume level, based on current performance and environmental variables, to maximize throughput and minimize head movement. DMSP adjusts each logical volume dynamically based on recent access patterns. This is the default mode. The Symmetrix system tracks I/O performance of logical volumes (including BCV volumes), physical disks, and disk directors, and based on these measurements directs read operations for mirrored data to the appropriate mirror. As access patterns and workloads change, the DMSP algorithm analyzes the new workload and adjusts the service policy to optimize performance.
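The trade-off DMSP arbitrates can be sketched very roughly as follows. This is a hedged illustration only: the actual DMSP algorithm is internal to Enginuity, and the 0.5 threshold here is an arbitrary value chosen for the example, not EMC's.

```python
# Hedged sketch of the policy choice described above: favor Interleave
# (throughput) when recent reads look sequential, and Split (minimal head
# movement) when they look random, re-evaluated periodically per volume.
def choose_policy(sequential_ratio):
    """sequential_ratio: fraction of recent reads that were sequential.
    The 0.5 cutoff is an illustrative assumption, not EMC's value."""
    return "Interleave" if sequential_ratio >= 0.5 else "Split"

print(choose_policy(0.8))   # Interleave - mostly sequential reads
print(choose_policy(0.1))   # Split - mostly random reads
```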


Parity RAID

Parity RAID is also referred to as RAID-S in Symm 5 and earlier architectures
3+1 (3 data volumes and 1 parity volume) or 7+1
– Parity calculated by Symmetrix disk drives using the Exclusive-OR (XOR) function
– Parity and difference data (the result of XOR calculations) passed between drives by the DAs
– Member drives must be on different DA ports (ideally on different DAs)
Parity volumes are distributed across member drives in the RAID Group
Unlike RAID-5, the data is not striped (“Volume A” in the diagram below is an entire Logical Volume and is related to “Volume B” and “Volume C” only via parity calculations)

(Diagram: a 3+1 Parity RAID group shown across four RAID ranks; data volumes A–L (LV 001–00c) are grouped in threes, with the parity volume for each group — parity for ABC, DEF, GHI, and JKL — rotated onto a different member drive)

Symmetrix Parity RAID technology is a combination of hardware and software functionality that improves data availability on drives in Symmetrix systems by using a portion of the array to store redundancy information. This redundancy information, called parity, can be used to regenerate data if the data on a disk drive becomes unavailable.

Parity RAID is also referred to as RAID-S in Symm 5 and earlier architectures, and resembles RAID-5. However, EMC’s Parity RAID DOES NOT STRIPE DATA; parity, on the other hand, is striped across all disks in the rank.

Compared to a mirrored Symmetrix system, Parity RAID offers more usable capacity than a mirrored system containing the same number of disk drives. Like the Mirroring or Dynamic Sparing options, Symmetrix RAID parity protection can be dynamically added or removed. For example, for higher performance requirements and high availability, a Parity RAID group of volumes can be reconfigured as multiple mirrored pairs. Within the same Symmetrix system, data can be protected through Parity RAID, mirroring, and SRDF. Two configurations are supported: 3+1 and 7+1.

Parity RAID employs the same technique for generating parity information as many other commercially available RAID solutions, that is, the Boolean operation EXCLUSIVE OR (XOR). However, EMC’s Parity RAID implementation reduces the overhead associated with parity computation by moving the operation from controller microcode to the hardware on the XOR-capable disk drives.
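The XOR technique described above is easy to demonstrate: parity is the byte-wise XOR of the data members, and any one lost member can be regenerated from the survivors. The byte values below are arbitrary example data.

```python
# XOR parity: parity = a ^ b ^ c, byte by byte.
a = bytes([0x0F, 0xAA, 0x10])
b = bytes([0xF0, 0x55, 0x20])
c = bytes([0x01, 0xFF, 0x30])

parity = bytes(x ^ y ^ z for x, y, z in zip(a, b, c))

# Lose member "b"; rebuild it from the remaining data plus parity,
# because a ^ c ^ (a ^ b ^ c) == b.
rebuilt_b = bytes(x ^ z ^ p for x, z, p in zip(a, c, parity))
print(rebuilt_b == b)   # True
```

This is the regeneration step a Parity RAID rank performs when a data volume becomes unavailable.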


Parity RAID Considerations

While Symmetrix Parity RAID minimizes some of the hardware and software overhead associated with typical RAID-5, it is not offered as a performance solution
– For high data availability environments where cost and performance must be balanced
– Fixed 3+1 configuration means 25% of disk space is used for protection
– Avoid using in application environments that are 25% or greater write intensive
– Every write to a data volume requires an update (write) to the parity volume within that rank
– Write activity to the parity volume equals the total writes to the 3 data volumes within that rank
– In write-intensive environments, the parity volume is likely to reach its Fast Write Ceiling, sending the entire rank into delayed write mode
Spread high write volumes across Parity RAID Groups (avoid spindle contention)
In some configurations, Parity RAID in a DMX environment may perform as well as RAID 1 protection on a Symmetrix 8000
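The write-load consequence stated in the bullets above follows from simple arithmetic: the parity volume absorbs one write for every data write in the rank. The per-volume rates below are arbitrary example values.

```python
# Write-load arithmetic for a 3+1 rank: every data write also updates the
# parity volume, so parity write activity equals the sum of the writes to
# the three data members.
data_writes = [120, 80, 50]        # writes/sec to the 3 data volumes (example values)
parity_writes = sum(data_writes)   # the single parity volume sees all of them

print(parity_writes)               # 250 - the rank's hottest volume
print(1 / (3 + 1))                 # 0.25 - the fixed 25% capacity overhead
```

This is why the parity volume is the first to approach its Fast Write Ceiling in write-intensive environments.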

Some of the inefficiencies associated with RAID-5 have been eliminated with EMC’s Parity RAID in a DMX system; however, RAID-1 Mirroring continues to provide the highest availability and performance, and should be positioned as such.

If customer requirements dictate using Parity RAID, planning and careful attention to layout is required to ensure optimal performance.


Dynamic Sparing

Dedicated spare(s) protect storage
Disk errors are detected during I/O operations or through the DA’s “disk scrubbing”
Data from a failed disk is copied to the Dynamic Spare
When the failed disk is replaced, data is automatically restored and the Dynamic Spare resumes its role as standby


Every Symmetrix logical volume has 4 mirror positions. There is no priority associated with any of these positions. They simply point to potential physical locations (on the back end of the Symmetrix) for the logical volume entity. When sparing is necessitated, hyper volumes on the spare disk devices take the next available mirror position for the logical volumes present on the failing volume. All of these dynamic spare hyper volumes are marked as having all tracks invalid in the respective mirror positions of the logical volumes. It is now the responsibility of the Symmetrix to copy all tracks over to the Dynamic Spare.

Dynamic sparing occurs at the physical drive level, since a physical drive is the FRU (Field Replaceable Unit) in the Symmetrix. In other words, you can’t just replace a failed hyper volume, only the disk it resides on. However, the actual data migration from the volumes on the failed drive to the dynamic spare occurs at the logical volume level.

Dynamic Sparing is also supported with Parity RAID; a minimum of 3 spares is suggested. If a drive fails, a dynamic spare drive will copy the data volumes onto itself by rebuilding them from parity and reading from any remaining uncorrupted data. If there are at least 3 spares available, the first spare will also start copying data from uncorrupted drives in the group, and the other 2 spares will copy the contents of the remaining data volumes on the unaffected drives in the group. This results in the formerly parity-protected volumes now being temporarily mirrored. Since parity can’t be calculated with a drive lost, and mirroring is a faster way to make sure the data is redundantly protected, mirroring the entire RAID group is the best way to protect against data loss until the problematic drive can be replaced.


(Diagram: Logical Volumes 001, 002, 003, …, 00F grouped into a single Meta Volume and presented at SCSI address Target 1, LUN 0)

*Note: Symmetrix Engineering recommends Meta Volumes no larger than 512GB

Meta Volumes

Between 2 and 255* Symmetrix Logical Volumes can be grouped into a Meta Volume configuration and presented to Open Systems hosts as a single disk
– Assigned one SCSI address
Allows volumes larger than the current maximum hyper volume size of 16 GB
– Satisfies requirements for environments where there is a limited number of SCSI addresses or volume labels available
Data is striped or concatenated within the Meta Volume
Stripe size is configurable
– A 2-cylinder stripe is the default and is appropriate for most environments

Meta Volumes become very useful in several environments. First, consider the environment where channel addresses are at a premium. Meta Volumes allow customers to present larger Symmetrix Logical Volumes to the host environment, so they are able to present more GBs with fewer channel addresses. For example, the maximum number of devices that can be presented on a Symm 5 FA port is 256 (128 for Symm 4.X). If the customer has multipathing software (like PowerPath), devices will be presented down multiple Symm ports: four paths to 64 volumes has just exhausted the 256 devices for those four Symm ports. Second, there is a limitation on the number of volumes a host can manage. For example, with NT, drive lettering puts a limit on the number of volumes, and Meta Volumes prevent “running out of drive letters” by presenting larger volumes to NT hosts (Engineering has successfully presented a 1 TB volume to NT).
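The address-exhaustion example in the note reduces to a multiplication: every path to a volume consumes a device address.

```python
# The note's example: 64 volumes presented down 4 paths consumes
# 256 device addresses across those four Symm ports.
paths = 4
volumes = 64
addresses_used = paths * volumes
print(addresses_used)   # 256

# Grouping small volumes into larger Meta Volumes presents the same
# capacity using fewer addresses (and fewer host drive letters).
```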


TimeFinder Introduction

TimeFinder allows local replication of Symmetrix Logical Volumes for business continuance operations
Utilizes a special Symmetrix Logical Volume called a BCV, or Business Continuance Volume
– BCV can be dynamically attached to another volume, synchronized, and split off
– Host can access the BCV as an independent volume that may be used for business continuance operations
– Full volume copy

1. “Establish” BCV
2. Synchronize
3. “Split” BCV
4. Execute BC operations using the BCV

(Diagram: STD and BCV volumes shown in the established and split states)

TimeFinder uses Business Continuance Volumes (BCVs) to create copies of a volume for parallel processing.

Basic TimeFinder operations include:
• Establish – Creates a mirror relationship between any standard volume and a BCV. Basically, the BCV assumes the next available mirror position of the source volume. While a BCV is established, it is “hidden” from view and cannot be accessed.
• Synchronize – Copies data from the source to the BCV volume. Synchronization takes place while production continues on the source volume. TimeFinder supports incremental establish by default, where only data changed since the last establish is synchronized.
• Split – Allows the BCV to be accessed as an independent volume for parallel processing.
• Restore – Allows the BCV to be established as a mirror to either the original source or a different volume, and the data on the BCV is synchronized.
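The establish/split lifecycle above can be sketched as a small state machine. This is a hedged illustration only — the class and states are hypothetical and not the actual TimeFinder implementation.

```python
# Hedged sketch of the BCV lifecycle: available -> established -> split.
class BCV:
    def __init__(self):
        self.state = "available"

    def establish(self):
        # The BCV takes a mirror position of the standard volume and is
        # hidden from hosts while established.
        self.state = "established"

    def split(self):
        if self.state != "established":
            raise RuntimeError("must be established before splitting")
        # After the split, the BCV is an independent, host-accessible copy.
        self.state = "split"

bcv = BCV()
bcv.establish()
bcv.split()
print(bcv.state)   # split - ready for business continuance operations
```

The guard in `split()` reflects the ordering of the slide's four steps: a BCV cannot be split before it has been established and synchronized.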


EMC SNAP Introduction

EMC SNAP uses snapshot techniques to create logical point-in-time images of a source volume
– A Snapshot is a virtual abstraction of a volume
– Multiple Snapshots can be created from the same source
– Snapshots are available immediately
EMC SNAP does a Copy-on-Write
– Writes to the production volume are first copied to the Save Area
– Uses only a fraction of the source volume’s capacity (~20–30%)
Snapshots can be used for both read and write processing
– Reads of unchanged data will be from the production volume
– Changed data will be read from the Save Area
– Writes to the Snapshot are saved in the Save Area

(Diagram: the production view of Volume A and the Snapshot view (VDEV); new writes to Volume A are first copied to the Save Area)

EMC Snap creates space-saving, logical point-in-time images or “snapshots.” The snapshots are not full copies of data; they are logical images of the original information based on the time the snapshot was created. It’s simply a view into the data. A set of pointers to the source volume data tracks is instantly created upon activation of the snapshot. This set of pointers is addressed as a logical volume and is made accessible to a secondary host that uses the point-in-time image of the underlying data.
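The copy-on-write behavior described above can be sketched in a few lines. This is a hedged illustration of the general technique, not EMC's implementation; the track-level dictionaries are hypothetical.

```python
# Hedged copy-on-write sketch: before a production write changes a track,
# the original data is preserved in the save area. The snapshot reads
# changed tracks from the save area and unchanged tracks from production.
production = {0: "A", 1: "B", 2: "C"}   # track number -> data
save_area = {}                          # original data for changed tracks

def production_write(track, data):
    if track not in save_area:                # first change to this track:
        save_area[track] = production[track]  # preserve the original
    production[track] = data

def snapshot_read(track):
    # Point-in-time view: prefer the save area, fall back to production.
    return save_area.get(track, production[track])

production_write(1, "B'")
print(snapshot_read(1))   # B  - point-in-time data from the save area
print(snapshot_read(0))   # A  - unchanged, read through to production
```

Only changed tracks consume save-area space, which is why the snapshot needs just a fraction of the source volume's capacity.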


SRDF Introduction

Symmetrix Remote Data Facility (SRDF)
Maintains a real-time, or near real-time, copy of data at a remote location
Similar concept to RAID-1, except the mirror is located in a different Symmetrix
Primary copy is called the Source; remote copy is called the Target
Link options between the local and remote Symmetrix based on distance and performance requirements
– ESCON
– Fibre Channel
– Gigabit Ethernet

Three different options to meet recovery objectives

(Diagram: Source Symmetrix linked to Target Symmetrix)

SRDF is an online, host-independent, mirrored data storage solution that duplicates production site data (source) to a secondary site (target). If the production site becomes inoperable, SRDF enables rapid manual failover to the secondary site, allowing critical data to be available to the business operation in minutes. While it is easy to see this as a disaster recovery solution, the remote copy can also be used for business continuance during planned outages as well as backups, testing, and decision support applications. EMC offers a complete set of replication solutions to meet a wide range of service level requirements. When implementing a remote replication solution, users must balance application response time, recovery point objectives, and communications and infrastructure costs.

SRDF in Synchronous mode offers minimal impact to the application. Performance is dependent on the distance between the source and target Symmetrix: the greater the distance, the more overhead to complete the write operation. The communications connections must be sized appropriately to handle peak processing workloads without impacting performance. SRDF in synchronous mode provides the highest level of data integrity.

SRDF/AR (formerly SAR) offers no impact to host server performance, but requires BCVs (Business Continuance Volumes) to allow point-in-time copies to be periodically split off from the source copy. Because the copy is periodically split off, the communication bandwidth requirement is less than for a synchronous mode operation. The target copy, however, is no longer synchronous with the source, meaning that in the event of a source failure, the data on the target will only be current to the last resync from the BCV.

SRDF/A bridges the gap between SRDF and SRDF/AR by balancing response time, infrastructure costs, communication requirements, and recovery point objectives to provide a new level of remote replication. SRDF/A offers no impact to the host servers and requires some additional cache to operate, adding slightly to the infrastructure costs, but only requires communication links sized to meet the average I/O workload (vs. peak for SRDF synchronous). SRDF/A provides an improved recovery point objective (vs. SRDF/AR) and allows customers to deploy remote replication over extended distances.


(Diagram: Channel Directors connect hosts to Cache; Disk Directors connect Cache to the physical disk drives, which are presented as logical disk volumes)
Physical and Logical Volumes

Symmetrix Physical Drives are divided into Hyper Volumes (disk slices)
One or more Hyper Volumes comprise a Symmetrix Logical Volume
– Mirroring would require 2 Hyper Volumes for every 1 Symmetrix Logical Volume (M1 & M2)
Symmetrix Logical Volumes are made available to hosts through Channel Directors
– The bin file must map a Logical Volume’s channel address to the Channel Director / Processor / Port in order for the volume to be discovered and used by hosts
The host sees Symmetrix Logical Volumes as if they were entire physical drives

From the Symmetrix perspective, physical disk drives are partitioned into disk slices called Hyper Volumes. A Hyper Volume could be used as an unprotected Symmetrix Logical Volume, a mirror of a Symmetrix Logical Volume, a Business Continuance Volume (BCV), a parity volume for RAID S, a remote mirror using SRDF, a Disk Reallocation Volume (DRV), etc. Within the Symmetrix bin file, the emulation type, size in cylinders, count, number of mirrors, and special flags (like BCV, DRV, Dynamic Spare) are defined. Each Symmetrix Logical Volume is assigned a hexadecimal identifier. The bin file also tells the Channel Director which volumes are presented on which port and the address used to access each one. When more than one host is connected to a port, LUN masking, using Volume Logix, is used to further restrict which host has access to which volume.

From the host’s perspective, when a device discovery process occurs, the information provided back to the OS appears to reference a series of SCSI disk drives. To an Open Systems host, the Symm looks like JBOD (Just a Bunch Of Disks). The host is unaware of the bin file, RAID protection, remote mirroring, BCV mirrors, dynamic sparing, etc. In other words, the host “thinks it’s getting” an entire physical drive.


Configuration Considerations

Understand the applications on the hosts connected to the Symmetrix system
– Capacity requirements
– I/O rates
– Read/write ratios
– Locality of reference – sequential or random
Understand special host considerations
– Maximum drive and file system sizes supported
– Consider the Logical Volume Manager (LVM) on the host and the use of data striping
– Device sharing requirements – clustering
Determine volume size and the appropriate level of protection
– Symmetrix provides flexibility for different sizes and protection within a system
– Standard sizes make it easier to manage
Determine connectivity requirements
– Number of channels available from each host
Distribute workloads from the busiest to the least busy

The best advice for configuring a Symmetrix storage subsystem for maximum performance is “Go wide before deep!” This means the best possible performance will only be achieved if all the resources within the system are being equally utilized. This is much easier said than done, but through careful planning, you will have a better chance of success. Planning starts with understanding the host and application requirements.


Symmetrix Availability: Phone-Home and Dial-In

EMC Phone-Home capability
– Service Processor connects to an external modem (can fit in existing telco racks)
– Communicates error and diagnostic information to EMC Customer Service
– Provides problem resolution
Dial-In capability
– Product Support Engineer (PSE) or Customer Engineer (CE) dials in
– Allows full control of the service processor through a proprietary and secure interface
– Allows for proactive and reactive maintenance
– Can be disabled by the customer through the external modem

Every Symmetrix unit has an integrated service processor that continuously monitors the Symmetrix environment. The service processor communicates with the EMC Customer Support Center through a customer-supplied, direct phone line. The service processor automatically dials the Customer Support Center whenever Symmetrix detects a component failure or environmental violation. An EMC Product Support Engineer at the Customer Support Center can also run diagnostics remotely through the service processor to determine the source of a problem and potentially resolve it before the problem becomes critical.

Most call-home incidents are software-related and can be resolved remotely by dialing back into the Symmetrix. When required, a Customer Engineer will be dispatched to the Symmetrix to replace hardware or perform other maintenance.


Symmetrix architecture is based on the concept of N+1 redundancy
– One more component than is necessary for operation
Continuous operation, even if failures occur to any major component:
– Global Memory Director boards
– Channel Director boards
– Disk Director boards
– Disk drives
– Communications Control Module
– Cooling Fan Modules
– Power modules
– Batteries
– Service Processor

Non-disruptive Microcode Upgrades and Loads

(Diagram: redundant channel adapters and cache cards, disks, power modules, and batteries)

Symmetrix Availability: Hardware Redundancy

The Symmetrix undergoes the most rigorous pre-ship testing in the industry. Component, environmental, and operational testing all but guarantee the elimination of defective or substandard components.

Non-disruptive Microcode Upgrades and Loads: Non-disruptive microcode upgrade and load capabilities are currently available for the Symmetrix. Symmetrix takes advantage of a multi-processing and redundant architecture to allow for hot loadability of similar microcode platforms. Within a code family, release levels can be non-disruptively loaded without interruption to user access.

During a non-disruptive microcode upgrade, the Product Support Engineer downloads the new microcode to the service processor. The new microcode loads into the EEPROM areas within the channel and disk directors and remains idle until requested for hot load in control storage. The Symmetrix system does not require manual intervention on the customer’s part to perform this function. All channel and disk directors remain in an online state to the host processor, thus maintaining application access. Symmetrix will load executable code at selected “windows of opportunity” within each director hardware resource until all directors have been loaded. Once the executable code is loaded, internal processing is synchronized and the new code becomes operational. This capability can be utilized to upgrade, or to back down from, a release level within a family.

NOTE: During a non-disruptive microcode load within a code family, the full microcode is loaded, which consists of the same base code plus additional patches that reside in the patch area.

Page 52: Symmetrix Foundations Mr 5wp Symmfd Wrapper

Copyright © 2004 EMC Corporation. All Rights Reserved.

Symmetrix Foundations, 49

EMC Global Education© 2004 EMC Corporation. All rights reserved. 4949

Advanced Availability: PowerPath

PowerPath from EMC is host-based software that supports multiple paths to a Symmetrix Volume
– Open Systems only (not needed for OS/390)
– GUI or CLI management capabilities
Symmetrix is configured so that volumes can be accessed through multiple directors/ports
Eliminates the HBA, cable, switch, and director as single points of failure
Load balancing across paths also improves performance

[Diagram: two Channel Directors, each with four processors, providing redundant host paths to CACHE]

While Channel Directors are redundant, it is important to remember that there is no automatic failover on the front end. EMC PowerPath, along with properly architected connectivity from hosts to storage, ensures continuous availability on the front end.

PowerPath is an Open Systems, host-based software application that allows UNIX and Windows hosts to have multiple paths to the same Symmetrix Logical Volume (a disk from the host’s perspective). For the highest availability, the physical connections from the HBAs should be made to separate Channel Directors that are located on different internal system busses. The easiest way to achieve this configuration is to ensure that one Channel Director is odd-numbered and one is even-numbered. This is not an issue with the DMX, in which all directors have a direct path to cache.

Important note: the more paths that exist to one Symmetrix Logical Volume, the more SCSI addresses are consumed within the Symm. Though PowerPath can accommodate up to 32 paths to one Logical Volume, this could quickly exhaust the available addresses, because of the 256-device maximum on any one FA port. For example, with 4 Symm ports and 400 volumes, it would be impossible to present all 400 volumes on all 4 ports (paths), since each port would then have to carry 400 addresses.
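The address budget described above reduces to simple arithmetic. The following sketch is a hypothetical illustration (the helper name and even-spread assumption are mine); the only figure taken from the text is the 256-device limit per FA port:

```python
# Hypothetical sketch of the FA-port address budget described above.
# Assumption (from the text): each FA port can present at most 256 devices.
MAX_DEVICES_PER_PORT = 256

def paths_supportable(num_ports, num_volumes, paths_per_volume):
    """Return True if every volume can be presented on the requested
    number of ports without exhausting any port's address space.

    Presenting a volume on a port consumes one SCSI address on that port,
    so a port can carry at most MAX_DEVICES_PER_PORT volumes.
    """
    if paths_per_volume > num_ports:
        return False  # cannot have more paths than ports
    # Spreading volumes evenly, each port carries this many addresses:
    addresses_per_port = num_volumes * paths_per_volume / num_ports
    return addresses_per_port <= MAX_DEVICES_PER_PORT

# 400 volumes on all 4 ports: 400 addresses per port, over the 256 limit
print(paths_supportable(num_ports=4, num_volumes=400, paths_per_volume=4))  # False
# 100 volumes on all 4 ports: 100 addresses per port, well within the limit
print(paths_supportable(num_ports=4, num_volumes=100, paths_per_volume=4))  # True
```

This is why, in practice, path count is traded against volume count when laying out FA ports.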


DMX: Dual-ported Disk and Redundant Directors

Directors are always configured in pairs to facilitate secondary paths to drives
Each disk module has two fully independent Fibre Channel ports
Each drive port connects to its Director by a separate loop
– Each port connects to a different Director in the Director pair
– Star-hub topology
– Port bypass cards prevent a drive failure or replacement from affecting the other drives on the loop
Directors have four primary loops for normal drive communication, and four secondary loops that provide an alternate path if the other director fails

[Diagram: Disk Director 1 and Disk Director 16, each connected to both ports of the dual-ported drives. P = Primary Connection to Drive; S = Secondary Connection for Redundancy]

Symmetrix DMX back-end employs an arbitrated loop design and dual-ported disk drives. Each drive connects to two Disk Directors through separate Fibre Channel loops. The loops are configured in a star-hub topology with gated hub ports and bypass switches that allow individual Fibre Channel disk drives to be dynamically inserted or removed.


[Diagram: DA 1 and DA 2, each with processor “b” and ports C and D, connected across the MIDPLANE. Solid line = Primary Path; Dotted line = Secondary Path]

Symm 5: Dual-Initiator Disk Director

Disk Directors are installed in pairs to facilitate secondary paths to drives
In the unlikely event of a disk director processor failure, the adjacent director continues servicing the attached drives through the secondary path
– In this example, DA1 processor “b” would see ports C and D of DA2 processor “b” as its A and B ports in a failover scenario
Protects against DA processor card failure
Physical drives are not dual-ported but are connected via a dual-initiator SCSI bus
Volumes are typically mirrored across directors

Symm 4 and 5 architectures utilize a dual-initiator back-end architecture that ensures continuous availability of data in the unlikely event of a Disk Director failure. This feature works by having two disk directors shadow each other’s function. That is, each disk director has the capability of servicing any or all of the disk devices of the disk director it is paired with. Under normal conditions, each disk director services its own disk devices. If Symmetrix detects a disk director hardware failure, Symmetrix “calls home” but continues to read from or write to the disk devices through the paired disk director. When the source of the failure is corrected, Symmetrix returns the I/O servicing of the two disk directors to their normal state. Note: On the 4.x family, dual-initiator operation is achieved by physically connecting one disk director’s port card to the port card of the adjacent disk director.
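The shadow-and-hand-back behavior described above can be sketched as a tiny pairing model. This is a hypothetical illustration only (the class and method names are mine, not Enginuity internals):

```python
# Hypothetical sketch of the dual-initiator pairing described above: each
# disk director normally services its own drives, and its partner takes
# over on failure and hands back after repair.
class DirectorPair:
    def __init__(self, a, b):
        self.partner = {a: b, b: a}
        self.failed = set()

    def servicing_director(self, owner):
        """Which director currently services the drives owned by `owner`."""
        if owner in self.failed:
            return self.partner[owner]  # partner shadows the failed director
        return owner

    def fail(self, director):
        self.failed.add(director)  # Symmetrix also "calls home" at this point

    def repair(self, director):
        self.failed.discard(director)  # I/O servicing returns to normal

pair = DirectorPair("DA1", "DA2")
assert pair.servicing_director("DA1") == "DA1"   # normal operation
pair.fail("DA1")
assert pair.servicing_director("DA1") == "DA2"   # partner shadows DA1's drives
pair.repair("DA1")
assert pair.servicing_director("DA1") == "DA1"   # servicing returns to normal
```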


Advanced Availability: Power Subsystem

Each Symmetrix has 3 power supplies and redundant batteries
– Symmetrix can connect to 2 external power sources (primary / auxiliary)
– Three AC/DC and three DC/DC power supply modules operate in a redundant parallel configuration
– While one battery is acting as the “primary standby”, the other battery is acting as the “secondary standby” (they periodically switch roles)
Power modules and batteries are FRUs and “hot swappable”
Batteries are periodically load tested to ensure their availability in the event of a main power system failure
Batteries power cache and all disks within the ICDA
– Upon detection of a main power failure, the Symm will continue to accept I/O from the host environment for 90 seconds
– If power is not re-established:
• The Symm will stop accepting I/O
• Destage all write-pending data to its actual location on disk
– The Symm then waits for the battery timer to run down before beginning the “graceful” shutdown process (spin down the drives and retract the heads)
– The Symm would be immediately available to hosts (no IML required) if power returns before the battery timer runs down

Power Subsystem: The Symmetrix has a modular power subsystem featuring a redundant architecture that facilitates field replacement without interruption. The Symmetrix power subsystem connects to two dedicated or isolated AC power lines. If AC power fails on one AC line, the power subsystem automatically switches to the other AC line. Three AC/DC power supply modules operate in a parallel configuration; these modules provide 56V power for the DC/DC power distribution system. If any single AC/DC power supply module fails, the remaining power supplies continue to share the load. The DC/DC modules provide 5V and 12V power to the various components in the Symmetrix unit.

System Battery Backup: The Symmetrix backup battery subsystem maintains power to the entire system if AC power is lost. The backup battery subsystem allows Symmetrix to remain online to the host system for three minutes in the event of an AC power loss, allowing the directors to flush cache write data to the disk devices. Symmetrix continually recharges the battery subsystem whenever it is under AC power. When a power failure occurs, power switches immediately to the backup battery and Symmetrix continues to operate normally. When the battery timer window elapses, Symmetrix presents a busy status to prevent the attached hosts from initiating any new I/O. The Symmetrix destages any write data still in cache to disk, spins down the disk devices, retracts the heads, and powers down.

Symmetrix Emergency Power Off: The Symmetrix emergency power off sequence allows 20 seconds to destage pending write data. When the EPO switch is set to off, Symmetrix immediately switches to battery backup and initiates writes of cache data. Data directed to mirrored pairs is written to only one device: the first available mirror device receives the data, and the other mirror device’s status is set to invalid. Data directed to non-mirrored volumes is written to the first available spare area on any devices available for write. The director records that there are pending write operations to complete, and stores the location of all data that has been temporarily redirected. When power is restored, all data is written to its proper volume and mirrored pairs are reestablished as part of the initial load sequence.
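The power-failure handling sequence above is essentially a small decision procedure. The following is a hedged sketch under the assumptions stated in the slide (the 90-second ride-through window and the ordering of steps come from the text; the event names and function structure are illustrative only):

```python
# Hypothetical sketch of the power-failure handling sequence described above.
RIDE_THROUGH_SECONDS = 90  # continue accepting host I/O on battery (from text)

def handle_power_failure(power_restored_after):
    """Return the sequence of actions the array would take, given the
    number of seconds until main power returns (float('inf') = never)."""
    actions = ["switch to battery"]
    if power_restored_after <= RIDE_THROUGH_SECONDS:
        # Power returned within the window: hosts never lose access, no IML.
        actions.append("resume on main power (no IML required)")
        return actions
    # Power not re-established within the window:
    actions += [
        "stop accepting host I/O",
        "destage all write-pending data from cache to disk",
        "wait for battery timer to run down",
        "graceful shutdown: spin down drives, retract heads",
    ]
    return actions

print(handle_power_failure(30))            # short glitch: resume immediately
print(handle_power_failure(float("inf")))  # extended outage: destage + shutdown
```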


Advanced Availability: Cache Protection

Why is cache not mirrored? An advanced method of cache protection allows for more usable cache, for optimal I/O performance
– Proven effective by the Symm’s install base
Minimum of 2 memory boards per Symmetrix (redundancy)
– Each board is connected to and accessed by multiple busses
– Each board has redundant power sources
Memory boards are comprised of multiple chips (chips are proactively monitored through I/O activity and “cache scrubbing”)
– Each chip has redundant paths (A and B port ASICs)
– Through Enginuity, each chip has a threshold for correctable errors
When the correctable error threshold is reached or a permanent (uncorrectable) error is detected:
– A Call-Home is initiated and the suspect area within cache is “fenced off”
– Any write-pending data is written to disk
– The board is non-disruptively replaced by a Customer Engineer
Data written to cache is rescanned against the data residing within the DA or Channel Director buffer to ensure correctness

Proactive Cache Maintenance: EMC makes every effort to provide the most highly reliable hardware in the industry, and provides unique methods for detecting and preventing failures in a proactive way. This sets it apart from all others in providing continuous data integrity and high availability. Symmetrix actively looks for “soft” errors before they become permanent, and then records them. By tracking these soft or temporary errors during normal operation, Symmetrix can recognize patterns of error activity and predict a hard failure before it occurs. This proactive error tracking can usually prevent an error in cache by generating a call-home for service or by fencing off a failing memory segment before any hard data errors occur.

Cache Scrubbing: All locations in cache are periodically read and rewritten to detect any increase in single-bit errors. This cache scrubbing technique maintains a record of errors for each memory segment. If the predetermined error threshold is reached for single-bit errors, the service processor generates a call-home for immediate attention. Constant cache scrubbing reduces the potential for multi-bit or hard errors. Should a multi-bit error be detected during the scrubbing process, it is considered a permanent error: the segment is immediately fenced (removed from service), the segment's contents are moved to another area in cache, the service processor call-home alerts EMC, Customer Service is immediately notified, and a customer engineer is dispatched with the appropriate parts for speedy repair. Even in cases where errors are occurring and are easily corrected, if they exceed a preset level the call-home is executed. This reflects the EMC Engineering philosophy of not accepting any level of probability for errors.
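The threshold-based policy above can be sketched in a few lines. This is a hypothetical model of the described behavior, not Enginuity internals; the class layout and the threshold value are assumptions:

```python
# Hypothetical sketch of the scrubbing policy described above: per-segment
# soft-error counts, fencing on a multi-bit (permanent) error or when
# correctable errors cross a preset threshold.
from collections import defaultdict

class CacheScrubber:
    def __init__(self, correctable_threshold=3):
        self.threshold = correctable_threshold
        self.soft_errors = defaultdict(int)  # segment -> correctable-error count
        self.fenced = set()

    def record_scrub_result(self, segment, error):
        """error is None, 'single-bit' (correctable), or 'multi-bit' (permanent)."""
        if segment in self.fenced:
            return "already fenced"
        if error is None:
            return "ok"
        if error == "multi-bit":
            # Permanent error: fence immediately and call home.
            self.fenced.add(segment)
            return "fence segment, move contents, call home"
        # Single-bit: corrected on the spot, but the pattern is tracked.
        self.soft_errors[segment] += 1
        if self.soft_errors[segment] >= self.threshold:
            self.fenced.add(segment)
            return "threshold reached: fence segment, call home"
        return "corrected"

scrubber = CacheScrubber()
print(scrubber.record_scrub_result(7, "single-bit"))  # corrected
print(scrubber.record_scrub_result(7, "single-bit"))  # corrected
print(scrubber.record_scrub_result(7, "single-bit"))  # threshold reached: ...
print(scrubber.record_scrub_result(9, "multi-bit"))   # fence segment, ...
```

The point of the design is proactive: a segment is taken out of service on a *pattern* of correctable errors, before an uncorrectable one can occur.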

On-line Maintenance: Every Symmetrix is configured with a minimum of two memory boards to allow for on-line hot replacement of a failing memory board.


[Diagram: four global memory regions, each with a Maintenance Processor and dual Port ASICs (A and B); each bus connection carries Data 64+8, Address 32+1, and Command 10+1 bits]

Advanced Availability: Cache Protection

Cache slots are protected using advanced error detection and correction logic along with data “interleaving”
– ECC is employed by every director to allow for single-bit and non-consecutive double-bit error detection and correction
– Data is sent between Directors and Cache as a 72-bit “memory word” (64 bits of data + 8 bits of parity)
– The Port ASIC on the Memory Board creates an 80-bit package (64 bits of data + 16 bits of parity) from the incoming “memory word”
– These 80 bits are interleaved amongst 20 different SDRAM chips (a memory bank)
– LRC (longitudinal redundancy check) is also employed to XOR accumulated 4KB sectors within a region
These factors enable single-nibble (4 consecutive bits) error correction and double-nibble error detection
– Resulting in the capability to withstand the failure of an entire SDRAM chip

Symmetrix assures the highest level of data integrity by checking data validity through the various levels of the data transfer in and out of Cache.

Byte-Level Parity Checking: All data and control paths have parity generating and checking circuitry that verify hardware integrity at the byte or word level. All data and command words passed on the system bus, and within each director and global memory board, include parity bits used to check integrity at each stage of the data transfer.

Error Checking and Correction (ECC): The directors detect and correct single-bit and non-consecutive double-bit errors, and report uncorrectable errors of 3 bits or more.

Sector-Level Longitudinal Redundancy Code (LRC): The LRC calculation further assures data integrity. The check bytes are the XOR (exclusive OR) value of the accumulated bytes in a 4KB sector. LRC checking can detect both data errors and wrong-block-access problems.
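The LRC check described above is a plain XOR accumulation. A minimal sketch (a tiny sector is used here for illustration; the text specifies 4 KB sectors):

```python
# Sketch of the sector-level LRC check described above: the check byte is
# the XOR of all bytes accumulated over a sector.
from functools import reduce

def lrc(data):
    """XOR-accumulate all bytes; any single corrupted byte changes the result."""
    return reduce(lambda acc, b: acc ^ b, data, 0)

sector = bytes([0x12, 0x34, 0x56, 0x78])
check = lrc(sector)

# Verification property: XOR of the data plus its check byte is zero.
assert lrc(sector + bytes([check])) == 0

# A corrupted byte is detected because the accumulated XOR no longer matches.
corrupted = bytes([0x12, 0x34, 0x57, 0x78])
print(lrc(corrupted) == check)  # False -> error detected
```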

Nibble-Level Interleaving: Data and storage locations are spread across multiple components to improve error detection and recovery. For example, each memory word and its associated ECC (80 bits) are stored in 20 separate DRAM chips. The failure of a single memory chip, the most common failure, is detected as a correctable error; the affected 4 consecutive bits (a nibble) can be rebuilt using the remaining healthy chips and the associated ECC.
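The interleaving itself can be sketched directly: 80 bits spread one nibble per chip across 20 chips, so a whole-chip failure corrupts exactly one nibble, which is within the correction capability described above. The real ECC is not modeled here; only the interleaving and error-localization property is shown:

```python
# Sketch of the nibble-level interleaving described above: an 80-bit word
# (64 data + 16 ECC bits) is spread 4 bits per chip across 20 chips.
NUM_CHIPS = 20
NIBBLE_BITS = 4  # 20 chips x 4 bits = 80 bits per memory word

def interleave(word80):
    """Split an 80-bit word into one 4-bit nibble per chip."""
    return [(word80 >> (NIBBLE_BITS * i)) & 0xF for i in range(NUM_CHIPS)]

def deinterleave(nibbles):
    """Reassemble the 80-bit word from the per-chip nibbles."""
    return sum(n << (NIBBLE_BITS * i) for i, n in enumerate(nibbles))

word = 0x1234_5678_9ABC_DEF0_1357  # an arbitrary 80-bit value
chips = interleave(word)
assert deinterleave(chips) == word  # round-trip is lossless

# Simulate the failure of one chip (the most common failure mode):
chips[5] = 0x0
damaged = deinterleave(chips)
diff = damaged ^ word
# The difference is confined to a single nibble position, which is exactly
# the error span the single-nibble ECC correction is designed to handle.
print(hex(diff))
```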


Symmetrix DMX Series

                        DMX800              DMX1000             DMX2000             DMX3000
Packaging               Modular             Integrated          Integrated          Integrated
Drives                  60 / 120            144                 288                 576
Capacity (raw)          8.75 / 17.5 TB      21 TB               42 TB               84 TB
Capacity (usable,       7.6 / 15.3 TB       18.4 TB             36.8 TB             73.5 TB
  parity 7+1)
Drive channels          8 / 16 x 2 Gb FC    16 x 2 Gb FC        32 x 2 Gb FC        64 x 2 Gb FC
Cache Directors         2                   2–4                 4–8                 4–8
Maximum cache           32 GB               64 GB               128 GB              128 GB
Connectivity            8/16 x 2 Gb FC      48 x 2 Gb FC        64 x 2 Gb FC        96 x 2 Gb FC
                        4 x 2 Gb FICON      48 x ESCON          64 x ESCON          96 x ESCON
                        4 x GigE SRDF       24 x 2 Gb FICON     32 x 2 Gb FICON     48 x 2 Gb FICON
                        4 x GigE iSCSI      8 x GigE SRDF       8 x GigE SRDF       8 x GigE SRDF
                                            24 x GigE iSCSI     32 x GigE iSCSI     48 x GigE iSCSI

(Connectivity combinations may be limited by board slots.)

These are the features of the DMX series.


Symmetrix Foundations Summary

Symmetrix basic architecture is comprised of three functional areas (Front End, Back End, and Shared Global Memory), connected by four internal system busses
Hosts connect to Symmetrix using SCSI, Fibre Channel, ESCON, FICON, and, today, iSCSI
All I/O must be serviced through cache (read hit, read miss, fast write, delayed write)
Symmetrix physical disk drives are divided into Hyper Volumes, which comprise Symmetrix Logical Volumes that are presented to the host environment as if they were entire physical drives
Mirroring, Parity RAID, SRDF, and Dynamic Sparing are all media protection options available on Symmetrix
Redundancy in the hardware design and intelligence through Enginuity allow Symmetrix to provide the highest levels of data availability

These are some of the main features of the Symmetrix. Please take a moment to read them.


Course Summary

Key points covered in this course:
– Draw and describe the basic architecture of a Symmetrix Integrated Cached Disk Array (ICDA)
– Write a detailed list of host connectivity options for Symmetrix
– Explain how Symmetrix functionally handles I/O requests from the host environment
– Illustrate the relationship between Symmetrix physical disk drives and Symmetrix Logical Volumes
– Describe the media protection options available on the Symmetrix
– Referencing a diagram, explain some of the high availability features of Symmetrix and how they potentially impact data availability
– Describe the front-end, back-end, cache, and physical drive configurations of various Symmetrix models

These are the main points covered in this training. Please take a moment to read them.


Enginuity 5670+ Update

Updates have been made to this course based on Enginuity code 5670+. This section includes new features supported by this code update.


Update Objectives

Upon completing this update, you will be able to list:
– Enginuity 5670+ Management Features
– Enginuity 5670+ Business Continuity Features
– Enginuity 5670+ Performance Features

Upon completion of this update, you will be able to list the features supported by Enginuity 5670+.


Management Features

5670+ Management Features
– End User Configuration
• User control of volumes and type
– Symm Purge
• Secure deletion method
– Logical Volumes
• Increased number of “hypers”
– Volume Expansion
• Striped meta expansion

End User Configuration: Enginuity v5670+ will allow users to un-map CKD volumes, delete CKD volumes, or convert CKD volumes to FBA. These user configuration controls simplify the task of reusing a Symmetrix by not requiring an EMC resource to modify the “bin” file.

Symm Purge: Provides customers a secure method of deleting (electronically shredding) sensitive data. This simplifies the reuse of drive assets.

Logical Volumes: v5670+ will support an increased number of hypers per spindle. The number of “hypers” will depend upon the protection scheme.

Volume Expansion: Previous microcode versions supported only the expansion of concatenated meta volumes. v5670+ supports the expansion of both striped and concatenated meta volumes.


Business Continuity Features

5670+ Business Continuity Features
– SRDF/A
• Multi-session support
– Protected Restore
• Enhanced restore features
– SNAP Persistence
• Preserves snap session

SRDF/A: Currently (v5670), SRDF/A can support only a single session. With v5670+ code, support will be available for multi-session SRDF/A data replication. Multi-session uses host control (Mainframe only). Cycle switching is synchronized between the single-session SRDF/A Symmetrix pairs.

Protected Restore: v5670+ provides Protected Restore features. While the restore is in progress, read-miss data comes from the BCV volume, writes to the Standard volume do not propagate to the BCV volume, and the original Standard-to-BCV volume relationship is maintained.

SNAP Persistence: v5670+ allows a protected snap restore and preserves the virtual snap session when the restore terminates.


Performance Features

5670+ Performance Features
– RAID 5
• Either (3+1) or (7+1) configurations in the same system
• Both Parity RAID and RAID 5 can exist in the same system on the same disks
• SRDF / BCV protection
– Optimizer for RAID 5
• Support for swapping individual members
• No support for Parity RAID

RAID 5 will be available in two configurations: either 3 data drives and 1 parity drive, or 7 data drives and 1 parity drive. Current limitations include:

– RAID 5 3+1 and 7+1 configurations cannot exist in the same frame. The same restrictions apply as for Parity RAID (mixing Parity RAID 3+1 and 7+1 in the same frame), as well as for any combination of 3+1 and 7+1 Parity RAID and RAID 5 configurations. For example, Parity RAID (3+1) is not supported with RAID 5 (7+1).
– A single Parity RAID protection scheme can be configured within a frame with any combination of SRDF, BCV, and mirroring protection.
– A single RAID 5 protection scheme can be configured within a frame with any combination of SRDF, BCV, and mirroring protection.
– A single Parity RAID protection scheme and RAID 5 of the same scheme can be configured within a frame with any combination of SRDF, BCV, and mirroring protection (for example, Parity RAID 3+1 is supported with RAID 5 3+1).

Optimizer: v5670+ will provide Optimizer support for RAID 5. The microcode will support the swapping of individual members of a RAID 5 group instead of swapping the entire RAID 5 group. Optimizer does not support Parity RAID.
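Both the 3+1 and 7+1 configurations rest on the same XOR parity property: the parity member is the XOR of the data members, so any single lost member can be rebuilt from the rest. A minimal sketch of that property (the helper names and the 4-byte "stripes" are illustrative, not Symmetrix internals):

```python
# Sketch of the XOR parity underlying the RAID 5 (3+1 and 7+1)
# configurations described above.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def make_parity(data_members):
    return xor_blocks(data_members)

def rebuild(surviving):
    """Rebuild the one missing member (data or parity) from all survivors."""
    return xor_blocks(surviving)

# A 3+1 group: three data members plus one parity member.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(data)

# Lose data member 1; rebuild it from the two survivors plus parity.
recovered = rebuild([data[0], data[2], parity])
assert recovered == b"BBBB"
print(recovered)
```

The same arithmetic scales to 7+1; the trade-off is capacity efficiency versus the number of members that must be read to rebuild.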


Summary

In this update, the following key points were discussed:
– Enginuity 5670+ Management Features
– Enginuity 5670+ Business Continuity Features
– Enginuity 5670+ Performance Features

For additional information: http://powerlink.emc.com

New Enginuity 5670+ features were covered in this Symmetrix Foundations module update. For additional information refer to http://powerlink.emc.com.


Thank you for your attention. This ends our Symmetrix Foundations training.