
Dell Storage Handbook

May 19, 2015

Page 1: Dell Storage Handbook

Storage Handbook

Your go-to for what’s now and what’s next in storage solutions.

“Flash makes it a software game now.” Bob Plankers, page 24

Page 2: Dell Storage Handbook


The power to transform your storage.

At Dell, we’re constantly building a new breed of data management solutions that intelligently manage and automatically store data in the right place at the right time for the right cost. That’s why our award-winning storage arrays have changed the status quo for thousands of customers around the world.

This handbook is your definitive reference for unique, thoughtful information on the current and future landscape of storage technologies.

Please feel free to contact us for more information on how we can help your business reach new levels of storage efficiency and agility—see Contact information and resources on page 25.

Page 3: Dell Storage Handbook

Executive summary: A storage state of the union by Bob Ganley, Senior Marketing Manager of Dell’s Storage Solutions Team. Page 4

Introduction to storage: A brief history of the evolution of storage technologies. Page 8

Current landscape: A look at what’s happening and what to consider in storage solutions. Page 10

The latest trends: An overview of the storage industry. Page 12

Case studies: Discover what Mazda, Navicure Inc., and other businesses are doing to tackle their storage needs with Dell solutions. Page 15

Storage buying decisions: Gain insight to drive innovative approaches for end-to-end solutions with examples of storage-intensive workloads. Page 19

Seven words on storage: The storage industry’s top influencers share their views on storage technologies. Page 22

Contact information and resources. Page 25

Page 4: Dell Storage Handbook

Executive summary

Page 5: Dell Storage Handbook

By Bob Ganley, Senior Manager, Dell Storage Solutions Team

What we’re trying to do for customers comes from where storage has been

Historically, storage has been about the physicality of the data.

When you think about data, you think about something like a customer database. That customer database is like an old filing cabinet that’s been put online: it’s physical. As people began implementing storage systems, the hardware was tangible, made up of spinning disks with gravity and mass.

For those of us in the technology sector, that has meant a few things. First, you really want to make sure you don’t lose the physical hardware that’s storing your valuable data. Also, you’d better make a copy of it, which will leave you with two pieces of hardware. Now that you have two copies, you probably want one of those copies to be in a different building in case the creek rises and a tree falls on your power line.

That’s where we’ve been.

Storage technology has been slow to change. Many vendors are still producing storage systems based on legacy architectures that literally tie a conceptual object (your volume) to a physical device (a disk drive). But thanks to virtualization, this conceptual and physical bond is loosening at a rapid pace.


Page 6: Dell Storage Handbook

This is a crucial step toward the next generation of storage technology. Essentially, if you can truly break the bond between the hardware and the managed object, you can have an environment that allows you to act as quickly as you need to, whether you need to stay ahead of a change in business conditions, or respond to a hardware failure.

That’s where we’re going.

Making boundaries frictionless

The implications of this tight bond between a conceptual object and a physical device are prominent in the way traditional storage has been purchased. Storage capacity was bought three, four, even five years out, and since it takes a long time to fill up large amounts of expensive storage, much of it has gone unused.

Once data is created, it may never be used again. Ninety-five percent of data sitting on storage is cold, yet it stays on the same storage where it was first placed. Because data can’t flow to the right storage and reach its own level (the way water reaches its own level), resource allocation is not optimized, and over time companies buy more and more of the same storage to put that data on. Wouldn’t it be a better idea to have older data automatically move to more affordable storage?

Dell has a system that uses information about the data itself (metadata) to quickly and easily determine which data has been accessed least recently or least frequently, and to move that cold data to a less expensive storage option. This system is called the Fluid Data Architecture.

The system has worked out quite nicely: as customers build out their systems over time, automated data tiering keeps placing less-demanding data on appropriate storage. This process ensures that data which doesn’t need fast drives and low latency remains easily available at a lower cost. Automated tiering translates into major benefits: applications run better, and dollars per terabyte (return on assets) improve over time.

Building a Fluid Data Architecture

Dell’s storage strategy is centered on virtualization. We have dramatically virtualized our storage infrastructure in such a way that data can easily be put in the right place for maximum performance and efficiency, without regard to its physical location. This is what we mean by Fluid Data.

The Fluid Data Architecture allows storage to be managed in a way that takes the burden off the administrator. It gains its efficiency through intelligence rather than extra labor.

Providing a more solid type of storage

Traditionally, two types of storage have existed: raw or block-based storage (database systems, etc.), provided by a SAN; and file-based storage (documents, videos, music, etc.), provided by a NAS. Now we have the capability of providing a single pool of storage where storage capacity is no longer directly linked to the service that’s provided.

With a pool of shared storage, companies can access data through the appropriate set of services on the front end, while storage self-manages on the back end (hot vs. cold data). This aligns with Dell’s strategy of maximum data efficiency for our customers and is driven by dynamic tiering. That efficiency is taken to a new level when both file and block data can be tiered automatically in a unified storage environment. This reduces operating, management and capital expenses by fostering a more efficient utilization of a pool of storage, without needing multiple skill sets to manage it.

Incorporating flash/hybrid storage

For a long time, storage was about hard drives. Engineering increased the speed of the drives and the density of the bits until it reached a point of diminishing returns, and over time performance increases in servers have greatly outpaced improvements in storage. In the last few years, major advancements have been made in using flash memory (a chip of non-volatile memory with no moving parts) to replace slower disks. These devices are known as solid-state drives (SSDs).

These chips are combined in a way that makes them look just like a hard drive, even though they are actually new media. SSDs yield dramatic improvements in speed, density and power consumption, which matters as data centers become more sensitive to power requirements. Eventually, all active data will move to flash or SSD storage; it is the only way forward to address the performance gap that has developed between servers and storage. However, not all data needs to be on flash, which is why dynamic tiering is so important to modern storage architecture.


Page 7: Dell Storage Handbook

Monitoring and analyzing storage growth

Users typically report roughly forty percent growth in their storage needs every year. This is obviously significant growth. What’s causing it?

Let’s say you’re frequently accessing an application like your customer database. Clearly you never want to lose that data, so you back it up, and that backup is supported by recovery points along the way. These recovery points are created with different techniques depending on how important the data is (for a financial transaction, you might want a recovery point objective of minutes or seconds; for email, hours might be acceptable). Maybe you complete a full backup once per week and set up a disaster recovery site with an extra copy of the data at a remote location. By the time the data is fully protected, you often have well more than two copies of it.

Growth in primary storage is accelerated by the copies needed for data protection. The size of primary data sets is multiplied by recovery points, whether near-line (snapshots and disk-based backups) or off-line (tape-based backups and archives). This means that managing copies of data has a direct effect on managing the growth of storage costs.
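To make that multiplication concrete, here is a minimal back-of-envelope sketch in Python. Every retention count and the snapshot change rate below are illustrative assumptions, not figures from this handbook or Dell sizing guidance.

```python
# Back-of-envelope only: how protection copies multiply primary capacity.
# Every retention figure below is an assumed example, not a recommendation.

primary_tb = 10.0            # size of one primary data set, in TB
snapshot_change_rate = 0.05  # each snapshot keeps ~5% changed blocks (assumption)
snapshots_retained = 24      # hourly snapshots kept for one day (assumption)
disk_backup_copies = 2       # full disk-based backup copies kept near-line (assumption)
replica_copies = 1           # one replica at a disaster recovery site (assumption)
tape_copies = 4              # weekly tape backups kept off-line (assumption)

near_line_tb = primary_tb * (snapshots_retained * snapshot_change_rate + disk_backup_copies)
off_line_tb = primary_tb * tape_copies
replica_tb = primary_tb * replica_copies

total_tb = primary_tb + near_line_tb + off_line_tb + replica_tb
print(f"Primary: {primary_tb:.0f} TB, total footprint with copies: {total_tb:.0f} TB")
# With these assumptions, 10 TB of primary data drives roughly 92 TB of total capacity.
```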

To that end, data protection strategy and operations are a strong solutions focus for Dell. We’re helping businesses become more efficient and cost-effective with primary storage by developing advanced snapshot, backup and disaster recovery techniques in a way that balances the cost of protection with the value of the data.

Working better together: Looking at storage as part of the whole system

Improving upon storage technology is ideal, but it’s important to remember that storage is part of a system. You have storage because you need to access data and use it towards some end goal. This touches on the notion of converged infrastructure where we’re looking at server, storage and compute together.

Another strategy central to how Dell thinks about storage is ensuring that each piece in the system works better together. We’re working on moving data closer to the processor, improving performance while still preserving the ability to manage and protect the data in a familiar shared storage model. We especially see this convergence happening at the engineering level, where the focus is on taking advantage of synergies between related components. We’re investing in optimizing information technology infrastructure as a system: servers, networks, storage, software and services.


Page 8: Dell Storage Handbook

Introduction to storage

Page 9: Dell Storage Handbook

The history and evolution of storage technologies

Digital storage has been around since the beginning of computing. For most of that history, non-volatile digital storage (data that is retained even when the power is turned off) took the form of read/writable magnetic media. The first generation was tape, which required sequential access to the data. Tape was slowly replaced by hard disk drives, which provided direct access to data through spinning media with moving mechanical read heads.

During the last forty years, increases in the performance of logic processing in computer hardware have consistently outpaced performance gains in digital storage, creating a performance gap. In the last few years, a new entrant in digital storage technology called solid-state storage has narrowed that gap. Solid-state storage retains the non-volatile nature of disk storage and features direct data access, but it has no moving parts. This lack of moving parts increases access speeds, improves reliability and reduces power consumption.

One aspect of digital storage that has been slow to change is the relationship between the physical media where data is stored and the logical storage objects that represent managed information. Information to be managed (for example, a customer database) is stored in a volume, and that volume is instantiated as a disk or a collection of disks. This close relationship between the physical storage and the logical storage object creates challenges.

In the last few years, the rise of server virtualization has delivered huge benefits as the traditional tie between a physical server and the application running on it has been abstracted away. Multiple virtual servers can now be consolidated onto a single physical server to create efficiencies, and virtual servers can easily be moved between physical servers for load balancing and high availability. As a result, the “friction” between workloads and servers has been dramatically reduced. Storage virtualization as a term has been around for a while, but from a practical perspective most storage systems on the market today have not reached the frictionless state achieved by server virtualization.

Dell recognized this challenge a few years ago and has taken storage virtualization to a new level with the Fluid Data Architecture. This has resulted in tremendous benefits for our customers through dramatic increases in efficiency, concrete improvements in the ability of information technology to respond to evolving requirements, and the protection of digital information assets.


Page 10: Dell Storage Handbook

Current landscape

Page 11: Dell Storage Handbook

Industry analysts peg storage capacity growth rates at forty percent per year. Most organizations today are wasting storage capacity, with average utilization rates hovering around sixty percent. This waste is due to antiquated approaches to purchasing and managing storage, and it is perpetuated by architectures that place inflexible limits on growth and hinder free movement of data to unused drives. But what if you could recapture that wasted space?

Most data is accessed infrequently once it is created, yet organizations store most data on one or maybe two tiers of storage. This is because finding and moving old, cold data is a labor-intensive and disruptive process. If ninety-five percent of your data is cold, why not have that data automatically moved to cheap and deep storage by a non-disruptive background task?

Storage is a critical link in establishing and maintaining acceptable application performance. Analysis shows that a small percentage of data truly needs low-latency, high-performing storage to remove the storage bottleneck. Determining which data needs that performance, moving that data to the right storage and maintaining the right distribution over time is a complex task. What if your storage system could do that automatically with no manual intervention?

There are typically two types of storage in use: file and block. Unstructured data in the form of files is stored on a NAS or filer and now represents over two-thirds of storage capacity. Block storage for structured data lives on a SAN, where it can be properly managed and protected. These disparate approaches result in islands of capacity with separate purchase cycles and management tasks. Wouldn’t it be better to have a single pool of managed storage capacity that can efficiently provide the repository for file and block data as needed?

Data protection and recovery requires creating recovery points, replicas and backups to prevent data loss and mitigate disaster scenarios. These copies of production data contribute to storage growth, and the process of creating them weighs on application performance. Fifty percent of organizations now struggle with meeting backup windows. Forty percent of organizations have more than one backup approach. How can you manage backup data growth? How can you streamline the creation of recovery points to meet rising service level expectations?

Data migrations are disruptive and costly. Many storage systems are replaced with a “forklift” every three years, requiring the purchase of new hardware, the re-purchase of software licenses and painful migrations. What if you could accommodate growth while preserving your investment in hardware and software?

Read on to find out how Dell is tackling the latest challenges in storage solutions.


Page 12: Dell Storage Handbook

The latest trends

Page 13: Dell Storage Handbook

Storage growth

Storage growth is a fact of life. Organizations produce and consume data at an increasing rate. Core business processes rely on digital data, and more data is being collected and stored as organizations realize the potential value of collecting and analyzing all manner of information, from daily office communications to the output constantly flowing from instruments and sensors. When the volume of that data is amplified by copies made for protection and recovery, the trend can seem overwhelming. How can organizations keep up when storage growth outstrips budget growth by a factor of ten or more?

Consolidation presents clear opportunities for managing storage growth. Organizations have multiple repositories for data, and each repository must have some spare capacity to accommodate future growth. As the number of repositories multiplies, that spare capacity adds up to wasted capacity and inefficient storage utilization. Consolidating these repositories presents the opportunity to combine that excess capacity for use in production, resulting in an increase in storage utilization.

Separate systems for file and block storage also result in inefficient utilization. Unified storage solutions allow a single pool of storage to be allocated across either block or file protocols, which creates utilization efficiencies that can reduce storage over-provisioning.

Deduplication and compression are two related techniques that help mitigate the impact of storage growth, and backup storage is a prime application for them. Backup storage frequently contains multiple copies of the same data, because successive recovery points repeat the blocks that have not changed between backups.
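As a rough illustration of why backup streams deduplicate so well, the sketch below uses a deliberately simplified, assumed model: data is split into fixed-size blocks and each unique block is stored only once, keyed by a content hash. It is not a description of any Dell product’s implementation.

```python
import hashlib

def dedup(stream: bytes, store: dict, block_size: int = 4096) -> list:
    """Toy block-level deduplication: keep each unique block once, keyed by its hash."""
    recipe = []                                  # hashes needed to rebuild this stream
    for i in range(0, len(stream), block_size):
        block = stream[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)          # unchanged blocks are not stored again
        recipe.append(digest)
    return recipe

# Two nightly "backups" of the same 40 KB data set; only one 4 KB block changed.
backup1 = b"A" * 40_960
backup2 = b"A" * 36_864 + b"B" * 4_096

store = {}
dedup(backup1, store)
dedup(backup2, store)
raw_bytes = len(backup1) + len(backup2)
stored_bytes = sum(len(b) for b in store.values())
print(f"raw backup data: {raw_bytes} bytes, unique blocks actually stored: {stored_bytes} bytes")
```

Production backup appliances layer compression and more sophisticated chunking on top of this idea, so real-world reduction ratios vary by workload.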

Flash storage

The writing is on the wall: NAND flash-based non-volatile memory (NVM) storage, in the form of solid-state drives (SSDs) and solid-state cache cards, is poised to dominate the future of storage for active data. The very low latencies and very high transaction rates of solid-state storage provide the potential to close the performance gap between servers and storage. And because most data is old and cold, the active data that needs this performance represents only a small portion of total storage capacity.

Automated data tiering moves hot data to the highest-performing storage without manual intervention. The storage system tracks usage patterns to determine how often each block of data is accessed. Frequently accessed data is moved to high-performance storage, while cold blocks of data are moved down to more cost-effective storage. This movement happens without the intervention of storage administrators. Automated data tiering makes it possible to accelerate workload performance with a small amount of flash storage, because it moves to flash only the specific portion of data that requires high-performance storage.
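A highly simplified sketch of that placement decision appears below. The tier names, thresholds and per-block access counters are assumptions for illustration only; they are not Dell’s Data Progression logic.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class TieringEngine:
    """Toy automated tiering: place each block on a tier based on recent access counts."""
    access_counts: Counter = field(default_factory=Counter)
    placement: dict = field(default_factory=dict)    # block id -> tier name

    def record_access(self, block_id: int) -> None:
        self.access_counts[block_id] += 1

    def rebalance(self, hot_threshold: int = 100, warm_threshold: int = 10) -> None:
        """Background task: promote hot blocks, demote cold ones (thresholds assumed)."""
        for block_id, count in self.access_counts.items():
            if count >= hot_threshold:
                self.placement[block_id] = "ssd"
            elif count >= warm_threshold:
                self.placement[block_id] = "15k_sas"
            else:
                self.placement[block_id] = "nearline_sas"
        self.access_counts.clear()                   # start a fresh observation window

engine = TieringEngine()
for _ in range(500):
    engine.record_access(block_id=1)                 # heavily accessed, "hot" block
engine.record_access(block_id=2)                     # rarely accessed, "cold" block
engine.rebalance()
print(engine.placement)                              # {1: 'ssd', 2: 'nearline_sas'}
```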

Servers and storage converge

As organizations create the next-generation architecture for their information technology, some trends begin to emerge. One clear trend is that optimal performance for critical workloads requires close coordination between storage and server components. One solution to the performance problem is to use NVM for caching of disk input/output (I/O): as the operating system calls for disk I/O, the data is read into the cache and kept there until it is overwritten. NVM is almost always used as a read-only cache, because write caching risks data loss; some transactions might not have been written to disk at the moment of an interruption, leaving the disk storage in an inconsistent state. A write-through cache avoids that risk, but it must wait for acknowledgement from the back-end storage (whether DAS or shared), so it provides no write performance advantage.

One improvement is to provide a write-consistent cache with data protection. This provides the ability to accelerate reads as well as writes while preventing data loss in the event the cache card fails. The next step in this technology is to integrate that capability with shared storage, extending the data protection and management benefits of a storage area network (SAN) to the data in the cache and essentially making server-attached flash a managed tier in the storage infrastructure. This development will blur the lines between server memory and storage.
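The reason a plain write-through read cache accelerates reads but not writes can be sketched roughly as follows. This is a toy model with hypothetical names; real cache cards handle persistence, consistency and failover in hardware and firmware.

```python
class WriteThroughReadCache:
    """Toy model of an NVM read cache in front of a volume: reads may be served
    from cache, but every write is committed to the backing store before it is
    acknowledged, so a cache failure can never lose acknowledged data."""

    def __init__(self, backend: dict):
        self.backend = backend      # stands in for the DAS or SAN volume
        self.cache = {}             # stands in for the server-side flash cache

    def read(self, key):
        if key not in self.cache:               # miss: fetch from the slower backend
            self.cache[key] = self.backend[key]
        return self.cache[key]                  # hit: served at flash latency

    def write(self, key, value):
        self.backend[key] = value               # write-through: backend first...
        self.cache[key] = value                 # ...then refresh the cached copy
        # Only now is the write acknowledged, so writes gain no acceleration.

volume = {"block0": b"old contents"}
cache = WriteThroughReadCache(volume)
cache.write("block0", b"new contents")
assert volume["block0"] == b"new contents"      # durable before acknowledgement
print(cache.read("block0"))
```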


Page 14: Dell Storage Handbook


Another area where storage and servers are coming together is in highly dense compute environments like blade enclosures. These solutions combine high-speed connectivity in the form of the backplane of a blade enclosure with the compute density of blade servers and bladed shared storage. This requires a high level of engineering sophistication and integration testing to ensure a complete solution which can maximize performance and efficiency within the high density of a blade enclosure.

Cloud-based architectures are a driving force for this type of convergence. Cloud computing environments promise to benefit enterprises in many different ways, including reduced capital costs through standardized building blocks, reduced operating costs through integrated management, and increased business agility through automated service delivery and rapid provisioning. To enable truly elastic cloud infrastructure, organizations must abandon the practice of custom-configuring each new virtualized environment. This shift is enabled by the adoption of standardized infrastructure building blocks which contain predefined sets of servers, storage, and networking that provide a desired level of service. These building blocks standardize virtualized infrastructure, reducing the time and effort involved to scale out the capacity of the cloud, and simplify the process of managing that infrastructure once it is deployed.

Storage value moves to software

Storage virtualization relies on moving value “up the stack,” from the storage hardware itself to software that abstracts the details of the storage implementation and focuses on managing workloads across a pool of storage and compute. As more and more organizations are able to leverage enterprise-class storage components, the question becomes: where is the value-add in enterprise NAS/SAN solutions? The answer is increasingly in software.

The trend toward “software-defined” networking, data centers and storage has been picking up momentum. It is important to understand that this trend has received a lukewarm reception from major storage vendors because of the possibility that storage hardware may become democratized in the process. This knee-jerk reaction on the part of storage-only solution vendors ignores the hard reality that the lines between software and hardware are blurring. Companies like Dell are embracing the concept because highly virtualized storage is the future.

Network design becomes pivotal

Several trends are driving the importance of storage network design. Higher-powered servers place increasingly large demands on the storage network. Virtualization has driven higher levels of consolidation, and unpredictable workload peaks can combine to overload storage networks. Higher-performing storage, including SSD technology, is increasing the throughput needed in the storage network.

The high end of Fibre Channel storage networking has doubled in speed with the introduction of 16Gb FC network components, while 10GbE is seeing widespread adoption. To fully realize the benefits of improved network bandwidth, customers need an end-to-end solution involving servers, networking and storage that is designed to optimize performance.


Page 15: Dell Storage Handbook

Case studies

Page 16: Dell Storage Handbook

Accelerating and protecting critical workloads

Mazda North American Operations was experiencing unacceptable performance with its core ERP applications and suffering from very long backup windows, putting its crucial information assets at risk of data loss. Mazda chose to implement a virtual environment with Dell Compellent storage with an SSD tier. Because Dell servers are specifically geared to handle virtualization, the Mazda infrastructure department was confident in its ability to transition to a virtual IT department. “Even minimal downtime means being separated from critical cash flow,” said Jim DiMarzio, CIO at Mazda North American Operations. “So we picked the most reliable system available to support our virtualization efforts—Dell servers fit the virtual environment one-hundred percent.”

With an end-to-end virtualized server and storage architecture in place, powered by the Compellent SAN, the Mazda infrastructure services department has been able to substantially boost application performance. “We are now enjoying performance gains anywhere from eighty to four-hundred percent,” said Kai Sookwongse, Manager of Infrastructure Services at Mazda North American Operations. “Critical applications like SAP actually run better than on physical servers.”

Additionally, Mazda has reduced full backups from 16 hours to 6 hours and their new setup now takes a complete system snapshot—including databases—in 30 seconds.

“Dell Compellent storage gave us the performance we needed to enter the virtual computing space and establish best practices. Our business units are stunned by the increase in application speed we have been able to deliver,” said Sookwongse.

Rapid response to changing business needs

Navicure, Inc. is a leading Internet-based medical claims clearinghouse with a need to store vast amounts of data. The company’s claims-processing platform relies on Oracle Real Application Clusters (RAC) 11g database technology on Oracle Solaris-based servers. It first launched the platform using outsourced Fibre Channel storage, but the solution couldn’t expand quickly or cost-effectively enough to meet Navicure’s needs. “We estimated at the time that adding three or four terabytes of usable redundant storage would cost us a quarter of a million dollars over the contract period,” said Donald Wilkins, Navicure’s IT director.

Navicure replaced the outsourced solution by deploying Dell EqualLogic PS Series storage arrays on premises. “We had our first EqualLogic SAN up and running within thirty minutes,” Wilkins reports. “Other storage vendors told us we would need to attend three or four days of classes to set up and use their systems, but we completely familiarized ourselves with the Dell EqualLogic PS series array in a very short time, without training, as its user interface is very straightforward.”


Page 17: Dell Storage Handbook

The result is an agile and cost-effective storage infrastructure. “We’re continually adjusting our IT plans to accommodate growth projections for the business,” Wilkins said. “We’ve been able to grow our environment as our customer base grows by using highly scalable Dell EqualLogic storage. We don’t have to deal so much with forklift upgrades like we would with a traditional, frame-based SAN. As we add new arrays, we might move some of the older products down the line, from Tier 1 to Tier 2 or to our disaster recovery site. But the Dell EqualLogic arrays are never really outdated; we can update their firmware and keep them in the pool. The first model we bought seven years ago is still in service. In fact, we have one of almost every model that EqualLogic has ever produced, and they’re all running side-by-side.”

Taming explosive file growth

If a picture is worth a thousand words, a video is worth a million. That makes social video a hot market, and one that Toronto-based startup Keek Inc. is poised to conquer. Keek is already an active social video community, allowing users to post videos and text comments and to share video updates via Twitter, Facebook and other networks, all at once. But it’s Keek’s mobile app that’s causing an explosion in the company’s growth. Users can upload video status updates (called “keeks”) using the Keek app for Android and iPhone. Storage growth is a big deal for Dell EqualLogic user Jeremy Wilson, Keek’s Chief Technology Officer. Keek is looking at growth of 40TB per month in video storage alone, and that doesn’t count the capacity expended on supporting two billion page views, 100 million monthly visits and 18 million monthly unique visitors. It all requires a storage solution that doesn’t tolerate downtime and accommodates massive growth, especially since an additional 200,000 new users are joining Keek each day. “Growth is exponential,” said Wilson. “Data is doubling every month. In fact, we’ve doubled the size of the storage system since we initially installed it in August of 2012.”

To fulfill his needs for a scalable and downtime-resistant storage system, Wilson and his IT crew installed Dell EqualLogic FS7500 Unified Storage Solutions and Dell EqualLogic PS6500E iSCSI SAN disk arrays. The EqualLogic FS7500 sits in front of the PS6500E arrays, serving as a Network File System (NFS) front end for them.

With this system, Keek Inc. has been able to absorb a 300% increase in user base in one month without any slowdown in uploading videos or any file contention. “Dell designed a storage solution that would scale non-disruptively, without downtime,” says Wilson.

Accelerating virtual desktops

Northwest Mississippi Community College decided to implement a virtual desktop initiative, rolling out 48 virtualized desktops. The college chose the Dell EqualLogic PS6000XVS hybrid SAN, which contains both spinning media and NAND flash-based solid-state drives. The PS6000XVS SAN intelligently tiers workloads between the SSDs and the lower-cost 15K SAS drives. “The ability to distinguish between data that is in high demand versus less important data saves us the cost of an all-SSD SAN,” said Michael Lamar, network technician at Northwest Mississippi. The result has made a significant impact on users’ experience. “We cut login times from 74 seconds to 54 seconds, which is twenty-six percent less time users have to spend waiting for their work session to start,” said Lamar. “This was all based on moving to the hybrid SAN.”

Simplified performance tuning

Another current example involves the databases that underpin many critical applications. Nelnet, Inc. provides loan processing outsourcing services. To maintain high performance for those applications, Nelnet decided to implement a Dell Compellent SAN with an SSD tier. Compellent features intelligent automated tiering called Data Progression. “We only allow our main reporting server, a Dell PowerEdge R710 server running Microsoft SQL Server® 2008, to access our two terabytes of Tier 1 solid-state drives (SSD),” said Ryan Regnier, IT Manager of Operational Engineering at Nelnet. “But because of Data Progression, most of that data is actually sitting on Tier 2, which is 15K SAS. We’re not paying to have all of that data sitting on SSD, and we still get the performance benefit.”

Modernized data protection

One other recent initiative designed to streamline systems administration was upgrading data protection systems for Haggar. The company formerly used an Overland Storage REO virtual tape library (VTL). “We would store backup data on the VTL for one day, after which we would move it to tape,” said Matt Collins, Haggar’s Senior Network Administrator. “This made data restores challenging. If a user needed a file that was accidentally deleted two days earlier, we would have to travel offsite, look through a dozen tapes to find the right one, bring it back, load it up, find the right point in time on the tape and restore the file. The process took hours or even days.”


Page 18: Dell Storage Handbook


As the VTL approached end of life, Haggar planned to upgrade to a newer model. Then the Dell DR4000 Disk Backup Appliance caught the eye of Brad Coleman, Haggar’s infrastructure director. “The Dell DR4000 deduplicates data before running backups,” he said. “We really liked the idea of compacting our backups into less space and keeping more backup data locally, in an easily accessible format.” Another appealing feature was the unit’s use of Rapid Data Access for fast data recovery.

Haggar implemented a Dell DR4000; CommVault Simpana runs backups to the appliance. “We’ve reduced the amount of data in our daily backups by over eighty-five percent, thanks to the compression and deduplication technologies in the Dell DR4000,” Collins said. “We now retain data for 30 days before offloading to our Dell PowerVault TL2000 Tape Library, and we can restore any of that information in seconds. Even though we’re retaining data locally 30 times longer, we’re using only forty-eight percent of the total capacity on the DR4000. This solution gives us a lot of room to grow, and we can restore data up to seventy-five percent quicker than we could with our old VTL solution.”


Page 19: Dell Storage Handbook

Storage buying decisions

Page 20: Dell Storage Handbook

More and more organizations are trying to get out of the mode of purchasing point products for their IT needs and are instead focusing on workloads as the design point for system architectures, making storage buying decisions more project-based. Here are some examples of storage-intensive workloads, all of which are driving innovative approaches to end-to-end solutions.

Transactional systems

Many common applications are transactional in nature, for example web-based applications such as an online store or payroll processing applications. These types of workloads generate lots of small storage I/O requests; Microsoft Exchange is another example of an application that can produce a large number of them. If the response to this flood of I/O requests slows down, application responsiveness suffers.

With competitors just “one click away” and executives relying on rapid access to online data, negative experiences with application performance can have a serious impact on results. Database systems like SQL Server, Oracle and MySQL often underlie these types of systems. Understanding the intersection between transactional workloads and storage systems is the first step to designing a system that can withstand lots of I/O requests and provide the right service levels.
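As a back-of-envelope illustration of why small random I/O shapes storage design, the sketch below estimates how many devices a transactional workload might need to meet an IOPS target. The per-device figures and the mirrored-write penalty are rough, assumed ballpark numbers, not Dell sizing guidance.

```python
# Rough, assumed per-device random-I/O figures (illustrative ballpark values only).
DEVICE_IOPS = {"7.2K NL-SAS": 80, "15K SAS": 180, "SSD": 20_000}

def drives_needed(target_iops: int, read_fraction: float, mirror_write_penalty: int = 2) -> dict:
    """Estimate device counts for a random-I/O workload under a simple mirrored-RAID model."""
    # Each host write costs extra back-end operations (2x for mirroring in this toy model).
    backend_iops = target_iops * (read_fraction + (1 - read_fraction) * mirror_write_penalty)
    return {device: int(-(-backend_iops // iops))      # ceiling division
            for device, iops in DEVICE_IOPS.items()}

# Example: an OLTP-style workload of 12,000 IOPS with a 70/30 read/write mix.
print(drives_needed(12_000, read_fraction=0.7))
# -> roughly {'7.2K NL-SAS': 195, '15K SAS': 87, 'SSD': 1} devices (before capacity sizing)
```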

Page 21: Dell Storage Handbook

Decision support systems

Most organizations are striving to mine the data they store to make better decisions. Business intelligence, data warehousing, online analytical processing and related systems present a very different challenge for systems design. These types of workloads tend to produce requests for large blocks of data to be read sequentially, which shifts the design focus toward higher-throughput storage networks.

Server virtualization and consolidation

Most organizations are consolidating servers using virtualization. Before server virtualization, characterizing the I/O stream from a server in order to optimize storage performance was less complex: when a single server ran one workload, the I/O could be optimized using techniques like caching and serialization. With virtualization, the I/O requests of many workloads are interleaved, without optimization, across multiple VMs. This creates a highly randomized I/O stream that some refer to as the “I/O blender,” and a new level of challenge in architecting the end-to-end solution. Integration across the stack of servers, networks, storage and software in a virtualized environment can have a dramatic impact on performance and reliability.
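The effect can be illustrated with a toy simulation; the stream sizes and the random interleave are assumptions, and real hypervisor schedulers are far more complex. Each VM issues a perfectly sequential stream of logical block addresses, yet the merged stream the array sees is effectively random.

```python
import random

# Toy model: four VMs each read their own region of the volume strictly sequentially.
vm_streams = {
    vm: iter(range(vm * 10_000, vm * 10_000 + 1_000))   # sequential LBAs per VM
    for vm in range(4)
}

merged = []
active = list(vm_streams)
while active:
    vm = random.choice(active)          # the hypervisor interleaves whichever VM is ready
    try:
        merged.append(next(vm_streams[vm]))
    except StopIteration:
        active.remove(vm)

# Each VM's own requests were sequential, but consecutive requests in the merged
# stream usually jump between distant regions of the volume: the "I/O blender".
jumps = sum(1 for a, b in zip(merged, merged[1:]) if abs(b - a) != 1)
print(f"{jumps} of {len(merged) - 1} consecutive requests are non-sequential")
```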

High-performance computing

Scientific computing uses mathematical models and computer simulations to solve scientific problems. These simulations require large data sets to be read into a processor for number crunching. Imaging applications like picture archiving and communication system (PACS) in the medical world also generate large data transfers between servers and storage. Understanding the I/O profile of this type of application is crucial for designing the right combination of servers, networks and storage for HPC.

Cloud

Cloud computing holds the promise of rapid provisioning and deprovisioning of blocks of compute, network and storage resources to provide efficient and agile infrastructure. A key aspect of this type of flexibility is providing the ability to specify different service levels for the key components. With this type of environment, it is crucial to have tight integration between the management of hypervisors, servers, networks and storage that provides ease of configurability.

Mobile, BYOD and VDI

Most organizations are pursuing initiatives to provide their employees with more flexible access to enterprise tools through the device of choice. These initiatives put a lot of pressure on IT infrastructure for several reasons. They move data storage from the desktop to the data center, creating growing demands for centralized storage. They depend on consistent network connectivity to provide the right responsiveness. They can create bursts of activity which must be planned for when sizing components for performance.

The next generation of business will be built around a mobile device interface, whether desktop, tablet or phone. Flexible integration of the infrastructure components will enable a successful transition to this mobile future.


Page 22: Dell Storage Handbook

Seven words on storage

Page 23: Dell Storage Handbook

We sat down with the who’s who of storage solutions and challenged them to share their views on the current and future landscape of storage technologies, in only seven words. Here’s what they had to say.


Luigi Danakos, CEO, Blurt Media Group. Twitter: @NerdBlurt

Bruno José Ramalho e Sousa, Corporate IT Architect. LinkedIn: linkedin.com/in/bjsousa/

Page 24: Dell Storage Handbook


Roger Lund, Sr. Systems Administrator; Virtualization and Storage Evangelist (Dell Compellent, NetApp, EMC VNX, VNXe)

Bob Plankers, Virtualization & Cloud Architect

Barry Coombs, Blogger / Technical Architect Manager. Blog: virtualisedreality.com

Page 25: Dell Storage Handbook

Contact information and resources

Link to Dell sites and content
Dell Storage Home
Dell Storage TechCenter Page
Tech Page One
Inside Enterprise IT Blog

Link to an extended conversation on storage through social channels
Dell Storage Facebook
Dell Storage Twitter
Dell EqualLogic Twitter
Dell Compellent Twitter

Events
Dell Storage Resources and Events
Dell Enterprise Forum Facebook
Dell Enterprise Forum Twitter
The IT Summit
TechTarget Storage Decisions
Fortune: Brainstorm Tech

Link to a few Dell experts for more information

Jason Boche, Technical Marketing Consultant. Twitter: @jasonboche. LinkedIn: linkedin.com/in/jasonboche

Lance Boley, Storage Evangelist, Dell TechCenter. Twitter: @LanceBoley. LinkedIn: linkedin.com/in/lanceboley/. Blog: lanceboley.com/

Bob Ganley, Senior Marketing Manager, Dell’s Storage Solutions Team. Twitter: @GanleyBob. LinkedIn: linkedin.com/pub/bob-ganley/0/577/968

Andy Hardy, EMEA Storage Sales Director. Twitter: @andyhardy. LinkedIn: linkedin.com/in/andyhardy

Dylan Locsin, Product Manager, Dell EqualLogic Storage. LinkedIn: linkedin.com/pub/dylan-locsin/0/961/62

Will Urban, Technical Marketing Engineer. Twitter: @virtwillu. LinkedIn: linkedin.com/pub/william-urban/5/b6/ba4

Travis Vigil, Executive Director, Dell Storage Product Marketing. LinkedIn: linkedin.com/pub/travis-vigil/0/ba8/a48
