Filing Information: April 2008, IDC #211938, Volume: 1
Enterprise Virtualization Software: Industry Developments and Models
INDUSTRY DEVELOPMENTS AND MODELS
The Future of Virtualization: Leveraging Mobility to Move Beyond Consolidation
John Humphreys
IDC OPINION
The server virtualization marketplace has been evolving rapidly over the past few years, and IDC has seen customer attitudes toward virtualization mature just as quickly. As customers gain familiarity with the technology and as the technology matures, organizations are leveraging virtualization to solve far more than their server consolidation challenges. Increasingly, end users are applying virtualization to disaster recovery, high availability, and remote client computing and, ultimately, to managing the delivery of business applications to end users. These emerging use cases are the focus of this study and are predicated on three key attributes of virtualization software:
• Application isolation. Applications can be encapsulated in individual virtual machines and isolated from other applications residing on the same host. This maintains the one server, one application paradigm while still making full use of the hardware, and it avoids the application regression testing that must occur in a shared OS environment. The application isolation attribute is leveraged both in consolidating servers and in consolidating desktops onto servers running in the datacenter (so-called VDI).
• Virtual machines are files. As such, virtual machines can be copied, backed up, replicated, and moved like files. This in turn enables unique, easier, lower-cost business continuity practices and, as a result, allows customers to protect a greater percentage of their assets and thus limit the cost of downtime and lost revenue associated with IT outages.
• Live migration. The ability to move a live, running application from one host to another enables virtual machines to move without any application downtime. Today, the capability is largely used as a tool to address planned downtime and, increasingly, for capacity planning and load balancing across a pool of server resources. Longer term, by pairing live migration with application monitoring technology, customers will be able to manage the quality of service for entire business services, whether those services are delivered via an SOA or a traditional three-tiered architecture. (A minimal sketch of the live migration call pattern follows this list.)
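To make the live migration attribute concrete, the following is a minimal sketch using the open source libvirt API, which exposes live migration on hypervisors such as Xen and KVM. The host and guest names are hypothetical, and a production script would add error handling and shared storage checks; this illustrates the call pattern, not any particular vendor's implementation.

    import libvirt

    # Connect to the source and destination hosts (hypothetical names)
    src = libvirt.open("qemu+ssh://hostA.example.com/system")
    dst = libvirt.open("qemu+ssh://hostB.example.com/system")

    # Look up a running guest by its (hypothetical) name
    dom = src.lookupByName("payroll-vm")

    # VIR_MIGRATE_LIVE copies the guest's memory while it keeps running,
    # so the application sees no downtime beyond a brief switchover
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()

The same pattern underlies commercial offerings such as VMware's VMotion, though each vendor exposes it through its own management interface.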
Over the past decade, server sprawl has led IT management to find that they were managing multiple thousands of devices, with the IT build-out and a trend toward decentralized IT amplifying each other.
Rolling this up to a market level, IDC has found that by 2010, if these trends continue
unabated, there will be approximately 41 million servers installed in customer sites
worldwide. This marks a 700% increase over the 15-year period from 1996 to 2010.
At the same time, the drivers for this explosion made tremendous sense when looking
at each investment from a tactical standpoint. The first driver for this explosion in
systems is the rapid expansion in applications that IT needs to host. Today, there is
nary a business process or project that is not somehow supported by IT and by one, two, or even a half dozen servers. At the same time, the mandate has been to take out as much cost as possible, so each new project is scrutinized to see whether it would support an IT investment. This, in combination with new technologies emerging into the market, led buyers to gravitate toward lower-cost systems based on the x86 processor. These systems were, and continue to be, priced orders of magnitude lower than the more centralized high-end systems favored 20 or more years ago.
The gravitation toward low-end x86 servers has reached the point where approximately 90% of all systems sold today are based on chips from Intel and AMD.
This sea change was also made possible by the Windows operating system's wholesale support for the majority of applications that businesses need to run. The downside of Windows applications is that, historically, running more than one application on the OS has led to resource or DLL conflicts, which in turn has led to system instability. Rather than spend significant time, effort, and energy testing and regressing applications so they would work well in a shared OS environment, the best practice became to deploy only one application per server.
The result of this one-application-per-system paradigm has been tremendous underutilization of server resources. IDC estimates that, on average, less than 10% of total server capacity is utilized over a period of weeks or months. Again, taking this to a market level, this means that today there is roughly $140 billion of server capacity sitting idle in the marketplace, equivalent to roughly a three-year supply.
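As a back-of-envelope check, a short calculation using only the two figures cited above shows the installed base value they imply; this is an illustration of the arithmetic, not an additional IDC estimate.

    # Figures cited in the text
    idle_value = 140e9   # dollars of server capacity sitting idle
    utilization = 0.10   # average share of capacity actually used

    # If 90% of capacity is idle, the implied total installed value is:
    implied_total = idle_value / (1 - utilization)
    print(f"implied installed capacity value: ${implied_total / 1e9:.0f} billion")
    # -> roughly $156 billion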
The encapsulation benefits of virtualization software enabled customers to harness
the growing power of x86 servers and put a greater percentage of the capacity they
purchased to productive use. At the same time, because each application runs in its own isolated OS, customers could do so without the expensive regression and testing they would have incurred in a shared OS deployment.
Customers also found that by reducing the number of servers, they saved on power and cooling costs as well as the real estate expenses associated with maintaining enough datacenter space to house these systems. To put some metrics to this, customers report cutting their facilities costs by about 20%, on average, post virtualization. This holds the potential to return huge amounts of capital to customers, as IDC has found that, in aggregate, customers spend $29 billion annually powering and cooling their servers. Additionally, we have worked with customers that, through virtualization, have been able to extend the life of a facility, taking advantage of the time value of money and pushing out the multimillion-dollar capital outlay associated with building a new datacenter.
Finally, in this first phase of virtualization adoption, we are seeing that the technology has the potential to significantly alter the operating cost structure for managing IT. IDC has found that in the "physical world," on average, most organizations employ one IT professional for every 20–30 servers installed in the datacenter. In the virtual world, discussions with early adopters have found that the same IT professional can manage 60–80 virtual machines, with some customers reporting ratios of up to 200 to 1.
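A simple illustration of what these ratios imply for staffing, using a hypothetical 1,000-workload environment and the ranges reported above:

    workloads = 1000  # hypothetical environment size

    # Administrator ratios reported in the text
    ratios = {"physical": (20, 30),   # servers per IT professional
              "virtual": (60, 80)}    # VMs per IT professional

    for label, (low, high) in ratios.items():
        # The high ratio gives the leanest staffing, the low ratio the heaviest
        print(f"{label}: {workloads // high}-{workloads // low} admins")
    # -> physical: 33-50 admins; virtual: 12-16 admins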
Being able to address operational costs, which drive 70–80% of total company IT spending, is one of the tremendous opportunities for virtualization technologies to change the economics of IT. The other major opportunity
is centered on mitigating lost revenue attributable to system downtime. This
component of the future of virtualization will be the main thrust of this report.
FUTURE OUTLOOK

The Future of Virtualization: Some Forks in the Road
As stated previously, the two key attributes of virtualization are, at the highest level, encapsulation and mobility. To date, the thrust of adoption has been on leveraging the encapsulation benefits: we have seen virtualization deployed first in test and development scenarios (including developer workstations), then in the migration of unsupported NT4 applications, and finally in new applications being deployed in production.
In this manner there is a well-defined path going forward. The industry has found a
very compelling and powerful "hammer" and now it can go out and look for "nails."
One area where the need for consolidation has garnered much attention is the desktop computer. Utilization of desktops is even lower than that of servers, and the environment is even more distributed, driving even bigger support costs and management headaches. Finally, the power savings or "green" benefits of consolidating desktops could dwarf those of servers, given that IDC estimates there are roughly 500 million corporate PCs deployed around the globe today.
A few years ago, a few enterprising customers began leveraging virtualization for just such an exercise. They paired virtualization with Microsoft's Remote Desktop Protocol (RDP) to create a hosted desktop solution (see Figure 1).
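The mechanics of that pairing are straightforward: a broker maps each user to a desktop VM running in the datacenter and points the user's RDP client at it. The sketch below is a hypothetical, minimal broker table that emits a standard .rdp connection file; the user and host names are invented for illustration.

    # Hypothetical broker table: the hosted desktop VM assigned to each user
    ASSIGNMENTS = {
        "jsmith": "vdi-pool-07.corp.example.com",
        "mjones": "vdi-pool-12.corp.example.com",
    }

    def write_rdp_file(user: str) -> str:
        """Emit a minimal .rdp file pointing the user's client at their VM."""
        host = ASSIGNMENTS[user]
        path = f"{user}.rdp"
        with open(path, "w") as f:
            f.write(f"full address:s:{host}\n")  # standard RDP file setting
            f.write(f"username:s:{user}\n")
            f.write("screen mode id:i:2\n")      # 2 = full-screen session
        return path

    print(write_rdp_file("jsmith"))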
Today, hosted desktops, or "VDI" as the model is becoming known, are clearly the next step in the consolidation of IT infrastructure. That said, there are still some hurdles that must be overcome; first and foremost is the economics of VDI. These solutions still run at a price premium relative to traditional distributed desktops. The two main culprits are storage costs and operating system expenses.
From a storage perspective, the price difference between a locally deployed 40–80GB hard drive and the same volume of space on a SAN is significant; hence, storage is a driver of the VDI price premium.
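To illustrate the gap, the following sketch compares the storage bill for one hosted desktop under the two models. The per-gigabyte prices are hypothetical placeholders, not IDC data, chosen only to show the order-of-magnitude difference.

    image_gb = 60              # a desktop image in the 40-80GB range cited above

    # Hypothetical per-GB prices, for illustration only
    local_disk_per_gb = 0.50   # commodity drive inside a distributed desktop
    san_per_gb = 10.00         # enterprise SAN, incl. controllers and fabric

    print(f"local disk: ${image_gb * local_disk_per_gb:,.0f} per desktop")
    print(f"SAN:        ${image_gb * san_per_gb:,.0f} per desktop")
    # A 20x per-GB difference makes storage a major share of the VDI premium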
In addition to the consolidation benefits of virtualization, there appears to be a whole host of new use cases based on the mobility of virtual machines. In aggregate, these use cases are focused on reducing downtime and increasing the agility of IT. They include how virtualization can help customers avoid planned downtime, protect a greater percentage of their IT assets in case of disaster, reduce unplanned downtime due to hardware (and eventually application and OS) failures, and ultimately help deliver on the concept of service-oriented computing.
In terms of virtual machine mobility, there are essentially two means by which a VM becomes "mobile." First, virtualization software decouples the application stack from the underlying hardware; it does so by essentially turning a server into a file. As such, a VM can now be copied, backed up, replicated, and moved like a file. This in turn opens the door to bringing low-cost business continuity to the rest of the IT environment.
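Because a VM's disk is just a file, ordinary file tooling can carry out the backup and replication step. A minimal sketch, assuming hypothetical paths and assuming the VM is powered off or snapshotted so the image is consistent:

    import shutil
    from datetime import datetime
    from pathlib import Path

    # Hypothetical paths: the VM's disk image and a replicated DR share
    vm_disk = Path("/var/lib/vms/orders-app/disk0.img")
    replica = Path("/mnt/dr-replica/orders-app")
    replica.mkdir(parents=True, exist_ok=True)

    # Copy the image under a timestamped name; copy2 preserves metadata
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    shutil.copy2(vm_disk, replica / f"disk0-{stamp}.img")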
In addition to turning servers into files, a growing number of virtualization software providers have the ability to do live migrations: moving a live, running VM, with its OS and application, from one host to another without any downtime. Today, this capability is being used to address planned downtime needs and IT professional "quality of life" issues, such as performing hardware swaps or upgrades during normal business hours without having to take down the application. Going forward, the technology can be leveraged to create pools of computing capacity against which applications can be run and, further out, as a technology that enables SOA and true "cloud computing."
Business Continuity: Driving Interest
Increasingly, customers are beginning to recognize that IT is no longer a series of unrelated systems, each providing a discrete business function or application. Rather, IT has increasingly become one whole interconnected "system," and as with other interconnected infrastructure, such as communication or energy delivery, the failure of one piece can, and typically does, have a cascading impact on the whole.
As one customer recently said:
[In a] traditional [DR model] you buy a cold or warm spare and you stick it at the
other datacenter where it gathers dust because the odds of it being used are very
low and you forget to update it. It has old firmware, old drivers, or may have been
'acquired' for use elsewhere. And then when you have to use it, all hell breaks
loose.
As a result, IDC finds that partners, the government, and, internally through SLAs, business units are requiring more disaster-tolerant and more highly available IT solutions. For IT to protect the "great unprotected masses" without dramatically increasing the budget will require innovators to apply new technology in different ways to satisfy both uptime and budgetary pressures. The trajectory of adoption for business continuity is illustrated in Figure 2.
All of this indicates that, in the words of Clayton Christensen, business continuity is generally an "underserved job." By underserved, Dr. Christensen and others at Innosight mean a job or process that "is important to customers and has yet to be addressed appropriately."
IDC believes virtualization is a technology that is extremely well positioned to benefit
from this awakening. In a recent worldwide survey of approximately 400 customers,
business continuity (BC) ranked second behind consolidation as the reason they are
implementing virtualization (multiple responses allowed). Interestingly, in emerging markets, customers were more likely to be using virtualization software to drive BC than consolidation. Additionally, the midmarket (100–1,000 employees) was as likely to use virtualization software for BC as for consolidation (see Figure 3).
As an example, Gannett Publishing has made very effective use of this capability. The company implemented virtualization across both its primary and subsidiary sites, and it has realized two key benefits as a result. First, the company was able to consolidate DR services into its two primary datacenters: instead of maintaining a third site to act as a cold backup, the two primary sites now back each other up.
Additionally, Gannett has been able to offer DR as a service to its subsidiaries. This was put to dramatic use by some local and regional papers in Louisiana when Hurricane Katrina hit in 2005. Because the VMs and data had been replicated offsite, Gannett was able to migrate services for these properties to the Washington, DC, area as the storm approached; once the storm had passed and the local infrastructure was back up and running, the company migrated the applications back to the primary sites.

A similar evolution is under way in the application server tier. Traditionally, multiple Java applications shared a single OS, JVM, and application server. The upside was one software stack; the downside was that the applications were not isolated from each other, and one failure in the stack could bring down multiple applications.
With virtualization, this architecture could be altered so that each application was
maintained in a virtual machine and each VM had an OS, JVM, and copy of app
server software. While this works wonders for delivering better isolation of applications, it has also driven the creation of more images and software stacks to patch, update, and manage.
More recently, BEA has introduced the idea of an application server appliance with LiquidVM, Virtual Edition, and Liquid Operations Control. Here, instead of a separate OS, JVM, and app server, the stack is combined into a tuned and optimized appliance that supplies all of these services. Customers can deploy their applications right on top of this appliance, which greatly reduces the number of software components IT has to manage.
This architecture is very similar to that of SOA, as services are run in an environment just as applications are run on an appliance. Clearly, the big leap will be in deconstructing applications into their base services, and this will likely be more than a five-year process. But whether truly componentized applications become a reality or not, the combination of application environment sensing and the control infrastructure instantiated through virtualization is a powerful concept that could lead to IT managing the delivery of services rather than focusing on managing infrastructure (see Figure 7).
If the industry can devise a way to link applications so that they represent individual business processes, the concept of service-oriented computing becomes feasible: the management of the infrastructure can be linked to monitoring of the application or service environment and measured against policies and service levels set by the users.
This really does represent a marriage of best-of-breed tools: application monitoring tools provide detailed data on the health, performance, and demand of a specific business application or service, while infrastructure management tools such as live migration implement the moves, adds, and changes of the individual VMs hosting the service.
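What such a marriage might look like in code: a toy control loop, entirely hypothetical, in which a monitoring feed supplies per-VM response times and a breach of the service-level objective triggers a live migration to the least loaded host. Every class, function, and threshold here is an invented stand-in for real monitoring and hypervisor management APIs.

    import random
    import time

    SLA_MS = 250  # hypothetical response-time objective

    class Host:
        def __init__(self, name):
            self.name, self.vms = name, []

    class VM:
        def __init__(self, name, host):
            self.name, self.host = name, host
            host.vms.append(self)

    def response_time_ms(vm):
        """Stand-in for an application monitoring feed."""
        return random.uniform(50, 400)

    def least_loaded_host(pool, exclude):
        """Naive capacity planner: pick the host with the fewest VMs."""
        return min((h for h in pool if h is not exclude),
                   key=lambda h: len(h.vms))

    def live_migrate(vm, target):
        """Stand-in for a hypervisor manager's live migration call."""
        vm.host.vms.remove(vm)
        target.vms.append(vm)
        vm.host = target
        print(f"migrated {vm.name} -> {target.name}")

    pool = [Host("esx01"), Host("esx02"), Host("esx03")]
    vms = [VM(f"app{i}", pool[i % 3]) for i in range(6)]

    for _ in range(3):  # a few iterations of the control loop
        for vm in vms:
            if response_time_ms(vm) > SLA_MS:  # SLA breach detected
                live_migrate(vm, least_loaded_host(pool, vm.host))
        time.sleep(1)

A real policy engine would also arbitrate among competing services and throttle migrations, which is exactly the cross-vendor coordination problem discussed next.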
This vision of policy-based automation for the scaling, moving, resizing, provisioning, and decommissioning of virtual machines in support of application service levels is highly speculative and only roughly defined. An even bigger challenge is that bringing such a vision to fruition would require unprecedented collaboration across the industry. Hence, this service-oriented computing scenario in the future of virtualization will still be years from broad market acceptance and will likely look very different once all the vendor wrangling and positioning is complete.
That said, if a version of this concept actually makes it successfully to market, it is only a small step from there to moving IT into the cloud. Here, individual services hold no special significance or proprietary value-add for the customer. It is only through the combination of these services that one is able to deliver a business application, and the unique advantage comes not from the individual services but from how they are combined.