Date: 05/12/22 Page: 1

The Dinosaur Myth… 2005
Mainframe Dinosaur Myth: An Evolving Method of Analyzing and Optimizing an IT Server Infrastructure
Windows Server, UNIX Server, Mainframe Server
February 2004, 1st edition
presented by Dipl. Ing. Werner Hoffmann
EMAIL: [email protected]
A member of IEEE and ACM
The Dinosaur Myth has been regularly updated in recent years as part of the Mainframe Market Information Service. The Myth was first published by Xephon, and is now maintained by Arcati (www.arcati.com) in association with the author Barry Graham.
- Computing technology becomes more pervasive
- TCO/TCU analysis is still a hot industry topic
- Critical success factors are very dependent on the IT organization and IT infrastructure
- Changing realities put huge pressure on standard ways of understanding and solving business problems
- IT infrastructure analysis methods balance the best techniques of server infrastructure technical analysis with financial analysis
Highlights: The biggest breakthrough is getting to credible outline solution designs for tactical and strategic projects that can serve as the baseline for business cases comparing multiple target server scenarios!
The overall objective of a project can be summarized as: defining the current state of the company's IT server infrastructure, describing alternative realistic future states, showing cost projections for the alternatives, and recommending a better future-state alternative.
1. Define study scope with IT executive management
2. Issue data collection for server inventory, infrastructure budget and personnel cost data
3. Validate and complete step 2 by interviews
4. Analyze costs by server platform and user
5. Analyze server and application data to identify groups of similar servers
6. Run an efficiency and service health check for each of the major server families
7. Identify outline solution areas and a credible migration path
8. Build an investment case for the "future state" based on a five-year cost projection
9. Deliver final recommendations to the IT executive management
How much do you spend each year on IT infrastructure for every user? Although this is essentially a very simple question, it can be made to appear extremely complicated. A simple way to get an initial answer to this question lies in identifying the total amount of IT infrastructure spend (say $100M per year) and dividing by the number of users (say 20,000) to calculate a total cost of IT per user per year (say $5K per user per year).
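The back-of-envelope calculation described above can be sketched in a few lines, using the example figures from the text ($100M annual spend, 20,000 users):

```python
# Back-of-envelope IT cost per user per year, using the example
# figures quoted in the text.
def cost_per_user(total_annual_spend: float, users: int) -> float:
    """Annual IT infrastructure cost per user."""
    return total_annual_spend / users

print(cost_per_user(100_000_000, 20_000))  # 5000.0, i.e. $5K per user per year
```

This first-cut number is deliberately crude; the rest of the method refines it by platform and cost category.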
People efficiency – broadly measured by "z/OS MIPS per person" or "UNIX servers per support person"
Systems efficiency – broadly measured by "average processor utilization" or "disk utilization"
Quality of service – broadly measured by quarterly scheduled and unscheduled outage time and impact
Service delivery cost – broadly measured by total cost and incremental cost of additional usable capacity
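The four health-check dimensions above can be sketched as a small record type. The field names and sample numbers here are illustrative assumptions, not part of the original method:

```python
from dataclasses import dataclass

@dataclass
class PlatformHealth:
    # Illustrative fields for the four health-check dimensions.
    capacity: float           # e.g. installed z/OS MIPS, or server count
    support_staff: int        # FTEs supporting the platform
    avg_utilization: float    # average processor utilization (0..1)
    outage_hours_qtr: float   # scheduled + unscheduled outage per quarter

    def people_efficiency(self) -> float:
        # e.g. "z/OS MIPS per person" or "servers per support person"
        return self.capacity / self.support_staff

# Hypothetical mainframe figures, for illustration only.
mainframe = PlatformHealth(capacity=4000, support_staff=20,
                           avg_utilization=0.85, outage_hours_qtr=1.5)
print(mainframe.people_efficiency())  # 200.0 MIPS per person
```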
This approach provides invaluable insights into often hidden or "gray-area" personnel costs that should be apportioned to the specific platforms in the company's service delivery cost model.
IT Cost Analysis - The Basics: Financial Analysis and Technical Analysis
Before starting any discussion about the financial analysis it is vital to identify the core IT infrastructure costs. This is often not quite as simple as it seems.
Note: There are typically hundreds of UNIX servers and thousands of Intel servers in any major enterprise. One of the first steps is to place these servers into a number of broad major categories and further subcategories. This is a very important step towards developing a server solution strategy for any enterprise.
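The categorization step above amounts to bucketing a raw server inventory by a few broad attributes. A minimal sketch, with invented category keys and sample data (the original method does not prescribe specific fields):

```python
# Sketch of bucketing a server inventory into broad categories.
# The grouping keys (os, role) and sample records are illustrative.
from collections import defaultdict

def categorize(server: dict) -> tuple[str, str]:
    platform = server["os"]             # e.g. "UNIX", "Windows", "z/OS"
    role = server.get("role", "other")  # e.g. "web", "db", "app"
    return platform, role

inventory = [
    {"name": "srv001", "os": "UNIX", "role": "db"},
    {"name": "srv002", "os": "Windows", "role": "web"},
    {"name": "srv003", "os": "UNIX", "role": "db"},
]

groups: dict[tuple[str, str], list[str]] = defaultdict(list)
for s in inventory:
    groups[categorize(s)].append(s["name"])

print(dict(groups))
# {('UNIX', 'db'): ['srv001', 'srv003'], ('Windows', 'web'): ['srv002']}
```

Each resulting group of similar servers then becomes a candidate for a common health check and solution strategy.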
One of the challenges of comparing server platforms is the lack of real UNIX and Intel availability measurements. Compared with 4-6 mainframes, measured daily, in many different ways, there are frequently 100-200 UNIX servers and 500-1000 Windows NT servers which have few measurements. An average UNIX service delivery organization has typical weekday service hours of extended office hours (08:00-20:00 Monday to Friday) for the bulk of the application and database servers.
In most organizations, total current spend is a direct result of infrastructure decisions made 2, 5 or even 10 years ago!
A good understanding of today’s actual IT expenditure is very important in determining actual, achievable people efficiency and systems utilization factors that are central to a future cost comparison.
It is important to distinguish between three or more "states": the today state, and two or more future states.
1. Machine virtualization
2. Development and deployment standardization
3. Hardware packaging
4. Manual "best-fit" techniques
The next major step is to identify alternative future end states for two or three solution areas such as Web serving, e-mail or a specific group of application servers.
• A good technical analysis and outline design of the future state is an essential starting point. Enough detail, or a set of realistic assumptions, is required to identify the main transaction volumes, main software options and likely service quality. Typically three future configurations will be needed (Intel, UNIX, and S/390 or z/OS), each of them capable of handling, say, 1000 units of the application load.
• The difficult next step is to ensure comparability of the performance and throughput of the Intel, UNIX and S/390 configurations. In most cases industry standard benchmarks (such as TPC-C, SpecInt, SD steps/hour), actual benchmark data, or actual measurements can be used.
• The final key assumption is the number of IT support people needed to deliver the S/390, UNIX or Windows NT service. Clearly today's actual experience relative to industry averages is a very important starting point. There are many measurement points over many years in the S/390 arena. With UNIX and Windows NT, people support costs dominate the platform cost. In the case of UNIX, we find total people costs usually account for 35-45% of the total IT spend on the platform. This can be compared with 45-55% in the case of Windows NT, and 20-25% for S/390. It is worth noting that the heaviest people costs are incurred in supporting uncloned application and database servers, which require people-intensive monitoring, problem determination, tuning, backup, and recovery processes.
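One way to make the candidate configurations comparable, as described above, is to size each one to the same application load: divide the target load by each platform's per-server throughput (taken from a benchmark or measurement) at a chosen target utilization. The per-server capacities and utilization target below are placeholders, not real TPC-C or SpecInt results:

```python
import math

def servers_needed(load_units: float, units_per_server: float,
                   target_utilization: float = 0.6) -> int:
    """Servers required to carry load_units at the target utilization."""
    return math.ceil(load_units / (units_per_server * target_utilization))

# Hypothetical per-server capacities, in "units of application load".
for platform, capacity in [("Intel", 25), ("UNIX", 90), ("z/OS", 1200)]:
    print(platform, servers_needed(1000, capacity))
# Intel 67, UNIX 19, z/OS 2 (with these made-up capacities)
```

The people-cost percentages quoted above can then be applied to each sized configuration to estimate the dominant support-staff component of the scenario cost.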
A clear perspective on today’s actual IT infrastructure cost, by technology platform (e.g., Windows NT, UNIX, mainframe) is extremely important in determining future IT strategy and investment decisions.
The following four action points are recommended:
1. Establish a realistic estimate of the actual IT cost per user per year,
2. Conduct an IT health check of all major server platforms,
3. Build a simple financial model to estimate the incremental cost of each server platform,
4. Ensure that PC and distributed server proliferation and data fragmentation are strongly controlled.
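Action point 3 can be sketched as follows. The distinction it captures is that on a platform with high fixed costs, the incremental cost of extra capacity can be well below the average cost per unit; all figures here are illustrative assumptions:

```python
def average_cost_per_unit(fixed_annual: float, variable_per_unit: float,
                          units: float) -> float:
    """Total annual cost spread over all usable capacity units."""
    return (fixed_annual + variable_per_unit * units) / units

def incremental_cost_per_unit(variable_per_unit: float) -> float:
    """Cost of one additional usable capacity unit (fixed costs are sunk)."""
    return variable_per_unit

# Hypothetical platform: $2M fixed annual cost, $500 variable cost
# per unit, 4,000 units of usable capacity today.
print(average_cost_per_unit(2_000_000, 500, 4_000))  # 1000.0 per unit
print(incremental_cost_per_unit(500))                # 500.0 per extra unit
```

Comparing these two numbers per platform is what makes the "incremental cost of additional usable capacity" metric from the health check meaningful.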
A credible, simple, constantly updated IT vision and blueprint of the target IT infrastructure linked to business needs. This IT infrastructure blueprint needs
to describe selected strategic technical components, such as the primary server platforms (e.g., S/390, z/OS, RS/6000, Netfinity®), database (e.g., DB2®), core
middleware (e.g., CICS, MQ), development paradigms (e.g., Java, WebSphere, LINUX®) and the data network (e.g., TCP/IP).
• Describes two alternative scenarios for a major application:
• 1) 750 real UNIX servers costing $40M over 3 years
• 2) a centralized S/390 environment
• From a technical perspective this study highlights the potentially massive infrastructure costs of running hundreds of real servers rather than "hundreds of cloned virtual" servers...
a) ...it is unwise to mix applications with different characteristics in the same UNIX server...
b) Each application has different growth and failure characteristics -> a people-intensive operational environment...
c) Technology changes -> in a UNIX environment there is a constant conflict between people productivity and newer function...
...case study illustrates that UNIX server proliferation is costly, in the range of 1.6-1.8 times more expensive than the S/390 scenario in a three-year TCO case.
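As a quick sanity check on the UNIX scenario figures quoted above (750 servers, $40M over 3 years), the implied run rate is roughly $18K per server per year:

```python
# Scenario figures from the case study: 750 real UNIX servers,
# $40M total over 3 years.
total_cost, servers, years = 40_000_000, 750, 3
per_server_per_year = total_cost / servers / years
print(round(per_server_per_year))  # 17778, i.e. roughly $17.8K/server/year
```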
Case Study C: ... A complex application suite with rapid growth
Conclusion:
• Porting the application to the OS/390 (z/OS) environment significantly improves the availability of the application while reducing the operational complexity as the environment grows
• The UNIX scenario costs 1.73 times more than the S/390 (z/OS) scenario
• Including porting cost, the UNIX/S/390 ratio is still 1.35
• Availability costs show a significant advantage for S/390 (z/OS)
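The two ratios quoted for Case Study C can be cross-checked with simple arithmetic: if UNIX costs 1.73x the S/390 scenario, and the ratio drops to 1.35x once porting cost is included, the implied porting cost is about 28% of the S/390 TCO (assuming the porting cost is added to the S/390 side):

```python
# Consistency check on the Case Study C ratios from the text.
unix_ratio = 1.73          # UNIX TCO / S/390 TCO, excluding porting
ratio_with_porting = 1.35  # UNIX TCO / (S/390 TCO + porting cost)

# Solve 1.73 * s390 / (s390 + p) = 1.35 for p as a fraction of s390.
porting_fraction = unix_ratio / ratio_with_porting - 1
print(round(porting_fraction, 3))  # 0.281, i.e. ~28% of the S/390 TCO
```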
• All of these case studies are based on real situations.
• All of them show that an objective analysis of the total three-year cost of ownership of large e-business applications can show mainframe scenario costs that are often 40%-50% lower than UNIX, and 50%-60% lower than Windows NT, even for relatively small applications.
Questions???
Questions, comments, further information?
Please feel free to e-mail me!
Dipl. Ing. Werner Hoffmann
GF22-5168-01 IBM - Scorpion - Simplifying the Corporate IT Infrastructure
GM13-0189-00 IBM - Scorpion Update - An Evolving Method of Analyzing and Optimizing the Corporate IT Server Infrastructure
GF22-5176-00 IBM - Gemini - Generating Meaningful Incremental IT Costs
GM13-0329-00 IBM - Orion - Opportunities for Systems Rationalization in an IT Organization