© Arcati Limited, 2020
Arcati Mainframe Yearbook 2020
Mainframe strategy
The Arcati Mainframe Yearbook 2020
The independent annual guide for users of IBM mainframe systems
PUBLISHED BY:
Arcati Limited, 19 Ashbourne Way, Thatcham, Berks RG19 3SJ, UK
Phone: +44 (0) 7717 858284
Fax: +44 (0) 1635 881717
Web: http://www.arcati.com/
E-mail: mainframe@arcati.com
Contents

Welcome to the Arcati Mainframe Yearbook 2020  3
10 Steps to True Mainframe DevOps: A Phased Approach to Cross-platform DevOps Success  6
Mainframe Data Management  17
Join a Mainframe Development Revolution in 2020 – Embrace Open!  22
Indicators of Compromise and Why It Takes Six-Plus Months to ID a Breach  27
Mainframe breaches: a beginner’s defensive strategy  33
The 2020 Mainframe User Survey  37
  An analysis of the profile, plans, and priorities of mainframe users
Vendor Directory  55
  Vendors, consultants, and service providers in the z/OS environment
A guide to sources of information for IBM mainframers  141
  Information resources, publications, social media, and user groups for the z/OS environment
Glossary of Terminology  147
  Definitions of some mainframe-related terms
Mainframe evolution  179
  Mainframe hardware timeline 1952-2019; mainframe operating system development

SPONSORS

Action Software 58, 59
BMC Software 27, 68
Broadcom 22, 68
Compuware 6, 73
Data Kinetics 54, 77
ESAi 82, 82
Fujitsu 88, 88
Key Resources 99, 100
Model 9 17, 109
Software AG 123, 123
Software Diversified Services 124, 125
Tone Software 131, 131
Welcome to the Arcati Mainframe Yearbook 2020
We are very grateful – as always – to all those who have
contributed this year by writing articles, taking part in our
annual user survey, or updating their company profiles. In
particular, I must thank the sponsors and advertisers, without
whose support this Yearbook would not be possible.
The big IBM announcement in 2019 was its new z15 mainframe,
which culminates four years of development with over 3,000 IBM Z
patents issued or in process and represents a collaboration with
input from over 100 companies. Building on the z14’s pervasive
encryption, IBM introduced Data Privacy Passports technology that
can be used to gain control over how data is stored and shared.
This gives users the ability to protect and provision data and
revoke access to that data at any time. In addition, it not only
works in the z15 environment, but also across an enterprise’s
hybrid multi-cloud environment. This helps enterprises to secure
their data wherever it travels.
Also new with the z15 is IBM Z Instant Recovery, which uses z15 system recovery technologies to limit the cost and impact of planned and unplanned downtime by accelerating the recovery of mission-critical applications using full system capacity for a period of time. It enables general-purpose processors to run at full-capacity speed and allows general-purpose workloads to run on zIIP processors. This boost period accelerates the entire recovery process in the partition(s) being boosted.
Security is an on-going and growing issue for anyone with a computer, and for enterprises especially. Back in the summer, the annual Evil Internet Minute report1 from RiskIQ suggested that cyber-criminals cost the global economy $2.9 million every minute in 2018, for a total of $1.5 trillion. The report concluded this
after analysing proprietary research and data derived from the
volume of malicious activity on the
Internet. Major companies are paying $25 per Internet minute
because of security breaches, while hacks on cryptocurrency
exchanges cost $1,930. Criminals are leveraging multiple tactics,
from ‘malvertising’ to phishing and supply chain attacks. The loss
from phishing attacks alone is $17,700 per minute. Global
ransomware events in 2019 were projected to total $22,184 by the
minute. Cyber-criminals have also increasingly targeted e-commerce with Magecart hacks, which grew by 20% over the past year. The study found 0.21 Magecart attacks were detected every
minute. It also found that in each Internet minute: 8,100
identifier records are compromised, seven malicious redirectors
occur, and 0.32 apps are blacklisted. Plus, there are 2.4 phish
traversing the Internet per minute.
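As an illustrative sanity check (ours, not the report's), the quoted per-minute loss squares with the quoted annual total once multiplied out over a year:

```python
# Cross-check the RiskIQ figures: $2.9 million lost per Internet minute
# should come to roughly $1.5 trillion over a full year.
MINUTES_PER_YEAR = 60 * 24 * 365          # 525,600 minutes

loss_per_minute = 2.9e6                   # dollars per minute (RiskIQ, 2018)
annual_loss = loss_per_minute * MINUTES_PER_YEAR

print(f"${annual_loss / 1e12:.2f} trillion")   # ≈ $1.52 trillion
```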
The Arcati Mainframe Yearbook 2020
Publisher: Mark LillycropEditorial Director: Trevor
EddollsContributors: Broadcom, Compuware, Model 9, BMC, Mark
Wilson
© 2020, Arcati Limited.
All company and product names mentioned in this publication
remain the property of their respective owners.
This Yearbook is the copyright of Arcati Limited, and may not be
reproduced or distributed in whole or in part without the
permission of the owner. A licence for internal e-mail or intranet
distribution may be obtained from the publisher. Please contact
Arcati for details.
IBM’s 2019 “Cost of a Data Breach Report”2 highlighted the financial impact that organizations can feel for years after an incident. The average cost of a breach rose from $3.86m to $3.92m over the past year, and by over 12% over the past five years. According to IBM, the cost of a breach in the USA is $8.19m. For companies with fewer than 500 employees, losses averaged out at over $2.5m, a potentially fatal sum. Mega breaches
of over one million records cost $42m, while those of 50 million
records are estimated to cost companies $388m. IBM also found that
on average 67% of data breach costs were realized within the first
year after a breach, but over a fifth (22%) accrued in the second
year and another 11% accrued more than two years after the initial
incident. Organizations in highly-regulated environments (like
healthcare and financial services) were more likely to see higher
costs in the second and third years, apparently. Malicious breaches
accounted for the majority (51%) of cases, up 21% over the past six
years, and cost firms more – on average $4.45m per breach.
Accidental breaches, 49% of all incidents, cost slightly less than
the global breach average. Human error cost $3.5m, and system
glitches cost $3.24m. For the ninth year in a row, healthcare
organizations suffered the highest cost of a breach – nearly $6.5m
on average. IBM suggested that extensively tested incident response
plans can minimize the financial impact of a breach, saving on
average $1.23m. Other factors affecting the cost of a breach
include how many records were lost, whether the breach came from a
third party, and whether the victim organization had in place
security automation tech and/or used encryption extensively.
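As a small illustration (the split and the average are IBM's reported figures; combining them is our arithmetic), the multi-year accrual can be applied to the $3.92m global average breach cost:

```python
# Applying IBM's reported cost-accrual split (67% in year one, 22% in year
# two, 11% beyond) to the $3.92m global average breach cost. Illustrative
# only: IBM reports the split and the average separately.
average_breach_cost_m = 3.92               # millions of dollars
accrual_split = {"year 1": 0.67, "year 2": 0.22, "later": 0.11}

accrued = {period: round(average_breach_cost_m * share, 2)
           for period, share in accrual_split.items()}
print(accrued)   # {'year 1': 2.63, 'year 2': 0.86, 'later': 0.43}
```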
IBM’s most significant acquisition this year concluded on 9 July
when it acquired Red Hat, which is best known for its version of
Linux. IBM acquired all the issued and outstanding common shares of
Red Hat for $190.00 per share in cash, representing a total equity
value of around $34 billion. At the time, IBM said that the companies together would accelerate innovation by offering a next-generation hybrid multi-cloud platform. Based on open source technologies, such as Linux and Kubernetes, the platform would allow
businesses to securely deploy, run, and manage data and
applications on-premises and on private and multiple public
clouds.
The year ended with the news that Compuware intends to acquire
Innovation Data Processing. This, perhaps, is another example of
larger companies looking round for mid-range companies to
strengthen their product range – like Syncsort’s acquisition of
Pitney Bowes’ software and data business.
As well as Data Privacy Passports, introduced with the z15
mainframe, some other acronyms we’re getting used to this year are
zCX, WMLz, and Db2ZAI. zCX stands for z/OS Container Extensions, which lets you run Linux capabilities in a container on z/OS.
WMLz stands for IBM Watson Machine Learning for z/OS. It’s designed
to simplify the production implementation of AI models. Users can develop models where they want, and readily deploy them within their transaction applications for real-time insight. Db2ZAI
stands for IBM Db2 AI for z/OS, which can optimize a Db2 for z/OS
engine to determine the best-performing query access paths, based
on the workload characteristics. You may also have come across fog
computing or fog networking or fogging, which is an architecture
that uses edge devices to carry out a substantial amount of
computation, storage, and communication locally and then routes it
over the Internet backbone.
Looking forward to 2020, it’s interesting to see what Gartner
highlights as the top 10 strategic technology trends for the year.
They are:
• Hyperautomation – the combination of multiple machine learning (ML), packaged software, and automation tools to deliver work.
• Multiexperience – a change in the user experience in how they
perceive the digital world and how they interact with it. It’s
suggested that the burden of translating intent will move from the
user to the computer.
• Democratization of expertise – providing people with access to
technical expertise (eg machine learning, application development)
or business domain expertise (eg sales process, economic analysis)
through a radically simplified experience and without requiring
extensive and costly training.
• Human augmentation – using technology to deliver cognitive and
physical improvements as an integral part of the human
experience.
• Transparency and traceability – a range of attitudes, actions,
and supporting technologies and practices designed to address
regulatory requirements, preserve an ethical approach to use of
artificial intelligence (AI) and other advanced technologies, and
repair the growing lack of trust in companies.
• The empowered edge – empowering edge computing with more
sophisticated and specialized compute resources and more data
storage.
• Distributed cloud – the distribution of public cloud services
to different locations while the originating public cloud provider
assumes responsibility for the operation, governance, updates to,
and evolution of the services.
• Autonomous things – physical devices that use AI to automate
functions previously performed by humans, eg robots, drones,
autonomous vehicles/ships and appliances.
• Practical blockchain – a maturing blockchain functionality and growth in use.
• AI security – while artificial intelligence and machine learning can augment human decision making and create great opportunities to enable hyperautomation and leverage autonomous things to deliver business transformation, they create significant new challenges for security in terms of protecting AI-powered systems, leveraging AI to enhance security defence, and anticipating nefarious use of AI by attackers.
It’s fascinating to see how many of those will use a mainframe
in order to work effectively.
So, it looks like the mainframe industry is an exciting place to
work. And with that in mind, I can confidently predict that 2020
will be an interesting year, and that the mainframe will continue
to offer outstanding performance and reliability, and be at the
heart of the world’s business-critical applications.
1 https://www.globenewswire.com/news-release/2019/07/24/1886965/0/en/In-Just-One-Evil-Internet-Minute-Over-Two-Phish-Are-Detected-And-2-9-Million-Is-Lost-To-Cybercrime-Reveals-RiskIQ.html
2 https://databreachcalculator.mybluemix.net/
As well as the z15, IBM announced a 53-quantum bit (qubit)
device in 2019, which they claimed was the most powerful quantum
computer ever offered for commercial use. IBM’s machine can be used
by anyone, and is sited at the company’s Quantum Computation Center
in Poughkeepsie, New York State. As yet, the new computer doesn’t
have a name. IBM said of its new 53-qubit system that it benefits
from a number of new techniques that enable it to be larger and
more reliable. It features more compact custom electronics for
improving scaling and lower error rates, as well as a new processor
design.
10 Steps to True Mainframe DevOps: A Phased Approach to Cross-platform DevOps Success

This article looks at what this approach means to you and your company.
Enterprises must quickly and decisively transform their
mainframe practices. The slow, inflexible processes and methods of
the past have become intolerable impediments to success in today’s
innovation-centric digital markets. IT leaders must therefore bring
the proven advantages of Agile, DevOps and related disciplines to
bear on the mainframe applications and data that run their
business.
But how? And where to begin? Mainframe transformation can seem
overwhelming. And no IT leader wants to embark on a “boil the
ocean” project that consumes resources and generates risk without a
high probability of sizable, near-term concrete rewards.
This article addresses these concerns by spelling out a proven,
phased approach for measurably modernizing mainframe practices. By
following this approach, IT leaders can rack up high-impact,
near-term “wins” at each stage along the way, while staying on
course towards a vital strategic objective that can be fully
achieved in less than two years.
Because every organization has its own existing processes, tools
and culture, this mainframe transformation game plan can be
modified to fit each organization’s specific needs.
No company, however, can afford to further delay mainframe
transformation—or embark on such a transformation without a clear
plan. To remain competitive in an app-centric economy,
mainframes must be as adaptive as other platforms. And
enterprise IT must be able to manage DevOps in an integrated manner
across mainframe, distributed and cloud.
This article lays out a plan for doing exactly that.
Step 1: DETERMINE YOUR CURRENT AND DESIRED STATE

Before embarking on the process of mainframe transformation, it’s wise to first be clear about what that transformation will entail. To map out a transformation plan that all relevant stakeholders can understand and buy into, you will need to:
Document and assess your current state. What tools do your
development, QA and ops teams currently use? What does your
software delivery process look like? What are the results in terms
of velocity, frequency, person hours and quality? How consistent or
variable are those results? How much institutional knowledge is
isolated in the minds of your top SMEs? How well do your mainframe
teams collaborate and coordinate with their counterparts working on
your distributed and cloud platforms?
Define your desired state. Prioritize the goals that are most
important to your organization. These can include velocity
(compressing the time required to go from business requirement to
code in production), agility (the ability to make smaller, more
frequent changes to application code), efficiency (reducing cost
through better use of person hours), integration (better
coordination of code changes across platforms) and generational
shift (empowering technical staff with mainstream skillsets to
assume responsibility for mainframe DevOps). These goals must
obviously be achieved without compromising the reliability and
stability of core applications.
Identify impediments. Many mainframe teams face technical
obstacles, such as tools that lock them into slow, waterfall
processes. Entrenched
habits and work culture—such as insufficient emphasis on speed
and collaboration—can also tangibly hinder mainframe
transformation. Or you just may be under-funded. Your gap analysis
should zero in on these specific impediments preventing you from
achieving your specific goals.
Define your customized plan. Once you know where you want to go and what’s currently preventing you from getting there, you can craft a rational, credible transformation plan. Your plan will
likely look very similar to the nine steps that follow. Depending
on the particulars of your situation, however, you may need to
prioritize certain steps or allocate more resources to certain
aspects of your transformation.
Step 2: MODERNIZE YOUR MAINFRAME DEVELOPMENT ENVIRONMENT

Most mainframe development is still performed in antiquated “green screen” ISPF environments that require highly specialized knowledge and problematically limit new staff productivity. Modernizing the mainframe begins with modernizing this developer workspace.
A modernized mainframe workspace should possess the look and
feel of the Eclipse-style IDEs that have become the de facto
standard for other platforms. This user-friendly interface will
allow staff at all experience levels to move easily between
development and testing as they work on both mainframe and
non-mainframe applications. Your modernized mainframe IDE will also
support a complementary palette of value-added tools as you
continue your mainframe transformation through its subsequent
steps.
Tools

The enabling technology for Step 2 is Compuware Topaz Workbench—along with associated Compuware solutions such as File-AID, Abend-AID and Xpediter.
These products provide complete source code editing, debugging,
fault diagnosis and data browse/edit/compare functionality. Topaz
Workbench, in particular, allows developers to write, compile and
run code all from a modern Eclipse-based IDE. It also brings
drag-and-drop ease to copying files between LPARs and other common
developer tasks. This puts the mainframe development experience on
par with other technologies in the enterprise such as Java.
Find more information on Topaz Workbench.
Success Indicators
• Empirical productivity metrics such as features delivered, code commit frequency and shorter learning curve for new employees
• Positive anecdotal feedback from development and testing teams
• Ability to recruit and train individuals with no mainframe experience to work on mainframe applications
Step 3: ADOPT AUTOMATED TESTING

Unit testing is central to Agile. Frequently testing small increments of code enables developers to quickly and continuously assess how closely their current work aligns with immediate team objectives—so they can promptly make the necessary adjustments or move on to the next task.
Unfortunately, technical obstacles have historically prevented
the kind of automated unit testing that is commonplace in Java from
being applied to mainframe development. Now that those obstacles
have been removed as described below, reliable automated unit
testing can become a mainframe reality.
Of course, effective unit testing requires more than technology.
Mainframe developers not accustomed to unit testing must learn how
to best leverage the practice to work much more iteratively
on much smaller pieces of code. One particularly effective way
to accelerate adoption of unit testing best practices across your
development teams is to monitor the percentage of code that has
been subject to automated testing. By combining automated unit
testing with code coverage metrics, you can build confidence among
your developers that they can make incremental changes to
mission-critical applications without jeopardizing quality. It’s
also important to implement controls that ensure unit testing is
successfully completed before promoting code. But automated unit
testing across all platforms is a fundamental requirement for
Agile.
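A promotion control of the kind just described reduces to a simple predicate. The function below is a hypothetical sketch, not part of any vendor tool, and the 80% coverage threshold is an assumed team policy:

```python
# Hypothetical promotion gate: allow code promotion only when automated unit
# tests passed AND code coverage meets a team-agreed threshold. The names and
# the 80% default are illustrative assumptions.
def may_promote(tests_passed: bool, coverage_pct: float,
                min_coverage: float = 80.0) -> bool:
    """Return True only when the change is safe to promote."""
    return tests_passed and coverage_pct >= min_coverage

print(may_promote(True, 85.0))    # True
print(may_promote(True, 62.5))    # False: coverage below threshold
print(may_promote(False, 95.0))   # False: failing tests trump coverage
```

In practice such a check would sit in the CI pipeline rather than be called by hand, so no change reaches promotion without passing it.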
Once you have completed the unit testing phase, you can work
through the functional testing phase. Functional testing validates
that the implementation works as specified in its requirements.
This is different from “the code works correctly,” which is
determined during unit testing.
After functional testing, you can then start the integration
testing phase. With integration testing, you evaluate if the
collaboration between two or more programs works as expected. This
extends the functional testing that focuses on testing the
specifications of one program to test the interaction between
several programs.
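The distinction between the phases can be sketched in a few lines. The example below uses Python's unittest with a hypothetical overtime-pay routine standing in for a mainframe program; it illustrates the idea only, not any specific tool's test format:

```python
# A language-neutral sketch of the unit vs functional distinction, using a
# hypothetical overtime-pay routine (Python stands in for the COBOL original).
import unittest

def overtime_pay(hours: float, rate: float) -> float:
    """Pay time-and-a-half for hours worked beyond 40."""
    extra = max(0.0, hours - 40.0)
    return round(extra * rate * 1.5, 2)

class UnitTests(unittest.TestCase):
    """Unit test: does this one routine's code work correctly?"""
    def test_no_overtime(self):
        self.assertEqual(overtime_pay(38, 20.0), 0.0)

    def test_overtime_boundary(self):
        self.assertEqual(overtime_pay(41, 20.0), 30.0)

class FunctionalTests(unittest.TestCase):
    """Functional test: does behaviour match the stated requirement,
    e.g. 'overtime is paid at 1.5x the rate for hours over 40'?"""
    def test_requirement_time_and_a_half(self):
        base_hourly = 20.0
        self.assertEqual(overtime_pay(45, base_hourly),
                         5 * base_hourly * 1.5)
```

Run with `python -m unittest` against the file. Integration testing would then exercise several such routines together, checking the hand-offs between them rather than any one routine's logic.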
Tools

The enabling technology for Step 3 is Compuware Topaz for Total Test. The product automatically scans mainframe code and
creates appropriate unit and functional tests based upon the
program’s structure. It can automatically create these test cases
for both main programs and subprograms. Test cases can include test
data and a set of default test result assertions. This automated
test creation empowers developers at all skill levels to quickly,
easily and accurately validate and troubleshoot any changes they
make to mainframe applications.
Topaz for Total Test now offers functional and integration
testing capabilities to extend the
use of the solution further into the application development
lifecycle. This provides additional levels of test automation to
improve velocity to production use of application changes.
Find more information on Topaz for Total Test.
Success Indicators
• More frequent drops of code updates required by the business
• Fewer errors found later in the DevOps lifecycle (“shift left”)
• Lower cost of defect resolution (since resolution costs are much lower earlier in the DevOps lifecycle)
• Tighter synchronization of in-tandem mainframe/non-mainframe development
Step 4: PROVIDE DEVELOPERS WITH GRAPHICAL, INTUITIVE VISIBILITY INTO EXISTING CODE AND DATA STRUCTURE

As mainframe applications have been expanded and enhanced over many years, they have typically become quite large and complex. They are also typically not very
well-documented. This combination of complexity and poor
documentation is a major impediment to core transformation
goals—including agility, confidence and efficiency. In fact, the
undocumented idiosyncrasies of mainframe applications and data
structures almost universally make enterprise IT highly dependent
on the personal/tribal knowledge of senior mainframe staff. Worse
yet, if a seasoned mainframe developer is no longer available, IT
may be fearful of making any changes at all.
To overcome this dependency, you have to make it much easier for
any new participant/ contributor to quickly “read” existing
application logic, program interdependencies, data structures and
data relationships. Developers and other technical staff also need
to be able to understand application runtime behaviors—including
the actual sequence and nature of all program calls as well as file
and
database I/O—so they can work on even the most unfamiliar and
complex systems with clarity and confidence.
Tools

The enabling technologies for Step 4 are Compuware Topaz for Program Analysis and Topaz for Enterprise Data. Their unique, powerful visualizations reveal underlying program logic and data relationships through dynamically generated, graphically intuitive diagrams. These diagrams show how COBOL and PL/I programs flow with
the associated variables and files, while also enabling developers
to play, save and compare visualizations of application runtime
behaviors— without requiring access to current source code
files.
Find more information on Topaz for Program Analysis and Topaz
for Enterprise Data.
Success Indicators
• Reduction in the amount of time it takes a developer to understand large/complex mainframe applications
• Experienced developers gaining further insight into runtime behavior of mainframe applications
• Developers better estimating project work required to meet delivery deadlines
Step 5: EMPOWER DEVELOPERS AT ALL SKILL AND EXPERIENCE LEVELS TO DELIVER HIGH-QUALITY CODE WITH LESS FRICTION

Successful mainframe transformation demands rigorous, reliable and early detection and resolution of quality issues. There are three primary reasons for
this. First, mainframe applications often support core business
processes that have little to no tolerance for error. Second, in
transitioning from waterfall to Agile delivery cycles, continuous
quality control reduces costs and prevents even
relatively minor errors from adding friction that undermines the
goal of faster, more streamlined application updates.
Third—and of particular importance at this moment in the history
of the mainframe—a new generation of developers with less mainframe
experience and expertise are being called upon to maintain and
evolve mainframe applications. These developers must be supported
with quality controls and feedback above and beyond the automated
unit testing adopted in Step 3.
Every effort must therefore be made to rigorously safeguard
application quality as the mainframe becomes more agile. Continuous
Integration (CI) is especially important in this regard, since it
ensures that quality checks are performed continuously as your code
is updated.
Also, with the right quality control tools and processes, you
can do more than just catch and fix individual issues early in the
cycle. You can also capture KPIs that give you clear visibility
into individual, team and project quality metrics so you can
quickly pinpoint issues requiring additional coaching and
training—allowing you to continuously improve development
performance and productivity.
Tools

As with Step 3, a core technology enabler for Step 5 is Compuware Topaz for Total Test—which provides a single point-of-control for developers to manage their testing activities, while also tracking code coverage to ensure consistent conformance with testing best practices prior to code promotion.

Topaz also provides integrations with SonarSource’s SonarLint and SonarQube. SonarLint integrates into the Topaz Workbench environment to give developers on-the-fly feedback on any potential new bugs and quality issues they may inject into their code. With SonarLint, even mainframe-inexperienced developers can quickly be alerted to application quality issues such as
unbalanced or unmatched working storage/data items and sections
of code that are too complex. SonarQube is a feature-rich dashboard
for tracking quality issues, code coverage from automated tests and
technical debt—all of which are useful for capturing the kinds of
KPIs needed to ensure deliverables are being created with
sustainable and high-quality methods.
Jenkins, the open source automation server, is also typically
important for this step since it provides Continuous Integration
functionality. Jenkins can also automatically drive execution of
any essential quality checks—such as static code analysis,
automated unit and functional tests and measuring code
coverage—that you want to ensure are performed on every change to
code.
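A Jenkins setup of this shape might be declared along the following lines; this is an illustrative declarative-pipeline sketch only, and the stage names and shell scripts are placeholders, not a documented Topaz or SonarQube integration:

```groovy
// Illustrative Jenkins declarative pipeline. The shell scripts invoked in
// each stage are hypothetical placeholders for the build, test and
// analysis steps described in the text.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build-mainframe-change.sh' }   // hypothetical script
        }
        stage('Unit tests and coverage') {
            steps { sh './run-automated-tests.sh' }      // hypothetical script
        }
        stage('Static analysis') {
            steps { sh './run-sonar-scan.sh' }           // hypothetical script
        }
    }
    post {
        failure { echo 'Quality checks failed - change not promoted' }
    }
}
```

The value of expressing the checks this way is that they run on every change, rather than relying on developers to remember them.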
Compuware zAdviser further aids in the use of KPIs to drive improvements in mainframe DevOps performance by both capturing developer behaviors and benchmarking those behaviors against metrics captured from across a broad range of other organizations engaged in mainframe agility initiatives.
Find more information on SonarQube, SonarLint, Jenkins and
zAdviser.
Success Indicators
• Increasing code coverage exercised by automated tests
• Positive trends in quality and reduced technical debt metrics
• Reduced number of error-related cycles between testing and development teams
• Quantifiable improvements in the performance and productivity of new mainframe developers
Step 6: INITIATE TRAINING IN AND ADOPTION OF AGILE PROCESSES

At this point in your journey, you should have the right development environment in place—so
your development teams will be ready for actual training on Agile development methodologies. Once that training is complete, you’ll be able to start shifting your process from a traditional waterfall model with
large sets of requirements and long project timelines to a more
incremental model. The goal is to have developers for mobile, web
and mainframe components collaborate on a single Scrum team. Teams
are focused on stories and epics that capture specific units of
value to your business versus technical tasks in a project plan. By
estimating the size of these stories and assigning them their
appropriate priority, your teams can start engaging in Agile
processes that allow them to quickly iterate towards their
goals.
The move from large-scale waterfall projects to Agile represents
a significant change in work culture for most mainframe teams.
Training in Agile processes and work culture is therefore a must.
Technical Leadership roles and Product Owners, in particular, need
in-depth training and coaching. However, all team members should
have at least some formal introduction to basic Agile
concepts—especially if they’ll be expected to read Scrum or Kanban
boards.
You may want to build your initial Agile mainframe team by
choosing select mainframe developers and pairing them with
Agile-experienced developers from other platforms. You may also
want to consider how you’ll factor conformance to Agile values such
as transparency, knowledge sharing and ideation into your developer
performance reviews.
Tools

Enabling technologies for Step 6 include modern collaboration platforms and Agile project management tools. Atlassian Jira, for example, is an Agile task management tool that
supports Agile methodologies such as Scrum and Kanban. It enables
you to plan, track and manage all Agile development activity, so
you can keep your teams on track and continuously improve Agile
adoption in terms of speed, efficiency, quality and—most
importantly—ongoing alignment with your most urgent business
needs.
Atlassian Confluence complements Jira by providing a
centralized, well-organized web collaboration space where your
Agile teams can easily and flexibly share ideas and product
requirements as well as provide process and status updates, etc.
This type of collaboration supports the required culture of
innovation.
It’s typically wiser to leverage these kinds of popular
best-in-class tools than it is to adopt a monolithic approach that
requires you to perform all SDLC activities within a single
vendor’s solution set—since going forward you’ll want to avoid
vendor lock-in and ensure your ability to take advantage of the
latest innovations in Agile management.
Find more information on Atlassian Jira and Confluence.
Success Indicators
• Target percentage of dev/test staff completes Agile training, with a goal of 100% trained
• First delivery of artifacts from initial Agile teams and discovery of technical and cultural obstacles to broader Agile adoption
• Evidence of cross-team collaboration and business participation in the Agile process
Step 7: LEVERAGE OPERATIONAL DATA ACROSS THE DEVELOPMENT, TESTING AND PRODUCTION LIFECYCLE
To ensure that your applications will
perform optimally in your production environment, it’s not enough
to just write good code. You also need to understand exactly how
your applications behave as they consume processing capacity,
access your databases and interact with other applications.
One good way to gain this understanding is to leverage
operational data continuously throughout the DevOps lifecycle. This
provides dev and ops teams with a common understanding of the
operational metrics/characteristics of an application throughout
its lifecycle, helping them more fully and accurately measure
progress towards team goals. Early use of operational data can also
dramatically reduce your MIPS/MSU-related costs by allowing you to
discover and mitigate avoidable CPU consumption caused by
inefficient code.
Tools
The enabling technologies for Step 7 are Compuware
Abend-AID and Strobe. Abend-AID provides source-level analysis of
application failures—sparing developers the time-consuming work of
manually cross-referencing numerous pages of memory dumps, source
listings and application code. With Abend-AID, dev/test staff can
quickly pinpoint bad statements, identify data issues and isolate
many other types of application problems.
Strobe pinpoints application inefficiencies such as bad SQL
statements, wasteful Db2 system services that cause excessive CPU
consumption, slow data retrieval and other issues that add cost
while undermining the end-user experience. By automatically
identifying these issues, Strobe enables even novice dev/test staff
to contribute to faster delivery of better-performing
applications.
Find more information on Abend-AID and Strobe.
Success Indicators
• Early detection of avoidable CPU consumption
• Reduction in abends in production
• Reduction in average cost per error and mean time to resolution
For more information on transforming your operations team, check
out a companion piece in our DevOps guidebook series entitled 9
Steps to Agile Ops.
Step 8: DEPLOY TRUE AGILE/DEVOPS-ENABLING SOURCE CODE MANAGEMENT
Traditional SCM environments are inherently designed for
waterfall development and are thus incapable of providing essential
Agile capabilities—such as parallel development work on different
user stories, quick compare and merge of different versions, and
conditional pathing of code promotion.
But to truly enable Agile and DevOps on the mainframe, your SCM
must do more than just provide automation, visibility and
rules-based workflows to your SDLC. It must also integrate easily
and seamlessly with other tools in your end-to-end toolchain.
Chances are that your development, test and ops teams will want to
use some combination of Jenkins, XebiaLabs, Slack, CloudBees and/or
other popular tools. You must therefore be able to move data easily
between these tools—and trigger automated actions, messages and
alerts between them. Ideally, your new SCM will do this with
industry-standard REST APIs and webhooks, which provide the
simplest means of doing so and give you the greatest flexibility to
allow your next-gen developers to work with whatever their personal
tools-of-choice happen to be at any point in time.
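As a concrete illustration of the REST/webhook pattern described above, here is a minimal Python sketch of an SCM promotion event being pushed to a webhook receiver. The payload field names and event shape are illustrative assumptions of our own, not a documented ISPW or Jenkins schema:

```python
import json
import urllib.request

def build_promotion_event(assignment_id, level, status):
    """Build an SCM promotion event. Field names are illustrative
    assumptions, not a real ISPW or Jenkins payload schema."""
    return {"assignmentId": assignment_id, "level": level, "status": status}

def notify_webhook(url, event, token=None):
    """POST the event as JSON to a webhook receiver and return the
    HTTP status code."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Any receiver that accepts JSON over HTTP (a Jenkins webhook endpoint, a Slack incoming webhook, an XL Release trigger, etc.) can consume an event shaped this way, which is precisely why industry-standard REST and webhooks keep the toolchain flexible.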
The shift from waterfall-based SCM to Agile-enabling SCM is a
pivotal moment in any mainframe transformation, and it should be
carefully planned to avoid disruption to current work in progress.
It is, however, an absolutely essential shift if your goal is to
increase the speed and frequency of new mainframe code drops,
optimize developer productivity and simplify end-to-end management
of your SDLC.
Tools
The enabling technology for Step 8 is Compuware ISPW. ISPW
is a modern, end-to-end Agile SCM and release automation solution
that enables mainframe application developers at all skill levels
to fulfill business requirements, optimize code quality and improve
developer productivity through mainframe DevOps. It provides
automated change management that eliminates manual steps and
empowers your teams to iterate quickly through the development,
test and QA cycles. ISPW is fully integrated into the Topaz
Workbench environment, so developers have full access to all of
their tools from a single, intuitive interface. It also integrates
with other popular tools, including Jenkins, as described
above.
Find more information on ISPW. Compuware also offers SCM
Migration Services to speed and de-risk this critical
conversion.
Success Indicators
• Developers no longer working with “personal libraries” and different Agile teams working on different stories in parallel
• Developers with different mainframe skill levels are able to quickly understand the scope of their changes before they begin to code
• Reduction in code approval delays
Step 9: AUTOMATE DEPLOYMENT OF CODE INTO PRODUCTION
Agile development alone is insufficient for full digital business agility.
To keep pace with today’s fast-moving markets, your business must
also be able to quickly and reliably get new code into production.
That means automating and coordinating the deployment of all
related development artifacts into all targeted environments
in a highly synchronized manner. You’ll also need to pinpoint
any deployment issues as soon as they occur, so you can take
immediate corrective action.
And, if such corrective action is not immediately evident or doesn’t immediately produce its expected remedial effect, you have to be able to quickly and automatically fall back to the previous working version of the application. This automated
fallback is, in fact, a key enabler of rapid deployment—since it is
the primary means of mitigating the business risk associated with
code promotion.
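The deploy-then-fall-back discipline can be captured in a few lines. This Python sketch is generic and hypothetical, not ISPW's implementation; it simply shows why keeping the previous known-good version addressable is what makes rapid code promotion safe:

```python
class DeploymentManager:
    """Minimal, hypothetical sketch of deploy-with-automatic-fallback.
    The history list always ends with the active, known-good version."""

    def __init__(self, initial_version):
        self.history = [initial_version]

    @property
    def active(self):
        return self.history[-1]

    def deploy(self, version, health_check):
        """Promote `version`; if the post-deploy health check fails,
        automatically fall back to the previous working version."""
        self.history.append(version)
        if not health_check(version):
            self.history.pop()  # automated fallback
            return False
        return True

# A failed health check leaves the previous version active.
mgr = DeploymentManager("v1.0")
mgr.deploy("v1.1", lambda v: True)   # succeeds; v1.1 becomes active
mgr.deploy("v1.2", lambda v: False)  # fails; falls back to v1.1
```

Homegrown deployment scripts typically lack exactly this: a recorded history of known-good versions that a rollback can target automatically.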
Tools
The enabling technology for Step 9 is also Compuware ISPW.
ISPW complements its core SCM capabilities with advanced mainframe
deployment features that empower you to rapidly move code through
the deployment process—including test staging and approvals—while
also providing greatly simplified full or partial fallbacks. As
such, it offers a significantly superior solution to traditional
“homegrown” scripts, which do not provide essential capabilities
such as fallback, progress visibility and auditability of
deployment processes.
ISPW also provides visualization that enables DevOps managers to
quickly pinpoint deployment issues in order to both solve immediate
rollout problems and address persistent bottlenecks in code
promotion.
ISPW features a mobile interface that enables mainframe DevOps
managers to respond immediately when alerted that code changes are
ready for approval. This anytime/anywhere mobile management
eliminates a common cause of mainframe code promotion delays.
Find more information on ISPW.
Success Indicators
• Faster rollouts of applications into production
• Reduction in code promotion failures
• Reduction in personnel effort and system downtime caused by the need to revert after a failed deployment
Step 10: ENABLE COORDINATED CROSS-PLATFORM CONTINUOUS DELIVERY
Mainframe applications and data increasingly serve as a
back-end resource for multi-platform customer- and employee-facing
applications that include mobile, web and/or cloud components.
DevOps teams must therefore be able to fully synchronize the
delivery of new, approved code across all platforms. These
deployment controls should also provide unified, cross-platform
fallback and progress reporting.
This is the target state of enterprise DevOps at the completion
of Step 10: A de-siloed environment where the mainframe is “just
another platform”—albeit an especially scalable, reliable,
high-performing, cost-efficient and secure one—that can be quickly
and appropriately modified as needed to meet the needs of the
business by whichever staff resources are available to do so.
Tools
The enabling technologies for Step 10 include ISPW REST
APIs and integration with tools like XebiaLabs XL Release and
CloudBees. ISPW integrates with distributed tools to provide a
single point of control for all changes across z/OS, Windows, Unix
and other platforms. REST APIs are especially important for
ensuring full flexibility to mix and match best-in-class tools and
avoid vendor lock-in.
XL Release is a highly advanced application release automation
solution that eases the planning, automation and analysis of
cross-platform software releases so you can streamline code
promotion, maintain full visibility into release progress, pinpoint
release problems and address any chronic bottlenecks in your DevOps
processes.
Find more information about our integrations with XebiaLabs XL
Release and CloudBees.
Success Indicators
• Ability to work on related code on multiple platforms in parallel
• Increased communications and collaboration between previously siloed developers with different skillsets
• First successful automated cross-platform release rollout
THE ONGOING PURSUIT OF DIGITAL EXCELLENCE
The evolution of your
mainframe doesn’t stop once you achieve the desired state of
agility and cross-platform integration of your DevOps workflows. In
fact, you’ll probably want to build upon that achievement to
further enhance your company’s digital agility and efficiency over
time.
One especially compelling way to do that is by providing your IT
service management (ITSM) team with a unified environment for both
mainframe and non-mainframe applications. This unified ITSM model
will become increasingly useful as more of your company’s digital
value proposition is based on code that traverses multiple
platforms—from back-end mainframe systems of record to
customer-facing web and mobile apps.
Topaz Connect provides this kind of cross-platform ITSM
integration by unifying third-party ITSM solutions such as
ServiceNow, BMC Software, Tivoli and CA. Through this integration,
ITSM staff
can track processes for mainframe applications in the same
manner as they do for other hardware and software platforms. If an
ITSM change request requires a code change, the specifics of that
change are automatically communicated to ISPW. And as those
modifications are delivered, your ITSM environment can track the
progress of the workflow right through to deployment. You can also
use a solution like XebiaLabs’ XL Release Dashboard to both
maintain real-time insight into your active projects across
platforms and track KPIs that help you pinpoint opportunities for
further improvement—whether it’s the time it takes your smallest
incremental changes to get through QA or the types of technical
issues that seem to be driving the most chatter in your
collaboration tools.
Compuware zAdviser can also help you pinpoint opportunities for
continuous improvement—especially as its benchmarks reveal how
your peers at other organizations keep raising the bar for
mainframe agility over time.
THE TRANSFORMATION IMPERATIVE
IT leaders are under intense
pressure to deliver on many fronts. They have to deliver new
AI-enabled systems. They have to fulfill the escalating mobility
expectations of customers and employees. They have to safeguard the
enterprise from an ever-intensifying range of threats. And they
have to do all this within extremely challenging capex and opex
budget constraints.
Mainframe transformation, however, is central to the success of
all these efforts and more. If your core systems of record aren’t
agile—and if you’re not fully prepared to extend the useful life of
those systems of record well into the next decade, even as you lose
your current cohort of mainframe veterans to retirement—then your
other efforts can only deliver limited benefits. The performance of
your business will ultimately be constrained by the constraints of
your mainframe environment.
The good news is that best practices, modern tools and committed
partners are now available to assist you in your efforts. All you
need is a decision and a plan. Then you can start.
Compuware empowers the world’s largest companies to excel in the
digital economy by taking full advantage of their mainframe
investments.
We do this by delivering innovative software that enables IT
professionals with mainstream skills to develop, deliver and
support mainframe applications with ease and agility. Our
modernized solutions uniquely automate mainframe work, integrate
into a cross-platform DevOps toolchain and measure software
delivery quality, velocity and efficiency.
Learn more at compuware.com.
Mainframe AI
IBM Watson Machine Learning for z/OS (WMLz) can’t
run your mainframe for you, but it is taking baby steps toward
making some applications (eg Db2) run more efficiently. In effect
it is working like a DBA on steroids! WMLz, currently at Version
2.1, is said to simplify the production implementation of AI
models. Users can develop models where they want. And, users can
readily deploy within their transaction applications for real-time
insight. At the moment, WMLz can help sites optimize their
transactions, but it isn’t restricted to just mainframes. WMLz
offers a hybrid cloud approach to model development and model
deployment lifecycle management and collaboration that is designed
to help organizations innovate and transform on an enterprise
scale. Db2ZAI is a separate product that can optimize data access
in Db2. I would hope that we will see something similar for CICS
and IMS in the near future. And, in the not too distant future, we
will perhaps see more features on the mainframe optimized and
controlled by Watson.
IBM Db2 AI for z/OS (Db2ZAI) can optimize the Db2 for z/OS engine to determine the best-performing query access paths, based
on the workload characteristics. The optimizer consists of
Relational Data Services (RDS) components that govern query
transformation, access path selection, run time and parallelism for
every SQL statement used. The access path for an SQL statement
specifies how Db2 accesses the data that the query specifies. It
determines the indexes and tables that are accessed, the access
methods that are used, and the order in which objects are accessed.
Db2ZAI collects data from the optimizer and the query execution
history, finds patterns from this data and learns the optimal
access paths for queries entering Db2 for z/OS. That means data
access is as quick as it can possibly be. And because each site has
a unique workload, this optimization is specific for that
particular site. Should workloads change, the optimization will
change, too. It will learn how to work the best it can.
Mainframe Data Management
When it comes to mainframe data management, there are hidden costs, and now there are some new opportunities.
Based on estimates and benchmarks from Technavio and Gartner,
the mainframe market—hardware and software—is worth about $44
billion a year. An estimated 90% of all credit and debit cards were
still processed on the mainframe in 2017, and IBM continues to sell
more processing capabilities each year.
Considering its size, the mainframe market does not have many
vendors. As a result, it is unusually stable for a technology
market, with rigid control over technological direction.
While stability can be extremely useful, the stagnation of data
management solutions in particular has proven to be of concern.
Mainframes, and the ecosystem of hardware, services, and
applications that surround them are costly. They act on vital data
and run critical workloads. They are counted upon to provide the
highest level of reliability and speed. These factors have created
a high level of risk aversion among mainframe administrators.
Change controls are extremely strict and new technologies are
introduced slowly. Mainframe customers want to see others adopt the
technologies successfully before taking the plunge themselves.
However, with the volume and velocity of data at unprecedented
levels, with business continuity SLAs becoming more demanding, with
increasing demand for business analytics access to mainframe data
and with a high priority focus on mainframe costs, managing
mainframe data needs to be looked at in an entirely different
light. In this article we will explore the real costs of mainframe
data management today and identify modernization opportunities.
THE DATA MANAGEMENT DILEMMA ON MAINFRAMES
Data backup, restore
and archive is one area of the mainframe ecosystem where evolution
has been significantly slower than in the open systems world. It is
dominated by a small group of vendors such as IBM, CA, EMC, Oracle,
and Innovation Data Processing.
These vendors offer a limited number of backup and archive
products, all of them based on tape architecture and all of which
consume costly central processor (CP) resources. The only
significant innovation over the years has been the introduction of
Virtual Tape Libraries (VTLs)—hard disk arrays that serve as a
buffer between the backup streams and the physical tape storage
devices, often doing away with physical tape altogether.
With mainframes typically handling critical data in highly
regulated business environments, risk-averse mainframe
administrators have not been clamoring for novel backup/restore
solutions despite the high costs of hardware and software,
cumbersome restore procedures, and other drawbacks of these legacy
systems.
DATA MANAGEMENT AND DATA ACCESS IMPACT ON MLC
Based on the
analysis of numerous mainframe logs from a wide range of companies
worldwide, backup and space management workloads can take up to 10%
of a mainframe’s compute consumption, much of which is the result
of emulating tape systems. They expend costly main Central
Processor (CP) cycles on house-cleaning tasks that are only
necessary because the backup and space management solutions need to
believe that their data sits on tape.
Further, as access to mainframe data is on the rise and
analytics tools outside the mainframe require more and more data to
be pushed out from the mainframe, the load of ETL and other
means of data transport is also putting pressure on mainframe
compute engines.
IBM employs the Monthly License Charge (MLC) model, in which
organizations are charged based upon a monthly measurement of the
four consecutive peak hours of usage, known as the Rolling 4-Hour
Average (R4HA). Keeping data management and ETL workloads out of
the R4HA peak is a challenge for many administrators. Organizations
frequently face the dilemma of having to choose between restricting
data management job timing and scope so as not to fall into the
R4HA monthly peak, or allowing these processes to run during peak
times and affecting MLC charges.
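A small worked example makes the R4HA mechanics concrete. The MSU figures below are invented for illustration; only the rolling-average billing rule itself comes from the text:

```python
def r4ha_peak(hourly_msu):
    """Billing peak: the highest average MSU consumption over any
    four consecutive hours (the Rolling 4-Hour Average)."""
    windows = [sum(hourly_msu[i:i + 4]) / 4
               for i in range(len(hourly_msu) - 3)]
    return max(windows)

# Hypothetical day: a steady 100 MSU, with an online peak of 200 MSU
# during hours 9-12.
day = [100] * 24
for hour in range(9, 13):
    day[hour] = 200

# A 50-MSU backup job run at hour 10 (inside the peak) raises the billed
# R4HA; the same job run at hour 2 (off peak) leaves it untouched.
in_peak = list(day)
in_peak[10] += 50
off_peak = list(day)
off_peak[2] += 50
```

Here the off-peak schedule keeps the billed peak at 200 MSU instead of 212.5, which is exactly the dilemma the article describes: the cheaper window may be too narrow for all of the data management work that needs to run.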
SHIFTING DATA MANAGEMENT WORKLOADS TO SPECIALTY PROCESSORS
For many mainframe administrators, the R4HA and its impact on MLC are
constantly top of mind. In a 2019 BMC mainframe survey, 61% of
Execs and 68% of Tech managers saw Cost reduction as their top
priority. As noted above, workloads that use the main Central
Processor (CP)—including data management and ETL workloads—drive a
significant portion of the MLC costs. However, there are specialty
processors that allow organizations to execute some percentage of a
workload’s compute time without impacting the main CP. Specialty
processor cycles are charged at significantly lower rates than main
CP cycles.
The z Systems Integrated Information Processor (zIIP) is one
such specialty processor. It can be used to offload numerous types
of workloads, such as Java, XML, and Db2 for z/OS. The zIIP is
proving to be of increasing importance as Java usage grows.
According to the previously mentioned BMC survey, 59% of the
respondents reported Java usage growing, and it is the language of
choice for new applications.
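The economics of offload can be sketched with a back-of-the-envelope model. The per-MSU rates below are purely illustrative assumptions; only the relationship that specialty-processor cycles are charged at significantly lower rates than main CP cycles comes from the text:

```python
def monthly_cost(cp_msu, ziip_msu, cp_rate=100.0, ziip_rate=10.0):
    """Hypothetical cost model: CP cycles billed at a much higher
    per-MSU rate than zIIP cycles (rates are invented)."""
    return cp_msu * cp_rate + ziip_msu * ziip_rate

def offload_to_ziip(cp_msu, fraction):
    """Move `fraction` of a zIIP-eligible CP workload onto zIIP engines."""
    moved = cp_msu * fraction
    return cp_msu - moved, moved

# 1,000 MSU of Java/data-management work, 40% of it zIIP-eligible.
before = monthly_cost(1000, 0)
cp, ziip = offload_to_ziip(1000, 0.40)
after = monthly_cost(cp, ziip)
```

With these (invented) rates, offloading 40% of the workload cuts the bill from 100,000 to 64,000 cost units; whatever the real rates, the saving scales with the zIIP-eligible fraction.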
Encryption and decryption are also examples of
processor-intensive data management tasks that can be offloaded to
specialty processors. This cost-saving measure becomes even more significant in light of
regulatory guidelines that strongly advise that backups—and even
production storage—be encrypted. Fines for violations of the EU’s
General Data Protection Regulation (GDPR) regulations, for example,
can soar to 4% of annual global turnover or €20,000,000, whichever
is higher.
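The quoted GDPR ceiling is simply the higher of two numbers, which is easy to restate (amounts in integer euros):

```python
def gdpr_fine_cap(annual_global_turnover_eur):
    """Upper bound on a GDPR fine for the most serious infringements:
    EUR 20 million or 4% of annual global turnover, whichever is higher."""
    return max(20_000_000, annual_global_turnover_eur * 4 // 100)
```

For a company with EUR 2 billion in turnover the 4% rule dominates (an EUR 80 million exposure); below EUR 500 million in turnover, the EUR 20 million floor applies instead.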
Fortunately, a new generation of Java-based backup, archive and
data access solutions is emerging that can move some or all of the
backup, archive, space management and analytics data access
processing over to the less expensive specialty processors,
avoiding the main CP overhead (and reducing the MLC charges) of
virtual-tape-based and ETL products. This shift can greatly
decrease costs and may have the added benefit of allowing Data
Management processes to be executed with fewer timing constraints,
even as higher priority tasks are executed on the main CP.
MODERN CLOUD ARCHITECTURE STORAGE
In the open systems world, centralized x86 storage virtualization and, more recently, cloud storage are two great examples of solutions that have delivered
impressive savings to organizations of all sizes. While the
reliability, speed, and functionality of open systems storage keep
advancing, the average storage prices are declining consistently
year over year.
Adopting modern commodity SAN, on-premises cloud or public cloud
storage solutions for use with mainframes would allow organizations
to store 3x to 10x more data compared to traditional mainframe
secondary storage solutions of the same price. This would seem to
be a compelling value proposition to those 68% of mainframe Techs
for whom cost reduction is their primary concern. Using cloud
storage from public providers such as Amazon’s AWS, Microsoft’s
Azure, and others, further reinforces the value proposition. For
archive specifically, cold storage on public cloud providers can
improve the capacity/cost ratio 100X.
However, while solutions that enable mainframes to make use of
commodity storage exist, significant barriers to adoption remain.
Risk aversion, based on outdated beliefs that commodity storage
solutions are inadequately resilient for mainframe usage, is one
issue. Another is that many of the extant approaches to adding
commodity storage to mainframes simply make this commodity storage
available as part of a VTL, inheriting the problems and costs
associated with the VTL approach. Furthermore, the current
approaches may place commodity storage behind DASD—which maintains
lock-in to specific storage hardware.
In reality, commodity and cloud storage solutions have evolved
rapidly, and now provide a wide range of features that can make them at least as resilient as traditional mainframe storage at a
significantly reduced cost. Cloud storage, for example, can be
easily and inexpensively configured to be locally and
geographically redundant. Mainframes can be configured to use cloud
storage for secondary or backup storage, as well as a replication
tier.
Commodity storage needn’t be restricted to a storage solution
hidden behind a VTL or DASD. Mainframes can use commodity storage
solutions, ranging from extremely high speed all-flash arrays to
cold archival disk warehouses, or even inexpensive mainstream LTO
(linear tape-open) tape libraries. Perhaps more importantly,
mainframes can embrace all that cloud storage has to offer.
RECOVERY
“Data protection for us was easy – everything was on 1/4” tape back then so we just duplicated them. We ran a twice-yearly recovery exercise to our DR site near Heathrow—in the 13 years I worked there I think the test partially worked once and failed the other 25 times...”
– Jon Waite, CTO, Computer Concepts Ltd
For some organizations, the high cost of classic mainframe
storage solutions and MLC cost concerns can lead to compromising on
the number
of separate remote backup copies, the frequency of their
backups, and/or the amount of backup testing and verification that
they perform. This is a very serious problem. Backups aren’t worth
anything if you aren’t sure you can restore from them.
Administrators don’t want to see backup jobs creating contention with other workloads, so they ensure that only absolutely critical backup
jobs run during R4HA peaks. Pushing backup jobs off peak can save
money, but it also reduces the window in which backups can run,
potentially allowing fewer backup runs.
Amongst all of this, backups need to be verified, restores
tested, and disaster recovery failovers planned and executed. In a
perfect world, all of this is automated so that it occurs
regularly. In reality, many organizations make a series of
compromises to juggle cost and backup execution windows while still
leaving enough time to test restores.
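The verification principle is platform-neutral and can be automated even in a trivial form. This generic Python sketch (not tied to any mainframe product) backs up a file and refuses to call the backup good until the copy proves byte-identical:

```python
import hashlib
import pathlib
import shutil
import tempfile

def sha256(path):
    """Digest of a file's contents, used as the restore-verification check."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def backup_and_verify(source, backup_dir):
    """Copy `source` into `backup_dir`, then prove the copy is
    byte-identical by comparing digests, so a broken backup never
    goes unnoticed. Returns the path of the verified copy."""
    src = pathlib.Path(source)
    dest = pathlib.Path(backup_dir) / src.name
    shutil.copy2(src, dest)
    if sha256(src) != sha256(dest):
        raise RuntimeError(f"verification failed for {dest}")
    return dest

# Demo: back up a file and confirm the copy restores byte-for-byte.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "payroll.dat"
    src.write_bytes(b"critical mainframe data")
    bkp = pathlib.Path(tmp) / "backups"
    bkp.mkdir()
    copy = backup_and_verify(src, bkp)
    DEMO_OK = copy.read_bytes() == src.read_bytes()
```

Real restore testing obviously goes far beyond a checksum, but the design point stands: verification should be a routine, automated step in the backup job itself, not a twice-yearly exercise.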
These issues are not unique to mainframes. Administrators in the
open systems world juggle these problems as well. Open systems
administrators, however, don’t have to worry about MLC pricing. The
pressure on mainframe administrators to compromise is even greater
than it is in the open systems world. Considering the criticality
of mainframe workloads and data, this is alarming.
THE COMPLEXITY PROBLEM
If the vendor and technology issues were
insufficient to cause angst, there is a looming skills gap. Many of
today’s mainframes are maintained by practitioners who are retiring
or near retirement. While there are some younger administrators
choosing mainframes as a career, they are not compensating for the
number of individuals looking to exit the workforce.
The aforementioned BMC survey notes that 49% of Techs see
Staffing/Skill Shortages as the key obstacle for Mainframes in
2019.
Many organizations have intricate or bespoke backup
implementations. The tape management solutions that manage the VTLs
are separate from the solutions that schedule and run the backups
themselves. Instead of a single, simple solution to this critical
aspect of mainframe infrastructure, a complex web of multiple
applications often exists. And the same goes for data transfer off the mainframe, with multiple components used to offload data, transform it, and then push it out to open systems.
This complexity is a problem. The mainframe skills shortage is
not only seeing the pool of experts shrink, but many administrators
also feel that time is running out to transfer knowledge to a new
generation of mainframe practitioners.
CONCLUSION
Data management has been a perennial—and expensive—bugbear for mainframe customers. When we asked Gil Peleg, CEO of Model 9 and an IBM mainframe veteran, how the mainframe backup cost could be quantified, here’s what he had to say: “If you
consider, as mentioned above, that 35% of mainframe cost is
workload related (MLC) and that 10% of that is data management
related, then you’re already looking at ~3.5% of total MF cost. Add
to that the cost of secondary storage, which can come to 25% of
mainframe hardware costs (based on our experience with customer
budgets), and Gartner saying that hardware accounts for 18% of
total mainframe cost—then we’re looking at another 4.5% of
total budget. So with a total cost item of 8% of budgets and the
potential to save ~30–70% of that cost with offloads and commodity
storage we are looking at a potential savings of 2–5.5% on
mainframe budgets. For a typical enterprise, that could translate
into hundreds of thousands to millions of dollars per year. By all
accounts that’s a significant chunk of money, with the additional
upside being that the enterprise also benefits from superior data
protection and access to the best in storage paradigms the cloud
can offer.”
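Peleg's arithmetic is easy to verify step by step; every percentage below is taken from the quote itself (the quote's "2–5.5%" rounds the computed endpoints):

```python
# 35% of total mainframe cost is workload/MLC-related, and 10% of
# that is data management (figures from the quote above).
mlc_dm = 0.35 * 0.10                 # ~3.5% of total mainframe cost

# Secondary storage: ~25% of hardware cost, hardware being ~18% of
# total mainframe cost (Gartner, per the quote).
storage_dm = 0.25 * 0.18             # ~4.5% of total mainframe cost

total_dm = mlc_dm + storage_dm       # ~8% of the mainframe budget

# Saving 30-70% of that via offloads and commodity storage:
savings_low = 0.30 * total_dm        # ~2.4% of budget
savings_high = 0.70 * total_dm       # ~5.6% of budget
```

So a 30–70% reduction on an 8% cost item lands in the 2–6% range of the total mainframe budget, consistent with the figures in the quote.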
Outside of the mainframe ecosystem, the world has moved on. Open
systems needn’t be feared, and cloud storage has demonstrated
reliability and flexibility. Mainstream data management solutions
have grown and evolved so much that they are no longer merely a check against various mistakes or disasters; they are themselves part of solutions that deliver tangible benefits. Cloud storage in
particular can offer a number of resilience options and deliver
consistent and significant cost reductions.
Fortunately, mainframe customers do have the choice to embrace
commodity and cloud storage. They can enjoy the best of both worlds
by combining the unmatched reliability of mainframes with the rapid
evolution of capabilities and significantly decreased costs of open
systems.
Read more about modern solutions that allow you to do just that
at www.model9.io.
z/OS Container Extensions (zCX) is a new technology from IBM. It
provides a way to run Linux capabilities on z/OS. zCX is a virtual
appliance that requires minimal involvement after setup. And the
benefit is that z/OS can run Linux code that has an affinity for
z/OS data or applications. Running a virtual Docker appliance as a
started task on z/OS makes all of the great features of z/OS
available to Linux applications. In particular, Linux applications
will benefit from z/OS performance of high-speed networking as well
as resiliency around data replication and automation. zCX also
expands and modernizes the software ecosystem for z/OS to include
Linux on Z applications.
Join a Mainframe Development Revolution in 2020 – Embrace Open!
What a difference a decade makes. Mainframe developers have
seen major changes to their workspaces in the last ten years, which
have only accelerated in the last several, and have opened up a
wide new horizon for coders, new and old, as they step into a new
decade.
Tools like the green screens of ISPF and the Eclipse desktop IDE
enhanced with proprietary plug-ins have served mainframe
application developers well over the years and, for those
comfortable with them, will continue to do so. However, there are
changes in the larger world of development that are creating the
conditions for a revolution in mainframe tooling.
Firstly, mainframe developers are aging out of the workforce at
a steady pace, leaving behind extensive code libraries and a
workforce skills gap. Those likely to backfill these roles know and
love modern IDEs, especially Visual Studio Code, while the
popularity of Eclipse is quickly waning. For example, the annual
Stack Overflow Developer Survey found that, over the past year, the
popularity of Visual Studio Code grew from 34.9% to 50.7%, once
again making it the most popular IDE, while Eclipse fell from 18.9%
to 14.4%.
Secondly, application development has made great strides in
productivity since the current mainframe tools were created,
especially in the area of automation through the adoption of DevOps
enablers like task runners, scripting and testing frameworks. Again
per the Stack Overflow survey, scripting language Python is now, by
far, the most sought-after skill among developers.
Finally, as the velocity of overall software delivery increases,
the mainframe has lagged, becoming a drag on digital
transformation. According to
451 Research, 24% of companies are releasing application
software daily or hourly while, similarly, DORA’s State of DevOps
report shows 20% of companies deploy multiple times per day.
Software delivery expectations have changed with continuous
deployment becoming the new normal and, to remain a vital computing
platform for the long term, mainframe application development needs
to support this high cadence.
Enter Che
Rather than an evolution of the existing Eclipse IDE, Eclipse Che is a radical departure. At a high level, Che is an open-source, Java-based developer workspace server and browser-based IDE that is container-native. For enterprises, Git provides a good analogy for the primary value a hosted IDE provides: its usability and collaborative power have made Git the de facto standard for version control. Che’s workspace server offers comparable benefits to development teams, especially hybrid IT ones.
Che is now relevant to developers working on the mainframe via
the Che4z subproject. It provides access to the z/OS file system,
remote debug and remote editing of COBOL programs with all the
expected modern editor capabilities including syntax highlighting,
code assist and real-time syntax validation. Che4z is currently in
beta but the first release is scheduled for Q4.
Beyond the foundational benefits mentioned above, Che is
revolutionary for the mainframe for the following reasons:
Che, Visual Studio Code and more
Che was architected from the outset to deliver a VS Code-like experience which, given the popularity of VS Code, turns out to be a sage decision. Che is based on Eclipse Theia, which provides an in-browser VS Code experience complete with the latest tooling protocols: language server, debug adapter and compatibility with VS Code extensions.
The Che4z subproject, contributed by Broadcom, provides language support for COBOL, with High Level Assembler support planned. The language server protocol (LSP) enables developers to use these languages with Che, VS Code and any other LSP clients (e.g., IntelliJ, VIM, Emacs).
Bottom line: developers can use VS Code for mainframe
development independent of Che adoption.
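The language server protocol mentioned above is, at the wire level, JSON-RPC 2.0 framed with a Content-Length header over a byte stream. A minimal sketch of how any LSP client (VS Code, Che/Theia, Vim) frames the opening `initialize` request — the transport setup is simplified and the parameter values are placeholders:

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    """Frame a JSON-RPC payload with the LSP Content-Length header."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# The first message every LSP session sends to the language server.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": None, "capabilities": {}},
}
message = frame_lsp_message(initialize)
print(message.decode("utf-8"))
```

Because the framing is this simple and editor-agnostic, one COBOL language server can serve Che, VS Code, and any other client that speaks the protocol.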
Cross-platform Applications
For many Fortune 500 companies, the mainframe hosts critical system-of-record applications and data, while cloud and mobile applications have evolved from user-oriented systems-of-experience. With tech-first startups disrupting entire industries, large companies are deploying more cross-platform applications not only to compete but to win by harnessing the best of both worlds. Cross-platform apps combine modern user experiences with the transactional power and data assets of mainframes.
Because cloud and mobile developers need to integrate mainframe
applications, allowing them to use the tools they know and love is
key. That means the popular code editors and IDEs they love today
as well as new innovations, like the
browser-based, container-deployed tools they will love
tomorrow.
Che also facilitates pair programming, perfect when mainframe
and distributed developers need to collaborate on cross-platform
applications.
Bottom line: Che helps to make the mainframe less of a silo and more like other, easy-to-use platforms.
Open Source-enabled Vendor Independence
While open source is not new to the mainframe – zLinux has been around for years – access to off-platform toolchains was opened with the introduction of the Zowe open source framework (described in more detail below). Unlike the incumbent Eclipse vendors, and consistent with the open source vision, the Che4z extensions are open source and free to use. The legacy vendor-controlled model has given way to open source tooling for development teams.
Furthermore, Eclipse locked mainframe developers into using a specific set of editors, runtimes, and user interfaces for required capability and functionality – thus imposing a learning curve. Che, by comparison, offers flexibility via its modular design, allowing individual developers to choose the components they need.
Figure 1: CA Brightside
Rich assets on your Mainframe? But FOMO on innovation?
Today’s digital leadership is about delivering innovations that drive the business forward. With open source technologies, like Zowe, the first open-source project based on z/OS, you can more easily open up your mainframe to connect all the powerful data and services required for delivering truly rich apps.
You can begin to automate development processes and establish CI/CD pipelines to deliver at the speed of business. Only Broadcom is fully committed to openness with full enterprise support for Zowe and many other open-source projects making a difference.
Want to empower your teams to not miss a beat? Get open with Broadcom.
Copyright © 2019 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
https://www.broadcom.com/brightside
Zowe, the first open source project based on z/OS, enables developers to manage, control, script, and develop on the mainframe as easily as on any cloud platform.
Command Line Interfaces (CLIs), for example, are popular with
developers wanting the productivity lift of scripting and higher
levels of automation (i.e., CLIs facilitate DevOps adoption). The
Zowe CLI is a key component of the Che4z stack.
Also, Code4z is a free Visual Studio Code plugin that offers
mainframe developers the ability to access datasets and elements on
the mainframe and develop mainframe code using today’s most popular
desktop IDE.
Bottom line: Using Zowe as the foundation for the Che4z
mainframe extensions ensures the modern experience extends far
beyond the IDE.
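The productivity lift the Zowe CLI offers comes from exactly this kind of scripting. As a sketch, here is one way to drive it from Python: the data set pattern is a placeholder, and while `zowe zos-files list data-set` and the `--response-format-json` option come from the public Zowe CLI command tree, treat the exact command shape as an assumption to verify against your installed version:

```python
import shutil
import subprocess

def list_datasets_cmd(pattern: str) -> list:
    """Build a Zowe CLI invocation that lists data sets matching a pattern.
    --response-format-json requests machine-readable output for scripting."""
    return ["zowe", "zos-files", "list", "data-set", pattern,
            "--response-format-json"]

cmd = list_datasets_cmd("MYUSER.PUBLIC.*")  # placeholder HLQ

# Only invoke the CLI if it is actually installed on this machine.
if shutil.which("zowe"):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
```

Wrapped like this, mainframe operations can be composed into the same task runners and CI/CD pipelines used for distributed code.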
Are Che and Zowe Ready For The Enterprise?
The continuing stigma of open source as ‘the Wild West’ of software development is vanishing into the sunset. Enterprises that are used to dealing with software vendors are, rightly, wary of stepping into open source implementations that may not provide the infrastructure to adequately support them once they go live in production.
For that reason, vendors, like Broadcom, have stepped in to
provide commercial, fully supported versions of these open
technologies that help enterprises adopt and integrate them with
their existing, proven proprietary solutions.
CA Brightside was a foundational element of Open Mainframe
Project’s Zowe initiative, the first open source project for the
mainframe. It was conceived to promote and support the
implementation of enterprise-ready mainframe open source
technologies and was awarded the 2018 Most Innovative DevOps
Solution by DevOps.com.
Today, it provides a streamlined installation process and technical and legal support for all Zowe core capabilities.
Whether developing for mainframe or other platforms, the Che
community welcomes contributions from those passionate about
software development. Want to incorporate a lesser-used language?
No need to plead with a vendor. Anyone can seek community support
for an idea or simply do it themselves. The Che/Che4z communities
will ensure the proper procedures are followed before the code is
ever released.
Bottom line: Che provides a more frictionless, fast-moving
market allowing coding innovations to extend to the mainframe
Onboarding and Cost Reduction
Consider the compatibility and maintenance nightmare created by the heavyweight plugins required for Eclipse. For mainframe leaders, the need to reduce costs is an ongoing challenge, as is the need to quickly onboard the next generation of developers. Because Che is a container-based IDE with no client footprint, local installation, maintenance, and configuration are eliminated. Moving entire teams currently using Eclipse with customizations to Che is an attractive opportunity to materially reduce operating costs while simultaneously improving developer productivity.
Because Che workspaces are containerized, they can be used to onboard new team members at lightning speed: open the container, begin work.
Bottom line: For mainframe leaders focused on costs and
facilitating reassignment of team members across projects, Che is a
compelling solution.
Built on Zowe
The extensions that open the mainframe to the Che IDE are powered by the Zowe open source framework, maintaining mainframe-native security standards. Zowe was founded by Broadcom, IBM and Rocket Software and is the first open source project based on z/OS.
CA Brightside also supports the Che4z stack and Code4z plugin, testing and certifying these solutions for their quality and security, as well as supporting integrations with solutions like CA Endevor SCM, CA File Master Plus and CA OPS/MVS.
Bottom line: combining open source toolkits with commercially
supported offerings like CA Brightside gives mainframe development
teams the confidence to embrace open source in the enterprise
Conclusion
For companies wanting to empower developers integrating mainframe assets with the most powerful application development tools on the planet, Che provides the best of both worlds: the immediacy of VS Code use with the path to full-on, in-the-cloud team collaboration.
Expanded tool choice, greater alignment with off-platform peers and DevOps-fueled software delivery acceleration are all elements of a mainframe appdev revolution in the making, powered by Che.
Broadcom Inc. (NASDAQ: AVGO), a Delaware corporation
headquartered in San Jose, CA, is a global technology leader that
designs, develops and supplies a broad range of semiconductor and
infrastructure software solutions. Broadcom’s category-leading
product portfolio serves critical markets including data center,
networking, software, broadband, wireless, storage and industrial.
Our solutions include data center networking and storage,
enterprise and mainframe software focused on automation, monitoring
and security, smartphone components, telecoms and factory
automation.
Visit us today: broadcom.com/products/mainframe.
You’ve just discovered there’s been a breach! IBM’s “Cost of a Data Breach Report”, in 2019, reported that the average time to detect a breach is an unacceptable 206 days, with a further 73 days taken to control the breach. What are you going to do? Are you going to look through the SMF records? What’s needed is File Integrity Management (FIM) software, which provides a way of detecting changes by comparing the current contents of components to their trusted state.
FIM software can tell you who made the changes, by accessing data in SMF and searching it to see which userid changed the files during the attack interval. It can tell you what has changed: FIM tools can identify every file that was modified, added, or deleted, show you where the problem started (a suspicious update) and every component that was accessed, and identify altered log files that would cloak a hacker’s tracks. However, if the content matches the trusted state, then you very quickly know it was just a false alarm. FIM software can also tell you which line was changed and when it changed.
Because FIM software records every successful scan, it knows the last time each component was correct. It can therefore give you the attack interval (from the last good date to incident time), so you can focus your research on the exact actions during that interval. Knowing the last good date will also be important in deciding when the recovery process should be started.
Finally, FIM software can tell you why a component changed. By querying change management products like Remedy and ServiceNow, advanced FIM products can determine whether the change was authorized or not – avoiding many false alarms and ensuring only validated alerts become incidents.
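The core comparison FIM performs — current contents versus a trusted baseline — can be sketched in a few lines of Python with standard-library hashing. This is an illustration of the idea only, not a substitute for a FIM product; the temporary files and their names are stand-ins for real system components:

```python
import hashlib
import os
import tempfile
from pathlib import Path

def snapshot(paths):
    """Record a trusted state: a SHA-256 digest per file."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def compare(baseline, paths):
    """Classify each file against the trusted state."""
    current = snapshot([p for p in paths if Path(p).exists()])
    return {
        "added":    [p for p in current if p not in baseline],
        "deleted":  [p for p in baseline if p not in current],
        "modified": [p for p in current
                     if p in baseline and current[p] != baseline[p]],
    }

# Toy demonstration with temporary files standing in for system components.
d = tempfile.mkdtemp()
f1, f2 = os.path.join(d, "parmlib"), os.path.join(d, "proclib")
Path(f1).write_text("trusted contents")
Path(f2).write_text("trusted contents")
baseline = snapshot([f1, f2])          # the last known-good scan
Path(f2).write_text("tampered contents")  # simulate an unauthorized change
report = compare(baseline, [f1, f2])
print(report)
```

A real FIM product layers the who/when/why on top of this comparison by correlating with SMF records and change-management tickets, as described above.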
Indicators of Compromise and Why It Takes Six-Plus Months to ID a Breach
Poorly Identifying Indicators of Cyberattack Is Why It Takes Us Six-Plus Months to Identify a Breach, and Even Longer to Remediate
Across the U.S., both public and private entities are under
assault. Cyberattacks are invisible, but the effects have
increasingly been thrust into the limelight, and ransomware is one
of the most prevalent threats. In August 2019,1 The New York Times
reported that upwards of 40 municipalities had fallen victim to the
data “kidnapping” malware variant. These municipalities range from
towns of less than 5,000 to major city centers including Baltimore
and Atlanta. For cybercriminals, they all represent the potential
for a massive payday at the expense of taxpayers.
Despite warnings from agencies like the FBI, some local
governments are choosing to pay ransoms to hackers in the hopes of
getting their files returned and systems fully restored. Not
surprisingly, data ransom prices have increased as criminals
continue to have their demands met. In March 2019, Jackson County, GA paid out a $400,000 demand. Riviera Beach, a city of 35,000,
paid a ransom of $600,000 just three months later. When Lake City
was hit, the insurance provider paid hackers $460,000 – a hefty
price for a city of just 12,000.
Although it’s easy to see how paying ransoms perpetuates the
cycle of cyberattacks, public entities often have little choice
when vital systems like emergency response services are down. Steve
Parks, a member of the National Cybersecurity & Communications
Integration Center’s (NCCIC) Threat Analysis Branch (TAB), explains
that “It’s a really tough decision for an organization to have to
make, paying the ransom or paying the cost to fix everything once
they’ve been compromised. The cost of the latter increases
significantly if
the organization hasn’t followed industry best practices,”2
which means the companies most likely to be breached are also most
likely to need to pay up. When systems in Atlanta were breached in
2018, city officials took a stand and staunchly refused to pay a
$51,000 ransom. Recovery costs, which – at this writing – remain
ongoing, have surpassed $7 million.3 Information Security
expenditures are at an all-time high, but hackers continue to find
ways to circumvent security controls, predominantly through insider
threats and employee negligence.
Hackers will always find novel ways to bypass your security
protocols and fighting cybercrime has become less about prevention
and more about stemming the bleeding. Ideally, if you can spot
these indicators of attack, you can transform your organization’s
approach to data protection from reactive to proactive.
Innovation (and Negligence!) has Created Big Business for Cybercriminals Worldwide
According to Tim Slaybaugh, Malware Analyst at the NCCIC, “For many ransomware authors, developing malware is a
business. They’re continuously upgrading and repackaging their
tools to make them harder to detect and more effective at
compromising systems in an effort to provide a superior product for
their clients.”4 In return for offering their Malware as a Service,
these black hat developers rake in a hefty cut of each successfully
conducted attack. Unfortunately, the bad guys just need to be right
once, while the good guys must try and maintain secure defenses
24/7/365 in a constantly shifting landscape.
It should come as no surprise that the battle is taking its
toll. According to data from CyberSeek, the U.S. alone has more
than 300,000 vacant cybersecurity positions – nearly half the total
employed cybersecurity workforce of 700,000.5 Not surprisingly,
these unfilled positions put an immense amount of pressure on the
individuals who are forced to pick up the slack, and research
from the Information Systems Security Association (ISSA) reports
that 40% of cybersecurity executives blame the skills gap for high
turnover rates and employee burnout.6 Data from a Spiceworks survey
shows that IT professionals are clocking a 52-hour work week on
average, with almost 20% exceeding 60-hour weeks.7 It’s no wonder
filling this human resource gap is at the top of CxO priorities in
2019, second only to security.8
To make matters worse, a gap in communications between
mainframers and the distributed personnel who hold watch over
Security Information and Event Management or SIEM systems leads to
a consequential gap in security that is more than big enough for a
sophisticated hacker to exploit. Even though mainframes power the
enterprise computing needs of regulated industries like banking,
finance, healthcare and government, InfoSec resources continue to
miss the critical vital signs – indicators of compromise – that
spell trouble for highly sensitive data and intellectual property
residing on their mainframes.
Plenty of mistaken assumptions plague mainframe security. Some personnel think that data security standards don’t apply to mainframes, or that mainframes can’t be hacked. Others might not realize that nightly batch processing of log data is the closest their mainframe gets to real-time delivery of user events to the distributed SIEM system managing their organization’s cybersecurity. Unfortunately, in today’s threatscape, security measures lacking real-time indicators create a security blind spot that, on average, isn’t discovered for more than six months.9
Indicators of Compromise vs. Indicators of Attack – What to Watch For
Around the globe, organizations rely on user log data as the
foundation of their security efforts. This data, which can quickly
grow to terabytes in seconds in a large enterprise, is indeed an
important component of security, but it is not in itself a defense
mechanism. Instead, log data
empowers an organization to correlate user logs with systems
logs (time, user location, system location, access type, etc.) to
determine potential anomalies in user activity synonymous with
cyber threat. All organizations collect log data, but without
combining logs with these other data points – the correlation –
they are missing the security events that could be indicators of
compromise (IOC).
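The correlation described above — joining user events with system context (time, location, access type) to surface anomalies — can be illustrated with a toy rule: flag any logon that occurs outside a user’s usual hours or from a location not previously seen for that user. The events, field layout, and thresholds below are invented for illustration:

```python
from datetime import datetime

# Toy user events: (user, timestamp, source location)
events = [
    ("alice", datetime(2020, 1, 6, 9, 12),  "NYC"),
    ("alice", datetime(2020, 1, 6, 10, 3),  "NYC"),
    ("alice", datetime(2020, 1, 7, 3, 44),  "Kyiv"),    # odd hour, new location
    ("bob",   datetime(2020, 1, 6, 14, 20), "Chicago"),
]

def correlate(events, work_hours=(8, 18)):
    """Flag logons outside working hours or from an unseen location."""
    seen = {}       # user -> locations already observed
    flagged = []
    for user, ts, loc in events:
        locations = seen.setdefault(user, set())
        out_of_hours = not (work_hours[0] <= ts.hour < work_hours[1])
        new_location = bool(locations) and loc not in locations
        if out_of_hours or new_location:
            flagged.append((user, ts, loc))
        locations.add(loc)
    return flagged

alerts = correlate(events)
print(alerts)
```

A production SIEM applies far richer rules over far more data, but the principle is the same: the raw log alone proves nothing — the anomaly emerges only when events are correlated against context.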
IOCs are an important ingredient for your organization’s cyber
defense, but their presence often means the breach has already
occurred. They’re largely forensic, used after the fact to compile
evidence of a breach and uncover information about the criminal
actors involved and the methods they used. Knowing how a system was
breached is a great way to ensure that the same method can’t be
utilized again as soon as your systems are back up and running, but
it’s still a reactive approach to cybersecurity and one that’s
costing organizations a fortune.
Instead of relying on traditional IOCs (post-breach indicators),
your organization must move from a reactive approach to a proactive
one that looks for a different kind of evidence – Indicators of
Attack (IOA). At BMC, we work with leading penetration testers and
build our correlation threads around their tactics, techniques and
procedures. If you are going to proactively mitigate the risk of
cyberattack, you’ll need to identify and react to these IOAs in
real time. Without a cutting-edge security solution, it’s highly
possible that an IOA in your organization would have no connection
to any malware definition in your security operations center
(SOC).
How then, can you leverage both IOC events and IOA events to
predict and deter cyber risk? Three key components will help
tremendously:
1. Event correlation across both mainframe and distributed
systems
2. Visibility of enterprise-wide event data in your
organization’s SOC, and
3. Real-time alerts to appropriate personnel (or automation
queue) of the impending cyber risk so