A Complete History Of Mainframe Computing

    Harvard Mark I

    Our trip down mainframe lane starts and ends, not so surprisingly, with IBM. Back in the 1930s, when a computer was actually a fellow with a slide rule who did computations for you, IBM was mainly known for its punched-card machines. However, the transformation of IBM from one of many sellers of business machines into the company that later became a computer monopoly was due in large part to forward-looking leadership, which at that time took the form of Thomas Watson, Sr. The Harvard machine was a manifestation of his vision, although in practical terms it was not a technological starting point for what followed. Still, it is worth looking at, just so we can see how far things have come.

    It all began in 1936, when Howard Aiken, a Harvard researcher, was trying to work through a problem relating to the design of vacuum tubes (a little ironic, as you will see). In order to make progress, he needed to solve a set of non-linear equations, and there was nothing available that could do it for him. Aiken proposed that Harvard build a large-scale calculator that could solve these problems. His request was not well-received.

    Aiken then approached the Monroe Calculating Company, which declined the proposal. So Aiken took it to IBM. Aiken's proposal was essentially a requirements document, not a true design, and it was up to IBM to figure out how to fulfill those requirements. The initial cost was estimated at $15,000, but that quickly ballooned to $100,000 by the time the proposal was formally accepted in 1939. It eventually cost IBM roughly $200,000 to make.

    It was not until 1943 that the five-ton, 51-ft. long mechanical beast ran its first calculation. Because the computer needed mechanical synchronization between its different calculating units, a shaft driven by a five-horsepower motor ran its entire length. The computer "program" was created by inserting wire links into a plug board. Data was read from punched cards, and results were printed on punched cards or by electric typewriters. Even by the standards of the day, it was slow. It was capable of only three additions or subtractions per second, and it took a rather ponderous six seconds to do a single multiplication. Logarithms and trigonometric calculations took over a minute each.

    As mentioned, the Harvard Mark I was a technological dead-end and did not do much important work during the 15 years it was used. Still, it represented the first fully automated computing machine ever made. While it was very slow, mechanical, and lacked necessities like conditional branches, it was a computer, and it represented a tiny glimpse of what was yet to come.

    ABC (Atanasoff-Berry Computer)

    Although only recognized as such many years later, the ABC (Atanasoff-Berry Computer) was really

    the first electronic computer. You might think "electronic computer" is redundant, but as we just saw

    with the Harvard Mark I, there really were computers that had no electronic components, and

    instead used mechanical switches, variable toothed gears, relays, and hand cranks. The ABC, by

    contrast, did all of its computing using electronics, and thus represents a very important milestone

    for computing.

    Although it was electronic, the computer's parts were very different from what is used today. The building blocks we now take for granted, transistors and integrated circuits, did not exist in 1939 when John Atanasoff received funding to build a prototype, so he used what was available at the time: vacuum tubes. Vacuum tubes could amplify signals and act as switches, and they could thus be used to create logic circuits. However, they used a lot of power, got very hot, and were very unreliable. These were tradeoffs he and others had to live with, and they were unfortunate characteristics of the computers built from them.

    The logic circuits he created with the vacuum tubes were fast, and could do addition and subtraction at 30 operations per second. The ABC also used a binary number system, which would be unremarkable today but was rare then, since binary was not a number system with which many people were familiar. Another important technology was the use of capacitors for memory, "jogged" with electricity to keep their contents (similar to the DRAM refresh used today). Memory access was not truly random, though, as the capacitors were contained in a spinning drum that made one full rotation per second. A specific memory location could only be read when the section of the drum holding it passed over the reader, which obviously created serious latency issues. Later, Atanasoff added a punched-card machine (punched cards were used very extensively by businesses at that time to store records and perform computations on them) to hold data that could not fit in the drum memory.
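    To make the latency concrete, here is a quick back-of-the-envelope model (a Python sketch, using only the one-revolution-per-second figure from the text): a word can be read only when its sector passes the head, so a random access waits half a revolution on average.

        # Rotational latency of the ABC's drum memory (illustrative model).
        REV_TIME = 1.0  # seconds per full revolution, per the text

        def wait_time(head_pos, word_pos):
            # Positions are fractions of a revolution (0.0 to 1.0).
            # Return how long until word_pos rotates under the head.
            return ((word_pos - head_pos) % 1.0) * REV_TIME

        print(wait_time(0.0, 0.5))    # half a turn away: 0.5 s
        print(wait_time(0.5, 0.499))  # just missed it: ~0.999 s
        # Averaged over random positions, the wait is ~0.5 s per access.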

    In retrospect, this computer wasn't terribly useful. It wasn't even programmable. But it was, at least

    on a conceptual level, a very important milestone for computers, and a progenitor to computers of

    the future. While working on this machine, Mr. Atanasoff invited a man named John W. Mauchly to

    view his creation. Let's find out why that was significant.

    ENIAC

    On December 7, 1941, Japan attacked Pearl Harbor, drawing the United States into the conflagration known as World War II. One problem every country at war had was creating artillery ballistic tables for each type of artillery it produced. This was a huge undertaking, being both a very slow and tedious process. So, the U.S. Army granted funds to the Moore School of Electrical Engineering at the University of Pennsylvania to build an electronic computer to facilitate this work. You might have guessed from the last page that our friend John Mauchly just happened to be there, and he then took on this project with a gifted graduate student named J. Presper Eckert.

    However, World War II ended before the machine was completed. When finished in 1946, this 30-ton monstrosity consisted of forty 9-ft. high cabinets, 18,000 vacuum tubes, 1,500 relays, 70,000 resistors, 10,000 capacitors, and 6,000 manual switches, and it consumed 200 kilowatts. Although finished after the war, it hardly proved useless. Capable of 5,000 additions, 357 multiplications, or 38 divisions per second, this machine's performance was incredible. Problems that took a human mathematician 20 hours to solve took the ENIAC only 30 seconds.

    The main problem with the machine, aside from the unreliability inherent in all vacuum tube

    machines, was that it was not programmable in the conventional sense. "Programs" were entered by

    the "ENIAC girls" working on plug boards and banks of switches. This generally took from a few hours

    to a few days. Also, in a backward step from the ABC computer, the ENIAC worked with decimal and

    not binary numbers.


    Nevertheless, the ENIAC was an extremely useful machine for the U.S., particularly with the enhancements that were later added on, until it was retired in 1955. During its lifetime, it worked on problems including weather forecasting, random-number studies, thermal ignition, wind-tunnel design, artillery trajectory calculations, and even the development of the hydrogen bomb. In fact, by the time it was retired in 1955, it was estimated that the ENIAC by itself had done more calculations than all of humankind had done up to 1945.

    While the story of the ENIAC trails off in 1955, our two heroes, Mauchly and Eckert, still have much

    to accomplish before their stories end.

    EDVAC

    Even before the ENIAC ran its first test, Mauchly and Eckert were very aware of its shortcomings. So was John von Neumann, whom many of you know from the expression "Von Neumann Architecture" (although he received too much personal credit for what was a group effort). At any rate, the EDVAC was the first expression of this architecture, although Mauchly and Eckert left the University of Pennsylvania, where it was being built, in 1946, before the computer was finished.

    At that time, there were several major issues with the ENIAC. Sure, it was fast. But it had very little

    storage. More than that, it had to be reprogrammed by re-wiring it, which could take hours or even

    days, and it was inherently unreliable because the computer used so many vacuum tubes. In addition

    to being unreliable, vacuum tubes also used a lot of power, required a lot of space, and generated a

    lot of heat. Clearly, minimizing their use would have multiple advantages.

    There were two important conceptual changes (one of which was revolutionary) on the EDVAC that seem very obvious today. For one, it was binary rather than decimal like the ENIAC, which was much more efficient. Also, rather than rewiring the machine every time you wanted to change the "program," the EDVAC introduced the idea of storing the program in memory, just as if it were data. This is what we do today. We do not, after all, have separate RAM areas for applications and for their data (although L1 caches typically operate this way). The processor knows, based on the context in which the memory was accessed, whether it is data or an executable.
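    To make the stored-program idea concrete, here is a minimal sketch (Python, with invented opcodes; the real EDVAC encoding was quite different) of a machine whose single memory holds instructions and data alike, with only the fetch context distinguishing the two.

        # A minimal stored-program machine: one memory, code and data mixed.
        LOAD, ADD, STORE, HALT = 0, 1, 2, 3   # invented opcodes

        def run(memory):
            pc, acc = 0, 0                    # program counter, accumulator
            while True:
                op, addr = memory[pc]         # fetched as an instruction...
                pc += 1
                if op == LOAD:
                    acc = memory[addr]        # ...while this cell is data
                elif op == ADD:
                    acc += memory[addr]
                elif op == STORE:
                    memory[addr] = acc
                elif op == HALT:
                    return acc

        # Cells 0-3 hold the program; cells 4-6 hold its data.
        mem = [(LOAD, 4), (ADD, 5), (STORE, 6), (HALT, 0), 2, 3, 0]
        print(run(mem))                       # prints 5 (2 + 3)

    Because the program is just memory contents, loading a new program becomes a data operation rather than a rewiring job, which is exactly the advance the EDVAC introduced.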

    In addition, memory no longer consisted of vacuum tubes; data was instead stored as acoustic pulses in columns of mercury. The mercury delay line was 100 times more efficient in terms of the electronics necessary to store data, and it made much larger amounts of memory feasible and more reliable.

    The EDVAC was a major advance, and proved very useful until it was retired in 1960. It was a binary

    stored-program computer, which could be programmed much more quickly than the ENIAC could. It

    was also much smaller, weighing less than nine tons, and consumed "only" 56 kilowatts of power.

    Even still, our two heroes were not done yet.

    UNIVAC

    As mentioned, Eckert and Mauchly left the University of Pennsylvania in 1946 to form the Electronic Control Co. They incorporated their company in 1947, calling it the Eckert-Mauchly Computer Corp., or EMCC. Their departure delayed the completion of the EDVAC to the extent that the EDSAC, based on the EDVAC design, was actually completed before it. The dynamic duo, however, wanted to explore the commercial opportunities that this new technology offered, which was not possible with university-sponsored research, so they developed a computer that built on their ideas for the EDVAC and even superseded them. Along the way, they created the BINAC for financial reasons, but the Universal Automatic Computer (UNIVAC) is really the more interesting machine.

    The UNIVAC was the first-ever commercial computer; 46 units were sold to businesses and government after its 1951 introduction. Every machine before it had been unique, with only one ever built, whereas multiple UNIVACs were produced from the same design. Eckert and Mauchly correctly concluded that a computer could be used not only for computations, but also for data processing, while many of their contemporaries found the idea of using the same machine for solving differential equations and paying bills to be absurd. At any rate, this observation was critical in the design and success of the UNIVAC.

    On a lower level, the UNIVAC consisted of 5,200 vacuum tubes (almost all in the processor), weighed 29,000 pounds, consumed 125 kilowatts, and ran at a whopping 2.25 MHz clock speed. It was capable of doing 455 multiplications per second and could hold 1,000 words in its mercury delay-line memory. Each word of memory could contain two instructions, an 11-digit number and sign, or 12 alphabetical characters. Its processing speed was roughly equivalent to the ENIAC's on the tasks both machines could perform. But in virtually every other way, it was better.

    Perhaps most importantly, the UNIVAC was much more reliable than the ENIAC (mainly due to its use of far fewer vacuum tubes). On top of this, the "Automatic" in its name alluded to how it required no human effort to run. All the data was stored on and read from a metal tape drive (as opposed to having to manually load the programs from paper tapes or punched cards each time they were to be run). Using tapes made actual processing much faster than on the ENIAC, since the I/O bottleneck was mitigated. And of course, the setup time spent re-wiring the ENIAC for the next "program" was eliminated. There were other niceties that made their appearance on the UNIVAC as well, like buffers (similar to a cache) between the relatively fast delay lines and relatively slow tape drives, extra bits for data checking, and the aforementioned ability to operate on both numbers and alphabetical characters.

    The UNIVAC gained additional fame by correctly predicting the landslide presidential victory of Dwight Eisenhower in 1952 on national TV. This, and the fact that it was the first commercially available computer, gave Remington Rand (which had bought EMCC) a very strong position in the burgeoning electronic computer industry. It had thrown down the gauntlet with the UNIVAC. But what was IBM doing at this time?

    IBM 701

    While most of our esteemed readers have a good idea of IBM's dominance in the world of computing from the mid to late 20th century, what may be less known is where it started, how and why it happened, and how it progressed. Let's start with one of the two computers IBM developed at the same time as the UNIVAC.


    We'll begin with the IBM 701, which was a direct competitor to the esteemed UNIVAC. Announced in 1952, the 701 had many similarities to the UNIVAC, but many differences as well. Memory was not stored in mercury delay lines, but in 3" vacuum tubes referred to as "Williams tubes," in deference to their inventor. Although they were more reliable than normal vacuum tubes, they still proved to be the greatest source of unreliability in the computer. However, one benefit was that all bits of a word could be retrieved at once, as opposed to the UNIVAC's mercury delay lines, where memory was read bit by bit. The CPU was also considerably faster than the UNIVAC's, performing almost 2,200 multiplications per second compared to the UNIVAC's 455. It could also execute almost 17,000 additions and subtractions, as well as most other instructions, per second, which was remarkable for the time. IBM's eight-million-byte tape drive was also very good: it could stop and start much faster than the UNIVAC's, and it was capable of reading or writing 12,500 digits per second. However, unlike the UNIVAC with its elegant buffers, the processor had to handle all I/O operations, which could severely impact performance on heavily I/O-bound applications.

    In 1956, IBM introduced a technology known as RAMAC, which was the first magnetic disk system for

    computers. It allowed data to be quickly read from anywhere on the disk and could be attached not

    just to the 701, but to IBM's other computers, including the 650, which we will look at next. As most

    of you no doubt realize, this technology is the progenitor to the hard disks that are very much with us

    today.

    IBM produced 19 units of the 701, fewer than the number of UNIVACs made, but still enough to prevent Remington Rand from dominating the field. The cost was a serious inhibitor to more widespread use, setting the user back over $16,000 a month. Also, as mentioned, the 701 was only part of IBM's response. The 650 was the other.

    650 Magnetic Drum Data Processing Machine

    While IBM's more direct response to the UNIVAC was the 701 (and later the 702), it also was working

    on a lower-end machine known as the 650 Magnetic Drum Data Processing Machine (so named

    because it employed a rotating drum that spun at 12,500 revolutions per minute and could store

    2,500 10-digit numbers). It was positioned somewhere between the big mainframes like the 701 and


    UNIVAC and the punched-card machines used at the time, the latter of which were still dominating

    the market.

    While the 701 generated most of the excitement, the 650 earned most of the money and did much more to establish IBM as a player in the electronic computer industry. Costing $3,250 per month (IBM didn't sell computers at that time, but only leased them), it was much less expensive than the 701 and UNIVAC, but still considerably more expensive than the punched-card machines so prevalent at that time. In total, over 2,000 of these machines were built and leased. While this greatly exceeded the 701's and UNIVAC's deployment, it was paltry compared to the number of punched-card accounting machines that IBM sold during the same period. Although very reliable by computer standards, the 650 still used vacuum tubes and thus was inherently less reliable than IBM's electromechanical accounting machines. On top of this, it was considerably more expensive. Finally, the peripherals for the machine were mediocre at best. So, right up to the end of the 1950s, IBM's dominant machine was the punched-card Accounting Machine 407.

    To usurp the IBM Accounting Machine 407, a whirlwind of changes was needed. The computer would need better peripherals and had to become more reliable and faster, while costing less. Our next machine is not the computer that finally banished the 407 into obsolescence--at least not directly--but many of the technologies that finally did so were developed for it.

    Whirlwind project

    The Whirlwind project was ironic. It went way over budget, took much longer than intended, and was

    never used in its intended role, but was arguably one of the most important technological

    achievements in the computer field.

    In 1943, when the U.S. Navy gave MIT's Jay Forrester the Whirlwind project, he was told to create a simulator to train aircraft pilots rather than have them learn by actually being in a plane. This intended use was very significant in that it required what we now call a "real-time system," as the simulator had to react quickly enough to simulate reality. While other engineers were developing machines that could process 1,000 to 10,000 instructions per second, Forrester had to create a machine capable of a minimum of 100,000 instructions per second. On top of this, because it was a real-time system, reliability had to be significantly higher than that of other systems of the time.

    The project dragged on for many years, long after World War II had ended. By that time, the idea of using it for a flight simulator had disappeared, and for a while, no one was quite sure what this machine was being developed to do. That is, until the Soviets detonated their first nuclear bomb and the U.S. government decided to upgrade its antiquated and ineffective existing air defense system. One part of this was to develop computer-based command-and-control centers. The Whirlwind had a new life, and with so much at stake, funding would never be a problem.

    Memory, however, was a problem. The mercury delay line that others were using was far too slow, so Forrester decided to try a promising technology: electrostatic storage tubes. One problem he faced was that they did not yet exist, so a lot of development work had to be put in before he would have a working product. But once they were completed, the electrostatic storage tubes were deemed unreliable, and their storage capacity was very disappointing. Consequently, Forrester, who was always looking for better technology, started work on what would later be called "core memory." He passed his work on to a graduate student also working on the project, named Bill Papian, who had a prototype ready by 1951 and a working product that replaced the electrostatic memory in 1953. It was very fast, very reliable, and did not even require electrical refreshes to hold its values. We'll talk more about core memory later, but suffice it to say, it was an extremely important breakthrough that quickly became the standard for well over a decade.

    Core memory was the final piece of the puzzle. The computer was effectively complete in 1953 and first deployed at Cape Cod. Although it failed to reach the intended performance level, it was still capable of 75,000 instructions per second, which far exceeded anything available back then. MIT transferred the technology to IBM, where the production version was re-christened the AN/FSQ-7 and entered production in 1956. These monsters had over 50,000 vacuum tubes each and weighed over 250 tons, which made them the largest computers ever built. Each also consumed over a megawatt of power, not including the necessary air conditioning.

    SAGE (Semi-Automatic Ground Environment), the bomber-tracking application for which the Whirlwind was now intended, became fully operational by 1963. Ironically, this was past the time when the Whirlwind was truly useful: it was designed to track bombers, and ICBMs had made their appearance a few years earlier. Nonetheless, while the actual uses for the Whirlwind were dubious, the technologies either created or accelerated by it were extremely important. These include not only the aforementioned core memory, but also printed circuits, mass-storage devices, computer graphics systems (for plotting the aircraft), CRTs, and even the light pen. Connecting these computers together gave the United States a big advantage in networking expertise and digital communications technologies. It even had a feature we lack in modern computers: a built-in cigarette lighter and ashtray. Clearly, it was worth the $8 billion that it cost to fully install SAGE, even though SAGE never helped intercept a single bomber.


    IBM 704

    Announced in 1954, the IBM 704 was the first large-scale commercially-available computer system to

    employ fully automatic floating-point arithmetic commands and the first to use the magnetic core

    memory developed for the Whirlwind.

    Core memory consisted of tiny doughnut-shaped ferrite rings, roughly the size of a pinhead, with wires running through them; each could be magnetized in either direction, giving a logical value of zero or one. Core memory had a lot of important advantages, not the least of which was that it did not need power to maintain its contents (an advantage it holds over modern memory). It also allowed truly random access, where any memory location could be accessed as quickly as any other (except when interleaving was used, of course). This was not the case with prior forms of memory. It was also considerably faster than the other memory technologies in use, having an access time of 12 microseconds. Perhaps most important, however, was the much greater reliability that the IBM 704 offered.

    For longer-term storage, the 704 used a magnetic drum storage unit. For additional storage, tapes

    capable of holding five million characters each were used.

    The 704 was quite fast, being able to perform 4,000 integer multiplications or divides per second.

    However, as mentioned, it was also capable of doing floating point arithmetic natively and could

    perform almost 12,000 floating-point additions or subtractions per second. More than this, the 704

    added index registers, which not only dramatically sped up branches, but also reduced program

    development time (since this was handled in hardware now).

    The 704 pioneered two major technologies we still have today: index registers and native floating-point arithmetic. Magnetic core memory was also extremely useful, offering far greater speed and reliability, but it proved a transient technology.

    IBM 1401

    The holistic approach IBM took also included software. For the first time, IBM included, free of charge, software packages for most of the needs of its customers rather than make its customers develop their own. This was critically important, since it saved considerable time and money on in-house development and allowed businesses that did not have programmers to finally derive the benefits of computers.

    And strangely, one of the biggest advantages of the 1401 was its printer. The 1403 "chain" printer

    had a rated speed of 600 lines per minute, which was four times the speed of the 407 accounting

    machine. It was also very reliable. In fact, for many, the 1403 was a salient characteristic of the

    system and often sold the computer that went with it.

    All of this contributed to a machine that transformed the computer industry. It was extremely successful not only thanks to its excellent technical characteristics, but also due to its low starting price of only $2,500 per month. In fact, after the release of the 1401, the computer industry became known as "IBM and the seven dwarfs." The 1401 was that good.

    IBM 7090

    Announced in late 1958, the 7090 was IBM's replacement for the aging 709 (the last of the 700 line we saw a few pages ago). In fact, in many ways, the 7090 was essentially a 709 made with 50,000 transistors rather than vacuum tubes. However, this change brought many benefits, including both speed and reliability.

    The 7090 and its later upgraded form, the 7094, were classic, powerful, and very large mainframe

    computers--and they were very expensive. The 7090 cost around $63,500 a month to rent in a typical

    configuration, and that did not include electricity.

    Despite its cost, the speed of this machine could still make it very appealing. It was roughly five to six

    times faster than the 709 it had replaced, and was capable of 229,000 additions or subtractions,

    39,500 multiplications, or 32,700 divisions in one second. The 7094, announced in 1962, was capable


    of 250,000 additions or subtractions, 100,000 multiplications, and 62,500 divisions per second. It

    could use 32,768 36-bit words of core storage.

    However, outside of implementing the newest technologies (core memory, RAMAC, transistors, etc.) and the consequent improvements in speed, power use, and reliability, it was not functionally very different from its predecessor. Jobs were collected on reels of tape, run in batches, and the results were given back to the programmer when done.

    While the performance, capacity, and reliability of these machines were impressive (mainly due to

    the move to transistors and other new technologies), it would be a stretch to call this a

    groundbreaking machine that pushed the boundaries of computing.

    IBM 7030 Stretch

    IBM's 7030, or Stretch, is something of a paradox. It introduced new technologies, many of which are

    still in use today, and was the fastest computer in the world for three years after it was introduced.

    However, it was considered a failure to such an extent that IBM reduced its price before

    discontinuing it very quickly with a loss of around $20 million. How could this be?

    In 1956, Los Alamos Scientific Laboratory awarded IBM a contract to build a supercomputer. The goal of this computer was to offer a hundred-fold improvement over the IBM 704's performance. This was a very ambitious goal indeed. In fact, the 7030 outperformed the 704 by a factor of only up to 38 when it was released in 1961. Due to this "disappointing" performance, IBM was forced to lower the price of the machine from $13.7 million to a paltry $7.78 million, which meant IBM lost money on every machine. This being the case, after meeting its contractual obligations, IBM withdrew the 7030 from the market, which made it a major disappointment and failure. Or was it?

    Not only was the performance of this machine far ahead of its time (0.5 MIPS), but the technologies it introduced read like a who's who of modern computing. Does instruction prefetching sound familiar? Operand prefetching? How about parallel arithmetic processing? There was also a 7619 unit that channeled data from the core memory to external units, like magnetic tapes, console printers, card punches, and card readers. This is an expensive version of the DMA functionality we use today, although mainframe channels were actual processors themselves and far more capable than DMA. The 7030 also added interrupts, memory protection, memory interleaving, memory write buffers, result forwarding, and even speculative execution. The computer even offered a limited form of out-of-order execution called instruction pre-execution. You probably already surmised that the processor was pipelined.

    The applications are almost equally impressive. The 7030 was used for nuclear bomb development, meteorology, national security, and the development of the Apollo missions. This became feasible only with the Stretch, due to its enormous amount of memory (256,000 64-bit words) and incredible processing speed. In fact, it could perform over 650,000 floating-point additions per second and over 350,000 multiplications. Up to six instructions could be in flight within the indexing unit and up to five instructions within the look-ahead and parallel arithmetic unit; thus, up to 11 instructions could be in some stage of execution within the Stretch at any one time. Even compared to the excellent 7090 released at that time, the 7030 was anywhere from 0.8 to 10 times the speed, depending upon the instruction stream.

    So, while the 7030 had a short, but very useful life, its technology is still with us today, and had a very

    important impact on the legendary System/360 family. This could easily be the most important

    computer in the history of mainframes. Yet, it was a failure. Who says life makes sense?

    B 5000

    By now, at least a few of you would probably like to remind me that IBM was not the only company

    to make a computer since the UNIVAC. Your point is well taken, so let us take a look at a machine

    from Burroughs, the B 5000. This is a really interesting machine, especially considering that it was

    announced in 1961. In fact, to this day, UNISYS still supports the software.


    The B 5000 was developed for high-level languages, namely COBOL and ALGOL. By this I mean the

    machine language was created mainly for easy translation from higher-level languages. It contained a

    hardware stack, segmentation, and many descriptors for data access.

    The descriptors had many uses, which included allowing bounds checking in hardware, distinguishing between character strings and arrays of words, easing dynamic array allocation, indicating the size of characters, and even indicating whether something was in core memory or not. Why would we need that? In two words: virtual memory. The B 5000 was the first commercial computer with this technology. It also supported multiprocessing and multiprogramming, even with ALGOL and COBOL. In fact, the Master Control Program (MCP), as the operating system was called, handled memory and input/output unit assignments, segmentation of programs, subroutine linkages, and scheduling, which freed the programmer from all these tedious and time-consuming tasks.
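    The descriptor idea is easy to sketch (Python; the field names and layout here are invented for illustration and do not match the actual B 5000 word format): every array access goes through a descriptor, letting the hardware trap out-of-bounds indices and detect segments that are absent from core.

        # Descriptor-mediated access, schematically.
        from dataclasses import dataclass

        @dataclass
        class Descriptor:
            base: int       # starting address of the array in core
            length: int     # number of elements
            in_core: bool   # present in core, or out on secondary storage?

        def load(memory, desc, index):
            if not desc.in_core:
                raise RuntimeError("presence fault: bring segment into core")
            if not 0 <= index < desc.length:
                raise IndexError("bounds violation trapped in 'hardware'")
            return memory[desc.base + index]

        memory = list(range(100))
        arr = Descriptor(base=10, length=5, in_core=True)
        print(load(memory, arr, 4))   # fine: prints 14
        # load(memory, arr, 5)        # would trap: index out of bounds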

    Another aspect Burroughs was proud of was the modular nature of the computer. It could be expanded or scaled back without costly "reprogramming" of the entire machine.

    The B 5000 was not the commercial success IBM mainframes were. In fact, it was sometimes referred

    to as the machine everyone loves but no one buys. However, its design was nothing less than elegant

    and efficient. It focused on solving problems within the context of how humans interacted with and

    related to computers, as opposed to speed for the sake of speed. Perhaps more importantly, some of

    the technologies it introduced, like virtual memory and multiprocessing, are necessities in present

    computers, some of which still support this magnificent architecture 48 years after it was introduced.

    UNIVAC 1107 Thin Film Memory Computer

    While IBM deserves much praise for the innovations first expressed in the Stretch, Remington Rand,

    the number-two computer company in the world at the time, was busy conjuring up some of its own

    magic with the UNIVAC 1107 Thin Film Memory Computer.

    As you no doubt guessed from its name, the main technological accomplishment of this machine was its use of thin-film memory. It had an access time of 300 nanoseconds and a complete cycle time of 600 nanoseconds, making it extremely fast for 1962, when the machine was released. However, this did not replace core memory, which had a cycle time of roughly two microseconds, but rather was used to provide multiple accumulators, multiple index registers, and multiple input-output control registers. This allowed for greater parallelism, with increased speed as the end result. In total, there were 128 36-bit words of thin-film memory (alternatively called "control memory" because of its function). By today's standards, this would not be considered memory at all, but part of the processor, much like registers, although in both cases they are really very fast internal memory. One difference is that the control-memory registers were actually accessed using a memory address rather than a register name, but only when using special instruction designators or when referred to by an execution address. If not accessed this way, the addresses were mapped to core memory. So, rather strangely, the memory map for the first 128 addresses was different depending upon the context.

    While the thin-film memory was certainly the biggest splash in the pool, there were other interesting features of this enduring line worth mentioning. For one, it had a word size of 36 bits, with characters expressed in six bits. Memory banks were interleaved so that if successive reads came from different banks, the access time was only 1.8 microseconds; if the next word was in the same bank, it was four microseconds. As mentioned, this averaged out to about two microseconds, since an access was more likely to hit a different bank. The 1107 also contained 16 input and 16 output channels, all of which could be used concurrently, supporting a maximum of 250,000 words per second.
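    The payoff of interleaving is easy to model (a Python sketch using the cycle times quoted above; the bank-selection rule is a simplifying assumption):

        # Interleaved banks: an access to the bank just used must wait out
        # the full four-microsecond cycle; any other bank takes 1.8 us.
        SAME_BANK_US, OTHER_BANK_US = 4.0, 1.8

        def total_time(addresses, n_banks=2):
            last_bank, total = None, 0.0
            for addr in addresses:
                bank = addr % n_banks   # assume low bits select the bank
                total += SAME_BANK_US if bank == last_bank else OTHER_BANK_US
                last_bank = bank
            return total

        print(total_time([0, 1, 2, 3]))  # alternating banks: 4 x 1.8 = 7.2 us
        print(total_time([0, 2, 4, 6]))  # same bank each time: 1.8 + 3 x 4.0 = 13.8 us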

    The main storage of the machine consisted of one to eight magnetic drums, each capable of storing

    from 262,144 to 6,291,456 words, giving this machine an enormous capacity of over 94 million 36-bit

    words (or over half a billion characters of storage).

    Although the UNIVAC 1107 was without question a fine machine in its own right, its greater significance lies in the family of computers it started. While never approaching the sales of a series of computers that IBM would soon introduce, UNIVAC's 1100 series made the company the second-largest in the world for many years and is still supported by UNISYS today. But enough of the horse that placed. Let's head back to Big Blue.


    IBM System/360

    When most people think of a mainframe, they think of the System/360 family of computers from IBM, arguably the most important computer architecture ever created. In many ways, it is similar to the 8086 processors in that it created the standard for an industry and spawned a long line of descendants that are still alive and thriving to this day. One big difference is that IBM actually intended the System/360 to be important, unlike the 8086, which gained an importance its creator could never have foreseen. In fact, as many of you know, Intel even tried to kill off that instruction set with the Itanium.

    But let's get back to the matter at hand. Prior to the System/360, IBM had something of a mess on its hands, having created many systems that were incompatible with each other. Not only did this make it more difficult for its customers to upgrade, but it also was a logistical nightmare for IBM to support all these different operating systems on different hardware. So, IBM decided to create what we almost take for granted today: a compatible line of computers, with differing speeds and capacities, but all capable of running the same software. In fact, in April 1964, IBM announced six computers in the line, with performance varying by a factor of 50 between the highest- and lowest-end machines. This actually doubled the design goal of 25, which in itself posed many problems for IBM. Scalability of this magnitude was said to be impossible even by the brilliant Gene Amdahl: it was never a simple matter of just making something 25 times "bigger" than the smallest part, and each implementation really had to be completely re-engineered.

    Today, it is common to disable parts of a processor, or underclock it, to lower its performance. But back then, it was not economically feasible to create a high-end processor and artificially lower its performance for marketing purposes. So, IBM decided on the idea of adding "microprogramming" to the System/360, so that all members of the family used the same instruction set (except for the lowest-end Model 20, which could execute a subset). These instructions were then broken down into a series of "micro-operations," which were specific to each system implementation. By doing this, the underlying processors could be very different, and this allowed scalability of the magnitude IBM wanted, and as mentioned, even exceeded it by a factor of two.
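    Schematically, microprogramming looks something like the following Python sketch (the micro-operation names and sequences are invented for illustration; real System/360 microcode was far more involved). Two "models" expose the same architectural ADD instruction while executing very different internal steps.

        # One architectural instruction, two implementations.
        MICROCODE = {
            # a low-end machine grinds through narrow serial steps
            "low_end":  {"ADD": ["fetch_byte"] * 4 + ["add_byte"] * 4},
            # a high-end machine does the same work in two wide steps
            "high_end": {"ADD": ["fetch_word", "add_word"]},
        }

        def execute(model, instruction):
            for micro_op in MICROCODE[model][instruction]:
                print(f"{model}: {micro_op}")

        execute("low_end", "ADD")    # eight micro-operations
        execute("high_end", "ADD")   # two micro-operations

    Software sees one instruction set either way, which is what let a single architecture span a 50x performance range.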

    This probably sounds familiar to you, since something similar has been implemented on x86 processors since the Pentium Pro (or really, the NexGen Nx586). As mentioned, however, IBM planned this; the x86 designers did it because the instruction set was so poor that it could not be directly executed effectively. There was one very important advantage of this microprogramming that could not be easily implemented on a microprocessor: by creating new microprogramming modules, the System/360 could be made compatible with the very popular 1401 for the lower-end machines, and even with the 7070 and 7090 for the higher-end System/360s. Since this was done in hardware, it was much faster than any software emulation, and in fact older software generally ran much faster on the System/360 than on the native machine, due to the System/360 being more advanced.

    Some of these advances are still with us today. For one, the System/360 standardized the byte at eight bits and used a word length of 32 bits, both of which helped simplify the design since they were powers of two. All but the lowest-end Model 20 had 16 general-purpose registers (the same as x86-64), whereas most previous computers had an accumulator, possibly an index register, and perhaps other special-function registers. The System/360 could also address an enormous 16 MB of memory, although at that time this amount of memory was not even available. The highest-end processor ran at a very respectable 5 MHz (recall that this is the speed at which the 8086 was introduced 14 years later), while the low-end processors ran at 1 MHz. Models introduced later, in 1966, also had pipelined processors.
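    The 16 MB figure follows directly from the architecture's 24-bit addresses (a well-known System/360 detail not spelled out above), another place where the power-of-two choices paid off:

        # 24-bit addresses, 8-bit bytes: the System/360 address-space ceiling.
        print(2 ** 24)                   # 16,777,216 addressable bytes
        print(2 ** 24 // (1024 * 1024))  # = 16 MB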

    While the System/360 did break a lot of new ground, in other ways it failed to implement important technologies. The most glaring deficiency was that there was no dynamic address translation (except in the later Model 67). This not only made virtual memory impossible, but it also made the machine poorly suited for proper time-sharing, which was now becoming a possibility with the increasing performance and resources of computers. Also, IBM eschewed the integrated circuit and instead used "solid logic technology," which could roughly be considered somewhere between the integrated circuit and simple transistor technology. Conversely, on the software side of things, IBM was perhaps a bit too ambitious with OS/360, one of the operating systems designed for the System/360. It was late, used a lot of memory, was very buggy, lacked some promised features, and, more than that, continued to be buggy long after it was released. It was a well-known, high-visibility, and dramatic failure, although IBM eventually did get it right, and it spawned very important descendants.

    Despite these issues, the System/360 was incredibly well-received and over 1,100 units were ordered

    in the first month, far exceeding even IBM's goals and capacity. Not only was it initially successful,

    but it proved enduring and spawned a large clone market. Clones were even made in what was then

    the Soviet Union. It was designed to be a very flexible and adaptable line, and was used extensively in

    all types of endeavors, perhaps most famously the Apollo program.

    More importantly, the System/360 started a line that has been the backbone of computing for

    almost 50 years, and represents one of the most commercially important and enduring designs in the

    history of computing.


    CDC 6600

    While IBM was busy focusing on a wide swath of compatible systems with its System/360 line, a

    company called CDC had a different design goal for its next computer: fast and really fast.

    Unshackled by any other considerations, such as compatibility or cost, Seymour Cray was free to use

    his legendary talents to focus on raw speed. He succeeded, as the roughly $7 million machine was

    the fastest computer from 1964 to 1969 by employing a unique design that relied on what would

    now be called an asymmetric multiprocessor design.

    The main CPU ran at a blazing 10 MHz but was very limited in the instructions it could perform, since it was in a very real sense a RISC processor long before the term was coined. It was capable of only very simple ALU functions, but it was complemented by 10 logical peripheral processors that could do what the CPU could not, keeping it fed with data while relieving it of the burden of retiring data. Making the central processor more specialized, and the parallelism gained by using the 10 "barrel" processors, were key components in the exceptional performance of this machine. With an enormous amount of memory (128 K words), this 60-bit computer could trade off larger executable size for the additional performance that a simple instruction set could offer.

    Although the CDC 6600 was a profitable machine, it was never a threat to the System/360's market share, nor was it ever intended to be. As with our next machine, it was sometimes better to compete where IBM was not, rather than where it had taken aim. The 6600 targeted a market higher than even the System/360 Model 75 could reach, while the next computer we look at targeted a market below where the System/360 Model 20 could.


    DEC PDP-8

    While IBM was busy releasing its magnificent System/360 line, Digital Equipment Corp. (DEC) was

    about to release a computer that would have a major impact on the future of computing as well, the

    PDP-8. Although the different computers in System/360 had an enormous range of performance

    characteristics and capacities, they were still mainframes, and even the lowest-end models were still

    too expensive for many businesses. This opportunity was not lost on DEC's founder, Ken Olsen.

    Although DEC had released computers as early as 1960, those models were only modestly successful and had little impact on the industry. However, the steady advance of technology, mainly in the form of integrated circuits, allowed DEC to sell a much smaller and much less expensive computer than its predecessors. Integrated circuits also allowed for much lower power use and, consequently, much less heat dissipation, which freed computers from purpose-built air-conditioned rooms. When released in 1965, the PDP-8 sold for the astonishingly low price of $18,000, which, together with the relaxed housing requirements just mentioned, put computers within the reach of many companies that had previously found them to be prohibitively expensive.

    One unique feature of the PDP-1, DEC's first product, was its use of true direct memory access (DMA), which was much cheaper and less complicated than the channels mainframes used, and it came without much negative impact on processor performance. In fact, a single mainframe channel cost more than the entire PDP-1. DMA was used on every successive computer DEC made, including the PDP-8. However, not all the cost-cutting compromises made for the PDP-8 were so benign. The 12-bit word length dramatically limited the amount of directly addressable memory: only seven bits of the word comprised the address field, allowing only 128 words to be directly addressed. There were ways around this drawback. One was to use indirect addressing, where the seven bits pointed to a memory location that contained the actual address you wanted to access; this was slower, but allowed a full 12-bit address. The other was to divide memory into segments of 128 words and change segments when necessary (and people thought the 64 K segments of 16-bit x86 processors were bad). Neither solution was desirable, and they severely limited the usefulness of the PDP-8 with high-level languages. The PDP-8 was also no speed demon, capable of only 35,000 additions per second.
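    The addressing scheme described above can be sketched as follows (Python; this is simplified, since the real PDP-8 instruction word also carries opcode bits and a zero-page/current-page bit):

        # PDP-8-style effective-address calculation, simplified.
        MEMORY = [0] * 4096                  # 4 K 12-bit words

        def effective_address(page, offset, indirect):
            direct = page * 128 + offset     # 7-bit offset: 128 words/page
            if not indirect:
                return direct                # fast, but only 128 words reachable
            # Indirect: one extra memory fetch yields a full 12-bit address.
            return MEMORY[direct] & 0o7777

        MEMORY[5] = 0o3000                   # a pointer kept in the page
        print(effective_address(page=0, offset=5, indirect=False))  # 5
        print(effective_address(page=0, offset=5, indirect=True))   # 1536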

    Despite these compromises, the PDP-8 was remarkably successful, selling over 50,000 machines before it was discontinued. The low purchase price, low running costs, and ease of housing it were all more compelling than its deficits were damning. In fact, this modest machine sparked a whole new type of computer, called the mini-computer, which was very successful for over two decades and made DEC the second-largest computer company in the world. Perhaps sadly, the mini-computer did not survive the march of the micro-computer and is now an extinct species, and it is thus more aptly called a dinosaur than the usual recipient of that unflattering term, the mainframe. The mainframe still sits on top of the food chain, capable of things far beyond desktop computers.

    IBM System/370

    Although the System/360 was very successful, and in some ways revolutionary and innovative, it also eschewed leading-edge technologies, which left opportunities for other companies to exploit. To its credit, however, it was still selling well even six years after its announcement, and it laid a foundation for the generations that followed it, of which the System/370 was the first.

    The initial launch of the System/370 in 1970 consisted of just two machines, the charismatically named 155 (running at almost 8.70 MHz) and 165 (running at 12.5 MHz). Naturally, both machines were compatible with programs written for the System/360 and could even use the same peripherals. Additionally, performance was greatly improved, with the System/370 165 offering close to five times the performance of the System/360 Model 65, the fastest machine available from that line when it was released in November 1965.

    There were also several new technologies in the System/370, compared to the System/360. IBM finally moved to the integrated circuit, a change many people thought long overdue. Most models in the line had transistor memory rather than core memory. The System/370 also finally supported dynamic address translation (on all but the initial two models), which was an important technology for time-sharing and virtual memory. There was also a very high-speed memory cache (80 ns on the 165), which IBM called a buffer. This was used by the processor to mitigate the relatively slow (two microseconds, or 2,000 ns) main memory access time. Another important consideration was that the System/370 was built from the beginning with dual processors and multiprogramming in mind.
    So, while the System/370 was not a spectacular announcement, it did plug up some glaring holes in

    the System/360, improved speed considerably, expanded the instruction set, and maintained a high

    degree of compatibility. It was a solid step forward and maintained IBM's dominance in the

    mainframe world.

    IBM 3033: The Big One

    While the System/370 line dominated mainframe computing for many years by introducing new

    models with new features and performance characteristics, IBM announced in March 1977 the

    successor to this very successful family of computers, the 3033, or "The Big One."

    Although IBM mainly stressed the additional speed (1.6 to 1.8 times the speed of the System/370 168-3) and its much smaller size, ironically for "The Big One," this machine's technical merits would not look out of place on a modern computer. Running at 17.24 MHz, the processor sported an eight-stage pipeline, branch prediction, and even speculative execution. It contained several logical units and 12 channels. The units of the 3033 processor were the instruction preprocessing function (IPPF), the execution function (E-function), the processor storage control function (PSCF), the maintenance and retry function, and the well-known channels indigenous to all IBM mainframes. The IPPF fetched instructions and prepared them for execution by the E-function, determined priority, and made fetch requests for the operands. It not only used branch prediction, but it could also buffer three instruction streams at once, so in the event it "guessed" wrong, it was likely to have the other instruction sequence ready and preprocessed for the E-function. The E-function, not surprisingly, was the execution engine of the processor, boasting for the first time a very large 64 K cache (with a 64-byte line size) to speed up memory accesses. Memory itself was eight-way interleaved, allowing refreshes to occur in the seven banks not being accessed during a read, which sped up read time if the next access was in one of those seven banks (DRAM requires a refresh after a read before it can be accessed again).

    The processor storage control function handled all requests for storing or fetching data from processor storage, and it translated virtual addresses to absolute storage addresses using a technology we previously mentioned: dynamic address translation. Like modern processors, it used translation lookaside buffers to speed this up. Essentially, this is a cache of addresses already translated from virtual to absolute, so if the processor can find an address there, conversion is unnecessary. On the 3033, if an address could be found, it took one clock cycle to resolve; if not, it could take anywhere from 10 to 40, which is quite a difference.
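    In modern terms, the trade-off looks like this (a Python sketch using the cycle counts quoted above; the 3033's actual table-walk and TLB organization are not modeled):

        # Virtual-to-absolute translation with a translation lookaside buffer.
        import random

        TLB = {}                                  # virtual page -> absolute page

        def translate(vpage, page_table):
            if vpage in TLB:
                return TLB[vpage], 1              # hit: one cycle
            TLB[vpage] = page_table[vpage]        # miss: walk the tables...
            return TLB[vpage], random.randint(10, 40)  # ...10-40 cycles

        page_table = {0: 7, 1: 3}
        print(translate(0, page_table))           # miss: (7, 10..40)
        print(translate(0, page_table))           # hit:  (7, 1)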

    The maintenance and retry function provided the data path between the operator console and the

    3033 processor for manual and service operations.

    So, while ostensibly the 3033 was just a very fast successor to the powerful System/370 168-3, when we look closer we see that it had almost all the technologies of a modern processor, and even some that are lacking in a portion of today's CPUs. However, it was still a scalar design, and despite its impressive characteristics, it was replaced relatively quickly by the 3081. While I know you are just brimming with curiosity about the 3081 (who could blame you?), and I can assure you we will get very familiar with it, let us first take a short interlude by looking at what DEC, the second-largest computer company in the world at that time, had to offer.

    DEC VAX-11/780

    While most of our readers know that the x86 instruction set originated in 1978 with the 8086, perhaps a more important development happened a year earlier, when Digital released the famous VAX-11/780. But how could anything possibly be more important than the x86 instruction set?


    When most people think of DEC, they remember a large mini-computer maker that failed and was bought by Compaq when the micro-computer usurped DEC's key market. But what happened in 1977 that was so important? DEC's VAX and its very comely wife, VMS, the latter of which still has much relevance today.

    The VAX-11/780 was ostensibly released to address the shortcomings of the highly successful andvery well liked PDP-11. DEC downplayed many of the changes and instead focused on the ability to

    finally break the 16-bit (64 K) addressable memory limitation of the PDP-11 with the VAX-11/780s

    32-bit address. However, there was much more to it than that.

The VAX is considered by most to be the finest of all CISC instruction sets, rivaled only by those it influenced. It was a highly orthogonal instruction set, with 243 instructions operating on several basic data types and with 16 different addressing modes. This elegant architecture was a strong influence on the Motorola 68000 family, which became the platform for the Apple Lisa and Macintosh until it was replaced by the PowerPC in the 1990s. Incidentally, the performance of the VAX-11/780 was adopted as a standard yardstick: its throughput was defined as one VAX MIPS (later often just "one MIPS"), and other machines were rated against it.
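In principle, a VAX MIPS rating is just a ratio: the VAX-11/780 is the 1.0 reference, and another machine's rating is how many times faster it completes the same workload. A tiny sketch, with purely hypothetical timings:

```python
# How a "VAX MIPS" rating works in principle: the VAX-11/780 is the
# 1.0 reference. All timings below are hypothetical.

def vax_mips(vax_780_seconds: float, machine_seconds: float) -> float:
    """Rating = how many times faster than the 780 on the same workload."""
    return vax_780_seconds / machine_seconds

print(vax_mips(100.0, 100.0))  # 1.0  -- the 780 itself
print(vax_mips(100.0, 4.0))    # 25.0 -- a machine 25x faster
```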

However, perhaps the most important contribution of the VAX was VMS. Windows NT was developed by none other than Dave Cutler, the designer of VMS, and he was one of many VMS developers who went over to Microsoft to work on Windows NT. Despite the controversy surrounding Windows, Windows NT is still the dominant operating system in use today, and will remain so for the foreseeable future, particularly since Windows 7 is being far better received than Vista. This is not to suggest that VMS matters only through its impact on Windows NT; it was a much-respected design in its own right, and especially user-friendly.

Many showered accolades on this easy-to-use operating system, which was very much ahead of its time. In fact, although the VAX is dead, OpenVMS clearly is not: it currently runs on Intel's Itanium processors and HP's aging Alpha processors, with a new release due out later this year. Thus, 32 years after its debut, the operating system is still going strong.

As delightful as the VAX and VMS were, and the latter still is, they never challenged the Big Blue beast in any real economic way, and instead probably helped IBM in its fight with the government, which was not too fond of what it considered IBM's monopoly. In early 1982, the Reagan administration dropped the long-running antitrust suit against IBM; shortly before, Big Blue had released the 3081, which, incidentally, was the first mainframe with which I had experience. And what an experience it was.


    IBM 3081

I still remember it like it was yesterday: that spring day in 1988 when I got a call from IBM telling me to come in to interview for a computer operator position. I was ecstatic that what was considered the greatest company in the world, and the only one I wished to work for, was going to be my employer. It was a different time then, when IBM represented everything good about American business and stood at the pinnacle of success.

During my first day of work, I was introduced to a machine that had been released in 1981: the 3081. I had some experience with an old Univac 1100/63 in college, but until then I was more familiar with micro-computers, which were best represented by the still fairly new 80386 and 68030. My first impression of the mainframes was unfavorable. Even by the microcomputer standards of the day, the interface was primitive and far less intuitive than that of PCs. I was not impressed.

The scale of things, though, was shocking. The air-conditioned, raised-floor room was hundreds of feet wide and long, with almost a dozen 3081s and an enormous number of DASDs. We had six printers, each enormous and almost the same size as the mainframe. There were three sets of consoles: one for the print area, one for the tape area, and a main console area where the computers themselves were closely monitored.

We had three interfaces to MVS, as the operating system was called. The name stood for Multiple Virtual Storages, but we derisively referred to it as "man versus system." There were the system consoles, which were essentially only available in operations; Time Sharing Option (TSO); and Operations Planning and Control (OPC). TSO was what many people used to do their work, while OPC was mainly for scheduling batch jobs that were going to run on the system. Many programmers preferred to work on VM, another operating system IBM offered for the 3081, before transferring their work to the MVS machine.

Our site had responsibility for the customer master record (CMR), which was used by many applications and sometimes directly by people. This ran on an IBM internal application called AAS, which was never sold. There were also some applications on CICS, which was the product IBM sold to customers.


    IBM 3090

Although not one of the better-known mainframes, the IBM 3090, announced in 1985, offered a solid advancement of the System/370 architecture that not only continued the improvements in speed, but also increased the number of processors and gave them vector processing options.

    Initially only available as Model 200 and Model 400 (the first number denoted the number of

    processors), the line was expanded dramatically in its short four years of existence. A uniprocessor

    version (1xx series) and a 600 series of processors were added, as well as an enhanced version of

    each model (denoted with an "E" after the model; for example, 600E). Even the original models were

    formidable, running at over 54 MHz, and executing instructions almost twice as fast as the 3081s

    they replaced.

The next year, the 3090 line was expanded to include the vector processing facility, which added 171 new instructions and sped up computation-intensive programs by a factor of 1.5 to 3. The "E" version of the 3090 ran at a brisk 69 MHz and was capable of roughly 25 MIPS per processor. By comparison, the x86 processor of that time, the 80386, ran at 20 MHz, was capable of roughly 4 MIPS, was uniprocessor-only, and had no vector instructions.
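The source of that speedup is easy to illustrate with a modern analogy (this is not 3090 vector-facility code): a single vector operation replaces an explicit element-by-element loop:

```python
# A modern analogy of vector vs. scalar execution, where the 1.5x-3x
# gains on computation-heavy work came from. Not 3090 code.

import time
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

t0 = time.perf_counter()
scalar = [x * y for x, y in zip(a, b)]  # scalar style: one multiply at a time
t1 = time.perf_counter()
vector = a * b                          # vector style: whole arrays at once
t2 = time.perf_counter()

assert np.allclose(scalar, vector)
print(f"scalar loop: {t1 - t0:.3f} s, vector op: {t2 - t1:.3f} s")
```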

The 3090 was replaced after four short years by the ES/9000 line. With local area networks (LANs) gaining popularity and powerful new processors like the 80486 and the many RISC designs (including IBM's own POWER) arriving, it seemed increasingly clear that these technologies would soon render the mainframe obsolete and extinct, as they were doing to the mini-computer. The handwriting was on the wall for anyone who wanted to read it. Or was it?


    IBM ES/9000

In late 1990, IBM replaced the illustrious 3090 with the ES/9000 line, which ushered in the era of fiber optics with a technology IBM called ESCON, or Enterprise Systems Connection. Naturally, this was not the only new thing about these systems. In fact, Thomas J. Watson Jr. considered the ES/9000 the most important release in the company's history. Even more important than the System/360, you ask? Well, Mr. Watson thought so.

So let us assume he was lucid and not simply issuing hyperbole. Certainly ESCON was an important technology: a serial, fiber-optic channel that, at release, could transmit data at 10 MB/s over distances of up to nine kilometers. Or maybe he was referring to the massive 9 GB of memory the machines could use? Or perhaps it was the ability to join eight processors in one sysplex, which allowed them to be treated as one logical unit? Then again, for the first time, one could create multiple logical partitions, allocate processor resources to each, and run any of the new (and compatible) Enterprise System Architecture/390 operating systems on them. Maybe that was it.

I doubt it was the performance, which was roughly 1.7 to 1.9 times the speed of the 3090/600J (the previous fastest mainframe from IBM) in commercial applications, 2.0 to 2.7 times in scalar work, and 2.0 to 2.8 times in vector performance. Although impressive, we've seen similar jumps between generations before.

None of this sounds so earth-shattering that it should be the most important release in the most important computer company's history, does it? Yes, by today's standards 9 GB is a lot, and 10 MB/s over nine kilometers is faster than the Internet speeds to which most of us have access. But serial transmission has been around for a few years now, and virtualization is becoming more common all the time. Eight processors is a good amount, but dual-socket quad-core machines are not that rare anymore, and we'll soon have single processors with that many cores. So, I just don't know.
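For perspective, a quick back-of-envelope on those ESCON figures; the speed-of-light-in-fiber constant is a standard approximation, not an IBM specification:

```python
# Back-of-envelope ESCON figures from the text: 10 MB/s over up to
# nine kilometers. Light in fiber at ~200,000 km/s is an approximation.

LINK_MBPS = 10            # megabytes per second
DISTANCE_KM = 9
FIBER_KM_PER_S = 200_000

transfer_s = 1024 / LINK_MBPS                  # moving 1 GB (1024 MB)
one_way_latency_s = DISTANCE_KM / FIBER_KM_PER_S

print(f"1 GB transfer:    {transfer_s:.0f} s")                 # ~102 s
print(f"9 km propagation: {one_way_latency_s * 1e6:.0f} us")   # ~45 us
```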

Maybe it had something to do with it being released in 1990. You know, when the 486 was hot and George H.W. Bush was in the first part of his term; before Yahoo! existed, and about six years before the first article about Softmenu BIOS features for Socket 7 motherboards appeared on Tom's Hardware. Taken in that context, it was a monumental achievement, with so many important advances in so many aspects of the system. All in all, it's very hard to disagree with Mr. Watson. Would you have expected otherwise from such a distinguished and accomplished person?

But although this marvel has technology that hardly seems old even by today's standards, our story is surely not done. What can top the ES/9000? It's hard to imagine, but then again, it's even harder to imagine a computer line staying the same for 19 years. So, let's take a look at the latest and greatest from Big Blue.

IBM System z10 E64

While this article is supposed to be a history of big computers, this last entry is about a computer that is still being sold today. But it was sold yesterday too, and that's history, right? So, let's take a look at IBM's biggest and baddest computer on the planet: the System z10, in its largest model, the E64.

In this day and age, it's hard to imagine a physically large computer, but IBM did manage to create a 30-square-foot beast that weighs in at over 5,000 pounds and consumes 27,500 watts of power. Still not impressed? How about 1,520 GB of memory? Yes, that's a bit more than the 6 GB in most Core i7-based enthusiast boxes; actually, it's a bit more than the average hard disk of a Nehalem-based PC. It can also have 1,024 ESCON, 336 FICON Express4, 336 FICON Express2, 120 FICON Express, 96 OSA-Express3, and 48 OSA-Express2 channels. That's more I/O than the X58, wouldn't you agree? Maybe several orders of magnitude more? This amazing machine can even host up to 16 virtual LANs.

Needless to say, these computers far exceed your normal server and, in fact, consolidate many smaller x86 machines. Rather than fading into oblivion, mainframes are finding customers who never used them before and who wish to consolidate their x86 servers for space and energy savings. The flexibility of these servers is truly impressive: one can stock them with up to 16 Integrated Facility for Linux (IFL) processors if Linux is the operating system of choice, or add up to 32 zAAP processors to assist with integrating Web applications using Java or XML with back-end databases. There can also be up to 32 zIIP processors for data, transaction, and network workloads, often used for ERP, CRM, and XML applications, as well as IPSec data encryption.

The main processor, the z10 processor unit chip, has a rich CISC design that can execute 894 instructions, 668 of which are hardwired. In a nod to the ENIAC, the processor even supports hardware decimal floating-point operations, which limit rounding errors and are much faster than computing in binary and converting. On top of all this, it can still run software written for the System/360, which is now 45 years old, and the amazingly solid MVS operating system, though the latter is now called z/OS. A machine can have up to 64 of these 4.4 GHz cores running (the chips themselves are quad-core), and it is designed for 99.999% uptime. It is no wonder these machines are selling well, as they offer incredible reliability, excellent and flexible performance, capacity that is hard to imagine, and very advanced, yet rock-solid, software.
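The value of hardware decimal floating point is easy to demonstrate: binary floating point cannot represent most decimal fractions exactly, which is a real problem for money. Python's decimal module does in software what the z10 does in hardware:

```python
# Binary floating point vs. decimal arithmetic. Python's decimal
# module emulates in software what the z10 accelerates in hardware.

from decimal import Decimal

print(0.10 + 0.20)                        # 0.30000000000000004 in binary FP
print(Decimal("0.10") + Decimal("0.20"))  # exactly 0.30
```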

As suggested, the virtualization capabilities of these machines are far beyond those of mere mortal servers. Naturally, they can run multiple operating systems, including Linux, z/OS, z/VM, and OpenSolaris, but more than that, they can shift capacity between partitions non-disruptively, on the fly, when one partition needs more. One can even bring extra processors online for short periods of burst activity, and schedule them for certain times of the day if there are known peaks.
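As a toy illustration of scheduling burst capacity (the real machines do this through IBM's Capacity on Demand features, not through anything resembling this code):

```python
# A toy sketch of bringing extra processors online for known daily
# peaks. The hours and counts are hypothetical.

PEAK_HOURS = range(9, 17)  # hypothetical known daily peak window
BASE_CPUS = 12
BURST_CPUS = 4

def cpus_online(hour: int) -> int:
    """Return how many processors should be online at a given hour."""
    return BASE_CPUS + (BURST_CPUS if hour in PEAK_HOURS else 0)

for hour in (3, 10, 22):
    print(f"{hour:02d}:00 -> {cpus_online(hour)} CPUs")
```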

These remarkable machines have capabilities so advanced that it might be difficult to get your mind around them. Forgetting for a moment their remarkable performance and flexibility, it is still dumbfounding how reliable they are. They feature, for example, something called "lock-stepping," whereby each result-producing instruction is run twice and the results are compared to make sure they are the same. If they are not, the instruction is re-executed and the computer attempts to locate where the error occurred. It can even move in-flight instructions to other processors, eliminating any negative effects of the error from the user's perspective. More than this, when the machines are used in a parallel sysplex (clustering up to 32 mainframes into one logical image), one can update all the software and hardware on any one mainframe without any downtime or disruption at all.
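A minimal sketch of the lock-stepping idea, purely illustrative and in no way IBM's actual implementation: run the operation twice, compare, retry on mismatch, and fail over if disagreement persists:

```python
# A toy model of lock-stepped execution: run twice, compare, retry,
# and fail the work over if the unit keeps disagreeing with itself.

def lockstep_execute(op, *args, retries: int = 1):
    """Run op twice and compare results; retry, then raise on mismatch."""
    for _ in range(retries + 1):
        first, second = op(*args), op(*args)
        if first == second:
            return first
    # In hardware, the instruction would migrate to another processor here.
    raise RuntimeError("persistent mismatch: fail instruction over")

print(lockstep_execute(lambda a, b: a + b, 2, 3))  # 5
```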

Only in the sense that these magnificent machines make the average desktop look small by comparison are they dinosaurs. They are far more advanced, powerful, flexible, capacious, and useful than the PCs we all know and love, not only in hardware but in the incredible stability of their system software. They are still very much part of the backbone of computing and show absolutely no signs of dying; on the contrary, their sales increase every year. In fact, how could it be any other way? Mainframes arguably express man's highest achievement, not only in the amazing amount of thought and intelligence invested in them, but also in the sublime role they have had, and still have, in human life and the endeavors of our kind. Perhaps, rather than dinosaurs, they are like something even older. Like diamonds, they are a collection of many ordinary parts that, when combined in a certain way, through nature or extraordinary thought, become something far greater than the sum of the ordinary.

    Source: http://www.tomshardware.com/picturestory/508-24-mainframe-computer-history.html