
Super Computers --- Parallel Computers


CS147 Lecture 20: Super Computers --- Parallel Computers. Prof. Sin-Min Lee, Department of Computer Science.

  • Super Computers --- Parallel Computers. Prof. Sin-Min Lee, Department of Computer Science

  • By 1960, at the age of 34, Seymour Cray had established his reputation for genius in designing high-performance computers. He had completed the design of the Control Data 1604, the first computer to be fully transistorized, and had begun the design of the first system to earn the title of supercomputer: the CDC 6600, which was also the first major system to employ three-dimensional packaging and an instruction set that would later be referred to as RISC.

  • Even as a child, Seymour was a problem solver. His sister tells the story of how, as a young boy, he rigged a Morse code connection between his bedroom and hers so that they could communicate after lights out. His father became aware of the late-night clicking and told Seymour to shut the system down because it was bothering the rest of the household. Seymour's solution was to convert the clickers to lights and continue communicating with his sister.

  • Robert Frost, "The Road Not Taken": "I shall be telling this with a sigh / Somewhere ages and ages hence: / Two roads diverged in a wood, and I-- / I took the one less traveled by, / And that has made all the difference."

  • Seymour liked to work with fundamental and simple tools, generally only a piece of paper and a pencil, though he admitted that some of his work required more sophisticated tools. Once, when told that Apple Computer had bought a CRAY to simulate its next Apple computer design, Seymour remarked, "Funny, I am using an Apple to simulate the CRAY-3." His selection of people for his projects also reflected fundamentals. Once asked why he often hired new graduates to help him with early R&D work, he replied, "Because they don't know that what I'm asking them to do is impossible, so they try."

  • Since the first supercomputer, the Cray-1, was installed at Los Alamos National Laboratory in 1976, computational speed has leaped by a factor of 500,000. The Cray-1 was capable of 80 megaflops (80 million floating-point operations per second). The Blue Gene/L machine to be completed next year will be five million times faster.
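As a rough sanity check on those multipliers, here is a minimal arithmetic sketch; the flops figures are the ones quoted elsewhere in this deck (38.5 teraflops for the Earth Simulator, 360 teraflops as Blue Gene/L's target):

```python
# Back-of-the-envelope check of the speedup claims (figures quoted in this deck).
cray_1 = 80e6              # Cray-1, 1976: 80 megaflops
earth_simulator = 38.5e12  # Earth Simulator, 2002: 38.5 teraflops
blue_gene_l = 360e12       # Blue Gene/L target: 360 teraflops

print(f"Earth Simulator vs. Cray-1: {earth_simulator / cray_1:,.0f}x")  # ~481,250x, i.e. the ~500,000x above
print(f"Blue Gene/L vs. Cray-1:     {blue_gene_l / cray_1:,.0f}x")      # 4,500,000x, i.e. the "five million times" claim
```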

  • Top 10 supercomputers, June 2004:
    1. Earth Simulator Center, Japan
    2. Intel Itanium2 Tiger4 1.4 GHz, Quadrics
    3. ASCI Q - AlphaServer SC45, 1.25 GHz
    4. Blue Gene/L DD1 Prototype (0.5 GHz PowerPC 440 w/Custom)
    5. PowerEdge 1750, P4 Xeon 3.06 GHz, Myrinet
    6. eServer pSeries 690 (1.9 GHz Power4+)
    7. Riken Super Combined Cluster
    8. Blue Gene/L DD2 Prototype (0.7 GHz PowerPC 440)
    9. Integrity rx2600 Itanium2 1.5 GHz, Quadrics
    10. Dawning 4000A, Opteron 2.2 GHz, Myrinet

  • November 2004

  • Blue Gene/L's peak theoretical performance is expected to be 360 teraflops, and the system will fit into 64 full racks. It will also cut down on the amount of heat generated by the massive power draw, a big problem for supercomputers. The final machine will help scientists work out the safety, security and reliability requirements for the US's nuclear weapons stockpile, without the need for underground nuclear testing. IBM's senior vice president of technology and manufacturing, Nick Donofrio, believes that by 2006 Blue Gene will be capable of petaflop computing, meaning 1,000 trillion operations a second.

  • NASA to build 10,000-processor Linux computer (IDG News Service, 7/28/04). The National Aeronautics and Space Administration (NASA) has given the green light to a project that will build the largest-ever supercomputer based on Silicon Graphics Inc.'s (SGI) 512-processor Altix computers. Called Project Columbia, the 10,240-processor system will be used by researchers at the Advanced Supercomputing Facility at NASA's Ames Research Center in Moffett Field, California.

  • Scientists will use Columbia to design equipment, simulate future space missions and model weather patterns. A portion of the US$160 million system will also be made available to other government agencies and educational facilities, said Bill Thigpen, manager of Project Columbia. "We need to look at working with other agencies to provide them with access to this system because it is a unique system," he said. What makes Project Columbia unique is the size of the multiprocessor Linux systems, or nodes, that it clusters together. It is common for supercomputers to be built of thousands of two-processor nodes, but the Ames system uses SGI's NUMAlink switching technology and ProPack Linux operating system enhancements to connect 512-processor nodes, each of which will have more than 1,000 GB of memory.

  • "We use a very large single-system image," said Jeff Greenwald, senior director of server product marketing with SGI. "The other guys come with a very thin node cluster, and try to screw them all together."The Altix nodes will use Intel Corp.'s Itanium 2 microprocessors, and the entire 20-node system is expected to be fully assembled by year's end, he said.SGI has used this large-node technology to build a number of smaller Altix systems with between 3,000 and 6,000 processors, but Project Columbia will be the largest to date, Greenwald said

  • The Earth Simulator has held on to the top spot since June 2002. It is dedicated to climate modelling and simulating seismic activity.

  • SINGAPORE (CNN) -- A group of researchers from Singapore has created a computer chip that has the power of 100 standard computers. The group of five, all working at Ngee Ann Polytechnic, will commercialize their development by January and sell it to the pharmaceutical industry, where they say the invention will save time and money. Lead researcher Darran Nathan, 24, explains that unlike standard computer chips, which function using software, his is based on a computer's hardware.

  • "An ordinary computer chip will interpret instructions from the software and execute a command," he says."Our chip is a reconfigurable chip, which means it downloads an actual file to the chip and rewires it according to subsequent processing done in the hardware."Nathan says the process is highly technical but, put simply, is a computer chip that works at a speed of 100 standard computers combined.He says the super chip was originally created with the telecommunications industry in mind, but soon after work on the project began two years ago, they realized the benefits would be much more useful to life sciences.

  • "It is 100 times quicker than your standard computer. Most people do not need such a powerful computer, but in the area of designing and developing drugs, it is hugely important," says Nathan."It basically means getting essential drugs on the street quicker, at a cheaper cost."Nathan says the device will cost between US$30,000 and US$61,000, and its key point of difference between other supercomputers is its small size.The team, which calls itself Project Proteus, after the shape-shifting Greek god, are aged between 24 and 27.Last week they showcased their chip at the Global Entrepolis convention in Singapore where Mr Nathan says they received a lot of positive feedback.

  • A Supercomputer at $5.2 million

    Virginia Tech's 1,100-node Mac G5 supercomputer

  • The Virginia Polytechnic Institute and State University has built a supercomputer from a cluster of 1,100 dual-processor Macintosh G5 computers. Based on preliminary benchmarks, Big Mac is capable of 8.1 teraflops. The Mac supercomputer is still being fine-tuned, and the full extent of its computing power will not be known until November, but the 8.1-teraflop figure would make Big Mac the world's fourth-fastest supercomputer.

  • Big Mac's cost relative to similar machines is as noteworthy as its performance. The Apple supercomputer was constructed for just over US$5 million, and the cluster was assembled in about four weeks. In contrast, the world's leading supercomputers cost well over $100 million to build and require several years to construct. The Earth Simulator, which clocked in at 38.5 teraflops in 2002, reportedly cost up to $250 million.

  • Srinidhi Varadarajan, Ph.D. Dr. Varadarajan is an Assistant Professor of Computer Science at Virginia Tech. He was honored with the NSF CAREER Award in 2002 for "Weaving a Code Tapestry: A Compiler Directed Framework for Scalable Network Emulation." He has focused his research on building a distributed network emulation system that can scale to emulate hundreds of thousands of virtual nodes.

    October 28, 7:30pm - 9:00pm, Santa Clara Ballroom

  • Parallel Computers. Two common types: cluster and multi-processor.
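The two types differ mainly in how processors see memory: in a multi-processor, the CPUs share one memory; in a cluster, each node has its own. A minimal shared-memory sketch using only Python's standard library (an illustration, not from the lecture):

```python
# Multi-processor model: workers on one machine update the same memory.
from multiprocessing import Process, Value

def worker(counter, n):
    for _ in range(n):
        with counter.get_lock():   # shared memory requires synchronization
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)        # an integer living in shared memory
    procs = [Process(target=worker, args=(counter, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)           # 4000: every worker saw the same counter
```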

  • Cluster Computers

  • Clusters on the Rise. Using clusters of small machines to build a supercomputer is not a new concept. Another of the world's top machines, housed at the Lawrence Livermore National Laboratory, was constructed from 2,304 Xeon processors; the machine was built by Utah-based Linux Networx. Clustering technology has meant that traditional big-iron leaders like Cray (Nasdaq: CRAY) and IBM have new competition from makers of smaller machines. Dell (Nasdaq: DELL), among other companies, has sold high-powered computing clusters to research institutions.

  • Cluster Computers. Each computer in a cluster is a complete computer by itself (CPU, memory, disk, etc.). Computers communicate with each other via some interconnection bus, as sketched below.
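A minimal message-passing sketch of that model using mpi4py (assumptions: MPI and mpi4py are installed, and `cluster_demo.py` is a hypothetical filename; run with e.g. `mpirun -n 4 python cluster_demo.py`):

```python
# Cluster model: each process acts as a separate "computer" with its own
# memory; work is farmed out and results are gathered over the interconnect.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this node's id
size = comm.Get_size()   # number of nodes in the run

# Node 0 splits the work into one chunk per node.
chunks = [list(range(i, 100, size)) for i in range(size)] if rank == 0 else None
my_chunk = comm.scatter(chunks, root=0)            # farm the chunks out

partial = sum(my_chunk)                            # each node works independently
total = comm.reduce(partial, op=MPI.SUM, root=0)   # gather the results

if rank == 0:
    print(total)   # 4950 == sum(range(100))
```

Unlike the shared-memory sketch above, no node can touch another node's data directly; everything moves as explicit messages.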

  • Cluster Computers. Typically used where one computer does not have enough capacity to do the expected work (e.g., large servers). Cheaper than building one giant computer.

  • Although not new, supercomputing clustering technology is still impressive. It works by farming out chunks of data to individual machines, and it works better for some types of computing problems than others. For example, a cluster would