Supercomputer


What is a Supercomputer?

A supercomputer is defined simply as the most powerful class of computers

at any point in time. Supercomputers are used to solve large and complex

problems that are insurmountable by smaller, less powerful computers. Since

the pioneering Cray-1® system arrived in 1976, supercomputers have made a

significant contribution to the advancement of knowledge and the

quality of human life. Problems of major economic, scientific and strategic

importance typically are addressed by supercomputers years before becoming tractable on less-capable systems.

In conjunction with some of the world's most creative scientific and engineering minds, these formidable tools already have made automobiles

safer and more fuel-efficient; located new deposits of oil and gas; saved lives and property by predicting severe storms; created new materials and

life-saving drugs; powered advances in electronics and visualization; safeguarded national security; and unraveled mysteries ranging from

protein-folding mechanisms to the shape of the universe.

Capable supercomputers are in short supply.

Today's supercomputer market is replete with "commodity clusters," products assembled from collections of servers or PCs. Clusters are adept at tackling small problems and large problems lacking complexity, but are inefficient at the most demanding, consequential challenges - especially

those of industry. Climate research algorithms, for example, are unable to achieve high levels of performance on these computers.

The primary "design points" for today's clusters are server and PC markets, not supercomputing. Christopher Lazou, a high-performance computing consultant, explains, "Using tens of thousands of commodity chips may provide the capacity (peak flop rates) but not the capability, because of

lack of memory bandwidth to a very large shared memory." Cray's product portfolio addresses this issue with high-bandwidth offerings.

High-end Supercomputers

For important classes of applications, there is no substitute for supercomputers specifically designed not only for performance but also for high bandwidth and low latency. Historically, this has been accomplished through vector architectures and, more recently, multi-threaded architectures. These specialized supercomputers are built to meet the most


challenging computing problems in the world.

Today, new technology and innovation at Cray Inc. have allowed for a new class of supercomputers that combines the performance characteristics of vector supercomputers with the scalability of commodity clusters to achieve both high efficiency and extreme performance in a scalable system architecture. These characteristics are embodied in the recently announced Cray X1™ system.

The Future of Supercomputing

Applications promising future competitive and scientific advantage create an insatiable demand for more supercomputer power - 10 to 1,000 times greater than anything available today, according to users. Automotive companies are targeting increased passenger cabin comfort, improved

safety and handling. Aerospace firms envision more efficient planes and space vehicles. The petroleum industry wants to "see" subsurface phenomena in greater detail. Urban planners hope to ease traffic

congestion. Integrated digital imaging and virtual surgery - including simulated sense of touch - are high on the wish list in medicine. The

sequencing of the human genome promises to open an era of burgeoning research and commercial enterprise in the life sciences.

As the demand for supercomputing power increases and the market expands, Cray's focus remains on providing superior real-world

performance. Today's "theoretical peak performance" and benchmark tests are evolving to match the requirements of science and industry, and Cray


supercomputing systems will provide the tools they need to solve their most complex computational problems.


American Super Computer released to Russia - a business adventure of ROY International Consultancy, Inc.!

Posted By: gmax  Date: 10/06/00 16:48

Summary: BioInform: IBM Designs Architecture for Blue Gene Supercomputer, Collaborates with Academia

``Since IBM's announcement last year that it would spend $100 million to build a supercomputer called Blue Gene for protein folding research, it has

begun collaborating with scientists at Indiana University, Columbia University, and the University of Pennsylvania on some of the mathematical

techniques and software needed for the system.

``The company has also decided to use a cellular architecture for the machine, where it will use simple pieces and replicate them on a large

scale. Protein folding research requires advances in computational power


and molecular dynamics techniques - the mathematical method for calculating the movement of atoms in the formation of proteins, said Joe

Jasinski, IBM's newly appointed senior manager of Blue Gene and the computational biology center.

```The first problem that we are attacking with Blue Gene is to understand at the atomic level the detailed dynamics, the motions involved in protein folding,' Jasinski said. `That's a very computationally intensive problem

which requires at least a petaflop computer and probably something bigger.'

``Most of the system software as well as the routines that will drive the applications are being developed by IBM's computational biology group,

which was formed in 1992 and now numbers about 40 scientists and engineers.''

May 18, 2000 - Moscow - The first Mainframe High Power Super Computer was exported from the USA to Russia. This deal is part of a long-term contract between the Russian oil exploration firm Tatneftgeophysika and ROY International Consultancy Inc., headed by Dr. Cherian Eapen, who shuttles between the USA and Russia. Last Christmas day, the Super Computer - Sun Microsystems' "Starfire Enterprise 10000" - was installed at the specially prepared site of the client in Bugulma, an interior town of the Tatarstan Republic of Russia.

The President of Sun Microsystems Corporation, Mr. Scott McNealy (who challenged Mr. Bill Gates and Microsoft over its technology and legal competency), congratulated the great effort of the President of ROY International, Dr. Cherian Eapen, who shuttles between the USA and Russia and made the impossible possible. "It was a 'Christmas Starfire', a precious Christmas gift to Russia from America. This is an opening of high power computer usage in this geography for peaceful purposes - a new bridge opened between the two technology Super Powers of the world," he said.

The Starfire Enterprise 10000 was purchased for the seismological data processing center of Tatneftgeophysika, a giant in the geophysical business in the whole region of the former Soviet Union. In spite of the existing financial and economic problems, the Russian geophysical firms are struggling hard to stand on their own legs by procuring the most modern technology in the field of computerization and processing of geological and geophysical data. By 1999, the majority of geophysical firms in Russia had achieved the automation of their production centers. The year 2000 opens the second phase of modernization and reconstruction of geophysical computing centers, focusing mainly on upgrading the power and speed of data interpretation.

At present, the Russian seismological survey of oil and gas bases all of its data on two-dimensional (2D) surveys, which should be replaced with three-dimensional (3D) surveys. Without 3D data, accurate and high-quality identification of hydrocarbon fields is impossible. The 3D procedure increases the cost of this complicated research work, requiring high power computers and software, but it gives a substantial economic advantage, to the tune of 20 to 30%.

In order to become competitive in the already saturated 3D seismic market, traditional geophysical firms started spending large amounts to modernize their computing centers. They began inviting companies specialized in this sphere -- Systems Integrators -- with given criteria of price, optimum efficiency and productivity of the technical solution, taking into consideration all aspects of the technology for processing geophysical information.

One such experienced Systems Integrator working in the CIS is ROY International Consultancy Inc., whose main activity is the project design and realization of corporate computing, especially computing centers for the oil and gas field. Founded in 1988, ROY International is the leading Systems Integrator specialized in the development of large computer systems for corporate computing centers. ROY International is the largest supplier of highly reliable and secure UNIX-based enterprise-wide systems in the CIS. To date, ROY International has designed and installed 300 projects throughout the CIS countries to modernize and reconstruct computing centers, installing more than 2000 high power Sun workstations and servers, porting major software available in the world, networking, etc.

Bashkiristan Neftgeophysika, Udmrtneftgeophysika, Khantimansisk Geophysika, Sakhalinsk Neftgeophysika, Moormansk Neftgeophysika, Central Geophysical Expedition, VNIIGAS, Sever Gazprom, Orenburg Gazprom, Lukoil Kogalym, Yukos Moscow, Luk-Arco, Slavneft Tver, etc., to name a few, are the leading computing centers installed by ROY International in the oil and gas field.

At present, ROY International is completing the final installation at Tatneftgeophysika, one of the major geophysical companies of Russia. Within the framework of this project, ROY International is finalizing the installation of a Sun supercomputer together with world-leading software. This complex is specially designed for 3D geophysical data processing.

The world's leading oil and gas producers love the characteristics of the Enterprise 10000, also called the 'CRAY-Killer'. With 18 new UltraSPARC-II microprocessors, a TByte storage array, a TByte ETL tape library, more than 100 high power Sun workstations, networking, etc., this center is the most powerful computing center in Russia and all the CIS countries.

Being the Systems Integrator, ROY International, after several negotiations, selected Paradigm Geophysical software (Israel) for data processing and Schlumberger GeoQuest software (France) for data interpretation. This is in addition to the existing data processing software, Landmark, of America. ROY International also has agreements with manufacturers like Storage Tech (USA), Qualstar Corporation (USA), E.R. Mapping (Australia), M4 Data Ltd. (England), Fujitsu (Japan), 3Com (USA), OYO Plotters and Instruments (Japan), etc., and also with various Russian manufacturers and software developers.

Trained specialists and engineers of ROY International completed the networking job within two weeks, and software installation and training have just been completed. Four of ROY International's specialists hold Ph.D. qualifications in this field. Processing and interpretation of the data are handled by more than 400 highly qualified employees of Tatneftgeophysika. The General Director of Tatneftgeophysika, Mr. Rinat Kharisov, said,

"This is the second time we are entering a new era of major technological modernization of our computing center, which is being executed by ROY International. Six years back, we modernized our computing center with the help of ROY International. They replaced the Russian ES computers with the Sun SPARCcenter 2000, which increased our computing power 20 times. The present installation increases our power another 70 times. This enables us to obtain data interpretation results while saving substantial time and money, and we can compete in the global market."

"The new Super Computer project once more confirms that the trend in Russia is to set

up large scale information centers, for which high end Super

Computers are

Page 26: supercomputer.doc

required", said Dr. Cherian Eapen,

President of ROY International. "It was a

difficult task to get licenses from all US

Govt. Departments. The application for export

license and non-proliferation compliance

letter etc. were routed through the Russian Ministry of Fuel and

Energy and through the US Embassy in Moscow to the Bureau of Exports

Administration in Washington DC. The

Page 27: supercomputer.doc

procedure took a long time to grant permission

to allow the use of the Supercomputer for a civilian customer in

Russia as Russia is still under the list of

countries for nuclear proliferation concerns. Since ROY International has got a clean record and doesn't have any

military deals and since it is strict in working only for a peaceful

production activities, it got an advantage on license application.

The Departments of Commerce, State, Defense, Atomic Energy, and Ecology cleared the license application for this Super Computer earlier, and finally the Department of Energy also gave the green light to lift it to Russia.

This 2-ton Super Computer complex was flown from San Francisco to Amsterdam by Lufthansa Cargo, and from there to Moscow and on to Nabereshni Chelni, an airport near the end user in the Tatarstan Republic, by a chartered flight arranged by ROY International.

Dr. Cherian Eapen said, "With the strictest security safeguard procedures, we were able to deliver the System to the pre-approved and designated site of the computing center, per the license requirements. Every moment was tense because of the danger of any physical diversion during shipment. One of our employees from Moscow, Victor, traveled with the freight forwarders and security crew and informed me every hour of the progress of loading and off-loading and of the air and ground transport to the destination. About 4 AM on December 25, Christmas day morning, I got the final call for that day from Victor, asking me to take a rest as the job was completed, and requesting to be allowed to celebrate the installation of the Super Computer by opening a bottle of vodka."

BiO News

DIFFERENT SUPER COMPUTER

Canada's Fastest Computer Simulates Galaxies In Collision

[Image caption: Shortened sequence of images showing the detailed interaction of two galaxies colliding.]

by Nicolle Wahl

Toronto - Jul 25, 2003

A $900,000 supercomputer at the University of Toronto -- the fastest computer in Canada -- is heating up astrophysics research in this country

and burning its way up the list of the world's fastest computers.

The new computer, part of the Department of Astronomy and Astrophysics and the Canadian Institute for Theoretical Astrophysics (CITA), was ranked as the fastest computer in Canada and the 39th fastest in the world in the latest list from www.top500.org, compiled by the Universities of Mannheim and Tennessee and the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory.

"An essential element of modern astrophysics is the ability to carry out large-scale simulations of the cosmos, to complement the amazing

observations being undertaken," said Professor Peter Martin, chair of astronomy and astrophysics and a CITA investigator.

"With the simulations possible on this computer, we have in effect a laboratory where we can test our understanding of astronomical

phenomena ranging from the development of structure in the universe over 14 billion years to the development of new planets in star-forming systems

today."

When the computer, created by the HPC division of Mynix Technology of Montreal (now a part of Ciara Technologies), starts its calculations, the 512

individual central processing units can heat up to 65 C, requiring extra ventilation and air-conditioning to keep the unit functioning.

But with that heat comes the capability of performing more than one trillion


calculations per second, opening the door to more complex and comprehensive simulations of the universe. It is the only Canadian

machine to break the Teraflop barrier -- one trillion calculations per second -- and it's the fastest computer in the world devoted to a wide spectrum of

astrophysics research.

"This new computer lets us solve a variety of problems with better resolution than can be achieved with any other supercomputer in Canada,"

said Chris Loken, CITA's computing facility manager. "Astrophysics is a science that needs a lot of computer horsepower and memory and that's what this machine can provide. The simulations are also enabled by in-house development of sophisticated parallel numerical codes that fully

exploit the computer's capabilities."

The machine, nicknamed McKenzie (after the McKenzie Brothers comedy sketch on SCTV), with 268 gigabytes of memory and 40 terabytes of disk

space, consists of two master nodes (Bob and Doug), 256 compute nodes, and eight development nodes. All of these are networked together using a novel gigabit networking scheme that was developed and implemented at

CITA.

Essentially, the two gigabit Ethernet ports on each node are used to create a "mesh" that connects every machine directly to another machine and to one of 17 inexpensive gigabit switches. It took four people about two days and two kilometres of cable to connect this network. The unique CITA design drives down the networking cost in the computer by at least a factor of five, and the innovative system has attracted industry attention.

Professor John Dubinski has used the new computer to examine both the formation of cosmological structure and the collisions of galaxies by

simulating the gravitational interaction of hundreds of millions of particles representing stars and the mysterious dark matter. The anticipated

collision of the Milky Way Galaxy with our neighbouring Andromeda galaxy -- an event predicted to take place in three billion years time -- has been

modeled at unprecedented resolution.

New simulations on the formation of supermassive black holes, again with the highest resolution to date, have been carried out by his colleagues Professors Ue-Li Pen and Chris Matzner. They have already uncovered

clues which may explain the mystery of why the black hole at the center of our galaxy is so much fainter than had been expected theoretically.

The team has even grander plans for the future. "In astrophysics at the University of Toronto we have continually exploited the latest computing technology to meet our requirements, always within a modest budget," Martin said. "This is a highly competitive science and to maintain our lead we are planning a computer some ten times more powerful."

China To Build World's Most Powerful Computer

[Image caption: China's future science and defence needs will require ever more powerful high performance computer systems.]

Beijing (Xinhua) Jul 29, 2003

The Dawning Information Industry Co., Ltd., a major Chinese manufacturer of high-performance computers, is to build the world's most powerful computer, capable of performing 10 trillion calculations per second.

Scheduled to be completed by March next year, the super computer marks China's first step in the development of a cluster computer system with the highest calculation speed in the world, according to a source with the company.

Previously, the Dawning Information Industry Co., Ltd. had successfully developed a super computer capable of performing 4 trillion calculations per second.

Code-named "Shuguang4000A", the planned super computer covers an area equal to a quarter of a football field, and it will use processors

developed by AMD, a United States computer chip maker.

AMD and the Chinese company have signed a cooperation agreement to develop the planned super computer of China.

The fastest existing cluster computer system in the world is capable of calculating at a speed of 7.6 trillion calculations per second.


First Super Computer Developed in China

China's first super computer capable of making 1.027 trillion calculations per second showed up in Zhongguancun, known as the "Silicon Valley" of the Chinese capital Beijing, on Thursday.

The computer, developed by the Legend Group Corp., China's leading computer manufacturer, boasts the same operation speed as the 24th fastest computer among the world's top 500 super computers. The leading 23 super computers were developed by Japan and the United States.

Legend president Yang Yuanqing said the computer will be installed at the mathematics and system science research institute affiliated with the Chinese Academy of Sciences in early September. It will be used for hydromechanics calculations, processing of petroleum and earthquake data, climate model calculations, materials science calculations, and DNA and protein calculations.

Yang said it takes the computer only two minutes to complete a simulation of one day of global climatic change, compared with 20 hours on other large computers.

Computers with super calculation speeds used to be the tool of a small number of scientists in labs, but now they are widely used in economic and social fields, even in film-making.


A computer capable of making 85.1 trillion calculations per second, the highest calculation speed in the world, was recently developed in Japan.


Shell to use Linux supercomputer for oil quest

December 12, 2000
Web posted at: 9:00 AM EST (1400 GMT)

LONDON, England (Reuters) -- Linux, the free computer operating system, is expected to win another high-profile victory on Tuesday

when Anglo-Dutch oil company Royal Dutch/Shell will announce it is going to install the world's largest Linux supercomputer.

Shell's Exploration & Production unit will use the supercomputer, consisting of 1,024 IBM X-Series servers, to run seismic and other geophysical applications in its search for more oil and gas.

Data collected in Shell exploration surveys will be fed into the computer, which will then analyze it.

The announcement comes just days after Swedish telecom operator Telia said it would use a large Linux mainframe to serve all its Internet subscribers, replacing a collection of Sun Microsystems servers.

"Linux is coming of age," said one source close to the deal.

Linux, developed by the Finn Linus Torvalds and a group of volunteers on the Web, has been embraced by International Business Machines Corp. as a flexible alternative to licensed software systems such as Microsoft's Windows or the Unix platforms. With Linux, companies can quickly add or remove computers without worrying about licenses for the operating software. Over the past year the software has been tested and trialled for business critical applications.

Major deals have now started to come through. Recently Musicland Stores Corp., the U.S. company that owns Sam Goody, said it would install new Linux and Java-based cash registers. The most recent announcements indicate that Linux usage is becoming more versatile, with the operating system moving into many different applications, not just Internet computers.

World's fastest computer simulates Earth


Saturday, November 16, 2002

Posted: 2:57 PM EST (1957 GMT)

[Image caption: The Earth Simulator consists of 640 supercomputers that are connected by a high-speed network.]


SAN JOSE, California (AP) -- A Japanese supercomputer that studies the climate and other aspects of the Earth maintained its ranking as the world's fastest computer, according to a study released Friday.

The Earth Simulator in Yokohama, Japan, performs 35.86 trillion calculations per second -- more than 4 1/2 times greater than the next-fastest machine. Earth Simulator, built by NEC and run by the Japanese government, first appeared on the list in June. It was the first time a supercomputer outside the United States topped the list.

Two new machines, called "ASCI Q," debuted in the No. 2 and No. 3 spots. The computers, which each can run 7.73 trillion calculations per second, were built by Hewlett-Packard Co. for Los Alamos National Laboratory in New Mexico.

Clusters of personal computers rank

For the first time, high-performance machines built by clustering personal computers appeared in the top 10. A system built by Linux NetworX and Quadrics for Lawrence Livermore National Laboratory ranked No. 5. A system built by High Performance Technologies Inc. for the National Oceanic and Atmospheric Administration's Forecast Systems Laboratory was No. 8.

Hewlett-Packard Co. led with 137 systems on the list, followed by International Business Machines Corp. with 131 systems. No. 3 Sun Microsystems Inc. built 88 of the top 500 systems.

The Top 500 list, which has been released twice annually since 1993, is compiled by researchers at University of Mannheim, Germany; the Department of Energy's National Energy Research Scientific Computing Center in

Berkeley and the University of Tennessee.


G5 Supercomputer in the Works

Interesting rumor. Virginia Tech got bumped to the head of the line with their order of 1100 new G5 computers, so that they could build a

supercomputer and make Linpack's Top 500 list this year.

Not too surprising that Apple gave them preferential treatment. Wonder if Apple might be tempted into making a commercial?

Even if they just posted it on-line, it might prove interesting.

Virginia Tech building supercomputer G5 cluster

By Nick dePlume, Publisher and Editor in Chief

August 30, 2003 - Virginia Tech University is building a Power Mac G5 cluster that will result in a supercomputer estimated to be one of the top

five fastest in the world.

In yesterday's notes article, we reported that Virginia Tech had placed a large order of dual-2GHz G5s to form a cluster. Since that time, we've received additional information, allowing us to confirm a number of details.

According to reports, Virginia Tech placed the dual-2GHz G5 order shortly after the G5 was announced. Multiple sources said Virginia Tech has

ordered 1100 units; RAM on each is said to be upgraded to 4GB or 8GB.

The G5s will be clustered using Infiniband to form a 1100-node supercomputer delivering over 10 Teraflops of performance. Two sources

said the cluster is estimated to be one of the top five fastest supercomputers in the world.

However, Virginia Tech's on a deadline. The university needs to have the cluster completely set up this fall so that it can be ranked in Linpack's Top

500 Supercomputer list.

Apple bumped Virginia Tech's order to the front of the line -- even in front of first day orders -- to get them out the door all at once. Sources originally estimated the G5s will arrive the last week of August; they're still on track

to arrive early, possibly next week.

This information is more-or-less public within the university community, but no announcement has been made. Earlier in the month, Think Secret contacted Virginia Tech's Associate Vice President for University Relations,

who said the report was an "interesting story" and agreed to see what he could confirm. The university didn't respond to follow-up requests for

comment.

INDIA's 'PARAM-10,000' SUPER COMPUTER

INDIA'S AMAZING PROGRESS IN THE AREA OF HI-TECH COMPUTING

India's Hi-Tech expertise has made Harare Pyare Bharat a nation to reckon with. Following is a short article by Radhakrishna Rao, a freelance writer who has contributed this material to "INDIA - Perspectives" (August 1998), page 20:

"The restrictions imposed by the United States of America on the transfer of know-how in frontier areas of Technology, and its consistent refusal to make available to

India a range of hardware for its development, have proved to be a blessing in disguise, because Indian scientists and engineers have now managed to develop,

indigenously, most of the components and hardware required for its rapidly advancing space and nuclear power programmes.

It was again the refusal of the U.S. administration to clear the shipment to India of a Cray X-MP super computer, for use by the Indian Institute of Science (IISc), Bangalore,

in the 1980's, along with severe restrictions on the sale of computers exceeding 2000 Mega Theoretical Operations per Second (MTOPS), that led India to build one of

the most powerful super computers in the world. In fact, the unveiling of the "PARAM-10,000" super-computer, capable of performing one trillion mathematical

calculations per second, stands out as a shining example of how 'restrictions and denials' could be turned into impressive scientific gains. For the Pune-based Centre

for Development of Advanced Computing (C-DAC), which built this super-

computing machine, it was a dream come true. In fact, the "PARAM-10,000", based on an open-frame architecture, is considered to be the most powerful super-computer in Asia outside Japan. So far, only the U.S.A. and Japan have built up a proven capability to build similar types of super-computers. To be sure, Europe is yet to build its own super-computer in this category. As it is, "PARAM-10,000" has catapulted India into the ranks of the elite nations that already are in the rarefied world of teraflop computing, which implies a capability to perform one trillion calculations per second. In this context, a beaming Dr. Vijay P. Bhatkar, Director of C-DAC, says, "We can now pursue our own mission critical problems at our own pace and on our own terms. By developing this, India's esteem in Information Technology (IT) has been further raised."

As things stand now, "PARAM-10,000" will have applications in areas as diverse as long-range weather forecasting, drug design, molecular modelling, remote sensing and medical treatment. According to cyber scientists, many of the complex problems that India's space and nuclear power programmes may encounter in the future could be solved with "PARAM-10,000", without going in for actual ground level physical testing. On a more practical plane, it could help in the exploration of oil and gas deposits in various parts of the country. Perhaps the most exciting application of "PARAM-10,000" will be in storing information on Indian culture and heritage, beginning with Vedic times. "We want to preserve our timeless heritage in the form of a multimedia digital library on "PARAM-10,000"," says Dr. Bhatkar. That C-DAC

could manage to build a "PARAM-10,000" machine in less than five years is a splendid tribute to the calibre and dedication of its scientists and engineers.

No wonder C-DAC has bagged orders for as many as three "PARAM-10,000" machines. And two of these are from abroad; a Russian academic institute and Singapore University are keenly awaiting the installation of "PARAM-10,000" machines on their premises. The third machine will be used by the New Delhi-based National Informatics Centre (NIC) for setting up a geomatics faculty designed to provide solutions in the area of remote sensing and image processing. C-DAC is also planning to develop advanced technologies for the creation of a national information infrastructure. Meanwhile, C-DAC has proposed the setting up of a full-fledged company to commercially exploit the technologies developed by it. C-DAC was set up in 1988 with the mandate to build India's own range of super-computers.

Incidentally, "PARAM-10,000" is a hundred times more powerful than the first Param machine built way back in the early 1990's." --- (Radhakrishna Rao, Author)


IBM plans world's most powerful Linux supercomputer

IDG News Service 7/30/03

A Japanese national research laboratory has placed an order with IBM Corp. for a supercomputer cluster that, when completed, is expected to be the most powerful Linux-based computer in the world.

The order, from Japan's National Institute for Advanced Industrial Science and Technology (AIST), was announced by the company on Wednesday as it simultaneously launched the eServer 325 system on which the cluster will be largely based. The eServer 325 is a 1U rack mount system that includes two Advanced Micro Devices Inc. Opteron processors of either model 240, 242 or 246, said IBM in a statement.

The supercomputer ordered by AIST will be built around 1,058 of these eServer 325 systems, to make a total of 2,116 Opteron 246 processors, and an additional number of Intel Corp. servers that include a total of 520 of the company's third-generation

Itanium 2 processors, also known by the code name Madison.

The Opteron systems will collectively deliver a theoretical peak performance of 8.5 trillion calculations per second, while the Itanium 2 systems will add 2.7 trillion calculations per second to that, for a total theoretical peak performance for the entire cluster of 11.2 trillion calculations per second.
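As a rough illustration of how a figure like that is aggregated, the short Python sketch below simply sums the two partition peaks quoted above and backs out the implied per-processor peak. Only the partition totals come from the article; the per-processor numbers are derived here for illustration and are not IBM's specifications.

```python
# Minimal sketch: aggregate theoretical peak of the AIST cluster described above.
# The partition totals (8.5 and 2.7 trillion calculations/s) are from the article;
# everything derived from them is back-of-the-envelope, not vendor data.

opteron_count = 2116              # 1,058 eServer 325 nodes x 2 Opteron 246 each
itanium_count = 520               # Itanium 2 "Madison" processors

opteron_partition_peak = 8.5e12   # calculations per second (quoted)
itanium_partition_peak = 2.7e12   # calculations per second (quoted)

total_peak = opteron_partition_peak + itanium_partition_peak
print(f"Cluster theoretical peak: {total_peak / 1e12:.1f} trillion calc/s")         # ~11.2

# Implied per-processor peaks (illustrative only):
print(f"Per Opteron:   {opteron_partition_peak / opteron_count / 1e9:.1f} GFLOPS")  # ~4.0
print(f"Per Itanium 2: {itanium_partition_peak / itanium_count / 1e9:.1f} GFLOPS")  # ~5.2
```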

That would rank it just above the current most powerful Linux supercomputer, a cluster based on Intel's Xeon processor and run by Lawrence Livermore National Laboratory (LLNL) in the U.S. That machine has a theoretical peak performance of 11.1 trillion calculations per second, according to the latest version of the Top 500 supercomputer ranking.

Based on that ranking, the new machine would mean Japan is home to two out of the three most powerful computers in the world. The current most powerful machine, the NEC Corp.-built Earth Simulator of the Japan Marine Science and Technology Center, has a theoretical peak performance of 41.0 trillion calculations per second, while that of the second-fastest machine, Los Alamos National Laboratory's ASCI Q, is 20.5 trillion calculations per second.

The eServer 325 can run either the Linux or Windows operating systems, and the supercomputer ordered by AIST will run SuSE Linux Enterprise Server 8. IBM said it expects to deliver the cluster to AIST in March 2004. AIST will link the machine with others as part of a supercomputer grid that will be used in research of grid technology, life sciences bioinformatics and nanotechnology, IBM said.

General availability of the eServer 325 is expected in October this year and IBM

said prices for the computer start at US$2,919. The computers can also be accessed through IBM's on-demand service, where users pay for processing power based on capacity and duration.

IBM's announcement is the second piece of good news for AMD and its Opteron processor within the last two weeks. The processor, which can handle both 32-bit and 64-bit applications, was launched in April this year.

China's Dawning Information Industry Co. Ltd. announced plans last week to build a supercomputer based on AMD's Opteron processor. The Dawning 4000A will include more than 2,000 Opteron processors, with a total of 2T bytes of RAM and 30T bytes of hard-disk space, and is expected to deliver performance of around 10 trillion calculations per second. The Beijing-based company has an order for the machine but has not disclosed the name of the buyer or when the computer will be put into service.

Opteron processors were also chosen for a supercomputer which is likely to displace the AIST machine as the most powerful Linux supercomputer. Cray Inc. is currently

constructing a Linux-based supercomputer called Red Storm that is expected to deliver a peak performance of 40 trillion calculations per second when it is delivered

in late 2004. Linux developer SuSE is also working with Cray on that machine.


Jefferson Team Building COTS Supercomputer

[Image caption: Chip Watson, head of the High-Performance Computing Group (from left), watches Walt Akers, computer engineer, and Jie Chen, computer scientist, install a Myrinet card into a computer node.]

Newport News - Jul 03, 2003

Scientists and engineers from Jefferson Lab’s Chief Information Office have created a 'cluster supercomputer' that, at peak operation, can process

250 billion calculations per second.

Science may be catching up with video gaming. Physicists are hoping to adapt some of the most potent computer components developed by companies to capitalize on growing consumer demands for realistic simulations that play out across personal computer screens.

For researchers, that means more power, less cost, and much faster and more accurate calculations of some of Nature's most basic, if complex, processes.

Jefferson Lab is entering the second phase of a three-year effort to create an off-the-shelf supercomputer using the next generation of relatively

inexpensive, easily available microprocessors. Thus far, scientists and engineers from JLab's Chief Information Office have created a "cluster

supercomputer" that, at peak operation, can process 250 billion calculations per second.

Such a 250 "gigaflops" machine -- the term marries the nickname for billion to the abbreviation for "floating-point operations" -- will be scaled up to 800

gigaflops by June, just shy of one trillion operations, or one teraflop.

The world's fastest computer, the Earth Simulator in Japan, currently runs at roughly 35 teraflops; the next four most powerful machines, all in the

United States, operate in the 5.6 to 7.7 teraflops range.

The Lab cluster-supercomputer effort is part of a broader collaboration between JLab, Brookhaven and Fermi National Laboratories and their

university partners, in a venture known as the Scientific Discovery through Advanced Computing project, or SciDAC, administered by the Department of Energy's Office of Science. SciDAC's aim is to routinely make available to scientists terascale computational capability.

Such powerful machines are essential to "lattice quantum chromodynamics," or LQCD, a theory that requires physicists to conduct

rigorous calculations related to the description of the strong-force interactions in the atomic nucleus between quarks, the particles that many

scientists believe are one of the basic building blocks of all matter.

"The big computational initiative at JLab will be the culmination of the lattice work we're doing now," says Chip Watson, head of the Lab's High-

Performance Computer Group. "We're prototyping these off-the-shelf computer nodes so we can build a supercomputer. That's setting the stage

for both hardware and software. "

The Lab is also participating in the Particle Physics Data Grid, an application that will run on a high-speed, high-capacity

telecommunications network to be deployed within the next three years that is 1,000 times faster than current systems.

Planners intend that the Grid will give researchers across the globe instant access to large amounts of data routinely shared among far-flung groups of scientific collaborators.

Computational grids integrate networking, communication, computation and information to provide a virtual platform for computation and data

management in the same way that the Internet permits users to access a wide variety of information.

Whether users access the Grid to use one resource such as a single computer or data archive, or to use several resources in aggregate as a

coordinated, virtual computer, in theory all Grid users will be able to "see" and make use of data in predictable ways. To that end, software engineers

are in the process of developing a common set of computational, programmatic and telecommunications standards.

"Data grid technology will tie together major data centers and make them accessible to the scientific community," Watson says. "That's why we're

optimizing cluster-supercomputer design: a lot of computational clockspeed, a lot of memory bandwidth and very fast communications."

Computational nodes are key to the success of the Lab's cluster supercomputer approach: stripped-down versions of the circuit boards

found in home computers. The boards are placed in slim metal boxes, stacked together and interconnected to form a cluster.

Currently the Lab is operating a 128-node cluster, and is in the process of procuring a 256-node cluster. As the project develops, new clusters will be added each year, and in 2005 a single cluster may have as many as 1,024

nodes. The Lab's goal is to get to several teraflops by 2005, and reach 100 teraflops by 2010 if additional funding is available.

"[Our cluster supercomputer] is architecturally different from machines built today," Watson says. "We're wiring all the computer nodes together, to

get the equivalent of three-dimensional computing."

That can happen because of continuing increases in microprocessor power and decreases in cost. The Lab's approach, Watson explains, is to upgrade

continuously at the lowest cost feasible, replacing the oldest third of the system each year.

Already, he points out, the Lab's prototype supercomputer is five times cheaper than a comparable stand-alone machine, and by next year it will be

10 times less expensive.

Each year as developers innovate, creating more efficient methods of


interconnecting the clusters and creating better software to run LQCD calculations, the Lab will have at its disposal a less expensive but more

capable supercomputer.

"We're always hungry for more power and speed. The calculations need it," Watson says. "We will grow and move on. The physics doesn't stop until

we get to 100 petaflops [100,000 teraflops], maybe by 2020. That's up to one million times greater than our capability today. Then we can calculate

reality at a fine enough resolution to extract from theory everything we think it could tell us. After that, who knows what comes next?"


SUN

Genome Center SuperComputing

The Genome Center of Wisconsin in May 2000 opened a new supercomputer facility built around a Sun Microsystems Enterprise 10000 and several smaller computers. The E 10000 is designed to provide flexible computational power, using between 1 and 36 processors as needed, configurable on the fly. In addition to its processor power, it has 36 gigabytes of memory and 3 terabytes of disk storage to provide the optimal computing environment for genomic research. In the future the E 10000 will be able to expand to 64 processors, 64 gigabytes of RAM and 60 terabytes of online disk storage.

On September 22, 2000, Sun Microsystems announced that Genome Center SuperComputing was being named a Sun Center of Excellence. Being a Center of Excellence is a statement that Sun acknowledges we are a quality center of computing and that there is a continuing partnership between the Genome Center and Sun Microsystems.

Mission

The mission of Genome Center SuperComputing is to provide Genomic researchers and their academic collaborators access to computing power that would otherwise be outside the scope of their organizations. In providing access to computing power, storage, local databases and most of the commonly available Unix based biological software we are

trying to keep researchers from working with inadequate resources or supporting unwanted infrastructure.


Cray Super Computer

Applications for Cray Systems

The Cray Applications Group is committed to making available the software which is important to our customers. Cray Inc. works with third-party software vendors to port codes and to assure that our customers get the best possible performance.

From bioinformatics to seismic imaging to automotive crash simulations, Cray systems are used to run applications

which solve both large and complex computational problems.

Cray applications data sheets:

AMBER and Cray Inc. (pdf)
Gaussian 98 and Cray Inc. Supercomputers (pdf)
MSC.Nastran and Cray Inc. Supercomputers (pdf)
MSC.Nastran Performance Enhancements on Cray SV1 Supercomputers (pdf)


Cray Professional Services

For more than 25 years, Cray has been at the forefront of high

performance computing (HPC), contributing to the advancement of science, national security, and the

quality of human life. Cray has designed, built, and supported high-performance computing solutions for

customers all around the world. Cray helps ensure the success of supercomputer implementation by partnering with customers to provide complete solutions for the most challenging scientific and engineering

computational problems. These robust solutions utilize Cray's deep supercomputing expertise and sterling reputation for quality.

Cray's understanding of high-performance computing is unrivaled. Our Professional Services Solutions give you access to some of the most savvy,

experienced minds in computing. Examples of capabilities in this area

include software development, custom hardware, extensions to Cray supercomputing products, and access to the systems in Cray's world-class data center. We help Cray customers in all aspects of high-performance computing, from problem analysis to solution implementation. Cray Professional Services draws on Cray's extensive talent and expertise company-wide.

Why engage Cray Professional Services?

● Over 25 years of experience in the HPC industry

● World-class technical expertise with access to the best minds, methods, and tools in the industry

● Exceptional customer service and dedication to quality

STORAGE SERVICES

Cray Professional Services provides SNIA certified SAN specialists to deliver solutions related to high performance data storage, including SAN design and implementation. Storage services include StorNext File System and StorNext Management Suite implementations, RS200 extensions, custom Cray SANs, and legacy data migrations.

CUSTOM ENGINEERING

Cray has gathered some of the best engineering minds and technologies in the world to produce its computer systems. To achieve the extreme levels of performance found in supercomputers requires an enormous breadth and depth of leading-edge technical talent. This talent is transferable into other high-performance applications as well, in terms of system design, code porting and optimization, system packaging, system power and cooling technologies, and troubleshooting issues in the design and manufacturing process.

Cray Custom Engineering also offers custom design enhancements to existing Cray products and the use of traditional Cray hardware as embedded components in a variety of other applications and products. The custom engineering offering from Cray is targeted to assist both traditional and nontraditional Cray customers in addressing their most extreme technical issues.

CRAY CONSULTING

Cray customers address the most complex, high-performance computing problems. Whether in support of issues of national security, safety, design simulation, or the environment, Cray systems have been the favored computational solution for more than 25 years.

To produce these state-of-the-art systems, Cray has developed a broad spectrum of core competencies in the design, implementation, and optimization of high-performance computing solutions. Cray scientists and engineers are 100% focused on high-performance problems and solutions - this is our business.

Cray now offers this tremendous intellectual capital to our customers to address your needs.

SUPERCOMPUTING ON DEMAND

Several generations of Cray products are available to support your high-performance computing needs. "On demand" means these resources can be

scheduled for use whenever and wherever you need them. Whether it's providing compute services to cover a peak in operational demand, support

for application development or code optimization, or an ASP-based environment, Cray will work with you to make computational resources

available to meet your specific high-performance computing needs.

CRAY TRAINING

Cray products are designed to fit the highest performance compute needs of our customers. Our goal is to ensure that our customers make the most

of their systems. Our training options are designed to enable Cray customers to see a quick return on their compute investment. Classes are

available on a wide variety of topics and platforms, such as system


administration, programming and optimization, and various quick-start packages.

SITE ENGINEERING

Cray has been installing, relocating, and optimizing computing environments for over 25 years. Managing on-site system power and

cooling, and interior climate conditions requires the skills of highly trained personnel to ensure optimal system support and performance. Site

Engineering at Cray merges the needs and dimensions of a customer's specific computing environment and translates them into comprehensive

work plans and complete site engineering solutions.

Software for Cray Systems

Powerful hardware systems alone cannot meet the requirements of the most demanding scientific and engineering organizations. Equally powerful, robust software is needed to turn supercomputers into indispensable productivity tools for the sophisticated government, commercial, and academic user communities. In these demanding environments, where multimillion-dollar projects are at stake, reliability, resource management, single job performance, complex multijob throughput, and high-bandwidth data management are critical.

● UNICOS® The undisputed leader among high-end supercomputer operating

systems

● UNICOS/mk™ The UNICOS/mk operating system fully supports the Cray T3E

system's globally scalable architecture

● CF90® Programming Environment The CF90 Programming Environment consists of an optimizing

Fortran compiler, libraries, and tools

● Cray C++ Programming Environment C++ and C are the computer languages used today for many high-

performance applications

● Message-Passing Toolkit (MPT) Provides optimized versions of industry-standard message-passing

libraries and software

● Network Queuing Environment (NQE) Workload management environment that provides batch scheduling and interactive load balancing

● Distributed Computing Environment (DCE) Distributed File Service (DFS)

An industry-standard, vendor-neutral set of tools and services providing distributed computing capability. DFS is a distributed DCE application providing an integrated file system with a unified name

space, secure access, and file protection.

● Data Migration Facility (DMF) A low-overhead hierarchical storage management (HSM) solution

● Cray/REELlibrarian A volume management system that controls libraries of tape volumes

Cray Systems at Work

Cray systems provide powerful high performance solutions for the world's most complex computational problems. The sustained performance obtained from Cray supercomputers is used by researchers and computer scientists spanning such varied disciplines as automotive manufacturing, geological sciences, climate prediction, pharmaceutical development, and national security.

Cray supercomputers are used worldwide in research, academia, industry, and government.

The Road to La-La Land - Pittsburgh Supercomputing Center researcher Pei Tang uses the Cray T3E to probe the

mysteries of anesthesia.

Biomedical Modeling at the National Cancer Institute - Researchers from around the world use NCI's Cray SV1 system to solve some of the most difficult problems in

computational biology -- studying protein structure and function at the most detailed levels.


Clean Power - George Richards, leader of the National Energy Technology Laboratory's combustion dynamics

team, takes on the challenge of converting fuel to energy without creating pollutants by using simulations on PSC's

Cray T3E.

A Thumb-Lock on AIDS - PSC's Marcela Madrid simulates an HIV enzyme on the Cray T3E to help develop drugs that

shut down HIV replication.


SUPER COMPUTERS

There are two main kinds of supercomputers: vector machines and parallel

machines. Both kinds work FAST, but in different ways.

Let's say you have 100 math problems. If you were a vector computer, you would sit down and do all the problems as fast as you could.

To work like a parallel computer, you would get some friends and share the work. With 10 of you, you would each do 10 problems. If you got 20 people, you'd only have to do 5 problems each.

No matter how good you are at math, it would take you longer to do all 100 problems than to have 20 people do them together.
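Here is a minimal Python sketch of the same idea, following the analogy above: the "problems" are just dummy arithmetic, and the worker count of 20 comes from the example. It is only an illustration of dividing up work, not how a real vector or parallel supercomputer is programmed.

```python
from multiprocessing import Pool

def solve(problem):
    # Stand-in for one "math problem": square the number.
    return problem * problem

if __name__ == "__main__":
    problems = list(range(100))   # the 100 math problems from the analogy

    # Vector-computer style in the analogy: one worker does every problem, in order.
    serial_answers = [solve(p) for p in problems]

    # Parallel-computer style: share the work among 20 "friends" (worker
    # processes), so each one handles only about 5 problems.
    with Pool(processes=20) as pool:
        parallel_answers = pool.map(solve, problems)

    print(serial_answers == parallel_answers)  # True: same answers, work just divided
```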

CRAY T3E

SDSC's newest supercomputer is the CRAY T3E. The T3E is a parallel supercomputer and has 256 processors to work on problems. (Let's not worry about what "T3E" stands for.)

If you get all the T3E processors going full speed, it can do 153.4 billion -- that's 153,400,000,000 -- math calculations every second. But researchers usually only use some of the T3E's processors at once. That way, many researchers can run their programs at the same time.


CRAY C90: Vector Machine

The CRAY C90 is the busiest of SDSC's supercomputers and cost $26 million. A problem that takes a home computer 8 hours to solve, the CRAY C90 can do in 0.002 seconds. And some scientists have problems that take the CRAY C90 a couple DAYS to do.

The CRAY C90 is a vector machine with eight processors -- eight vector machines in one. With all eight processors, the CRAY C90 can do 7.8 gigaFLOPS. (In people-power you'd need one and a half times the Earth's population.) A pretty slick Pentium PC might reach about 0.03 gigaFLOPS, depending on who you ask.
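A quick back-of-the-envelope check of those comparisons, using only the numbers quoted above plus two stated assumptions (that the "people-power" comparison counts one calculation per person per second, and a rough mid-1990s world population):

```python
# Rough arithmetic on the CRAY C90 figures quoted above.

c90_gflops = 7.8            # all eight processors
pentium_gflops = 0.03       # "a pretty slick Pentium PC"
print(f"C90 vs Pentium: ~{c90_gflops / pentium_gflops:.0f}x faster")          # ~260x

home_pc_seconds = 8 * 3600  # the 8-hour home-computer problem
c90_seconds = 0.002
print(f"Speedup on that problem: ~{home_pc_seconds / c90_seconds:,.0f}x")     # ~14,400,000x

# The "one and a half times the Earth's population" line works out if each
# person manages about one calculation per second (assumption, not from the text).
people_needed = c90_gflops * 1e9            # 7.8 billion people at 1 calc/s each
earth_population_mid_1990s = 5.2e9          # rough figure (assumption)
print(f"People needed: {people_needed:.1e}, "
      f"~{people_needed / earth_population_mid_1990s:.1f}x the population")   # ~1.5x
```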

Application Of Super Computer

Decision Agent Offers Secure Messaging At Pentagon

[Image caption: No Max, it's a smoking zone, not the secure communication bubble.]

Woodland Hills - May 07, 2003

Northrop Grumman Corporation went live at the Pentagon April 1 with the secure organizational messaging services of the "Decision Agent"

following several months of installation and testing by the company's California Microwave Systems business unit.

Installed and managed by the Pentagon Telecommunications Center (PTC), which supports over 30,000 users and 1,000 organizations, "Decision Agent" is designed to provide an enterprise-wide information profiling and management portal for the Defense Messaging System (DMS) Version 3.0.

"The 'Decision Agent' has been instrumental in allowing us to provide protected message traffic for our customers," said Marvin Owens, director of the PTC. "It has

proved to be a tremendous tool in helping us achieve our mission of implementing DMS in the Pentagon as well as for our other customers worldwide."

The new system at the PTC supports DMS communications for the Office of the Secretary of Defense, the Army Operations Center and the military services

headquarters' staffs. In addition, it provides a communication gateway that permits interoperability between Department of Defense organizations that use DMS and allied NATO and non-Defense Department organizations that use legacy systems.

"By enabling DMS messaging without FORTEZZA cards and readers, and by eliminating the need to install add-ons to end-user devices, 'Decision Agent' will

allow the Pentagon and other government agencies to reduce the costs and manpower requirements traditionally associated with DMS implementations," said

John Haluski, vice president of California Microwave Systems' Information Systems

Page 82: supercomputer.doc

unit.

The "Decision

Agent" consists of a

suite of integrated software

applications that run on a

Windows 2000 server.

These include

Northrop Grumman's

LMDS MailRoom,

the most powerful profiling engine

currently available,

and Engenium

Corporation's Semetric, a knowledge-based retrospective search engine.

The system enhances DMS functionality by automating the processes of identifying, filtering and distributing military organization messages to specified addresses and

recipients, based on interest profiles and security clearances.

"Decision Agent" provides other enhancements as well, including virus checks, security checks for possible mislabeling of messages and attachments, Web-based

message preparation and Boolean logic keyword and concept searches.

Frozen Light May Make Computer Tick Later This Century

Boston - May 22, 2003

NASA-funded research at Harvard University, Cambridge, Mass., that literally stops light in its tracks, may someday lead to breakneck-speed computers that shelter enormous amounts of data from hackers.


The research, conducted by a team led by Dr. Lene Hau, a Harvard physics professor, is one of 12 research projects featured in a special edition of Scientific American entitled "The Edge of Physics," available through May 31.

In their laboratory, Hau and her colleagues have been able to slow a pulse of light, and even stop it, for several thousandths of a second. They've also created a roadblock for light, where they can shorten a light pulse by factors of a billion.

"This could open up a whole new way to use light, doing things we could only imagine before," Hau said. "Until now, many technologies have been limited by the speed at which light travels."

The speed of light is approximately 186,000 miles per second (670 million miles per hour). Some substances, like water and diamonds, can slow light to a limited extent.

More drastic techniques are needed to dramatically reduce the speed of light. Hau's team accomplished "light magic" by laser-cooling a cigar-shaped cloud of sodium atoms to one billionth of a degree above absolute zero, the point where scientists believe no further cooling can occur.

Using a powerful electromagnet, the researchers suspended the cloud in an ultra-high vacuum chamber, until it formed a frigid, swamp-like goop of atoms.


When they shot a light pulse into the cloud, it bogged down, slowed dramatically, eventually stopped, and turned off. The scientists later revived the light pulse and restored its normal speed by shooting an additional laser beam into the cloud.

Hau's cold-atom research began in the mid-1990s, when she put ultra-cold atoms in such cramped quarters they formed a type of matter called a

Bose-Einstein condensate. In this state, atoms behave oddly, and traditional laws of physics do not apply. Instead of bouncing off each other

like bumper cars, the atoms join together and function as one entity.

The first slow-light breakthrough for Hau and her colleagues came in March 1998. Later that summer, they successfully slowed a light beam to 38 miles per hour, the speed of suburban traffic. That's roughly 20 million times slower than the speed of light in free space. By tinkering with the system, Hau and her team made light stop completely in the summer of 2000.

These breakthroughs may eventually be used in advanced optical-communication applications. "Light can carry enormous amounts of

information through changes in its frequency, phase, intensity or other properties," Hau said.


When the light pulse stops, its information is suspended and stored, just as information is stored in the memory of a computer. Light-carrying quantum bits could carry significantly more information than current computer bits.

Quantum computers could also be more secure by encrypting information in elaborate codes that could be broken only by using a laser and complex decoding formulas.

Hau's team is also using slow light as a completely new probe of the very odd properties of Bose-Einstein condensates. For example, with the light roadblock the team created, they can study waves and dramatic rotating-vortex patterns in the condensates.


Navy to use IBM supercomputer for storm forecasting

By Reuters
August 2, 2000, 5:40 PM PT
http://news.com.com/2100-1001-244009.html?tag=prntfr

NEW YORK--IBM said today the U.S. Department of Defense paid $18 million for one of the world's fastest supercomputers to help Navy vessels avoid maritime disasters like the one portrayed in the film "The Perfect Storm."

Code-named "Blue Wave," the new IBM RS/6000 SP will rank as the most powerful supercomputer at the Defense Department and the fourth-fastest in operation anywhere in the world. It will enable the U.S. Navy to create the most detailed model of the world's oceans ever constructed.

The research performed by "Blue Wave" is expected to improve maritime storm forecasting as well as search and rescue efforts for naval vessels.

In June, Armonk, N.Y.-based IBM unveiled the fastest computer in the world, able to process more in a second than one person with a calculator could do in 10 million years.

That supercomputer was designed for the U.S. government to simulate nuclear weapons tests. It was made for the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). IBM sold the system, which occupies floor space equivalent to two basketball courts and weighs as much as 17 elephants, to the DOE for $110 million.


The Navy computer, which can process two trillion calculations per second, will model ocean depth, temperature and wave heights to new levels of accuracy and detail, boosting the ability of meteorologists to predict storms at sea.

"The Perfect Storm," a best-selling book by Sebastian Junger recently made into a film, told the tale of the Andrea Gail, a fishing vessel at sea off the coast of Newfoundland during a deadly storm that killed the entire crew.

In 1999, IBM became the leader in the traditional supercomputer market. IBM now has about 30 percent of that market, in which some 250 computers that range in price from $2 million to $100 million or more are sold every year, for use in weather predictions, research and encryption.


Sapphire Slams A Worm Into .Earth
Shatters All Previous Infection Rates

San Diego - Feb 04, 2003

A team of network security experts in California has determined that the computer worm that attacked and hobbled the global Internet 11 days ago was the fastest computer worm ever recorded.

In a technical paper released Tuesday, the experts report that the speed and nature of the Sapphire worm (also called Slammer) represent significant and worrisome milestones in the evolution of computer worms.

Computer scientists at the University of California, San Diego and its San Diego Supercomputer Center (SDSC), Eureka-based Silicon Defense, the University of California, Berkeley, and the nonprofit International Computer Science Institute in Berkeley, found that the Sapphire worm doubled its numbers every 8.5 seconds during the explosive first minute of its attack.

Within 10 minutes of debuting at 5:30 a.m. (UTC) Jan. 25 (9:30 p.m. PST, Jan. 24) the worm was observed to have infected more than 75,000 vulnerable hosts. Thousands of other hosts may also have been infected worldwide.
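As a rough illustration of that doubling rate (a back-of-the-envelope sketch based only on the 8.5-second figure above, not on the researchers' model), starting from a single infected host the count grows roughly as 2 raised to the power t/8.5 during the unconstrained first minute; after that the worm's own traffic saturated the network, which is why the total leveled off near 75,000 rather than continuing to double.

/* Sketch: exponential growth with an 8.5-second doubling time, as reported
 * for Sapphire's first minute. Illustrative only; real growth later slowed
 * because the worm saturated the available bandwidth. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double doubling_time = 8.5;   /* seconds, from the SDSC/CAIDA report */

    for (int t = 0; t <= 60; t += 10) {
        /* Starting from a single infected host at t = 0. */
        double infected = pow(2.0, (double)t / doubling_time);
        printf("t = %2d s: ~%.0f infected hosts\n", t, infected);
    }
    return 0;
}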

The infected hosts spewed billions of copies of the worm into cyberspace, significantly slowing Internet traffic, and interfering with many business

services that rely on the Internet.

"The Sapphire/Slammer worm represents a major new threat in computer worm technology, demonstrating that lightning-fast computer worms are not just a theoretical threat, but a reality," said Stuart Staniford, president and founder of Silicon Defense. "Although this particular computer worm

did not carry a malicious payload, it did a lot of harm by spreading so aggressively and blocking networks."


The Sapphire worm's software instructions, at 376 bytes, are about the length of the text in this paragraph, or only one-tenth the size of the Code Red worm, which spread through the Internet in July 2001.

Sapphire's tiny size enabled it to reproduce rapidly and also fit into a type of network "packet" that was sent one-way to potential victims, an

aggressive approach designed to infect all vulnerable machines rapidly and saturate the Internet's bandwidth, the experts said.

In comparison, the Code Red worm spread much more slowly not only because it took longer to replicate, but also because infected machines

sent a different type of message to potential victims that required them to wait for responses before subsequently attacking other vulnerable

machines.

The Code Red worm ended up infecting 359,000 hosts, in contrast to the approximately 75,000 machines that Sapphire hit. However, Code Red took about 12 hours to do most of its dirty work, a snail's pace compared with

the speedy Sapphire.

The Code Red worm sent six copies of itself from each infected machine every second, in effect "scanning" the Internet randomly for vulnerable machines. In contrast, the speed with which the diminutive Sapphire worm copied itself and scanned the Internet for additional vulnerable hosts was limited only by the capacity of individual network connections.

"For example, the Sapphire worm infecting a computer with a one-megabit-per-second connection is capable of sending out 300 copies of itself each second," said Staniford. A single computer with a 100-megabit-per-second connection, found at many universities and large corporations, would allow

the worm to scan 30,000 machines per second.
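Those rates follow directly from the worm's size: each copy is a single UDP packet carrying the 376 bytes of worm code plus protocol headers (roughly 404 bytes on the wire if one assumes about 28 bytes of UDP/IP headers; the header figure is an assumption, not from the article), so the scan rate is simply link bandwidth divided by packet size, as in this sketch.

/* Sketch: bandwidth-limited scan rate for a single-packet worm.
 * Assumes ~404 bytes on the wire (376-byte worm plus ~28 bytes of
 * UDP/IP headers -- the header figure is an assumption). */
#include <stdio.h>

static double scans_per_second(double link_bits_per_second, double packet_bytes)
{
    return link_bits_per_second / (packet_bytes * 8.0);
}

int main(void)
{
    const double packet_bytes = 404.0;

    /* A 1 Mbps link: roughly the 300 copies per second quoted by Staniford. */
    printf("1 Mbps   link: ~%.0f scans/s\n", scans_per_second(1e6, packet_bytes));

    /* A 100 Mbps campus link: roughly 30,000 scans per second. */
    printf("100 Mbps link: ~%.0f scans/s\n", scans_per_second(1e8, packet_bytes));
    return 0;
}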

"The novel feature of this worm, compared to all the other worms we've studied, is its incredible speed: it flooded the Internet with copies of itself

so aggressively that it basically clogged the available bandwidth and interfered with its own growth," said David Moore, an Internet researcher at

SDSC's

Cooperative Association for Internet Data Analysis (CAIDA) and a Ph.D. candidate at UCSD under the direction of Stefan Savage, an assistant professor in the Department of Computer Science and Engineering.

"Although our colleagues at Silicon Defense and UC Berkeley had predicted the possibility of such high-speed worms on theoretical grounds, Sapphire is the first such incredibly fast worm to be released by computer

Page 92: supercomputer.doc

hackers into the

wild," said Moore.

Sapphire exploited a known vulnerability in Microsoft SQL servers used for database management, and MSDE 2000, a mini version of SQL for desktop use. Although Microsoft had made a patch available, many machines did not have the patch installed when Sapphire struck. Fortunately, even the successfully attacked machines were only temporarily out of service.

"Sapphire's greatest harm was caused by collateral damage—a denial of legitimate service by taking database servers out of operation and overloading networks," said Colleen Shannon, a CAIDA researcher.

"At Sapphire's peak, it was scanning 55 million hosts per second, causing a computer version of freeway gridlock when all the available lanes are

bumper-to-bumper." Many operators of infected computers shut down their machines, disconnected them from the Internet, installed the Microsoft

patch, and turned them back on with few, if any, ill effects.

The team in California investigating the attack relied on data gathered by an array of Internet "telescopes" strategically placed at network junctions

around the globe. These devices sampled billions of information-containing "packets" analogous to the way telescopes gather photons.


With the Internet telescopes, the team found that nearly 43 percent of the machines that became infected are located in the United States, almost 12 percent are in South Korea, and more than 6 percent are in China.

Despite the worm's success in wreaking temporary havoc, the technical report analyzing Sapphire states that the worm's designers made several "mistakes" that significantly reduced the worm's distribution capability.

For example, the worm combined high-speed replication with a commonly used random number generator to send messages to every vulnerable

server connected to the Internet. This so-called scanning behavior is much like a burglar randomly rattling doorknobs, looking for one that isn't

locked.

However, the authors made several mistakes in adapting the random number generator. Had there not been enough correct instructions to compensate for the mistakes, the errors would have prevented Sapphire from reaching large portions of the Internet.

The analysis of the worm revealed no intent to harm its infected hosts. "If the authors of Sapphire had desired, they could have made a slightly larger version that could have erased the hard drives of infected machines," said Nicholas Weaver, a researcher in the Computer Science Department at UC

Berkeley. "Thankfully, that didn't occur."


University of Hawaii will use new IBM supercomputer to investigate Earth's meteorological mysteries

"Blue Hawaii" system marks innovative partnership between university, Maui High Performance Computing Center and IBM

Honolulu, HI, October 25, 2000—The University of Hawaii (UH) today introduced an IBM supercomputer code-named "Blue Hawaii" that will explore the inner workings of active hurricanes, helping university researchers develop a greater understanding of the forces driving these destructive storms. The IBM SP system—the first supercomputer ever installed at the University of Hawaii—is the result of an initiative by the Maui High Performance Computing Center (MHPCC) in collaboration with IBM. This initiative has culminated in an innovative partnership between the university, MHPCC and IBM.

"We're delighted to have IBM and MHPCC as partners," said university president Kenneth P. Mortimer. "This new supercomputer adds immeasurably to the technological capacity of our engineering and science programs and will propel us to a leadership position in weather research and prediction."

Donated by IBM to the university, Blue Hawaii is the technological heir to IBM's Deep Blue supercomputer that defeated chess champion Garry Kasparov in 1997. Blue Hawaii will power a wide spectrum of University of Hawaii research efforts, such as:

● Hurricane research. Wind velocity data acquired from weather balloons and aircraft-borne labs will be analyzed to develop a greater understanding of the forces that drive hurricanes. This will enhance meteorologists' ability to predict the storms.

● Climate modeling. Scientists will investigate the interaction between the oceans and the atmosphere believed to cause long-term climate variations. The research is expected to lead to a more accurate method for predicting changes in the world's climate, which will benefit numerous industrial sectors, including agriculture, manufacturing, and transportation.

● Weather forecasting. Meteorological data will be processed through state-of-the-art computer models to produce weather forecasts for each of Hawaii's counties.

In addition, scientists will rely on the supercomputer for a number of vital research projects in the areas of physics and chemistry. Educational

programs in the university's Department of Information and Computer Sciences will also be developed to train graduate students in

computational science, which involves using high-performance computers for simulation in scientific research projects.

"This supercomputer strengthens our reputation as a location with a burgeoning high technology industry," Hawaii Governor Benjamin

Cayetano said. "It is an opportunity for our students and educators to work with a powerful research tool. This donation by IBM boosts this

Administration's own support of the university's technology-related

Page 96: supercomputer.doc

programs."The

synergy between

UH, MHPCC, and IBM

will provide the

resources needed to establish UH as a leader in research

computing. MHPCC, an

expert in production-

level computing on the SP

supercomputer, is acting as an advisor to UH on a broad range of technical topics and will install and prepare the supercomputer for UH. In addition,

MHPCC and IBM will assist UH researchers in using the new research tool.Located in the Department of Information and Computer Sciences at the

university's Pacific Ocean Science and Technology Building, Blue Hawaii is powered by 32 IBM POWER2 microprocessors, 16 gigabytes of memory

and 493 gigabytes of IBM disk storage. The machine substantially augments the supercomputing power that's based in the state of Hawaii,

already home to MHPCC, one of the world's most prestigious supercomputer facilities.

Together, Blue Hawaii and MHPCC form a powerful technology foundation for the burgeoning scientific research initiatives located in Hawaii. In the past five years, government research grants awarded to Hawaii scientists have increased by 34 percent to $103 million, according to the UH office of research services.

"Scientists at the University of Hawaii are conducting exciting research across a number of important disciplines," said IBM vice president Peter Ungaro. "IBM is proud to work with UH and MHPCC in providing the university with the industry's most popular supercomputer, which will help researchers achieve their important goals more quickly and with better results."

Most Popular Supercomputer

The Blue Hawaii system joins a long roster of IBM SP supercomputers around the world. According to the TOP500 Supercomputer List*, IBM SPs now account for 144 of the world's 500 most powerful high performance computers—more than any other machine. The list is published twice a year by supercomputing experts Jack Dongarra from the University of Tennessee and Erich Strohmaier and Hans Meuer of the University of Mannheim (Germany).

IBM SP supercomputers are used to solve the most complex scientific and business problems. With the IBM SP, scientists can model the effects of the forces exerted by galaxies; corporations can perform complex calculations on massive amounts of data in order to support business decisions; petroleum exploration companies can rapidly process seismic data to determine where they should drill; and company executives seeking to meet Internet demand can enable complex Web-based transactions.

About the University of Hawaii

The University of Hawaii is the state's 10-campus system of public higher education. The 17,000-student Manoa campus is a Carnegie I research university of international standing that offers an extensive array of undergraduate, graduate and professional degrees. The university's research program last year drew $179 million in extramural funding and is widely recognized for its strengths in tropical medicine, evolutionary biology, astronomy, oceanography, volcanology, geology and geophysics, tropical agriculture, electrical engineering and Asian and Pacific studies. Visit UH at www.hawaii.edu.

About MHPCC

MHPCC is ranked among the Top 100 most powerful supercomputer facilities in the world. MHPCC provides DoD, government, private industry, and academic users with access to leading edge, high performance technology.

MHPCC is a center of the University of New Mexico established through a cooperative agreement with the U.S. Air Force Research Laboratory's Directed Energy Directorate. MHPCC is a Distributed Center of the DoD High Performance Computing Modernization Program (HPCMP), a SuperNode of the National Science Foundation's National Computational Science Alliance, and a member of Hawaii's growing science and technology community.


Technology Used in Super Computers

Pipelining


The most straightforward way to get more performance out of a processing unit is to speed up the clock (setting aside, for the moment, fully asynchronous designs, which one doesn't find in this space for a number of reasons). Some very early computers even had a knob to continuously adjust the clock rate to match the program being run.

But there are, of course, physical limitations on the rate at which operations can be performed. The act of fetching, decoding, and executing instructions is rather complex, even for a deliberately simplified instruction set, and there is a lot of sequentiality. There will be some minimum number of sequential gates, and thus, for a given gate delay, a minimum execution time, T(emin). By saving intermediate results of substages of execution in latches, and clocking those latches as well as the CPU inputs/outputs, execution of multiple instructions can be overlapped. Total time for the execution of a single instruction is no less, and in fact will tend to be greater, than T(emin). But the rate of instruction execution, or issue rate, can be increased by a factor proportional to the number of pipe stages.

The technique became practical in the mid-1960s. The Manchester Atlas and the IBM Stretch project were two of the first functioning pipelined processors. From the IBM 360/91 onward, all state-of-the-art scientific computers have been pipelined.
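The issue-rate argument can be made concrete with a little arithmetic, sketched below under idealized assumptions (equal stage delays and no stalls, which real pipelines never quite achieve): a k-stage pipeline finishes N instructions in about k + N - 1 stage times instead of N * k, so the sustained rate approaches one instruction per stage time, a factor of k better than the unpipelined machine.

/* Sketch: idealized pipeline timing -- equal stage delays, no stalls.
 * Unpipelined: N * k stage-times.  Pipelined: k + N - 1 stage-times
 * (k cycles to fill the pipe, then one instruction retires per cycle). */
#include <stdio.h>

int main(void)
{
    const long k = 5;            /* pipeline stages (illustrative) */
    const long n = 1000000;      /* instructions executed          */

    long unpipelined = n * k;
    long pipelined   = k + n - 1;

    printf("unpipelined: %ld stage-times\n", unpipelined);
    printf("pipelined  : %ld stage-times\n", pipelined);
    printf("speedup    : %.3f (approaches k = %ld for large N)\n",
           (double)unpipelined / (double)pipelined, k);
    return 0;
}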

Multiple Pipelines

Not every instruction requires all of the resources of a CPU. In "classical" computers, instructions tend to fall into categories: those which perform memory operations, those which perform integer computations, those which operate on floating-point values, etc. It is thus not too difficult for the processor pipeline to be further broken down "horizontally" into pipelined functional units, executing independently of one another. Fetch and decode are common to the execution of all instructions, however, and quickly become a bottleneck.

Limits to Pipelining

Once the operation of a CPU is pipelined, it is fairly easy for the clock rate of the CPU to vastly exceed the cycle rate of memory, starving the decode logic of instructions. Advanced main memory designs can ameliorate the problem, but there are always technological limits. One simple mechanism to leverage instruction bandwidth across a larger number of pipelines is SIMD (Single Instruction/Multiple Data) processing, wherein the same operation is performed across ordered collections of data. Vector processing is the SIMD paradigm that has seen the most visible success in high-performance computing, but the scalability of the model has also made it appealing for massively parallel designs.

Another way to ameliorate the memory latency effects on instruction issue is to stage instructions in a temporary store closer to the processor's decode logic. Instruction buffers are one such structure, filled from instruction memory in advance of their being needed. An instruction cache is a larger and more persistent store, capable of holding a significant portion of a program across multiple iterations.

With effective instruction cache technology, instruction fetch bandwidth has become much less of a limiting factor in CPU performance. This has pushed the bottleneck forward into the CPU logic common to all instructions: decode and issue. Superscalar design and VLIW architectures are the principal techniques in use today (1998) to attack that problem.

Scalable Vector Parallel Computers

The Scalable Vector Parallel Computer Architecture is an architecture where vector processing is combined with a scalable system design and software. The major components of this architecture are a vector processor as the single processing node, a scalable high performance interconnection network, including the scalability of I/O, and system software which supports parallel processing at a level beyond loosely coupled network computing. The emergence of a new Japanese computer architecture comes as a surprise to many who are used to thinking that Japanese companies never undertake a radical departure from existing architectures. Nonetheless scalable vector parallel computers are an original Japanese development, which keeps the advantages of a powerful single processor but removes the restrictions of shared memory vector multiprocessing.

The basic idea of this new architecture is to implement an existing vector processor in CMOS technology and build a scalable parallel computer out of these powerful single processors. The development is at the same time conservative and innovative in the sense that two successful and meanwhile proven supercomputer design principles, namely vector processing and scalable parallel processing, are combined to give a new computer architecture.


Vector Processing

Vector processing is intimately associated with the concept of a "supercomputer". As with most architectural techniques for achieving high performance, it exploits regularities in the structure of computation, in this case, the fact that many codes contain loops that range over linear arrays of data performing symmetric operations.

The origins of vector architecture lay in trying to address the problem of instruction bandwidth. By the end of the 1960's, it was possible to build multiple pipelined functional units, but the fetch and decode of instructions from memory was too slow to permit them to be fully exploited. Applying a single instruction to multiple data elements (SIMD) is one simple and logical way to leverage limited instruction bandwidth.
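The kind of loop this exploits looks like the illustrative fragment below (not drawn from any particular machine): one operation applied uniformly across linear arrays. A vector processor issues a single instruction for a whole strip of elements, strip-mined here in chunks of 64 to mirror a typical hardware vector register length, instead of fetching and decoding a separate instruction for each element.

/* Sketch: a vectorizable loop (y = a*x + y), strip-mined into chunks of 64
 * elements to mirror a typical hardware vector register length.  On a vector
 * machine each inner strip maps to a handful of vector instructions rather
 * than 64 separately fetched and decoded scalar instructions. */
#include <stdio.h>

#define N    1000
#define VLEN 64   /* illustrative vector register length */

static void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i += VLEN) {
        int strip = (n - i < VLEN) ? (n - i) : VLEN;
        /* One "vector operation" per strip: same op, many data elements. */
        for (int j = 0; j < strip; j++)
            y[i + j] = a * x[i + j] + y[i + j];
    }
}

int main(void)
{
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

    saxpy(N, 2.0f, x, y);
    printf("y[999] = %f\n", y[N - 1]);   /* expect 2*999 + 1 = 1999 */
    return 0;
}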

Latest Technology


Andromeda™

The latest technology from Super Computer, Inc., code-named "Andromeda™", provides game developers a rich set of tools designed to make game implementation easier and lower the time to market. Other features are designed to make hosting a game server and server administration easy. In today's competitive gaming market, it is not only important to get your game to the stores quickly, but to keep it selling. All of Andromeda™'s features help you do just that.

Master Browser Service (MBS)

"Push" technology allows your servers to be accurately listed in a single repository accessible from in-game browsers and third party applications. Andromeda™'s MBS uses a proprietary protocol that allows for both forwards and backwards compatibility, so that applications that retrieve information from the master browser do not need to go through updates to continue to show your servers. Because Andromeda™'s MBS is located in the fastest data center in the world, you know that players can retrieve server lists quickly.

Server Authentication Service (SAS)

Player and server authentication services provided by Andromeda™'s SAS are as flexible as they are robust. You can store everything from basic authentication information to full player settings, allowing players to use their preferred configuration--even if they are on a friend's computer.

SAS also works with Andromeda™'s Pay-Per-Play Service and Games-On-Demand Service to control whether a player may join a server.

Statistics Tracking Service (STS)

Andromeda™'s STS uses streaming to collect player and server statistics in real time. Information is fed into the STS database and processed, resulting in accurate player and server statistics.


Dynamic Content Delivery System (DCDS)

In-game dialogs change. Layouts change. Why patch? Andromeda™'s DCDS system can deliver in-game content on the fly. Caching technology allows Andromeda™ to update only what has changed--including game resources. Deliver dynamic content, such as news, forums, and server rental interfaces without having

to patch.

Combined with Andromeda™'s Server Authentication Service and Pay-Per-Play Service, DCDS can also deliver an interface to players to allow them to add more time to their account without leaving the game.

Remote Console Interface

The remote console interface provides all the tools to allow server administrators to remotely control the server using a standard

remote console language.

CVAR Management Interface

Another server administration tool, the CVAR Management Interface manages server settings and, combined with the Remote Console Interface, can restrict the ability of players to change certain CVARs using the remote console.

Netcode Interface

Solid netcode is the backbone of any multiplayer game. High-performance, reliable interfaces allow game developers to reduce their time to market and concentrate on developing their game.

Pay-Per-Play Service

Andromeda™'s Pay-Per-Play service allows game publishers to charge players by the minute, hour, day, or month to play online.

This technology is both secure and reliable.

Games-On-Demand Service

This service allows a game server to be created on demand for whatever duration the customer desires. Whether they want the server for a few hours to play a match, or the same time every week for practices, this solution is an inexpensive way to obtain a high-performance server that meets tournament regulations.

Other Features

Andromeda™'s tools integrate together. For example, your in-game browser can pull up a player summary and their player stats all through the MBS without having to connect to the SAS and STS. High availability clustering technology ensures that Andromeda™ is available 24/7/365.

ClanBuilder™ features total clan resource management including a roster, calendar, news board, gallery, links, remote game server

console and robust security.

The ClanBuilder™ roster is a fully customizable roster allowing guild masters to create their own fields, squads, ranks, and

awards. The roster supports ICQ allowing visitors to add guild members to their ICQ contact list or send a message directly from

the roster at the click of a button.

The ClanBuilder™ calendar allows events to be created for the guild as a whole or a specific division. Recurrence patterns allow guild masters to easily create recurring events. ClanBuilder™ will even adjust the date and time of an event based on the time zone a visitor picks.

The ClanBuilder™ news board is a powerful tool that allows news articles to be posted by any visitor, with approval by the guild master required before they appear on the news board. Like the rest of the ClanBuilder™ tools, division-specific news can be posted.

The ClanBuilder™ links and gallery allow screen-shots and links to be easily added and categorized. Images are stored

automatically on the ClanBuilder™ server without the need for FTP or your own web space.

The ClanBuilder™ game servers tool provides not only a place to list your clan game servers, but also allows remote console to be used with supported games.

The ClanBuilder™ security model is simple yet powerful allowing guild masters to delegate administrative tasks to specific members

and customize which members can make changes to any tool.


ClanBuilder™ offers a rich set of tools allowing full customization down to the individual colors on the screen. Easy to use, yet

powerful, no clan can afford to be without this time-saving tool.

Get into the game fast, with ClanBuilder™.

Architectural Themes


Looking over the numerous independent high-performance computer designs in the 1970's-90's, one can discern several themes or schools of thought which span a number of architectures. The following pages will try to put those in some kind of perspective.

Contributions of additional links to seminal articles in these categories are more than welcome. Email to: [email protected]

Pipelining

Dealing with Memory Latency

Vector Processing

Parallel Processing

Massive Parallelism

Commoditization


SUPER COMPUTER SELECTS WorldCom FOR HOSTING SERVICES

WorldCom, the leading global business data and Internet communications provider, announced that Super Computer Inc, a revolutionary online computer gaming company, has selected WorldCom Internet Colocation Services to power its cutting-edge services which support millions of online video game players.

Super Computer Inc (SCI) was created for the 13 million online video game players to improve the quality of their experience while addressing the high cost of the game server rental industry. To establish a leadership position in the rapidly growing market, SCI developed a supercomputer capable of delivering game server access with unsurpassed speeds, reliability and value.

By choosing WorldCom, SCI benefits from the cost and operating efficiencies of colocation outsourcing, while leveraging the performance, security and reliability of WorldCom's world-class Internet data centers and direct, scalable, high-speed connections to the facilities-based WorldCom global IP network. Financial terms of the agreement were not disclosed.

"WorldCom has long been a tier one Internet provider, and unquestionably has the most superior and expansive IP network on the planet," said Jesper Jensen, president of Super Computer. "The two biggest factors for success

Page 115: supercomputer.doc

in online gaming are

network

performance and equipment performance. We require access to the best network in order for us to be successful."

SCI's colocation solution ensures that its gaming servers will be up and running 24/7, even at peak usage times. With its industry-leading six-point Internet colocation performance guarantees, WorldCom assures 100 percent network availability, 100 percent power availability and 99.5 percent packet delivery, as well as other key network performance metrics. In addition, Super Computer chose WorldCom because its network could easily scale and provide the necessary bandwidth - beyond speeds of 1.6 Gbps - as it grows.

"The Super Computer agreement highlights the fact that WorldCom is a viable provider today and in the future," said Rebecca Carr, WorldCom director of Global Hosting Services. "By outsourcing to WorldCom, SCI can leverage our Internet expertise and our global Internet footprint to deliver the world-class speed and reliability of our network to its customers."

Currently colocated in WorldCom's state-of-the-art Atlanta data center, the company plans to expand into additional centers around the globe over the next nine months.

With the support of its managed Web and application hosting affiliate Digex, WorldCom offers the full array of Web hosting solutions from colocation to shared, dedicated and custom managed hosting. Each runs over WorldCom's facilities-based global network, through WorldCom's state-of-the-art data centers across the U.S., Canada, Europe and Asia Pacific. Web hosting services are part of the full continuum of advanced data communications solutions WorldCom provides to business customers around the world.

About Super Computer Inc

Super Computer Inc (SCI) is one of the world's fastest growing game hosting solutions. With the invention of the world's first supercomputer for

FPS game hosting, the Jupiter Cluster, and the Callisto mini-cluster for Broadband Providers, SCI is working with the gaming industry to

consolidate the game hosting market. With the move to WorldCom's network, SCI now offers the world's fastest gaming technology and

connectivity. SCI introduced the concept of games hosted on supercomputers to the consumers in June 2002.

About WorldCom Inc

WorldCom Inc is a pre-eminent global communications provider for the digital generation, operating in more than 65 countries. With one of the most expansive, wholly-owned IP networks in the world, WorldCom provides innovative data and Internet services for businesses to communicate in today's market. In April 2002, WorldCom launched The Neighborhood built by MCI -- the industry's first truly any-distance, all-inclusive local and long-distance offering to consumers.
