Applications Requiring An Experimental Optical Network
Invited Keynote, I-Light Applications Workshop
Indiana University-Purdue University Indianapolis, December 4, 2002
Dr. Larry Smarr
Director, California Institute for Telecommunications and Information Technologies
Harry E. Gruber Professor, Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD
Closing in on the Dream: A High Performance Collaboration Grid
“Using satellite technology…demo of what it might be like to have high-speed fiber-optic links between advanced computers in two different geographic locations.”
― Al Gore, Senator; Chair, US Senate Subcommittee on Science, Technology and Space
“What we really have to do is eliminate distance between individuals who want to interact with other people and with other computers.”
― Larry Smarr, Director, National Center for Supercomputing Applications, UIUC
Alliance 1997: Collaborative Video Production via Tele-Immersion and Virtual Director
Donna Cox, Bob Patterson, Stuart Levy, Glen Wheless
www.ncsa.uiuc.edu/People/cox/
Alliance Project Linking CAVE, Immersadesk, Power Wall, and Workstation
UIC
• Fifteen Countries/Locations Proposing 28 Demonstrations: Canada, CERN, France, Germany, Greece, Italy, Japan, The Netherlands, Singapore, Spain, Sweden, Taiwan, United Kingdom, United States
• Applications Demonstrated: Art, Bioinformatics, Chemistry, Cosmology, Cultural Heritage, Education, High-Definition Media Streaming, Manufacturing, Medicine, Neuroscience, Physics, Tele-science
iGrid 2002, September 24-26, 2002, Amsterdam, The Netherlands
www.startap.net/igrid2002 (UIC)
Sponsors: HP, IBM, Cisco, Philips, Level (3), Glimmerglass, etc.
iGrid 2002 Was Sustaining 1-3 Gigabit/s; Total Available Bandwidth Between Chicago and Amsterdam Was 30 Gigabit/s
The Move to Data-Intensive Science & Engineering: e-Science Community Resources
ATLAS
Sloan Digital Sky Survey
LHC
ALMA
Why Optical Networks Are Emerging as the 21st Century Driver for the Grid
Scientific American, January 2001
[Diagram: applications and middleware sitting on a control plane that manages dynamically allocated lightpaths, switch fabrics, clusters, and physical monitoring]
A LambdaGrid Will Be the Backbone for an e-Science Network
Source: Joe Mambretti, NU
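To make the control-plane idea in this diagram concrete, here is a minimal sketch assuming a hypothetical allocation API: an application (or its middleware) asks the control plane to dedicate 10 Gb/s lambdas between two cluster endpoints. The class name, method signature, and capacity figures are illustrative assumptions, not OptIPuter software or any vendor's interface.

```python
# Hedged sketch of LambdaGrid lightpath allocation (hypothetical API).
class LightpathControlPlane:
    def __init__(self, lambdas_per_link=32):
        self.free = {}                        # (src, dst) -> unallocated lambdas
        self.lambdas_per_link = lambdas_per_link

    def allocate(self, src, dst, gbps_needed, gbps_per_lambda=10):
        """Reserve enough whole lambdas on the src-dst link, or fail."""
        link = (src, dst)
        self.free.setdefault(link, self.lambdas_per_link)
        needed = -(-gbps_needed // gbps_per_lambda)   # ceiling division
        if self.free[link] < needed:
            raise RuntimeError(f"no spare lambdas on {src} -> {dst}")
        self.free[link] -= needed
        return {"link": link, "lambdas": needed,
                "bandwidth_gbps": needed * gbps_per_lambda}

cp = LightpathControlPlane()
print(cp.allocate("UCSD-cluster", "UIC-cluster", gbps_needed=25))
# -> {'link': ('UCSD-cluster', 'UIC-cluster'), 'lambdas': 3, 'bandwidth_gbps': 30}
```

The design point worth noting: the application requests bandwidth, while the control plane decides how many wavelengths that takes. This is what distinguishes a dynamically allocated lightpath from a statically provisioned circuit.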
NSF Defines Three Classes of Networks Beyond the Commodity Internet
• Production Networks (e.g. Internet2)
  – High-Performance Networks
  – Reaches All US Researchers
  – 24/7 Reliable
• Experimental Networks
  – Trials of Cutting-Edge High-Performance Networks
  – Deliver Advanced Application Needs Unsupported by Production Networks
  – Robust Enough to Support Application-Dictated Development: Software Application Toolkits, Middleware, Computing and Networking
• Research Networks
  – Smaller-Scale Network Prototypes
  – Enable Basic Scientific and Engineering Network Research
  – Testing of Component Technologies, Protocols, Network Architectures
  – Not Expected to Be Persistent
  – Not Expected to Support Production Applications
www.evl.uic.edu/activity/NSF/index.html
Local and Regional Lambda Experimental Networks Are Achievable and Practical
• Several GigaPOPs and States Are Building
  – Multi-Lambda Metropolitan Experimental Networks
  – Lighting up Their Own Dark Fiber (I-WIRE, I-Light, CENIC CalREN-XD)
  – With Hundreds of Lambdas by 2010 (see the capacity sketch after this list)
• OptIPuter Funded to Research LambdaGrid
  – Middleware and Control Plane
  – Application Driven
• Substantial State and Local Funds Can Be Heavily Leveraged by an NSF Experimental Networks Program
  – Cross-Country Interconnection (National Light Rail)
  – Persistent Support of Emerging Experimental Networks
  – First NSF Workshop: UIC, December 2001
  – Second NSF Workshop: UCI, May 2002
  – NSF RFP Expected by Winter 2003
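The "hundreds of lambdas" projection follows from simple DWDM arithmetic: strands of dark fiber times wavelengths per strand. The sketch below works one example; the strand and channel counts are assumptions for illustration, not figures from I-WIRE, I-Light, or CalREN-XD.

```python
# Rough capacity arithmetic for a multi-lambda metro network.
# Strand and channel counts are illustrative assumptions.
fiber_pairs = 16          # dark-fiber pairs lit by a regional network
dwdm_channels = 32        # DWDM wavelengths (lambdas) per fiber pair
gbps_per_lambda = 10      # OC-192 / 10 GigE per wavelength

lambdas = fiber_pairs * dwdm_channels
capacity_tbps = lambdas * gbps_per_lambda / 1000
print(f"{lambdas} lambdas ~ {capacity_tbps:.1f} Tb/s aggregate")
# -> 512 lambdas ~ 5.1 Tb/s aggregate
```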
The Next S-Curves of Networking: Exponential Technology Growth
[Chart: successive technology S-curves plotting Technology Penetration (0% to 100%) against Time (~1990s to 2010); each curve rises from Research through Experimental/Early Adopters to Production/Mass Market, with one curve spanning the Gigabit Testbeds, Connections Program, and Internet2 Abilene, and the next spanning DWDM, Experimental Networks, and Lambda Grids]
Cal-(IT)2
An Integrated Approach to the Future Internet
www.calit2.net
220 UC San Diego & UC Irvine Faculty Working in Multidisciplinary Teams
With Students, Industry, and the Community
The State’s $100 M Creates Unique Buildings, Equipment, and Laboratories
Data Intensive Scientific Applications Require Experimental Optical Networks
• Large Data Challenges in Neuro and Earth Sciences
  – Each Data Object is 3D and Gigabytes in Size
  – Data are Generated and Stored in Distributed Archives
  – Research is Carried Out on a Federated Repository
• Requirements (see the data-access sketch after this list)
  – Computing: PC Clusters
  – Communications: Dedicated Lambdas Over Fiber
  – Data: Large Peer-to-Peer Lambda-Attached Storage
  – Visualization: Collaborative Volume Algorithms
• Response
  – The OptIPuter Research Project
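As a rough illustration of the data pattern these requirements describe (gigabyte-scale 3D objects pulled from distributed archives over dedicated lambdas), here is a minimal sketch assuming hypothetical mirror URLs and a simulated chunk fetch; it is not BIRN or OptIPuter software.

```python
# Hedged sketch: fetch one multi-gigabyte 3D volume in parallel chunks
# from federated archives. URLs, sizes, and fetch logic are invented.
from concurrent.futures import ThreadPoolExecutor

ARCHIVES = ["http://archive-a.example/vol42",   # hypothetical mirrors
            "http://archive-b.example/vol42"]
CHUNK_MB = 64
VOLUME_MB = 4096                                # one ~4 GB 3D data object

def fetch_chunk(offset_mb):
    """Pick a mirror round-robin; real code would issue a range request."""
    mirror = ARCHIVES[(offset_mb // CHUNK_MB) % len(ARCHIVES)]
    return (mirror, offset_mb, CHUNK_MB)        # simulated transfer

with ThreadPoolExecutor(max_workers=16) as pool:
    chunks = list(pool.map(fetch_chunk, range(0, VOLUME_MB, CHUNK_MB)))
print(f"fetched {len(chunks)} chunks of {CHUNK_MB} MB each")  # 64 chunks
```

The point of the parallelism is that a single TCP stream rarely fills a 10 Gb/s lambda; striping one large object across many concurrent transfers is how lambda-attached storage keeps the pipe full.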
The Biomedical Informatics Research Network: a Multi-Scale Brain Imaging Federated Repository
BIRN Test-beds: Multiscale Mouse Models of Disease, Human Brain Morphometrics, and FIRST BIRN (a 10-site project for fMRIs of schizophrenics)
NIH Plans to Expand to Other Organs and Many Laboratories
Microscopy Imaging of Neural Tissue
Marketta Bobik, Francisco Capani & Eric Bushong
Confocal image of a sagittal section through rat cortex, triple-labeled for glial fibrillary acidic protein (blue), neurofilaments (green), and actin (red)
Projection of a series of optical sections through a Purkinje neuron, revealing both the overall morphology (red) and the dendritic spines (green)
http://ncmir.ucsd.edu/gallery.html
Interactive Visual Analysis of Large Datasets: East Pacific Rise Seafloor Topography
NSF Experimental Network Research Project The “OptIPuter”
• Driven by Large Neuroscience and Earth Science Data
  – NIH Biomedical Informatics Research Network
  – NSF EarthScope (UCSD SIO)
• Removing Bandwidth as a Constraint
  – Links Computing, Storage, Visualization, and Networking
  – Software and Systems Integration Research Agenda
• NSF Large Information Technology Research Proposal
  – UCSD and UIC Lead Campuses
  – USC, UCI, SDSU, NW Partnering Campuses
  – Industrial Partners: IBM, Telcordia/SAIC, CENIC, Chiaro Networks, IXIA
• PI: Larry Smarr; Funded at $13.5M Over Five Years
  – Start Date: October 1, 2002
www.calit2.net/news/2002/9-25-optiputer.html
From SuperComputers to SuperNetworks: Changing the Grid Design Point
• The TeraGrid is Optimized for Computing
  – 1024-Node IA-64 Linux Cluster
  – Assume 1 GigE per Node = 1 Terabit/s Aggregate I/O
  – Grid Optical Connection: 4 x 10 Gb/s Lambdas = 40 Gigabit/s
  – Optical Connections Are Only ~4% of Bisection Bandwidth
• The OptIPuter is Optimized for Bandwidth (arithmetic reproduced in the sketch below)
  – 32-Node IA-64 Linux Cluster
  – Assume 1 GigE per Processor = 32 Gigabit/s Aggregate I/O
  – Grid Optical Connection: 4 x 10 GigE = 40 Gigabit/s
  – Optical Connections Are Over 100% of Bisection Bandwidth
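The bisection-bandwidth percentages on this slide reduce to two divisions; the sketch below simply reproduces the arithmetic using the node counts and link speeds given above.

```python
# Reproduce the slide's bisection-bandwidth arithmetic.
def bisection_ratio(nodes, gbps_per_node, optical_gbps):
    cluster_io = nodes * gbps_per_node   # aggregate cluster I/O, Gb/s
    return optical_gbps / cluster_io     # fraction the WAN link can carry

print(f"TeraGrid:  {bisection_ratio(1024, 1, 40):.0%}")  # ~4%
print(f"OptIPuter: {bisection_ratio(32, 1, 40):.0%}")    # 125%, i.e. >100%
```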
OptIPuter Inspiration: Node of a 2009 PetaFLOPS Supercomputer
[Diagram: node with multiple VLIW/RISC cores (24 GFLOPS at 6 GHz each), two 96 MB second-level caches, 64-byte-wide 160 GB/s coherence paths, and a crossbar to highly interleaved 4 GB DRAM at 640 GB/s, feeding a multi-lambda all-optical network (AON)]
Source: Steve Wallach, Supercomputing 2000 Keynote
Global Architecture of a 2009 COTS PetaFLOPS System
[Diagram: 64 multi-die multiprocessor boxes (128 die per box, 4 CPUs per die) interconnected via a central all-optical switch, with I/O out to the LAN/WAN; at 10 meters of fiber per link, propagation delay is roughly 50 nanoseconds, as checked in the sketch below]
Source: Steve Wallach, Supercomputing 2000 Keynote
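The "10 meters = 50 nanoseconds" figure in the diagram follows from the speed of light in fiber, roughly two-thirds of its vacuum speed; the sketch below checks that arithmetic.

```python
# Check the diagram's rule of thumb: 10 m of fiber ~ 50 ns delay.
C = 3.0e8               # speed of light in vacuum, m/s
N_FIBER = 1.5           # approximate refractive index of optical fiber
v = C / N_FIBER         # signal speed in fiber: 2e8 m/s

delay_ns = 10 / v * 1e9
print(f"{delay_ns:.0f} ns")   # -> 50 ns
```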
OptIPuter NSF Proposal Partnered with National Experts and Infrastructure
[Map: proposed OptIPuter partner sites and optical infrastructure spanning Vancouver, Seattle, Portland, San Francisco, Los Angeles, San Diego (SDSC), Chicago (UIC, NU), NYC, Atlanta, PSC, and NCSA, with partner campuses USC, UCSD, SDSU, and UCI; interconnection via CENIC, Pacific Light Rail, TeraGrid DTFnet, CA*net4, SURFnet, CERN, AMPATH, and Asia-Pacific links]
Source: Tom DeFanti and Maxine Brown, UIC
• Cluster – Disk
• Disk – Disk
• Viz – Disk
• DB – Cluster
• Cluster – Cluster
OptIPuter LambdaGrid Enabled by Chiaro Networking Router
www.calit2.net/news/2002/11-18-chiaro.html
[Diagram: a Chiaro Enstara router at the core, linking campus switches that serve Medical Imaging and Microscopy; Chemistry, Engineering, and the Arts; the San Diego Supercomputer Center; and the Scripps Institution of Oceanography]
Image Source: Phil Papadopoulos, SDSC
The UCSD OptIPuter Deployment
[Campus map, ½-mile scale: Phase I (Fall 2002) and Phase II (2003) OptIPuter nodes at SIO, SDSC, the SDSC Annex, CRCA, Physical Sciences-Keck, the School of Medicine (SOM), JSOE, the Preuss School, and Sixth College, plus a collocation point and Node M]
The OptIPuter Experimental UCSD Campus Optical Network
[Campus network diagram: Phase I (Fall 2002) and Phase II (2003) sites serving Earth Sciences, SDSC, the Arts, Chemistry, Medicine, Engineering, a High School, and an Undergraduate College, connected through the Chiaro Router and a collocation point, with the SDSC Annex and a production router uplink to CENIC]
Source: Phil Papadopoulos, SDSC; Greg Hidley, Cal-(IT)2
Planned Chicago Metro Electronic Switching OptIPuter Laboratory
[Diagram: electronic switching laboratory with international, national, and metro GE and 10GE links, 16 x 1 GE and 16 x 10 GE switch ports, and a 16-processor McKinley system at the University of Illinois at Chicago]