1 End-to-end Monitoring of High Performance Network Paths Les Cottrell, Connie Logg, Jerrod Williams SLAC, for the ESCC meeting, Columbus Ohio, July 2004 www.slac.stanford.edu/grp/scs/net/talk03/escc-jul04.ppt Partially funded by DOE/MICS Field Work Proposal on Internet End-to-end Performance Monitoring (IEPM), also supported by IUPAP
Transcript
1
End-to-end Monitoring of High Performance Network Paths
Les Cottrell, Connie Logg, Jerrod Williams, SLAC, for the ESCC meeting, Columbus, Ohio, July 2004
www.slac.stanford.edu/grp/scs/net/talk03/escc-jul04.ppt
Partially funded by the DOE/MICS Field Work Proposal on Internet End-to-end Performance Monitoring (IEPM); also supported by IUPAP
2
Need
• Data intensive science (e.g. HENP) needs to share data at high speeds
• Needs high-performance, reliable e2e paths and the ability to use them
• End users need long and short term estimates of network and application performance for planning, setting expectations & troubleshooting
• You can’t manage what you can’t measure
3
IEPM-BW
• Toolkit:
  – Enables regular, E2E measurements with user-selectable:
    • Tools: iperf (single & multi-stream), bbftp, bbcp, GridFTP, ping (RTT), traceroute
    • Periods (with randomization)
    • Remote hosts to monitor
  – Hierarchical, to match the tiered approach of the BaBar & LHC computation / collaboration infrastructures
  – Includes:
    • Auto clean-up of hung processes at both ends
    • Management tools to look for failures (unreachable hosts, failing tools etc.)
    • Web navigation of results
    • Visualization of data as time-series, histograms, scatter plots, tables
    • Access to data in machine-readable form
    • Documentation on host requirements etc., program logic manuals, methods
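The randomized measurement periods above can be sketched as follows. This is a minimal illustration, not the actual IEPM-BW scheduler; the 90-minute base period comes from a later slide, while the jitter fraction and function name are assumptions:

```python
import random

def next_run_offset(base_period_s, jitter_frac=0.15):
    """Seconds until the next measurement run, with random jitter
    so probes from many monitoring hosts do not synchronize."""
    jitter = base_period_s * jitter_frac
    return base_period_s + random.uniform(-jitter, jitter)

# High-impact tools (iperf, bbftp, GridFTP) run at roughly
# 90 +/- 15 minute intervals
print(next_run_offset(90 * 60))
```

Randomizing the period keeps many monitors from firing their high-impact probes at the same instant on a shared path.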
4
Requirements
• Monitoring toolkit installed on a Linux monitoring host
  – Host provided & administered by monitoring site personnel
  – No need for root privileges
  – Appropriate iperf, bbftp etc. ports to be opened
  – SLAC can do initial install & configuration for the monitoring host
    » ~50 line configuration file for each remote host: tells where directories and applications are located, options for various tools etc. (mainly defaults)
• Small toolkit installed at remote (monitored) hosts
• Ssh access to an account at remote hosts
  – This is the biggest problem with deployment
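A per-remote-host configuration file of the kind described above might be read like this. The file format, section name, and every key below are hypothetical (the talk does not show the real IEPM-BW file); this is only a sketch of the idea of per-host defaults:

```python
import configparser

# Hypothetical fragment of a per-remote-host config file;
# the real IEPM-BW format and key names are not shown in the talk.
SAMPLE = """
[remote-host]
ssh_account = iepm
iperf_path  = /usr/local/bin/iperf
bbftp_path  = /usr/local/bin/bbftp
iperf_port  = 5001
streams     = 1
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
print(cfg["remote-host"]["iperf_path"])
```

Keeping one small file per remote host makes it easy for SLAC to do the initial install with mainly-default values.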
5
Achievable throughput & file transfer
• IEPM-BW
  – High impact (iperf, bbftp, GridFTP …) measurements at 90 ± 15 min intervals
[Time-series plot with selectable focal area: iperf, iperf 1-stream, bbftp and abing throughput, min & avg RTT, with forward and reverse route-change markers]
6
Visualization: traceroutes
• Compact table to see correlations between many routes
• Identify significant changes in routes
  – Differences in > 1 hop; NOT the same first 3 octets; NOT the same AS; interfaces, unreachable end host, stutters, multi-homed end host
• Note, we observe:
  – Most route changes (> 98%) do not result in significant performance changes
  – Many performance changes (~50 ± 20%) are NOT due to route changes
    • Applications, host congestion, level 2 changes etc.
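The "significant change" heuristic above (more than one hop different, after discounting hops that share the first three octets or the same AS) can be roughly sketched as below. This is an illustrative approximation, not the production logic; the function and variable names are invented:

```python
def same_subnet(a, b):
    """Same first 3 octets: likely the same router answering on a
    different interface address, so not a significant change."""
    return a.split(".")[:3] == b.split(".")[:3]

def significant_change(route_a, route_b, asn):
    """route_a/route_b: hop IP lists; asn: IP -> AS number map.
    Count hops that differ beyond interface/AS equivalence."""
    diffs = 0
    for a, b in zip(route_a, route_b):
        if a == b or same_subnet(a, b):
            continue
        if asn.get(a) is not None and asn.get(a) == asn.get(b):
            continue  # different router, same AS: discounted
        diffs += 1
    diffs += abs(len(route_a) - len(route_b))
    return diffs > 1
```

Filtering out interface-level and intra-AS differences is what lets the table flag only the route changes worth a human look.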
7
Route table example
• Compact, so one can see many routes at once
Screenshot annotations:
  – History navigation
  – Multiple route changes (due to GEANT), later restored to the original route
  – Available bandwidth
  – Raw traceroute logs for debugging
  – Textual summary of traceroutes for email to the ISP
  – Description of route numbers, with date last seen
  – User readable (web table) routes for this host for this day
  – Route # at start of day gives an idea of route stability
  – Mouseover for hops & RTT
8
Another example
Screenshot annotations:
  – TCP probe type
  – Host not pingable
  – Intermediate router does not respond
  – ICMP checksum error
  – Level change
  – Get AS information for routes
9
Topology
• Choose times and hosts and submit request
[Topology graph linking DL, CLRC, IN2P3, CESnet, ESnet, JAnet and GEANT nodes]
• Nodes colored by ISP
• Mouseover shows node names
• Click on a node to see subroutes
• Click on an end node to see its path back
• Also can get raw traceroutes with AS’s
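A topology graph like the one described can be built by merging the hop lists from many traceroutes into a single adjacency map. A minimal sketch (names and structure assumed, not taken from the IEPM code):

```python
from collections import defaultdict

def merge_routes(routes):
    """Merge per-destination traceroute hop lists into one
    adjacency map, the basis of a topology tree/graph."""
    adj = defaultdict(set)
    for hops in routes:
        for a, b in zip(hops, hops[1:]):
            adj[a].add(b)
    return adj

# Hypothetical hop names, for illustration only
routes = [
    ["slac-gw", "esnet-1", "esnet-2", "caltech"],
    ["slac-gw", "esnet-1", "geant-1", "in2p3"],
]
print(dict(merge_routes(routes)))
```

Shared early hops collapse into common tree branches, which is why the rendered graph fans out from the monitoring host toward the remote sites.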
• Maxim Grigoriev (FNAL) – event detection, IEPM visualization, major monitoring site
• Ruchi Gupta (Stanford) – event visualization
• Prof Arshad Ali & Fahad Khalid (NIIT, Pakistan) – data collection after event
• Rich Carlson (I2) – NDT
21
Thanks: on-going
• Foreign:
  – Andrew Daviel (TRIUMF), Simon Leinen (SWITCH), Olivier Martin (CERN), Sven Ubik (CESnet), Kars Ohrenberg (DESY), Bruno Hoeft (FZK), Dominique (IN2P3), Fabrizio Coccetti (INFN), Cristina Bulfon (INFN), Yukio Karita (KEK), Takashi Ichihara (RIKEN), Yoshinori Kitasuji (APAN), Antony Antony (NIKHEF), Arshad Ali (NIIT), Serge Belov (BINP), Robin Tasker (DL & RAL), Yee Ting Lee (UCL), Richard Hughes-Jones (Manchester)
• US:
  – Shawn McKee (Michigan), Tom Hacker (Michigan), Eric Boyd (I2), Stanislav Shalunov (SOX), George Uhl (GSFC), Brian Tierney (LBNL), John Hicks (Indiana), John Estabrook (UIUC), Maxim Grigoriev (FNAL), Joe Izen (UT Dallas), Chris Griffin (U Florida), Tom Dunigan (ORNL), Dantong Yu (BNL), Suresh Singh (Caltech), Chip Watson (JLab), Robert Lukens (JLab), Shane Canon (NERSC), Kevin Walsh (SDSC), David Lapsley (MIT/Haystack/ISI-E)
22
More information
• IEPM-BW home page
  – http://www-iepm.slac.stanford.edu/bw/
• Comparison of Internet E2E Measurement
• IEPM Web Services
  – http://www-iepm.slac.stanford.edu/tools/web_services/
23
Extra Slides
24
Web Services
• See http://www-iepm.slac.stanford.edu/tools/web_services/
• Working for: RTT, loss, capacity, available bandwidth, achievable throughput
• No schema defined for traceroute (hop-list)
• PingER:
    <message name="GetPathDelayRoundTripInput">
      <part name="startTime" type="xsd:string"/>
      <part name="endTime" type="xsd:string"/>
      <part name="destination" type="xsd:string"/>
    </message>
• Also dups, out of order, IPDV, TCP throughput estimate
• Required to provide packet size, units, timestamp, src, dst
  – path.bandwidth.available, path.bandwidth.utilized, path.bandwidth.capacity
• Mainly for recent data; need to make real time data accessible
• Used by MonALISA, so need coordination to change definitions
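A client request for the GetPathDelayRoundTrip operation above could be assembled as follows. Only the part names (startTime, endTime, destination) come from the WSDL fragment on this slide; the service namespace URI and function name are placeholders, and a real client would take them from the WSDL at the URL above:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Placeholder namespace; the real one is defined in the PingER WSDL.
SVC_NS = "urn:pinger"

def build_request(start, end, destination):
    """Build a SOAP envelope for GetPathDelayRoundTrip."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetPathDelayRoundTrip")
    for name, val in (("startTime", start), ("endTime", end),
                      ("destination", destination)):
        ET.SubElement(op, name).text = val
    return ET.tostring(env, encoding="unicode")

print(build_request("2004-07-01", "2004-07-28", "caltech.edu"))
```

The same three-part shape applies to the other measurement types once their input messages are defined.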
25
Perl access to PingER
26
PingER WSDL
27
Output from script
28
Perl AMP traceroute
29
AMP traceroute output
30
Intermediate term access
• Provide access to analyzed data in tables via .tsv format download from web pages.
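A downloaded .tsv table of the analyzed data parses directly with the standard csv module. The column names below are hypothetical, chosen only to mimic the shape of such a table:

```python
import csv
import io

# Hypothetical excerpt of an analyzed-data .tsv download;
# the real column names may differ.
TSV = ("date\thost\tiperf_mbps\n"
       "2004-07-01\tcaltech\t820\n"
       "2004-07-02\tcaltech\t95\n")

rows = list(csv.DictReader(io.StringIO(TSV), delimiter="\t"))
throughputs = [float(r["iperf_mbps"]) for r in rows]
print(throughputs)
```

Tab-separated downloads keep the web tables trivially machine-readable without a web-services round trip.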
31
Bulk Data
• For long term detailed data, we tar and zip the data on demand. Mainly for PingER data.
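Building such an archive on demand, in memory, can be sketched with the standard tarfile module (function name and file layout are invented for illustration; this is not the actual PingER packaging script):

```python
import io
import tarfile

def tar_gz_on_demand(files):
    """files: {name: bytes}. Return a gzipped tar archive as
    bytes, built in memory when a bulk-data request arrives."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

blob = tar_gz_on_demand({"pinger/2004-07.tsv": b"date\trtt\n"})
print(len(blob), "bytes")
```

Packaging at request time avoids keeping large pre-built archives around for rarely-fetched historical data.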
32
[ABwE & iperf bandwidth plots]
• 28 days of bandwidth history; during this time we can see several different situations caused by different routing from SLAC to Caltech:
  – Drop to 100 Mbits/s caused by routing (BGP) errors
  – Drop to 622 Mbits/s path
  – Back to new CENIC path
  – New CENIC path, 1000 Mbits/s
  – Reverse routing changes; forward routing changes
• Scatter plot graphs of iperf versus ABw on different paths (range 20–800 Mbits/s) showing agreement of the two methods (28 days history)
• Also shown: RTT, bbftp, iperf 1 stream
33
Changes in network topology (BGP) can result in dramatic changes in performance
• Snapshot of traceroute summary table
• Samples of traceroute trees generated from the table
• ABwE measurements, one/minute, for 24 hours, Thurs Oct 9 9:00am to Fri Oct 10 9:01am
• Drop in performance (from original path SLAC-CENIC-Caltech to SLAC-ESnet-LosNettos (100 Mbps)-Caltech)
• Back to original path
• Changes detected by IEPM-Iperf and ABwE
• ESnet-LosNettos segment in the path (100 Mbits/s)
[Plot axes: hour vs. remote host; dynamic BW capacity (DBC) and cross-traffic (XT); Available BW = (DBC − XT) Mbits/s]
Notes:
1. Caltech misrouted via the Los-Nettos 100 Mbps commercial network 14:00–17:00
2. ESnet/GEANT working on routes from 2:00 to 14:00
3. A previous occurrence went unnoticed for 2 months
4. Next step is to auto-detect and notify
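The available-bandwidth arithmetic from the plot, plus a toy stand-in for the auto-detection mentioned in note 4, can be sketched as follows. The drop threshold and function names are assumptions, not the eventual IEPM event-detection algorithm:

```python
def available_bw(dbc, xt):
    """ABwE-style estimate: available bandwidth = dynamic
    bottleneck capacity minus cross-traffic, in Mbits/s."""
    return dbc - xt

def step_drop(history, latest, factor=0.5):
    """Flag when the latest sample falls below `factor` of the
    recent mean: a toy detector for drops like the misroute
    onto the 100 Mbits/s Los-Nettos segment."""
    mean = sum(history) / len(history)
    return latest < factor * mean

print(available_bw(1000, 180))              # Mbits/s on the 1000 Mbits/s path
print(step_drop([820, 790, 840], 95))       # misroute via 100 Mbps path
```

Even a crude threshold like this would have surfaced the two-month unnoticed occurrence mentioned in note 3.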