Smithsonian Network Infrastructure Program ITPA Meeting
Martin Beckman, Director IT Engineering and Plans
6 November 2014

Transcript
Page 1:

Smithsonian Network Infrastructure Program

ITPA Meeting

Martin Beckman, Director IT Engineering and Plans

6 November 2014

Page 2:

BSEE – USMA, West Point; MA – Foreign Policy and National Strategy.

Retired Army Colonel (Infantry) after 30 years: 7 years Active, 23 years Reserves, with 9 months in Bosnia and 5 years at USASOC, Ft Bragg, NC.

1992-1995: USASOC, Fort Bragg, NC. Built the ASOC Command Center and the SOCOM COOP. Implemented encrypted IP over HF and VHF.

1996-2001: Pentagon. Designed and built the pre-9/11 core network (FDDI) for the Unclassified and Secret networks.

2001-2007: DISA. Systems and Network Engineer on NIPRNet, SIPRNet, and the IPv6 Pilot.

2007-2012: Designed and built the Virtualization Environment and the Private Cloud Infrastructure (VIDAR/DAVE).

2012-Present: Smithsonian Institution. Director IT Operations and now Director IT Engineering and Plans. Backbone Network and Private Cloud.

Background

Page 3:

Challenges:

The buildings themselves range in age from the mid-1800s (the Castle, the Renwick Gallery, and the Arts and Industries Building) to the 21st century (NMAI).

The main data center for all of the activities is in Herndon, Virginia, over 30 miles from the National Mall.

The ongoing efforts in digitization, genomic research, enhanced use of technology to support the exhibits, and demands for data storage and access have pushed the current infrastructure to capacity.

Fixed or declining budgets and insufficient manpower limit the speed, flexibility, and responsiveness to the Smithsonian community.

No engineering staff, very limited technical expertise, and core circuits over-utilized (90% or greater).

Stalled virtualization effort with sub-optimal implementation (switching and storage).

Vision and Goals for the Smithsonian

Page 4:

Goals and Objectives:

64-bit and follow-on 128-bit processing and operating systems for all servers, each accessing the network at dual 10Gbps (physical servers) or dual 40Gbps (virtual server clusters).

64-bit processing and operating systems on all desktops and notebooks with 1Gbps Ethernet access to the network infrastructure.

Desktop video-conferencing and instant messaging within the Smithsonian network at the desktop, notebook, and device.

Utilization of virtual desktops to support mobility for security operations, guest services, volunteers, interns, the NOC, the Help Desk, and training rooms.

Full integration of smartphones and devices connecting over the Wi-Fi infrastructure. Note: 802.11ac adds considerable bandwidth requirements.

Server virtualization using 10Gbps FCoE and fibre-channel storage arrays with 40Gbps access to the network.

Reduced operational recurring costs for circuits, space, HVAC, power consumption, and equipment maintenance.

Long-term storage on 100GB Blu-ray discs in storage arrays.

Vision and Goals for the Smithsonian
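The note that 802.11ac adds considerable bandwidth requirements can be made concrete with a back-of-the-envelope calculation. The AP counts, client counts, and per-client rates below are illustrative assumptions, not Smithsonian figures:

```python
def ap_uplink_demand_gbps(num_aps: int, clients_per_ap: int, per_client_mbps: float) -> float:
    """Aggregate wired-uplink demand (Gbps) if every Wi-Fi client ran at its average rate."""
    return num_aps * clients_per_ap * per_client_mbps / 1000.0

# Hypothetical museum floor: 40 APs, 25 active clients each, averaging 20 Mbps.
demand = ap_uplink_demand_gbps(40, 25, 20)
print(f"Offered load: {demand:.1f} Gbps")  # 20 Gbps offered load exceeds a single 10Gbps uplink
```

Even with modest per-client averages, a dense 802.11ac deployment can oversubscribe the access switching, which is why the wired upgrades above precede the Wi-Fi build-out.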

Page 5:

Goals and Objectives:

Core network between Herndon and the National Mall at quad 100Gbps, increasing in 100Gbps increments on two independent resilient links.

High-performance firewall structure operating at 40Gbps rates, securing all traffic to and from the Mall, the Data Center, and the Internet.

Tier One Internet and Internet2 access rates, each at 10Gbps, scalable to 40Gbps in 10Gbps increments.

All museums, the National Zoo, and Smithsonian sites connect to the backbone network at dual 10Gbps rates (except for small office sites with under 20 personnel).

High-definition (HD 1080p/60) video networking between sites via an independent fibre-channel network that also supports storage operations.

Ubiquitous Wi-Fi access with accountability, authentication, and access control for all staff, guests, exhibits, and the public.

Private Cloud operations to provide 100% resilient network services: Domain Name Service (DNS), IP address assignment via DHCP, Active Directory access control, telephone service (VoIP), and IP video conferencing.

Vision and Goals for the Smithsonian

Page 6:

Goals and Objectives:

Migration from diverse PRI circuits to dual-homed SIP trunks (VA and NYC).

Commercial encrypted remote access services via dynamic and static Virtual Private Networks (VPN) from anywhere globally.

Fault-tolerant power implementations standardizing all equipment rooms, closets, and cabinets.

Standardized 50-micron fiber optical distribution racks to eliminate wall-mounted fiber cabinets.

Reduced operational recurring costs for circuits, space, HVAC, power consumption, and equipment maintenance.

Enhanced digital radio and satellite telephone services as required by remote locations in Arizona, Panama, Florida, New York, and Boston, including flyaway communications support via INMARSAT for teams travelling into remote locations.

Long-term storage on 100GB Blu-ray discs in storage arrays.

Vision and Goals for the Smithsonian
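The 100GB-per-disc figure above makes archive sizing simple arithmetic. A quick sketch, using decimal TB/GB and ignoring filesystem overhead and redundancy copies; the 50TB archive size is a made-up example:

```python
import math

def discs_needed(archive_tb: float, disc_gb: float = 100.0) -> int:
    """Number of Blu-ray discs (100GB each, per the goal above) to hold an archive."""
    return math.ceil(archive_tb * 1000.0 / disc_gb)

print(discs_needed(50))  # a hypothetical 50TB archive fills 500 discs
```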

Page 7:

Optical Networking in Phases – VA, DC, MD

Page 8:

Optical Networking – NYC

Page 9:

1. Internet connection upgraded from 1.4Gbps to dual 10Gbps to Tier 1 Internet and dual 10Gbps to Internet2.

2. DMZ/Firewall upgraded to 10Gbps rates (in progress).

3. Mall sites and Zoo upgraded to dual 10Gbps uplinks.

4. Backbone connections to Herndon increase to dual 40Gbps connections (pending).

Internet/DMZ Upgrade Plan

Page 10:

Private Cloud Core Plan - 2015

Page 11:

IT Infrastructure and Private Cloud Core Points:

Private Cloud requires a resilient high-speed backbone. 1Gbps is the minimum speed and 10Gbps is optimal.

Private and leased fiber fixes the recurring costs for circuits (OPEX) and improves bandwidth and latencies; however, the speed of light is still in effect. Round trip coast to coast is ~5,500 miles, or about 30ms. OEO adds a couple of microseconds per hop.

Meeting the carriers at the meet-me data center saves money and provides for additional services. Examples: MPLS, Comcast, Ashburn-to-NYC switched paths, etc.

10Gbps FCoE reduces the CPU use of servers, since the network is faster than the bus speeds of the servers (and other hosts). Servers = 3.3GHz versus network access = 12GHz.

Leasing fiber requires understanding the optical limits. Standard single mode reaches up to 2 km, Extended Reach (ER) up to 40 km, and optical amplification (OEO) reaches up to 180 miles.

SIP trunking for VoIP is a major cost savings to an enterprise; however, many organizations lack technical depth and understanding.

Contracting for fiber, Internet access, or data center services is a simple matter of writing Statements of Work for the COs. Keep it simple and use others' work as a template.
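The speed-of-light and optical-reach points above can be sketched numerically. The fiber refractive index (~1.468) and the reach thresholds are typical textbook values assumed for illustration; note that at the slower speed of light in fiber the 5,500-mile round trip comes out closer to ~43ms, with the slide's ~30ms corresponding to a vacuum-speed estimate:

```python
MILES_TO_KM = 1.609344
C_VACUUM_KM_PER_MS = 299.792458                   # speed of light in vacuum, km per ms
C_FIBER_KM_PER_MS = C_VACUUM_KM_PER_MS / 1.468    # assumed refractive index of silica fiber

def propagation_delay_ms(path_km: float, oeo_hops: int = 0, per_hop_us: float = 2.0) -> float:
    """Delay over a fiber path, plus a couple of microseconds per OEO regeneration hop."""
    return path_km / C_FIBER_KM_PER_MS + oeo_hops * per_hop_us / 1000.0

def pick_optics(span_km: float) -> str:
    """Rough optic class for a leased-fiber span, using the reach limits cited above."""
    if span_km <= 2:
        return "standard single-mode"
    if span_km <= 40:
        return "extended reach (ER)"
    if span_km <= 180 * MILES_TO_KM:              # ~290 km with optical amplification
        return "optically amplified (OEO)"
    return "regeneration required"

round_trip_km = 5500 * MILES_TO_KM                # ~8,851 km coast to coast and back
print(f"{propagation_delay_ms(round_trip_km):.1f} ms")  # ~43 ms in fiber
```

No amount of bandwidth removes this floor, which is why the meet-me-room and switched-path options matter: they shorten the path, not just widen it.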

Page 12:

Getting things done with IT infrastructure: planning, design, execution. All require LEADERSHIP and THINKING.

Going to leased fiber and DWDM was a follow-on idea after costing an OC48 pipe.

Look for follow-on opportunities. The fiber leases led to discovering Equinix (the new MAE-East) and meeting the carriers for a major reduction in Internet access costs. That in turn led to finding other switched-circuit alternatives (Equinix 10Gbps to NYC, Comcast, MPLS, cloud storage vendors, etc.).

One size does not fit all; the solutions are a combination of technologies to meet the organizational missions and organization. Organizations may need realignment!

A common network infrastructure (including Private Cloud computing) saves time, effort, and money.

Contracting out technical expertise will result in limited progress and improvements to the infrastructure and adoption of new technologies.

IT is personal, and the new appliances and applications are driving the infrastructure. Look ahead and lead the wave or get crushed under the weight of the double overhead coming ashore.

Carriers and vendors need to make a profit and cannot read minds: read, learn, and train. There is NO box, except the one we make for ourselves.

Page 13:

Personal Views about Future Trends and Other Ideas:

Virtual desktops and VDI. Running a Linux/Windows VM on a Mac/Windows desktop. A failed VM can be copied down and installed on the fly within minutes! $$$$

The new 802.11ac Wi-Fi allows for 1Gbps Wi-Fi and will slam existing network switching; however, with encryption of the Wi-Fi connection it can and may supplant using copper to the user in most cases. $$$$

Look out for 128-bit processing and the next generation of processing power, as well as HTML5 enhancements for VoIP/video use.

HPC InfiniBand clusters (20Gbps) for clustered large-scale database computing (1TB or greater tables and indexes). Distributed virtualization has core limits of sockets per server.

Storage will increase, since the resolution of still imaging and video seems to never stop (and it won't). Remember when a 10MB file was considered a beast and a 512KB USB drive was big?

Inventiveness and innovation are singular and personal. It requires leadership to see them within a team or organization. A camel is a horse designed by a committee!

Planning reduces waste. Do you binge-spend at year end, or do you have a deliberate shopping list? No plan survives first contact, but you cannot survive (and thrive beyond) first contact without a plan! $$$$

Virtualize your data centers with a private cloud infrastructure and save power, space, HVAC, and staffing. $$$$

Beware of outsourcing and running blindly after things (SDN, Public Cloud, etc.). $$$$