Technology Integration: RSerPool & Server Load-balancing
Curt Kersey, Cisco Systems
Aron Silverton, Motorola Labs
Transcript
  • Technology Integration: RSerPool & Server Load-balancing. Curt Kersey, Cisco Systems; Aron Silverton, Motorola Labs.

  • Contents
    - Motivation
    - Background: Server Load-balancing, Server Feedback, RSerPool
    - Unified approach: Description, Sample Flows
    - Work Items

  • Assumptions / Terminology
    - All load-balancing examples use TCP/IP as the transport protocol; this could easily be any other protocol (e.g., SCTP).
    - SLB = Server Load-Balancer.
    - Virtual Server = virtual instance of an application running on the SLB device.
    - Real Server = physical machine running application instances.

  • Motivation
    - Highly redundant SLB.
    - More accurate server pooling.

  • Server Load-balancing

  • What does an SLB do?
    - Gets the user to the needed resource:
      - The server must be available.
      - The user's session must not be broken.
      - If the user must reach the same resource over and over, the SLB device must ensure that happens (i.e., session persistence).
    - In order to do this work, the SLB must:
      - Know each server's IP/port and availability.
      - Understand details of some protocols (e.g., FTP, SIP, etc.).
    - Network Address Translation (NAT): packets are rewritten as they pass through the SLB device.

  • Why Load-balance?
    - Scale applications / services.
    - Ease of administration / maintenance: easily and transparently remove physical servers from rotation in order to perform any type of maintenance on that server.
    - Resource sharing: run multiple instances of an application / service on a server, each instance possibly on a different port; load-balance to a different port based on the data analyzed.

  • Load-Balancing Algorithms
    - Most predominant (see the sketch below):
      - Least connections: the server with the fewest flows gets the new flow request.
      - Weighted least connections: associate a weight / strength with each server and distribute load across the server farm based on the weights of all servers in the farm.
      - Round robin: cycle through the servers in the server farm.
      - Weighted round robin: give each server "weight" flows in a row; the weight is set just as in weighted least connections.
    - Other algorithms look at, or try to predict, server load when determining the load of the real server.
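
A minimal Python sketch of the least-connections and weighted policies listed above; the server names, weights, and flow counts are hypothetical, not from any real configuration.

```python
# Minimal sketch of the predominant SLB algorithms; server names, weights,
# and flow counts are illustrative only.
from dataclasses import dataclass
from itertools import cycle


@dataclass
class RealServer:
    name: str
    weight: int = 1        # relative capacity used by the weighted policies
    active_flows: int = 0  # flow count as tracked by the SLB device


def least_connections(servers):
    """Server with the fewest active flows gets the new flow request."""
    return min(servers, key=lambda s: s.active_flows)


def weighted_least_connections(servers):
    """Server with the lowest flows-per-weight ratio gets the new flow."""
    return min(servers, key=lambda s: s.active_flows / s.weight)


def weighted_round_robin(servers):
    """Yield each server 'weight' flows in a row, cycling forever."""
    schedule = [s for s in servers for _ in range(s.weight)]
    yield from cycle(schedule)


if __name__ == "__main__":
    farm = [RealServer("s1", weight=3), RealServer("s2", weight=1)]
    wrr = weighted_round_robin(farm)
    print([next(wrr).name for _ in range(8)])   # ['s1', 's1', 's1', 's2', ...]
    print(least_connections(farm).name)         # 's1' (both start at 0 flows)
```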

  • How SLB Devices Make Decisions
    - The SLB device can make its load-balancing decisions based on several factors.
    - Some of these factors can be obtained from the packet headers (e.g., IP addresses, port numbers).
    - Other factors are obtained by looking at the data beyond the network headers. Examples:
      - HTTP cookies
      - HTTP URLs
      - SSL client certificate
    - Decisions can be based strictly on flow counts, or they can be based on knowledge of the application.
    - For some protocols, like FTP, the SLB must have knowledge of the protocol to load-balance correctly (i.e., the control and data connections must go to the same physical server).

  • When a New Flow Arrives (see the sketch below)
    - Determine whether a virtual server exists.
      - If so, make sure the virtual server has available resources.
      - If so, determine the level of service needed by that client for that virtual server.
      - If the virtual server is configured with a particular type of protocol support or session persistence, do that work.
      - Pick a real server for that client.
        - The choice of real server is based on flow counts and information about the flow.
        - In order to do this, the SLB may need to proxy the flow to gather all the information needed to choose the real server; this depends on the services configured for that virtual server.
    - If not, the packet is bridged to the correct interface based on Layer 2.
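
A sketch of that decision path, assuming dictionary-based SLB state; `virtual_servers`, `persistence`, and `pick_real_server` are hypothetical stand-ins for device-internal tables and policy code, not a real product API.

```python
# Sketch of the new-flow decision path above. The tables and the
# pick_real_server policy callback are hypothetical stand-ins.
def handle_new_flow(packet, virtual_servers, persistence, pick_real_server):
    """Return (action, target) for the first packet of a new flow."""
    vip = (packet["dst_ip"], packet["dst_port"])
    vserver = virtual_servers.get(vip)

    if vserver is None:
        # No matching virtual server: bridge the packet at Layer 2.
        return ("bridge", None)

    if not vserver["available"]:
        return ("drop", None)

    # Session persistence: reuse the earlier decision for this client.
    key = (packet["src_ip"], vip)
    if key in persistence:
        return ("forward", persistence[key])

    # Otherwise pick a real server; for L5+ policies this step may first
    # require proxying the flow to gather more information.
    real = pick_real_server(vserver, packet)
    persistence[key] = real
    return ("forward", real)
```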

  • SLB: Architectures
    - Traditional: the SLB device sits between the clients and the servers being load-balanced.
    - Distributed: the SLB device sits off to the side and only receives the packets it needs, based on flow setup and teardown.

  • SLB: Traditional View with NAT (diagram: Client, SLB, Server1, Server2, Server3)

  • SLB: Traditional View without NAT (diagram: Client, SLB, Server1, Server2, Server3)

  • Load-Balance: Layer 3 / 4
    - Look at the destination IP address and port to make the load-balancing decision.
    - Because of that, a real server can be determined from the first packet that arrives.

  • Layer 3 / 4: Sample Flow (diagram: Client, SLB, Server1, Server2, Server3)
    - Step 2: the SLB makes its decision on a server.
    - The rest of the flow continues through the HTTP GET and the server response.

  • Load-Balance: Layer 5+
    - The SLB device must terminate the TCP flow for some time BEFORE the SLB decision can be made.
    - For example, the cookie value must be sent by the client, which happens after the TCP handshake, before the real server can be determined (see the sketch below).
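
A sketch of why the proxy step is needed, assuming HTTP cookie-based selection; the socket handling and the cookie-to-server map are illustrative only.

```python
# Sketch: the SLB has already completed the TCP handshake with the client
# (proxying the flow); only now can it read the cookie that selects the
# real server. The cookie_to_server map is an illustrative assumption.
def choose_by_cookie(client_sock, cookie_name, cookie_to_server, default_server):
    """Read the client's HTTP request headers and pick a server from a cookie."""
    request = b""
    while b"\r\n\r\n" not in request:            # read until end of headers
        chunk = client_sock.recv(4096)
        if not chunk:
            break
        request += chunk

    for line in request.decode("latin-1").split("\r\n"):
        if line.lower().startswith("cookie:"):
            for pair in line.split(":", 1)[1].split(";"):
                name, _, value = pair.strip().partition("=")
                if name == cookie_name and value in cookie_to_server:
                    return cookie_to_server[value], request
    # No usable cookie: fall back; the buffered request is replayed to the
    # chosen real server when the flow is spliced onward.
    return default_server, request
```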

  • Layer 5+: Sample Flow (diagram: Client, SLB, Server1, Server2, Server3)
    - Step 2: the SLB device determines it must proxy the flow before the decision can be made.
    - The rest of the flow continues with the server response.
    - Note: the flow can be unproxied at this point for efficiency.

  • SLB: Distributed Architecture (diagram: Client, FEs, Servers, SLB)
    - FE: Forwarding Engine, responsible for forwarding packets. FEs ask the SLB device where to send each flow.

  • Distributed Architecture: Sample Flow (diagram: Client, FE, SLB, Server1, Server2, Server3, Server4)
    - Step 2: the FE asks the SLB device where to send the flow (see the sketch below).
    - Subsequent packets flow directly from the client to Server2 through the FE.
    - The FE must notify the SLB device when the flow ends.
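
A sketch of the two control interactions in this flow: the FE's query for where to send a new flow, and the flow-end notification that lets the SLB keep its flow counts accurate. The JSON message layout is invented for illustration; real devices use their own wire protocols.

```python
# Sketch of the FE-to-SLB control exchange: ask where to send a new flow,
# and report when the flow ends so the SLB can keep its flow counts right.
# The JSON message layout is invented purely for illustration.
import json


def flow_query(src_ip, src_port, dst_ip, dst_port):
    """FE -> SLB: where should this new flow go?"""
    return json.dumps({"type": "flow-query",
                       "src": [src_ip, src_port],
                       "dst": [dst_ip, dst_port]})


def flow_end(src_ip, src_port, dst_ip, dst_port):
    """FE -> SLB: this flow has ended; decrement the flow count."""
    return json.dumps({"type": "flow-end",
                       "src": [src_ip, src_port],
                       "dst": [dst_ip, dst_port]})
```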

  • Server Feedback

  • Determining Health of Real Servers
    - To determine the health of real servers, the SLB can:
      - Actively monitor flows to each real server.
      - Initiate probes to the real server (see the sketch below).
      - Get feedback from the real server or from a third-party box.
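
A minimal sketch of an active probe, assuming a plain TCP connect check; the timeout value is an arbitrary choice.

```python
# Minimal active-probe sketch: a real server is considered healthy if a
# TCP connection to its service port succeeds. The timeout is an assumption.
import socket


def tcp_probe(ip, port, timeout=2.0):
    """Return True if the real server accepts a TCP connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False
```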

  • Server Feedback
    - Information is needed from the real server while it is part of a server farm. Why?
      - Dynamic load-balancing based on the ability of the real server.
      - Dynamic provisioning of applications.

  • Server Feedback: Use of Information
    - The availability of a real server is reported as a weight that is used by the SLB algorithms (e.g., weighted round robin, weighted least connections).
    - As the weight value changes over time, the load distribution changes with it.

  • How to Get Weights
    - Statically configured on the SLB device; weights never change.
    - Or start with a statically configured value on the SLB device for initial start-up, then get the weight from:
      - The real server.
      - A third-party box / collection point.
    - It is assumed that if a third-party box is used, it is used for all the real servers in a server farm.

  • Direct Host Feedback
    - Description: agents run on the host to gather data points. That data is then sent to the SLB device for that physical server only.
    - Note: the agent could report for different applications on that real server.
    - The agent's report could be based on available memory, general resource availability, proprietary information, etc. (see the sketch below).
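
A sketch of such an agent, assuming the weight is derived from the Unix 1-minute load average and reported as a 0-100 value per application; the scale and the report format are assumptions, not part of any protocol.

```python
# Sketch of a host-side feedback agent. Deriving the weight from the
# 1-minute load average and the 0-100 scale are illustrative assumptions.
import json
import os


def compute_weight(max_load=8.0):
    """Map the 1-minute load average to a 0-100 weight (Unix-only)."""
    one_min = os.getloadavg()[0]
    free_fraction = max(0.0, 1.0 - one_min / max_load)
    return int(round(100 * free_fraction))


def feedback_report(app_name, port):
    """Per-application record the agent would send to the SLB device."""
    return json.dumps({"app": app_name, "port": port,
                       "weight": compute_weight()})
```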

  • Direct Host Feedback
    - Pros:
      - Provides a way to dynamically change a physical server's reported capability for SLB flows.
    - Cons:
      - The SLB device must attempt to normalize the data across all real servers in a server farm; with heterogeneous servers this is difficult.
      - It is difficult for a real server to identify itself in SLB terms for L3 vs. L4 vs. L5 (etc.) SLB scenarios.

  • Third Party Feedback: Network (diagram: Client, SLB, Server1, Server2, Server3, Collection Point)

  • Host to Third Party Feedback
    - Description: real servers report data to a collection point. The collection point can normalize the data as needed and then report for all physical servers to the SLB device.
    - Pros:
      - A dedicated device can analyze and normalize the data from multiple servers; the SLB device can then focus on SLB functionality.
    - Cons:
      - Requires more communication to determine the dynamic weight, which could delay the overall dynamic effect if it takes too long.

  • RSerPool

  • RSerPool: Architecture (diagram: PU, PEs, ASAP)

  • RSerPool: Overview
    - The RSerPool protocols sit between the user application and the IP transport protocol (session layer).
    - Application communication is now defined over a pair of logical session-layer endpoints that are dynamically mapped to transport-layer addresses.
    - When a failure occurs at the network or transport layer, the session can survive because the logical session endpoints can be mapped to alternative transport addresses (see the sketch below).
    - The endpoint-to-transport mapping is managed by distributed servers, providing resiliency.
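
A sketch of the endpoint-to-transport mapping idea: `resolve` stands in for ASAP handle resolution against an ENRP server and `connect` for the transport-level connect; both are hypothetical callbacks.

```python
# Sketch of session-layer failover: a pool handle resolves to a list of PE
# transport addresses; if one PE is unreachable, the session moves to the
# next. The resolve/connect callbacks are hypothetical stand-ins for ASAP
# handle resolution and the transport layer.
def open_session(pool_handle, resolve, connect):
    """Return a connection to the first reachable PE in the pool."""
    for addr in resolve(pool_handle):        # e.g. [("10.1.1.1", 8080), ...]
        try:
            return connect(addr)
        except ConnectionError:
            continue                          # fail over to the next PE
    raise ConnectionError(f"no reachable PE for pool {pool_handle!r}")
```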

  • RSerPool / SLB: Unified Approach (A Work in Progress)

  • Unified View: Overview
    - Preserve the RSerPool architecture:
      - Any extensions or modifications are backwards compatible with current RSerPool.
      - SLB extensions at the ENRP Server and PE are optional, based on the pool policy chosen / implemented.
    - Utilize the SLB distributed architecture:
      - Introduce the FE when using SLB pool policies.
    - Add SLB technology to the ENRP Server:
      - SLB-specific versions of pool policies ("SLB-*"): for example, SLB-WRR takes into account additional host feedback such as the number of flows on each PE.
    - Add server feedback:
      - Enable delivery of host feedback from PEs to the home ENRP Server.
      - Enable delivery of host feedback from the ENRP Server to the FE.

  • Unified: Component Description
    - ASAP:
      - ASAP between the PE and the ENRP Server is extended to include additional host feedback, such as the current number of flows on the PE.
      - Encapsulation of the host feedback protocol in a pool element parameter (see the sketch below).
      - This information will be replicated among peer ENRP Servers.
      - A subscription service and/or polling between the ENRP Server and the PU allows delivery of host feedback (membership, weights, flows, etc.).
        - The subscription is between the PU and its current ENRP Server (not replicated).
        - The PU must re-register its subscription upon selection of a new ENRP Server.
        - A subscription and polling service was previously discussed in the design team as an addition to core ASAP functionality.
      - The decision on the flow destination is made based on the SLB-specific pool policy (i.e., the load-balancing algorithm).
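
A sketch of what encapsulating host feedback in a pool element parameter could look like, following ASAP's general type-length-value parameter shape; the parameter type value and the payload layout are invented for illustration and are not assigned in any draft.

```python
# Sketch of carrying host feedback as a TLV-style parameter, in the spirit
# of ASAP's type-length-value parameters. The type code 0x8001 and the
# (weight, active_flows) payload layout are invented for illustration.
import struct

FEEDBACK_PARAM_TYPE = 0x8001   # hypothetical, not an assigned parameter type


def encode_host_feedback(weight, active_flows):
    """Pack weight and current flow count as a padded TLV parameter."""
    payload = struct.pack("!II", weight, active_flows)
    length = 4 + len(payload)                    # type + length fields + value
    padding = b"\x00" * ((4 - length % 4) % 4)   # pad to a 32-bit boundary
    return struct.pack("!HH", FEEDBACK_PARAM_TYPE, length) + payload + padding
```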

  • Unified: Component Description
    - FE:
      - An RSerPool-enabled application (PU):
        - Uses the RSerPool API for sending flows to the PE.
        - ASAP control plane for PE selection.
        - The bearer plane uses the flow-specific protocol (e.g., HTTP, SIP, etc.) and the corresponding transport (e.g., TCP, SCTP).
      - Must know which pools support which applications (SLB-types). Add a parameter to SLB-enabled PEs?
      - Chooses the pool handle based on incoming client requests and the supported SLB-types (SLB-L4, SLB-HTTP, SLB-SIP, etc.). If no other SLB-type matches, SLB-L4 is used.
      - NAT, reverse NAT.
      - Proxy service.

  • Unified: Component Description
    - FE (continued):
      - Configuration:
        - Server Pools:
          - Static configuration of pool handles; pool names are resolved upon initialization.
          - Static configuration of pool handles and PE details, including initial/default weights.
          - Automagic configuration?
        - Protocol Table:
          - Maps supported SLB-types to pool handles by looking for the best match in the incoming packet (see the sketch below), e.g.:
            - SLB-L4 (must implement).
            - SLB-HTTP.
            - SLB-SIP.
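
A sketch of the protocol table lookup, assuming a very simple best-match classification; the pool handle names and the matching heuristics are illustrative only.

```python
# Sketch of the FE protocol table: classify the incoming request into the
# most specific supported SLB-type, then map it to a pool handle.
# Pool handle names and the matching heuristics are illustrative only.
def classify(dst_port, payload):
    """Best-match SLB-type for a new flow; SLB-L4 is the required fallback."""
    if payload.startswith(b"INVITE ") or dst_port == 5060:
        return "SLB-SIP"
    if payload[:4] in (b"GET ", b"POST", b"HEAD") or dst_port == 80:
        return "SLB-HTTP"
    return "SLB-L4"


PROTOCOL_TABLE = {                 # SLB-type -> pool handle (static config)
    "SLB-L4":   "pool-l4",
    "SLB-HTTP": "pool-http",
    "SLB-SIP":  "pool-sip",
}


def pool_handle_for(dst_port, payload):
    return PROTOCOL_TABLE[classify(dst_port, payload)]
```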

  • Unified: Component Description
    - PE: SLB-enabled PEs must support dynamic host feedback.

  • Unified: Layer 3/4 Example (diagram: Client, PU / FE, ENRP Servers, PE1, PE2, PE3; ASAP with host feedback; ASAP pool handle resolution & subscription/polling)
    - Step 2: correlate the request to an SLB-type, then choose a pool handle, then send to that pool handle.

  • Server Feedback: How to Implement with RSerPool

  • Unified: PE Communication
    - PEs will send their weights to the ENRP server via the ASAP protocol (see the sketch below).
    - A server agent on the host provides the weight to the PE application.
    - Some protocols already exist for reporting this information. The current list:
      - Server/Application State Protocol (SASP): joint IBM / Cisco protocol; an IETF draft is currently available.
      - Dynamic Feedback Protocol (DFP): Cisco-developed protocol; an IETF draft is in progress.
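
A sketch of the PE side, assuming the weight is refreshed by periodic re-registration; `asap_register` and the 30-second interval are placeholders for the real ASAP implementation and whichever reporting protocol (SASP, DFP, or an ASAP extension) is chosen.

```python
# Sketch of a PE periodically pushing its current weight toward the home
# ENRP server. The asap_register callback and the 30-second interval are
# placeholders; the real mechanism could be SASP, DFP, or an ASAP extension.
import time


def weight_report_loop(pool_handle, pe_id, get_weight, asap_register,
                       interval=30.0):
    """Re-register with an up-to-date weight at a fixed interval."""
    while True:
        asap_register(pool_handle, pe_id, weight=get_weight())
        time.sleep(interval)
```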

  • Design Team Work Items

  • How to Implement: To Do List
    - Details, details, details...
    - Reconcile the design with the pool policy draft:
      - Determine what information needs to be passed.
      - Determine what algorithms need to be added, and where.
      - Define the SLB-* policies.
    - Determine the best method for implementing host feedback.
    - Complete the Layer 5 example with a session persistence mechanism at the FE.

  • How to Implement: To Do List
    - Polling / subscriptions.
    - Complete the DFP IETF draft so it can be considered.
    - Everything else.

    In #2, when the SLB device asks where to send the flow, it formats the request with the source IP/port and destination IP/port.

    Once the decision is made, the SLB device can remember it, so it can build its own session persistence tables. For IP-only session persistence, the SLB device will recall that this source IP is going to the PE IP/port determined by the ENRP server.
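
A sketch of such an IP-only persistence table; the TTL-based expiry is an assumption, since the notes do not say how entries age out.

```python
# Sketch of the IP-only session persistence table described above: remember
# which PE the ENRP server picked for each client IP. The TTL-based expiry
# is an assumption.
import time


class PersistenceTable:
    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self.entries = {}                  # client IP -> (PE addr, timestamp)

    def remember(self, client_ip, pe_addr):
        self.entries[client_ip] = (pe_addr, time.time())

    def lookup(self, client_ip):
        entry = self.entries.get(client_ip)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]
        self.entries.pop(client_ip, None)  # expired or missing
        return None
```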

    Note: the ENRP server can receive dynamic feedback on server availability via its administrative network, shown here as a separate back-end network; a separate network is not required.

    The FEs do not make any decisions. All decisions on where to send a flow are made by the SLB device.

    When the flow gets terminated, the FEs tell the SLB device. That way the SLB device can keep track of flow counts. Tracking flow counts is critical to keeping the load distribution even.

    Reason for requiring host feedback from the PE: flow-count information is needed for SLB decisions; otherwise, you are not load-balancing.