Load balancing (computing)

In computer science, load balancing is used to distribute extensive computations or large volumes of requests across several concurrent systems. This can take very different forms. A simple kind of load distribution occurs on computers with multiple processors: each process can run on its own processor, and the way processes are assigned to processors can have a large impact on overall system performance, because, for example, the cache contents are local to each processor.

Another approach is found in computer clusters or server farms: several computers form a composite that mostly behaves toward the outside world as a single system. Possible methods include placing a system in front of the others (a load balancer or front-end server) that divides up the requests, or using DNS with the round-robin method. Server load balancing is particularly important for web servers, because a single host can answer only a limited number of HTTP requests at once. An upstream load balancer adds extra information to the HTTP request so that requests from the same user are sent to the same server. This is also important when SSL is used to encrypt the communication, so that a new SSL handshake does not have to be performed for every request.
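The simplest distribution policy an upstream load balancer can apply is strict rotation over the backends. A minimal sketch (the backend addresses are illustrative, not taken from any real deployment):

```python
from itertools import cycle

# Hypothetical backend pool; the addresses are illustrative only.
backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

class RoundRobinBalancer:
    """Hands out backends in strict rotation, one per request."""
    def __init__(self, servers):
        self._iter = cycle(servers)

    def pick(self):
        return next(self._iter)

lb = RoundRobinBalancer(backends)
print([lb.pick() for _ in range(5)])
# → ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2']
```

Note that plain rotation ignores the sticky-session requirement described above; real balancers combine it with affinity mechanisms such as the cookie-based placement discussed later in this article.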

Load distribution is also used on data/voice lines to spread the traffic flow across parallel links. In practice, however, it is often difficult to distribute data/voice traffic evenly across both lines. The common solution is therefore to use one line as the forward channel and the second line as the return channel.

Load balancing is often accompanied by mechanisms for fault tolerance: by setting up a cluster with appropriate capacity and distributing the requests among the individual systems, reliability is increased, provided that the failure of one system is detected and the requests are automatically handed over to another system (see also: high availability, "HA").

Server load balancing

Server load balancing (SLB) is used in all applications where a large number of clients generates a request density that would overload a single server. Typical criteria for determining the need for SLB are the data rate, the number of clients, and the request rate.

Another aspect of SLB is increased data availability. Using multiple systems permits redundant data storage; here the task of the SLB is to direct the clients to the individual servers. This technique is used in content delivery networks.

SLB can be applied at various layers of the ISO/OSI reference model.

DNS Round Robin

Here, several IP addresses are stored for one host name in the Domain Name System and are returned in alternation as the result of queries. It is the simplest method of load balancing. For a detailed description, see load balancing via DNS.
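The rotation behavior can be sketched with a toy authoritative resolver: it returns the full record set but rotates it by one position after every query, so successive clients see a different first address. Host name and addresses are illustrative.

```python
from collections import deque

class RoundRobinDNS:
    """Toy resolver: rotates the A-record set by one after each query,
    so successive clients connect to different addresses first."""
    def __init__(self, zone):
        self.zone = {name: deque(ips) for name, ips in zone.items()}

    def resolve(self, name):
        ips = self.zone[name]
        answer = list(ips)
        ips.rotate(-1)  # next query starts with the next address
        return answer

dns = RoundRobinDNS({"www.example.org":
                     ["192.0.2.10", "192.0.2.11", "192.0.2.12"]})
print(dns.resolve("www.example.org")[0])  # → 192.0.2.10
print(dns.resolve("www.example.org")[0])  # → 192.0.2.11
```

In a real DNS server the same effect is achieved by serving multiple A records for one name; clients typically use the first address in the answer.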

NAT-based SLB

More elaborate, but more powerful, is NAT-based SLB. Here, two networks must first be set up: a private network containing the servers, and a public network connected to the Internet via a router. Between these two networks sits a content switch, i.e. a router that accepts requests from the public network, evaluates them, and decides to which computer in the private network it forwards the connection. This takes place at the network layer of the OSI reference model and uses NAT technology: the load balancer manipulates incoming and outgoing IP packets so that the client has the impression of always communicating with one and the same machine, namely the load balancer. The servers in the private network effectively all share the same virtual IP address.
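The core of the content switch is a flow table: the first packet of a new connection picks a backend, every later packet of the same flow is rewritten to that same backend, and replies are rewritten to carry the virtual IP as their source. A minimal sketch, with illustrative addresses:

```python
# Toy NAT-based balancer: the content switch owns the virtual IP and
# rewrites the destination of each flow to a private backend.
# All addresses are illustrative.
VIRTUAL_IP = "203.0.113.1"
BACKENDS = ["192.168.0.11", "192.168.0.12"]

class NatBalancer:
    def __init__(self, backends):
        self.backends = backends
        self.flows = {}   # (client_ip, client_port) -> chosen backend
        self.next = 0

    def inbound(self, client_ip, client_port):
        """Rewrite the destination of an inbound packet; a flow stays
        pinned to the backend chosen for its first packet."""
        key = (client_ip, client_port)
        if key not in self.flows:
            self.flows[key] = self.backends[self.next % len(self.backends)]
            self.next += 1
        return {"src": client_ip, "dst": self.flows[key]}

    def outbound(self, packet):
        """Rewrite the source of a reply so the client only ever
        sees the virtual IP."""
        return {"src": VIRTUAL_IP, "dst": packet["src"]}

lb = NatBalancer(BACKENDS)
p1 = lb.inbound("198.51.100.7", 40001)
p2 = lb.inbound("198.51.100.7", 40001)  # same flow, same backend
```

Because every packet in both directions passes through this rewriting step, the balancer sees all traffic, which leads directly to the bottleneck problem described next.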

The problem with this method is that all traffic flows through the load balancer, which sooner or later becomes a bottleneck if it is undersized or not redundant.

An advantage of NAT-based SLB is that the individual servers are additionally protected by the load balancer. Many vendors of load balancer solutions therefore offer additional security modules that can filter out attacks or malformed requests before they reach the server cluster. Terminating SSL sessions at the balancer, and thereby relieving the server cluster, is a further advantage of load-balancer-based SLB for HTTP farms that should not be underestimated.

In addition to the active health checks required by other methods, passive health checks have for some time been used increasingly in large web clusters. Here the incoming and outgoing traffic is monitored by the load balancer; as soon as a computer in the server cluster triggers a timeout while answering a request, the same query can be sent to another cluster server without the client noticing.
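A passive health check needs no probe packets: the balancer learns from live traffic, retries a timed-out request on the next backend, and marks the failing machine down. A minimal sketch; the transport function and server names are hypothetical:

```python
class PassiveHealthBalancer:
    """Retries a timed-out request on another backend and marks the
    failing machine down, so the client never sees the failure."""
    def __init__(self, backends, send):
        self.backends = list(backends)
        self.down = set()   # learned from live traffic, no probe packets
        self.send = send    # send(backend, request) -> reply, may raise TimeoutError

    def handle(self, request):
        for backend in self.backends:
            if backend in self.down:
                continue
            try:
                return self.send(backend, request)
            except TimeoutError:
                self.down.add(backend)   # passive health check: mark down
        raise RuntimeError("no healthy backend left")

# Hypothetical transport: the first server never answers in time.
def flaky_send(backend, request):
    if backend == "srv1":
        raise TimeoutError
    return f"{backend} answered {request}"

lb = PassiveHealthBalancer(["srv1", "srv2"], flaky_send)
print(lb.handle("GET /"))   # → srv2 answered GET /
```

Production balancers combine this with a recovery policy (periodically re-probing machines marked down), which is omitted here for brevity.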

Flat-based SLB

This method requires only one network. The servers and the load balancer must be connected via a switch. When the client sends a request (to the load balancer), the corresponding Ethernet frame is manipulated so that it appears as a direct request from the client to one of the servers: the load balancer swaps its own MAC address for that of the chosen server and forwards the frame. The IP address remains unchanged; this procedure is called MAT (MAC Address Translation). The server that receives the frame sends its response directly to the sender's IP address, i.e. to the client. The client therefore has the impression of communicating with only one computer, namely the load balancer, while the server in fact communicates directly with the client. This process is referred to as DSR (Direct Server Return).
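The key point of MAT is that only the layer-2 destination changes, never the IP header, so the chosen server can reply straight to the client. A minimal sketch with illustrative MAC and IP addresses:

```python
# Toy MAC Address Translation for Direct Server Return.
# All addresses are illustrative.
BALANCER_MAC = "02:00:00:00:00:01"
SERVER_MACS = ["02:00:00:00:00:11", "02:00:00:00:00:12"]

class DsrBalancer:
    """Swaps only the destination MAC (MAT); the IP header is left
    untouched, so the server answers the client directly (DSR)."""
    def __init__(self, server_macs):
        self.macs = server_macs
        self.i = 0

    def forward(self, frame):
        target = self.macs[self.i % len(self.macs)]
        self.i += 1
        return {**frame, "dst_mac": target}   # layer 2 changes only

frame = {"src_mac": "02:00:00:00:00:aa", "dst_mac": BALANCER_MAC,
         "src_ip": "198.51.100.7", "dst_ip": "203.0.113.1"}  # dst_ip = VIP
out = DsrBalancer(SERVER_MACS).forward(frame)
assert out["dst_ip"] == frame["dst_ip"]   # IP header unchanged
```

For this to work, each real server must also accept the virtual IP locally (typically configured on a loopback interface), a detail this sketch does not model.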

The advantage of flat-based SLB is that it relieves the load balancer: the return traffic, which usually carries more data, flows directly to the client.

Anycast SLB

With load distribution via anycast, a whole group of computers is addressed through a single address, and the one reachable via the shortest route answers. On the Internet this is realized with BGP.

The advantage of this solution is the selection of a geographically close server, with a corresponding reduction in latency. However, the implementation requires operating one's own autonomous system on the Internet.

Problems in practice

Applications such as online shops often manage client requests through sessions. Existing sessions store, for example, the contents of a shopping cart. However, if client-side sessions are used, this presupposes that a client for whom a session has already been opened always communicates with the same server. Either all connections of a client must be routed to the same server via its IP address, or the load balancer must be able to act on the application layer of the OSI reference model, e.g. to extract cookies and session IDs from packets and evaluate them before making a placement decision. Forwarding a session to the same backend server every time is called "affinity". In practice, layer 4-7 switches are therefore used as load balancers. Alternatively, the problem can be solved by the application itself (for example by storing the session in a database), so that a request can be answered by any computer in the server pool.
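A layer-7 placement decision of the kind described above can be sketched by hashing the session ID extracted from a cookie, so that every request of one session deterministically lands on the same backend. The backend names are hypothetical:

```python
import hashlib

BACKENDS = ["app1", "app2", "app3"]   # hypothetical backend names

def pick_backend(session_id, backends=BACKENDS):
    """Affinity sketch: map a session ID (e.g. taken from a cookie)
    deterministically to one backend."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

# The same session always hits the same server:
assert pick_backend("JSESSIONID=abc123") == pick_backend("JSESSIONID=abc123")
```

A plain modulo hash like this reshuffles most sessions whenever the pool size changes; real balancers therefore often use consistent hashing or an explicit session table instead.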
