Put simply, load balancing means distributing workloads (data) intelligently across different computing resources to improve reliability, redundancy and network performance. Load balancers act like traffic police: they manage traffic between enterprise servers. They are crucial today for handling evolving traffic patterns and ensuring no enterprise server gets overloaded. Load balancers also give enterprise IT teams the flexibility to add or remove servers as the demands and requirements of the business change.
There are different types of load balancing algorithms, and IT teams choose between them depending on where the load falls, i.e. on the network layer or the application layer. The backend server that traffic is forwarded to is selected by the load balancing algorithm in use. These algorithms take two aspects of the server into consideration: i) server health and ii) predefined conditions. A few common load balancing algorithms used frequently by IT teams include:
The round robin (RR) algorithm distributes requests to enterprise servers circularly, in a fixed sequence. There are two variants of round robin: weighted round robin and dynamic round robin. Weighted round robin is used mainly for clusters of dissimilar servers: each server is assigned a weight based on its capacity, and load is distributed cyclically in proportion to those preassigned weights. Dynamic round robin forwards requests to the associated servers based on server weights calculated in real time.
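The following is a minimal sketch of plain and weighted round robin selection; the server names and weights are invented purely for illustration.

```python
import itertools

servers = ["app-1", "app-2", "app-3"]

# Plain round robin: cycle through the servers in a fixed order.
rr = itertools.cycle(servers)

def round_robin():
    return next(rr)

# Weighted round robin: a server with weight 3 gets three slots per
# cycle, a server with weight 1 gets a single slot.
weights = {"app-1": 3, "app-2": 2, "app-3": 1}
weighted_sequence = [s for s, w in weights.items() for _ in range(w)]
wrr = itertools.cycle(weighted_sequence)

def weighted_round_robin():
    return next(wrr)

for _ in range(6):
    print(round_robin(), weighted_round_robin())
```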
The 'least connections' load balancing algorithm considers the number of currently active connections to each application instance and distributes load by choosing the server with the fewest active connections.
In weighted least connections, load distribution is based on two factors: the number of active connections to each server and the relative capacity of the server.
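Here is a minimal sketch of both methods, assuming the balancer tracks a per-server connection count; the counts and capacities below are made-up examples.

```python
active_connections = {"app-1": 12, "app-2": 7, "app-3": 9}
capacity = {"app-1": 4, "app-2": 1, "app-3": 2}  # relative server capacity

def least_connections():
    # Pick the server currently holding the fewest active connections.
    return min(active_connections, key=active_connections.get)

def weighted_least_connections():
    # Normalize connections by capacity so a bigger server can carry
    # proportionally more connections before being skipped.
    return min(active_connections,
               key=lambda s: active_connections[s] / capacity[s])

print(least_connections())           # app-2 (7 connections)
print(weighted_least_connections())  # app-1 (12/4 = 3.0 is the lowest ratio)
```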
In source IP hash load balancing, a server is selected based on a unique hash key generated from the source and destination addresses of the request. Based on the generated hash key, clients are consistently assigned to servers.
URL hashing is a related load balancing technique in which the request URL is hashed, so each piece of content is served by a single server. It improves the capacity of backend caches by avoiding cache duplication.
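Both methods reduce to hashing a key and mapping it onto the server pool; the sketch below shows this with a client IP and a request URL as keys. Note that real balancers often use consistent hashing so the mapping survives server additions and removals; this simple modulo version, written only for illustration, does not.

```python
import hashlib

servers = ["app-1", "app-2", "app-3"]

def pick_server(key: str) -> str:
    # Hash the key and map it onto the server pool. With a source IP
    # as the key, a client is pinned to one server; with a URL as the
    # key, each URL is served (and cached) by exactly one server.
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(pick_server("203.0.113.7"))       # source IP hash
print(pick_server("/images/logo.png"))  # URL hash
```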
In the least response time algorithm, the backend server with the fewest active connections and the lowest average response time is selected. Using this algorithm, IT ensures a quick response time for end clients.
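A minimal sketch of this selection follows, assuming the balancer samples per-server connection counts and average response times; all numbers are illustrative.

```python
stats = {
    "app-1": {"connections": 8, "avg_response_ms": 120},
    "app-2": {"connections": 8, "avg_response_ms": 45},
    "app-3": {"connections": 11, "avg_response_ms": 30},
}

def least_response_time():
    # Fewest active connections first; ties are broken by the lowest
    # average response time.
    return min(stats, key=lambda s: (stats[s]["connections"],
                                     stats[s]["avg_response_ms"]))

print(least_response_time())  # app-2: tied on connections, faster than app-1
```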
In the least bandwidth method, backend servers are selected based on bandwidth consumption: the server currently consuming the least bandwidth (measured in Mbps) is chosen. Similar to the least bandwidth method is the least packets method, where the load balancer selects the server transmitting the fewest packets.
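The sketch below shows both methods side by side, using invented per-server traffic counters sampled over a recent interval.

```python
traffic = {
    "app-1": {"mbps": 42.0, "packets": 9_100},
    "app-2": {"mbps": 17.5, "packets": 12_400},
    "app-3": {"mbps": 28.3, "packets": 6_800},
}

def least_bandwidth():
    # Server consuming the least bandwidth in Mbps.
    return min(traffic, key=lambda s: traffic[s]["mbps"])     # app-2

def least_packets():
    # Server transmitting the fewest packets.
    return min(traffic, key=lambda s: traffic[s]["packets"])  # app-3

print(least_bandwidth(), least_packets())
```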
In the custom load method, backend servers are chosen based on a computed server load, taking CPU usage, memory and response time into consideration. Enterprise IT teams use this algorithm frequently to make resource utilization efficient. It is well suited to predictable, stable traffic, but less so to uneven and sudden traffic changes.
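Here is a minimal sketch of such a custom score. The metrics, weights and scaling are assumptions chosen for illustration; a real deployment would pull these values from monitoring agents and tune the weights to its own workload.

```python
metrics = {
    "app-1": {"cpu": 0.72, "mem": 0.60, "resp_ms": 90},
    "app-2": {"cpu": 0.35, "mem": 0.48, "resp_ms": 60},
    "app-3": {"cpu": 0.50, "mem": 0.81, "resp_ms": 75},
}

def load_score(m):
    # Weighted blend of the three signals; response time is scaled
    # into roughly the same 0-1 range as CPU and memory. The weights
    # here (0.5 / 0.3 / 0.2) are arbitrary example values.
    return 0.5 * m["cpu"] + 0.3 * m["mem"] + 0.2 * (m["resp_ms"] / 100)

def custom_load():
    return min(metrics, key=lambda s: load_score(metrics[s]))

print(custom_load())  # app-2: lowest combined load score
```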
Here, requests are dispersed according to the content of the request to be processed, including session cookies along with the HTTP/S headers and message. Because this is data-driven, intelligent distribution of incoming requests is possible. Responses can even be tracked, since they carry data about each server's load as they travel back from the server.
Application layer algorithms work on the basis of the content being requested, for example messages, HTTP/S headers and session cookies. The most frequently used and significant application layer algorithm is least pending requests (LPR). In the LPR method, pending requests are scrutinized and then distributed across the best available servers.
The pending requests are monitored and efficiently spread across the most available servers. LPR can adapt instantly to an abrupt inflow of new connections while monitoring the workload of all connected servers equally.
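Below is a minimal sketch of LPR dispatch, assuming the balancer keeps an in-memory counter of pending requests per server; the server names and loop are illustrative only.

```python
pending = {"app-1": 0, "app-2": 0, "app-3": 0}

def dispatch():
    # Send each new request to the server with the smallest backlog,
    # so a sudden burst of connections spreads across all backends.
    server = min(pending, key=pending.get)
    pending[server] += 1
    return server

def complete(server):
    # Called when a server finishes a request.
    pending[server] -= 1

for _ in range(5):
    print(dispatch(), dict(pending))
```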
The benefits of LPR include:
The Lavelle Networks SD-WAN solution suite supports traffic load balancing to prioritize application traffic on the links attached to a CloudPort. Session-based load balancing is active by default on the Network Group. Load Balancing policies allow IT teams to prioritize and associate links with traffic based on business policies. It is an intent-driven configuration framework that does not need persistent, old-school transport connections like SSH, so the control plane can fail over to the right WAN path before losing a single transaction.
The CloudStation supports Load Balancing policies to prioritize the links attached to a CloudPort. In scenarios where load balancing needs to be applied to certain links while others serve mainly as backup links, IT teams can use Load Balancing policies to prioritize some links over others. By default, CloudStation uses all the links connected to the CloudPort; the Load Balancing policies are used to override this default behaviour.
Note: The Load Balancing Policies can be applied only at a Device level, unlike other network policies.