Load Balancer
Distribute workload across groups of resources to provide High Availability and improve your application performance.

Overview
Load Balancer is a solution for enhancing High Availability (HA) and reducing the chance of service downtime: it distributes incoming workload across multiple resources to optimize resource use, maximize throughput, and minimize response time.
Load Balancer can operate in two modes: Application Load Balancing and Network Load Balancing, depending on the protocol selected for Backend Group and Listener within a Load Balancer.
How It Works

Load Balancer operates through a Listener, which receives incoming workload from users and distributes it across multiple servers in the system based on specific rules. The Listener forwards the workload to a Backend Group, which aggregates service nodes that perform the same tasks. These nodes process the workload in parallel, enabling what is known as horizontal scaling.
You can choose one of the following load balancing algorithms for your backend group (see the sketch after this list).
- Least connections: requests are sent to the server with the fewest active connections.
- Round robin: requests are sent to each server in turn.
- Source IP: provides stickiness between a user and a server; requests from the same client are sent to the same server whenever possible.
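As a rough illustration only (not the platform's implementation), the sketch below shows how a listener might pick a backend under each algorithm; the Backend type, addresses, and function names are hypothetical.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Backend is a hypothetical service node inside a Backend Group.
type Backend struct {
	Addr        string
	ActiveConns int
}

// roundRobin returns backends in turn, advancing a shared counter.
func roundRobin(backends []Backend, counter *int) Backend {
	b := backends[*counter%len(backends)]
	*counter++
	return b
}

// leastConnections returns the backend with the fewest active connections.
func leastConnections(backends []Backend) Backend {
	best := backends[0]
	for _, b := range backends[1:] {
		if b.ActiveConns < best.ActiveConns {
			best = b
		}
	}
	return best
}

// sourceIP hashes the client address so the same client keeps
// landing on the same backend (session stickiness).
func sourceIP(backends []Backend, clientIP string) Backend {
	h := fnv.New32a()
	h.Write([]byte(clientIP))
	return backends[h.Sum32()%uint32(len(backends))]
}

func main() {
	pool := []Backend{
		{Addr: "10.0.0.1:8080", ActiveConns: 3},
		{Addr: "10.0.0.2:8080", ActiveConns: 1},
	}
	counter := 0
	fmt.Println("round robin:", roundRobin(pool, &counter).Addr)
	fmt.Println("least connections:", leastConnections(pool).Addr)
	fmt.Println("source IP:", sourceIP(pool, "203.0.113.7").Addr)
}
```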

Weight Configuration

Health Check

Scalability
Load Balancing Types
Load Balancer can operate in two modes: Application Load Balancing and Network Load Balancing, depending on the protocol selected for the Backend Group and Listener within a Load Balancer.
Application Load Balancing
Application Load Balancing operates at the application layer of the OSI model. It offers versatile capabilities for classifying and balancing traffic, with moderate performance compared to Network Load Balancing, and is well suited to general internet-facing workloads.

Features
Load Balancing
You can load balance HTTP/HTTPS traffic to targets such as resources and containers.
HTTPS support
Application Load Balancer supports HTTPS to HTTP conversion for internal communication between the Load Balancer and backend resources.
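The platform handles this conversion for you; as a minimal sketch of what HTTPS-to-HTTP termination looks like, the example below assumes a hypothetical backend at 10.0.0.1:8080 and placeholder certificate files.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical plain-HTTP backend behind the load balancer.
	backend, err := url.Parse("http://10.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	// The proxy forwards decrypted requests to the backend over HTTP.
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// TLS terminates here: clients speak HTTPS to :443, the backend sees plain HTTP.
	// cert.pem and key.pem are placeholder certificate files.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
}
```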
Forwarding policy
A forwarding policy uses request data such as headers or cookies to differentiate workloads and route them to the appropriate backend.
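For illustration only, the sketch below routes by a cookie and a request header to different backend groups; the header name, cookie name, and backend addresses are hypothetical, not part of the product's configuration.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// mustProxy builds a reverse proxy to a hypothetical backend address.
func mustProxy(rawURL string) *httputil.ReverseProxy {
	u, err := url.Parse(rawURL)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

// hasCookie reports whether the named cookie exists with the given value.
func hasCookie(r *http.Request, name, value string) bool {
	c, err := r.Cookie(name)
	return err == nil && c.Value == value
}

func main() {
	apiPool := mustProxy("http://10.0.1.1:8080")  // hypothetical API backend group
	webPool := mustProxy("http://10.0.2.1:8080")  // hypothetical web backend group
	betaPool := mustProxy("http://10.0.3.1:8080") // hypothetical beta backend group

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		switch {
		// Route by cookie: opted-in users go to the beta group.
		case hasCookie(r, "beta", "true"):
			betaPool.ServeHTTP(w, r)
		// Route by request header: API clients go to the API group.
		case r.Header.Get("X-Client-Type") == "api":
			apiPool.ServeHTTP(w, r)
		default:
			webPool.ServeHTTP(w, r)
		}
	})
	log.Fatal(http.ListenAndServe(":80", nil))
}
```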
Purposes
Load Balancer specifications are divided into two groups based on their intended purpose: development and production. You can select the purpose that best suits your applications.
The Development Load Balancer allows you to use a maximum of:
- 32,000 TPS: maximum transactions per second when using a network protocol (TCP/UDP).
- 2 Gbps throughput.
Resources are shared between Load Balancers, so you can use a Development Load Balancer at a lower cost.