{"id":13245,"date":"2025-12-20T11:11:59","date_gmt":"2025-12-20T05:41:59","guid":{"rendered":"https:\/\/www.youstable.com\/blog\/?p=13245"},"modified":"2025-12-24T16:18:01","modified_gmt":"2025-12-24T10:48:01","slug":"use-load-balancer-on-linux","status":"publish","type":"post","link":"https:\/\/www.youstable.com\/blog\/use-load-balancer-on-linux","title":{"rendered":"How to Use Load Balancer on Linux Server? L4 vs L7 Explained with Examples"},"content":{"rendered":"\n<p><strong>A load balancer on a Linux server<\/strong> distributes incoming traffic across multiple backend servers to improve performance, uptime, and scalability. To use it: choose a tool (HAProxy, Nginx, or LVS\/IPVS), configure backends and health checks, enable SSL\/TLS if needed, harden security, and monitor traffic. Test failover and performance before going live.<\/p>\n\n\n\n<p>If you\u2019re scaling applications, learning how to use a load balancer on a Linux server is one of the highest-impact steps you can take. In this guide, you\u2019ll set up a production-ready <a href=\"https:\/\/www.youstable.com\/blog\/configure-load-balancer-on-linux\/\">load balancer<\/a> with Nginx and HAProxy, compare approaches (L4 vs L7), add SSL, sticky sessions, health checks, high availability, and tune for speed and security.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"what-is-a-load-balancer-and-why-use-it-on-a-linux-server\"><strong>What Is a Load Balancer and Why Use It on a Linux Server?<\/strong><\/h2>\n\n\n\n<div class=\"wp-block-media-text has-media-on-the-right is-stacked-on-mobile\"><div class=\"wp-block-media-text__content\">\n<p>A load balancer sits in front of your application servers and distributes traffic to prevent overload, reduce latency, and provide high availability. 
On Linux, common options include Nginx (Layer 7 HTTP\/HTTPS), HAProxy (Layer 4\/7 TCP\/HTTP), and LVS\/IPVS (high-performance Layer 4 in-kernel load balancing).<\/p>\n<\/div><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1168\" height=\"784\" src=\"https:\/\/www.youstable.com\/blog\/wp-content\/uploads\/2025\/12\/What-Is-a-Load-Balancer-and-Why-Use-It-on-a-Linux-Server.png\" alt=\"\" class=\"wp-image-13602 size-full\" srcset=\"https:\/\/www.youstable.com\/blog\/wp-content\/uploads\/2025\/12\/What-Is-a-Load-Balancer-and-Why-Use-It-on-a-Linux-Server.png 1168w, https:\/\/www.youstable.com\/blog\/wp-content\/uploads\/2025\/12\/What-Is-a-Load-Balancer-and-Why-Use-It-on-a-Linux-Server-150x101.png 150w\" sizes=\"auto, (max-width: 1168px) 100vw, 1168px\" \/><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"layer-4-vs-layer-7\"><strong>Layer 4 vs Layer 7<\/strong><\/h2>\n\n\n\n<p>Layer 4 (TCP\/UDP) forwards connections without inspecting HTTP; it\u2019s extremely fast and simple. Layer 7 understands HTTP\/HTTPS and can route by URL, headers, cookies, and handle SSL termination. 
Choose L4 for raw throughput (e.g., TCP services) and L7 for smarter routing (web apps, APIs).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"key-benefits\"><strong>Key Benefits<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>High availability:<\/strong> remove failed nodes via health checks<\/li>\n\n\n\n<li><strong>Scalability:<\/strong> add or remove backend servers seamlessly<\/li>\n\n\n\n<li><strong>Performance:<\/strong> concurrency, caching layers, TCP connection reuse<\/li>\n\n\n\n<li><strong>Zero-downtime deployments:<\/strong> drain nodes and roll out safely<\/li>\n\n\n\n<li><strong>Security:<\/strong> centralize TLS, WAF, rate limiting (L7)<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"choosing-a-linux-load-balancer-nginx-vs-haproxy-vs-lvs-ipvs\"><strong>Choosing a Linux Load Balancer (Nginx vs HAProxy vs LVS\/IPVS)<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"nginx-layer-7-http-https\"><strong>Nginx (Layer 7 HTTP\/HTTPS)<\/strong><\/h3>\n\n\n\n<p>Nginx works well as a reverse proxy and HTTP load balancer, offering easy SSL termination and strong static-file performance. Passive health checks are built in; active checks require Nginx Plus or third-party modules. It is simple to learn and widely supported.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"haproxy-layer-4-7\"><strong>HAProxy (Layer 4\/7)<\/strong><\/h3>\n\n\n\n<p>HAProxy is a production workhorse with advanced health checks, detailed stats, TLS offload, stickiness, and high performance. It is ideal for dynamic routing, microservices, and demanding traffic patterns.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"lvs-ipvs-layer-4-kernel-level\"><strong>LVS\/IPVS (Layer 4, kernel-level)<\/strong><\/h3>\n\n\n\n<p>IPVS (with Keepalived) provides extremely fast L4 load balancing in kernel space. 
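<\/p>\n\n\n\n<p>As a brief, hedged sketch of what that L4 tier can look like (using the VIP 10.0.0.10 and the backend IPs from this guide\u2019s reference architecture), a minimal round-robin IPVS service can be created with ipvsadm:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Install the userspace tool (the ip_vs kernel module loads on first use)\nsudo apt install -y ipvsadm   # or: sudo dnf install -y ipvsadm\n\n# Create a virtual TCP service on the VIP with round-robin scheduling\nsudo ipvsadm -A -t 10.0.0.10:80 -s rr\n\n# Add real servers in masquerade (NAT) mode\nsudo ipvsadm -a -t 10.0.0.10:80 -r 10.0.0.11:80 -m\nsudo ipvsadm -a -t 10.0.0.10:80 -r 10.0.0.12:80 -m\n\n# Inspect the virtual server table\nsudo ipvsadm -L -n<\/code><\/pre>\n\n\n\n<p>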
Best for very high throughput (millions of connections), often paired with HAProxy or Nginx at L7 for HTTP logic.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"which-one-should-you-use\"><strong>Which one should you use?<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Web apps\/APIs needing SSL, headers, URL routing: HAProxy or Nginx<\/li>\n\n\n\n<li>Massive L4 throughput (TCP\/UDP): IPVS\/Keepalived<\/li>\n\n\n\n<li>Simple reverse proxy to start fast: Nginx<\/li>\n\n\n\n<li>Advanced health checks and stickiness: HAProxy<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"prerequisites-and-reference-architecture\"><strong>Prerequisites and Reference Architecture<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>1\u20132 Linux load balancer nodes (Ubuntu\/Debian\/CentOS\/Alma\/Rocky)<\/li>\n\n\n\n<li>2+ application servers (e.g., 10.0.0.11, 10.0.0.12)<\/li>\n\n\n\n<li>DNS A\/AAAA for your domain to the load balancer\u2019s public IP or VIP<\/li>\n\n\n\n<li>Firewall open: 80\/443 to LB, 8404 (optional HAProxy stats), protocol 112 for VRRP (Keepalived)<\/li>\n\n\n\n<li>System access: sudo, SSH, editor, curl<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"quick-start-nginx-load-balancer-http-https\"><strong>Quick Start: Nginx Load Balancer (HTTP\/HTTPS)<\/strong><\/h2>\n\n\n\n<p>Nginx is a straightforward way to start with an L7 load balancer on a Linux server. 
Below is a minimal HTTP configuration with least-connections and passive health checks.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Ubuntu\/Debian\nsudo apt update &amp;&amp; sudo apt <a href=\"https:\/\/www.youstable.com\/blog\/install-nginx-on-linux\/\">install -y nginx<\/a>\n\n# RHEL\/CentOS\/Alma\/Rocky\nsudo dnf install -y nginx\nsudo systemctl enable --now nginx<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code># \/etc\/nginx\/conf.d\/lb.conf\nupstream app_pool {\n    least_conn;\n    server 10.0.0.11:80 max_fails=3 fail_timeout=10s;\n    server 10.0.0.12:80 max_fails=3 fail_timeout=10s;\n    # For session persistence, consider:\n    # ip_hash;  # Simple stickiness by client IP (not ideal behind NAT)\n}\n\nserver {\n    listen 80 default_server reuseport;\n    server_name example.com;\n\n    location \/ {\n        proxy_pass http:\/\/app_pool;\n        proxy_set_header Host $host;\n        proxy_set_header X-Real-IP $remote_addr;\n        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n        proxy_set_header X-Forwarded-Proto $scheme;\n\n        proxy_connect_timeout 5s;\n        proxy_read_timeout 60s;\n        proxy_next_upstream error timeout http_502 http_503 http_504;\n    }\n}<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo nginx -t\nsudo systemctl reload nginx<\/code><\/pre>\n\n\n\n<p><strong>To enable HTTPS quickly with Let\u2019s Encrypt on Nginx:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo apt install -y certbot python3-certbot-nginx\nsudo certbot --nginx -d example.com -d www.example.com\nsudo systemctl reload nginx<\/code><\/pre>\n\n\n\n<p>Note: Active HTTP health checks require Nginx Plus or an external checker. 
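<\/p>\n\n\n\n<p>As one hedged sketch of such an external checker (the \/health path, backend IPs, and config path are assumptions based on this guide\u2019s example setup), a small cron-driven script can probe each backend and log failures; you could extend it to comment a failed server out of the upstream block and reload Nginx:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>#!\/bin\/bash\n# Probe each backend \/health endpoint; anything other than HTTP 200 is unhealthy\nfor backend in 10.0.0.11 10.0.0.12; do\n    code=$(curl -s -o \/dev\/null -w '%{http_code}' --max-time 3 \"http:\/\/${backend}\/health\")\n    if [ \"$code\" != \"200\" ]; then\n        logger -t lb-check \"backend ${backend} unhealthy (HTTP ${code})\"\n        # Optional: disable the server in \/etc\/nginx\/conf.d\/lb.conf,\n        # then run: nginx -t &amp;&amp; systemctl reload nginx\n    fi\ndone<\/code><\/pre>\n\n\n\n<p>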
If you need robust checks, prefer HAProxy for the load balancer role.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"quick-start-haproxy-load-balancer-l4-l7\"><strong>Quick Start: HAProxy Load Balancer (L4\/L7)<\/strong><\/h2>\n\n\n\n<p>HAProxy provides strong health checks, stats, TLS termination, and stickiness\u2014ideal for production web apps and APIs on a Linux server.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Ubuntu\/Debian\nsudo apt update &amp;&amp; sudo apt <a href=\"https:\/\/www.youstable.com\/blog\/install-haproxy-on-linux\/\">install -y haproxy<\/a>\n\n# RHEL\/CentOS\/Alma\/Rocky\nsudo dnf install -y haproxy<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code># \/etc\/haproxy\/haproxy.cfg (minimal but production-friendly)\nglobal\n    log \/dev\/log local0\n    log \/dev\/log local1 notice\n    user haproxy\n    group haproxy\n    daemon\n    maxconn 50000\n    tune.ssl.default-dh-param 2048\n\ndefaults\n    log     global\n    mode    http\n    option  httplog\n    option  dontlognull\n    option  http-server-close\n    option  forwardfor\n    retries 3\n    timeout http-request 10s\n    timeout queue        30s\n    timeout connect      5s\n    timeout client       60s\n    timeout server       60s\n    timeout http-keep-alive 10s\n    timeout check        5s\n\nfrontend http-in\n    bind *:80\n    http-response add-header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" if { ssl_fc }\n    <a href=\"https:\/\/www.youstable.com\/blog\/redirect-http-to-https\/\">redirect scheme https<\/a> code 301 if !{ ssl_fc }\n    default_backend app\n\nfrontend https-in\n    bind *:443 ssl crt \/etc\/haproxy\/certs\/\n    http-request set-header X-Forwarded-Proto https\n    default_backend app\n\nbackend app\n    balance leastconn\n    option httpchk GET \/health\n    http-check expect status 200\n    cookie SRV insert indirect nocache\n    server app1 10.0.0.11:80 check cookie s1\n    server app2 
10.0.0.12:80 check cookie s2\n\nlisten stats\n    bind :8404\n    stats enable\n    stats uri \/\n    stats realm HAProxy\\ Stats\n    stats auth admin:StrongPass123!<\/code><\/pre>\n\n\n\n<p>For TLS, place PEM files at \/etc\/haproxy\/certs (one .pem per domain, containing fullchain + private key):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Using Certbot to obtain certs, then combine for HAProxy:\nsudo certbot certonly --standalone -d example.com\nsudo bash -c 'cat \/etc\/letsencrypt\/live\/example.com\/fullchain.pem \\\n\/etc\/letsencrypt\/live\/example.com\/privkey.pem \\\n&gt; \/etc\/haproxy\/certs\/example.com.pem'\n\nsudo haproxy -c -f \/etc\/haproxy\/haproxy.cfg\nsudo systemctl enable --now haproxy<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"high-availability-with-keepalived-vrrp\"><strong>High Availability with Keepalived (VRRP)<\/strong><\/h2>\n\n\n\n<p>Run two Linux load balancers with a Virtual IP (VIP) that fails over automatically. Keepalived uses VRRP (protocol 112) to elect a MASTER and BACKUP. If the primary fails\u2014or HAProxy stops\u2014the VIP moves to the backup in seconds.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Install\nsudo apt install -y keepalived   # or: sudo dnf install -y keepalived\n\n# \/etc\/keepalived\/keepalived.conf (on primary)\nvrrp_script chk_haproxy {\n    script \"killall -0 haproxy\"\n    interval 2\n    weight -30\n}\nvrrp_instance VI_1 {\n    state MASTER\n    interface eth0\n    virtual_router_id 51\n    priority 150\n    advert_int 1\n    authentication {\n        auth_type PASS\n        auth_pass StrongPass\n    }\n    virtual_ipaddress {\n        10.0.0.10\/24 dev eth0\n    }\n    track_script {\n        chk_haproxy\n    }\n}<\/code><\/pre>\n\n\n\n<p>Use the same config on the backup node but set state BACKUP and a lower priority (e.g., 100). 
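<\/p>\n\n\n\n<p>For clarity, the backup node\u2019s file differs only in those two directives; a minimal sketch:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># \/etc\/keepalived\/keepalived.conf (on backup)\nvrrp_script chk_haproxy {\n    script \"killall -0 haproxy\"\n    interval 2\n    weight -30\n}\nvrrp_instance VI_1 {\n    state BACKUP\n    interface eth0\n    virtual_router_id 51\n    priority 100\n    advert_int 1\n    authentication {\n        auth_type PASS\n        auth_pass StrongPass\n    }\n    virtual_ipaddress {\n        10.0.0.10\/24 dev eth0\n    }\n    track_script {\n        chk_haproxy\n    }\n}<\/code><\/pre>\n\n\n\n<p>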
Open VRRP (protocol 112) between the LBs in your firewall or security group.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"session-persistence-sticky-sessions\"><strong>Session Persistence (Sticky Sessions)<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>HAProxy:<\/strong> cookie-based stickiness (recommended for web apps)<\/li>\n\n\n\n<li><strong>Nginx OSS: <\/strong>ip_hash (basic, IP-based; may be inaccurate behind NAT or CDNs)<\/li>\n\n\n\n<li><strong>Stateless apps:<\/strong> prefer no stickiness; use shared session stores (Redis, database) when needed<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"health-checks-monitoring-and-logging\"><strong>Health Checks, Monitoring, and Logging<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>HAProxy health checks: <\/strong>option httpchk, http-check expect, mark down failed nodes automatically<\/li>\n\n\n\n<li><strong>Stats and metrics: <\/strong>HAProxy stats page (:8404), Prometheus exporters, syslog<\/li>\n\n\n\n<li><strong>Nginx: <\/strong>access\/error logs, stub_status module for basic metrics<\/li>\n\n\n\n<li><strong>External uptime checks:<\/strong> curl, Pingdom, UptimeRobot, or k6\/wrk for load testing<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"performance-tuning-for-linux-load-balancers\"><strong>Performance Tuning for Linux Load Balancers<\/strong><\/h2>\n\n\n\n<p>Apply sane kernel and process limits to handle spikes and keep latency low.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># \/etc\/sysctl.d\/99-lb.conf\nnet.core.somaxconn = 65535\nnet.ipv4.ip_local_port_range = 1024 65000\nnet.ipv4.tcp_fin_timeout = 15\nnet.ipv4.tcp_tw_reuse = 1\nfs.file-max = 1000000\n\nsudo sysctl --system\n\n# Raise open files limit\necho \"* soft nofile 100000\" | sudo tee -a \/etc\/security\/limits.conf\necho \"* hard nofile 100000\" | sudo tee -a 
\/etc\/security\/limits.conf<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Nginx:<\/strong> use reuseport on listen, enable keepalive to backends<\/li>\n\n\n\n<li><strong>HAProxy:<\/strong> tune.maxaccept, nbthread (modern HAProxy uses threads), http-reuse safe, reasonable timeouts<\/li>\n\n\n\n<li><strong>Scale horizontally: <\/strong>add more LB nodes with Keepalived or anycast<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"security-hardening\"><strong>Security Hardening<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Restrict management surfaces: <\/strong>bind stats to localhost or protect with auth and firewall<\/li>\n\n\n\n<li><strong>Strong TLS: <\/strong>enable TLS 1.2\/1.3, modern ciphers, HSTS, OCSP stapling<\/li>\n\n\n\n<li><strong>Firewall:<\/strong> allow only 80\/443 (and 8404 if required), permit VRRP (protocol 112) between LBs<\/li>\n\n\n\n<li><strong>Sanitize headers: <\/strong>set X-Forwarded-* and strip hop-by-hop headers<\/li>\n\n\n\n<li><strong>Rate limiting\/WAF:<\/strong> HAProxy stick-tables or Nginx limit_req, add a WAF where needed<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"testing-and-troubleshooting\"><strong>Testing and Troubleshooting<\/strong><\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code># Sanity checks\ncurl -I -H \"Host: example.com\" http:\/\/YOUR_LB_IP\/\ncurl -I https:\/\/example.com\/\n\n# Validate configs\nnginx -t\nhaproxy -c -f \/etc\/haproxy\/haproxy.cfg\n\n# Observe logs\nsudo journalctl -u nginx -f\nsudo journalctl -u haproxy -f\n\n# Load testing (install first): wrk or ab\nwrk -t4 -c200 -d60s https:\/\/example.com\/<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"real-world-use-cases-and-patterns\"><strong>Real-World Use Cases and Patterns<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>WordPress at scale:<\/strong> HAProxy terminates TLS and load 
balances PHP-FPM\/Nginx backends; media on object storage or CDN<\/li>\n\n\n\n<li><strong>Blue\/green deployments:<\/strong> drain a server from the pool, deploy, health-check, re-add<\/li>\n\n\n\n<li><strong>Microservices gateway: <\/strong>route paths (\/api\/, \/auth\/) to different backends with L7 rules<\/li>\n\n\n\n<li><strong>Hybrid L4+L7:<\/strong> IPVS for raw TCP scale, HAProxy for HTTP intelligence behind it<\/li>\n<\/ul>\n\n\n\n<p>Don\u2019t want to build this alone? YouStable\u2019s managed hosting team routinely deploys HAProxy\/Nginx load balancers with Keepalived, SSL automation, monitoring, and DDoS filtering\u2014so you can focus on your app while we operate the edge.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"step-by-step-summary-how-to-use-a-load-balancer-on-linux-server\"><strong>Step-by-Step Summary: How to Use a Load Balancer on Linux Server<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Pick your stack: <\/strong>Nginx or HAProxy (L7), IPVS (L4) for extreme throughput<\/li>\n\n\n\n<li>Install on a Linux server; configure backends and routing algorithm<\/li>\n\n\n\n<li>Add health checks (HAProxy) and enable TLS offload<\/li>\n\n\n\n<li>Harden security and tune kernel\/process limits<\/li>\n\n\n\n<li>Optionally add Keepalived for a VIP and automatic failover<\/li>\n\n\n\n<li>Test with curl and load tools; monitor logs and metrics<\/li>\n\n\n\n<li>Scale by adding backend servers or additional LB nodes<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"faqs\"><strong>FAQs<\/strong><\/h2>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1765789524641\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"which-is-better-for-linux-load-balancing-nginx-or-haproxy\"><strong>Which is better for Linux load balancing: Nginx or HAProxy?<\/strong><\/h3>\n<div 
class=\"rank-math-answer \">\n\n<p>For most HTTP\/HTTPS apps, HAProxy offers richer health checks, stickiness, and observability. Nginx is excellent as a simple reverse proxy and can be enough for many sites. If you need advanced L7 features, start with HAProxy; if you need simplicity, use Nginx.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1765789541515\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"how-do-i-choose-between-layer-4-and-layer-7-load-balancing\"><strong>How do I choose between Layer 4 and Layer 7 load balancing?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Choose L4 for maximum throughput with minimal logic (TCP\/UDP services). Choose L7 when you need HTTP-aware routing, SSL termination, header-based rules, caching, or WAF integrations. Many large deployments combine both.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1765789556947\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"how-do-i-enable-sticky-sessions-on-a-linux-load-balancer\"><strong>How do I enable sticky sessions on a Linux load balancer?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>In HAProxy, use cookie insertion in the backend and set a cookie per server. In Nginx OSS, use ip_hash (basic) or move sessions to a shared store (Redis) to avoid stickiness. Cookie-based persistence is usually more reliable than IP-based.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1765789567149\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"can-i-terminate-ssl-tls-on-the-load-balancer\"><strong>Can I terminate SSL\/TLS on the load balancer?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Yes. Terminate TLS at HAProxy or Nginx and forward plaintext to backends, or re-encrypt to backends if required. 
Use Let\u2019s Encrypt automation and strong TLS settings (TLS 1.2\/1.3, modern ciphers, HSTS).<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1765789581637\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"how-do-i-achieve-high-availability-for-the-load-balancer-itself\"><strong>How do I achieve high availability for the load balancer itself?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Run two Linux load balancers and use Keepalived (VRRP) to float a Virtual IP. Health checks ensure automatic failover if the primary node or process fails. Ensure VRRP (protocol 112) is allowed between the nodes and test failover regularly.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>A load balancer on a Linux server distributes incoming traffic across multiple backend servers to improve performance, uptime, and scalability. [&hellip;]<\/p>\n","protected":false},"author":21,"featured_media":15532,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","backgro
und-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[350],"tags":[],"class_list":["post-13245","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-knowledgebase"],"acf":[],"featured_image_src":"https:\/\/www.youstable.com\/blog\/wp-content\/uploads\/2025\/12\/How-to-Use-Load-Balancer-on-Linux-Server.jpg","author_info":{"display_name":"Sanjeet Chauhan","author_link":"https:\/\/www.youstable.com\/blog\/author\/sanjeet"},"_links":{"self":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts\/13245","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/users\/21"}],"replies":[{"embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/comments?post=13245"}],"version-history":[{"count":5,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts\/13245\/revisions"}],"predecessor-version":[{"id":15533,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts\/13245\/revisions\/15533"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/media\/15532"}],"wp:attachment":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/media?parent=13245"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/categories?post=13245"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/tags?post=13245"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}