{"id":14227,"date":"2025-12-30T10:35:20","date_gmt":"2025-12-30T05:05:20","guid":{"rendered":"https:\/\/www.youstable.com\/blog\/?p=14227"},"modified":"2025-12-30T10:35:22","modified_gmt":"2025-12-30T05:05:22","slug":"create-load-balancer-on-linux-server","status":"publish","type":"post","link":"https:\/\/www.youstable.com\/blog\/create-load-balancer-on-linux-server","title":{"rendered":"How to Create Load Balancer on Linux Server in 2026"},"content":{"rendered":"\n<p><strong>A Linux load balancer<\/strong> distributes incoming traffic across multiple backend servers to increase availability, performance, and fault tolerance. To create a load balancer on a Linux server, install and configure a proxy like<strong> HAProxy or Nginx<\/strong>, set health checks, enable SSL termination if needed, and harden, monitor, and test for high availability.<\/p>\n\n\n\n<p>If you\u2019re wondering how to create a load balancer on Linux server, this guide walks you through a production-ready setup using Nginx and HAProxy, with optional high availability via Keepalived. <\/p>\n\n\n\n<p>We\u2019ll cover planning, installation, configuration, SSL termination, sticky sessions, security, monitoring, and performance tuning\u2014step by step and beginner friendly.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"what-is-a-linux-load-balancer-and-why-you-need-one\"><strong>What is a Linux Load Balancer (and Why You Need One)<\/strong><\/h2>\n\n\n\n<div class=\"wp-block-media-text has-media-on-the-right is-stacked-on-mobile\"><div class=\"wp-block-media-text__content\">\n<p>A load balancer sits in front of your application servers and routes requests to healthy instances. This helps you handle traffic spikes, reduce downtime, and scale horizontally. 
On Linux, the most popular open-source choices are Nginx and HAProxy for Layer 7 (HTTP\/HTTPS) and Layer 4 (TCP) load balancing, and LVS\/IPVS for ultra-high throughput at Layer 4.<\/p>\n<\/div><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1168\" height=\"784\" src=\"https:\/\/www.youstable.com\/blog\/wp-content\/uploads\/2025\/12\/What-Is-a-Linux-Load-Balancer-and-Why-You-Need-One.png\" alt=\"What Is a Linux Load Balancer (and Why You Need One)\" class=\"wp-image-14647 size-full\" srcset=\"https:\/\/www.youstable.com\/blog\/wp-content\/uploads\/2025\/12\/What-Is-a-Linux-Load-Balancer-and-Why-You-Need-One.png 1168w, https:\/\/www.youstable.com\/blog\/wp-content\/uploads\/2025\/12\/What-Is-a-Linux-Load-Balancer-and-Why-You-Need-One-150x101.png 150w\" sizes=\"auto, (max-width: 1168px) 100vw, 1168px\" \/><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"prerequisites-and-architecture-planning\"><strong>Prerequisites and Architecture Planning<\/strong><\/h2>\n\n\n\n<p>Before you begin, plan your architecture and gather the essentials:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>1 public Linux server (Debian\/Ubuntu\/CentOS\/RHEL\/AlmaLinux\/Rocky) for the load balancer<\/li>\n\n\n\n<li>2+ private\/backend servers running your app (e.g., Node.js, PHP-FPM, Django, or static content)<\/li>\n\n\n\n<li>Domain and DNS record (A\/AAAA) pointing to the <a href=\"https:\/\/www.youstable.com\/blog\/fix-load-balancer-on-linux\/\">load balancer<\/a><\/li>\n\n\n\n<li>Root or sudo access, ports 80\/443 open<\/li>\n\n\n\n<li><a href=\"https:\/\/www.youstable.com\/blog\/activate-an-ssl-certificate\/\">SSL certificate<\/a> (Let\u2019s Encrypt or custom)<\/li>\n<\/ul>\n\n\n\n<p><strong>Decide on:-<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Layer 7 (HTTP\/HTTPS) vs Layer 4 (TCP) balancing<\/li>\n\n\n\n<li>Load balancing algorithm (round-robin, leastconn, source\/IP 
hash)<\/li>\n\n\n\n<li>Session persistence (sticky sessions) needs<\/li>\n\n\n\n<li>Active health checks vs passive checks<\/li>\n\n\n\n<li>Single load balancer vs high availability (active\/passive with VRRP)<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"option-1-create-an-http-https-load-balancer-with-nginx\"><strong>Option 1: Create an HTTP\/HTTPS Load Balancer with Nginx<\/strong><\/h2>\n\n\n\n<p>Nginx is a lightweight, fast reverse proxy and HTTP load balancer. It supports round-robin by default, with optional IP-based session persistence.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"install-nginx\"><strong>Install Nginx<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code># Debian\/Ubuntu\nsudo apt update &amp;&amp; sudo apt <a href=\"https:\/\/www.youstable.com\/blog\/install-nginx-on-linux\/\">install -y nginx<\/a>\n\n# RHEL\/CentOS\/Alma\/Rocky\nsudo dnf install -y nginx\nsudo systemctl enable --now nginx<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"configure-upstreams-and-load-balancing\"><strong>Configure Upstreams and Load Balancing<\/strong><\/h3>\n\n\n\n<p>Create an upstream with your backend servers and a server block that proxies requests. 
This example uses round-robin with passive health checks.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo nano \/etc\/nginx\/conf.d\/loadbalancer.conf\n\nupstream app_backend {\n    # Use round-robin by default\n    server 10.0.0.11:8080 max_fails=3 fail_timeout=10s;\n    server 10.0.0.12:8080 max_fails=3 fail_timeout=10s;\n\n    # For simple persistence, uncomment IP hash:\n    # ip_hash;\n}\n\nserver {\n    listen 80;\n    server_name example.com;\n\n    # <a href=\"https:\/\/www.youstable.com\/blog\/redirect-http-to-https\/\">Redirect HTTP<\/a> to HTTPS (uncomment after SSL is ready)\n    # return 301 https:\/\/$host$request_uri;\n}\n\nserver {\n    listen 443 ssl http2;\n    server_name example.com;\n\n    ssl_certificate \/etc\/ssl\/certs\/example.crt;\n    ssl_certificate_key \/etc\/ssl\/private\/example.key;\n\n    # Security and performance\n    ssl_protocols TLSv1.2 TLSv1.3;\n    ssl_ciphers HIGH:!aNULL:!MD5;\n    client_max_body_size 25m;\n    proxy_read_timeout 60s;\n\n    location \/ {\n        proxy_pass http:\/\/app_backend;\n        proxy_set_header Host $host;\n        proxy_set_header X-Real-IP $remote_addr;\n        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n        proxy_set_header X-Forwarded-Proto $scheme;\n        proxy_http_version 1.1;\n        proxy_set_header Connection \"\";\n    }\n\n    # Health endpoint passthrough\n    location \/health {\n        proxy_pass http:\/\/app_backend\/health;\n    }\n}<\/code><\/pre>\n\n\n\n<p><strong>Validate and reload:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo nginx -t\nsudo systemctl reload nginx<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"enable-ssl-with-lets-encrypt-optional\"><strong>Enable SSL with Let\u2019s Encrypt (Optional)<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code># Debian\/Ubuntu\nsudo apt install -y certbot python3-certbot-nginx\nsudo certbot --nginx -d example.com\n\n# RHEL family (EPEL may 
be required)\nsudo dnf install -y certbot python3-certbot-nginx\nsudo certbot --nginx -d example.com<\/code><\/pre>\n\n\n\n<p>Nginx open-source supports passive health checks via <code>max_fails<\/code>\/<code>fail_timeout<\/code>. For advanced active health checks or cookie-based stickiness, consider HAProxy or NGINX Plus. For simpler persistence, use <code>ip_hash<\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"option-2-create-a-tcp-http-load-balancer-with-haproxy\"><strong>Option 2: Create a TCP\/HTTP Load Balancer with HAProxy<\/strong><\/h2>\n\n\n\n<p>HAProxy excels at Layer 7 HTTP and Layer 4 TCP load balancing, offers robust health checks, detailed observability, and built-in sticky sessions, making it ideal for production workloads.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"install-haproxy\"><strong>Install HAProxy<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code># Debian\/Ubuntu\nsudo apt update &amp;&amp; sudo apt <a href=\"https:\/\/www.youstable.com\/blog\/install-haproxy-on-linux\/\">install -y haproxy<\/a>\n\n# RHEL\/CentOS\/Alma\/Rocky\nsudo dnf install -y haproxy\nsudo systemctl enable --now haproxy<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"basic-http-load-balancer-round-robin-plus-health-checks\"><strong>Basic HTTP Load Balancer (Round-robin + Health Checks)<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo nano \/etc\/haproxy\/haproxy.cfg\n\nglobal\n    log \/dev\/log local0\n    log \/dev\/log local1 notice\n    maxconn 50000\n    tune.ssl.default-dh-param 2048\n    user haproxy\n    group haproxy\n    daemon\n\ndefaults\n    log     global\n    mode    http\n    option  httplog\n    option  dontlognull\n    timeout connect 5s\n    timeout client  60s\n    timeout server  60s\n    retries 3\n\nfrontend fe_http\n    bind *:80\n    redirect scheme https code 301 if !{ ssl_fc }\n\nfrontend fe_https\n    bind *:443 ssl 
crt \/etc\/ssl\/private\/example.pem\n    mode http\n    option httpclose\n    option forwardfor\n    default_backend be_app\n\nbackend be_app\n    mode http\n    balance roundrobin\n    option httpchk GET \/health\n    http-check expect rstring OK\n    server app1 10.0.0.11:8080 check fall 3 rise 2\n    server app2 10.0.0.12:8080 check fall 3 rise 2\n\nlisten stats\n    bind *:8404\n    mode http\n    stats enable\n    stats uri \/stats\n    stats refresh 5s\n    # Protect with basic auth\n    stats auth admin:StrongPassHere<\/code><\/pre>\n\n\n\n<p><strong>Concatenate your certificate and key into a PEM for HAProxy:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo bash -c 'cat \/etc\/ssl\/certs\/example.crt \/etc\/ssl\/private\/example.key &gt; \/etc\/ssl\/private\/example.pem'\nsudo chmod 600 \/etc\/ssl\/private\/example.pem\nsudo systemctl restart haproxy<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"sticky-sessions-cookie-based\"><strong>Sticky Sessions (Cookie-Based)<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>backend be_app\n    mode http\n    balance roundrobin\n    cookie SRV insert indirect nocache\n    option httpchk GET \/health\n    http-check expect rstring OK\n    server app1 10.0.0.11:8080 check cookie s1\n    server app2 10.0.0.12:8080 check cookie s2<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"rate-limiting-and-ddos-basics\"><strong>Rate Limiting and DDoS Basics<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>frontend fe_https\n    # ...previous lines...\n    stick-table type ip size 200k expire 10m store http_req_rate(10s)\n    tcp-request connection track-sc0 src\n    acl too_fast sc0_http_req_rate gt 50\n    http-request deny if too_fast<\/code><\/pre>\n\n\n\n<p>This limits abusive clients to 50 requests per 10 seconds. 
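<\/p>\n\n\n\n<p>If trusted uptime monitors or internal health probes would trip this limit, exempt them before the deny rule; HAProxy evaluates <code>http-request<\/code> rules in order. A minimal sketch, assuming your trusted hosts sit in 10.0.0.0\/8 (adjust the ACL to your network):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>frontend fe_https\n    # ...stick-table and tracking lines from above...\n    acl trusted_src src 10.0.0.0\/8\n    http-request allow if trusted_src\n    http-request deny if too_fast<\/code><\/pre>\n\n\n\n<p>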
Adjust for your workload.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"high-availability-with-keepalived-vrrp-virtual-ip\"><strong>High Availability with Keepalived (VRRP Virtual IP)<\/strong><\/h2>\n\n\n\n<p>To avoid a single point of failure, run two load balancers (LB1 and LB2) and float a Virtual IP (VIP) between them using Keepalived. Clients connect to the VIP, which fails over automatically.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"install-and-configure-keepalived\"><strong>Install and Configure Keepalived<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code># Debian\/Ubuntu\nsudo apt install -y keepalived\n\n# RHEL family\nsudo dnf install -y keepalived<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code># On LB1 (MASTER)\nsudo nano \/etc\/keepalived\/keepalived.conf\nvrrp_track_process track_haproxy {\n  process haproxy\n}\n\nvrrp_instance VI_1 {\n  state MASTER\n  interface eth0\n  virtual_router_id 51\n  priority 200\n  advert_int 1\n  authentication {\n    auth_type PASS\n    auth_pass StrongVRRPPass\n  }\n  virtual_ipaddress {\n    203.0.113.10\/24 dev eth0\n  }\n  track_process {\n    track_haproxy\n  }\n}\n\n# On LB2 (BACKUP)\nsudo nano \/etc\/keepalived\/keepalived.conf\nvrrp_track_process track_haproxy {\n  process haproxy\n}\n\nvrrp_instance VI_1 {\n  state BACKUP\n  interface eth0\n  virtual_router_id 51\n  priority 100\n  advert_int 1\n  authentication {\n    auth_type PASS\n    auth_pass StrongVRRPPass\n  }\n  virtual_ipaddress {\n    203.0.113.10\/24 dev eth0\n  }\n  track_process {\n    track_haproxy\n  }\n}\n\nsudo systemctl enable --now keepalived<\/code><\/pre>\n\n\n\n<p><strong>Test failover by stopping HAProxy or Keepalived on LB1 and verifying the VIP moves to LB2:<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ip addr show | grep 203.0.113.10\nsudo systemctl stop haproxy   # triggers VIP failover<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"firewall-selinux-and-hardening\"><strong>Firewall, SELinux, and Hardening<\/strong><\/h2>\n\n\n\n<h3 
class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"open-required-ports\"><strong>Open Required Ports<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code># UFW (Ubuntu)\nsudo ufw allow 80,443\/tcp\nsudo ufw allow 8404\/tcp   # HAProxy stats (optional)\nsudo ufw enable\n\n# firewalld (RHEL family)\nsudo firewall-cmd --add-service=http --permanent\nsudo firewall-cmd --add-service=https --permanent\nsudo firewall-cmd --add-port=8404\/tcp --permanent\nsudo firewall-cmd --reload<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"selinux-contexts-haproxy\"><strong>SELinux Contexts (HAProxy)<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code># Place certs where HAProxy can read them:\nsudo mkdir -p \/etc\/pki\/haproxy\nsudo cp example.pem \/etc\/pki\/haproxy\/\nsudo chown haproxy:haproxy \/etc\/pki\/haproxy\/example.pem\nsudo chmod 600 \/etc\/pki\/haproxy\/example.pem\n\n# If SELinux is enforcing, label context:\nsudo semanage fcontext -a -t haproxy_etc_t \"\/etc\/pki\/haproxy(\/.*)?\"\nsudo restorecon -Rv \/etc\/pki\/haproxy<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"system-tuning\"><strong>System Tuning<\/strong><\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo tee -a \/etc\/sysctl.d\/99-lb-tuning.conf &gt; \/dev\/null &lt;&lt;'EOF'\nnet.core.somaxconn=65535\nnet.ipv4.ip_local_port_range=10240 65000\nnet.ipv4.tcp_tw_reuse=1\nnet.ipv4.tcp_fin_timeout=15\nnet.core.netdev_max_backlog=16384\nEOF\n\nsudo sysctl --system\n\n# <a href=\"https:\/\/www.youstable.com\/blog\/how-to-increase-file-upload-size-in-cpanel\/\">Increase file<\/a> descriptors\necho '* soft nofile 200000' | sudo tee -a \/etc\/security\/limits.conf\necho '* hard nofile 200000' | sudo tee -a \/etc\/security\/limits.conf<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"monitoring-logs-and-observability\"><strong>Monitoring, Logs, and Observability<\/strong><\/h2>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li>Enable HAProxy stats page on port 8404 (as shown) and protect it with auth.<\/li>\n\n\n\n<li>Ship logs to a central system (Elastic, Loki, CloudWatch) using rsyslog or Vector.<\/li>\n\n\n\n<li>Monitor key metrics: request rate, active connections, response times, backend health states, 4xx\/5xx rates.<\/li>\n\n\n\n<li>Use node_exporter and HAProxy exporter with Prometheus + Grafana for dashboards.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"benchmark-and-tune\"><strong>Benchmark and Tune<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use <code>wrk<\/code> or <code>ab<\/code> to load test against the LB VIP or domain.<\/li>\n\n\n\n<li>Adjust timeouts (<code>timeout connect\/client\/server<\/code>) to match app behavior.<\/li>\n\n\n\n<li>Pick the right algorithm: <code>leastconn<\/code> for long-lived requests; <code>roundrobin<\/code> for uniform loads; <code>source<\/code> for simple persistence.<\/li>\n\n\n\n<li>Scale out backends horizontally; add\/remove servers without downtime.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"common-errors-and-quick-fixes\"><strong>Common Errors and Quick Fixes<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>502 Bad Gateway:<\/strong> Check backend port, firewall, SELinux, or health check path.<\/li>\n\n\n\n<li><strong>SSL handshake failure:<\/strong> Ensure full chain PEM for HAProxy and correct file permissions.<\/li>\n\n\n\n<li><strong>Reload fails: <\/strong>Validate config syntax (<code>nginx -t<\/code> \/ <code>haproxy -c -f \/etc\/haproxy\/haproxy.cfg<\/code>).<\/li>\n\n\n\n<li><strong>Sticky sessions not working:<\/strong> Confirm cookie insertion and that app respects it.<\/li>\n\n\n\n<li><strong>VIP not moving:<\/strong> Verify Keepalived priorities, VRRP IDs, interface names, and process tracking.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" 
id=\"when-to-use-managed-load-balancing\"><strong>When to Use Managed Load Balancing<\/strong><\/h2>\n\n\n\n<p>If you don\u2019t want to manage SSL renewals, failover, monitoring, and patching, consider managed load balancing. At YouStable, our engineers design and operate HAProxy\/Nginx clusters with VRRP, SSL offload, WAF, and 24\/7 observability, so you can focus on your app. Ask us about a migration or performance review.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"faqs\"><strong>FAQs<\/strong><\/h2>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1765952766449\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" id=\"1-which-is-better-for-linux-load-balancing-nginx-or-haproxy\">1. <strong>Which is better for Linux load balancing: Nginx or HAProxy?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Nginx is excellent for simple HTTP reverse proxying and static content. HAProxy offers richer health checks, detailed metrics, advanced stickiness, and TCP support. For most production apps, HAProxy is the safer default. Many teams use Nginx at the edge and HAProxy for application routing.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1765952788340\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" id=\"2-how-do-i-add-ssl-to-my-linux-load-balancer\">2. <strong>How do I add SSL to my Linux load balancer?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Use Let\u2019s Encrypt with Nginx\u2019s Certbot plugin, or bind SSL in HAProxy with a PEM file that concatenates certificate and private key. 
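<\/p>\n\n<p>Certificate renewals must rebuild the combined PEM, or HAProxy will keep serving the expiring certificate. A sketch of a Certbot deploy hook, assuming the paths used in this guide (save it in \/etc\/letsencrypt\/renewal-hooks\/deploy\/ and mark it executable):<\/p>\n\n<pre class=\"wp-block-code\"><code>#!\/bin\/bash\n# Rebuild the HAProxy PEM after each successful renewal\ncat \/etc\/letsencrypt\/live\/example.com\/fullchain.pem \/etc\/letsencrypt\/live\/example.com\/privkey.pem &gt; \/etc\/ssl\/private\/example.pem\nchmod 600 \/etc\/ssl\/private\/example.pem\nsystemctl reload haproxy<\/code><\/pre>\n\n<p>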
Terminate TLS at the load balancer and proxy HTTP to backends, or re\u2011encrypt to HTTPS if required by compliance.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1765952804467\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"3-do-i-need-sticky-sessions-for-my-application\">3. <strong>Do I need sticky sessions for my application?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Use sticky sessions if your app stores session state in memory on a single backend (e.g., PHP session files). For stateless apps or distributed session stores (Redis, database), you don\u2019t need stickiness. HAProxy supports cookie-based stickiness; Nginx open-source supports IP-based <code>ip_hash<\/code>.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1765952829214\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"4-how-can-i-make-the-load-balancer-highly-available\">4. <strong>How can I make the load balancer highly available?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Deploy two load balancer nodes and a virtual IP using Keepalived (VRRP). The VIP automatically fails over if the primary node or process dies. You can also use cloud-managed LBs or BGP-based designs for larger environments.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1765952841808\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"5-whats-the-difference-between-layer-4-and-layer-7-load-balancing\">5. <strong>What\u2019s the difference between Layer 4 and Layer 7 load balancing?<\/strong><\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Layer 4 operates at the transport level (TCP\/UDP) and is extremely fast but lacks HTTP awareness. Layer 7 understands HTTP headers, paths, and cookies, enabling features like URL routing, header rewriting, compression, and sticky sessions. 
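<\/p>\n\n<p>A minimal Layer 4 sketch in HAProxy, assuming a hypothetical TCP service on port 5432 (e.g. PostgreSQL):<\/p>\n\n<pre class=\"wp-block-code\"><code>listen tcp_db\n    bind *:5432\n    mode tcp\n    balance leastconn\n    option tcp-check\n    server db1 10.0.0.21:5432 check\n    server db2 10.0.0.22:5432 check<\/code><\/pre>\n\n<p>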
Choose based on feature needs and performance goals.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>A Linux load balancer distributes incoming traffic across multiple backend servers to increase availability, performance, and fault tolerance. To create [&hellip;]<\/p>\n","protected":false},"author":21,"featured_media":16651,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[350],"tags":[],"class_list":["post-14227","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-knowledgebase"],"acf":[],"featured_image_src":"https:\/\/www.youstable.com\/blog\/wp-content\/uploads\/2025\/12\/How-to-Create-Load-Balancer-on-Linux-Server.jpg","author_info":{"display_name":"Sanjeet 
Chauhan","author_link":"https:\/\/www.youstable.com\/blog\/author\/sanjeet"},"_links":{"self":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts\/14227","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/users\/21"}],"replies":[{"embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/comments?post=14227"}],"version-history":[{"count":6,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts\/14227\/revisions"}],"predecessor-version":[{"id":16653,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts\/14227\/revisions\/16653"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/media\/16651"}],"wp:attachment":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/media?parent=14227"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/categories?post=14227"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/tags?post=14227"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}