To use HAProxy on a Linux server, install the package for your distro, create a frontend that listens on ports 80/443, define backends pointing at your app servers, enable health checks and SSL termination, then start and enable the service.
This guide explains installation, configuration, security, tuning, and troubleshooting.
In this beginner-friendly guide, you’ll learn how to use HAProxy on Linux server environments to load balance traffic, terminate SSL, route by host or path, and scale web apps. We’ll cover installation on Ubuntu/Debian and RHEL-based distros, step-by-step configuration, best practices, and real-world tips from hosting production workloads.
What Is HAProxy and Why Use It?
HAProxy (High Availability Proxy) is a high‑performance, open-source TCP/HTTP load balancer and reverse proxy. It sits in front of your application servers to distribute requests, improve reliability through health checks, support zero‑downtime maintenance, offload TLS/SSL, and add security controls like rate limiting and ACLs.

- Primary use cases: load balancing, reverse proxy, SSL termination, Layer 4/7 routing.
- Benefits: high availability, scalability, observability, and strong performance under heavy traffic.
- Alternatives: NGINX, Envoy, Traefik—but HAProxy is renowned for speed and stability.
Prerequisites
- A Linux server (Ubuntu 22.04/24.04, Debian 12, Rocky/AlmaLinux 8/9, or RHEL 8/9).
- Root or sudo access.
- Public DNS records pointing to your HAProxy server (for HTTPS, e.g., app.example.com).
- Two or more backend application servers (or containers) to load balance.
- Basic firewall access to open ports 80 and 443 (and 8404/9000 for stats/admin if needed).
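If a firewall is active, open those ports before you start; as a sketch, assuming ufw on Ubuntu/Debian and firewalld on RHEL-family systems:
# Ubuntu/Debian with ufw
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Rocky/AlmaLinux/RHEL with firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload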
Install HAProxy on Linux
Ubuntu/Debian
sudo apt update
sudo apt install -y haproxy
haproxy -v
Rocky/AlmaLinux/RHEL
sudo dnf install -y haproxy
haproxy -v
Enable and Verify the Service
sudo systemctl enable haproxy
sudo systemctl start haproxy
sudo systemctl status haproxy
Basic HAProxy Configuration (HTTP)
HAProxy’s main config file is at /etc/haproxy/haproxy.cfg. Always validate changes before reloading. Below is a minimal yet production-ready template.
global
    log /dev/log local0
    log /dev/log local1 notice
    user haproxy
    group haproxy
    daemon
    maxconn 50000
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    retries 3
    option redispatch

frontend fe_http
    bind *:80
    mode http
    option forwardfor
    http-request set-header X-Forwarded-Proto http
    default_backend be_app

backend be_app
    mode http
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    cookie SRV insert indirect nocache
    server app1 10.0.0.11:8080 check cookie s1
    server app2 10.0.0.12:8080 check cookie s2
- Health checks: option httpchk and http-check expect status 200 ensure only healthy servers receive traffic.
- Sticky sessions: Enable stickiness with a cookie for stateful apps (e.g., PHP sessions). For stateless apps, you can remove the cookie settings.
- Load balancing: roundrobin is simple; consider leastconn for uneven request durations.
# Validate then reload
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy
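Once the reload succeeds, a quick smoke test through the load balancer confirms the wiring; the exact response depends on what your application serves, so treat the status codes below as the expected happy path:
# Expect the app's normal response (e.g., HTTP 200) via HAProxy on port 80
curl -sI http://127.0.0.1/ | head -n 1
# Repeat a few times; with roundrobin, requests alternate between app1 and app2
for i in 1 2 3 4; do curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/; done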
Enable HTTPS and SSL Termination
Terminating TLS at HAProxy reduces CPU load on your app servers and centralizes certificate management. You can use Let’s Encrypt certificates with HAProxy by concatenating them into a single PEM bundle and reloading via a certbot deploy hook.
Create the TLS PEM Bundle
Place your certificate and key as a single PEM file readable by HAProxy:
sudo mkdir -p /etc/haproxy/certs
sudo cat fullchain.pem privkey.pem | sudo tee /etc/haproxy/certs/app.example.com.pem >/dev/null
sudo chmod 600 /etc/haproxy/certs/app.example.com.pem
sudo chown haproxy:haproxy /etc/haproxy/certs/app.example.com.pem
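Before pointing HAProxy at the bundle, it’s worth confirming it actually contains a valid certificate and the full chain; a quick check:
# Print the subject and validity window of the first certificate in the bundle
sudo openssl x509 -in /etc/haproxy/certs/app.example.com.pem -noout -subject -dates
# A full-chain bundle should contain at least two certificate blocks
sudo grep -c 'BEGIN CERTIFICATE' /etc/haproxy/certs/app.example.com.pem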
HTTPS Frontend with HTTP->HTTPS Redirect
frontend fe_http
    bind *:80
    http-request redirect scheme https code 301 unless { ssl_fc }
    default_backend be_app

frontend fe_https
    bind *:443 ssl crt /etc/haproxy/certs/app.example.com.pem alpn h2,http/1.1
    mode http
    option forwardfor
    http-request set-header X-Forwarded-Proto https
    default_backend be_app
For automatic renewal, use certbot with a --deploy-hook that rebuilds the PEM and reloads HAProxy after each renewal. Note that certbot's standalone authenticator binds port 80 itself, so run the initial issuance before HAProxy is listening on :80 (or stop HAProxy briefly while it runs).
sudo certbot certonly --standalone -d app.example.com --agree-tos -m admin@example.com --non-interactive
# Deploy hook example (script should concatenate PEM and reload HAProxy)
sudo systemctl reload haproxy
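A minimal deploy-hook sketch, assuming certbot's default live directory and the bundle path used above (the script location /usr/local/bin/haproxy-deploy-hook.sh is just an example):
#!/usr/bin/env bash
# Example deploy hook: rebuild the HAProxy PEM bundle and reload after renewal.
set -euo pipefail
DOMAIN="app.example.com"
LIVE="/etc/letsencrypt/live/${DOMAIN}"      # certbot's default live path
PEM="/etc/haproxy/certs/${DOMAIN}.pem"
cat "${LIVE}/fullchain.pem" "${LIVE}/privkey.pem" > "${PEM}"
chmod 600 "${PEM}"
chown haproxy:haproxy "${PEM}"
systemctl reload haproxy
Make the script executable and pass it to certbot with --deploy-hook /usr/local/bin/haproxy-deploy-hook.sh on the certonly command above (or add it to the renewal configuration).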
Host/Path Routing with ACLs (Layer 7)
Use ACLs to route traffic by domain or URL path—ideal for microservices or splitting static/dynamic content.
frontend fe_https
    bind *:443 ssl crt /etc/haproxy/certs/app.example.com.pem
    mode http
    acl host_api hdr(host) -i api.example.com
    acl path_static path_beg /assets/ /static/
    use_backend be_api if host_api
    use_backend be_static if path_static
    default_backend be_app

backend be_api
    balance leastconn
    server api1 10.0.0.21:9000 check
    server api2 10.0.0.22:9000 check

backend be_static
    balance roundrobin
    server cdn1 10.0.0.31:8080 check

backend be_app
    balance roundrobin
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
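You can exercise these ACLs from any client by forcing the Host header; the status codes depend on your backends, so treat this as a quick routing check (add -k if the test hostname is not on the certificate):
# Should be routed to be_api
curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: api.example.com' https://app.example.com/
# Should be routed to be_static (any path under /assets/ works; logo.png is just an example)
curl -s -o /dev/null -w '%{http_code}\n' https://app.example.com/assets/logo.png
# Anything else falls through to be_app
curl -s -o /dev/null -w '%{http_code}\n' https://app.example.com/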
Security Best Practices
- Harden TLS: Use modern ciphers and enable HTTP/2. Consider setting ssl-default-bind-ciphersuites for TLS 1.3.
- Rate limiting: Use stick tables to mitigate abusive clients.
- Least privilege: Run as the haproxy user; avoid world-readable certs.
- PROXY protocol: If terminating on a second HAProxy or a cloud LB, enable PROXY protocol to preserve client IPs.
- Firewall: Restrict management ports; only expose 80/443 publicly.
# Example: Simple rate limit by source IP (100 reqs in 10s)
frontend fe_https
    stick-table type ip size 1m expire 10s store http_req_rate(10s)
    tcp-request connection track-sc0 src
    http-request deny if { sc_http_req_rate(0) gt 100 }
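To watch the limit trip, send more than 100 requests from a single IP inside the 10-second window; requests beyond the threshold should return 403 (assuming the loop completes within the window):
for i in $(seq 1 120); do
  curl -s -o /dev/null -w '%{http_code}\n' https://app.example.com/
done | sort | uniq -c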
Observability: Logs, Stats, and Runtime API
Proper logging and metrics are essential in production. Enable access logs and the built-in stats page or admin socket.
global
    log /dev/log local0
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s

listen stats
    bind *:8404
    stats enable
    stats uri /haproxy?stats
    stats refresh 10s
    stats auth admin:StrongPassword
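Beyond the stats page, the admin socket configured above exposes HAProxy's runtime API; the examples below assume the socat package is installed:
# Process-level information and per-server stats
echo "show info" | sudo socat stdio /run/haproxy/admin.sock
# Fields 1, 2, 18 = proxy, server, status (column positions can vary by version)
echo "show stat" | sudo socat stdio /run/haproxy/admin.sock | cut -d, -f1,2,18 | column -s, -t
# Drain a backend server for maintenance, then bring it back
echo "set server be_app/app1 state maint" | sudo socat stdio /run/haproxy/admin.sock
echo "set server be_app/app1 state ready" | sudo socat stdio /run/haproxy/admin.sock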
Most distros use rsyslog. Ensure HAProxy logs are captured:
# /etc/rsyslog.d/49-haproxy.conf
$template HaproxyFormat,"%timestamp% %hostname% haproxy[%procid%]: %msg%\n"
if ($programname == "haproxy") then /var/log/haproxy.log;HaproxyFormat
& stop
sudo systemctl restart rsyslog
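A quick way to confirm the routing works end to end, since logger lets you set the program name rsyslog matches on:
# Emit a test message tagged as haproxy, then confirm it lands in the dedicated file
logger -t haproxy "rsyslog routing test"
sudo tail -n 5 /var/log/haproxy.log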
Performance Tuning Tips
- Max connections: Set global maxconn based on RAM and workload; increase ulimit -n if needed.
- Timeouts: Keep timeouts realistic (e.g., 30–60s for clients; 30s for servers) to prevent socket exhaustion.
- HTTP keep-alive: Enabled by default; ensures fewer TCP handshakes.
- Compression: Offload compression to app/CDN; if enabling in HAProxy, monitor CPU closely.
- Kernel tuning: Consider net.core.somaxconn, net.ipv4.ip_local_port_range, and net.ipv4.tcp_tw_reuse for high connection rates (see the sketch after this list).
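As a sketch of how those sysctls might be applied persistently; the values are illustrative starting points rather than recommendations, so benchmark before adopting them:
# /etc/sysctl.d/90-haproxy.conf (example values only)
sudo tee /etc/sysctl.d/90-haproxy.conf >/dev/null <<'EOF'
net.core.somaxconn = 4096
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1
EOF
sudo sysctl --system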
Common Issues and Troubleshooting
- Service won’t start: Run haproxy -c -f /etc/haproxy/haproxy.cfg to validate syntax before reloads (see the commands after this list).
- SSL errors: Ensure the PEM includes the full chain plus the private key, with permissions readable by the haproxy user.
- Missing client IPs: Use option forwardfor in HTTP mode; enable PROXY protocol end-to-end if behind another LB.
- Health checks failing: Verify the backend /health endpoint returns 200 quickly; check firewalls and SELinux labeling on RHEL-based systems.
- High latency: Check timeouts, server saturation, DNS issues, or Layer 7 inspection rules (ACLs) that are too heavy.
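A few commands that cover most of the checks above (the SELinux boolean applies to RHEL-family systems; the backend address comes from the earlier examples):
# Validate the config and review recent service logs
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo journalctl -u haproxy --no-pager -n 50
# Hit a backend health endpoint directly from the HAProxy host
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.0.11:8080/health
# RHEL-based systems: let HAProxy connect to arbitrary backend ports under SELinux
sudo setsebool -P haproxy_connect_any 1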
Real-World Use Cases
- Blue/Green deployments: Weight backends or switch ACLs to shift traffic gradually during releases.
- Microservices gateway: Route by host/path to different services, apply rate limits per route.
- SSL offload and WAF chain: Terminate TLS in HAProxy, then forward to a WAF or app servers.
- Multi-region failover: Use DNS health checks and multiple HAProxy nodes with VRRP/keepalived or cloud load balancers.
Deploy HAProxy on YouStable Infrastructure
For high-availability architectures, pair HAProxy with reliable compute. YouStable’s SSD-backed VPS and dedicated servers deliver low latency, generous bandwidth, and optional managed support. Spin up multiple backend instances, add a public HAProxy node, and let our team help you harden TLS, tune performance, and monitor uptime—without overspending.
Full Example: Production-Ready haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    user haproxy
    group haproxy
    daemon
    maxconn 50000
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5s
    timeout client 60s
    timeout server 60s
    retries 3

# Public HTTP (redirect to HTTPS)
frontend fe_http
    bind *:80
    http-request redirect scheme https code 301 unless { ssl_fc }
    default_backend be_app

# Public HTTPS
frontend fe_https
    bind *:443 ssl crt /etc/haproxy/certs/app.example.com.pem alpn h2,http/1.1
    option forwardfor
    http-request set-header X-Forwarded-Proto https

    # Basic rate limiting (100 req/10s/IP)
    stick-table type ip size 1m expire 10s store http_req_rate(10s)
    tcp-request connection track-sc0 src
    http-request deny if { sc_http_req_rate(0) gt 100 }

    # Host/Path routing examples
    acl host_api hdr(host) -i api.example.com
    acl path_static path_beg /assets/ /static/
    use_backend be_api if host_api
    use_backend be_static if path_static
    default_backend be_app

backend be_app
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    cookie SRV insert indirect nocache
    server app1 10.0.0.11:8080 check cookie s1
    server app2 10.0.0.12:8080 check cookie s2

backend be_api
    balance leastconn
    option httpchk GET /health
    http-check expect status 200
    server api1 10.0.0.21:9000 check
    server api2 10.0.0.22:9000 check

backend be_static
    balance roundrobin
    server cdn1 10.0.0.31:8080 check

# Stats UI
listen stats
    bind *:8404
    stats enable
    stats uri /haproxy?stats
    stats refresh 10s
    stats auth admin:StrongPassword
How to Use HAProxy on Linux Server: Step-by-Step
- Install HAProxy via your distro package manager.
- Point DNS to your HAProxy server’s public IP.
- Create /etc/haproxy/haproxy.cfg with frontends/backends and health checks.
- Add HTTPS with a PEM certificate; redirect HTTP to HTTPS.
- Enable logs and stats; secure stats with auth and firewall rules.
- Validate config: haproxy -c -f /etc/haproxy/haproxy.cfg.
- Reload: systemctl reload haproxy.
- Monitor health, tune timeouts/maxconn, and iterate safely.
FAQs
Is HAProxy Layer 4 or Layer 7?
HAProxy supports both. Layer 4 (TCP) is faster and protocol-agnostic; Layer 7 (HTTP) enables advanced features like host/path routing, header rewrites, and content-based decisions. Choose L4 for raw speed and L7 for smart routing.
How do I preserve the real client IP?
In HTTP mode, enable option forwardfor and have your app read the X-Forwarded-For header. If there’s another load balancer in front, use the PROXY protocol end-to-end or terminate TLS at HAProxy and pass XFF downstream.
What’s the best load-balancing algorithm?
It depends on workload. roundrobin is simplest. leastconn works better when requests have variable duration. For sticky apps, use cookies or source IP. Always measure with real traffic.
How do I set up Let’s Encrypt with HAProxy?
Obtain certs with certbot (standalone or webroot), concatenate fullchain and privkey into a single PEM, place it in /etc/haproxy/certs/, and reload HAProxy. Use a deploy hook to rebuild PEMs and reload automatically on renewal.
Can HAProxy replace NGINX?
Yes for many use cases, especially load balancing and reverse proxying. HAProxy excels at performance and L7 routing. If you rely heavily on NGINX-specific features (e.g., complex rewrites or static file offload), you may run both—HAProxy in front and NGINX behind.