To monitor and secure a load balancer on a Linux server, instrument metrics and logs, set health checks and alerts, harden the OS and TLS, restrict network access, and add WAF, rate limiting, and DDoS protections. Use HAProxy or Nginx metrics with Prometheus and Grafana, automate patches, and enforce least-privilege access with audit trails.
Managing high-availability traffic requires two things: visibility and defense. In this guide, you’ll learn how to monitor and secure a load balancer on a Linux server using proven, production-grade practices. We’ll cover HAProxy and Nginx monitoring, alerting with Prometheus and Grafana, OS hardening, TLS security, WAF/rate limits, DDoS mitigation, and high-availability (HA) design patterns.
What Is a Load Balancer on Linux?
A load balancer distributes incoming traffic across multiple backend servers to improve performance, reliability, and uptime. On Linux, common choices include HAProxy (L4/L7), Nginx/Envoy (L7), and IPVS with Keepalived (L4). Regardless of the stack, you must monitor health and secure every layer: network, TLS, application, and the underlying OS.
Monitoring Fundamentals: What to Watch and Why It Matters
Key Metrics for Load Balancers
- Availability: Up/Down of frontends, backends, and servers
- Latency: Response time (P50/P95/P99), queue time, handshake time (TLS)
- Throughput: Requests per second (RPS), connections per second (CPS), bandwidth
- Errors: 4xx/5xx rates, retries, redispatches, timeouts, circuit opens
- Capacity: Concurrent sessions, queue depth, connection limits, CPU/RAM
- TLS Health: Handshake failures, protocol/cipher usage, certificate expiry
Logs to Collect
- Access logs: Request method, path, status, bytes, timing breakdowns
- Error logs: Timeouts, connection resets, backend failures
- Security logs: WAF events, rate limiting triggers, blocked IPs
- System logs: Kernel messages, SSH access, sudo actions (auditd/journald)
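If you ship these logs to Grafana Loki, a minimal Promtail scrape config is one option. This is a sketch only, assuming HAProxy writes to /var/log/haproxy.log via the distribution's rsyslog snippet and that Promtail is already pointed at your Loki instance.
# promtail config fragment: tail HAProxy logs and ship them to Loki
scrape_configs:
  - job_name: haproxy
    static_configs:
      - targets: [localhost]
        labels:
          job: haproxy
          __path__: /var/log/haproxy.log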
Health Checks and Alerting Strategy
- Define SLOs: e.g., 99.9% availability, P95 < 300ms, error rate < 1%
- Set alerts: High 5xx rate, latency spikes, backend down, queue depth growth, TLS cert expiring
- Use multi-source probes: Internal health checks + external synthetic monitoring
Setting Up a Linux Monitoring Stack (HAProxy/Nginx + Prometheus + Grafana)
Install and Expose Metrics
Below is a minimal HAProxy setup on Ubuntu that exposes a Prometheus exporter and a stats page, plus basic TLS.
# 1) Install packages
sudo apt update && sudo apt install -y haproxy prometheus-node-exporter
# Optional: HAProxy Prometheus exporter
# On Debian/Ubuntu, you can use container or binary:
# Example with Docker:
docker run -d --net=host --name haproxy_exporter prom/haproxy-exporter \
--haproxy.scrape-uri="http://127.0.0.1:8404/;csv"
# 2) HAProxy config (e.g., /etc/haproxy/haproxy.cfg)
global
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 10000
    tune.ssl.default-dh-param 2048
    # Harden TLS
    ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    retries 3

frontend fe_https
    bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
    http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    http-response set-header X-Content-Type-Options "nosniff"
    http-response set-header X-Frame-Options "DENY"
    # Rate limit by IP using stick tables
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    acl abuse sc_http_req_rate(0) gt 100
    http-request deny if abuse
    default_backend be_app

backend be_app
    balance roundrobin
    server app1 10.0.1.10:8080 check
    server app2 10.0.1.11:8080 check

listen stats
    bind 127.0.0.1:8404
    stats enable
    stats uri /
    stats refresh 5s
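# Note: if your HAProxy build includes the built-in Prometheus service (2.0+),
# you can skip the separate exporter container and expose /metrics from the
# stats listener instead, e.g. by adding:
#   http-request use-service prometheus-exporter if { path /metrics }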
# 3) Restart
sudo systemctl enable haproxy --now
# 4) Prometheus: add job to scrape exporter and node_exporter
# /etc/prometheus/prometheus.yml
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['<LB_IP>:9100']
  - job_name: 'haproxy'
    static_configs:
      - targets: ['<LB_IP>:9101']   # if exporter mapped to 9101
# 5) Grafana: import HAProxy and Node dashboards from grafana.com
For Nginx, enable the stub_status module and use the nginx-prometheus-exporter, or deploy the VTS module for richer metrics. Envoy exposes /stats and supports native Prometheus scraping.
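Here is a minimal sketch of the Nginx variant, assuming the status endpoint stays on loopback port 8081 and the exporter runs on the same host (it listens on port 9113 by default):
# /etc/nginx/conf.d/status.conf -- expose stub_status on loopback only
server {
    listen 127.0.0.1:8081;
    location /stub_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
# Run the exporter (Docker example), pointing it at the status endpoint
docker run -d --net=host --name nginx_exporter nginx/nginx-prometheus-exporter \
  --nginx.scrape-uri=http://127.0.0.1:8081/stub_status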
Sample Prometheus Alerts
groups:
  - name: haproxy-lb
    rules:
      - alert: High5xxRate
        expr: |
          sum(rate(haproxy_frontend_http_responses_total{code="5xx"}[5m]))
            / sum(rate(haproxy_frontend_http_requests_total[5m])) > 0.02
        for: 10m
        labels: { severity: "page" }
        annotations:
          summary: "5xx error rate > 2% on LB"
      - alert: BackendDown
        # haproxy_exporter reports server health as haproxy_server_up (1 = up, 0 = down)
        expr: haproxy_server_up == 0
        for: 2m
        labels: { severity: "page" }
        annotations:
          summary: "Backend server is down"
      - alert: TLSCertExpiring
        # requires a blackbox_exporter probe of the HTTPS endpoint (sketch below)
        expr: (probe_ssl_earliest_cert_expiry - time()) < 86400 * 7
        for: 5m
        labels: { severity: "ticket" }
        annotations:
          summary: "TLS certificate expires in < 7 days"
Security Hardening: Best Practices for Linux Load Balancers
Restrict the Network Surface
- Permit only required ports (80, 443). Limit SSH (22) to trusted IPs or a bastion.
- Separate data plane and management plane networks where possible.
- Enable reverse path filtering and disable redirects.
# UFW example
sudo ufw default deny incoming
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow from <YOUR_IP> to any port 22 proto tcp
sudo ufw enable
# firewalld example
sudo firewall-cmd --set-default-zone=drop
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="<YOUR_IP>" port port="22" protocol="tcp" accept'
sudo firewall-cmd --reload
TLS/HTTPS Hardening
- Use TLS 1.2+ (prefer TLS 1.3), HSTS, OCSP stapling, and modern cipher suites.
- Automate certificates with Let’s Encrypt (certbot) and monitor expiry.
- Enable HTTP/2 and use session resumption to lower CPU overhead.
# Example: obtain and auto-renew Let's Encrypt cert for Nginx (similar for HAProxy with certbot + hook)
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com --redirect --hsts --staple-ocsp
sudo systemctl list-timers | grep certbot
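For HAProxy, one common pattern is a certbot deploy hook that rebuilds the combined PEM file it expects. A sketch, assuming the certificate name example.com and the /etc/haproxy/certs/site.pem path used in the config above:
# /etc/letsencrypt/renewal-hooks/deploy/haproxy-pem.sh  (make it executable)
#!/bin/bash
set -e
cat /etc/letsencrypt/live/example.com/fullchain.pem \
    /etc/letsencrypt/live/example.com/privkey.pem \
    > /etc/haproxy/certs/site.pem
systemctl reload haproxy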
Access Control and Auditing
- Disable root SSH and password logins; use SSH keys and sudo with least privilege.
- Enable MFA on bastions and centralize secrets (e.g., Vault or cloud KMS).
- Use auditd/journald and ship logs to a SIEM; alert on privilege escalations.
# Harden SSH
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload ssh
# Enable Fail2ban for SSH
sudo apt install -y fail2ban
sudo systemctl enable fail2ban --now
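For the audit trail, a minimal auditd rule set is a good start. This is a sketch; the key names are arbitrary, and you will likely want a fuller rule set from your compliance baseline.
# Install auditd and load a small privilege-escalation rule set
sudo apt install -y auditd
sudo tee /etc/audit/rules.d/priv-esc.rules >/dev/null <<'EOF'
-w /etc/sudoers -p wa -k sudoers_change
-w /etc/sudoers.d/ -p wa -k sudoers_change
-w /usr/bin/sudo -p x -k priv_exec
EOF
sudo augenrules --load
# Review recent privileged executions
sudo ausearch -k priv_exec --start today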
WAF, Bot Control, and Rate Limiting
- Nginx + ModSecurity with the OWASP CRS to block common OWASP Top 10 attacks.
- HAProxy stick tables for request and connection rate limiting by IP or path.
- Challenge suspicious clients and deny abusive IPs with Fail2ban automation.
# Nginx basic rate limiting
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
    server {
        location /login {
            limit_req zone=one burst=20 nodelay;
            proxy_pass http://app;   # assumes an upstream "app" block is defined
        }
    }
}
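To turn repeated limiter hits into bans, Fail2ban ships an nginx-limit-req filter. A sketch of a jail, assuming rate-limit rejections are written to the default Nginx error log:
# /etc/fail2ban/jail.local (append) -- ban IPs that keep tripping the Nginx limiter
[nginx-limit-req]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/error.log
maxretry = 10
findtime = 60
bantime  = 3600
# Apply the new jail
sudo fail2ban-client reload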
DDoS Resilience
- Enable SYN cookies and increase SYN backlog; tune connection tracking.
- Use anycast/CDN or upstream scrubbing for large volumetric attacks.
- Rate-limit new connections, cap per-IP concurrency, and drop invalid packets early.
# /etc/sysctl.d/99-lb-tuning.conf
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 16384
# Apply the settings
sudo sysctl --system
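To drop invalid packets early and cap per-IP concurrency at the edge, here is an iptables sketch. The numbers are illustrative; adapt them (or translate to nftables) to match your traffic profile and firewall front end.
# Drop packets that don't belong to any tracked connection
sudo iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
# Cap concurrent TCP connections per source IP to port 443
sudo iptables -A INPUT -p tcp --syn --dport 443 -m connlimit \
  --connlimit-above 100 --connlimit-mask 32 -j REJECT --reject-with tcp-reset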
High Availability and Zero-Downtime Techniques
Active/Passive with Keepalived (VRRP)
Use two Linux nodes with a virtual IP (VIP). Keepalived monitors HAProxy/Nginx and fails the VIP over if the primary is unhealthy. Health-check scripts ensure that failover only happens when the data plane is truly impacted.
# /etc/keepalived/keepalived.conf (primary)
vrrp_script chk_haproxy {
    script "pidof haproxy"
    interval 2
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 42secret
    }
    virtual_ipaddress {
        10.0.1.100/24
    }
    track_script {
        chk_haproxy
    }
}
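To verify failover actually works, a quick test sketch (it assumes the VIP 10.0.1.100 and interface eth0 from the config above):
# On the primary: stop HAProxy so the check script fails and the VIP moves
sudo systemctl stop haproxy
# On the standby: the VIP should now be present
ip addr show eth0 | grep 10.0.1.100
# Restore the primary; with priority 150 it should reclaim the VIP
sudo systemctl start haproxy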
Graceful Reloads, Draining, and Blue/Green
- Use HAProxy or Nginx graceful reloads to avoid dropping connections.
- Drain backends before deploys to protect user sessions.
- Blue/green and canary routing reduce risk across rolling changes.
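One way to drain a backend before a deploy is the HAProxy runtime API. A sketch, assuming you add a stats socket to the global section (e.g. "stats socket /run/haproxy/admin.sock mode 660 level admin") and have socat installed:
# Take app1 out of rotation gracefully, deploy, then return it to service
echo "set server be_app/app1 state drain" | sudo socat stdio /run/haproxy/admin.sock
echo "show servers state be_app" | sudo socat stdio /run/haproxy/admin.sock
echo "set server be_app/app1 state ready" | sudo socat stdio /run/haproxy/admin.sock
# Reload HAProxy without dropping established connections
sudo systemctl reload haproxy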
Incident Response and Troubleshooting
Common Symptoms and Fast Checks
- High 5xx: Check backend health, timeouts, and saturation (CPU, DB).
- Latency spikes: Inspect queue depth, TLS handshakes, GC pauses on apps.
- Connection resets: Look for MTU issues, firewall drops, or keepalive mismatches.
- Intermittent 4xx: Verify WAF rules and rate limits aren’t overly strict.
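A few fast checks from the load balancer itself (a sketch, using the admin socket assumed in the draining example above):
# Kernel socket summary: watch SYN backlog and TIME_WAIT growth
ss -s
# Per-frontend/backend status, queue depth, and rates from the runtime API
echo "show stat" | sudo socat stdio /run/haproxy/admin.sock | column -s, -t | less -S
# Recent HAProxy errors from the journal
sudo journalctl -u haproxy --since "15 min ago" | tail -n 50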
Runbooks and Automation
- Create runbooks for failover, certificate renewal, and rollbacks.
- Use Ansible/Terraform to version configurations and ensure consistency.
- Automate backups of configs, dashboards, and alerts; test restoration regularly.
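As one example of versioned, validated configuration management, here is a hypothetical Ansible task that refuses to install a broken haproxy.cfg (the template name and handler are assumptions for illustration):
# Hypothetical playbook task: template the config, validate before install, reload on change
- name: Deploy HAProxy configuration
  ansible.builtin.template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
    validate: haproxy -c -f %s
  notify: Reload haproxy        # assumes a matching handler exists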
Quick Start: Secure HAProxy on Ubuntu (Step-by-Step)
- Install: apt install haproxy fail2ban ufw prometheus-node-exporter
- Configure HAProxy: enable TLS, stats page, rate limiting, health checks.
- Harden OS: apply sysctl, SSH hardening, and firewall rules.
- Observability: deploy Prometheus, HAProxy exporter, and Grafana dashboards.
- HA: add Keepalived for VRRP and test failover.
# Verify HAProxy syntax and reload safely
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy
# Log rotation for HAProxy (Debian defaults exist; verify)
cat /etc/logrotate.d/haproxy
# Test endpoints
curl -Ik https://example.com
curl -s http://127.0.0.1:8404/ | head
When to Use Managed Load Balancers
If you lack a 24/7 SRE team or face frequent DDoS, consider a managed load balancer or CDN-based proxy. Managed options provide built-in metrics, auto-scaling, global anycast, and integrated WAF/DDoS protection. YouStable’s managed servers and cloud hosting can deploy HAProxy/Nginx with Prometheus, WAF, and HA configured for you—plus continuous monitoring and patch management.
Best Practices Checklist
- Metrics first: Prometheus + Grafana; alert on 5xx, latency, and cert expiry.
- Harden TLS: TLS 1.2/1.3, HSTS, OCSP stapling, automated renewals.
- Lock down access: firewall, SSH keys, no root logins, auditd, Fail2ban.
- Defend the edge: WAF, rate limits, DDoS guardrails, sysctl tuning.
- Engineer for HA: VRRP/Keepalived, graceful reloads, blue/green deploys.
- Automate: configuration management, backups, and runbooks.
FAQs: How to Monitor & Secure Load Balancer on Linux
What is the best tool to monitor a Linux load balancer?
Prometheus and Grafana are the most flexible. Use node_exporter for system metrics and a protocol-specific exporter (haproxy_exporter or nginx-prometheus-exporter) for LB metrics. Add Alertmanager for paging and a log pipeline (e.g., Loki, ELK) for detailed analysis.
How do I secure HAProxy or Nginx against common web attacks?
Enable TLS 1.2/1.3 with strong ciphers, use HSTS, deploy ModSecurity with OWASP CRS (for Nginx) or integrate a WAF in front. Add request rate limiting, strict timeouts, and a firewall limiting management access. Monitor and patch regularly.
How can I detect if my load balancer is overloaded?
Watch queue depth, concurrent sessions, RPS/CPS trending, CPU, and memory. Rising latency (P95/P99) with stable error rates often indicates saturation. Alert when thresholds breach and scale out backends or raise capacity limits with careful testing.
What’s the difference between L4 and L7 load balancing for security?
L4 operates at TCP/UDP and is fast but blind to HTTP semantics. L7 understands HTTP/HTTPS and supports WAF, header-based routing, and granular rate limiting. Many stacks combine L4 VIPs (Keepalived/IPVS) with L7 proxies (HAProxy/Nginx/Envoy).
Do I need a CDN or managed DDoS in front of my Linux load balancer?
If you face large volumetric attacks or global audiences, yes. A CDN or managed DDoS service absorbs floods upstream, reducing origin pressure and bandwidth costs. For self-hosted setups, combine network filtering, rate limits, and upstream protection for best results.
With the right monitoring, hardening, and HA design, a Linux load balancer can be both fast and resilient. If you want an expert-built stack, YouStable can provision, secure, and monitor your HAProxy/Nginx load balancers with 24/7 support and proactive patching.