To optimize Nginx on a Linux server, tune worker processes and connections to match CPU and traffic, enable efficient I/O (epoll, sendfile), compress and cache responses, optimize TLS/HTTP protocols, and align OS limits (ulimits, sysctl) with Nginx demands. Validate configuration, benchmark changes, and monitor metrics to ensure real performance gains and stability.
In this guide, you’ll learn exactly how to optimize Nginx on a Linux server—from core Nginx configuration to Linux kernel tuning, caching, TLS, and monitoring. Written for beginners and intermediate admins, this step-by-step playbook reflects 12+ years of real-world hosting experience at YouStable, helping you squeeze maximum performance, security, and efficiency from Nginx.
What “Optimization” Means for Nginx
Optimization is not a single tweak; it’s a stack-wide alignment between Nginx, your application (PHP-FPM, Node.js, Python), and the operating system. The goal is to handle more requests per second with lower latency and resource usage—without sacrificing stability, security, or maintainability.
Pre‑Flight: Confirm Your Baseline
- Update to a supported Nginx build (prefer mainline for features and fixes).
- Know your workload: static files, reverse proxy, PHP-FPM, Node.js, or mixed.
- Check CPU cores, RAM, disk type (SSD/NVMe), and network capacity.
- Audit current limits: open files, somaxconn, TIME_WAIT behavior, kernel parameters.
nginx -V
nginx -t
lscpu
ulimit -n
sysctl net.core.somaxconn net.ipv4.ip_local_port_range
Core Nginx Performance Tuning
1) Worker Processes and Connections
Set workers to match physical CPU cores, and allow enough connections per worker to meet peak concurrency. Because Nginx is event-driven, each worker can service thousands of connections; running more workers than cores usually adds contention rather than throughput.
# /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
worker_rlimit_nofile 200000;
events {
use epoll;
worker_connections 65535;
multi_accept on;
# accept_mutex off; # Off for high connection churn, test both
}
Tip: Ensure system open file limits and systemd unit limits are higher than Nginx’s needs.
# /etc/systemd/system/nginx.service.d/limits.conf
[Service]
LimitNOFILE=200000
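As a sanity check, the theoretical connection ceiling is simply workers × worker_connections. A quick shell sketch (worker_connections mirrors the config above; adjust for your host):

```shell
# Theoretical connection ceiling = worker processes x worker_connections.
cores=$(nproc)                      # what worker_processes auto resolves to
worker_connections=65535
ceiling=$((cores * worker_connections))
echo "workers=${cores} ceiling=${ceiling} connections"
# A proxied request can hold two fds (client + upstream), so keep
# worker_rlimit_nofile and LimitNOFILE well above worker_connections.
```

If the ceiling exceeds your file limits, raise the limits first—otherwise workers hit "too many open files" long before the configured concurrency.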
2) Buffers, Keep‑Alive, and Timeouts
Right-size buffers and sensible timeouts prevent memory bloat and hanging connections.
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
keepalive_requests 1000;
client_header_buffer_size 1k;
large_client_header_buffers 2 8k;
client_body_buffer_size 16k;
client_max_body_size 32m;
# Timeouts
client_body_timeout 10s;
client_header_timeout 10s;
send_timeout 10s;
}
3) Gzip or Brotli Compression
Compressing text assets saves bandwidth and speeds delivery. Gzip is widely supported; Brotli can yield better compression for static assets.
# Gzip (built-in)
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types text/plain text/css application/javascript application/json application/xml image/svg+xml;
gzip_vary on;
For Brotli, install the module and enable it as a drop-in alternative or alongside gzip for clients that support it.
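A minimal sketch of the ngx_brotli directives, assuming the module is installed and loaded (directive names come from the ngx_brotli project; values here are illustrative, not prescriptive):

```nginx
# Assumes ngx_brotli is built and loaded, e.g.:
# load_module modules/ngx_http_brotli_filter_module.so;
brotli on;
brotli_comp_level 5;          # 4-6 is a reasonable range for dynamic compression
brotli_min_length 256;
brotli_types text/plain text/css application/javascript application/json application/xml image/svg+xml;
brotli_static on;             # serve pre-compressed .br files when present
```

Clients that advertise Brotli in Accept-Encoding get Brotli; everyone else falls back to gzip.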
4) TLS and HTTP/2 (and HTTP/3 if available)
Modern protocols reduce latency. Use HTTP/2 for multiplexing and enable strong, hardware-accelerated ciphers. If your build supports QUIC/HTTP/3, test carefully before production rollout.
server {
listen 443 ssl http2; # for HTTP/3, add a separate "listen 443 quic reuseport;" line
server_name example.com;
ssl_certificate /etc/ssl/certs/fullchain.pem;
ssl_certificate_key /etc/ssl/private/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305; # TLS 1.3 suites are negotiated automatically and are not set via ssl_ciphers
ssl_prefer_server_ciphers off;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m; # ~200k sessions (about 4000 per MB)
ssl_session_tickets off;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
}
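Session-cache capacity follows from the Nginx guidance of roughly 4000 sessions per megabyte of shared memory, so the sizing is simple arithmetic:

```shell
# ssl_session_cache sizing: ~4000 sessions per MB of shared memory (nginx docs),
# so shared:SSL:50m stores roughly 200,000 sessions.
mb=50
sessions=$((mb * 4000))
echo "${sessions} sessions in ${mb} MB"
```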
5) Static File Delivery and Caching Headers
Serve static content directly from Nginx with caching headers to minimize revalidation and origin load.
location ~* \.(?:css|js|jpg|jpeg|gif|png|svg|ico|webp|woff2?)$ {
access_log off;
log_not_found off;
expires 30d;
add_header Cache-Control "public, max-age=2592000, immutable";
}
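Keep expires and max-age in agreement; the 2592000 above is just 30 days expressed in seconds:

```shell
# Cache-Control max-age must match expires 30d: 30 days * 24 h * 3600 s.
echo $((30 * 24 * 3600))
```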
Optimizing Nginx as a Reverse Proxy
6) Upstream Keep‑Alives and Buffers
Keep upstream connections warm to reduce handshake overhead. Tune buffers to stabilize response streaming without hoarding memory.
upstream api_backend {
server 127.0.0.1:3000 max_fails=3 fail_timeout=10s;
keepalive 64;
}
location /api/ {
proxy_pass http://api_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffers 16 32k;
proxy_busy_buffers_size 64k;
proxy_read_timeout 30s;
}
7) PHP‑FPM and FastCGI Tuning
Most WordPress/PHP sites bottleneck at PHP-FPM. Use persistent sockets and microcaching to reduce dynamic overhead.
upstream php {
server unix:/run/php/php-fpm.sock;
keepalive 32;
}
fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2 keys_zone=PHP:100m inactive=60m max_size=2g;
map $request_method $skip_cache { default 0; POST 1; }
location ~ \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass php; # use the keepalive upstream defined above
fastcgi_keep_conn on;
fastcgi_read_timeout 30s;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
# Microcache
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
fastcgi_cache PHP;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_valid 200 301 302 1m;
add_header X-Cache $upstream_cache_status;
}
Exclude personalized paths or logged-in sessions from caching to avoid serving private content to others.
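A hedged sketch of such exclusions, combined with the $skip_cache map above—the cookie and path names are WordPress-style assumptions, so rename them for your application:

```nginx
# Assumption: WordPress-style names; adjust cookie/path patterns to your app.
map $http_cookie $skip_cache_cookie {
    default               0;
    ~wordpress_logged_in  1;   # logged-in users get uncached responses
}
map $request_uri $skip_cache_uri {
    default        0;
    ~^/wp-admin/   1;
    ~^/cart        1;
}
# In the PHP location, pass all flags (any non-empty, non-zero value bypasses):
# fastcgi_cache_bypass $skip_cache $skip_cache_cookie $skip_cache_uri;
# fastcgi_no_cache     $skip_cache $skip_cache_cookie $skip_cache_uri;
```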
8) Proxy/Microcaching for APIs
Short TTL microcaching smooths traffic spikes for cacheable endpoints without staleness risks.
proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=API:100m inactive=30m max_size=1g;
location /v1/ {
proxy_pass http://api_backend; # upstream defined in the previous section
proxy_cache API;
proxy_cache_valid 200 10s;
proxy_cache_use_stale error timeout updating http_502 http_503 http_504;
add_header X-Cache $upstream_cache_status;
}
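The smoothing effect is easy to quantify: with a 10-second TTL, each cacheable URL reaches the origin at most once per TTL window, regardless of how fast clients are requesting it:

```shell
# Microcache math: origin load per URL is bounded by 1 request per TTL.
ttl=10
origin_per_min=$((60 / ttl))
echo "client rate irrelevant: at most ${origin_per_min} origin hits/min per URL"
```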
Linux Kernel and OS-Level Tuning
Your Linux settings must match Nginx’s concurrency and connection patterns. These sysctl tunings are safe starting points—benchmark and adjust for your workload.
# /etc/sysctl.d/99-nginx.conf
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 16384
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_syncookies = 1
fs.file-max = 500000
sysctl --system
echo "* soft nofile 200000" >> /etc/security/limits.conf
echo "* hard nofile 200000" >> /etc/security/limits.conf
# Note: pam_limits applies to login sessions; systemd services use LimitNOFILE
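After running sysctl --system, these read-only checks confirm the values took effect (Linux /proc paths; no root required):

```shell
# Read back kernel settings directly from /proc (works without the sysctl tool).
cat /proc/sys/net/core/somaxconn
cat /proc/sys/fs/file-max
ulimit -n          # effective open-file limit for the current shell
```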
On busy multi-core systems, test reuseport and IRQ affinity for network interrupts. Always roll out incrementally.
Logging, Observability, and Benchmarking
9) Access Logs and Buffering
Logging every static request is expensive. Disable for assets and buffer writes for dynamic paths.
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'$request_time $upstream_response_time';
access_log /var/log/nginx/access.log main buffer=128k flush=1s;
10) Metrics and Status
Enable stub_status or an exporter to feed Prometheus/Grafana. Track active connections, request rates, upstream latency, cache hit ratio, and errors.
location /nginx_status {
stub_status;
allow 127.0.0.1;
deny all;
}
11) Benchmark Before and After
Use realistic concurrency and payload sizes. Compare latency percentiles (p50/p95/p99), not just averages.
# Example tools
wrk -t4 -c400 -d30s --latency https://example.com/
ab -n 20000 -c 200 https://example.com/
Security Hardening While Optimizing
- Hide version: server_tokens off;
- HSTS, X-Content-Type-Options, X-Frame-Options, Referrer-Policy, and robust Content-Security-Policy.
- Limit request rates and connections to mitigate abuse.
server_tokens off;
limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
location / {
limit_req zone=req_per_ip burst=20 nodelay;
limit_conn conn_per_ip 20;
}
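The burst arithmetic is worth internalizing: rate=10r/s with burst=20 nodelay admits 20 requests instantly, then 10 per second; anything beyond is rejected (503 by default):

```shell
# limit_req capacity over a window: burst + rate * seconds.
rate=10; burst=20; window=5
echo "$((burst + rate * window)) requests allowed in ${window}s"
```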
Common Pitfalls to Avoid
- Over-allocating worker_connections without raising OS file limits.
- Leaving default timeouts too high, causing slowloris-style resource retention.
- Enabling aggressive caching for user-specific pages (e.g., WordPress admin, cart).
- Relying on HTTP/2 server push, which is deprecated and removed from major browsers; use preload hints (Link headers) instead.
- Ignoring upstream bottlenecks (DB, PHP-FPM) and blaming Nginx.
Step‑By‑Step Example Configuration
Below is a safe, high-performance baseline you can adapt. Always validate with nginx -t and stage before production.
user nginx;
worker_processes auto;
worker_rlimit_nofile 200000;
events {
use epoll;
worker_connections 65535;
multi_accept on;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 15;
keepalive_requests 1000;
# Compression
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types text/plain text/css application/json application/javascript application/xml image/svg+xml;
gzip_vary on;
# Logging
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'$request_time $upstream_response_time';
access_log /var/log/nginx/access.log main buffer=128k flush=1s;
error_log /var/log/nginx/error.log warn;
# Caches
proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=API:100m inactive=30m max_size=1g;
fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2 keys_zone=PHP:100m inactive=60m max_size=2g;
# Status
server {
listen 127.0.0.1:8080;
location /nginx_status { stub_status; allow 127.0.0.1; deny all; }
}
# Site
server {
listen 80;
listen 443 ssl http2;
server_name example.com;
root /var/www/html;
index index.php index.html;
ssl_certificate /etc/ssl/certs/fullchain.pem;
ssl_certificate_key /etc/ssl/private/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_session_cache shared:SSL:50m;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
location ~* \.(?:css|js|jpg|jpeg|gif|png|svg|ico|webp|woff2?)$ {
access_log off;
expires 30d;
add_header Cache-Control "public, max-age=2592000, immutable";
}
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/run/php/php-fpm.sock;
fastcgi_read_timeout 30s;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
fastcgi_cache PHP;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_valid 200 301 302 1m;
add_header X-Cache $upstream_cache_status;
}
}
}
Validation and Rollout Checklist
- nginx -t passes; config is under version control.
- System limits (nofile, somaxconn) exceed Nginx demands.
- Benchmark before and after; monitor p95 latency, RPS, CPU, memory.
- Cache rules exclude personalized or admin endpoints.
- TLS score tested; HTTP/2 enabled; Brotli/Gzip verified via response headers.
- Access logs buffered; asset logs disabled.
- Rollback plan ready; changes deployed during low traffic.
When to Choose Managed Hosting
If you prefer results without the trial-and-error, consider a managed VPS or dedicated server. At YouStable, our engineers pre-tune Nginx, PHP-FPM, and the Linux kernel, set up caching and observability, and continuously optimize for your workload—so you get peak performance with less risk and downtime.
By aligning Nginx configuration, caching, TLS, and Linux kernel parameters with your actual workload—and by validating every change—you can unlock significant gains in throughput and user experience. If you want experts to handle this end-to-end, YouStable’s managed servers are built for speed from day one.
FAQs: How to Optimize Nginx on Linux Server
What is the best worker_processes value for Nginx?
Use worker_processes auto; to match your CPU cores. This lets Nginx fully utilize multi-core systems without manual guesswork. Always pair it with sufficient worker_connections and OS file limits.
How do I speed up WordPress with Nginx?
Enable FastCGI microcaching for anonymous traffic, serve static assets with long cache headers, compress with Gzip/Brotli, and ensure PHP-FPM has enough workers. Offload heavy tasks to a CDN. YouStable’s managed WordPress VPS ships with these optimizations by default.
Should I enable HTTP/3 for Nginx?
HTTP/3 can improve performance on high-latency or mobile networks. If your Nginx build supports QUIC/HTTP/3, test it behind a feature flag. Measure real impact versus HTTP/2 before broad rollout.
Is Brotli better than Gzip for Nginx?
Brotli typically compresses text assets smaller than Gzip, improving load times. However, it’s heavier CPU-wise. It’s great for static assets, while Gzip remains a reliable baseline for broad compatibility.
How do I know if my changes worked?
Benchmark with wrk or ab before and after, monitor p95/p99 latency, error rates, CPU, memory, and cache hit ratios. Use Nginx stub_status or a Prometheus exporter. Only keep changes that show measurable improvements under realistic load.