To use Docker on a Linux server, install Docker Engine from the official repository, enable and start the service, add your user to the docker group, pull a trusted image, and run a container with volumes & networks configured for persistence and security.
For multi-container apps, define services with Docker Compose and apply best practices. Learning how to use Docker on a Linux server unlocks fast, consistent deployments for apps of any size.
This guide walks you through installation, core commands, Compose, storage, networking, security hardening, and production operations based on real-world hosting experience, so you can confidently run containers on Ubuntu, Debian, RHEL/CentOS, or Fedora.
What is Docker and Why Use It on a Linux Server?
Docker packages applications and their dependencies into lightweight containers that run consistently across environments.
On Linux servers, Docker uses kernel features like namespaces and cgroups to isolate workloads without the overhead of full virtual machines, enabling faster builds, denser workloads, simpler rollbacks, and repeatable CI/CD pipelines.
Prerequisites and System Requirements
- 64-bit Linux with a recent kernel (Ubuntu 22.04+, Debian 11/12, RHEL/CentOS 8/9, Fedora 38+).
- Root or sudo privileges.
- Outbound internet access to pull images (or a private registry mirror).
- Recommended: 2+ CPU, 4GB+ RAM, SSD storage, cgroup v2 enabled.
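A quick way to confirm these prerequisites before installing is a small pre-flight check. The sketch below only reads standard Linux interfaces (`uname`, the filesystem type of `/sys/fs/cgroup`); the thresholds are the recommendations above, not hard requirements.

```shell
#!/bin/sh
# Pre-flight check before installing Docker (sketch).
arch=$(uname -m)      # expect x86_64 or aarch64
kernel=$(uname -r)
fstype=$(stat -fc %T /sys/fs/cgroup 2>/dev/null || echo unknown)

echo "Arch:      $arch"
echo "Kernel:    $kernel"
# cgroup v2 mounts as cgroup2fs on modern distros
if [ "$fstype" = "cgroup2fs" ]; then
  echo "cgroup v2: enabled"
else
  echo "cgroup v2: not detected ($fstype) - v1 still works, but v2 is recommended"
fi
```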
Install Docker Engine on Popular Linux Distributions
Ubuntu / Debian (Official Repository)
# Remove old versions (if any)
sudo apt-get remove -y docker docker-engine docker.io containerd runc
# Prereqs
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
# Keyring and repo
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# For Ubuntu:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
# For Debian, replace ubuntu with debian and $VERSION_CODENAME accordingly.
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Enable and start
sudo systemctl enable --now docker
# Optional: run Docker without sudo
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
RHEL / CentOS / AlmaLinux / Rocky Linux
# RHEL (use rhel repo) or CentOS/Alma/Rocky (use centos repo)
# Example for CentOS/Alma/Rocky:
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# For RHEL, use:
# sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
sudo dnf -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
Fedora
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
If your distro maintains its own “docker.io” package, prefer the official Docker repository for the latest features and security patches, unless your compliance policy requires vendor packages.
Post Install: Configuration and Security Hardening
- Enable at boot: sudo systemctl enable --now docker
- Run without sudo: sudo usermod -aG docker $USER, then newgrp docker.
- Rootless mode (extra isolation): dockerd-rootless-setuptool.sh install
- Firewall: allow only needed ports; the default bridge uses NAT (iptables/nftables).
- SELinux/AppArmor: keep enforcing; avoid --privileged unless absolutely necessary.
- Content trust: export DOCKER_CONTENT_TRUST=1 to verify signed images.
- Daemon options: manage them in /etc/docker/daemon.json (log rotation, cgroup driver, registry mirrors).
# /etc/docker/daemon.json (example)
{
"log-driver": "json-file",
"log-opts": { "max-size": "10m", "max-file": "3" },
"storage-driver": "overlay2",
"live-restore": true
}
# Apply changes
sudo systemctl restart docker
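A syntax error in daemon.json can keep the daemon from starting, so it is worth validating the file before restarting. This sketch checks a local copy of the example above, using python3 as a portable JSON linter; on a real server, point cfg at /etc/docker/daemon.json instead.

```shell
#!/bin/sh
# Validate daemon.json syntax before `systemctl restart docker` (sketch).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "storage-driver": "overlay2",
  "live-restore": true
}
EOF
if python3 -m json.tool "$cfg" >/dev/null 2>&1; then
  json_ok=yes; echo "daemon.json: valid JSON"
else
  json_ok=no;  echo "daemon.json: INVALID JSON - fix before restarting docker" >&2
fi
```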
Docker Basics: Images, Containers, and Registries
- Images: read-only templates (e.g., nginx:alpine).
- Containers: running instances created from images.
- Registries: where images live (Docker Hub, GHCR, a private registry, or a mirror).
# Find and pull images
docker search nginx
docker pull nginx:alpine
# List images and containers
docker images
docker ps -a
# Remove unused resources
docker rm -f CONTAINER_ID
docker rmi IMAGE_ID
docker system prune -f
Run Your First Container
# Start NGINX and publish port 80
docker run -d --name web -p 80:80 nginx:alpine
# View logs and test
docker logs -f web
curl -I http://localhost
# Stop/start/restart
docker stop web
docker start web
# Auto-restart on failure or boot
docker run -d --restart unless-stopped --name web -p 80:80 nginx:alpine
Use --restart unless-stopped on Linux servers so containers survive reboots. Combine with health checks for resilient services.
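One way to combine a restart policy with a container-level health check is the `--health-*` family of `docker run` flags. The probe below uses busybox wget (present in nginx:alpine, which does not ship curl); the command is built as a string and only executed where the Docker CLI is installed.

```shell
#!/bin/sh
# Restart policy plus a health check at `docker run` time (sketch).
run_cmd='docker run -d --name web --restart unless-stopped \
  --health-cmd "wget -q -O /dev/null http://localhost/ || exit 1" \
  --health-interval 10s --health-timeout 3s --health-retries 3 \
  -p 80:80 nginx:alpine'
echo "$run_cmd"
# Execute only where Docker is available:
command -v docker >/dev/null 2>&1 && eval "$run_cmd"
# Check status later with: docker inspect --format '{{.State.Health.Status}}' web
```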
Persisting Data with Volumes and Bind Mounts
Containers are ephemeral. Use volumes for managed storage or bind mounts for direct host paths.
# Named volume (portable, managed by Docker)
docker volume create appdata
docker run -d -v appdata:/var/lib/mysql --name db mysql:8
# Bind mount (full host control)
sudo mkdir -p /srv/nginx/html
docker run -d -p 80:80 -v /srv/nginx/html:/usr/share/nginx/html:ro --name web nginx:alpine
# Inspect volumes
docker volume ls
docker volume inspect appdata
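Named volumes can be backed up by mounting them read-only into a throwaway container and tar-ing the contents to the host. A sketch using the appdata volume from above; busybox is only needed for its tar binary, and the command runs only where Docker is installed.

```shell
#!/bin/sh
# Back up a named volume to a tarball in the current directory (sketch).
backup_cmd='docker run --rm -v appdata:/data:ro -v "$PWD":/backup busybox \
  tar czf /backup/appdata-backup.tgz -C /data .'
echo "$backup_cmd"
command -v docker >/dev/null 2>&1 && eval "$backup_cmd"
# Restore: docker run --rm -v appdata:/data -v "$PWD":/backup busybox \
#   tar xzf /backup/appdata-backup.tgz -C /data
```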
Networking: Bridge, Host, and Custom Networks
- Bridge (default): NATed container IPs; publish ports with -p.
- Host: shares the host network stack (no network isolation); use sparingly.
- Custom bridge: containers discover each other by name on an isolated subnet.
# Custom bridge network
docker network create --driver bridge app-net
# Launch two services on the same network
docker run -d --name db --network app-net postgres:16
docker run -d --name api --network app-net -e DB_HOST=db myorg/api:latest
# No need to expose ports between containers on the same user-defined network
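To confirm that name-based discovery works, resolve one container's name from another container on the same network. This sketch assumes the app-net network and db container from above exist; busybox provides nslookup, and the command runs only where Docker is installed.

```shell
#!/bin/sh
# Verify DNS-based service discovery on a user-defined network (sketch).
check_cmd='docker run --rm --network app-net busybox nslookup db'
echo "$check_cmd"
command -v docker >/dev/null 2>&1 && eval "$check_cmd"
```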
Docker Compose on Linux Server
Compose defines multi‑container applications as code. With the Compose v2 plugin, use docker compose (not the legacy docker-compose binary).
# docker-compose.yml
services:
db:
image: postgres:16
environment:
POSTGRES_PASSWORD: supersecret
volumes:
- dbdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
app:
image: myorg/app:1.0.0
depends_on:
db:
condition: service_healthy
ports:
- "8080:8080"
environment:
DATABASE_HOST: db
deploy:
resources:
limits:
cpus: "1.0"
memory: "512M"
volumes:
dbdata: {}
# Up/Down
docker compose up -d
docker compose ps
docker compose logs -f
docker compose down
Resource Limits, Health Checks, and Restart Policies
- Limit resources: --cpus, --memory, and --memory-swap prevent noisy neighbors.
- Health checks: mark containers unhealthy so tooling (e.g., Compose's service_healthy condition) can gate dependents or restart them.
- Restart policy: --restart unless-stopped for services, or define it in Compose.
# CLI example with limits
docker run -d --name api --cpus="1.5" --memory="512m" --restart unless-stopped myorg/api:latest
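It is worth verifying that limits were actually applied. The inspect fields below are part of the Docker API (NanoCpus is the CPU limit in billionths of a CPU, Memory is in bytes); the command runs only where Docker is installed.

```shell
#!/bin/sh
# Confirm the limits applied to the `api` container from above (sketch).
inspect_cmd="docker inspect --format 'cpus={{.HostConfig.NanoCpus}} mem={{.HostConfig.Memory}}' api"
echo "$inspect_cmd"
command -v docker >/dev/null 2>&1 && eval "$inspect_cmd"
```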
Security Best Practices for Linux Servers Running Docker
- Use minimal base images (alpine, distroless) and pin versions/tags.
- Drop privileges: run as a non-root user in the image; add --read-only and --cap-drop ALL, then selectively --cap-add only what is required.
- Avoid --privileged; prefer specific capabilities and device mappings.
- Scan images for CVEs (e.g., docker scout cves or third-party scanners).
- Keep the host patched; enable automatic security updates when possible.
- Rotate logs and secrets; use environment variables only for non-sensitive data. Prefer Docker secrets or external secret managers.
- Network segmentation: use custom networks; expose only necessary ports behind a reverse proxy (Nginx, Traefik, Caddy).
- Audit and monitor: docker events, docker logs, and system journaling; consider Falco or similar runtime security tools.
# Example: drop privileges and lock down container
docker run -d --name app \
--user 1000:1000 \
--read-only \
--cap-drop ALL \
--security-opt no-new-privileges \
-p 8080:8080 myorg/app:1.0.0
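A --read-only root filesystem usually breaks apps that write temp files, so pair it with tmpfs mounts for scratch paths. In this sketch, /tmp and /run are common choices but assumptions; check which paths your app actually writes to. The command runs only where Docker is installed.

```shell
#!/bin/sh
# Locked-down container with writable tmpfs scratch space (sketch).
locked_cmd='docker run -d --name app \
  --user 1000:1000 \
  --read-only --tmpfs /tmp --tmpfs /run \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  -p 8080:8080 myorg/app:1.0.0'
echo "$locked_cmd"
command -v docker >/dev/null 2>&1 && eval "$locked_cmd"
```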
Monitoring and Maintenance
- Logs: docker logs -f NAME; for the daemon, journalctl -u docker.
- Metrics: docker stats, cAdvisor, Prometheus, and node_exporter for the host.
- Cleanup: run docker system prune regularly; prune unused volumes/images with care.
- Backups: snapshot volumes (LVM/ZFS/Btrfs) or use docker run --rm -v vol:/data -v $(pwd):/backup busybox tar.
- Updates: schedule dnf/apt updates and image rebuilds; restart services with zero downtime via rolling deployments behind a load balancer.
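Routine cleanup can be scheduled with cron. The entry below is an example (schedule, log path, and file name are all choices to adjust); it is written to a temp file here, and would be installed under /etc/cron.d with root privileges on a real server.

```shell
#!/bin/sh
# Weekly prune of stopped containers and dangling images via cron (sketch).
# /etc/cron.d entries include a user field ("root" below).
cronline='0 3 * * 0 root /usr/bin/docker system prune -f >/var/log/docker-prune.log 2>&1'
cronfile=$(mktemp)
printf '%s\n' "$cronline" > "$cronfile"
cat "$cronfile"
# Then: sudo install -m 0644 "$cronfile" /etc/cron.d/docker-prune
```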
Troubleshooting on Linux
- Daemon isn't running: systemctl status docker, then journalctl -u docker -xe.
- Permission denied: ensure your user is in the docker group; re-login or run newgrp docker.
- Networking: verify firewall/NAT rules, check for port conflicts, and inspect networks with docker network inspect.
- Pull rate limits or timeouts: authenticate to Docker Hub, use a mirror, or host a private registry.
- SELinux denials: check /var/log/audit/audit.log; add appropriate labels or mount volumes with :z/:Z on SELinux systems.
When to Use Kubernetes Instead
If you need auto‑scaling, service discovery across nodes, self‑healing, and advanced ingress, consider Kubernetes. For single‑server or small clusters, Docker with Compose is often simpler, faster to adopt, and easier to manage. You can later migrate Compose definitions to Kubernetes manifests with tools like Kompose.
How YouStable Helps You Succeed with Docker
As a hosting provider focused on performance and security, YouStable offers VPS and dedicated servers optimized for containers: fast NVMe storage, hardened Linux images, premium bandwidth, and 24×7 support. Need a Docker‑ready stack, private registry, or CI/CD pipeline? Our team can provision and secure it for you, so you focus on shipping code.
Quick Reference: Essential Docker Commands
# Images and containers
docker pull IMAGE[:TAG]
docker build -t NAME:TAG .
docker run -d --name NAME -p HOST:CONTAINER IMAGE:TAG
docker exec -it NAME sh
docker stop/start/restart NAME
# Compose
docker compose up -d
docker compose logs -f
docker compose down
# System
docker ps -a
docker images
docker stats
docker system df
docker system prune -f
FAQs
How do I start Docker in Linux?
Install Docker Engine, then run sudo systemctl enable --now docker. Verify with docker version and docker info. If rootless mode is enabled, use the rootless setup tool and ensure your environment variables are loaded after login.
How can I run Docker without sudo?
Add your user to the docker group: sudo usermod -aG docker $USER, then re‑login or run newgrp docker. For stronger isolation, consider rootless Docker with dockerd-rootless-setuptool.sh install.
What’s the difference between Docker and a virtual machine?
VMs virtualize hardware and run full guest OSes, while Docker containers share the host kernel and isolate processes using namespaces and cgroups. Containers start faster and use fewer resources; VMs offer stronger isolation and OS‑level customization.
How do I persist data in Docker on Linux?
Use volumes (docker volume create, then -v vol:/path) for portable, managed storage, or bind mounts (-v /host:/container) for full control over the host filesystem. For databases, always mount data directories to a volume or bind mount.
Is Docker free to use on Linux servers?
Yes. Docker Engine is free and open-source under the Apache 2.0 license. Docker Hub offers a free tier with pull limits; for teams and enterprises, paid plans provide higher limits, private repositories, and additional features.
Conclusion
Now you know how to use Docker on a Linux server, from installation and first containers to Compose workflows, storage, networking, and hardening. Start small, script your setup, and apply best practices. When you're ready for production, pair these steps with monitoring, backups, and a security mindset to run reliable containerized services.