To install Kubernetes on a Linux server, prepare the OS (disable swap, load kernel modules, configure sysctl), install a container runtime (containerd recommended), add the Kubernetes repository, and install kubeadm, kubelet, and kubectl. Initialize the control plane with kubeadm, apply a CNI plugin, then join worker nodes and verify with kubectl.
In this guide, you’ll learn how to install Kubernetes on a Linux server the right way—using kubeadm, containerd, and a production-friendly configuration. We’ll cover Ubuntu/Debian and RHEL-based distros, pod networking (CNI), firewall ports, validation, upgrades, troubleshooting, and practical tips drawn from real-world hosting environments.
What You’ll Build
You will set up a Kubernetes cluster using kubeadm with one control plane node and one or more worker nodes. We will use containerd as the container runtime, install a CNI plugin for networking (Calico or Flannel), and validate with a sample workload.
Prerequisites
- Supported OS: Ubuntu 22.04/24.04, Debian 12, Rocky/AlmaLinux 9, RHEL 9 (or equivalents)
- Hardware (minimum): 2 vCPU and 4 GB RAM per node (8 GB+ recommended for control plane)
- Network: Stable connectivity between nodes, unique hostnames, and open required ports
- Root or sudo access, and basic Linux familiarity
- Time sync enabled (chrony or systemd-timesyncd)
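A quick way to confirm the last item before you begin (look for "System clock synchronized: yes" in the output):
timedatectl status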
Step 1: Prepare Your Linux Servers
Run the following on all nodes (control plane and workers). This ensures kernel modules, sysctl, and swap settings meet Kubernetes requirements.
Set Hostnames and /etc/hosts
# Example: set a hostname per node
sudo hostnamectl set-hostname cp-1 # control plane
# sudo hostnamectl set-hostname worker-1 # on worker, adjust accordingly
# Optional: map hostnames (use your IPs and names)
echo "10.0.0.10 cp-1" | sudo tee -a /etc/hosts
echo "10.0.0.11 worker-1" | sudo tee -a /etc/hosts
Load Kernel Modules and Configure Sysctl
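Kubernetes needs the overlay and br_netfilter modules loaded and the bridge/forwarding sysctls enabled. The same commands appear in the summary at the end of this guide:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay && sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system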
Disable Swap
sudo swapoff -a
# Permanently disable swap
sudo sed -i.bak '/ swap / s/^\(.*\)$/#\1/' /etc/fstab
Step 2: Install the Container Runtime (containerd)
Containerd is the recommended runtime for Kubernetes. Install and configure it on every node.
Ubuntu/Debian: Install containerd
sudo apt-get update
sudo apt-get install -y containerd
# Generate default config and enable systemd cgroups
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl enable --now containerd
RHEL/Rocky/AlmaLinux: Install containerd
# Note: containerd may not be in the default RHEL 9 repos; if this fails,
# add Docker's repository (download.docker.com) and install containerd.io instead
sudo dnf install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Optional for simplicity: set SELinux to permissive (or configure properly)
# sudo setenforce 0
# sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
sudo systemctl enable --now containerd
Using the systemd cgroup driver aligns kubelet and the container runtime with the host's init system and prevents resource-accounting mismatches. Always restart containerd after changing its config.
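A quick sanity check after editing the config works on either distro family:
# Should print SystemdCgroup = true
grep 'SystemdCgroup' /etc/containerd/config.toml
sudo systemctl restart containerd
systemctl is-active containerd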
Step 3: Install kubeadm, kubelet, and kubectl
Use the official Kubernetes repositories from pkgs.k8s.io. Repeat on all nodes.
Ubuntu/Debian: Add repo and install
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# Add Kubernetes signing key and apt repo (adjust v1.30 to your target minor version)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
RHEL/Rocky/AlmaLinux: Add repo and install
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
The exclude line keeps routine dnf updates from upgrading Kubernetes packages unintentionally, which is why the install command below passes --disableexcludes.
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
The kubelet will now restart in a crash loop every few seconds; that is expected and resolves once kubeadm init (or kubeadm join) hands it a configuration.
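Before moving on, it's worth confirming that all three binaries report the same version on every node, since version skew causes subtle failures:
kubeadm version -o short
kubectl version --client
kubelet --version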
Step 4: Initialize the Control Plane
Run the following only on the control plane node. Choose a pod network CIDR that matches your CNI plugin's defaults.
# Example pod network CIDR compatible with Calico (192.168.0.0/16)
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# After successful init, set up kubectl for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown "$(id -u):$(id -g)" $HOME/.kube/config
kubeadm outputs a kubeadm join command. Save it for the worker nodes. If you lose it, you can regenerate a token later.
Install a CNI Plugin (Networking)
Choose one CNI. Calico offers NetworkPolicy and BGP capabilities; Flannel is simpler for small labs.
- Calico (feature-rich):
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
- Flannel (lightweight; it expects the pod CIDR 10.244.0.0/16 by default, so either initialize with that CIDR or edit the manifest):
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Wait until all control plane pods become Ready. Check with:
kubectl get nodes
kubectl get pods -A
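If you prefer to block until the node reports Ready instead of polling, kubectl wait can do it (the timeout is an example value):
kubectl wait --for=condition=Ready node --all --timeout=300s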
Step 5: Join Worker Nodes
On each worker node, run the join command that kubeadm printed (example below). If expired, create a fresh one on the control plane with kubeadm token create --print-join-command.
# On control plane (if you need a new token)
kubeadm token create --print-join-command
# On each worker node (example)
sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <TOKEN> \
--discovery-token-ca-cert-hash sha256:<HASH>
Back on the control plane, verify nodes show Ready.
kubectl get nodes -o wide
Step 6: Validate with a Test Workload
Deploy NGINX and expose it via a NodePort, then test in your browser or with curl.
kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get svc nginx
# Test from outside: use any node’s IP and the allocated NodePort
# curl http://<NODE_IP>:<NODEPORT>
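To script the same test, you can pull the node IP and allocated port with jsonpath (this assumes the first node's InternalIP is reachable from where you run curl):
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I "http://${NODE_IP}:${NODE_PORT}"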
Required Firewall Ports
- Control plane: 6443 (API server), 2379–2380 (etcd), 10250 (kubelet), 10257 (controller-manager), 10259 (scheduler)
- Workers: 10250 (kubelet), 30000–32767 (NodePort Services)
- CNI-dependent: Calico (179 TCP for BGP, 4789 UDP for VXLAN if enabled), Flannel (8285/8472 UDP)
Allow these in your cloud security group and OS firewall (ufw/firewalld) for proper communication.
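For example, the ufw and firewalld equivalents look like this (control plane ports shown; extend with the CNI ports you need):
# Ubuntu/Debian (ufw)
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
# RHEL/Rocky/Alma (firewalld)
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --reload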
Upgrades and Maintenance
Plan Cluster Upgrades
# On the control plane: upgrade the kubeadm package first (Ubuntu/Debian shown)
sudo apt-mark unhold kubeadm && sudo apt-get update
sudo apt-get install -y kubeadm='1.30.*' && sudo apt-mark hold kubeadm
sudo kubeadm upgrade plan
# Apply the target version (example)
sudo kubeadm upgrade apply v1.30.x
# Then upgrade kubelet/kubectl on the control plane
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet='1.30.*' kubectl='1.30.*' && sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload && sudo systemctl restart kubelet
# Or on RPM-based:
# sudo dnf install -y kubelet-1.30.\* kubectl-1.30.\* --disableexcludes=kubernetes && sudo systemctl restart kubelet
# Drain workers one by one, upgrade, then uncordon
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
# Upgrade worker’s kubeadm/kubelet/kubectl to the same minor
# Then:
kubectl uncordon worker-1
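On the drained worker itself, the sequence mirrors the control plane: upgrade kubeadm, run kubeadm upgrade node, then upgrade kubelet/kubectl (Ubuntu/Debian shown; versions are examples):
sudo apt-mark unhold kubeadm && sudo apt-get update
sudo apt-get install -y kubeadm='1.30.*' && sudo apt-mark hold kubeadm
sudo kubeadm upgrade node
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet='1.30.*' kubectl='1.30.*' && sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload && sudo systemctl restart kubelet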
Back Up etcd (stacked topology)
# On control plane
sudo ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /root/etcd-snapshot.db
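Note that etcdctl is not always installed on the host; if it's missing, one option is to run the same command inside the etcd static pod via kubectl -n kube-system exec. Either way, verify the snapshot afterwards:
sudo ETCDCTL_API=3 etcdctl --write-out=table snapshot status /root/etcd-snapshot.db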
Reset/Uninstall (When Needed)
# Reset node (control plane or worker)
sudo kubeadm reset -f
sudo systemctl stop kubelet
sudo systemctl stop containerd
sudo rm -rf /var/lib/cni /var/lib/kubelet /etc/cni/net.d $HOME/.kube
# Optional: purge packages
# Ubuntu/Debian:
# sudo apt-get purge -y kubeadm kubelet kubectl containerd
# sudo apt-get autoremove -y
# RHEL/Rocky/Alma:
# sudo dnf remove -y kubeadm kubelet kubectl containerd
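kubeadm reset does not flush iptables or IPVS rules it created; if you plan to reuse the node, clear them too (the ipvsadm line applies only if kube-proxy ran in IPVS mode):
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# sudo ipvsadm -C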
Common Errors and Quick Fixes
- kubelet NotReady: Ensure swap is off, cgroups set to systemd, and CNI applied.
- Pods stuck in ContainerCreating: CNI plugin not installed or incorrect pod CIDR. Reapply the correct CNI manifest.
- Cannot pull images: Check containerd status, DNS resolution, and registry connectivity.
- Join command fails: Token expired. Regenerate with kubeadm token create --print-join-command.
- Firewall blocks: Open required ports between control plane and workers.
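When one of these strikes, a few commands usually surface the root cause quickly:
# Kubelet logs are the first stop for NotReady nodes
sudo journalctl -u kubelet --no-pager -n 100
# Node conditions and recent cluster events
kubectl describe node <node-name>
kubectl get events -A --sort-by=.metadata.creationTimestamp
# Runtime-level view of containers (crictl ships with the cri-tools package kubeadm pulls in)
sudo crictl ps -a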
Alternatives: Single-Node, CRI, and Managed Kubernetes
- Single-node cluster: For labs, remove the default control-plane taint (or add matching tolerations to your workloads) so the control plane node can schedule pods.
- CRI-O vs containerd: Both implement the Kubernetes CRI and are CNCF projects. containerd is more widely adopted and documented, making it a safe default.
- Managed services: If you prefer less ops burden, consider managed Kubernetes. You still need to understand nodes, networking, and workload design.
Why Run Kubernetes on YouStable
As a hosting provider focused on performance and reliability, YouStable’s cloud and dedicated servers are ideal for Kubernetes. You get fast NVMe storage, DDoS protection, private networking, and clean OS images that follow best practices. Our engineers can recommend right-sized instances and architectures for production clusters without overpaying.
Full Command Summary (Copy/Paste)
Below is a concise sequence for Ubuntu/Debian. Adjust versions and network CIDR as needed.
# 1) System prep (all nodes)
sudo hostnamectl set-hostname <your-hostname>
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay && sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
sudo swapoff -a && sudo sed -i.bak '/ swap / s/^/#/' /etc/fstab
# 2) containerd (all nodes)
sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl enable --now containerd
# 3) Kubernetes packages (all nodes)
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
# 4) Control plane init (control plane only)
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# 5) CNI (choose one; run on control plane)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
# OR
# kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# 6) Join workers (on each worker, use the printed command)
# sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>
# 7) Validate (on control plane)
kubectl get nodes -o wide
kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get svc nginx
Best Practices from Real-World Deployments
- Use static IPs or DHCP reservations for nodes; DNS stability matters.
- Enable time sync (chrony) across nodes to avoid TLS and scheduling issues.
- Pin Kubernetes minor versions across nodes to avoid skew. Upgrade deliberately.
- Isolate etcd data to fast disks and back it up regularly.
- Keep OS lean: only required packages, automatic security updates where possible.
- For production, plan HA: multiple control planes, external etcd, and multiple nodes per AZ.
FAQs: Install Kubernetes on Linux Server
Can I install Kubernetes on a single Linux server?
Yes. Initialize the control plane with kubeadm, then remove the default control-plane taint so the node can schedule workloads:
kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane-
This setup is fine for development but not recommended for production.
What are the minimum server requirements?
For a small cluster, start with 2 vCPU and 4 GB RAM per node. The control plane benefits from 4 vCPU and 8–16 GB RAM in real workloads. Use fast SSD/NVMe and reliable networking for stable performance.
Is Docker required, or should I use containerd?
Use containerd. Kubernetes removed dockershim, and containerd is a first-class, CRI-compatible CNCF runtime. It's lightweight, well supported, and aligns with modern Kubernetes guidance.
How do I expose apps to the internet?
For quick tests, use a NodePort service. For production, deploy a LoadBalancer (via your cloud provider) or install an Ingress controller (like NGINX Ingress or Traefik) and create Ingress resources with TLS.
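For instance, once an Ingress controller is running, a minimal rule can be created imperatively (app.example.com is a placeholder hostname, and this assumes the nginx Service from Step 6 exists):
kubectl create ingress nginx --rule="app.example.com/*=nginx:80"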
How do I uninstall Kubernetes installed with kubeadm?
Drain and delete nodes from the cluster, then run kubeadm reset -f on each node. Remove CNI configs and kubelet data, and optionally purge kubeadm, kubelet, kubectl, and containerd packages as shown in the reset section above.
Which CNI plugin should I choose?
For most beginners and SMB production, Calico is a solid default thanks to NetworkPolicy support and mature docs. Flannel is simple for labs. If you need advanced observability and eBPF features, consider Cilium.
Final Word
With these steps, you can confidently install Kubernetes on a Linux server and scale to a resilient cluster. If you’d like help selecting the right instances, storage, and network for your workloads, YouStable’s team can guide you from pilot to production.