To use Kubernetes on a Linux server, install a container runtime (containerd), set up kubeadm, kubelet, and kubectl, initialize a control plane with kubeadm init, install a CNI plugin for networking, then join worker nodes and deploy workloads with kubectl. This guide walks you through a secure, production-ready setup, end to end.
If you’re wondering how to use Kubernetes on a Linux server, this beginner-friendly tutorial covers everything from prerequisites and installation to deploying apps, networking, storage, and day-2 operations.
We’ll use kubeadm on Ubuntu/Debian, highlight best practices, and show commands that work on most Linux distributions.
What is Kubernetes and Why Run it on a Linux Server?
Kubernetes (K8s) is an open source system for orchestrating containers across multiple servers. It automates deployment, scaling, high availability, and failover.
Running Kubernetes on Linux gives you full control over performance, cost, and security, making it ideal for self-hosted apps, edge deployments, labs, and production clusters in your own data center or VPS environment.
Search Intent: Learn, Install, and Operate Kubernetes on Linux
Based on common queries (e.g., “install Kubernetes Ubuntu,” “kubeadm tutorial,” “Kubernetes cluster setup”), readers want a practical, step-by-step guide that also covers networking, storage, access, and troubleshooting. That’s exactly what this guide provides, aligned with current Kubernetes best practices and real-world hosting experience.
Prerequisites and Planning
System Requirements
Use at least 2 Linux servers (1 control plane, 1 worker). For learning, a single node works, but multi-node is better.
- OS: Ubuntu 22.04+ or Debian 12+ (RHEL/CentOS/Rocky/Alma supported with minor changes)
- CPU/RAM: Control plane (2 vCPU, 4–8 GB RAM); Worker (2 vCPU, 4 GB RAM) minimum
- Storage: 40+ GB disk per node
- Network: Stable connectivity between nodes; open required ports
Architecture Overview
Kubernetes has a control plane (API server, scheduler, controller manager, etcd) and worker nodes running kubelet and a container runtime. kubeadm bootstraps the cluster; kubectl manages it. A CNI plugin provides Pod networking; optional Ingress routes HTTP/HTTPS traffic.
Networking and DNS
Pick a Pod CIDR compatible with your CNI (e.g., 192.168.0.0/16 for Calico). Ensure hostname resolution works. If using a firewall or cloud security groups, open Kubernetes ports (6443, 2379-2380, 10250-10259, 30000-32767 on workers).
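For example, with ufw on Ubuntu, opening the required ports could look roughly like this (a sketch; adapt it to your firewall or cloud security groups, and open only what each node role needs):
# Control-plane node
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd client and peer
sudo ufw allow 10250:10259/tcp # kubelet, scheduler, controller manager
# Worker nodes also need the NodePort range
sudo ufw allow 30000:32767/tcp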
Security Baseline
Use unique SSH keys, disable password SSH, keep OS packages updated, and enable automatic security patches. Plan for a non-root user with sudo and configure time sync (chrony or systemd-timesyncd) across nodes.
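On Ubuntu/Debian, a quick sketch of the time-sync and SSH pieces (verify that key-based SSH access works before disabling passwords):
timedatectl status    # expect "System clock synchronized: yes"
sudo apt-get install -y chrony && sudo systemctl enable --now chrony
# Disable password SSH logins only after confirming key-based access
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh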
Step-by-Step: Install Kubernetes on Linux (kubeadm)
Step 1: Prepare the OS
Run these on all nodes (control plane and workers). Commands shown for Ubuntu/Debian.
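# Disable swap (the kubelet will not start with swap enabled by default)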
sudo swapoff -a
sudo sed -i.bak '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Load kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Sysctl params required by Kubernetes networking
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
Step 2: Install containerd (container runtime)
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
# Install containerd
sudo apt-get install -y containerd
# Generate default config and enable systemd cgroup driver
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl enable --now containerd
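Before moving on, confirm the runtime is active:
systemctl is-active containerd
containerd --version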
Step 3: Install kubeadm, kubelet, kubectl
Use the official Kubernetes apt repository. Replace v1.30 with your target minor version if needed.
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
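A quick sanity check that the tools are installed and held at the pinned version:
kubeadm version
kubectl version --client
apt-mark showhold   # should list kubelet, kubeadm, and kubectl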
Step 4: Initialize the control plane
Run on the control plane node. We set a Pod network CIDR compatible with Calico.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
After success, configure kubectl for your user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Step 5: Install a CNI plugin (Calico)
Calico provides networking and policy. Apply the manifest from the upstream repository.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml
# Wait for nodes and core pods to become Ready
kubectl get nodes -o wide
kubectl get pods -A
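If you script the setup, you can block until the node reports Ready instead of polling manually (a sketch):
kubectl wait --for=condition=Ready node --all --timeout=300s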
Step 6: Join worker nodes
Run the join command shown by kubeadm init on each worker. If you lost it, create a new token:
kubeadm token create --print-join-command
# Run the printed join command with sudo on each worker node
Step 7: Verify the cluster
kubectl get nodes
kubectl get pods -A
kubectl cluster-info
kubectl describe node <node-name>
Deploy Your First Application
Create a namespace
kubectl create namespace demo
Deploy NGINX with a Service
cat <<EOF | kubectl apply -n demo -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF
kubectl get svc -n demo -o wide
Access the app via any node’s IP on port 30080 (e.g., http://NODE_IP:30080). For local testing without NodePort, use:
kubectl -n demo port-forward svc/web-svc 8080:80
# Browse http://127.0.0.1:8080
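# Or check from the terminal (expect an HTTP 200 from nginx)
curl -I http://127.0.0.1:8080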
Day 2 Basics: Storage, Ingress, Security, and Observability
Storage and Persistence
For real workloads, use a CSI driver for dynamic PersistentVolumes (e.g., Longhorn, OpenEBS, or your cloud provider’s CSI). For quick tests, hostPath works but isn’t suitable for production.
# Example PVC (needs a default StorageClass in the cluster)
cat <<EOF | kubectl apply -n demo -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
EOF
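To confirm the claim binds and mounts, you can attach it to a throwaway Pod (the Pod and container names below are just examples):
cat <<EOF | kubectl apply -n demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: shell
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
EOF
kubectl get pvc,pod -n demo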
Ingress for HTTP/HTTPS
Install an ingress controller (e.g., NGINX Ingress Controller) and create Ingress resources to route traffic to Services using hostnames and TLS certificates (Let’s Encrypt via cert-manager).
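As a sketch, assuming the NGINX Ingress Controller is installed and using a placeholder hostname, an Ingress for the demo Service could look like this:
cat <<EOF | kubectl apply -n demo -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com   # replace with your domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
EOF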
RBAC and Service Accounts
Follow least privilege with Role/ClusterRole and bindings. Create a ServiceAccount per app that needs API access, and scope permissions tightly.
kubectl create sa reader -n demo
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n demo
kubectl create rolebinding reader-binding --role=pod-reader --serviceaccount=demo:reader -n demo
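You can verify the scoped access with impersonation:
kubectl auth can-i list pods -n demo --as=system:serviceaccount:demo:reader     # yes
kubectl auth can-i delete pods -n demo --as=system:serviceaccount:demo:reader   # no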
Observability: Metrics and Logs
Install metrics-server for resource metrics; use kubectl top to view usage. For fuller monitoring, logging, and tracing, consider Prometheus, Grafana, and OpenTelemetry. Even small clusters benefit from node- and pod-level dashboards.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl top nodes
kubectl top pods -A
Security and Best Practices
- Use the systemd cgroup driver for kubelet and containerd (already configured above).
- Set resource requests/limits for every container to avoid noisy-neighbor issues.
- Enforce Pod Security Standards and NetworkPolicies to segment traffic by namespace/app (see the example after this list).
- Rotate kubeadm tokens, TLS certs, and kubeconfig files; restrict access to admin.conf.
- Regularly apply OS and Kubernetes updates; cordon and drain nodes before maintenance.
- Back up etcd (control plane) regularly and test restores.
- Use separate namespaces, apply quotas, and label everything (team, env, app, version).
- Prefer Ingress + certificates for public endpoints; avoid exposing NodePort to the internet.
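For example, a default-deny ingress policy for the demo namespace (a minimal sketch; the CNI must enforce NetworkPolicies, which Calico and Cilium do but plain Flannel does not):
cat <<EOF | kubectl apply -n demo -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
EOF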
Troubleshooting Cheatsheet
- Node not Ready: check kubelet logs and containerd status
systemctl status kubelet containerd
journalctl -u kubelet -f
- Pods Pending: inspect events and check the CNI plugin
kubectl get pods -A -o wide
kubectl describe pod <name> -n <ns>
kubectl get pods -n kube-system
- Networking issues: verify kernel modules and sysctl, and confirm the Pod CIDR matches the CNI configuration
lsmod | grep br_netfilter
sysctl net.ipv4.ip_forward
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
- Join failures: regenerate the token and CA cert hash
kubeadm token create --print-join-command
- API server unreachable: check port 6443, control-plane health, and etcd
kubectl cluster-info
kubectl get componentstatuses   # deprecated, but still a quick health hint
sudo crictl ps --name kube-apiserver
sudo crictl logs <apiserver-container-id>
Costs and Hosting Considerations
Kubernetes runs great on VPS or dedicated servers. Plan CPU, RAM, and SSD IOPS for your workload mix, and keep room for overhead (control plane, CNI, observability). For public exposure, place an edge proxy or load balancer in front of the cluster and harden network policies.
If you want Kubernetes-ready Linux servers with clean networking, IPv4/IPv6, and 24/7 support, YouStable offers high-performance VPS and bare-metal plans ideal for kubeadm clusters. Prefer a hands-off approach? Ask about our managed Kubernetes options—installation, monitoring, backups, and security handled by experts.
Upgrades and Maintenance
- Control plane: cordon and drain the node if needed, then run kubeadm upgrade plan and kubeadm upgrade apply (commands sketched after this list).
- Workers: drain, upgrade kubelet/kubeadm, restart, and uncordon.
- Version skew: keep kubelet within one minor version of the control plane.
- Backup etcd before upgrades, and test rollbacks in staging.
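A typical kubeadm upgrade sequence looks roughly like this; versions and node names are placeholders, so check the release notes and the official upgrade guide for your exact target version:
# Control plane: upgrade kubeadm first, then plan and apply
sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm='1.30.x-*' && sudo apt-mark hold kubeadm
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.30.x
# Each worker: upgrade kubeadm, update node config, drain, upgrade kubelet, restart, uncordon
sudo apt-mark unhold kubeadm && sudo apt-get install -y kubeadm='1.30.x-*' && sudo apt-mark hold kubeadm
sudo kubeadm upgrade node
kubectl drain <node-name> --ignore-daemonsets   # run from a machine with kubectl access
sudo apt-mark unhold kubelet kubectl && sudo apt-get install -y kubelet='1.30.x-*' kubectl='1.30.x-*' && sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon <node-name>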
FAQs
Is Docker required to run Kubernetes on Linux?
No. Kubernetes uses the Container Runtime Interface (CRI). containerd is the recommended runtime and works natively. You don’t need the Docker Engine to run Kubernetes, though you can build images with Docker on your workstation.
What’s the easiest way to install Kubernetes on Ubuntu?
kubeadm is the standard path: install containerd, add the official Kubernetes repository, install kubeadm/kubelet/kubectl, initialize the control plane, apply a CNI plugin, and join workers. For single-node testing, tools like kind or Minikube are even faster.
Which CNI plugin should I choose: Calico, Cilium, or Flannel?
For simplicity and NetworkPolicy support, Calico is a strong default. Cilium adds eBPF-powered features and deep observability. Flannel is lightweight but limited. Match the plugin to your needs and test performance and features before production.
How do I expose services to the internet securely?
Use an Ingress controller behind a cloud or hardware load balancer. Terminate TLS with cert-manager for automatic certificates. Apply NetworkPolicies and WAF rules as needed. Avoid exposing NodePort directly to the public internet.
Should I run Kubernetes myself or use a managed service?
Self-managed clusters give control and can reduce costs, but require time and expertise. Managed services (or managed nodes on your servers) cover installation, upgrades, security, and monitoring. If you prefer support-backed, Kubernetes-ready Linux servers, YouStable can help with both DIY and managed options.
Conclusion
We covered a complete workflow: prepare Linux, install containerd and kubeadm, bootstrap the control plane, join workers, deploy apps, add storage and ingress, and secure and observe the cluster. With this foundation, you can scale confidently and adapt Kubernetes to your on-prem or cloud environment.