Use Kubernetes on a Linux server to automate the deployment, scaling, and management of containerized applications. Kubernetes is a powerful open-source container orchestration platform that simplifies running complex applications reliably across clusters of servers.

This guide covers how to use Kubernetes on a Linux server—from installation and basic configuration to deploying your first application.
Prerequisites
- A Linux server (or multiple servers) running Ubuntu, Debian, CentOS, Red Hat, or similar
- Root or sudo access to install software and configure the system
- A container runtime installed (Docker, containerd, or CRI-O)
- Network connectivity between nodes (if setting up a multi-node cluster)
- Familiarity with the Linux command line
Prepare Your Linux Server(s)
Before installing Kubernetes components, make sure the server is updated:
sudo apt update
sudo apt upgrade -y
Disable swap:
sudo swapoff -a
To make this permanent, comment out or remove swap entries in /etc/fstab.
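One common way to comment out the swap entries is a single sed command (GNU sed). The sketch below runs against a sample copy in /tmp so it is safe to try anywhere; on a real server, run the same sed line against /etc/fstab with sudo after backing the file up.

```shell
# Demonstrate the edit on a sample fstab copy
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 /     ext4 defaults 0 1
/swap.img      none  swap sw       0 0
EOF

# Prefix any line containing a "swap" field with '#'
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```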
Install necessary tools:
sudo apt install -y apt-transport-https ca-certificates curl
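kubeadm's preflight checks also expect IPv4 forwarding to be enabled, and many CNI plugins need the overlay and br_netfilter kernel modules. A typical preparation, following the upstream installation docs:

```shell
# Load kernel modules required for container networking (persists across reboots)
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```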
Install Container Runtime
Kubernetes requires a container runtime such as containerd or CRI-O. (Docker Engine also works, but since Kubernetes 1.24 it must be paired with the cri-dockerd shim, so containerd is the most common choice.)
- Install containerd on Ubuntu/Debian:
sudo apt install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl enable --now containerd
If kubelet uses the systemd cgroup driver (the kubeadm default), set SystemdCgroup = true in /etc/containerd/config.toml and restart containerd. Alternatively, install Docker (sudo apt install -y docker.io) together with cri-dockerd, or CRI-O, as preferred.
Install Kubernetes Components
Add the Kubernetes repository and install kubeadm, kubelet, and kubectl:
The legacy apt.kubernetes.io repository has been deprecated and shut down; use the community-owned pkgs.k8s.io repository instead (replace v1.30 with the minor version you want to track):
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Initialize Your Kubernetes Cluster
Initialize the cluster on the control plane node (master):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
The --pod-network-cidr flag sets the cluster's pod address range; 10.244.0.0/16 is the default Flannel expects, but choose the range based on your network plugin (Calico, for example, defaults to 192.168.0.0/16).
After initialization:
- Configure kubectl for the regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Deploy a pod network add-on (e.g., Flannel):
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Wait until the Flannel pods in the kube-flannel namespace are Running and the node reports Ready.
Join Worker Nodes to the Cluster
On each worker node, run the join command printed at the end of the kubeadm init output on the control plane. It looks like this:
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
This connects the worker node to the cluster. If the join command is no longer at hand, regenerate it on the control plane:
sudo kubeadm token create --print-join-command
Verify the Cluster and Deploy an Application
Check node status:
kubectl get nodes
Deploy a sample Nginx application:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
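The two imperative commands above can also be expressed declaratively and applied with kubectl apply -f, which is easier to version-control and repeat. A minimal sketch (the file name nginx.yaml is arbitrary):

```yaml
# nginx.yaml — equivalent Deployment plus NodePort Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

Apply it with kubectl apply -f nginx.yaml; Kubernetes assigns the NodePort automatically unless one is specified.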
Check pods and services:
kubectl get pods
kubectl get svc
Access the Nginx server at http://<node-ip>:<node-port>, where <node-port> is the high port (30000–32767 by default) shown in the kubectl get svc output.
Conclusion
To use Kubernetes on a Linux server, install a container runtime, Kubernetes components, and initialize a cluster with kubeadm. Securely add worker nodes and deploy applications to benefit from Kubernetes’ powerful orchestration features. Kubernetes drastically simplifies managing containerized applications across many Linux servers. For more in-depth guidance and advanced configurations, refer to the official Kubernetes documentation.