Kubernetes (often referred to as K8s) is an open-source platform used to automate the deployment, scaling, and management of containerized applications. It helps developers and system administrators efficiently manage clusters of containers and is one of the most popular tools for container orchestration today.

If you’re looking to configure Kubernetes on a Linux server, you’re in the right place. This guide will walk you through every step needed to configure Kubernetes, from installing the necessary components to deploying a test application on your cluster.
Prerequisites
Before we dive into the installation, there are a few requirements to ensure everything runs smoothly:
- Supported Linux Distributions: Kubernetes supports various Linux distributions, including Ubuntu 20.04 and later, CentOS 7 and later, and Debian 10 and later.
- System Requirements:
- At least 2 CPUs for your server.
- A minimum of 2 GB RAM (4 GB is recommended).
- 20 GB or more of free disk space.
- User Permissions: You need sudo or root access to the server to install and configure Kubernetes components.
- Network Configuration: Make sure all nodes (master and worker) can reach each other, and that the required ports are open (6443 on the master for the API server, 10250 on every node for the kubelet).
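The hardware requirements above can be checked in one go. The snippet below is a minimal sketch assuming GNU coreutils on Linux; it only reports the values against the minimums rather than enforcing them:

```shell
# Hypothetical pre-flight check against the minimums listed above
# (2 CPUs, 2 GB RAM, 20 GB free disk); it reports rather than enforces.
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {printf "%d", $2/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "CPUs: ${cpus} (need >= 2)"
echo "Memory: ${mem_mb} MB (need >= 2048)"
echo "Free disk on /: ${disk_gb} GB (need >= 20)"
```

Run it on every node you plan to add to the cluster before continuing.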
Install Docker (Container Runtime)
Kubernetes requires a container runtime to run containers. Since Kubernetes 1.24 removed the dockershim, the kubelet talks to a runtime such as containerd through the Container Runtime Interface (CRI); installing Docker remains a convenient option on Ubuntu because the docker.io package ships containerd alongside the familiar Docker CLI.
Install Docker
On Ubuntu, run the following commands to install Docker:
sudo apt-get update
sudo apt-get install -y docker.io
Enable and Start Docker
To ensure Docker starts on boot and is currently running, use these commands:
sudo systemctl enable docker
sudo systemctl start docker
Verify Installation
Check the version of Docker to ensure it is installed correctly:
docker --version
Disable Swap
Kubernetes requires swap to be disabled on each node: by default the kubelet refuses to start while swap is active, because swap makes pod memory accounting unpredictable.
Temporarily Disable Swap
Run the following command to disable swap:
sudo swapoff -a
Permanently Disable Swap
To make this change persistent across reboots, comment out the swap entry in the /etc/fstab file:
sudo sed -i '/ swap / s/^/#/' /etc/fstab
Install Kubernetes Components
Kubernetes requires a few essential components to be installed: kubeadm, kubelet, and kubectl.
Add Kubernetes APT Repository
Add the official Kubernetes repository to your package manager. Note that the old apt.kubernetes.io repository (hosted on packages.cloud.google.com) has been deprecated and frozen; current packages are published at pkgs.k8s.io, with a separate repository per Kubernetes minor version. For example, for v1.30:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
Install kubeadm, kubelet, and kubectl
Now, install the necessary Kubernetes components:
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Verify Installation
Ensure the installation of Kubernetes components is successful:
kubeadm version
kubectl version --client
kubelet --version
Initialize Kubernetes Master Node
Now, let’s set up the Kubernetes master node (called the control plane node in newer Kubernetes documentation), which will manage the cluster.
Initialize Cluster
On the master node, initialize the Kubernetes cluster using kubeadm. The --pod-network-cidr flag sets the address range pods will use for communication; 10.244.0.0/16 is the range expected by Flannel by default, and recent versions of Calico detect it automatically:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Set up kubeconfig for kubectl
For kubectl to work with the newly created cluster, you’ll need to configure it to use the admin.conf
file:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are running as root, you can point kubectl at the admin config directly:
export KUBECONFIG=/etc/kubernetes/admin.conf
Verify Cluster Status
Once the master node is initialized, check the status of the nodes:
kubectl get nodes
Your master node will likely show a NotReady status at this point. That is expected: it will switch to Ready once a pod network plugin is installed in the next step.
Install Network Plugin
Kubernetes requires a network plugin to manage communication between pods. There are several plugins to choose from, such as Calico and Flannel.
Install Network Plugin
For example, to install Calico, apply its manifest. The docs.projectcalico.org URL used in older guides has been retired; fetch the manifest from the Calico GitHub releases instead (substitute the latest release version):
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
You can use Flannel or any other network plugin if you prefer, but make sure the pod network CIDR you passed to kubeadm init matches the plugin's configuration.
Join Worker Nodes to the Cluster
Now, it’s time to join worker nodes to your Kubernetes cluster.
Run the Join Command on Worker Nodes
On each worker node, run the join command provided by kubeadm during the master node initialization process. It will look something like this:
sudo kubeadm join <master-node-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
If you no longer have the original output, regenerate the full join command on the master node:
sudo kubeadm token create --print-join-command
Verify Node Status
After the worker nodes have successfully joined, check the status of all nodes:
kubectl get nodes
All nodes, including the master and worker nodes, should now be listed as Ready.
Verify Cluster and Deploy a Test Application
Your Kubernetes cluster is now set up, but let’s test it with a simple deployment.
Check Cluster Components
Make sure all necessary Kubernetes components are up and running:
kubectl get pods --all-namespaces
Deploy a Test Application
You can now create a simple test deployment using nginx:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
This will deploy Nginx and expose it on a NodePort.
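The same deployment can also be expressed declaratively. The manifest below is a sketch equivalent to the two commands above (the nginx name, label, and ports mirror what those commands generate); save it as a file and apply it with kubectl apply -f:

```yaml
# Declarative equivalent of the imperative create/expose commands above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

Keeping manifests like this in version control makes deployments reproducible, which is the usual practice beyond quick tests.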
Access the Application
To access the application, find the NodePort by running:
kubectl get svc nginx
Then browse to http://<node-ip>:<node-port>, using any node's IP address and the high port (in the 30000-32767 range) shown in the PORT(S) column.
Secure Your Kubernetes Cluster
Security is critical when working with Kubernetes. Here are a few security measures to consider:
- Set Up Role-Based Access Control (RBAC): RBAC helps control access to Kubernetes resources by defining roles and permissions for users and services.
- Configure Network Policies: Network policies allow you to control traffic between pods and services within the cluster.
- Enable Audit Logging: Audit logging helps track access and changes to your cluster for improved security and monitoring.
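To illustrate the RBAC bullet above, the manifest below sketches a minimal read-only role; the pod-reader and jane names are hypothetical placeholders, not part of this guide's cluster:

```yaml
# Grants a hypothetical user "jane" read-only access to pods
# in the default namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader   # hypothetical role name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane         # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Roles and RoleBindings are namespaced; use ClusterRole and ClusterRoleBinding for cluster-wide permissions.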
Conclusion
You’ve successfully configured Kubernetes on your Linux server. Your Kubernetes cluster is now ready to manage containerized applications at scale.
Now that your cluster is set up, you can explore more advanced features such as Helm for package management, Kubernetes namespaces, and persistent storage solutions for your containers.
For further learning, refer to the official Kubernetes documentation to deepen your understanding and explore advanced use cases.