
Kubernetes Cluster Setup Using Kubeadm

What is Kubeadm?

kubeadm is a tool that bootstraps a Kubernetes cluster: it installs and configures the control plane components (API Server, etcd, Controller Manager, Scheduler) and prepares the cluster for use. kubeadm does not install the kubelet or the kubectl CLI for you; those tools are installed separately in step 7 below.

Note: This guide demonstrates how to install Kubernetes on cloud-based virtual machines (VMs) in a self-managed environment.

Steps to Set Up the Kubernetes Cluster

Prerequisites for AWS EC2

If you are using AWS EC2 servers, allow traffic on the required ports as detailed in the Kubernetes networking documentation.

  1. Provision three VMs: one master node and two worker nodes.
  2. Create two security groups:
    • Attach one to the master node.
    • Attach the other to the worker nodes.
  3. Disable the source/destination check on each VM, following the AWS documentation.
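If you prefer the CLI over the console, the check can be disabled per instance. A sketch using the AWS CLI (the instance ID is a placeholder; repeat for every node):

```shell
# Disable the source/destination check so the node can forward pod traffic
# (i-0123456789abcdef0 is a placeholder instance ID)
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --no-source-dest-check
```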

Steps for Master Node Setup

1. SSH into the Master Node

ssh -i <your-key.pem> ubuntu@<master-node-ip>

2. Disable Swap

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
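You can confirm swap is fully disabled before continuing:

```shell
# Should print nothing: no swap devices remain active
swapon --show

# The Swap line should read 0B total
free -h | grep -i swap
```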

3. Enable IPv4 Forwarding and Configure iptables for Bridged Traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
 
sudo modprobe overlay
sudo modprobe br_netfilter
 
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
 
sudo sysctl --system
 
# Verify configurations
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

4. Install Container Runtime (Containerd)

curl -LO https://github.com/containerd/containerd/releases/download/v1.7.14/containerd-1.7.14-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.14-linux-amd64.tar.gz
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
 
# Verify the service
systemctl status containerd
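Optionally, confirm that the sed edit above actually flipped the cgroup driver (kubelet and containerd must agree on it):

```shell
# Expect: SystemdCgroup = true
grep 'SystemdCgroup' /etc/containerd/config.toml
```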

5. Install runc

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

6. Install CNI Plugins

curl -LO https://github.com/containernetworking/plugins/releases/download/v1.5.0/cni-plugins-linux-amd64-v1.5.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.5.0.tgz

7. Install kubeadm, kubelet, and kubectl

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
 
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
 
sudo apt-get update
sudo apt-get install -y kubelet=1.29.6-1.1 kubeadm=1.29.6-1.1 kubectl=1.29.6-1.1 --allow-downgrades --allow-change-held-packages
sudo apt-mark hold kubelet kubeadm kubectl
 
# Verify installations
kubeadm version
kubelet --version
kubectl version --client

8. Configure crictl to Work with Containerd

sudo crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
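This writes /etc/crictl.yaml; the resulting file should contain roughly the following (a sketch; the exact fields vary by crictl version):

```yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
```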

9. Initialize the Control Plane

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<master-private-ip> --node-name master

Note: Copy the join command output for later use.

10. Prepare kubeconfig for kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

11. Install Calico Networking

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
kubectl apply -f custom-resources.yaml
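The Calico pods take a minute or two to roll out; a quick way to watch progress (assuming the operator-based install above):

```shell
# Wait until every calico-system pod reports Running
kubectl get pods -n calico-system --watch

# Once Calico is up, the control plane node should move to Ready
kubectl get nodes
```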

Steps for Worker Node Setup

  1. Perform steps 1-8 from the master node setup on both worker nodes.
  2. Use the join command generated in step 9 on the master node to join the cluster:
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name <worker-name>
  3. If you lost the join command, regenerate it on the master node:
sudo kubeadm token create --print-join-command

Validation

  1. Verify all nodes are Ready:
kubectl get nodes
  2. Ensure all pods are Running:
kubectl get pods -A

Troubleshooting Calico

  • Disable source/destination checks for all nodes.
  • Configure security group rules to allow bidirectional traffic on TCP 179.
  • Update Calico daemonset environment:
kubectl set env daemonset/calico-node -n calico-system IP_AUTODETECTION_METHOD=interface=<default-interface>

Replace <default-interface> with the primary interface, e.g., ens5.
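If you are unsure of the interface name, the default route usually reveals it:

```shell
# Prints the interface carrying the default route, e.g. ens5 on many EC2 instances
ip route show default | awk '{print $5}'
```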

  • Alternatively, if the operator-based install does not work in your environment, deploy Calico with the standalone manifest:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
