Learn to Provision Kubernetes Cluster with Kubeadm

This article is more like a note to myself, but it may help if you are looking to configure a Kubernetes cluster on Ubuntu.

Virtual Machine Configuration

All my VM images will be based on Ubuntu 20.04 LTS Server. Each of these VMs is configured with 2 vCPUs and 4GB of virtual memory. It is recommended to configure each virtual machine with a static IP address.

I am using the following Hostnames & IP Assignments:

  • 1 Kubernetes Master Node
    — k8s-master : 192.168.1.121
  • 3 Kubernetes Worker Nodes
    — k8s-node-a : 192.168.1.122
    — k8s-node-b : 192.168.1.123
    — k8s-node-c : 192.168.1.124

Container runtime

Kubernetes uses a Container Runtime Interface (CRI) compliant container runtime to orchestrate containers in Pods.

There are many container runtimes supported by Kubernetes. The most popular ones include containerd, CRI-O, and Docker Engine (via cri-dockerd). The choice of a runtime depends on several factors such as performance, isolation needs, and security. For this virtual cluster, I chose containerd instead of Docker, since Kubernetes removed its built-in support for Docker Engine (the dockershim) in v1.24.

Let’s get started!

Prepare Virtual Machines / Servers

Start by preparing 4 machines with Ubuntu 20.04 LTS Server using the correct hostnames and IP addresses. Once done, power on all of them and apply the latest updates using:

sudo apt update && sudo apt upgrade

Map hosts in /etc/hosts

sudo vim /etc/hosts

---
192.168.1.121 k8s-master
192.168.1.122 k8s-node-a
192.168.1.123 k8s-node-b
192.168.1.124 k8s-node-c
---
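
With these entries in place, you can run a quick connectivity check from the master node, for example:

ping -c 2 k8s-node-a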

Set Timezone

sudo timedatectl set-timezone Asia/Jakarta
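
You can verify the change with timedatectl; the output should show Asia/Jakarta as the local time zone.

timedatectl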

Each node in the Kubernetes cluster needs the following components.

  • A container runtime
  • Kubectl - The command-line interface to the Kubernetes API
  • Kubelet - Agent on each node that receives work from the scheduler
  • Kubeadm - Tool to automate deployment and configuration of a Kubernetes cluster

Before going any further, ensure that the nodes that will be part of the Kubernetes cluster can communicate with each other and that the firewall ports required for node-to-node communication are open.

The following network ports must be open for inbound TCP traffic on the control plane node.

  • 6443 (Kubernetes API server)
  • 2379:2380 (etcd server client API)
  • 10250 (kubelet API)
  • 10257 (kube-controller-manager)
  • 10259 (kube-scheduler)
  • 179 (BGP, required for Calico)

On the worker nodes, you should allow incoming TCP traffic on the following ports.

  • 10250 (kubelet API)
  • 30000:32767 (NodePort Services)

On Ubuntu, you can use the ufw command to perform this configuration. For example, on the control plane node:

sudo ufw allow proto tcp from any to any port 6443,2379,2380,10250,10257,10259,179
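
The rule above covers the control plane ports. On the worker nodes, an equivalent rule for the kubelet and NodePort ranges would look something like this (adjust it to your own firewall policy):

sudo ufw allow proto tcp from any to any port 10250,30000:32767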

On each node, you must disable swap and configure IPv4 forwarding and iptables to see bridged traffic. Before all this, ensure that each node has the latest and greatest packages. You will also need curl on each node to download certain packages.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

On each node that will be a part of the Kubernetes cluster, you must disable swap.

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

The sed command above comments out any swap entries in /etc/fstab so that swap stays disabled across reboots; double-check the file to make sure no active swap line remains.
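
To confirm that swap is no longer active, you can check with the following commands; swapon --show should print nothing, and free -h should report zero swap.

swapon --show
free -h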

Next, you need to configure IPv4 forwarding and IP tables on each node.

# Enable IP tables bridge traffic on all nodes
# https://kubernetes.io/docs/setup/production-environment/container-runtimes/#forwarding-ipv4-and-letting-iptables-see-bridged-traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
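
You can confirm that the kernel modules are loaded and the sysctl values took effect with, for example:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward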

Installing containerd

The next set of commands downloads containerd v1.6.9 (along with runc and the CNI plugins) from GitHub and configures it. You need to run this on each node.

# Install Containerd
curl -Lo /tmp/containerd-1.6.9-linux-amd64.tar.gz https://github.com/containerd/containerd/releases/download/v1.6.9/containerd-1.6.9-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local /tmp/containerd-1.6.9-linux-amd64.tar.gz

curl -Lo /tmp/runc.amd64 https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
sudo install -m 755 /tmp/runc.amd64 /usr/local/sbin/runc

curl -Lo /tmp/cni-plugins-linux-amd64-v1.1.1.tgz https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin /tmp/cni-plugins-linux-amd64-v1.1.1.tgz

# Remove the temporary files
rm /tmp/containerd-1.6.9-linux-amd64.tar.gz /tmp/runc.amd64 /tmp/cni-plugins-linux-amd64-v1.1.1.tgz

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

sudo curl -Lo /etc/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
sudo systemctl status containerd
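
Since the kubelet expects the systemd cgroup driver, it is worth double-checking that the sed command above actually flipped the setting, and that containerd responds on its socket:

grep SystemdCgroup /etc/containerd/config.toml
sudo ctr version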

Installing kubeadm, kubelet, and kubectl

These three tools are needed on each node.

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Kubeadm, Kubelet, and Kubectl
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

The above commands download and install the three tools that we need on each node. Once installed, we mark the packages as held so that they don’t get automatically upgraded or removed.
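
A quick way to confirm the installation on each node is to print the installed versions:

kubeadm version -o short
kubectl version --client
kubelet --version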

Initialize Kubernetes Cluster

Configure and Initialize the Control Plane Node

Once the prerequisite configuration is complete, you can initialize the Kubernetes cluster using kubeadm init command on the master node.

# Use the node's primary IP address (hostname -I can return more than one address)
IPADDR=$(hostname -I | awk '{print $1}')
APISERVER=$(hostname -s)
NODENAME=$(hostname -s)
POD_NET="10.244.0.0/16"

sudo kubeadm init --apiserver-advertise-address=$IPADDR \
                  --apiserver-cert-extra-sans=$APISERVER \
                  --pod-network-cidr=$POD_NET \
                  --node-name $NODENAME

This command runs a series of preflight checks and then starts the Pods needed for the Kubernetes control plane. At the end of a successful run, you will see output similar to what is shown here.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.121:6443 --token ddv77z.iw34a74lih7ry6dk \
        --discovery-token-ca-cert-hash sha256:9c86d72e5f1ee63ac1792c7c3a4b3cc29d8b9e9298d8b2ae979827e06fec8770

Before proceeding or clearing the screen output, copy the kubeadm join command. You need this to join the worker nodes to the Kubernetes cluster.
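
If you lose this output, you do not have to reinitialize the cluster; you can generate a fresh join command on the control plane node at any time:

kubeadm token create --print-join-command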

Prepare kube config

Before installing the Pod network add-on, you need to prepare the kubectl config file. The kubeadm init output provides the necessary commands to do this.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Once this is done, verify that the Kubernetes control plane can be queried.

kubectl get nodes

NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   3m25s   v1.26.1

This command shows only the control plane node, and it is reported as NotReady because the Pod network is not yet in place. You can now install the Pod network add-on.

Installing Calico

Installing Calico takes just two steps. First, we install the operator.

curl -Lo /tmp/tigera-operator.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml

kubectl create -f /tmp/tigera-operator.yaml

Next, we need to install the custom resources.

curl -Lo /tmp/custom-resources.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml

In this YAML, you need to modify spec.calicoNetwork.ipPools.cidr to match what you specified as the argument to --pod-network-cidr. Once this modification is complete, you can apply the custom resources.

CIDR='10.244.0.0/16'
sed -i "s|192.168.0.0/16|$CIDR|" /tmp/custom-resources.yaml
kubectl create -f /tmp/custom-resources.yaml

You need to wait for the Calico Pods to transition to Ready state before you can proceed towards joining the worker nodes to the cluster.

watch kubectl get pods -n calico-system

Once all Calico pods in the calico-system namespace are online and ready, you can check whether the control plane node is in the Ready state using the kubectl get nodes command.

kubectl get nodes

NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   7m52s   v1.26.1

Configure and Join the Worker Node

Finally, you can move on to joining all worker nodes to the cluster. On each worker node, run the command you copied from the kubeadm init output.

sudo kubeadm join 192.168.1.121:6443 --token ddv77z.iw34a74lih7ry6dk \
        --discovery-token-ca-cert-hash sha256:9c86d72e5f1ee63ac1792c7c3a4b3cc29d8b9e9298d8b2ae979827e06fec8770

The node joining process takes a few minutes. On the control plane node, you can run the watch kubectl get nodes command and wait until all nodes come online and transition to the Ready state.

kubectl get nodes

NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   14m   v1.26.1
k8s-node-a   Ready    <none>          86s   v1.26.1
k8s-node-b   Ready    <none>          86s   v1.26.1
k8s-node-c   Ready    <none>          86s   v1.26.1

You should also verify that all control plane and add-on pods are online and ready.

kubectl get pods --all-namespaces
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-694d94c6cc-5v9wp          1/1     Running   0          7m4s
calico-apiserver   calico-apiserver-694d94c6cc-ssrtv          1/1     Running   0          7m4s
calico-system      calico-kube-controllers-67df98bdc8-lq7fx   1/1     Running   0          8m
calico-system      calico-node-42rtg                          1/1     Running   0          2m
calico-system      calico-node-6qkk7                          1/1     Running   0          2m
calico-system      calico-node-jj72d                          1/1     Running   0          8m
calico-system      calico-node-s54c5                          1/1     Running   0          2m
calico-system      calico-typha-8588fd4cf9-5zfkq              1/1     Running   0          8m1s
calico-system      calico-typha-8588fd4cf9-mkwfz              1/1     Running   0          110s
kube-system        coredns-787d4945fb-fv2w8                   1/1     Running   0          14m
kube-system        coredns-787d4945fb-kccxr                   1/1     Running   0          14m
kube-system        etcd-k8s-master                            1/1     Running   0          14m
kube-system        kube-apiserver-k8s-master                  1/1     Running   0          14m
kube-system        kube-controller-manager-k8s-master         1/1     Running   0          14m
kube-system        kube-proxy-b9x8t                           1/1     Running   0          2m
kube-system        kube-proxy-kc7m5                           1/1     Running   0          2m
kube-system        kube-proxy-l79wg                           1/1     Running   0          2m
kube-system        kube-proxy-pvkhk                           1/1     Running   0          14m
kube-system        kube-scheduler-k8s-master                  1/1     Running   0          14m
tigera-operator    tigera-operator-7795f5d79b-mbtrl           1/1     Running   0          8m20s

This is it. You now have a four-node Kubernetes cluster that you can use for learning, development, and even production (if you are brave enough!).
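
As a quick smoke test, you can deploy something simple, for example an nginx Deployment exposed through a NodePort Service, and then browse to any node's IP on the assigned port:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc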