Unlocking Kubernetes Mastery with Kubeadm and Calico

Welcome to the world of Kubernetes, often referred to as k8s, where containerized applications meet orchestration excellence. In this guide, we'll embark on a journey to master Kubernetes using Kubeadm and Calico, ensuring your containerized applications thrive in a dynamic and scalable environment.


Prerequisites

Before we dive into the exciting world of Kubernetes, make sure you have the following prerequisites in place:

Hardware Requirements:

Prepare for Kubernetes mastery by ensuring you have a minimum of two nodes. Each node should meet these hardware specs:

  • Minimum of 2GB of RAM

  • Minimum of 2 CPU cores
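The checks above can be scripted. Here is a minimal sketch, assuming a Linux node with /proc mounted, that reads the core count and total memory and compares them against the minimums:

```shell
# Compare this node's CPU and RAM against the 2-core / 2 GB minimums above.
# Assumes a Linux node with /proc mounted; run on each candidate node.
cpus=$(nproc)
mem_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
echo "CPU cores: ${cpus}, RAM: ${mem_mb} MB"
if [ "${cpus}" -ge 2 ] && [ "${mem_mb}" -ge 2000 ]; then
    echo "Node meets the minimum requirements"
else
    echo "Node is below the minimum requirements"
fi
```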

Software Requirements:

Your chosen nodes should be running a Linux distribution that plays well with Kubernetes. For our journey, we recommend Ubuntu 20.04. Additionally, you'll need to equip each node with Docker.

Network Requirements:

Ensure your nodes can communicate seamlessly with each other over a network. A reliable network connection between the nodes is crucial for a successful Kubernetes journey.

Installation of Kubeadm and Calico: Step By Step

Step 1: Setting the Stage with Hostnames

Let's begin by configuring hostnames to ensure smooth communication between nodes. Open the /etc/hosts file on each node using the following command:

sudo nano /etc/hosts

This opens the file in the nano text editor. Append the following lines to the end of the file, replacing the placeholder addresses with your nodes' actual IP addresses:

<master-node-ip>     master-node
<worker-node1-ip>    worker-node1
<worker-node2-ip>    worker-node2

Save and close the file.
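To confirm the new entries resolve, you can query the local resolver on each node. A small check, using the hostnames added above (adjust if yours differ); it warns rather than fails so it can run even before the entries are in place:

```shell
# Verify that each hostname added to /etc/hosts resolves locally.
for host in master-node worker-node1 worker-node2; do
    getent hosts "${host}" || echo "warning: ${host} not yet in /etc/hosts"
done
```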

Step 2: Disabling Swap Memory

Kubernetes requires swap to be disabled so the kubelet can manage memory predictably. If you're running your Kubernetes cluster on a cloud VM service whose images already ship with swap disabled, you can skip this step. Otherwise, run the following on every node:

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
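To confirm swap is actually off after these commands, /proc/swaps should list no active swap areas. A quick read-only check:

```shell
# /proc/swaps has a header line; any additional lines are active swap areas.
if [ -z "$(awk 'NR > 1' /proc/swaps)" ]; then
    echo "swap is disabled"
else
    echo "swap is still active"
fi
```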

Step 3: Crafting the IPv4 Bridge on All Nodes

To establish the IPv4 bridge across all nodes, execute the following commands on each node:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Sysctl parameters required by the setup; these persist through reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl parameters without reboot
sudo sysctl --system

These commands set up the necessary network bridge and kernel parameters, orchestrating Kubernetes in harmony.
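You can confirm the forwarding setting took effect by reading it back from procfs; it should print 1 once `sysctl --system` has applied the file:

```shell
# Read IPv4 forwarding straight from procfs; expect 1 after the
# /etc/sysctl.d/k8s.conf settings have been applied.
cat /proc/sys/net/ipv4/ip_forward
```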

Step 4: Updating the Repository and Securing Required Packages

Prepare your nodes by updating the package repository and securing essential packages with these commands:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Step 5: Grabbing the Kubernetes Package Signing Key

Download the public signing key for the Kubernetes package repository with these commands (the keyrings directory may not exist yet on Ubuntu 20.04, so create it first):

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Step 6: Adding the Kubernetes APT Repository

Add the Kubernetes APT repository with this command:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Step 7: Revving Up the Repo & Deploying kubelet, kubeadm, kubectl, and docker.io

Prepare to deploy Kubernetes and Docker on all nodes with these commands:

sudo apt-get update
sudo apt install -y kubelet=1.28.1-1.1 kubeadm=1.28.1-1.1 kubectl=1.28.1-1.1 docker.io
sudo apt-mark hold kubelet kubeadm kubectl docker.io

This crafts the right environment and ensures that Kubernetes packages remain stable.

Step 8: Orchestrating Containerd Configurations on All Nodes

Create a containerd directory and configure container runtime settings with these commands:

sudo mkdir -p /etc/containerd
sudo sh -c "containerd config default > /etc/containerd/config.toml"

Now, it's time for some fine-tuning. Edit the config.toml file and locate the entry setting SystemdCgroup to false. Switch it to true, save the file, and restart containerd with these commands:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd.service
sudo systemctl restart kubelet.service
sudo systemctl enable kubelet.service

Note that these commands must be executed as root on all nodes in your Kubernetes orchestra.
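To double-check the cgroup driver switch, you can grep the config file; the check below prints the SystemdCgroup line, or a note if containerd has not been configured on this machine yet:

```shell
# Confirm the cgroup driver switch made by the sed command above.
grep -n 'SystemdCgroup' /etc/containerd/config.toml 2>/dev/null \
    || echo "no containerd config found at /etc/containerd/config.toml"
```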

Step 9: Initiating Kubeadm Magic

On the master node, invoke the magic wand to initialize the Kubernetes control plane:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Here 10.244.0.0/16 is the pod network CIDR; whichever range you choose must match the CIDR you configure for Calico in Step 11.

Step 10: Copying Configuration to User's Realm

On the master node, transfer the configuration to the user's realm with these commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

This sets the stage for user interaction with the Kubernetes cluster.

Step 11: Crafting the Calico Network

After the Kubeadm initiation, it's time to weave the Calico network for seamless pod-to-pod communication. Execute these commands on the master node:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
sed -i 's/cidr: 192\.168\.0\.0\/16/cidr: 10.244.0.0\/16/g' custom-resources.yaml
kubectl create -f custom-resources.yaml

Step 12: Welcoming Worker Nodes

With the master node all set, it's time to welcome the worker nodes into the Kubernetes party. During the Kubeadm initialization on the master node, you'll receive a join command containing a token and a CA certificate hash. Run it on each worker node, substituting the placeholder values below with the ones printed by your own kubeadm init output:

sudo kubeadm join <master-node-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>