Setting up a K8s cluster with kubeadm and integrating with Addons
Most applications today run on Kubernetes, which is why it is a major point of discussion for Platform, Site Reliability, and DevOps engineers: much of their work cycle revolves around it.
In this article I will explain how to set up a cluster and get your applications running on it.
What is a Kubernetes cluster?
A Kubernetes cluster is a set of node machines for running containerized applications. A cluster contains a control plane, which hosts important components such as the api-server, etcd, kube-scheduler, and kube-controller-manager. The control plane is also referred to as the master node. I explained these components in my earlier article, About Kubernetes architecture.
The cluster also consists of worker nodes, which run your applications. These nodes are hosted on virtual machines or servers just like the master, and are joined to the master (we will see how that is done later in this article).
Creating a cluster with Kubeadm
With kubeadm, you can create a viable Kubernetes cluster that conforms to best practices. Whether you're deploying into the cloud or on-premises, you can integrate kubeadm into provisioning systems such as Ansible or Terraform.
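As an illustration of that integration, a provisioning tool can wrap the same init command you would otherwise run by hand. The Ansible task below is only a sketch; the `control_plane` group name and the `node_ip` variable are assumptions for the example, not part of this article:

```yaml
# Sketch: driving kubeadm from Ansible (group and variable names are assumptions)
- name: Initialize the Kubernetes control plane
  hosts: control_plane
  become: true
  tasks:
    - name: Run kubeadm init
      command: >
        kubeadm init
        --apiserver-advertise-address={{ node_ip }}
        --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf   # skip if already initialized
```

The `creates` argument keeps the task idempotent, so re-running the playbook won't re-initialize an existing cluster.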
Requirements for running a cluster
The following are required for the setup:
- One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
- 2 GiB or more of RAM per machine — any less leaves little room for your apps.
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all machines in the cluster. You can use either a public or a private network.
- Install a container runtime and kubeadm on all the hosts.
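Before installing anything, you can quickly sanity-check a machine against the requirements above. This is only a sketch using standard Linux interfaces (`nproc`, `/proc`); the 2-CPU and 2-GiB thresholds are the ones listed in this article:

```shell
#!/bin/sh
# Preflight sketch: compare this machine against the kubeadm requirements above.
cpus=$(nproc)
mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

echo "CPUs: $cpus (control-plane nodes need at least 2)"
echo "RAM:  $((mem_kib / 1024)) MiB (need at least 2048 MiB)"

# kubeadm also expects swap to be disabled on the hosts
if [ "$(wc -l < /proc/swaps)" -gt 1 ]; then
  echo "WARNING: swap is enabled; disable it with 'sudo swapoff -a'"
fi
```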
For installing kubeadm, kubelet, and kubectl, see my article Upgrading a Kubernetes cluster (EKS inclusive).
At this point, you have kubeadm, kubelet, and kubectl installed on your master node and worker nodes. Now you initialize the Kubernetes control plane, which will manage the worker nodes and the pods where your application will run. Run this command to have kubeadm initialize your control plane:
sudo kubeadm init --apiserver-advertise-address=<your ip> --apiserver-cert-extra-sans=<your ip> --pod-network-cidr=<pod network CIDR> --node-name primary
And it gives you this output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
The output walks you through copying your Kubernetes config to your home directory, setting up a pod network, and joining your nodes to the master.
kubeadm init has configured the control-plane node based on the specified flags. Below, we discuss each of them.
- --apiserver-advertise-address: The IP address the Kubernetes API server will advertise it is listening on. If not specified, the default network interface is used.
- --apiserver-cert-extra-sans: This flag is optional, and is used to provide additional Subject Alternative Names (SANs) for the TLS certificate used by the API server. It's worth noting that the values can be both IP addresses and DNS names.
- --pod-network-cidr: One of the most important flags, since it indicates the range of IP addresses for the pod network. This allows the control-plane node to automatically assign a CIDR to each node.
- --node-name: As the name indicates, the name of this node.
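These flags can also be expressed declaratively: kubeadm accepts a configuration file via kubeadm init --config <file>. A sketch equivalent to the flags above might look like this for recent kubeadm versions (the IP address and pod CIDR are example values, not from this article):

```yaml
# kubeadm-config.yaml (example values; adjust for your environment)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.10        # --apiserver-advertise-address
nodeRegistration:
  name: primary                      # --node-name
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs:
    - 10.0.0.10                      # --apiserver-cert-extra-sans
networking:
  podSubnet: 10.244.0.0/16           # --pod-network-cidr
```

A config file is easier to review and version-control than a long command line.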
To start using your cluster, you need to copy the Kubernetes configuration from the Kubernetes folder to your home environment. Run these commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The first command creates a folder for your Kubernetes configuration in your home directory, the second copies the admin kubeconfig into that folder, and the third changes ownership of the copied file to your user so kubectl can read it without sudo.
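Alternatively, if you are operating as root, you can point kubectl at the admin kubeconfig directly through the KUBECONFIG environment variable instead of copying it:

```shell
# Root-only alternative to copying the config: export KUBECONFIG for this shell.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "kubectl will now read: $KUBECONFIG"
```

Note that this only lasts for the current shell session, while the copy-and-chown approach is permanent.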
Once this is done, you should be able to run kubectl commands.
Joining your nodes to master/control plane
When you ran kubeadm init on your master, the output included a command to join your nodes to the master, which looked like this:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
The catch is that the token created when you initialized the cluster may have expired and will no longer work when you try to join nodes, so run this command on your master node:
kubeadm token create --print-join-command
It gives the same kind of output you got when the cluster was created, but with a fresh token (tokens expire after 24 hours by default, so use it promptly):
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Copy the command and run it on your worker nodes; that should join them to your control plane. If a node fails to join, it is usually caused by either insufficient disk space on the machine the node runs on or leftover configuration from a previous join. So make sure there is enough space on the server/virtual machine before joining; for the configuration issue, run kubeadm reset on the node and try joining the master/control plane again.
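If you are joining many workers, you can script the join by pulling the token and CA hash out of the generated command. This is just a sketch; the join command string below is a placeholder sample, not output from a real cluster:

```shell
# Sketch: extract the token and discovery hash from a saved join command.
# The string below is a placeholder sample, not real cluster output.
join_cmd='kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef'

token=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
ca_hash=$(printf '%s\n' "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')

echo "token:   $token"
echo "ca hash: $ca_hash"
```

The extracted values can then be fed to kubeadm join on each worker by your provisioning tooling.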
Installing the Weave Net CNI plugin
Installing a CNI plugin is really important for your cluster; without it, your pods won't be able to communicate with each other.
Before setting up Weave Net, ensure the following ports are not blocked by your firewall: TCP 6783 and UDP 6783/6784.
Weave Net can be installed with a single command:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
After a few seconds, a Weave Net pod should be running on each Node and any further pods you create will be automatically attached to the Weave network.
To check if it’s running:
kubectl get pods -n kube-system -l name=weave-net
And it outputs:
NAME READY STATUS RESTARTS AGE
weave-net-1jkl6 2/2 Running 0 1d
weave-net-bskbv 2/2 Running 0 1d
weave-net-m4x1b 2/2 Running 0 1d
In some instances the pods may not start running immediately; don't worry, give them a moment and they will all come up.
To see the node and IP each pod is running on, use this command:
kubectl get pods -n kube-system -l name=weave-net -o wide
It outputs:
NAME READY STATUS RESTARTS AGE IP NODE
weave-net-1jkl6 2/2 Running 0 1d 10.128.0.4 host-0
weave-net-bskbv 2/2 Running 0 1d 10.128.0.5 host-1
weave-net-m4x1b 2/2 Running 0 1d 10.128.0.6 host-2
Installing the Weave Net CNI plugin on EKS
EKS by default installs the amazon-vpc-cni-k8s CNI. Follow the steps below to use Weave Net as the CNI instead:
- Create the EKS cluster in any of the prescribed ways.
- Delete the amazon-vpc-cni-k8s daemonset by running the kubectl delete ds aws-node -n kube-system command.
- Delete /etc/cni/net.d/10-aws.conflist on each of the nodes.
- Edit the instance security group to allow the TCP 6783 and UDP 6783/6784 ports.
- Flush the iptables nat, mangle, and filter tables to clear any iptables configuration done by amazon-vpc-cni-k8s.
- Restart the kube-proxy pods to reconfigure iptables.
- Apply the weave-net daemonset by following the installation steps above.
- Delete existing pods so they get recreated in Weave's pod CIDR address space.
Please note that while pods can connect to the Kubernetes API server for your cluster, the API server will not be able to connect to the pods, as the API server nodes are not connected to Weave Net (they run on a network managed by EKS).
After all this is done, you should be able to see your nodes, and the pods and deployments you have created, with the following commands:
kubectl get pods
kubectl get nodes
kubectl get deployment
Conclusion
This article walked through setting up your cluster with kubeadm and ensuring your pods can network/communicate with each other using the Weave Net CNI plugin.