Say good-bye to HAProxy and Keepalived with kube-vip on your HA K8s control plane

Kohei Ota
4 min read · Feb 10, 2021


tl;dr

kube-vip provides Kubernetes-native HA load balancing for your control plane nodes, so you no longer need to set up HAProxy and Keepalived externally to build a highly available cluster.

Motivation

One day, I was looking at the Tanzu release notes and found this update.

I wondered what it was and hadn’t had much time to play with this new stack, but I finally got it working on a self-hosted kubeadm cluster with 3 control plane nodes, so here’s what I did.

kube-vip?

kube-vip is an open source project that provides high availability and load balancing both inside and outside a Kubernetes cluster. This time, however, we use it only as a control plane load balancer on top of Kubernetes.

HA cluster setup overview with HAProxy and kube-vip

Previously, when you created a Kubernetes cluster in a non-cloud environment, you had to prepare a hardware or software load balancer to build a multi-control-plane cluster, and HAProxy + Keepalived was the most likely choice for an open-source-based solution.

Typically you create two load balancer VMs and assign a VIP to them for redundancy. The VIP serves as the load balancer address, and whichever VM currently holds it redirects the traffic to one of the Kubernetes control plane nodes in the backend.
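
For reference, that traditional setup boils down to configs roughly like this minimal sketch, assuming the same addresses used later in this article (control plane nodes at 192.168.0.201–203 and a VIP of 192.168.0.100):

# haproxy.cfg (sketch): TCP-forward the API server port to every control plane node
frontend kubernetes-api
    bind *:6443
    mode tcp
    default_backend kubernetes-api-backend
backend kubernetes-api-backend
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master-0 192.168.0.201:6443 check
    server k8s-master-1 192.168.0.202:6443 check
    server k8s-master-2 192.168.0.203:6443 check

# keepalived.conf (sketch): float the VIP 192.168.0.100 between the two HAProxy VMs
vrrp_instance K8S_API {
    state MASTER          # BACKUP on the second VM
    interface eth0
    virtual_router_id 51
    priority 100          # use a lower priority on the second VM
    virtual_ipaddress {
        192.168.0.100
    }
}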

This requires two additional non-Kubernetes VMs, and they are not something you can manage natively with Kubernetes, so the maintenance cost increases. Now, let’s see what happens if you use kube-vip.

kube-vip runs on the control plane nodes as static Pods (you can also run it as a DaemonSet). These Pods use ARP and recognise the other hosts via /etc/hosts on each node, so you need to set each node’s IP address in your hosts file. You can choose either BGP or ARP to set up the load balancer, which is similar to MetalLB. This time I don’t have a BGP service and just wanted to try it out quickly, so I used ARP with static Pods.
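
Each node’s hosts file would then contain entries like this minimal sketch (the hostnames and addresses match the node list shown at the end of this article):

# /etc/hosts on every node (sketch)
192.168.0.201 k8s-master-0
192.168.0.202 k8s-master-1
192.168.0.203 k8s-master-2
192.168.0.204 k8s-worker-0
192.168.0.205 k8s-worker-1
192.168.0.206 k8s-worker-2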

Prerequisites

  • 3 control plane nodes
  • 3 worker nodes

Install the dependencies, including kubeadm, kubelet, kubectl, and a container runtime, on your host OSes.

This time I used containerd, so let’s go with that procedure.
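
As a reference, a minimal sketch for Ubuntu 20.04 might look like the following (run as root; the package repository is the one from the kubeadm install docs, so adjust the repository and versions for your environment):

# kubeadm requires swap to be off
swapoff -a
# Install containerd and the Kubernetes packages
apt-get update && apt-get install -y containerd apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
# Generate a default containerd config; set SystemdCgroup = true under the runc
# options to match the kubelet's cgroupDriver: "systemd" used below
mkdir -p /etc/containerd && containerd config default > /etc/containerd/config.toml
systemctl restart containerd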

Get the kube-vip Docker image and set up the static Pod YAML in /etc/kubernetes/manifests so that Kubernetes automatically deploys the kube-vip Pod on each control plane node.

export VIP=192.168.0.100
export INTERFACE=eth0
ctr image pull docker.io/plndr/kube-vip:0.3.1
ctr run --rm --net-host docker.io/plndr/kube-vip:0.3.1 vip \
/kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--services \
--arp \
--leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml

Now, set up kubeadm however you like. This is my sample setup.

cat > ~/init_kubelet.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- token: "9a08jv.c0izixklcxtmnze7"
  description: "kubeadm bootstrap token"
  ttl: "24h"
nodeRegistration:
  criSocket: "/var/run/containerd/containerd.sock"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "192.168.0.100:6443"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
protectKernelDefaults: true
EOF
kubeadm init --config init_kubelet.yaml --upload-certs
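
Once kubeadm init finishes, you can do a quick sanity check that kube-vip came up as a static Pod and that the VIP is answering (a sketch assuming the VIP and kubeconfig path from this setup):

# kube-vip should be running as a static Pod on the first control plane node
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl -n kube-system get pods | grep kube-vip

# The VIP should now respond, and the API server should be reachable through it
ping -c 1 192.168.0.100
curl -k https://192.168.0.100:6443/version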

Install a CNI. This time I used Cilium, but anything is fine.

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.9.4 \
--namespace kube-system
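
You can confirm the CNI is up before joining the remaining nodes, for example (assuming Cilium’s default k8s-app=cilium label):

kubectl -n kube-system get pods -l k8s-app=cilium -o wide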

After the first control plane node is ready, have the other nodes join your cluster. For the other control plane nodes, run something like

kubeadm join 192.168.0.100:6443 --token hash.hash \
--discovery-token-ca-cert-hash sha256:hash \
--control-plane --certificate-key key

Then they will be ready. For worker nodes, run something like

kubeadm join 192.168.0.100:6443 --token hash.hash \
--discovery-token-ca-cert-hash sha256:hash

Then your result should look like this:

# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master-0 Ready control-plane,master 121m v1.20.2 192.168.0.201 <none> Ubuntu 20.04.2 LTS 5.4.0-45-generic containerd://1.4.3
k8s-master-1 Ready control-plane,master 114m v1.20.2 192.168.0.202 <none> Ubuntu 20.04.2 LTS 5.4.0-45-generic containerd://1.4.3
k8s-master-2 Ready control-plane,master 113m v1.20.2 192.168.0.203 <none> Ubuntu 20.04.2 LTS 5.4.0-45-generic containerd://1.4.3
k8s-worker-0 Ready <none> 114m v1.20.2 192.168.0.204 <none> Ubuntu 20.04.2 LTS 5.4.0-45-generic containerd://1.4.3
k8s-worker-1 Ready <none> 114m v1.20.2 192.168.0.205 <none> Ubuntu 20.04.2 LTS 5.4.0-45-generic containerd://1.4.3
k8s-worker-2 Ready <none> 112m v1.20.2 192.168.0.206 <none> Ubuntu 20.04.2 LTS 5.4.0-45-generic containerd://1.4.3

and yet your control plane endpoint is 192.168.0.100, without any additional load balancer nodes.
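
You can confirm that the cluster endpoint really is the kube-vip VIP, for example:

# The admin kubeconfig points at the VIP, not at any single control plane node
grep server /etc/kubernetes/admin.conf
#   server: https://192.168.0.100:6443

kubectl cluster-info
# Kubernetes control plane is running at https://192.168.0.100:6443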


Kohei Ota

Architect at Hewlett Packard Enterprise, CNCF Ambassador. Opinions are my own.