Certified Kubernetes Administrator
Cluster Architecture, Installation and Configuration
- We can set up an HA cluster using two topologies
- Stacked etcd
- External etcd
- Stacked etcd:
- The distributed data store (etcd) runs on the control plane nodes managed by kubeadm
- Each control plane node runs kube-apiserver, kube-scheduler, kube-controller-manager and an etcd member

- External etcd:
- In this HA cluster, the data is distributed and stored on a dedicated external etcd cluster
- Each control plane node runs kube-apiserver, kube-scheduler and kube-controller-manager
- This topology offers better availability, but it needs a minimum of six nodes: three for the control plane and three for etcd
Lab setup:
- We will create 3 master nodes to form the control plane and 3 worker nodes
- The load balancer will be a software load balancer (HAProxy)
Steps
- Create 3 master nodes (master-node-1, master-node-2, master-node-3)
- Log in to master-node-1
- Ensure master-node-1 has a copy of the private key needed to SSH into the other nodes
- Install tmux
sudo apt update
sudo apt install tmux -y
# rename window
ctrl + b ,
# create a vertical split
ctrl + b %
# create a horizontal split
ctrl + b "
# To move across panes
ctrl + b (arrows)
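Since the same install steps must run on all three masters, tmux can broadcast input to every pane via its synchronize-panes window option. A minimal sketch that prints the relevant commands, assuming a hypothetical session name of cka-lab:

```shell
# Print the tmux commands for a 3-pane window with broadcast input.
# "cka-lab" is a hypothetical session name; synchronize-panes mirrors
# keystrokes to every pane so each master receives the same commands.
for cmd in \
  "new-session -d -s cka-lab" \
  "split-window -h -t cka-lab" \
  "split-window -v -t cka-lab" \
  "set-window-option -t cka-lab synchronize-panes on" \
  "attach -t cka-lab"
do
  echo "tmux $cmd"
done
```

Toggle the broadcast back off with the same option set to `off`, or run it from inside tmux at the `ctrl + b :` prompt.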
| Role | Name Tag | FQDN | IP |
| --- | --- | --- | --- |
| Master | master-node-1 | ip-172-31-48-55.ec2.internal | 172.31.48.55 |
| Master | master-node-2 | ip-172-31-50-145.ec2.internal | 172.31.50.145 |
| Master | master-node-3 | ip-172-31-55-80.ec2.internal | 172.31.55.80 |
| Load Balancer | haproxy | ip-172-31-42-89.ec2.internal | 172.31.42.89 |
- On the load balancer node, install HAProxy
sudo apt update
sudo apt install haproxy -y
- Since the kube-apiserver listens on port 6443, configure HAProxy to forward that port to the masters
- Take backup of haproxy config
/etc/haproxy/haproxy.cfg
sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg-orig
- Now replace the contents of haproxy.cfg with the following
frontend kubernetes-frontend
    bind 172.31.42.89:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server ip-172-31-48-55.ec2.internal 172.31.48.55:6443 check fall 3 rise 2
    server ip-172-31-50-145.ec2.internal 172.31.50.145:6443 check fall 3 rise 2
    server ip-172-31-55-80.ec2.internal 172.31.55.80:6443 check fall 3 rise 2
- Reload systemd and restart HAProxy
sudo systemctl daemon-reload
sudo systemctl restart haproxy.service
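Before moving on, it is worth confirming the frontend is actually accepting connections on 6443. A minimal sketch using bash's /dev/tcp, where `probe` is a hypothetical helper and the IP is the load balancer from the table above:

```shell
#!/usr/bin/env bash
# Probe a TCP port. Until the first master is initialized the backends are
# down, but HAProxy itself should still accept the TCP connection.
probe() {
  host="$1"; port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}
# On any node:
#   probe 172.31.42.89 6443
```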
- Using tmux panes execute the following commands on all master nodes to install docker, kubelet, kubectl and kubeadm
cd /tmp
curl -fsSL https://get.docker.com -o install-docker.sh
sh install-docker.sh
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd_0.3.4.3-0.ubuntu-jammy_amd64.deb
sudo dpkg -i cri-dockerd_0.3.4.3-0.ubuntu-jammy_amd64.deb
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
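The hold matters: an unattended apt upgrade of kubelet/kubeadm/kubectl can break the cluster. A small sketch that checks all three packages are pinned, where `check_holds` is a hypothetical helper reading `apt-mark showhold` output from a file:

```shell
# check_holds reads a file containing `apt-mark showhold` output and
# reports which of the three kubernetes packages, if any, is not pinned.
check_holds() {
  missing=""
  for pkg in kubelet kubeadm kubectl; do
    grep -qx "$pkg" "$1" || missing="$missing $pkg"
  done
  if [ -z "$missing" ]; then
    echo "all holds in place"
  else
    echo "missing holds:$missing"
  fi
}
# On a master node:
#   apt-mark showhold > /tmp/holds && check_holds /tmp/holds
```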
- kubeadm accepts the following named arguments
- --control-plane-endpoint: sets the shared endpoint (here, the load balancer) for all control plane nodes
- --apiserver-advertise-address: sets the advertise address for this particular control plane node's API server
- --cri-socket: sets the CRI socket of the container runtime
- --pod-network-cidr: sets the pod network range
- Log in to master-node-1 and initialize the cluster
sudo kubeadm init --control-plane-endpoint="172.31.42.89:6443" \
--upload-certs \
--apiserver-advertise-address=172.31.48.55 \
--pod-network-cidr=192.168.0.0/16 \
--cri-socket="unix:///var/run/cri-dockerd.sock"
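When the init succeeds, kubeadm prints instructions for copying the admin kubeconfig so kubectl works. A sketch of that step, with `install_kubeconfig` as a hypothetical helper (the real source on the node is /etc/kubernetes/admin.conf, and the copy normally needs sudo plus a chown to your user):

```shell
# install_kubeconfig copies an admin kubeconfig into a user's kube
# directory so kubectl can talk to the new cluster.
install_kubeconfig() {
  src="$1"; dest_dir="$2"
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/config"
  chmod 600 "$dest_dir/config"   # the kubeconfig holds client credentials
  echo "kubeconfig installed at $dest_dir/config"
}
# On master-node-1:
#   install_kubeconfig /etc/kubernetes/admin.conf "$HOME/.kube"
```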
- Now execute the join commands from the kubeadm init output on the other master and worker nodes as shown in class, and install any suitable CNI plugin (the 192.168.0.0/16 pod CIDR matches Calico's default)
- Verify that all nodes joined: kubectl get nodes
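The real join commands come verbatim from the kubeadm init output; the sketch below only assembles their general shape, with `<token>`, `<hash>` and `<certificate-key>` as placeholders you must replace with values from that output:

```shell
# Placeholders: the real values are printed by `kubeadm init` (the
# certificate key comes from the --upload-certs flag).
LB_ENDPOINT="172.31.42.89:6443"
TOKEN="<token>"
CA_HASH="sha256:<hash>"
CERT_KEY="<certificate-key>"
CRI="unix:///var/run/cri-dockerd.sock"

# Control plane join, for master-node-2 and master-node-3:
echo "sudo kubeadm join $LB_ENDPOINT --token $TOKEN --discovery-token-ca-cert-hash $CA_HASH --control-plane --certificate-key $CERT_KEY --cri-socket=$CRI"

# Worker join, for each worker node:
echo "sudo kubeadm join $LB_ENDPOINT --token $TOKEN --discovery-token-ca-cert-hash $CA_HASH --cri-socket=$CRI"
```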
