Config maps and Secrets
- Config maps are non-sensitive key-value pairs that can be loaded into a container; Secrets are sensitive key-value pairs whose values should be base64 encoded in manifest files
- Config map and secret values are loaded into containers in two possible ways
  - as environment variables:
    - They are loaded during startup of the Pod, and if the values change (config map or secret), the Pod needs to be restarted to pick up the new values
  - as a mount:
    - The values are mounted on a filesystem and updates reach the container within a few seconds; the application reads the values from files (dynamic)
Let's create a config map and load it into a Pod
Environmental variable
- Refer Here for the changes done
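The linked change is not shown inline, so here is a rough sketch of the same idea, assuming a hypothetical config map named app-config with made-up keys:
bash
# create a config map from literal key-value pairs (names are illustrative)
kubectl create configmap app-config \
  --from-literal=APP_MODE=dev \
  --from-literal=APP_PORT=8080

# run a Pod that loads every key of the config map as environment variables
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo
spec:
  containers:
  - name: main
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config
EOF

# verify the variables inside the container
kubectl exec cm-env-demo -- env | grep APP_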
Mount
Refer Here for the changes
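Again the exact change is behind the link; a minimal sketch of mounting the same hypothetical app-config config map as files:
bash
# mount the hypothetical app-config config map as files under /etc/app-config
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mount-demo
spec:
  containers:
  - name: main
    image: nginx
    volumeMounts:
    - name: config-volume
      mountPath: /etc/app-config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
EOF

# each key shows up as a file; edits to the config map reach these files after a short delay
kubectl exec cm-mount-demo -- ls /etc/app-config
kubectl exec cm-mount-demo -- cat /etc/app-config/APP_MODE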
Let's create secrets
- Refer Here for the changes
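As a sketch (the names and credentials below are invented for illustration), a Secret can be written with base64-encoded values in the manifest, or created imperatively so kubectl does the encoding:
bash
# values inside a Secret manifest must be base64 encoded
echo -n 'admin' | base64     # YWRtaW4=
echo -n 's3cr3t' | base64    # czNjcjN0

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=
  password: czNjcjN0
EOF

# or let kubectl do the encoding for you
kubectl create secret generic db-credentials-2 \
  --from-literal=username=admin \
  --from-literal=password=s3cr3t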
Upgrading k8s cluster on-prem
To perform a kubeadm upgrade while ensuring the safety of your cluster data, it is crucial to back up the etcd datastore first. Below are the steps to back up etcd and then upgrade your Kubernetes cluster.
Steps for etcd Backup and Kubeadm Upgrade
1. Backup etcd
- Log in to the Control Plane Node: Access your Kubernetes control plane node where etcd is running.
- Install etcdctl (if not already installed):
bash
sudo apt install etcd-client
- Take a Snapshot of etcd: Use the following command to create a snapshot. Ensure you have the necessary certificates:
bash
date_str=`date +'%m-%d-%Y'`
ETCDCTL_API=3 etcdctl snapshot save "snapshot-${date_str}.db" \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
- Verify the Snapshot: Check that the snapshot was created successfully:
bash
ETCDCTL_API=3 etcdctl snapshot status --write-out=table "snapshot-${date_str}.db"
2. Prepare for the Upgrade
- Check Current Versions: Verify the current version of kubeadm:
bash
kubeadm version -o json
- Identify Upgrade Version: Determine the target version for upgrading:
bash
sudo apt update
sudo apt-cache madison kubeadm | tac
3. Upgrade Control Plane Node
- Upgrade Kubeadm: Unhold kubeadm and install the desired version (replace <target-version> with the version identified above):
bash
sudo apt-mark unhold kubeadm && \
sudo apt-get update && \
sudo apt-get install -y kubeadm=<target-version> && \
sudo apt-mark hold kubeadm
- Plan the Upgrade: Run a plan to check if your cluster can be upgraded:
bash
sudo kubeadm upgrade plan
- Drain the Control Plane Node: Evict workloads from the control plane node:
bash
kubectl drain <control-plane-node> --ignore-daemonsets --delete-emptydir-data
- Apply the Upgrade: Execute the upgrade command:
bash
sudo kubeadm upgrade apply <target-version>
- Upgrade Kubelet and Kubectl: Unhold and upgrade both components:
bash
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && \
sudo apt-get install -y kubelet=<target-version> kubectl=<target-version> && \
sudo apt-mark hold kubelet kubectl
- Restart Kubelet: Reload systemd and restart the kubelet service:
bash
sudo systemctl daemon-reload
sudo systemctl restart kubelet
- Uncordon the Node: Make the control plane node schedulable again:
bash
kubectl uncordon <control-plane-node>
4. Upgrade Worker Nodes
For each worker node, follow these steps:
- Unhold and Install Kubeadm:
bash
sudo apt-mark unhold kubeadm && \
sudo apt-get update && \
sudo apt-get install -y kubeadm=<target-version> && \
sudo apt-mark hold kubeadm
- Drain Worker Node: Evict workloads from each worker node:
bash
kubectl drain <worker-node> --ignore-daemonsets --delete-emptydir-data
- Upgrade Kubelet on Worker Node:
Run this command on each worker node to upgrade its kubelet configuration:
bash
sudo kubeadm upgrade node
- Upgrade Kubelet and Kubectl:
Similar to the control plane, upgrade both components on each worker node:
bash
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && \
sudo apt-get install -y kubelet=<target-version> kubectl=<target-version> && \
sudo apt-mark hold kubelet kubectl && \
sudo systemctl daemon-reload && \
sudo systemctl restart kubelet
- Uncordon Worker Node: Make each worker node schedulable again:
bash
kubectl uncordon <worker-node>
Final Verification
After completing upgrades on all nodes, verify that all nodes are in a Ready state and check their versions with:
bash
kubectl get nodes
This structured approach ensures that your Kubernetes cluster is safely backed up before performing any upgrades, minimizing risks associated with potential failures during the upgrade process.
Drain, Cordon and uncordon nodes
- Cordon (take a diversion): stops further pods from being scheduled on the node
- Drain (waiting for the road to free up): evicts the pods running on this node so they get scheduled onto other nodes
- Uncordon (remove the diversion): marks the node ready to serve pods again
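A minimal sketch of all three operations on a hypothetical node named node-1:
bash
# stop new pods from being scheduled on the node
kubectl cordon node-1

# evict the pods already running on it (DaemonSet pods are skipped, emptyDir data is deleted)
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# after maintenance, allow scheduling again
kubectl uncordon node-1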
Daemonset
- Refer Here for official docs
- Refer Here for sample yaml
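Since the sample yaml sits behind the link, here is a rough sketch of a minimal DaemonSet (the name, image, and command are illustrative) that runs one pod on every node:
bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-logger
spec:
  selector:
    matchLabels:
      app: node-logger
  template:
    metadata:
      labels:
        app: node-logger
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "while true; do date; sleep 60; done"]
EOF

# one pod should be scheduled per node
kubectl get pods -l app=node-logger -o wide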
Namespace
- Refer Here for official docs
- Not all resources are part of namespaces
- In k8s we have two types of scopes
  - cluster-wide scope: some resources are cluster-scoped; these resources will have the namespaced value as false (refer to the screenshot below)
  - namespace scope: some resources belong to a namespace; these resources will have namespaced as true (refer to the screenshot below)

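The same scope information can be listed from the CLI, for example:
bash
# cluster-scoped resources (NAMESPACED column shows false)
kubectl api-resources --namespaced=false

# namespaced resources (NAMESPACED column shows true)
kubectl api-resources --namespaced=true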
- Execute kubectl get ns to view all the namespaces in the k8s cluster.
- Namespaces with the prefix kube- are generally managed by the k8s control plane.
- The namespace default is the default namespace in which k8s objects are created.
- kubectl is by default configured to use the default namespace, which can be changed with kubectl config set-context --current --namespace=<your-ns>
- The namespace has implications for DNS: Services are resolvable inside the cluster as <service>.<namespace>.svc.cluster.local
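A quick sketch tying namespaces and DNS together, using a hypothetical namespace called team-a:
bash
# create a namespace and point the current kubectl context at it
kubectl create namespace team-a
kubectl config set-context --current --namespace=team-a

# objects created now land in team-a by default
kubectl run web --image=nginx
kubectl expose pod web --port=80

# inside the cluster the Service gets a namespace-qualified DNS name:
#   web.team-a.svc.cluster.local
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup web.team-a.svc.cluster.local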
