Scheduling
Affinity
- We have two types of affinity (and anti-affinity)
  - node
  - pod
When we use pod affinity we can co-locate pods based on a topology key:
- kubernetes.io/hostname: Represents the node itself (host level). Allows you to target individual nodes.
- topology.kubernetes.io/zone: Corresponds to the availability zone (often used in cloud environments).
- topology.kubernetes.io/region: Represents the region the node is scheduled in.
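To see which of these labels your nodes actually carry (the zone/region labels are typically populated by the cloud provider), you can list them with the `-L` flag:

```sh
# Show each node along with its zone and region labels (blank if unset)
kubectl get nodes -L topology.kubernetes.io/zone -L topology.kubernetes.io/region
```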
Let's try to co-locate pods
- Create pod1 with the following spec:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: pod1
spec:
  containers:
    - name: web
      image: nginx
      resources:
        requests:
          cpu: 100m
          memory: 10M
        limits:
          cpu: 500m
          memory: 128M
```
- Now let's write a spec to co-locate pod2 on the same node as pod1 (podAffinity & topologyKey):
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    app: pod2
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: pod1
  containers:
    - name: web
      image: nginx
      resources:
        requests:
          cpu: 100m
          memory: 10M
        limits:
          cpu: 500m
          memory: 128M
```
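After applying both manifests (filenames are assumed here), pod2 should land on the same node as pod1; the NODE column confirms it:

```sh
kubectl apply -f pod1.yaml -f pod2.yaml
kubectl get pods -o wide   # pod1 and pod2 should show the same NODE
```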
- Let's create pod3, which should be scheduled on a different node than pod1 (podAntiAffinity):
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod3
  labels:
    app: pod3
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: pod1
  containers:
    - name: web
      image: nginx
      resources:
        requests:
          cpu: 100m
          memory: 10M
        limits:
          cpu: 500m
          memory: 128M
```

Taints and Tolerations
- Refer to the official Kubernetes docs on taints and tolerations
Taint Effects
- NoSchedule: new pods without a matching toleration are not scheduled on the node
- PreferNoSchedule: the scheduler tries to avoid the node, but this is not guaranteed
- NoExecute: new pods are not scheduled, and already-running pods without a matching toleration are evicted
- Let's apply a taint to node 0:

```sh
kubectl taint nodes aks-nodepool1-16257739-vmss000000 ssd=ultra:NoExecute
```
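To confirm the taint was applied, describing the node should now show it:

```sh
kubectl describe node aks-nodepool1-16257739-vmss000000 | grep -i taints
# should list ssd=ultra:NoExecute
```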
- Let's create pod1 (no toleration):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: pod1
spec:
  containers:
    - name: web
      image: nginx
      resources:
        requests:
          cpu: 100m
          memory: 10M
        limits:
          cpu: 500m
          memory: 128M
```
- Now let's create a spec for pod2 with a matching toleration:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    app: pod2
spec:
  tolerations:
    - key: ssd
      operator: Equal
      value: ultra
      effect: NoExecute
  containers:
    - name: web
      image: nginx
      resources:
        requests:
          cpu: 100m
          memory: 10M
        limits:
          cpu: 500m
          memory: 128M
```
- Now apply the manifests
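Applying the two manifests and checking placement might look like this (filenames are assumed):

```sh
kubectl apply -f pod1.yaml -f pod2.yaml
kubectl get pods -o wide
# pod1 has no toleration, so it cannot run on the tainted node;
# pod2 tolerates ssd=ultra:NoExecute, so it *may* be scheduled there
# (a toleration permits, but does not force, placement on the node)
```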

Mounting volumes into Pods (container)
- Our current focus is mostly on emptyDir volumes and ConfigMaps/Secrets mounted as volumes
- Let's write a simple pod spec to run alpine:
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: vol-demo
spec:
  containers:
    - name: test
      image: alpine
      command:
        - sleep
        - 1d
      volumeMounts:
        - name: test-vol
          mountPath: /tools
  volumes:
    - name: test-vol
      emptyDir: {}
```
- Mounting the emptyDir into the container
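To verify the emptyDir mount, you can exec into the pod; files written under /tools live as long as the pod does:

```sh
kubectl exec vol-demo -- touch /tools/hello
kubectl exec vol-demo -- ls /tools
# the file survives container restarts, but is lost when the pod is deleted
```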
- Now let's try to mount a ConfigMap
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-cm
data:
  MYSQL_ROOT_PASSWORD: qwerty456
  MYSQL_DATABASE: sales
  MYSQL_USER: ltdevops
  MYSQL_PASSWORD: qwerty456
```
- Now let's try mounting the ConfigMap as a volume
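A sketch of such a pod (the pod and volume names here are illustrative); each key in the ConfigMap shows up as a file under the mount path:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-vol-demo
spec:
  containers:
    - name: test
      image: alpine
      command: ["sleep", "1d"]
      volumeMounts:
        - name: config-vol
          mountPath: /config
  volumes:
    - name: config-vol
      configMap:
        name: mysql-cm
```

`kubectl exec cm-vol-demo -- ls /config` should then list one file per key (MYSQL_DATABASE, MYSQL_PASSWORD, and so on).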

Kubernetes cluster Administration
- Let's install a kubeadm cluster with 3 nodes (1 control plane, 2 workers), using Weave Net and Kubernetes version 1.31. Run the following on all nodes:

```sh
sudo apt update
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io
containerd config default | sudo tee /etc/containerd/config.toml >/dev/null 2>&1
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# If the directory /etc/apt/keyrings does not exist, create it before the curl command:
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
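The commands above only install the packages. To actually form the cluster, the remaining steps look roughly like the following (flags and the Weave Net manifest URL are assumptions and may need adjusting for your environment):

```sh
# On the control plane node
sudo kubeadm init

# Configure kubectl for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the Weave Net CNI add-on (URL points at the last published release manifest)
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

# On each worker node, run the `kubeadm join ...` command printed by kubeadm init
```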
Use case – Maintenance of a node in a k8s cluster
- Now create some pods (use an existing ReplicaSet)
- Pods are running on node 1 & node 2
- Let's assume we need to bring node 1 down for some maintenance
- Cordon -> drain -> uncordon
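The maintenance flow in commands (the node name is a placeholder):

```sh
kubectl cordon <node-1-name>    # mark the node unschedulable
kubectl drain <node-1-name> --ignore-daemonsets --delete-emptydir-data
# ... perform maintenance / reboot ...
kubectl uncordon <node-1-name>  # allow scheduling again
```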
Backing up k8s cluster
- Commands on the master node:

```sh
sudo apt install etcd-client
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/etcd.db
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/etcd.db
```
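To restore from this snapshot later, etcdctl can materialize a new data directory (the paths here are illustrative; on a kubeadm cluster you would stop the etcd static pod and point it at the restored directory):

```sh
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd.db \
  --data-dir /var/lib/etcd-restore
```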
Upgrading kubernetes cluster
- Ensure a backup of the k8s cluster is captured before the upgrade.
To upgrade a Kubernetes cluster created with kubeadm, follow these steps in order on each node type (control plane and worker). Adapt version numbers to your requirements and available upgrades.
1. Upgrade kubeadm on All Nodes
First, upgrade kubeadm to the desired version on all nodes (control plane and worker):

```sh
sudo apt-mark unhold kubeadm
sudo apt-get update
sudo apt-get install -y kubeadm='1.32.6-*'
sudo apt-mark hold kubeadm
kubeadm version
```

Replace the version with your specific target, e.g. `1.33.x-*` or `1.29.3-1.1` [1][2][6].
2. Check the Upgrade Plan (on First Control Plane)
On the first control plane node, check your upgrade options:

```sh
sudo kubeadm upgrade plan
```

Choose a version and take note of the upgrade path suggested by the tool [1][2].
3. Apply the Upgrade (First Control Plane Node)
Apply the upgrade to the control plane:

```sh
sudo kubeadm upgrade apply v<version>
```

Example:

```sh
sudo kubeadm upgrade apply v1.33.x
```

This upgrades control plane components, CoreDNS, kube-proxy, and manages certificates [1][2][3][6].
4. Upgrade Other Control Plane Nodes
On other control plane nodes:

```sh
sudo kubeadm upgrade node
```

This upgrades static pod manifests and related configurations on each additional control plane node [1][6].
5. Drain and Upgrade Worker Nodes
On each worker node:
- Drain workloads (run from a machine with kubectl access):

```sh
kubectl drain <node-name> --ignore-daemonsets
```

- Upgrade kubeadm (already done in step 1).
- Upgrade the node's configuration:

```sh
sudo kubeadm upgrade node
```
6. Upgrade kubelet and kubectl on Each Node
On every node, after the control plane is upgraded:

```sh
sudo apt-mark unhold kubelet kubectl
sudo apt-get update
sudo apt-get install -y kubelet='1.33.x-*' kubectl='1.33.x-*'
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

Replace the version as needed (e.g., `1.33.x-*`).
7. Uncordon Drained Nodes
Once the upgrade and restart are complete, uncordon the node to allow scheduling again:

```sh
kubectl uncordon <node-name>
```

Check all node statuses with:

```sh
kubectl get nodes
```
Notes:
- Always consult the official Kubernetes documentation for your target version before upgrading [1].
- Back up your etcd data before starting.
- The process is similar for Ubuntu and other platforms using APT. For RPM-based systems, use the corresponding package manager [1].
These are the canonical kubeadm upgrade commands and workflow for safely upgrading Kubernetes clusters managed by kubeadm[1][2][6].
[1] https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
[2] https://devopscube.com/upgrade-kubernetes-cluster-kubeadm/
[3] https://man.archlinux.org/man/kubeadm-upgrade-apply.1.en
[4] https://en.opensuse.org/Kubic:Upgrading_kubeadm_clusters
[5] https://cjyabraham.gitlab.io/docs/reference/setup-tools/kubeadm/kubeadm-upgrade/
[6] https://itgix.com/blog/upgrading-a-kubernetes-cluster-by-using-kubeadm/
[7] https://komodor.com/learn/kubernetes-upgrade-how-to-do-it-yourself-step-by-step/
