Assigning Pods to nodes
-
Generally the scheduler will automatically make a reasonable placement of your pods across nodes, but there are some circumstances where you want to control which node a pod is deployed to
-
Node Selector:
- nodeSelector is the simplest form of node selection constraint.
- It specifies a map of key-value pairs.
- For the Pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels
- Step 1: Attach label to nodes
- Step 2: Attach nodeSelector field to Pod Manifest Refer Here
- Now apply the configuration and verify the pod schedule details
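As a minimal sketch of the two steps above (the label `disktype=ssd` and the pod name are assumed examples, not from the source):

```yaml
# Step 1: label a node, e.g.
#   kubectl label nodes <node-name> disktype=ssd
# Step 2: Pod manifest with a matching nodeSelector
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    disktype: ssd
```

After `kubectl apply`, `kubectl get pod nginx-ssd -o wide` shows which node the scheduler picked.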
- Now let's try a negative scenario: use nodeSelector with labels that do not exist on any node and see what happens
- Pod spec Refer Here
- After applying, we observe the pod will not be scheduled by the scheduler, as no node has matching labels
-
Nodes in k8s come with a pre-populated standard set of labels Refer Here
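For example, a Pod can select on one of these built-in labels (such as `kubernetes.io/os` or `kubernetes.io/hostname`) without any manual labeling:

```yaml
# nodeSelector using a pre-populated standard label
spec:
  nodeSelector:
    kubernetes.io/os: linux
```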
-
Affinity and anti-affinity:
- The affinity/anti-affinity feature greatly expands the types of constraints you can express. The key enhancements over nodeSelector are
- The affinity/anti-affinity language is more expressive; it offers more matching rules besides exact match
- You can indicate that a rule is soft or a preference rather than a hard requirement, so if the scheduler cannot satisfy it, the pod will still be scheduled
- You can constrain against labels on other pods running on the node, rather than against labels of the node itself, which allows rules about which pods can and cannot be co-located
- Node affinity:
- There are two types of node affinity
- requiredDuringSchedulingIgnoredDuringExecution:
- Hard requirement
- preferredDuringSchedulingIgnoredDuringExecution
- Soft requirement
- requiredDuringSchedulingIgnoredDuringExecution:
- Refer Here for the node affinity example
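A sketch of a hard node-affinity requirement (the label `disktype=ssd` is an assumed example):

```yaml
# requiredDuringSchedulingIgnoredDuringExecution: hard requirement;
# the pod stays Pending unless a node matches the expression
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-node-affinity
spec:
  containers:
    - name: nginx
      image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd
```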
- Created a basic pod with some labels Refer Here
- Write a Pod manifest
- to schedule the new pod (pod-1) in the same node as basic-pod (podAffinity)
- to schedule the new pod (pod-2) in the different node as basic-pod (podAntiAffinity)
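A sketch of the two manifests described above, assuming basic-pod carries the label `app: basic` (that label value is an assumption):

```yaml
# pod-1: co-locate with basic-pod on the same node (podAffinity)
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
    - name: nginx
      image: nginx
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: basic
          topologyKey: kubernetes.io/hostname
---
# pod-2: keep away from basic-pod's node (podAntiAffinity)
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
    - name: nginx
      image: nginx
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: basic
          topologyKey: kubernetes.io/hostname
```

`topologyKey: kubernetes.io/hostname` makes "same/different" mean same/different node.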
-
Taints and Tolerations:
- Node affinity is a property of Pods that attracts them to a set of nodes.
- Taints are the opposite – they allow a node to repel a set of pods
- Tolerations are applied to pods and allow them to be scheduled on nodes with matching taints.
- Taints and tolerations work together to ensure pods are not scheduled onto inappropriate nodes.
- To apply a taint to a node
kubectl taint nodes <node-name> key1=value1:NoSchedule
- To remove the applied taint (note the trailing -)
kubectl taint nodes <node-name> key1=value1:NoSchedule-
- Lets apply a taint to any node
- Currently the following taints are supported
- OutOfDisk: node.kubernetes.io/out-of-disk
- MemoryPressure: node.kubernetes.io/memory-pressure
- DiskPressure: node.kubernetes.io/disk-pressure
- PIDPressure: node.kubernetes.io/pid-pressure
- In some cases it is advised to write tolerations based on node conditions
- Refer Here for the yaml manifests
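A sketch of a pod that tolerates the taint applied earlier (`key1=value1:NoSchedule`); the pod name is an assumption:

```yaml
# Toleration matching the taint key1=value1:NoSchedule;
# without it, this pod would never be scheduled on the tainted node
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  containers:
    - name: nginx
      image: nginx
  tolerations:
    - key: "key1"
      operator: "Equal"
      value: "value1"
      effect: "NoSchedule"
```

Note that a toleration only permits scheduling on the tainted node; it does not force the pod onto it (combine with node affinity for that).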
Helm
- To install software on Linux we use packaging tools such as apt and yum; on macOS we have brew and on Windows we have choco. These tools are called package managers
- Helm is an open-source packaging tool for k8s to deploy and manage the lifecycle of your applications
- Installing Helm Refer Here
- In very simple terms, Helm charts make k8s manifests dynamic
- consider the following two deployments
- deployment 1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: myapp
        version: v3
    spec:
      containers:
        - name: myapp-cont
          image: shaikkhajaibrahim/myapp:3.0
          ports:
            - name: http
              containerPort: 80
- the other deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otherapp-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: otherapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: otherapp
        version: v1
    spec:
      containers:
        - name: otherapp-cont
          image: nginx
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
- Helm uses Go templates to parametrize manifests, together with a values.yaml file where the values are defined
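As a sketch, the two deployments above could collapse into one template whose differences live in values.yaml (the file layout and value names below are illustrative assumptions):

```yaml
# templates/deployment.yaml — one template for both apps
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}-deploy
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
        - name: {{ .Values.appName }}-cont
          image: {{ .Values.image }}
          ports:
            - name: http
              containerPort: 80
---
# values.yaml — values for the first deployment; the second app
# would supply its own values (otherapp, 3, nginx) instead
appName: myapp
replicas: 2
image: shaikkhajaibrahim/myapp:3.0
```

Overriding at install time works the same way, e.g. `helm install myrelease ./chart --set replicas=3`.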
- Try installing any chart Refer Here
