Managed Life Cycle
-
Overview: Containerized applications managed by cloud-native platforms have no control over their life cycle. To be good cloud-native citizens, they have to listen to the events emitted by the managing platform and adapt their life cycles accordingly.
-
Problem:
- In addition to monitoring the state of a container, the platform sometimes issues commands and expects the application to react to them
- Driven by external factors and policies, the platform may decide to start or stop the applications it is managing at any moment. It is up to the containerized application to determine which events are important to react to and how to react
-
Solution:
- For this use case, the platform emits events that the container can listen and react to if desired

- SIGTERM signal: whenever k8s decides to shut down a container (e.g., it failed a liveness probe, or the Pod it belongs to is shutting down), the container receives a SIGTERM signal. Once the SIGTERM signal has been received, the application should shut down as quickly as possible.
- SIGKILL signal: if a container process has not shut down after the SIGTERM signal, it is shut down forcefully by the SIGKILL signal. k8s doesn't send the SIGKILL signal immediately after SIGTERM; it waits for a grace period (default is 30 seconds). The grace period can be defined per Pod via .spec.terminationGracePeriodSeconds
- Container hooks: Refer Here for official docs and Refer Here for the API reference of Lifecycle hooks
- PostStart Hook
- PreStop Hook
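The hooks above can be combined with a per-Pod termination grace period in a single Pod spec. A minimal sketch (the echo commands and the 60-second grace period are illustrative placeholders, not from the source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  terminationGracePeriodSeconds: 60   # overrides the 30-second default
  containers:
  - name: app
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo container started"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo shutting down"]
```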
-
Refer Here for the article terminating with grace
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hooks-demo
spec:
  containers:
  - name: hooks-container
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        port: 80
        path: /
      initialDelaySeconds: 30
    lifecycle:
      preStop:
        httpGet:
          port: 80
          path: /shutdown
```
Automated Placement
- Overview: Automated Placement is the core function of the k8s scheduler for assigning new Pods to nodes. This pattern is about exploring the ways to influence placement decisions from outside.
- Problem: A decent microservices-based system will consist of tens to hundreds of isolated processes (containers in Pods). With a large and ever-growing number of microservices, assigning and placing them individually on nodes is not a manageable activity
- Solution:
- In k8s, assigning Pods to nodes is done by the kube-scheduler.
- Whenever the kube-scheduler finds a newly created Pod definition from the API Server with no node assigned, it assigns the Pod to a node.
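One common way to influence the scheduler's decision from outside is a nodeSelector on the Pod. A minimal sketch (the disktype=ssd label is a hypothetical example and assumes some node carries that label):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: placement-demo
spec:
  containers:
  - name: app
    image: nginx
  nodeSelector:
    disktype: ssd   # only nodes labeled disktype=ssd are considered
```

A node can be labeled with `kubectl label nodes <node-name> disktype=ssd`.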
- Allocatable Node Resources:

```
Allocatable = Node Capacity - Kube-Reserved (kubelet, container runtime) - System-Reserved (sshd, udev)
```
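The Kube-Reserved and System-Reserved amounts in the formula above are configured on the kubelet. A minimal KubeletConfiguration sketch (the cpu/memory values are illustrative placeholders, not recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:       # reserved for k8s daemons (kubelet, container runtime)
  cpu: "100m"
  memory: "200Mi"
systemReserved:     # reserved for OS daemons (sshd, udev)
  cpu: "100m"
  memory: "200Mi"
```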
