Kubernetes Contd
Experiments:
- Try creating a pod with
  - an alpine container running sleep 10s
  - observe the failures after 10 s
---
apiVersion: v1
kind: Pod
metadata:
  name: exp1
spec:
  containers:
  - name: exp1
    image: alpine
    command:
    - sleep
    - 10s
* Observation: in this case Kubernetes keeps trying to restart the container once it finishes execution, because the default restartPolicy is Always; after a few restarts the pod ends up in CrashLoopBackOff.
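To reproduce this, apply the manifest and watch the pod (assuming the manifest above is saved as exp1.yaml; the file name is only an assumption):

kubectl apply -f exp1.yaml
kubectl get pod exp1 --watch     # the RESTARTS count keeps climbing after every ~10s run
kubectl describe pod exp1        # events eventually show "Back-off restarting failed container"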
- Try creating a pod with 2 containers
  - in one container run sleep 1d
  - in the other container run sleep 5s
  - observe the failures
---
apiVersion: v1
kind: Pod
metadata:
  name: exp2
spec:
  containers:
  - name: first
    image: alpine
    command:
    - sleep
    - 1d
  - name: second
    image: alpine
    command:
    - sleep
    - 5s
* Now execute kubectl get pod exp2 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"exp2","namespace":"default"},"spec":{"containers":[{"command":["sleep","1d"],"image":"alpine","name":"first"},{"command":["sleep","5s"],"image":"alpine","name":"second"}]}}
  creationTimestamp: "2023-08-03T13:10:53Z"
  name: exp2
  namespace: default
  resourceVersion: "1566"
  uid: 530f8716-7573-4c5d-9183-d7b1c4364159
spec:
  containers:
  - command:
    - sleep
    - 1d
    image: alpine
    imagePullPolicy: Always
    name: first
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-pbncq
      readOnly: true
  - command:
    - sleep
    - 5s
    image: alpine
    imagePullPolicy: Always
    name: second
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-pbncq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node2
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-pbncq
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-08-03T13:10:53Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-08-03T13:14:24Z"
    message: 'containers with unready status: [second]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-08-03T13:14:24Z"
    message: 'containers with unready status: [second]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-08-03T13:10:53Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://aad78c31e83c5951b8a316a2a31ba8656e5a729512a9a47bd24a5f1db5e57144
    image: docker.io/library/alpine:latest
    imageID: docker.io/library/alpine@sha256:82d1e9d7ed48a7523bdebc18cf6290bdb97b82302a8a9c27d4fe885949ea94d1
    lastState: {}
    name: first
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2023-08-03T13:10:54Z"
  - containerID: containerd://287e000ae3a66b9227fc4ec55c345f939f905bb7e2750c36bf0732d8ace7accb
    image: docker.io/library/alpine:latest
    imageID: docker.io/library/alpine@sha256:82d1e9d7ed48a7523bdebc18cf6290bdb97b82302a8a9c27d4fe885949ea94d1
    lastState:
      terminated:
        containerID: containerd://287e000ae3a66b9227fc4ec55c345f939f905bb7e2750c36bf0732d8ace7accb
        exitCode: 0
        finishedAt: "2023-08-03T13:14:24Z"
        reason: Completed
        startedAt: "2023-08-03T13:14:19Z"
    name: second
    ready: false
    restartCount: 5
    started: false
    state:
      waiting:
        message: back-off 2m40s restarting failed container=second pod=exp2_default(530f8716-7573-4c5d-9183-d7b1c4364159)
        reason: CrashLoopBackOff
  hostIP: 192.168.0.22
  phase: Running
  podIP: 10.5.1.3
  podIPs:
  - ip: 10.5.1.3
  qosClass: BestEffort
  startTime: "2023-08-03T13:10:53Z"
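The status section is where the interesting information lives; a couple of standard kubectl jsonpath queries pull those fields out directly:

kubectl get pod exp2 -o jsonpath='{.status.phase}'
kubectl get pod exp2 -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.state}{"\n"}{end}'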
- Container states in a pod Refer Here
- Pod lifecycle Phases Refer Here
- Restart policy Refer Here
- Create a Pod spec which will restart containers
  - only when failed
  - Always
- Figure out if we can stop restarting containers after 4 attempts
- Restart policy Never and its impact on status
---
apiVersion: v1
kind: Pod
metadata:
  name: exp3
spec:
  restartPolicy: Never
  containers:
  - name: exp1
    image: alpine
    command:
    - sleep
    - 10s
- In this case the pod has gone into the Completed state: the container exited with code 0 and, because restartPolicy is Never, it is not restarted.
---
apiVersion: v1
kind: Pod
metadata:
  name: exp4
spec:
  restartPolicy: Never
  containers:
  - name: exp1
    image: alpine
    command:        # '&&' only works inside a shell, so the command is wrapped in sh -c
    - /bin/sh
    - -c
    - sleep 10s && exit 1
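Because the container exits with a non-zero code and restartPolicy is Never, this pod should end up in the Failed phase (kubectl get pods shows its STATUS as Error):

kubectl get pod exp4
kubectl get pod exp4 -o jsonpath='{.status.phase}'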
- Create a Pod spec which will restart containers
  - only when failed (see the sketch after this list)
- Figure out if we can stop restarting containers after 4 attempts
- Try to run a jenkins container in docker and see the logs with docker logs
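For the "only when failed" task, one possible sketch is a pod with restartPolicy: OnFailure (the name exp5 and the failing command are illustrative, not from the notes); such a container is restarted only when it exits with a non-zero code. Note that the pod spec itself offers no way to give up after a fixed number of attempts such as 4; the kubelet only stretches the back-off delay between restarts.

apiVersion: v1
kind: Pod
metadata:
  name: exp5                 # illustrative name
spec:
  restartPolicy: OnFailure   # restart only on non-zero exit codes
  containers:
  - name: exp5
    image: alpine
    command:
    - /bin/sh
    - -c
    - sleep 10 && exit 1     # always fails, so the kubelet keeps restarting it with back-off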
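The docker side of the jenkins task could look like this (the container name and port mapping are assumptions):

docker run -d --name jenkins -p 8080:8080 jenkins/jenkins
docker logs -f jenkins       # Jenkins prints its startup log and the initial admin password here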
apiVersion: v1
kind: Pod
metadata:
  name: jenkins
spec:
  restartPolicy: OnFailure
  containers:
  - name: jenkins
    image: jenkins/jenkins
- kubectl port-forward
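The kubectl equivalents for the jenkins pod above (forwarding to local port 8080 is an assumption):

kubectl logs -f jenkins                       # same output as docker logs
kubectl port-forward pod/jenkins 8080:8080    # Jenkins UI then reachable at http://localhost:8080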
Exercise:
- Try setting up any one of them
  - minikube Refer Here
  - kind Refer Here
Terms:
- CPU units
- memory units
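CPU is expressed in whole cores or millicores (500m = half a core) and memory in bytes with suffixes such as Mi and Gi; a minimal sketch of where these units appear in a pod spec (the numbers are arbitrary examples):

apiVersion: v1
kind: Pod
metadata:
  name: resources-demo       # illustrative name
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 250m            # a quarter of a CPU core
        memory: 64Mi         # 64 mebibytes
      limits:
        cpu: 500m
        memory: 128Mi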