DevOps Classroom Series – 11/Jul/2020 (Evening)

Kubernetes Contd..

Storage of Applications

  • Classic Datacenter (On-Premise)
    • Generally application data is stored on network storage or in databases
    • Storage is treated separately in datacenters
  • With Container into picture
    • With containers in play, our applications run inside containers or Pods
    • The data generated by the application lives only as long as the Pod is alive; once the Pod is deleted, the data is also deleted.
    • Since losing critical data is not acceptable, K8s/Docker containers also need storage options. The underlying storage options are the same whether we use VMs or containers
    • Since Pod(s) can run literally anywhere (on-premise, hybrid, cloud) we have many storage options
    • K8s has to hide the implementation details of storage like AWS EBS, Azure Disk, Google Persistent Disk, NFS etc.
    • K8s gives us the following major options for storage
      • Volumes
      • Persistent Volumes
        • Persistent Volume Claims
        • Storage Classes
      • CSI Standard

k8s Volumes

  • K8s Pods contain containers. If a container crashes, kubelet restarts it, but the files inside the container are lost.
  • Docker already has the concept of volumes
  • A Kubernetes volume has an explicit lifetime, the same as the Pod that encloses it
  • A K8s volume can be used so that data is not lost while the Pod is running. Once the Pod is deleted, its K8s volumes are also deleted.
  • K8s supports several types of volumes
    • awsElasticBlockStore
    • azureDisk
    • azurefile
    • configMap
    • csi
    • emptyDir
    • gcePersistentDisk
    • hostPath
    • nfs
    • and many more
  • A K8s volume preserves data across failures and restarts of the containers inside the Pod
  • K8s volume is also just like any other k8s object
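
A minimal sketch of a Pod-scoped volume, using emptyDir (the Pod and container names here are hypothetical); the data survives container restarts, but is removed along with the Pod:

---
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # hypothetical name
spec:
  volumes:
    - name: cache-volume
      emptyDir: {}                 # lives exactly as long as the Pod lives
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "while true; do date >> /cache/out.txt; sleep 5; done"]
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /cache
          name: cache-volume

Even if the writer container crashes and kubelet restarts it, /cache/out.txt is still there; deleting the Pod removes it.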

K8s Persistent Volumes

  • K8s Persistent Volumes have a lifecycle independent of the Pod. Even if the Pod is deleted, the data can be retained.

  • When storage needs to be provided as a volume to the container running inside a Pod, the following phases happen

    • Manually create the storage (AWS EBS, Azure Disk, local drive storage, NFS etc.)
    • Make it available to k8s
    • Mount the storage into Pods
  • Static Provisioning

    • Manually create the storage (administrator)
    • Create a Persistent Volume to make this storage space available to the k8s cluster
    • Create a PersistentVolumeClaim from the Pod to gain access to the storage (a sketch follows after the access modes below)
  • Dynamic Provisioning

    • Dynamic Provisioning can be done by using Storage classes
    • You just need to create Persistent Volume Claim
  • A Storage Class provides a way for administrators to describe the storage that is offered to the k8s cluster

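A rough sketch of what a Storage Class looks like; the provisioner and parameters below assume an AWS EBS gp2 volume and the class name is hypothetical, so adjust them for your environment:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-ebs                 # hypothetical class name
provisioner: kubernetes.io/aws-ebs   # in-tree AWS EBS provisioner
parameters:
  type: gp2
reclaimPolicy: Delete                # delete the disk when the claim is deleted
allowVolumeExpansion: true

With this in place, a PersistentVolumeClaim that sets storageClassName: standard-ebs gets a matching disk provisioned automatically (dynamic provisioning).
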
  • A PersistentVolumeClaim (PVC) is a request for storage by a user. Just as Pods can request specific levels of resources (CPU and memory), Claims can request specific sizes and access modes

  • K8s supports 3 Access modes

    • ReadWriteOnce
    • ReadOnlyMany
    • ReadWriteMany
  • Not all plugins support all the access modes. Refer Here

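To see static provisioning and access modes together, here is a minimal sketch assuming an NFS server at a hypothetical address; an administrator creates the PersistentVolume, and the user's PersistentVolumeClaim binds to it:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-5g                    # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany                  # NFS supports all three access modes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.5                 # hypothetical NFS server address
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-5g                   # hypothetical name
spec:
  storageClassName: ""               # empty class: bind to a pre-created PV, no dynamic provisioning
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
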
  • Types of persistent Volumes Refer Here

  • What happens if the Persistent Volume Claim is deleted?

    • Reclaim Policy
      • Retain
      • Recycle
      • Delete
  • K8s also supports Raw Block Volumes. This is supported in

    • AWSElasticBlockStore
    • AzureDisk
    • Local volume
    • GCEPersistentDisk
  • Raw Block mode makes the storage available to the container as a raw block device rather than a mounted filesystem

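A rough sketch of what Raw Block usage looks like: volumeMode: Block on the claim, and volumeDevices (instead of volumeMounts) in the container; the names and device path below are hypothetical:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc                    # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block                  # request a raw block device instead of a filesystem
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer               # hypothetical name
spec:
  volumes:
    - name: rawdisk
      persistentVolumeClaim:
        claimName: block-pvc
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeDevices:                 # note: volumeDevices, not volumeMounts
        - name: rawdisk
          devicePath: /dev/xvda      # device node exposed inside the container
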
  • Let's try to create a simple mysql Pod with hostPath as the volume

---
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
  labels:
    app: db
spec:
  volumes:
    - name: "mysql-data"
      hostPath:
        path: "/home/ubuntu/mysql"
  containers:
    - name: mysql
      image: mysql:5.7
      volumeMounts:
        - mountPath: /var/lib/mysql
          name: "mysql-data"
      ports:
        - containerPort: 3306
          name: dbport
          protocol: TCP
      env:
        - name: MYSQL_DATABASE
          value: 'test'
        - name: MYSQL_USER
          value: 'directdevops'
        - name: MYSQL_PASSWORD
          value: 'directdevops'
        - name: MYSQL_ROOT_PASSWORD
          value: 'password'
  • This spec creates the volume on the node where the mysql Pod is running. So if that node goes down, we might lose the data; we need better options than this.
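
To try it out, something along these lines should work (the file name mysql-pod.yaml is an assumption):

kubectl apply -f mysql-pod.yaml
kubectl get pods -o wide
kubectl exec -it mysql-pod -- mysql -u directdevops -pdirectdevops test

On the node where the Pod landed, the MySQL data files appear under /home/ubuntu/mysql.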

  • When we run k8s by installing it manually using kubeadm, kops or kubespray, we need to manage the master, network add-ons (Weave Net, load balancers), storage plugins (storage classes, provisioners) and also identity.

  • K8s is also offered as a service by cloud providers like AWS, Azure and Google. If we use these services

    • the master nodes are managed by the cloud provider
    • the networking driver, load balancers, ingress etc. are already supported in the k8s cluster
    • storage classes and suitable storage plugins are already made available
    • you can use the existing cloud identity and cloud-based IAM for k8s identity
  • AWS offers k8s as a service in a product called EKS (Elastic Kubernetes Service) and Azure has a service called AKS (Azure Kubernetes Service)

AKS (Azure Kubernetes Service)

  • The master node is highly available, free, and managed by Azure
  • You pay only for the worker nodes which you create.
  • Networking plugin: azure-cni
  • Load balancer and Ingress support with Azure Load Balancer and Azure Application Gateway
  • Storage classes are provided by default; you can use Azure Disk or Azure Files in Persistent Volumes
  • Other Azure service integrations like Azure DNS, Azure network policy and Azure DDoS Protection can be added
  • Let's create an AKS cluster using the docs over here
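
A rough CLI sketch of those steps (the resource group and cluster names match the ones used below; tune the location and node count to your needs):

az group create --name k8slearning --location eastus
az aks create --resource-group k8slearning --name myAKScluster --node-count 2 --generate-ssh-keys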
  • Once your cluster is up, you can use k8s from anywhere azure cli/azure powershell is installed
  • Once you log in to Cloud Shell, execute the following commands
az aks get-credentials --resource-group <rgname> --name <k8scluster>
az aks get-credentials --resource-group k8slearning --name myAKScluster
kubectl get nodes
kubectl get sc


  • Now let's create a Persistent Volume Claim for a dummy pod. API Reference: Click Here
  • Let's create a persistent volume claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 1Gi
  • Let's create an nginx pod with the Persistent Volume Claim
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  volumes:
    - name: myvolume
      persistentVolumeClaim:
        claimName: azure-managed-disk
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /mnt/azure
          name: myvolume
      ports:
        - containerPort: 80
          name: appport
          protocol: TCP
      
  • Now apply both the yaml files
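
Assuming the two manifests above were saved as pvc.yaml and nginx-pod.yaml (the file names are assumptions), applying and verifying looks roughly like this:

kubectl apply -f pvc.yaml
kubectl apply -f nginx-pod.yaml
kubectl get pvc
kubectl describe pod nginx-pod

The claim should move from Pending to Bound, and the Pod description shows the dynamically provisioned Azure managed disk mounted at /mnt/azure.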
  • Now let's add a basic Service of type LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: dummy-svc
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: nginx

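Applying the service (the file name dummy-svc.yaml is an assumption) and waiting for the Azure load balancer's public IP:

kubectl apply -f dummy-svc.yaml
kubectl get svc dummy-svc --watch

Once EXTERNAL-IP changes from <pending> to an address, the nginx page is reachable on port 80.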

  • Resources in the Azure resource group

EKS (Elastic Kubernetes Service)

  • The master node is highly available and is charged $0.20 per hour
  • You pay for the worker nodes hourly
  • Networking plugin aws-vpc-cni
  • Load balancer and ingress support is available using Elastic Load Balancer and Application Load Balancer
  • Storage classes are provided by default; you can use AWS Elastic Block Store and Elastic File System in Persistent Volumes
  • Other AWS Service integrations are possible.
  • Exercise: Create an EKS cluster by referring to the docs over here
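
If you want a starting point for the exercise, one possible sketch uses eksctl (the cluster name, region and node count below are just examples):

eksctl create cluster --name myEKScluster --region us-west-2 --nodes 2
kubectl get nodes
kubectl get sc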
