DevOps Classroom notes 08/Feb/2025

k8s clusters

  • k8s clusters are made up of nodes.
  • There are two types of nodes
    • master node: this is responsible for managing the cluster
    • worker nodes (minions): these execute workloads (your applications)

Architecture

  • Overview
  • Master node components

    • API Server:
      • This component is responsible for all communication, external as well as internal, in the k8s cluster
      • To clients, it looks as if the API server itself is k8s.
      • Exposes a REST API for k8s clients (kubectl, client libraries) to interact with
    • etcd:
      • This is the memory of the k8s cluster
      • This is a distributed key-value store
    • Scheduler:
      • The scheduler is responsible for scheduling new workloads (pods) onto a suitable node
    • Controller Manager:
      • This is responsible for maintaining the desired state of the cluster
    • Cloud Controller Manager (optional, used for managed k8s): This component is part of managed k8s offerings (AKS, EKS, GKE, …) and integrates the cluster natively with the cloud provider's APIs
  • Node:
    • Kubelet:
      • This is an agent of the control plane (master nodes)
      • It receives instructions from the control plane and executes them.
    • Container Runtime:
      • To create containers we need a container runtime.
      • k8s allows us to use any container runtime which is CRI (Container Runtime Interface) compliant
    • Kube-proxy:
      • This is responsible for networking for Services (routing service traffic to pods)
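On a running kubeadm-style cluster, most of these control-plane components can actually be seen as pods; a quick way to inspect them (a sketch, assuming a working kubeconfig on the master):

```shell
# Control-plane components (kube-apiserver, etcd, kube-scheduler,
# kube-controller-manager) run as static pods in the kube-system
# namespace; kube-proxy and the CNI run there too as DaemonSets.
kubectl get pods -n kube-system -o wide
```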

Managed k8s clusters – an intro

  • All major cloud providers offer k8s and they manage the master nodes for you, i.e. they provide options for
    • backup
    • restore
    • upgrades
    • HA (high availability)

Interacting with k8s

  • To interact with k8s we can use plain HTTPS requests, as k8s exposes its functionality over the API server's REST APIs
  • To make the interaction convenient we have
    • kubectl
    • client libraries
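As a rough sketch of the raw REST interaction (the server address and token below are placeholders, not values from a real cluster):

```shell
# List pods in the default namespace by calling the API server's
# REST API directly. APISERVER and TOKEN are placeholders; on a real
# cluster they come from your kubeconfig / a service-account token.
APISERVER="https://<master-ip>:6443"
TOKEN="<bearer-token>"
curl --cacert /etc/kubernetes/pki/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/api/v1/namespaces/default/pods"
```

This is essentially what kubectl does under the hood for every command.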

Kubectl

  • Kubectl has two ways of interacting
    • imperative:
      • we construct a command that states the action to perform
    • declarative:
      • we write a specification in YAML format describing what we want
      • then we give this as input to kubectl, which creates the necessary workloads
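A small sketch contrasting the two styles (the deployment name nginx and the file name nginx-deploy.yaml are illustrative choices, not from the notes):

```shell
# Imperative: the whole intent is expressed on the command line.
kubectl create deployment nginx --image=nginx --replicas=2

# Declarative: describe the desired state in YAML, then apply it.
cat > nginx-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
EOF
kubectl apply -f nginx-deploy.yaml
```

kubectl apply is repeatable: re-running it reconciles the live state with the file, whereas a second create would fail because the deployment already exists.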

Kubernetes installation methods

  • On a broader note we have 3 options
    • desktops (dev setups)
    • self-hosted (on-premise)
    • managed k8s or k8s as a service (cloud-hosted)

Self-hosted options – kubeadm

  • official docs
  • Hardware requirements:
    • 2 GB or more of RAM per machine
    • 2 CPUs or more for control plane machines.
    • Full network connectivity between all machines in the cluster (public or private network is fine).
      • Ensure control plane nodes have the following ports open (per the official docs): 6443 (API server), 2379-2380 (etcd), 10250 (kubelet API), 10259 (kube-scheduler), 10257 (kube-controller-manager)
      • Ensure worker nodes have the following ports open: 10250 (kubelet API), 10256 (kube-proxy), 30000-32767 (NodePort Services)
  • K8s control plane nodes have to be Linux, whereas worker nodes can also be Windows.

Our setup in AWS

  • Ensure you have a security group which opens necessary ports
  • Create three t2.medium or t3.medium servers with Ubuntu 22.04 in the same VPC

Steps

  • Install docker on all the nodes
curl -fsSL https://get.docker.com -o install-docker.sh
sh install-docker.sh
sudo usermod -aG docker ubuntu
  • Now exit and re-login, then verify the Docker installation with docker info on all nodes
  • Now install cri-dockerd
cd /tmp
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.16/cri-dockerd_0.3.16.3-0.ubuntu-jammy_amd64.deb
sudo dpkg -i cri-dockerd_0.3.16.3-0.ubuntu-jammy_amd64.deb
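The deb package installs a systemd service alongside the socket; a quick sanity check (unit and socket names as shipped by the cri-dockerd package) might look like:

```shell
# cri-dockerd exposes the Docker engine over a CRI-compatible socket.
systemctl status cri-docker.service --no-pager
ls -l /var/run/cri-dockerd.sock
```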
  • Our Docker engine will now be reachable over the CRI socket unix:///var/run/cri-dockerd.sock
  • Now install kubeadm, kubelet and kubectl on all three nodes
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
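Before initializing the cluster, it is worth confirming the tools landed on every node (versions should match the v1.32 repository added above):

```shell
# Verify the kubeadm/kubelet/kubectl installation on each node.
kubeadm version
kubelet --version
kubectl version --client
```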
  • Now we need to create a k8s cluster based on the installations done above. Refer Here
  • Login into master node and become root user
sudo -i
  • Now initialize a k8s cluster
kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket "unix:///var/run/cri-dockerd.sock"
  • Now observe the output from this command
  • To run kubectl as a non-root user, execute the following (these commands are also printed in the kubeadm init output)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • To join nodes to the k8s cluster which we initialized above, login into the nodes as root users and execute the kubeadm join command printed in the kubeadm init output. Note: we need to append --cri-socket "unix:///var/run/cri-dockerd.sock" to the join command
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cri-socket "unix:///var/run/cri-dockerd.sock"
  • After the join commands, when we execute kubectl get nodes on the master, the status of the nodes will be NotReady
  • The reason for this is that no CNI (Container Network Interface) plugin is installed yet; let's install the Flannel CNI from the master node
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
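Once the Flannel pods are running, the nodes should transition to Ready; a quick check from the master (kube-flannel is the namespace the manifest above deploys its DaemonSet into):

```shell
# Watch the flannel DaemonSet come up, then re-check node status.
kubectl get pods -n kube-flannel
kubectl get nodes
```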



By continuous learner

devops & cloud enthusiastic learner
