Kubeadm with docker
- Let's build a two-node cluster with Ubuntu 22.04
- Install docker on both nodes
cd /tmp
curl -fsSL https://get.docker.com -o install-docker.sh
sh install-docker.sh
sudo usermod -aG docker <username>
- Now install cri-dockerd; for now we are using the deb package (Refer Here)
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.4.0/cri-dockerd_0.4.0.3-0.ubuntu-jammy_amd64.deb
sudo dpkg -i cri-dockerd_0.4.0.3-0.ubuntu-jammy_amd64.deb
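Before moving on, it's worth checking that cri-dockerd is actually up on both nodes (unit names assumed from the systemd units shipped in the deb package):

```shell
# Check that the cri-dockerd service and socket units are active
sudo systemctl status cri-docker.service --no-pager
sudo systemctl status cri-docker.socket --no-pager

# The socket kubeadm will talk to should now exist
ls -l /var/run/cri-dockerd.sock
```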
- Now install kubeadm, kubectl and kubelet on both nodes
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
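A quick sanity check that the tools landed and are pinned (exact versions will differ on your machines):

```shell
# Versions of the three components we just installed
kubeadm version -o short
kubectl version --client
kubelet --version

# kubelet, kubeadm and kubectl should show up as held
apt-mark showhold
```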
- In this mode, every kubeadm command needs the flag
--cri-socket=unix:///var/run/cri-dockerd.sock
- Our init command with flannel as CNI:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock
- Then join the other node to the cluster
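The exact join command is printed at the end of kubeadm init; if you lost it, you can regenerate it on the control-plane node. Remember to append the cri-dockerd socket flag when running it on the worker (the address, token and hash below are placeholders):

```shell
# On the control-plane node: print a fresh join command
kubeadm token create --print-join-command

# On the worker node: run the printed command, adding the CRI socket flag, e.g.
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket=unix:///var/run/cri-dockerd.sock
```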
- Let's install the flannel CNI
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
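Once flannel is applied, the nodes should move to Ready within a minute or so (the namespace name below is what recent flannel manifests use):

```shell
# Nodes should report Ready once the CNI is up
kubectl get nodes -o wide

# Flannel runs as a DaemonSet, one pod per node
kubectl get pods -n kube-flannel
```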
K8s Workloads – Pod
- Docker creates Containers, a hypervisor creates Virtual Machines, and k8s creates Pods
- A Pod has one or more containers in it
- Every Pod gets a unique IP within a k8s cluster
- Containers within a Pod share a network namespace and communicate over localhost (127.0.0.1), i.e. two containers that need the same port cannot be in the same Pod
- In a Pod with more than one container:
- primary container => main container
- other container(s) => sidecar containers
- Ideally a Pod should run one application component/microservice
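A minimal sketch of a main + sidecar Pod, where the sidecar reaches the main container over localhost (names and images here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: nginx            # main container, serves on port 80
    image: nginx:1.14.2
    ports:
    - containerPort: 80
  - name: probe            # sidecar, polls the main container over localhost
    image: busybox:1.36
    command: ["sh", "-c", "while true; do wget -qO- http://127.0.0.1:80 > /dev/null && echo up; sleep 30; done"]
```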
- Refer Here for the REST API
Interacting with k8s
- K8s exposes a REST API for all its resources via the kube-apiserver
- All the resources that can be created in k8s can be listed with the kubectl api-resources command
API Versioning
- API Groups:
- core
- other
- ApiVersion
# format
<apiGroup>/<version>
# core group
<version>
- ApiVersion examples
v1 => group => core, version => v1
networking.k8s.io/v1 => group => networking.k8s.io, version => v1
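You can see groups and versions for yourself with kubectl (run against any working cluster; output trimmed):

```shell
# List resources with their API group and version
kubectl api-resources -o wide | head

# Ask the server which group/version a kind belongs to
kubectl explain pod | head -n 3          # core group => apiVersion: v1
kubectl explain deployment | head -n 3   # apps group => apiVersion: apps/v1
```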
- Pod
- Refer Here for kubectl cheatsheet
- Creation types:
- imperative:
- construct a command line
- declarative
- create a manifest file (YAML)
Play with pods
- Imperative
- declarative
apiVersion: v1
kind: Pod
metadata:
  name: web-3
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
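The same Pod can be created imperatively from the command line, and the manifest above applied declaratively; kubectl run with --dry-run=client is also a handy way to generate a starting manifest:

```shell
# Imperative: create the pod directly from the command line
kubectl run web-3 --image=nginx:1.14.2 --port=80

# Declarative: save the manifest above as pod.yaml, then
kubectl apply -f pod.yaml

# Generate a manifest skeleton without creating anything
kubectl run web-3 --image=nginx:1.14.2 --port=80 --dry-run=client -o yaml
```

Use one approach or the other for a given pod; running both as-is would conflict on the name web-3.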
Choosing CNI for kubeadm
| CNI Plugin | Overlay / Routing | Network Policy Support | Security Features | Performance | Advanced Features | Ease of Use | Cloud Friendly | Notable Use Cases / Notes |
|---|---|---|---|---|---|---|---|---|
| Calico | Overlay & BGP | Yes (advanced) | Distributed firewall, policy enforcement | High | BGP routing, egress/ingress control, multi-cluster | Medium | Partial[1] | Highly scalable, strong security focus |
| Flannel | Overlay (VXLAN, host-gw, others) | No | None | Good | Simple L3 fabric | Easy | Yes | Simplicity, basic connectivity |
| Cilium | Overlay & native routing (eBPF) | Yes (L3-L7, DNS, HTTP) | eBPF-based, identity-based, application-aware | Very High | Deep observability, service mesh integration, multi-cluster | Medium | Yes | Advanced security, microservices focus |
| Weave Net | Overlay (mesh) | Yes | Basic | Good | DNS, encryption, mesh topology | Easy | Yes | Simple install, mesh networking |
| Kube-router | Routing (BGP, IPVS, VXLAN) | Yes | Network policy | High | Service proxy, BGP peering | Medium | Partial | Combines routing, proxy, policy |
| Antrea | Overlay & native | Yes | Policy enforcement | High | Flow visibility, multi-cluster | Medium | Yes | VMware-backed, modern features |
| Multus | N/A (meta-plugin) | N/A | N/A | N/A | Multi-homed pods (multiple CNIs) | Medium | Yes | Attach multiple networks to pods |
| OVN-Kubernetes | Overlay & native | Yes | Policy enforcement | High | VLAN, QoS, advanced topologies | Medium | Yes | Based on Open Virtual Network |
Key notes:
– Calico: Advanced network policy, BGP routing, distributed firewall, excellent for large-scale and secure clusters[2][3][4][5].
– Flannel: Very simple, easy to deploy, basic connectivity, no network policy support[3][4][1][5].
– Cilium: eBPF-powered, high performance, deep observability, supports L3-L7 policies, integrates with service mesh, advanced security[2][3][4][5].
– Weave Net: Mesh overlay, simple install, supports encryption and basic policies[4][1][5].
– Kube-router: Combines routing, proxy, and policy in one, BGP support, high performance[4][1].
– Antrea: Modern, supports advanced policies, observability, multi-cluster, designed for Kubernetes[4].
– Multus: Not a networking provider itself, but enables pods to use multiple CNIs (e.g., for SR-IOV, DPDK, etc.)[4].
– OVN-Kubernetes: Advanced features, Open vSwitch-based, supports VLAN, QoS, and complex topologies[4].
Cloud compatibility varies: Flannel, Cilium, and Weave Net work well in cloud environments, while Calico may need extra configuration for some clouds[1].
Feature summary:
– Overlay networking: Flannel, Weave Net, Cilium, Calico (optional)
– Native routing: Calico, Cilium, Kube-router, OVN-Kubernetes
– Network policy: Calico, Cilium, Antrea, Kube-router, Weave Net (basic)
– Advanced security: Calico, Cilium, Antrea
– Service mesh integration: Cilium, Antrea
– Multi-cluster support: Calico, Cilium, Antrea
This table should help you compare and select the right CNI for your kubeadm-based Kubernetes cluster based on your requirements[2][3][4][1][5].
[1] https://github.com/weibeld/cni-plugin-comparison
[2] https://kubevious.io/blog/post/comparing-kubernetes-container-network-interface-cni-providers/
[3] https://www.plural.sh/blog/kubernetes-cni-guide/
[4] https://www.devopsschool.com/blog/list-of-cni-plugins-used-in-kubernetes/
[5] https://gyptazy.com/kubernetes-the-four-most-common-cnis/
[6] https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
[7] https://github.com/kubernetes/website/issues/45007
[8] https://www.reddit.com/r/kubernetes/comments/1110k8p/suggestions_for_k8s_cni/
[9] https://learn.microsoft.com/en-us/azure/aks/concepts-network-cni-overview
[10] https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-cni-plugin
[11] https://kubernetes.io/blog/2021/04/20/defining-networkpolicy-conformance-cni-providers/
[12] https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
[13] https://www.suse.com/c/rancher_blog/comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/
[14] https://stackoverflow.com/questions/62129716/is-it-possible-to-have-2-network-plugins-for-the-kubernetes-cluster
[15] https://www.tigera.io/learn/guides/kubernetes-networking/kubernetes-cni/
[16] https://blog.developersteve.com/kubernetes-networking-cni-plugins-and-policy-solutions-918167dff965
[17] https://kubernetes-docsy-staging.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
[18] https://kubernetes.io/docs/concepts/cluster-administration/addons/
[19] https://rstforum.net/whitepaper/understanding-kubernetes-cni
[20] https://overcast.blog/choosing-the-right-container-network-interface-plugin-in-kubernetes-45391c7d4cc8
