DevOps Classroom Series – 27/Jun/2020

Docker Storage Drivers

  • Storage drivers allow us to create data in the writable layer of the container
  • We are already aware of the image layers concept: for every RUN, ADD, or COPY instruction in the Dockerfile, a new layer is created.
  • When we create a container, a thin R/W layer is added on top of the image layers for that container to store its data.
  • Let's experiment with this using the following Dockerfile
FROM alpine
RUN mkdir /mycontent && touch /mycontent/1.txt

  • Build the Docker image and name it layerdemo
  • We will use interactive mode to understand the behavior
  • The alpine image has one layer and layerdemo adds a new layer on top. Every container created from layerdemo adds its own thin R/W layer.
  • Run the following command to start container one
docker container run --name one -it layerdemo /bin/sh
  • This container is now dealing with three layers, but inside the container they appear as a single filesystem
  • Internally the layers are combined and presented to the container as one disk by the storage driver.
  • Now let's try to make some changes
# Inside docker container one
cat /mycontent/1.txt
echo "hello" >> /mycontent/1.txt
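If it helps to see the layers involved, they can be listed from the host — a quick sketch, assuming the image was tagged layerdemo as above:

```shell
# List the layers that make up the layerdemo image;
# each RUN/ADD/COPY in the Dockerfile shows up as a row
docker image history layerdemo

# Show which storage driver Docker is using (e.g. overlay2)
docker info --format '{{.Driver}}'
```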


  • Since the image layers are read-only, whenever you modify existing content the file is first copied up into the R/W layer and the changes are recorded there. This approach is called copy-on-write
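The copy-on-write behavior can be observed with docker container diff, which lists files added (A) or changed (C) in a container's writable layer — assuming the container from above is named one:

```shell
# Show what container 'one' changed on top of its read-only image layers.
# After the echo above, /mycontent/1.txt should show up as changed (C).
docker container diff one
```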

How to preserve data changed by a Docker container

  • Persisting the data even after the container is deleted can be achieved by storing the data outside the container. To do this we have 3 options:
    • volumes
    • bind mounts
    • tmpfs

Bind mounts

  • are stored anywhere on the host filesystem.
  • Oldest option available for persisting changes
  • Create a docker container with bind mount option
docker container run -it --name bindmount -v /home/ubuntu/test:/app alpine /bin/sh

df -h
touch /app/1.txt
exit
docker inspect bindmount
docker container rm bindmount


  • Let's create a new container using the same bind mount; the same data should be visible in the new container
docker container run -it --name bindmount2 -v /home/ubuntu/test:/app alpine /bin/sh
  • There is another way of writing the same command, using the --mount flag
docker container run --mount type=bind,source=/home/ubuntu/test,target=/app <imagename>
docker container run --mount type=bind,source=/home/ubuntu/test,target=/app,readonly <imagename>


Volumes

  • Created and managed by Docker.
  • You create Docker volumes and use those volumes in containers.
  • Docker volumes can be created in the Dockerfile using the VOLUME instruction
FROM openjdk:8
ARG url=https://referenceappkhaja.s3-us-west-2.amazonaws.com/spring-petclinic-2.2.0.BUILD-SNAPSHOT.jar
ENV myfilename=spring-petclinic-2.2.0.BUILD-SNAPSHOT.jar
RUN mkdir /app && cd /app && wget ${url}
VOLUME /app
WORKDIR /app
EXPOSE 8080
CMD ["java",  "-jar",  "spring-petclinic-2.2.0.BUILD-SNAPSHOT.jar"]
  • Build the image and tag it spc:1.0
  • Now execute following commands
docker volume --help
docker volume ls
docker container run -d --name voldemo spc:1.0
docker volume ls


  • Let's inspect the volume
docker volume inspect <volname>


  • Let's navigate to the mount point
cd <path>


  • Now let's remove the container and look at the volume
docker container rm -f voldemo
docker volume ls


  • The volume is still available after the container is deleted, so we don't lose data.
  • Planning for volume creation in the Dockerfile might not happen in all cases; then we need to create and mount volumes ourselves
  • Volumes are of two types
    • anonymous volumes
    • named volumes
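As a sketch of the difference: an anonymous volume is created implicitly when you mount a container path without naming a source, and Docker gives it a random ID:

```shell
# Anonymous volume: no source name given, Docker generates one
docker container run -d --name anon -v /data alpine sleep 1d
docker volume ls    # shows a volume with a long hash-like name

# Clean up; the -v flag also removes the container's anonymous volumes
docker container rm -f -v anon
```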
  • Let's create a named volume
docker volume create myvol
docker volume ls
docker volume inspect myvol


  • Let's create an alpine container with this volume mounted
docker container run -d --name alp1 --mount source=myvol,target=/test alpine sleep 1d
docker volume inspect myvol
docker container inspect alp1
docker container exec alp1 df -h
docker container exec alp1 touch /test/1.txt
docker container exec alp1 ls /test


  • Now let's remove the container alp1 and create a new container alp2 with the same volume mounted, and see if the data is preserved
docker container rm -f alp1
docker volume ls
docker container run -d --name alp2 --mount source=myvol,target=/test alpine sleep 1d
docker container exec alp2 ls /test


  • Let's try to share data between two containers using the same volume
docker container run -d --name alp3 --mount source=myvol,target=/test alpine sleep 1d
docker container exec alp3 touch /test/2.txt
docker container exec alp2 ls /test


  • Docker also has volume drivers, which allow volumes to be created on external storage such as NFS, AWS S3 (via plugins), etc.
  • Now let's remove the containers
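As an illustration of volume drivers, the built-in local driver can mount an NFS share — a sketch, where the server address 192.168.1.10 and export path /exports/docker are placeholders for your own NFS server:

```shell
# Create a volume backed by an NFS share (server/path are placeholders)
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/exports/docker \
  nfsvol

# Use it like any other named volume
docker container run -d --name nfsdemo --mount source=nfsvol,target=/data alpine sleep 1d
```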
 docker container rm -f $(docker container ls -a -q)
 docker volume ls
  • Volumes can be deleted using docker volume rm
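For example:

```shell
docker volume rm myvol    # remove a specific volume (it must not be in use)
docker volume prune       # remove all unused volumes at once
```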

tmpfs mounts

  • Mounted into the container's filesystem, with the data stored only in host memory (never on disk)
  • A tmpfs mount can be created using the --tmpfs or --mount flag
docker container run -d --name tmp1 --tmpfs /app alpine sleep 1d
docker container inspect tmp1
docker container run -d --name tmp2 --mount type=tmpfs,destination=/app alpine sleep 1d


Container communication on a single Docker host

  • Let's create two containers
docker container run -d --name cont1 alpine sleep 1d
docker container run -d --name cont2 alpine sleep 1d

  • Every container gets an IP address; let's verify
docker container inspect cont1
docker container inspect cont2


  • Now let's see if the containers have network connectivity between them by pinging cont2 by name
docker container exec cont1 ping cont2


  • Now let's try the same thing using the other container's IP address
docker container exec cont1 ping -c 4 172.17.0.4


  • We are able to ping from one container to the other by IP address, but container names are not resolving
  • Refer to the networking series for more on networking concepts
  • Let's start networking in the next class
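As a small preview of the next class: names do resolve once the containers share a user-defined bridge network, since Docker runs an embedded DNS server for those networks — a quick sketch:

```shell
# Create a user-defined bridge network
docker network create mynet

# Attach both containers to it
docker container run -d --name web1 --network mynet alpine sleep 1d
docker container run -d --name web2 --network mynet alpine sleep 1d

# Container names now resolve via Docker's embedded DNS
docker container exec web1 ping -c 4 web2
```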

Multi staged builds

  • In CI/CD we build applications; if we want to use Docker containers for the build itself, the multi-stage build concept comes in handy
  • Let me give a simple example of a Java application
git clone https://github.com/wakaleo/game-of-life
# requires java 8 and maven installed
cd game-of-life 
mvn package
  • The traditional way:
    • After the package is created, ensure you have a Dockerfile which creates the image
    • Create the image and push it to a Docker registry
    • When you want to deploy the application, pull the image from the registry
  • Multistage build
    • Build the package as part of your docker image build
    • Create a Dockerfile as shown
    FROM maven:3-openjdk-8 as builder
    RUN git clone https://github.com/wakaleo/game-of-life.git 
    RUN cd game-of-life && mvn package
    
    
    FROM tomcat:8
    COPY --from=builder /game-of-life/gameoflife-web/target/gameoflife.war /usr/local/tomcat/webapps/gameoflife.war
    EXPOSE 8080
    
  • Now, as part of building the Docker image, the application code gets built and the generated package is copied into the destination image
docker image build -t spcms:1.0 .
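Once built, the image can be run and the application reached on port 8080 — assuming the spcms:1.0 tag from above:

```shell
docker container run -d --name gol -p 8080:8080 spcms:1.0
# The app should then be reachable at http://<hostip>:8080/gameoflife
```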

Docker logging

  • The docker logs command shows the logs produced by a running container
  • docker logs shows output written to STDOUT and STDERR by the container's main process
  • To configure logging in Docker, there are logging driver plugins
  • The supported logging drivers are listed in the Docker documentation
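For example, the default json-file driver can be tuned per container with --log-opt flags — a sketch:

```shell
# Cap the log size for a single container's json-file logs
docker container run -d --name logdemo \
  --log-driver json-file \
  --log-opt max-size=10m --log-opt max-file=3 \
  alpine sh -c 'while true; do echo hello; sleep 1; done'

# View the captured STDOUT
docker logs logdemo
```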
