1. Docker

1.1. What is Docker?


Figure 1.15. Docker vs LXC

1.2. Installing

1.2.1. Install docker from terminal

  • Using the convenience script:

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
  • Or from the Ubuntu repositories:

sudo apt update
sudo apt install docker.io

1.2.2. Requirements for workshop

docker pull python:3.7 \
    && docker pull postgres \
    && docker pull ubuntu \
    && docker pull bash

1.3. Nomenclature

  • Container (a running instance of an image; how to run your application)

  • Image (a read-only template; how to store your application)

  • Layer (a filesystem diff; images are built from stacked layers)

  • LTS version (long-term support release)

  • Edge version (latest features, shorter support period)

  • Host (the machine running the Docker engine)


Figure 1.16. Layers


Figure 1.17. Layers


Figure 1.18. Container Layers


Figure 1.19. Container Layers

1.4. CLI - Command Line Interface

1.4.1. Docker Management commands

checkpoint  Manage checkpoints
config      Manage Docker configs
container   Manage containers
image       Manage images
network     Manage networks
node        Manage Swarm nodes
plugin      Manage plugins
secret      Manage Docker secrets
service     Manage services
stack       Manage Docker stacks
swarm       Manage Swarm
system      Manage Docker
trust       Manage trust on Docker images
volume      Manage volumes

1.4.2. Docker commands

attach      Attach local standard input, output, and error streams to a running container
build       Build an image from a Dockerfile
commit      Create a new image from a container's changes
cp          Copy files/folders between a container and the local filesystem
create      Create a new container
deploy      Deploy a new stack or update an existing stack
diff        Inspect changes to files or directories on a container's filesystem
events      Get real time events from the server
exec        Run a command in a running container
export      Export a container's filesystem as a tar archive
history     Show the history of an image
images      List images
import      Import the contents from a tarball to create a filesystem image
info        Display system-wide information
inspect     Return low-level information on Docker objects
kill        Kill one or more running containers
load        Load an image from a tar archive or STDIN
login       Log in to a Docker registry
logout      Log out from a Docker registry
logs        Fetch the logs of a container
pause       Pause all processes within one or more containers
port        List port mappings or a specific mapping for the container
ps          List containers
pull        Pull an image or a repository from a registry
push        Push an image or a repository to a registry
rename      Rename a container
restart     Restart one or more containers
rm          Remove one or more containers
rmi         Remove one or more images
run         Run a command in a new container
save        Save one or more images to a tar archive (streamed to STDOUT by default)
search      Search the Docker Hub for images
start       Start one or more stopped containers
stats       Display a live stream of container(s) resource usage statistics
stop        Stop one or more running containers
tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top         Display the running processes of a container
unpause     Unpause all processes within one or more containers
update      Update configuration of one or more containers
version     Show the Docker version information
wait        Block until one or more containers stop, then print their exit codes

1.5. Containers

1.5.1. Searching

docker search NAME

1.5.2. Pulling from Docker Hub

  • Only pull, not run

docker pull NAME
docker pull ubuntu  # will pull latest
docker pull ubuntu:latest
docker pull ubuntu:18.10

1.5.3. Run containers

  • Check hostname

  • Check PS1 (bash prompt)

  • Will pull automatically

docker run bash
  • -t - run pseudo terminal and attach to it

  • -i - interactive, keeps stdin open

  • --rm - Automatically remove the container when it exits

docker run -it bash
  • ctrl + p, ctrl + q - detach from the container without stopping it

  • ctrl + d - exits and stops the container

docker run -it ubuntu:latest bash
  • -d - daemon (runs in the background)

docker run -d -it ubuntu:latest bash
  • --name - named container

docker run -d -it --name bash ubuntu:latest bash

1.5.4. Show containers

  • show running:

    docker ps
  • Show all containers, even not running:

    docker ps -a

1.5.5. Attach to running containers

  • Attach local standard input, output, and error streams to a running container:

    docker attach CONTAINER_NAME_OR_ID
  • Attach to a running container and execute bash (-u 0 runs it as root)

    docker exec -it CONTAINER_NAME_OR_ID bash
    docker exec -u 0 -it CONTAINER_NAME_OR_ID bash

1.5.6. What application is running inside the container?

docker top CONTAINER_NAME_OR_ID
1.5.7. Stop containers

  • Filesystem inside the container is ephemeral (it is deleted when the container is removed)

docker stop CONTAINER_NAME_OR_ID

1.5.8. Remove container

docker rm CONTAINER_NAME_OR_ID
1.5.9. Remove all stopped containers

docker rm $(docker ps -a -q)

1.5.10. Inspect

docker inspect jenkins

1.5.11. Update

  • Do not autostart jenkins container after Docker engine restart (computer reboot)

docker update --restart=no jenkins

1.6. Images

1.6.1. Build images

docker build -t NAME .

1.6.2. List images

docker images

1.6.3. Remove images

docker rmi IMAGE

1.7. Volumes

  • A data volume is a specially-designated directory within one or more containers that bypasses the Union File System.

  • Data volumes provide several useful features for persistent or shared data:

    • Volumes are initialized when a container is created.

    • If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)

    • Data volumes can be shared and reused among containers.

    • Changes to a data volume are made directly.

    • Changes to a data volume will not be included when you update an image.

    • Data volumes persist even if the container itself is deleted.

  • Data volumes are designed to persist data, independent of the container’s life cycle.

  • Docker therefore never automatically deletes volumes when you remove a container, nor will it “garbage collect” volumes that are no longer referenced by a container.


You can also use the VOLUME instruction in a Dockerfile to add one or more new volumes to any container created from that image.
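As a sketch, a Dockerfile using the VOLUME instruction might look like this (the image and path are illustrative):

```dockerfile
FROM ubuntu:latest

# Declare /data as a volume; anything written there at runtime lives
# outside the container's writable layer and survives ``docker rm``
VOLUME ["/data"]

CMD ["bash"]
```

Every container created from such an image gets an anonymous volume mounted at /data unless one is supplied explicitly with -v.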

1.7.1. Creating persistent storage

docker run -it -v /data --name bash ubuntu:latest /bin/bash
echo 'hello' > /data/hello.txt
# detach with ``ctrl+p``, ``ctrl+q``
ls /var/lib/docker/volumes/.../

1.7.2. Attaching local dir to docker container

  • Mounts a directory from the host (e.g. /home/myproject) to a path inside the container (e.g. /data)

docker run -v <host path>:<container path>[:FLAG]
docker run -d -P --name web -v /home/myproject:/data ubuntu /bin/bash

1.7.3. Mount read-only filesystem

docker run -d -P --name web -v /home/myproject:/data:ro ubuntu /bin/bash

1.7.4. Creating Volumes

docker volume create -d flocker --opt o=size=20GB myvolume
docker run --detach -P -v myvolume:/data --name web ubuntu /bin/bash

1.7.5. Volume container

docker create -v /data --name dbstore postgres /bin/true
docker run --detach --volumes-from dbstore --name db1 postgres

1.8. Docker network

  • Create a new docker network and connect both containers to that network

  • Containers on the same network can use the others container name to communicate with each other

  • https://docs.docker.com/network/bridge/

  • bridge networks are best when you need multiple containers to communicate on the same Docker host.

  • host networks are best when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.

  • overlay networks are best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.

  • macvlan networks are best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.

  • Third-party network plugins allow you to integrate Docker with specialized network stacks.


Figure 1.20. Docker network

1.8.1. Expose ports

docker run -d -p 5432:5432 --name postgres postgres
docker run -d -P --name postgres postgres  # -P publishes all exposed ports to random host ports

1.8.2. Create network

docker network create mynetwork
docker network create -d bridge --subnet 172.20.0.0/16 --gateway 172.20.0.1 mynetwork
version: '3'

services:
  app:
    image: some/image
    networks:
      - mynetwork

networks:
  mynetwork:
    external: true

1.8.3. List networks

docker network ls

1.8.4. Delete network

docker network rm mynetwork

1.8.5. Connect new container to network

docker network create mynetwork
docker run -d --net mynetwork --name host1 ubuntu
docker run -d --net mynetwork --name host2 ubuntu

docker attach host1
ping host2

1.8.6. Connect running container to network

docker run -d --name host1 ubuntu
docker run -d --name host2 ubuntu

docker network create mynetwork
docker network connect mynetwork host1
docker network connect mynetwork host2

docker attach host1
ping host2

1.8.7. Inspect network

docker network inspect mynetwork

1.9. Dockerfile

1.9.1. Creating and building Dockerfile

FROM python:latest
CMD python

docker build -t mypython:1.0.0 .
docker run mypython:1.0.0
docker build -t mypython:latest .
docker run mypython
docker images

1.9.2. FROM

  • The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions.

FROM python:3.7
FROM python:latest
FROM alpine
FROM ubuntu          # links to :latest
FROM ubuntu:latest   # always current LTS
FROM ubuntu:rolling  # released every 6 months (also LTS, if it was LTS release)
FROM ubuntu:devel    # released every 6 months (only devel)

1.9.3. USER

  • Run subsequent instructions, and the container's main process, as the given user

USER postgres
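In context, a minimal sketch (the postgres user already exists in the official postgres image; without USER, instructions run as root):

```dockerfile
FROM postgres

# All subsequent RUN, CMD and ENTRYPOINT instructions,
# and the container's main process, run as this user
USER postgres
```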

1.9.4. RUN

RUN ["/bin/bash", "-c", "echo hello"]

1.9.5. CMD vs RUN

  • There can only be one CMD instruction in a Dockerfile

  • If you list more than one CMD then only the last CMD will take effect

  • The RUN instruction will execute any commands in a new layer on top of the current image and commit the results.

  • The resulting committed image will be used for the next step in the Dockerfile


1.9.6. CMD vs ENTRYPOINT

  • The main purpose of a CMD is to provide defaults for an executing container.

  • An ENTRYPOINT helps you to configure a container that you can run as an executable.

FROM alpine
ENTRYPOINT ["/bin/ping"]
docker run myping localhost
  • everything after the image name (here localhost) is passed as an argument to ENTRYPOINT

FROM alpine
CMD ["/bin/ping", "localhost"]
docker run myping
  • CMD supplies the default command; arguments given to docker run replace it entirely
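The two instructions compose: ENTRYPOINT fixes the executable, CMD supplies overridable default arguments. A minimal sketch (the image name myping is an assumption):

```dockerfile
FROM alpine

# Fixed executable; -c 3 limits ping to three packets
ENTRYPOINT ["/bin/ping", "-c", "3"]

# Default argument, replaced by any arguments passed to ``docker run``
CMD ["localhost"]
```

docker run myping pings localhost; docker run myping 8.8.8.8 overrides the CMD argument while keeping the ENTRYPOINT.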

1.9.7. EXPOSE

  • The EXPOSE instruction does not actually publish the port

  • It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published

EXPOSE 80/tcp
EXPOSE 80/udp

1.9.8. ENV

ENV <key> <value>
ENV <key>=<value> ...
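Both forms side by side, as a sketch (names and values are illustrative):

```dockerfile
FROM ubuntu:latest

# Space-separated form: one variable per instruction
ENV APP_HOME /srv/app

# Key=value form: several variables in a single instruction (one layer)
ENV DEBUG=false LANG=C.UTF-8
```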

1.9.9. COPY vs ADD

  • ADD allows <src> to be a URL

  • If the <src> parameter of ADD is an archive in a recognised compression format, it will be unpacked

  • Best practices for writing Dockerfiles suggests using COPY where the magic of ADD is not required.

COPY requirements.txt /www
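A hedged sketch contrasting the two (the URL and archive name are hypothetical):

```dockerfile
FROM ubuntu:latest

# COPY: plain copy from the build context - the preferred default
COPY requirements.txt /www/

# ADD with a local archive: unpacked automatically into the target directory
ADD vendor.tar.gz /opt/vendor/

# ADD with a URL: downloaded as-is (remote files are NOT unpacked)
ADD https://example.com/file.txt /opt/
```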

1.9.10. VOLUME

  • The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers.

VOLUME ["/data"]

1.9.11. WORKDIR

  • The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile

WORKDIR /path/to/workdir
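A short sketch; after WORKDIR, relative paths in later instructions resolve against it (the paths are illustrative):

```dockerfile
FROM python:3.7

WORKDIR /srv

# Resolved as /srv/requirements.txt
COPY requirements.txt .
RUN pip install -r requirements.txt
```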

1.9.12. Run Django App in container

FROM python:3.7

COPY . /data
RUN pip install -r /data/requirements.txt
EXPOSE 8000/tcp

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

1.9.13. Apache 2

FROM debian:stable

RUN apt-get update && apt-get install -y --force-yes apache2
EXPOSE 80 443
VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"]

ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

1.9.14. Example dockerfile

## Creating image based on official python image
FROM python:3.7

## Sets dumping log messages directly to stream instead of buffering
ENV PYTHONUNBUFFERED 1

## Install system dependencies
RUN apt update && apt install -y nginx

## Change working directory
WORKDIR /srv

## Creating and putting configurations
COPY habitat /srv/habitat
COPY manage.py /srv/
COPY docker-entrypoint.sh /srv/docker-entrypoint.sh
COPY requirements.txt /srv/requirements.txt
COPY conf/nginx.conf /etc/nginx/sites-enabled/habitatOS

## Installing all python dependencies
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN pip install --no-cache-dir -r /srv/requirements.txt

## Open ports to outside world
EXPOSE 80 80/tcp
EXPOSE 8000 8000/tcp

## When container starts, this script will be executed.
## Note that it is NOT executed during building
CMD sh /srv/docker-entrypoint.sh

## Run like that
# docker build . -t habitatos:latest
# docker run -d --env-file=.env --rm --name habitatOS -p 80:80 habitatos
# docker run -d --env-file=.env --rm --name habitatOS -p 80:80 -v /Users/matt/Developer/habitatOS/habitat:/srv/habitat habitatos
# docker exec -it habitatOS bash

1.10. Docker Hub

1.10.1. Publishing

docker build -t habitatos:1.0.0 .
docker tag habitatos:1.0.0 astromatt/habitatos:latest
docker login
docker push astromatt/habitatos:latest
docker image remove habitatos:1.0.0
docker run astromatt/habitatos

1.11. Docker-compose

Compose is a tool for defining and running multi-container Docker applications.

1.11.1. Docker Compose Jenkins

  1. Create file docker-compose.yaml

    version: '3'

    networks:
      devtools-ecosystem:
        driver: bridge

    services:
      jenkins:
        image: jenkins/jenkins
        container_name: jenkins
        restart: "no"
        ports:
          - "8080:8080"
        networks:
          - devtools-ecosystem
        volumes:
          - /tmp/jenkins:/var/jenkins_home/
          - /var/run/docker.sock:/var/run/docker.sock
  2. Run Jenkins

    docker-compose up
  3. Run Jenkins in background (daemon)

    docker-compose up -d

1.11.2. Docker-compose Django application

  • docker-compose.yaml

version: '3'

services:
  db:
    image: postgres
    ports:
      - "5432:5432"

  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/www
    ports:
      - "8000:8000"
    depends_on:
      - db

docker-compose up -d
docker swarm init
docker stack deploy -c docker-compose.yml my-stack

1.11.3. Docker compose CI/CD ecosystem

  • docker-compose.yaml

version: '3'

networks:
  mynetwork:
    driver: bridge

services:
  jenkins:
    image: jenkins/jenkins
    container_name: jenkins
    restart: always
    ports:
      - "8080:8080"
    networks:
      - mynetwork
    volumes:
      - /tmp/jenkins:/var/lib/jenkins/
    depends_on:
      - sonar
      - gitlab
      - artifactory
    environment:
      - SONAR_PORT=9000

  sonar:
    image: sonarqube
    container_name: sonarqube
    restart: always
    ports:
      - "9000:9000"
      - "9092:9092"
    networks:
      - mynetwork

  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    restart: always
    volumes:
      - /tmp/gitlab/config:/etc/gitlab
      - /tmp/gitlab/logs:/var/log/gitlab
      - /tmp/gitlab/data:/var/opt/gitlab
    ports:
      - "443:443"
      - "80:80"
      - "2222:22"
    networks:
      - mynetwork

  artifactory:
    image: docker.bintray.io/jfrog/artifactory-oss:latest
    container_name: artifactory
    restart: always
    ports:
      - "8081:8081"
    networks:
      - mynetwork

docker-compose up -d

1.12. Visualizing Docker containers

1.14. Where Docker stores containers

  • docker info

  • /var/lib/docker/containers

1.15. Kubernetes

1.15.1. Deploying

  • Automatic health checks

  • Autohealing

  • Rollback deployment

1.15.2. Scaling

  • Services

  • Load balancing

  • Same machine or different machines

  • Scaling container within Service

1.15.3. Monitoring

1.16. Swarm

1.17. Mesos

1.18. Assignments

1.18.1. Ehlo World

  1. Install Docker

  2. What is the difference between Docker and Vagrant?

  3. Print Ehlo World! from inside a Docker container

  4. List the running Docker containers

1.18.2. Create container and run

  1. Clone the repository https://github.com/AstroTech/sonarqube-example-java-maven-junit

  2. Build the project with mvn install

  3. Prepare an image and run the application using Docker

  4. Use a Dockerfile to describe the container environment

1.18.3. Dockerfile

  1. Starting from a clean Ubuntu image, create your own container for PostgreSQL

1.18.4. Docker Compose

  1. Clone the repository https://github.com/AstroTech/sonarqube-example-java-maven-junit

  2. Build the project with mvn install

  3. Prepare an image and run the application using Docker

  4. Use a docker-compose.yaml file to describe the container environment