Docker 101


What is Docker?

Once upon a time, people ran applications on bare-metal machines. Then this process was optimized by using VMs (virtual machines). But this still caused problems, because the operating system and all additional software on the VM had to be configured in exactly the right way to run the main application, e.g., a web server. Additionally, updating the OS and software on the VM often had to be done by someone on the operations/sysadmin team.


Docker allows you to go beyond virtual machines to a virtualized operating system. A Docker daemon is installed on the host machine, which lets you launch Docker containers that contain all files necessary for running your application, including a Linux operating system. Usually one container runs one application process, so the state of the container can easily reflect the state of the application. The Docker daemon is available for Linux, Mac and Windows. This makes it possible to run exactly the same containers on different operating systems. Additionally, starting containers is much faster than booting up a Vagrant environment. Installation instructions can be found in the official Docker documentation.

This system allows for immutable deployments: if the application code changes, a new container is launched and the old one is shut down. Since the sysadmin now only provides a Docker host, more of the responsibility for creating the right bundle to be deployed shifts to the developer.
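In shell terms, such an immutable rollout could look like the following sketch. The image and container names are made up, and the function only echoes the commands instead of running them, so the sequence can be inspected safely; drop the echo to actually execute it.

```shell
# Sketch of an immutable deployment: replace a running container with a
# new image version. Names are hypothetical.
deploy() {
  image="$1"; name="$2"
  echo docker pull "$image"                  # fetch the new image version
  echo docker stop "$name"                   # shut down the old container
  echo docker rm "$name"                     # remove it; its image is untouched
  echo docker run -d --name "$name" "$image" # launch the new version
}

deploy docker.company.com/user-service:1.0.1 user-service
```

Nothing is patched in place: the old container is thrown away and a fresh one is started from the new image.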

How to create a Docker container

To run a Docker container you first need a Docker image. An image consists of all OS and application files. It is created by adding one file to the repository. This file is called Dockerfile and contains the instructions: which base OS to use, what to install, which files to copy from the repository, which command to run to start the app, etc. See the Dockerfile reference for details on all available options.

# Example Dockerfile for a Node.js project
# Base image
FROM node:8

WORKDIR /usr/src/app
ENV NODE_ENV=production

COPY package*.json ./
RUN npm install

COPY app ./app
COPY config ./config
COPY index.js ./index.js

EXPOSE 8080
# Command to start the application
CMD [ "node", "index.js" ]

ProTip: (Nearly) every instruction in the Dockerfile creates one layer of the Docker image. Keep the number of instructions small and order them so that the files that change most often are added last — that way the earlier layers can be reused from the build cache.
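To make the caching point concrete, here are two alternative orderings of the same steps. These are illustrative fragments, not complete builds:

```Dockerfile
# Anti-pattern: copying the whole source tree before installing means any
# source change invalidates the cached "npm install" layer, so dependencies
# are reinstalled on every build.
FROM node:8
WORKDIR /usr/src/app
COPY . .
RUN npm install

# Cache-friendly (as in the Dockerfile above): copy only the dependency
# manifests first, install, then add the frequently changing source last.
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
```

With the second ordering, a change to the application source only rebuilds the final COPY layer.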

Once you have your Dockerfile you create the Docker image with this command:

docker build -t <imagename>:<tag> .

The default tag is latest, but for production use you should use the tag to specify the version of your application. To correlate the source code with the Docker image, it makes sense to use a Git tag that includes the version as the image tag.
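As a sketch (the service name is made up), deriving the image tag from a Git tag like v1.0.0 could look like this:

```shell
# Hypothetical example: turn a Git tag such as "v1.0.0" into a Docker image tag.
# In a real checkout you would read it with: GIT_TAG=$(git describe --tags --abbrev=0)
GIT_TAG="v1.0.0"
IMAGE="user-service:${GIT_TAG#v}" # strip the leading "v"
echo "$IMAGE"                     # user-service:1.0.0
# Then build and tag in one step:
# docker build -t "$IMAGE" .
```

Anyone looking at a deployed container can then check out exactly the commit it was built from.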

Now you have a Docker image, but only on your local machine. You need to put it somewhere so it can be used for the deployment. This place is called a Docker registry. To put something into a registry you need to log in there first; then you can push the image. To specify the target registry for an image, the registry URL is used as the first part of the image name. Here is an example.

docker -D login docker.company.com
docker build -t docker.company.com/user-service:1.0.0 .
docker push docker.company.com/user-service:1.0.0

Now you can use the image on another machine by pulling it from the registry and starting it. "Running" the image creates a Docker container. It is possible to run multiple containers for the same application on the same Docker host (e.g., by mapping the container port to a random host port).

docker pull docker.company.com/user-service:1.0.0
docker run -e SOME_ENV=value docker.company.com/user-service:1.0.0
docker run -d -p 8080 docker.company.com/user-service:1.0.0 # publish container port 8080 on a random host port
docker port <container> 8080 # show which host port was chosen

How to manage a running container

You can see all containers running (or stopped) on a Docker host by running docker ps -a. Using the container name or ID, you can then see everything the container writes to stdout (usually logs) and manage your container with these commands.

docker logs -ft <container> # -f follows the log output, -t adds timestamps; omit -f to print once and exit
docker exec -ti <container> /bin/sh # get a shell inside the running container
docker stop <container> # just stop the container; its local data will not be deleted
docker start <container> # start a stopped container again
docker rm <container> # remove a stopped container
docker rm -f <container> # stop if running and remove, so everything will be gone

How to work with multiple containers at once

For running your app locally you might need more parts than just the container with your app inside. You might need MySQL, Redis, a fake S3, another dockerized service that a colleague has built, etc. In production you would not run a database inside a container for performance reasons, but for local testing or running integration tests on a CI system this is just fine. So what you can do is describe your whole stack in terms of Docker images. On Docker Hub you find ready-to-use images for (nearly) everything. You pick what you need and write it down in a docker-compose.yml file. Here is an example:

version: '2'
services:
  service-a:
    build: . # don't use a prebuilt image but build the image from the Dockerfile in this directory
    environment:
      - MYSQL_HOST=mysql
      - MYSQL_PASSWORD=somepassword
      - MYSQL_USER=root
      - REDIS_HOST=redis
    ports:
      - 3000:3000 # make the port available from localhost
    restart: always # restart the service until the DB is ready
  service-b:
    image: registryhost/service-b:latest
    environment:
      - SERVICE_A_URL=http://service-a:3000
  redis:
    image: redis
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=somepassword

When you are in a folder that contains such a docker-compose file, use one of these commands to boot up the whole stack:

docker-compose up --build # (re)build the images that need building and boot up the stack
docker-compose up -d # boot up the stack in detached mode
docker-compose up -d --build # both of the above

Every container in the stack can reach the others by using the service name as host name. That is why the container running service-a can reach the database under the host name mysql. To reach any of the services from your local machine, the service needs to expose its port using the ports key, like service-a in the example above.

How to clean up / start fresh

Here are some helpful commands for when you have messed up, and for keeping your system clean.

docker-compose down # stop and remove all containers listed in the docker-compose file
docker container prune # remove all stopped containers
docker rm -f $(docker ps -a -q) # remove all containers no matter whether they are still running
docker system prune -a # remove all unused containers, images, networks etc.