The Definitive Guide To Docker in 2023

This is the most comprehensive guide on Docker online.

In this Docker tutorial, you will learn Docker from scratch to an advanced level, starting with the concept of containerization and ending with Docker Compose.

You will learn how to dockerize a JavaScript application and how to dockerize a full-stack application using Docker Compose.


Chapter 1: Complete Docker Overview

In this chapter, you will get a complete overview of Docker and containerization to help you understand the concept of Docker and how to develop containerized applications.

This chapter will also teach you some Docker terminology, such as Docker images, Docker containers, and the Docker registry.

The concept of containerization is not new. In fact, according to TechTarget, it dates back to the 1960s, when VM partitioning enabled multiple users to access a computer concurrently.

However, the emergence of the Docker Engine in 2013 had a significant impact on popularizing containerization, because it made it much easier to containerize applications.

At the time of writing, the 2023 Stack Overflow Developer Survey reveals that Docker is the top-used and most popular tool among professional developers in the "other tools" category, at around 53% usage, a rise from its second-place spot the previous year.

[Image: Docker's ranking in the 2023 Stack Overflow Developer Survey]

You can clearly see the demand for Docker in software engineering. However, learning Docker can seem intimidating at first, which is why I decided to make it simple and break down some of the complex parts of Docker for complete beginners.

What is Containerization?

Containerization is the process of packaging an application's code together with all the files and libraries it needs, so that it can run on any infrastructure.

You may have heard these popular words from your team members before:

“It was working on my system”

These are the kinds of problems containerization tries to solve: packaging your application with all the dependencies it needs so that it runs on any infrastructure.

Let's say you built your application with Node.js, installed specific versions of its dependencies, and developed it on the Linux operating system.

Let’s say the entirety of your application uses the following dependencies for instance:

  • Node.js v15

  • Express.js v4

  • Postgres v13

  • Nginx v1.23

  • TypeORM v0.3

  • Linux

  • etc

Then you want to deploy this project or share it with a teammate who uses a different operating system or has already installed different versions of these dependencies. Running the application in that different environment might cause errors.

If you look deeper, Node.js depends on a build tool called node-gyp for building native add-ons. And according to the installation guide in the official repository, this build tool requires Python 2 or 3 and a proper C/C++ compiler toolchain, which can be a pain to install and set up properly on different operating systems.

Here's my personal experience: I moved from Windows to Linux on June 13, 2018, after trying to install and set up Memcached on Windows without success.

Let's assume one of your teammates uses Windows while you use Linux. Now you have to consider the inconsistencies in handling paths between the two operating systems. Additionally, if you're using Nginx and Redis in your application, Nginx is not well-optimized to run on Windows, and Redis doesn't even come pre-built for Windows.

Even if you get through the entire development phase, what about the deployment phase? How do you bring these codebases from different operating systems together into one?

There will always be questions, issues, and problems when working with isolated teams and different codebases.

That's where packaging the application in containers comes in handy, because the package will include the exact versions of all the dependencies, the exact operating system, the database system, and so on.

When this packaged application is shipped to your teammate or deployed, the application has the same dependencies it needs to run and therefore experiences no errors or failures.

Therefore, all these issues can be solved if only you could:

  • Develop and run the application inside an isolated environment (called a container) that contains the same dependencies as your final deployment environment.

  • Put your application inside a single file (known as an image) with all the dependencies and necessary configurations.

  • And upload the image to a central server (known as a registry) that your teammates and anyone else with proper authorization can access.

In this scenario, your teammates will only download the image from the registry and run the application as it is within an isolated environment free from platform-specific inconsistencies. Your operations team can even deploy it directly on a server since the image comes with all the proper production configurations.

That's the idea behind containerization: putting your application inside a self-contained package that is portable and reproducible across various environments.
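In Docker terms (which the next chapters cover in depth), that workflow looks roughly like the sketch below, where my-app and your-username are placeholder names:

docker build -t your-username/my-app:1.0 .   # package the app and its dependencies into an image
docker push your-username/my-app:1.0         # upload the image to a registry

# ...then, on a teammate's machine or a server:
docker pull your-username/my-app:1.0         # download the exact same package
docker run your-username/my-app:1.0          # run it in an isolated container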

Now what is Docker and what role does it play in making the concept of containerization popular? That’s exactly what I will explore in the next section.

Chapter 2: Docker Deep Dive

In this chapter, I will take you deeper into the Docker architecture.

I will explore different Docker concepts and terminology, such as Docker images and Docker containers, and by the end you will understand the architecture of Docker well enough to build your first containerized application.

This chapter is split into three sections covering the important components of Docker and how they work together: Docker architecture, Docker images, and Docker containers.

If you're as excited as I am, let's dive right in.

What is a Docker Image?

A Docker image is a read-only, multi-layered, self-contained template with instructions for creating a container. Images can also be created from other images, with customizations for specific use cases.

The container image must contain everything needed to run an application - all dependencies, configurations, scripts, binaries, etc. The image also contains other configurations for the container, such as environment variables, a default command to run, and other metadata.
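You can inspect these layers and this metadata yourself with the image history and inspect commands. A quick sketch, shown here against the node:16.17.0-alpine image used later in this guide (any image you have pulled works):

docker image history node:16.17.0-alpine   # one row per layer, with the instruction that created it
docker image inspect node:16.17.0-alpine   # JSON with env variables, default command, and other metadata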

The Open Container Initiative (OCI) defined a standard specification for creating container images, which means that images created for Docker will also work with Podman, unlike in the early days when container engines had different image formats.

There are many images based on the OCI standards already published on the Docker registry and other publicly available registries.

You can choose from the list or create your own image from scratch as we will demonstrate further in this lesson.

Creating a Docker Image

Docker images are created using a Dockerfile. To build your own image, you create a `Dockerfile` with a simple syntax for defining the steps needed to create the image and run it. Here is an example of a simple `Dockerfile` to create an image:

FROM node:16.17.0-alpine

# Set working directory
WORKDIR /usr/src/app

# Install dependencies
COPY package*.json ./
RUN npm ci --production

# Copy app source code
COPY . .

# Build app
RUN npm run build --production

COPY ./.env ./build

# Expose port, you can expose any port your app uses here
EXPOSE 3333

# Start app
CMD ["node", "./build/server.js"]

The Dockerfile is largely self-explanatory thanks to the comments included in the file. Let's go deeper into the Docker instructions used in the `Dockerfile`.

  1. FROM: specifies the base image used to build this new Docker image

  2. RUN: used to run a command while building the Docker image

  3. WORKDIR: sets the working directory inside the Docker image, creating the folder if it doesn't already exist

  4. COPY: used to copy source code and other files into a specified folder inside the Docker image

  5. EXPOSE: documents the port the container listens on so it can be published to the outside (client) machine

  6. CMD: used to set the default command to be executed when a container is run. It is often used in conjunction with ENTRYPOINT, which provides the fixed executable that CMD supplies default arguments to.

This simple `Dockerfile` creates an image from a base image, in this case `node:16.17.0-alpine`, and sets the working directory inside the container to `/usr/src/app` using the `WORKDIR` instruction. This `app` directory is where the files copied by the `COPY package*.json ./` command will be stored.

Next, the image executes `npm ci --production` using the `RUN` command inside the container. Then we copy all the files from our local project directory into the container's working directory.

Lastly, we execute the `run build` command, copy other files, expose the container port to the outside world, and start our server.

This was only a demonstration to show you how to use `Dockerfile` to create an image. Every command in the Dockerfile above has specific functionality and there are many more commands you can use to create your container images. You can explore all the commands in the official documentation.

Now that you have created a Dockerfile, it's of no use if you don't build it into an image. Therefore, let's look at some of the Docker client commands for images:

Image Commands

Here are some of the popular commands you can use to manage images:

Build the Docker Image

Once you've created a Dockerfile, you can build the image using the `docker build` command where `image-name` is the user-defined name of the image.

docker build -t image-name .

Run the Docker Image

There are several ways to run your image using the Docker Run command. Below, we're going to explore a few ways to run your Docker image:

Run the Docker Image:

docker run image-name

You can run the Docker Image with specific parameters. For example, to run the Docker Image with parameters such as port with `-p`, the name with `--name`, and interactive mode with `-it`, use the following command:

docker run --name container-name -it -p 3333:3333 image-name

Below is a list of the commands you can use to explore and manage Docker container images:

  • docker image build: Build an image from a Dockerfile (same as the `docker build` command)

  • docker image history: Show the history of an image

  • docker image import: Import the contents from a tarball to create a filesystem image

  • docker image inspect: Display detailed information on one or more images

  • docker image load: Load an image from a tar archive or STDIN

  • docker image ls: List images

  • docker image prune: Remove unused images

  • docker image pull: Download an image from a registry

  • docker image push: Upload an image to a registry

  • docker image rm: Remove one or more images

  • docker image save: Save one or more images to a tar archive (streamed to STDOUT by default)

  • docker image tag: Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

Now that you have learned how to create your own image for your project, you can upload it to the Docker registry for others or your team members to use. Let's explore what a Docker registry entails.

What is a Docker Registry?

An image registry serves as a centralized repository where you can upload your own images and download images created by others. Docker Hub is the default public registry for Docker, and Quay by Red Hat is another highly popular image registry.

You can go to Docker Hub and browse the most popular images, or look for images of your favorite tools. From the home page, click on the Explore menu to see all the images, as shown below:

[Image: the Docker Hub Explore page]
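To publish one of your own images to Docker Hub, the usual flow is to log in, tag the image under your account's namespace, and push it. Here's a minimal sketch, assuming a Docker Hub account named your-username and a locally built image named my-express-app:

docker login                                        # authenticate against Docker Hub
docker image tag my-express-app your-username/my-express-app:1.0
docker image push your-username/my-express-app:1.0

# teammates with access can then pull it:
docker image pull your-username/my-express-app:1.0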

What are Docker Containers?

The concept of containerization gave birth to containers. Therefore, the concept of containers is very fundamental to the world of containerization.

A container is a runnable instance of an image. A Docker container can be created, started, stopped, moved, or deleted using the Docker CLI client or any other supported client. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

If you know virtual machines, you can think of containers as their modern, lightweight equivalent.

Docker containers are environments completely isolated from the host machine as well as from other containers. They are built to be lightweight, standalone, executable packages of software that include everything needed to run an application: the code, runtime, system tools, system libraries, and settings.

[Image: Docker container architecture]

Let me walk you through the diagram a little deeper to understand Docker containers from the ground up.

Docker containers are lightweight because they share the machine’s OS system kernel and therefore do not require an OS per application, driving higher server efficiencies and reducing server and licensing costs.

As you can see in the diagram above, each container relies on the Docker layer which in turn uses the resources from the machine’s OS.

This is one of the differences between virtual machines and Docker containers, and also a great benefit of using Docker containers because a virtual machine relies on individual Guest OS for each application as shown in the image below:

[Image: virtual machine architecture]

Now that we understand Docker containers, let's look at how to manipulate and manage a Docker container in the next lesson.

How to Run a Container

When we learned how to run Docker images, we used the docker run command. This command creates and starts a container when you specify an image name and some optional options, as shown below:

docker run image-name

This command will create and start a container if the image name is specified correctly and the image exists either on your local machine or in the Docker registry.

However, Docker has since updated this command and made it easier to understand, improving the developer experience. The structure of the new command syntax looks like this:

docker <object> <command> <options>

Where:

  • <object> is the type of Docker object you'll be manipulating. This can be a container, image, network, or volume object.

  • <command> indicates the task to be carried out by the daemon, such as the run command.

  • <options> can be any valid parameter that overrides the default behavior of the command, like the --publish option for port mapping.

Now the complete syntax to start and run a container from an image will look like this:

docker container run image-name

Let's replace image-name with something more practical and run an nginx container as an example. To run a container using this image, execute the following command in your terminal:

docker container run --publish 8080:80 nginx

This will result in an output such as:

Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
3ae0c06b4d3a: Pull complete 
efe5035ea617: Pull complete 
a9b1bd25c37b: Pull complete 
f853dda6947e: Pull complete 
38f44e054f7b: Pull complete 
ed88a19ddb46: Pull complete 
495e6abbed48: Pull complete 
Digest: sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
Status: Downloaded newer image for nginx:latest

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up

2023/07/06 12:51:31 [notice] 1#1: using the "epoll" event method
2023/07/06 12:51:31 [notice] 1#1: nginx/1.25.1
2023/07/06 12:51:31 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14) 
2023/07/06 12:51:31 [notice] 1#1: OS: Linux 5.15.49-linuxkit
2023/07/06 12:51:31 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/07/06 12:51:31 [notice] 1#1: start worker processes
2023/07/06 12:51:31 [notice] 1#1: start worker process 29
2023/07/06 12:51:31 [notice] 1#1: start worker process 30
2023/07/06 12:51:31 [notice] 1#1: start worker process 31
2023/07/06 12:51:31 [notice] 1#1: start worker process 32
2023/07/06 12:51:31 [notice] 1#1: start worker process 33

Also, notice that we added a new option to the command: the --publish option. Let's talk about that in the next section, along with other popular options you can use with your Docker container commands.

How to Publish a Port

As you already know, Docker containers are isolated environments. Your host system (local machine) does not know anything about what's going on inside the container environment unless you expose it.

Therefore, one way to expose the running application inside your container is by exposing the port to your host system.

Let's say, for example, you started an Express application inside your container on port 3000. Your host machine will not know that port 3000 is running your Express application unless you explicitly publish it when you create the container, using the command below:

docker container run --publish 3000:3000 my-express-app

When you write --publish 3000:3000, it means any request sent to port 3000 of your host system will be forwarded to port 3000 inside the container.

This process is called port mapping. You can map your container port to a different available port on your local machine, as shown in this example:

docker container run --publish 4000:3000 my-express-app

In this case, visiting localhost:3000 will not work, because your container port is mapped to port 4000 on your local machine, so it expects traffic on localhost:4000.

Also, you can use a shorthand version of the command option as shown below which means the same thing:

docker container run -p 4000:3000 my-express-app
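To confirm a mapping like 4000:3000 is working, you can send a request to the host port from another terminal (assuming curl is installed and your app responds on its root path):

curl localhost:4000   # forwarded to port 3000 inside the container
curl localhost:3000   # fails: nothing is listening on this port of the host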

How to Use Detached Mode

Next, you can run your container commands in detached mode, meaning that your container will run in the background and your terminal will be free for new commands.

Here’s a command and the option to do so:

docker container run --detach --publish 4000:3000 my-express-app

or the shorthand version:

docker container run -d -p 4000:3000 my-express-app
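Since a detached container prints nothing to your terminal, you can check on its output with the container logs command, using the container's ID or name (the ls command in the next section helps you find these):

docker container logs <container identifier>            # print the output produced so far
docker container logs --follow <container identifier>   # keep streaming new output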

How to List Containers

The next command is the container ls command, which lists the containers on your local machine that are currently running.

If you have any containers running on your local machine, it will show a list of them, as shown below:

docker container ls

# CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                    NAMES
# 9f21cb888058        my-express-app        "docker-entrypoint.s…"   5 seconds ago       Up 5 seconds        0.0.0.0:4000->3000/tcp   upbeat_burn

To list all your containers including all states such as running, stopped, etc. Use the command shown in the example below:

docker container ls --all

# or the shorthand version

docker container ls -a

CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS                     PORTS                    NAMES
9f21cb888058   my-express-app   "docker-entrypoint.s…"   5 seconds ago   Up 5 seconds               0.0.0.0:4000->3000/tcp   upbeat_burn
ae9192b8e32d   node             "docker-entrypoint.s…"   9 minutes ago   Exited (0) 6 minutes ago                            upbeat_blackburn

A container named upbeat_burn is running. It was created 5 seconds ago and the status is Up 5 seconds, which indicates that the container has been running fine since its creation.

The CONTAINER ID is 9f21cb888058 which is the first 12 characters of the full container ID. The full container ID is 9f21cb88805810797c4b847dbd330d9c732ffddba14fb435470567a7a3f46cdc which is 64 characters long. This full container ID was printed as the output of the docker container run command in the previous section.
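If you ever need the full 64-character ID, you can ask container ls for untruncated output, or query a single container with inspect:

docker container ls --no-trunc                            # show full container IDs
docker container inspect --format '{{.Id}}' upbeat_burn   # print one container's full ID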

How to Name or Rename a Container

Every container has two identifiers by default which are:

  • CONTAINER ID - a random 64-character-long string.

  • NAME - a combination of two random words, joined with an underscore.

You can give a container a user-defined name, which is easier to refer to than the two randomly generated identifiers.

You can name a container at creation time using the --name option. To run another container from the my-express-app image with the name my-express-app-container, you can execute the following command:

docker container run --detach --publish 4000:3000 --name my-express-app-container my-express-app

A new container with the name my-express-app-container will be started.

You can even rename old containers using the container rename command. The syntax for the command is as follows:

docker container rename <container identifier> <new name>

Here's an example:

docker container rename upbeat_burn my-express-app-container-2

The command doesn't yield any output but you can verify that the changes have taken place using the container ls command. The rename command works for containers both in the running state and the stopped state.

How to Stop or Kill a Running Container

Stopping a running container is easy. If the container is attached to your terminal, you can simply press ctrl + c to stop it.

However, if your container is running in detached mode, you need to use the following command to stop it.

The generic syntax for the command is as follows:

docker container stop <container identifier>

Where the container identifier can either be the id or the name of the container. You can get the identifier with the docker container ls command. Next, here’s an example to stop a running container.

docker container stop my-express-app-container

# my-express-app-container

If you use the name as an identifier, you'll get the name thrown back to you as output. The stop command shuts down a container gracefully by sending a SIGTERM signal. If the container doesn't stop within a certain period, a SIGKILL signal is sent which shuts down the container immediately.

In cases where you want to send a SIGKILL signal instead of a SIGTERM signal, you may use the container kill command instead. The container kill command follows the same syntax as the stop command.

docker container kill my-express-app-container
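The stop command also accepts a grace period in seconds through the --time (-t) option, after which it falls back to SIGKILL; the default is 10 seconds:

docker container stop --time 5 my-express-app-container   # wait at most 5 seconds before force-killing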

How to Restart a Container

Restarting a container can happen in two ways:

  • Restarting the container from a failed, stopped, or killed state.

  • Rebooting a container from a running state.

You can start a container from a failed, stopped, or killed state using the command below:

docker container start my-express-app-container

Next, you can reboot a running container using the following command:

docker container restart my-express-app-container

The main difference between the two commands is that the container restart command attempts to stop the target container and then starts it back up again, whereas the start command just starts an already stopped container.

In the case of a stopped container, both commands are exactly the same. But in the case of a running container, you must use the container restart command.

How to Create a Container Without Running

Sometimes, you just want to create a container without running it. You can achieve this using the following command:

docker container create --publish 4000:3000 my-express-app

The STATUS of the container is Created at the moment, and, given that it's not running, it won't be listed without the use of the --all option in the docker container ls command.

The create command creates a new container and stores it on your local machine without running it, so that next time you can use the docker container start command to start the container without creating it again.
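A typical create-then-start session looks like this (the container name is just an example):

docker container create --publish 4000:3000 --name my-express-app-container-2 my-express-app
docker container ls --all    # the new container appears with STATUS "Created"
docker container start my-express-app-container-2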

How to Remove Containers

Sometimes, you want to remove unused containers, since all containers, including stopped and killed ones, remain on your local system. Removing unused containers can save your system's disk space.

The following command is used to remove any container by passing in the identifier of the container you want to delete.

Here’s the syntax:

docker container rm <container identifier>

You can use the docker container ls --all command to get the identifier.

docker container rm 6cf52771dde1

# 6cf52771dde1

This command will delete the container whose identifier is specified as seen above.

In addition, you can use the --rm option to indicate a one-time container, meaning that you want the container to be deleted immediately after it is stopped. You can use the --rm option with both the create and run commands. Let's look at an example with the run command.

docker container run --rm --detach --publish 5000:3000 --name my-express-app-container-one-time my-express-app

The my-express-app-container-one-time container will be deleted immediately after it is stopped or killed.

How to Run a Container in Interactive Mode

Some images come with lightweight versions of operating systems, such as Linux in its different distributions: Ubuntu, Fedora, Kali, etc. If you use any of these images that come with an operating system, you can interact with that operating system while running the container.

For instance, programming languages such as python, php, and go, or runtimes like node and deno, all have their official images.

These images do not just run some pre-configured program. They are instead configured to run a shell by default. In the case of the operating system images, it can be something like sh or bash and in the case of the programming languages or runtimes, it is usually their default language shell.

For example, if you are using the Ubuntu image, you can interact with the Ubuntu shell to install programs, navigate to folders or create a new file. In the same way, if you’re using a Node.js image, you might want to interact with the default language shell.

To achieve this, Docker provides the --interactive and --tty options, commonly combined as -it, which allow you to interact with the shell of your image when starting or running a container.

Here’s an example with an Ubuntu image:

docker container run --rm -it ubuntu

Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
5af00eab9784: Pull complete 
Digest: sha256:0bced47fffa3361afa981854fcabcd4577cd43cebbb808cea2b1f33a3dd7f508
Status: Downloaded newer image for ubuntu:latest

root@09d3c433639d:/# cat /etc/os-release 
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="<https://www.ubuntu.com/>"
SUPPORT_URL="<https://help.ubuntu.com/>"
BUG_REPORT_URL="<https://bugs.launchpad.net/ubuntu/>"
PRIVACY_POLICY_URL="<https://www.ubuntu.com/legal/terms-and-policies/privacy-policy>"
UBUNTU_CODENAME=jammy
root@09d3c433639d:/#

The -it option sets the stage for you to interact with any interactive program inside a container. This option is actually two separate options mashed together.

  • The -i or --interactive option connects you to the input stream of the container so that you can send inputs to bash.

  • The -t or --tty option makes sure that you get good formatting and a native terminal-like experience by allocating a pseudo-tty.

You need to use the -it option whenever you want to run a container in interactive mode. Another example can be running the node image as follows:

docker container run -it node

Unable to find image 'node:latest' locally
latest: Pulling from library/node
42cbebf8bc11: Pull complete 
9a0518ec5756: Pull complete 
356172c718ac: Pull complete 
dddcd3ceb998: Pull complete 
abe47058ac42: Pull complete 
08ff2ee7b183: Pull complete 
70b6353bf75e: Pull complete 
2742b73156b1: Pull complete 
Digest: sha256:57391181388cd89ede79f371e09373824051eb0f708165dcd0965f18b3682f35
Status: Downloaded newer image for node:latest

Welcome to Node.js v20.3.1.
Type ".help" for more information.
> 5 + 5
10
>

How to Execute Commands Inside a Container

You can execute a command inside your container. Suppose, for example, you want to run a command that is not available on the Windows operating system while you're using Windows at the moment.

You can easily spin up a container that uses a Linux operating system and execute the command immediately. You can even make it a one-time container as discussed above just to help you achieve a specific task in Linux.

For example, assume that you want to encode a string using the base64 program. This is something that's available in almost any Linux or Unix-based operating system (but not on Windows).

In this situation, you can quickly spin up a container using images like busybox and let it do the job.

The generic syntax for encoding a string using base64 is as follows:

echo -n solomon-secret | base64

# c29sb21vbi1zZWNyZXQ=

And the generic syntax for passing a command to a new container is as follows:

docker container run <image name> <command>

To perform the base64 encoding using the busybox image, you can execute the following command:

docker container run --rm busybox sh -c "echo -n solomon-secret | base64"

# c29sb21vbi1zZWNyZXQ=

What happens here is that, in a container run command, whatever you pass after the image name gets passed to the default entry point of the image to be executed.
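A related trick: if a container is already running, you don't need to create a new one at all. The container exec command runs an additional command inside an existing container. A quick sketch, assuming a running container named my-express-app-container:

docker container exec my-express-app-container ls /usr/src/app   # run a one-off command inside it
docker container exec -it my-express-app-container sh            # open an interactive shell inside it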

Chapter 3: Building with Docker

In this chapter, I will walk you through building dockerized applications with Docker.

You will learn how to install Docker on different operating systems and, most importantly, you will build a containerized JavaScript application by putting into practice everything we learned in the previous chapters.

Lastly, you will learn how to use Docker Compose to define and run multi-container Docker applications.

If you're as excited as I am, let's dive right in.

How to Dockerize a JavaScript Application

In this lesson, you will learn how to Dockerize a simple JavaScript project. The project is a simple Express 5 Todo API that I have already built.

Here’s the Postman result of the API:

[Image: Postman result of the Todo API]

If you want to follow along with creating the project, you can read Express 5: The Ultimate Guide for the step-by-step instructions, or you can clone the repository from GitHub.

Create a Dockerfile

Create a Dockerfile in the root directory of the Express 5 Todo API project and add the following script:

# Create from a base image
FROM node:16.17.0-alpine

# Set working directory
WORKDIR /usr/src/app

# Copy Package.json files
COPY package*.json ./

# Install production-only dependencies
RUN npm ci --production

# Copy all files now
COPY . .

# Expose port 3000 to your host machine
EXPOSE 3000

# Run Npm Start command
CMD [ "npm", 'start']

Look at the comments in the script for the explanation.

After creating your Dockerfile and adding the necessary instructions to create a container, the next step is to build the image.

Build the image

We will use the docker build or docker image build command we explored above to build the image. Open your terminal and type in the following command in the directory where the Dockerfile is located:

docker build -t my-express-app .

When you enter the command, Docker will start pulling the base image and setting up your container for you. You will see a result similar to the one below if everything is successful.

[Image: docker build command output]

The next step is to run the container that we've created.

Running the image

As we have already explored, you can use the Run command to run any image. Type the following command into your terminal to run your image:

docker run --name my-express-app-container -it -p 4000:3000 my-express-app

If this works, you should be greeted with a screenshot as shown below:

[Image: running container preview]

As you can see, inside the container the port is 3000, but we have mapped that port to 4000 on our host system. So to preview the API in the browser or access it with Postman, we will use localhost:4000.

If everything works properly, visiting localhost:4000 should show you a result similar to the one below:

[Image: Docker project result preview]

We have now practically dockerized a simple JavaScript application: we created a Docker image using a Dockerfile, built it, and ran it as a container.

Furthermore, you can practice all the commands we have listed above to master each of them, because they will come in handy as you build more complex applications.

Next, let's look at how to dockerize a full-stack application that includes a frontend, a backend, and a database. This will help us understand how to dockerize complex applications using Docker Compose.

Chapter 4: Advanced Docker

In this advanced Docker guide, you will learn about:

  • Docker Compose

  • Docker Volumes/Storage

  • Docker Build Pattern

  • Caching and Layers in Docker

  • Docker Tags and Versioning

  • Docker Registering

  • Docker Networks

  • Docker Swarm

  • Dockerize FullStack App


Introduction to Docker Compose

In the examples so far, we have only dockerized a single application. Now let's imagine we want to dockerize a complete full-stack application that includes a backend, a frontend, and a database.

How do we achieve this? Traditionally, you might say: why not create a different Dockerfile for each service, or use a Docker multi-stage build?

Well, that could work, depending on your strategy and whether you're ready for the stress.

However, Docker has a solution for Dockerizing complex applications like this called Docker Compose.

What is Docker Compose?

Docker Compose is a tool used to define and run multi-container Docker applications. You can define all the services that your application depends on, and Docker Compose will set up and configure each of the services and run them as one application.

Therefore, if you have a backend, a frontend, and a database for a full-stack application, you can define how all these services should work together in a Docker Compose YAML file, and Docker Compose will carry out your instructions from that file.

Docker Compose works in all environments including development, staging, production, testing, and also CI workflows. It has simple commands to manage the whole lifecycle of your application infrastructure such as:

  • Stopping, starting, and rebuilding all services with a single command.

  • Viewing the status of your running services.

  • Streaming the log output of running services.

  • Running a one-off command on a service.

Key Features of Docker Compose

Here are some of the key features when using Docker Compose to manage your application services.

  • Docker Compose allows you to have multiple isolated environments on a single host.

  • It allows you to preserve volume data when containers are created and recreated.

  • If you have multiple services and you make changes to one, Docker Compose will only recreate the containers that have changed.

  • Docker Compose supports variables, so you can move a composition between environments such as dev, staging, and prod.

Let's look at an example: a full-stack project I developed that includes a backend API service, a frontend service built with React, and a Postgres database service.

Installing Docker Compose

If you followed the instructions above to install Docker on Windows or Mac, then Docker Compose is already installed for you. However, if you're on Linux and Docker Compose is not installed, you need to follow the instructions in the official documentation to install it.

If you're unsure, type the following command in your terminal to check whether you already have Docker Compose installed:

docker compose version

You should be greeted with the version of Docker Compose currently installed on your local machine. Below is the version of Docker Compose I will be using on a Mac machine.

Docker Compose version v2.17.3

Now that we have installed Docker Compose, we are ready to dockerize our full-stack JavaScript application.

Dockerizing a Full Stack App with Docker Compose

To dockerize our full-stack application using Docker Compose, we will create a docker-compose.yaml file in the root directory of our application and define all the services that make up our full-stack application.

In our case, we have a full stack as shown in the demo video below:

[video here]

The application contains three services: the backend, the frontend, and a PostgreSQL database. You can clone the project from this repository or structure your project as shown in the screenshot below:

[Image: docker compose project structure]

Create a Generic Docker Image

Now, let’s explore the content of the Dockerfile first before we look at the content of the docker-compose.yaml file.

FROM node:14.15.0

ARG PACKAGE_PATH=
ARG WORKING_DIR=

WORKDIR ${WORKING_DIR}

COPY ${PACKAGE_PATH}/package*.json ${WORKING_DIR}

RUN npm install --silent

COPY ${PACKAGE_PATH} ${WORKING_DIR}

VOLUME $WORKING_DIR/node_modules

CMD [ "npm", "start" ]

This Dockerfile is used for both the frontend and the backend, since both are JavaScript applications and depend on Node.js as the base image.

First, I declared build arguments to hold different PACKAGE_PATH and WORKING_DIR values based on what is supplied by each service.

Next, I set the WORKDIR to the supplied WORKING_DIR and COPY the package files from the location supplied in the PACKAGE_PATH argument.

Next, I run npm install --silent and copy the remaining files. Lastly, I create a volume to mount node_modules based on the WORKING_DIR supplied, and start the application with the npm start command.

The Dockerfile is the same as the ones we used earlier to create Docker images, except that we are making it generic enough to accept two different Node applications using Docker ARG arguments.

We haven’t discussed Docker Volumes in this guide but you can visit the Docker Content Hub for more advanced topics like Volumes.
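As an aside, if you wanted to build this generic image by hand rather than through Docker Compose, you could supply the two arguments with the --build-arg flag (the image names here are just examples):

docker build -t fullstack-api --build-arg PACKAGE_PATH=api --build-arg WORKING_DIR=/usr/src/ .
docker build -t fullstack-frontend --build-arg PACKAGE_PATH=frontend --build-arg WORKING_DIR=/usr/src/ .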

Create a Docker Compose file

Now, let’s explore the content of the docker-compose.yaml file. Open the file or create a new one and add the following code.


services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        PACKAGE_PATH: api
        WORKING_DIR: /usr/src/
    expose:
      - 8000
    ports:
      - 8000:8000
    environment:
      - NODE_ENV=development
      - TOKEN_SECRET=mYsEcReT
      - PORT=8000
      - TOKEN=eyJhbGciOiJIUzI1NiJ9.c29sb21vbg.WsTPoPpKhR7fFW0KsfN0A3II5M_iKLPpbnV2OOJOKcc
      - BASE_URL=http://api:8000

    env_file:
      - ./common.env
    volumes:
      - ./api:/usr/src
    depends_on:
      - postgres

    command: >
      sh -c "npm install knex -g &&
             npx knex migrate:latest &&
             npx knex seed:run &&
             npm start &&
             npm test"

  frontend:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        PACKAGE_PATH: frontend
        WORKING_DIR: /usr/src/
    expose:
      - 3000
    ports:
      - 3000:3000
    environment:
      - REACT_APP_ENV=production
      - REACT_APP_BACKEND=http://0.0.0.0:8000/graphql
      - NODE_PATH=/usr/src/
      - REACT_APP_TOKEN=eyJhbGciOiJIUzI1NiJ9.c29sb21vbg.WsTPoPpKhR7fFW0KsfN0A3II5M_iKLPpbnV2OOJOKcc

    env_file:
      - ./common.env
    volumes:
      - ./frontend:/usr/src
    depends_on:
      - api
    command: ["npm", "start"]

  postgres:
    image: postgres
    restart: always
    env_file: ./common.env

Let me walk you through the nitty-gritty of this code snippet.

Before we continue with the code walkthrough, here's the content of the common.env file, which all our services use. Add the following code to your common.env file; it will be used to set up your Postgres instance.

DB_CLIENT=postgresql
POSTGRES_DB=strapi_test_db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_PORT=5432
POSTGRES_HOST=postgres

BASE_URL=http://{DOCKER_IP}:8000

N.B.: The content of this .env file is shown only for demonstration. Never expose your environment variables or push them to Git.

Code Walkthrough

The Compose file is a YAML file defining services, networks, and volumes for a Docker application. The file is split into services, their properties, and the instructions to successfully create a container out of the service.

Services

The main section of the file defines the services. Here we define all the services that make up our application; in our case, the backend API, the frontend, and a PostgreSQL database, defined in that order.

  1. First, we have the API service:

  api:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        PACKAGE_PATH: api
        WORKING_DIR: /usr/src/
    expose:
      - 8000
    ports:
      - 8000:8000
    environment:
      - NODE_ENV=development
      - TOKEN_SECRET=mYsEcReT
      - PORT=8000
      - TOKEN=eyJhbGciOiJIUzI1NiJ9.c29sb21vbg.WsTPoPpKhR7fFW0KsfN0A3II5M_iKLPpbnV2OOJOKcc
      - BASE_URL=http://api:8000

    env_file:
      - ./common.env
    volumes:
      - ./api:/usr/src
    depends_on:
      - postgres

    command: >
      sh -c "npm install knex -g &&
             npx knex migrate:latest &&
             npx knex seed:run &&
             npm start &&
             npm test"

To create a container, we need an image. Therefore, in the build section of the API service, we pass . to the context property, telling Docker Compose to use the current directory as the build root, where it can find the Dockerfile we created earlier, and we pass the arguments to it as discussed above.

Here’s the snippet for the build section. This section will build an image out of the Dockerfile with the right values passed to the ARG arguments.

 build:
      context: .
      dockerfile: Dockerfile
      args:
        PACKAGE_PATH: api
        WORKING_DIR: /usr/src/

Next, we expose the container port, 8000, using the expose property, and then map it to port 8000 on the host using the ports property. This means the container exposes port 8000 and we map it to port 8000 on our host machine, so localhost:8000 will serve our API endpoint.

Next, we define some environment variables that are not sensitive and use the env_file property to define sensitive environment variables or common environment variables that will be used in all our services.

Next, we mount the local ./api folder into the /usr/src folder inside our container using the volumes property. Volumes are a mechanism for persisting and sharing data generated and used by Docker containers.
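Note that ./api:/usr/src is a bind mount: a host directory mapped directly into the container, which is what makes code changes on the host visible inside it. Docker also supports named volumes that it manages itself, which you can explore with the volume commands (my-volume is a placeholder name):

docker volume ls                  # list the named volumes Docker manages
docker volume inspect my-volume   # show where a volume's data lives on the host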

Next, we use the depends_on property to tell Docker Compose that the api service depends on the postgres service, which means the api service will not start until the postgres service has been started.

Lastly, we use the command property to run all the commands we need to set up our backend. First, we install knex globally because we need it to run our database migrations and seeders. Then we start our backend server and run tests with the npm start and npm test commands respectively.

  2. Second, we have the Frontend service:

The next service is the frontend service, which is almost the same as the api service, except that we change the PACKAGE_PATH and WORKING_DIR values, map port 3000 instead, define its own environment variables, and make it depend on the api service.

  3. Third, we have the Postgres service:

The last service is the postgres service. Here we don't need the build step, because we are using the official postgres image from Docker Hub. We set it to always restart if it fails at some point, because our backend service depends on it. Lastly, we pass the common.env file to the env_file property, which contains the variables needed to set up a new PostgreSQL instance.

postgres:
    image: postgres
    restart: always
    env_file: ./common.env
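With the Dockerfile, docker-compose.yaml, and common.env in place, you can manage the whole stack with a few Compose commands, run from the directory containing docker-compose.yaml:

docker compose up --build -d   # build the images and start all three services in the background
docker compose ps              # check the status of the running services
docker compose logs -f api     # stream the logs of a single service
docker compose down            # stop and remove the containers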

Conclusion: Docker

Don't just learn Docker. Learn the concept of containerization as well.

Whenever you're ready

There are 4 ways we can help you become a great backend engineer:

The MB Platform

Join 1000+ backend engineers learning backend engineering. Build real-world backend projects, learn from expert-vetted courses and roadmaps, track your learnings and set schedules, and solve backend engineering tasks, exercises, and challenges.

The MB Academy

The “MB Academy” is a 6-month intensive Advanced Backend Engineering BootCamp to produce great backend engineers.

Join Backend Weekly

If you like posts like this, you will absolutely enjoy our exclusive weekly newsletter, sharing exclusive backend engineering resources to help you become a great backend engineer.

Get Backend Jobs

Find over 2,000+ Tailored International Remote Backend Jobs or Reach 50,000+ backend engineers on the #1 Backend Engineering Job Board
