Table of contents
- Docker Volume
- Docker Network
- Tasks:
- Task 1:
- Create a multi-container docker-compose file that will bring UP and bring DOWN containers in a single shot ( Example - Create application and database container )
- Use the docker-compose up command with the -d flag to start a multi-container application in detached mode.
- We will use the docker-compose ps command to view the status of all containers
- We will use docker-compose logs to view the logs of a specific service.
- We will use the docker-compose down command to stop and remove all containers, networks, and volumes associated with the application.
- Task 2:
- Use Docker Volumes and Named Volumes to share files and directories between multiple containers.
- Create two or more containers that read and write data to the same volume.
- Verify that the data is the same in all containers by using the docker exec command to run commands inside each container.
- Use the docker volume ls command to list all volumes and the docker volume rm command to remove the volume when you're done.
- Creating a Docker Volume for a Project in Git and mounting the volume to the project.
Docker Volume
Whenever we create and work on a container, there is a chance that the container crashes and all our data is lost. This is not ideal for many applications, so Docker provides ways to persist data outside the container. The first is the bind mount, which binds a location on the host's disk to a location on the container's disk, so that even if the container is lost, the data can still be retrieved from the host system. With bind mounts, you need to set up the directories and manage them yourself. The second is the volume. Volumes are like virtual hard drives managed by Docker itself: Docker handles storing them on disk, and it is easy to create and remove them using the Docker CLI.
Volumes work on both Linux and Windows containers, can be more safely shared among multiple containers, and can even be stored on remote hosts or with cloud providers.
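As a quick sketch of the two approaches above (the container names, image name, and paths here are hypothetical, for illustration only):

```shell
# Named volume: Docker picks and manages the storage location itself
docker volume create app-data
docker run -d --name app1 -v app-data:/data some-image

# Bind mount: you choose and manage the host directory yourself
mkdir -p /home/ubuntu/app-data
docker run -d --name app2 -v /home/ubuntu/app-data:/data some-image
```

Both containers see their data at /data; the difference is only in who owns the host-side location.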
Docker Network
Docker allows you to create virtual spaces called networks, where you can connect multiple containers (small packages that hold all the necessary files for a specific application to run) together. This way, the containers can communicate with each other and with the host machine (the computer on which Docker is installed). When we run a container, it gets its own network namespace that is isolated from other containers by default; to let containers talk to each other, we connect them to the same network. Docker networks configure communication between neighbouring containers and external services. Containers must be connected to a Docker network to receive any network connectivity.
There are seven different types of Docker networks:
- default bridge - When Docker starts, a default bridge network is created automatically, and newly-started containers connect to it unless otherwise specified. Containers on the default bridge network can only access each other by IP address, unless the --link option (which is considered legacy) is used. Ex- docker run -d --name container-name image-name
- host - Containers that use the host network mode share the host's network stack without any isolation. They aren't allocated their own IP addresses, and port binds are published directly to the host's network interface. For instance, if we run a container that binds to port 80 and we use host networking, the container's application is available on port 80 on the host's IP address. The host network is useful for performance optimization and in situations where a container needs to handle a large range of ports. Ex- docker run -d --name container-name --network host image-name
- overlay - The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely when encryption is enabled.
- IPvLAN - IPvLAN is an advanced driver that offers precise control over the IPv4 and IPv6 addresses assigned to your containers, as well as layer 2 and 3 VLAN tagging and routing.
- macvlan - macvlan is another advanced option that allows containers to appear as physical devices on your network. It works by assigning each container in the network a unique MAC address.
- custom bridge - Creates a private network on the host machine, allowing containers to communicate with each other using container names as host names. Containers on the same custom bridge network can also connect to each other via IP address. Ex- docker network create my-network to create a custom bridge network named my-network, then docker run -d --name container-name --network my-network image-name to run the container inside the newly created custom network.
- none - The none network mode disables networking for a container. Containers running in this mode have no network interfaces and are completely isolated from the network. It is useful when we want to run a container in a fully isolated environment. Ex- docker run -d --name container-name --network none image-name
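To see which networks exist on a host and which containers are attached to a given one, the standard inspection commands can be used (bridge, host, and none are the networks Docker creates by default):

```shell
# List all networks on this host
docker network ls

# Show details of the default bridge network, including connected containers
docker network inspect bridge
```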
Tasks:
Task 1:
Create a multi-container docker-compose file that will bring UP and bring DOWN containers in a single shot ( Example - Create application and database container )
We created a docker-compose file in our previous article which contains the configuration for running the backend and database containers. Let's view the docker-compose.yml file using cat docker-compose.yml.
ubuntu@ip-172-31-40-139:~/dockerProjects/two-tier-app/two-tier-flask-app$ cat docker-compose.yml
version: '3'
services:
backend:
build:
context: .
ports:
- "5000:5000"
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: test@123
MYSQL_DB: twotier_database
depends_on:
- mysql
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: test@123
MYSQL_DATABASE: twotier_database
MYSQL_USER: devops
MYSQL_PASSWORD: devops
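If we wanted the MySQL data to survive a docker-compose down, a named volume could be added to the compose file. A minimal sketch (the mysql-data name is an assumption, not part of the original file):

```yaml
# Sketch: persisting MySQL data with a named volume (hypothetical addition)
services:
  mysql:
    image: mysql:5.7
    volumes:
      - mysql-data:/var/lib/mysql   # data survives container removal

volumes:
  mysql-data:                        # named volume managed by Docker
```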
Use the docker-compose up command with the -d flag to start a multi-container application in detached mode.
We will use the docker-compose up -d command to start the multi-container application in detached mode.
ubuntu@ip-172-31-40-139:~/dockerProjects/two-tier-app/two-tier-flask-app$ docker-compose up -d
Creating network "two-tier-flask-app_default" with the default driver
Creating two-tier-flask-app_mysql_1 ... done
Creating two-tier-flask-app_backend_1 ... done
We will use the docker-compose ps command to view the status of all containers; plain docker ps works too, as shown below.
ubuntu@ip-172-31-40-139:~/dockerProjects/two-tier-app/two-tier-flask-app$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c8d17a1e107 two-tier-flask-app_backend "python app.py" 17 seconds ago Up 16 seconds 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp two-tier-flask-app_backend_1
a2ebcc8bdbc0 mysql:5.7 "docker-entrypoint.s…" 17 seconds ago Up 16 seconds 3306/tcp, 33060/tcp two-tier-flask-app_mysql_1
We will use docker-compose logs <service-name> to view the logs of a specific service; docker logs with the container ID, as below, shows the same output.
ubuntu@ip-172-31-40-139:~/dockerProjects/two-tier-app/two-tier-flask-app$ docker logs 9c8d17a1e107
* Serving Flask app 'app' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:5000
* Running on http://172.22.0.3:5000
Press CTRL+C to quit
* Restarting with stat
* Debugger is active!
* Debugger PIN: 110-543-738
We will use the docker-compose down command to stop and remove all containers and networks associated with the application. Note that named volumes are kept by default; add the -v flag to remove them as well.
ubuntu@ip-172-31-40-139:~/dockerProjects/two-tier-app/two-tier-flask-app$ docker-compose down
Stopping two-tier-flask-app_backend_1 ... done
Stopping two-tier-flask-app_mysql_1 ... done
Removing two-tier-flask-app_backend_1 ... done
Removing two-tier-flask-app_mysql_1 ... done
Removing network two-tier-flask-app_default
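Since docker-compose down leaves named volumes in place by default, the -v flag is needed to remove them along with the containers and networks:

```shell
# Also remove named volumes declared in the compose file
docker-compose down -v
```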
Task 2:
Use Docker Volumes and Named Volumes to share files and directories between multiple containers.
Firstly, we will create a folder named volumes, and inside it we will create a Docker volume using docker volume create --name=<volume-name>
ubuntu@ip-172-31-47-73:~$ docker volume create --name=my-volume
my-volume
Now, we will create two containers, container1 and container2, using the previously created named volume: docker run -d --name <container-name> -v <volume-name>:/app <image-name:tagname>
Here, -d runs the container in detached mode, --name assigns a name to the container, and -v mounts the named volume at a directory within the container.
ubuntu@ip-172-31-47-73:~$ docker run -d --name container1 -v my-volume:/app node-todo:latest
3a8bcd6fa73e3e47e4ebdaab64c09ce5e2188be1d4d4cd953c7a8d674417fbee
ubuntu@ip-172-31-47-73:~$ docker run -d --name container2 -v my-volume:/app node-todo:latest
d9b7a53905bbbf5a02823dc04602475e97c7ec39a7f528e7e13c01fb9369a0b4
ubuntu@ip-172-31-47-73:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9b7a53905bb node-todo:latest "node app.js" 3 seconds ago Up 2 seconds 8000/tcp container2
3a8bcd6fa73e node-todo:latest "node app.js" 14 seconds ago Up 12 seconds 8000/tcp container1
Create two or more containers that read and write data to the same volume.
As seen in the example above, the two containers share the same volume, my-volume. We will write data in one container and read it in the other to see that the values are the same in both.
Verify that the data is the same in all containers by using the docker exec command to run commands inside each container.
As we can see above, the two containers are created and running. Let's get inside the first container using the exec command and add some data to a file.
ubuntu@ip-172-31-47-73:~$ docker exec -it container1 sh
/app echo "Hello from Container1" > test.txt
/app exit
Now, we will go inside the second container and view the file test.txt.
ubuntu@ip-172-31-47-73:~$ docker exec -it container2 sh
/app ls
Dockerfile app.js package-lock.json test.js
Jenkinsfile docker-compose.yaml package.json test.txt
README.md node_modules sonar-project.properties views
/app cat test.txt
Hello from Container1
/app exit
We see that the same data is shared between both containers: although we made the change in container1, it is present in container2 as well.
Use the docker volume ls command to list all volumes and the docker volume rm command to remove the volume when you're done.
We can list the created volumes using the docker volume ls command below-
ubuntu@ip-172-31-40-139:~/volumes$ docker volume ls
DRIVER VOLUME NAME
local my-volume
local shared-volume
A created Docker volume can be removed using docker volume rm <volume-name>. We can see we have two volumes, and when we give the remove command, it removes shared-volume.
ubuntu@ip-172-31-40-139:~/volumes$ docker volume rm shared-volume
shared-volume
ubuntu@ip-172-31-40-139:~/volumes$ docker volume ls
DRIVER VOLUME NAME
local my-volume
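To check where Docker actually stores a volume on disk, docker volume inspect can be used; on Linux the Mountpoint it reports is typically under /var/lib/docker/volumes:

```shell
# Shows the driver, creation time, and the host path backing the volume
docker volume inspect my-volume
```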
Creating a Docker Volume for a Project in Git and mounting the volume to the project.
Let's create a folder named django-app, clone a project from Git into it, and mount the project to a locally created volume.
ubuntu@ip-172-31-40-139:~/dockerProjects/django-app$ git clone https://github.com/LondheShubham153/django-todo-cicd.git
Cloning into 'django-todo-cicd'...
remote: Enumerating objects: 344, done.
remote: Counting objects: 100% (63/63), done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 344 (delta 48), reused 37 (delta 37), pack-reused 281
Receiving objects: 100% (344/344), 130.78 KiB | 6.23 MiB/s, done.
Resolving deltas: 100% (169/169), done.
When we go inside the folder and give the ls command, we can see the existing files inside it. Let's view the Dockerfile by giving cat Dockerfile.
ubuntu@ip-172-31-40-139:~/dockerProjects/django-app/django-todo-cicd$ ls
Dockerfile LICENSE README.md db.sqlite3 docker-compose.yml k8s manage.py staticfiles todoApp todos volume
ubuntu@ip-172-31-40-139:~/dockerProjects/django-app/django-todo-cicd$ cat Dockerfile
FROM python:3
WORKDIR /data
RUN pip install django==3.2
COPY . .
RUN python manage.py migrate
EXPOSE 8000
CMD ["python","manage.py","runserver","0.0.0.0:8000"]
Let's build an image from the Dockerfile by giving docker build . -t django-app
ubuntu@ip-172-31-40-139:~/dockerProjects/django-app/django-todo-cicd$ docker build . -t django-app
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
Sending build context to Docker daemon 551.9kB
Step 1/7 : FROM python:3
3: Pulling from library/python
de4cac68b616: Pull complete
d31b0195ec5f: Pull complete
9b1fd34c30b7: Pull complete
c485c4ba3831: Pull complete
9c94b131279a: Pull complete
4bc8eb4a36a3: Pull complete
470924304c24: Pull complete
8999ec22cbc0: Pull complete
Digest: sha256:02808bfd640d6fd360c30abc4261ad91aacacd9494f9ba4e5dcb0b8650661cf5
Status: Downloaded newer image for python:3
---> 28d8ca9ad96d
Step 2/7 : WORKDIR /data
---> Running in 87d062db5679
Removing intermediate container 87d062db5679
---> 5e898ca38e0c
Step 3/7 : RUN pip install django==3.2
---> Running in cc5afab708c5
Collecting django==3.2
Downloading Django-3.2-py3-none-any.whl (7.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.9/7.9 MB 7.0 MB/s eta 0:00:00
Collecting asgiref<4,>=3.3.2 (from django==3.2)
Obtaining dependency information for asgiref<4,>=3.3.2 from https://files.pythonhosted.org/packages/9b/80/b9051a4a07ad231558fcd8ffc89232711b4e618c15cb7a392a17384bbeef/asgiref-3.7.2-py3-none-any.whl.metadata
Downloading asgiref-3.7.2-py3-none-any.whl.metadata (9.2 kB)
Collecting pytz (from django==3.2)
Downloading pytz-2023.3-py2.py3-none-any.whl (502 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 502.3/502.3 kB 20.5 MB/s eta 0:00:00
Collecting sqlparse>=0.2.2 (from django==3.2)
Downloading sqlparse-0.4.4-py3-none-any.whl (41 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.2/41.2 kB 4.7 MB/s eta 0:00:00
Downloading asgiref-3.7.2-py3-none-any.whl (24 kB)
Installing collected packages: pytz, sqlparse, asgiref, django
Successfully installed asgiref-3.7.2 django-3.2 pytz-2023.3 sqlparse-0.4.4
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Removing intermediate container cc5afab708c5
---> 142b3e1de1f5
Step 4/7 : COPY . .
---> be4210011557
Step 5/7 : RUN python manage.py migrate
---> Running in f596a3f4e41d
System check identified some issues:
WARNINGS:
todos.Todo: (models.W042) Auto-created primary key used when not defining a primary key type, by default 'django.db.models.AutoField'.
HINT: Configure the DEFAULT_AUTO_FIELD setting or the TodosConfig.default_auto_field attribute to point to a subclass of AutoField, e.g. 'django.db.models.BigAutoField'.
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions, todos
Running migrations:
No migrations to apply.
Removing intermediate container f596a3f4e41d
---> 89b77ed0b7ca
Step 6/7 : EXPOSE 8000
---> Running in 37282e0501c6
Removing intermediate container 37282e0501c6
---> 2c6f70ff4132
Step 7/7 : CMD ["python","manage.py","runserver","0.0.0.0:8000"]
---> Running in 09bd7942ba3f
Removing intermediate container 09bd7942ba3f
---> 1ed8367846a5
Successfully built 1ed8367846a5
Successfully tagged django-app:latest
Check whether the image is created or not by giving docker images. We can see django-app is created.
ubuntu@ip-172-31-40-139:~/dockerProjects/django-app/django-todo-cicd$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
django-app latest 1ed8367846a5 About a minute ago 1.06GB
python 3 28d8ca9ad96d 8 days ago 1.01GB
Now, let's create another folder named django-app-volume which we will mount to the project. Use docker volume create --name=<volume-name> --opt type=none --opt device=<location where the volume has to be stored> --opt o=bind. We can see a new volume has been created named django-app-volume.
ubuntu@ip-172-31-40-139:~/dockerProjects/volumes/django-app-volume$ docker volume create --name=django-app-volume --opt type=none --opt device=/home/ubuntu/dockerProjects/volumes/django-app-volume --opt o=bind
django-app-volume
ubuntu@ip-172-31-40-139:~/dockerProjects/volumes/django-app-volume$ docker volume ls
DRIVER VOLUME NAME
local django-app-volume
Now, we will create and run a container using the newly created image and mount the volume into it. Use docker run -d --mount source=<volume-name>,target=<path-in-container> -p <host-port>:<container-port> <image>:<tagname>
ubuntu@ip-172-31-40-139:~/dockerProjects/volumes/django-app-volume$ docker run -d --mount source=django-app-volume,target=/data -p 8000:8000 django-app:latest
2b463c58313867d13d1f4fd143ef242953aa1d3cbb882696c8b0c6f62409fe05
Now, if we give the ls command inside the django-app-volume folder, we can see the project's files in it.
ubuntu@ip-172-31-40-139:~/dockerProjects/volumes/django-app-volume$ ls
db.sqlite3 k8s manage.py staticfiles todoApp todos
This shows that the data present in the running container gets copied to the volume. This way we can always create a backup of our data so that if by any chance the data is lost, the user can retrieve the data from the host system.
Thanks!
Happy Learning!
~Shilpi