"Streamline Your Docker Workflow: A Guide to Docker Volumes, Compose, and Multi-Container Deployment"

"Streamline Your Docker Workflow: A Guide to Docker Volumes, Compose, and Multi-Container Deployment"

1- What is Docker Compose?

Docker Compose is a tool that helps you run and manage multiple Docker containers as a single application.

In simpler terms, imagine you have a web application that consists of a web server, a database, and a caching server. Instead of manually starting each container one by one, you can use Docker Compose to define all the services in a single configuration file and start them with a single command.

Docker Compose also allows you to specify dependencies between services, manage environment variables and volumes, and scale your application up and down as needed.

Overall, Docker Compose simplifies the process of managing complex applications that require multiple Docker containers by providing an easy-to-use interface to define, run, and manage them.
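
For illustration, here is a minimal docker-compose.yaml sketch along the lines described above, assuming an Nginx web server, a MySQL database, and a Redis cache (the image names and password are placeholders):

version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:          # start the database and cache before the web server
      - db
      - cache
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example    # placeholder password
    volumes:
      - db_data:/var/lib/mysql        # persist database files in a named volume
  cache:
    image: redis:latest
volumes:
  db_data:

With a file like this in place, docker-compose up -d starts all three services with a single command.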

2- What is a Docker volume?

A Docker volume is a way to store and manage persistent data in Docker containers.

In simpler terms, when you run a Docker container, any data that the container creates or modifies is typically stored within the container's file system. However, this data is lost when the container is stopped or deleted. Docker volumes provide a way to persist data beyond the lifetime of a container.

A Docker volume is a directory that exists outside of the container's file system but can be mounted to the container as if it were part of the file system. This allows data to be written to the volume and accessed by the container even after the container is stopped or deleted.

Docker volumes can be used to store database files, logs, configuration files, and any other data that needs to persist across container restarts or updates. They can also be shared between multiple containers, allowing data to be easily exchanged and managed.

Overall, Docker volumes are a powerful tool for managing persistent data in Docker containers and are a key feature in building scalable and reliable containerized applications.
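
As a small sketch (the volume and container names here are only illustrative), creating and using a named volume looks like this:

docker volume create mydata           # create a named volume
docker run -d --name mydb \
  -e MYSQL_ROOT_PASSWORD=example \
  -v mydata:/var/lib/mysql \
  mysql:8.0                           # mount the volume at MySQL's data directory
docker volume inspect mydata          # show where the volume's data lives on the host

Even if the mydb container is removed, the data written to mydata remains and can be mounted into a new container.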

------------------TASK--------------------------

Create a multi-container docker-compose file:

1- First, create an instance (server), or do it on your own system. Open the terminal and run the commands to update the package index and install Docker.
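
On an Ubuntu/Debian instance (an assumption; other distributions use different package managers), the update and installation typically look like this:

sudo apt-get update                   # refresh the package index
sudo apt-get install -y docker.io     # install the Docker engine
sudo systemctl enable --now docker    # start Docker and enable it on boot
docker --version                      # verify the installation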

2- Run the command to grant permissions. The command sudo usermod -aG docker $USER adds the current user to the docker group on a Linux system. In general, you don't need to reboot the system after adding a user to a group, but you do need to log out and log back in again for the change to take effect.
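
A quick sketch of the permission step:

sudo usermod -aG docker $USER   # add the current user to the docker group
newgrp docker                   # start a shell with the new group applied (or log out and back in)
docker ps                       # should now work without sudo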

3- Then install Docker Compose. It is not included in the default Docker installation because it is a separate tool that provides functionality beyond the core Docker engine.
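
On Ubuntu/Debian (again assuming that distribution), one simple way to install it is via the package manager:

sudo apt-get install -y docker-compose   # install Docker Compose
docker-compose --version                 # verify the installation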

4- Next, create a docker-compose.yaml file using Vim. Here I am using a Docker image from my Docker Hub account for the build.
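
A minimal sketch of such a file; the image name is a placeholder for whatever you pushed to your Docker Hub account, and port 8000 matches the port used later in this walkthrough:

version: "3.8"
services:
  react-django-app:
    image: <your-dockerhub-username>/react-django-app:latest   # placeholder image name
    ports:
      - "8000:8000"   # expose the application on port 8000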

5- The next step is to bring up the docker-compose file by running the command docker-compose up -d.

6- After the command completes successfully, check for running containers with docker ps or docker-compose ps.

7- To inspect: the docker inspect command retrieves detailed information about a Docker container, image, or network.
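
For example (the container ID is a placeholder):

docker inspect <container_id>                                               # full JSON details
docker inspect --format '{{ .State.Status }}' <container_id>                # just the container state
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container_id>   # the container's IP address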

8- Now go to the instance's security group and add an inbound rule for port 8000, because the container is built to expose port 8000, as shown in the YAML file. Then save the rules.

9- Now copy the instance's IP address and append port 8000 (http://<instance-ip>:8000). We can see that the application is up and running.

10- Now we can check the Docker logs with docker logs <container_id>.

The docker logs command is used to view the logs generated by a Docker container. When you run docker logs with the name or ID of a container, it returns the output generated by the container's stdout (standard output) and stderr (standard error) streams.
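
A few useful variants (the container ID is a placeholder):

docker logs <container_id>              # print the logs collected so far
docker logs -f <container_id>           # follow the logs in real time
docker logs --tail 100 <container_id>   # show only the last 100 lines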

11- We can stop and start the Docker container using the commands shown in the image below: docker stop <container_id> && docker start <container_id>.

12- We can remove the Docker container with docker rm <container_id>. The container needs to be stopped first and then removed, as shown in the image below, or it can be removed directly by adding the -f option to force the removal of a running container.
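
Both variants, side by side:

docker stop <container_id> && docker rm <container_id>   # stop, then remove
docker rm -f <container_id>                              # force-remove a running container in one step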

13- Create a volume to mount with the container and other services -

Go to the home directory and create a volume directory, and under it create another directory named after the service; in my case it is react-django. This is where we can create a volume to mount.

After creating the volume, and before mounting it to the container, the container needs to be stopped, as shown in the image.

It's important to stop the container before mounting a volume to it. By stopping the container, you ensure that the contents of the container's directory are not changing while the volume is being mounted. Once the volume is mounted and the container is started again, any changes made to the container's directory will be saved to the mounted volume, ensuring that the data persists between container restarts.
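
A sketch of this step, assuming a named volume called react-django (the names are whatever you chose; note that a plain named volume keeps its data under /var/lib/docker/volumes/<name>/_data rather than in the home directory created above):

mkdir -p ~/volumes/react-django     # host directory created as described above
docker volume create react-django   # named volume referenced by --mount in the next step
docker volume ls                    # confirm the volume exists
docker stop <container_id>          # stop the running container before remounting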

14- Run the container again by mounting it to the volume -

docker run -d -p <host_port>:<container_port> --mount source=<volume_name>,target=<container_path> <image_id>
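
Filled in with the values used in this walkthrough (the image ID is a placeholder):

docker run -d -p 8000:8000 --mount source=react-django,target=/app <image_id>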

15- After successfully mounting the volume to the container, go into the container using the command docker exec -it <container_id> bash. Inside the container, we see there is a directory named /app, which is mounted to the volume. To test it, cd into the app directory, create a test file, and exit. Now run ls on the react-django volume directory on the host; we see the test file is there. Next, do the same in the volume directory and observe the effect in the container's app directory. As we can see, test.txt, which was created in the volume, now appears in the app directory.
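
A condensed sketch of the test (the container ID is a placeholder, and the host-side path assumes a standard named volume):

docker exec -it <container_id> bash                  # open a shell inside the container
docker exec <container_id> touch /app/test.txt       # or create the test file non-interactively
sudo ls /var/lib/docker/volumes/react-django/_data   # the file should also appear in the volume's data directory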

Mounting is successful, as shown in the image below.

16- Now we add replicas for the container -

As the task requires, we are creating multiple Docker containers in addition to attaching a volume to them.

1- Below, we edit the file and add 2 replicas to the react-django-app so it builds 2 containers from the same image.

2- We also make some changes to the ports, since multiple containers cannot all bind to host port 8000; each replica needs its own host port.

3- We also add the volume to the YAML file, so it mounts the containers to the same existing volume.

4- Add the external flag to the volume definition, as shown in the image under step 18; I forgot to do so in this edit, but made the change afterward when adding the MySQL service. A sketch of the edited compose file follows this list.
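
A sketch of what the edited compose file might look like after these changes; the image name is a placeholder, and deploy.replicas is honored by the newer docker compose plugin (with classic docker-compose you can instead run docker-compose up --scale react-django-app=2):

version: "3.8"
services:
  react-django-app:
    image: <your-dockerhub-username>/react-django-app:latest   # placeholder image name
    deploy:
      replicas: 2              # run two containers from the same image
    ports:
      - "8000-8001:8000"       # a host-port range so each replica gets its own port
    volumes:
      - react-django:/app      # mount both containers to the same existing volume
volumes:
  react-django:
    external: true             # use the existing volume instead of creating a new one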

17- Run the docker-compose file after the changes are made; as we can see, 2 containers are built.

18- Now we add another service to the YAML file -

1- We add a MySQL service exposing port 3306:3306, since MySQL runs on this port.

2- We also add environment variables. For a database container such as MySQL, environment variables can be used to set configuration options such as the database name, username, password, and other settings needed to connect to and use the database.

3- We add the external flag too. When you define a volume with the external parameter set to true, Docker Compose assumes that the volume already exists and does not attempt to create it. Instead, it simply uses the existing volume and attaches it to the relevant containers. A sketch of the added service follows this list.
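
The relevant fragment of the compose file might look like this; the credentials are placeholders:

services:
  mysql:
    image: mysql:latest
    ports:
      - "3306:3306"                        # MySQL's default port
    environment:
      MYSQL_ROOT_PASSWORD: <root_password> # placeholder credentials
      MYSQL_DATABASE: <database_name>
      MYSQL_USER: <user>
      MYSQL_PASSWORD: <password>
volumes:
  react-django:
    external: true                         # reuse the existing volume rather than creating a new one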

19- Now run the docker-compose file with all the changes made.

As we can see from the image below, in addition to the existing containers, the MySQL service is built and added.

20- Run docker images and docker ps to check the images and containers to confirm.

21- Now we look inside the second container to check whether the volume is mounted. As we see in the image below, the /app directory exists in that container and contains the test files, but when we look inside the MySQL container, the volume is not there.

22- We stop the MySQL container and then run the command to mount the existing volume to it, as seen in the image below.
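
A sketch of this step; the container ID and password are placeholders, and mysql:latest stands in for whatever MySQL image the compose file uses:

docker stop <mysql_container_id>   # stop the MySQL container first
# re-run it with the existing volume attached at /app
docker run -d -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=<root_password> \
  --mount source=react-django,target=/app \
  mysql:latest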

23- Now we can go inside the MySQL container and see that the /app directory is there with all the test files, as shown in the image.

NOTE: It is generally recommended to use separate volumes for other services and your application data. This helps to keep your data organized and allows you to manage it separately.

Sharing the same volume between multiple services can be useful when you need to exchange data between them, for example when services need to access the same files or share a common cache or database.

24- We can use docker-compose down. This command stops and removes the containers and networks that were created by docker-compose up; named volumes are only removed if you add the -v flag.

If any container still exists after running docker-compose down, remove it either forcefully or by first stopping and then removing it.
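
The relevant variants:

docker-compose down           # stop and remove the containers and the default network
docker-compose down -v        # also remove the named volumes declared in the compose file
docker rm -f <container_id>   # force-remove any container that is still left over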

This project was an interesting way to learn Docker and to share my learning experience with you all. I have added my GitHub URL for this project; if you want to try it, please do and share your experience.

GitHub - https://github.com/amitkmr076/Docker-compose.yaml-Deplyment-Project

Linkedin - https://www.linkedin.com/in/amit-kumar-3576191b4/

Thank you for reading this blog post! I hope that it has provided you with valuable information and insights. If you have any questions or comments, please feel free to share. Thanks again for your support and interest in my work. I look forward to sharing more content with you all soon!