"Mastering Docker Swarm and Cluster: A Guide to Multi-Container Deployment with Docker-Cluster and Beyond"
1- What is Docker Swarm?
Docker Swarm is a container orchestration tool that allows you to manage a cluster of Docker hosts and run containerized applications across them. In simple terms, Docker Swarm helps you manage multiple Docker containers across multiple machines in a cluster.
With Docker Swarm, you can deploy and scale applications with ease, and ensure high availability and fault tolerance. Docker Swarm provides a range of features for managing and scheduling containers, including load balancing, service discovery, and automatic container recovery.
One of the key benefits of using Docker Swarm is that it abstracts away the underlying infrastructure and provides a simple, declarative interface for managing containers. This makes it easy to deploy and manage applications at scale, without having to worry about the complexities of the underlying infrastructure.
Overall, Docker Swarm is a powerful tool for managing containerized applications and is a popular choice for organizations looking to scale their container deployments.
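To get a feel for that declarative interface, a single command is enough to describe the desired state and let Swarm maintain it (nginx below is just an illustrative public image, not part of the project in this guide):
sudo docker service create --name web --replicas 3 --publish 8080:80 nginx
If a container or a node fails, Swarm reschedules tasks until 3 replicas are running again.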
2- What is Docker Stack or Cluster?
In this guide, "Docker Stack" and "Docker cluster" are used together to describe a group of machines that work together to run containerized applications: the cluster (swarm) is the set of machines, and a stack is the set of services you deploy onto it from a single file. Together they allow you to deploy and manage multiple containers across multiple machines as a single, unified system.
A Docker Swarm cluster typically consists of one or more manager nodes, which are responsible for coordinating and scheduling tasks across the cluster, and one or more worker nodes, which run the actual containers.
Docker Stack or Cluster provides many benefits, including high availability, fault tolerance, and scalability. By distributing containerized applications across multiple machines, you can ensure that your applications are always available, even if one or more machines fail. This also allows you to easily scale your applications as needed, by adding or removing worker nodes from the cluster.
In addition, Docker Stack or Cluster provides a simple and consistent interface for deploying and managing containerized applications, regardless of the underlying infrastructure. This makes it easy to manage large, complex systems and allows you to focus on building great applications, rather than worrying about the underlying infrastructure.
Overall, Docker Stack or Cluster is a powerful tool for managing containerized applications at scale and is a popular choice for organizations looking to build and deploy modern, cloud-native applications.
-TASK-
Create a multi-container docker-cluster file:
1- First, create multiple instances or servers. We can use 2 or more, as per requirements. Here we created 2 instances: 1-Master and 2-Worker.
2- Next, update all the servers and install Docker on each of them.
sudo apt-get update && sudo apt install docker.io
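Optionally, you can confirm Docker is installed and running on each server before continuing:
sudo systemctl status docker
docker --version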
3- Next, run the command sudo docker swarm init
on the Master (Manager) server. It will generate a Docker Swarm join token for the worker node to join the Master server, as seen in the image below.
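When the init succeeds, the manager prints a ready-made join command, roughly in this form (the token and IP below are placeholders):
Swarm initialized: current node (<node-id>) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377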
4- Next, we update the security group to open port 2377. Docker Swarm uses a token to authenticate nodes and allow them to join a swarm. By default, the join command generated above includes the manager's IP address and port 2377, which is the default port used for swarm management communication.
5- Next, copy and paste the Docker Swarm join command (with the token) on the Worker node to connect it to the Master server. Run the command with sudo, as shown in the image below.
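If the join succeeds, the worker prints a confirmation along the lines of "This node joined a swarm as a worker.", which tells you the node is now part of the cluster.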
6- The sudo docker node ls
command is used to list all the nodes in a Docker Swarm cluster. When you run it on a Docker Swarm manager node, it displays all the nodes that have joined the swarm.
The command should be run on the Master server. The output includes information about each node, such as its ID, hostname, availability status, and its current role (manager or worker).
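On the manager, the output looks roughly like this (the IDs, hostnames, and versions here are placeholders):
ID              HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
abcd1234 *      master     Ready    Active         Leader           24.x
efgh5678        worker     Ready    Active                          24.x
The * marks the node you are currently connected to.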
7- In Docker Swarm, the sudo docker info
command provides information about the Docker engine and the Swarm cluster it belongs to, including the number of nodes and managers, whether the current node is a manager, and the Swarm mode configuration settings. It also reports swarm details such as the cluster ID and the manager addresses.
8- Next, we create a Docker service with 2 replicas using a Docker Hub image:
sudo docker service create --name your-service-name --replicas 2 --publish published-port:target-port image-name
We can create as many replicas as we need; for example, I used 2.
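As a concrete illustration, a Django app image pushed to Docker Hub and listening on port 8000 might be started like this (the image and service names below are placeholders):
sudo docker service create --name django-app --replicas 2 --publish 8000:8000 your-dockerhub-username/django-app-image:latest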
9- To check the services we created, we run the command sudo docker service ls
and sudo docker ps
1- As we created 2 replicas, here we can see 2 containers replicated from the same image.
2- docker ps
shows only the 1st container on the Master server; the 2nd container is scheduled onto the Worker node by the Master (manager).
Master (Manager) Server:
Worker Node (Server):
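From the manager, you can also check which node each replica landed on (django-app here refers to the example service name used above):
sudo docker service ps django-app
This lists every task of the service together with the node it was scheduled on and its current state.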
10- Next, we add port 8000 to the security group of the Master server and of the Worker (if they use separate security groups); if both nodes share the same security group, adding the rule once covers the Worker node as well.
Why add port 8000? Because we built our container to listen on port 8000 and published that port for the service.
11- As the image below shows, the web application is running on both the Master and the Worker server.
12- In Docker Swarm, when you use the docker kill
command to stop a container that belongs to a service, Docker Swarm automatically starts a replacement container on the same node or a different node in the cluster within seconds, so the desired number of replicas is maintained. As shown in the image.
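A quick way to watch this self-healing behaviour from the node running a replica (the container ID below is a placeholder; django-app is the example service name from above):
sudo docker ps
sudo docker kill <container-id>
sudo docker service ps django-app
After the kill, docker service ps shows the old task as shut down and a new task starting to replace it.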
13- Now we create a Docker-cluster YAML file, to run multiple containers and services with just one YAML file.
The reason why Docker cluster and Docker Compose YAML files look similar is that both are used to define the deployment and configuration of multiple containers within a distributed application architecture.
Docker Compose is used for local development and deployment, where a single host is used to run multiple containers of an application.
Docker Swarm, on the other hand, is used for clustering multiple Docker hosts together to create a distributed application architecture. In a Docker Swarm cluster, the Docker Compose YAML file is used to define the desired state of the application across the entire cluster.
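As a rough sketch, a stack file for this task (a MySQL service plus the Django app) could look like the following. The image name, database name, and password here are placeholder assumptions, not the exact values from the repository linked at the end:
version: "3.8"
services:
  mysql:
    image: mysql:8.0                                          # database service
    environment:
      MYSQL_ROOT_PASSWORD: example-password                   # placeholder credential
      MYSQL_DATABASE: appdb                                   # placeholder database name
    deploy:
      replicas: 1
  django-app:
    image: your-dockerhub-username/django-app-image:latest    # placeholder image
    ports:
      - "8000:8000"                                           # publish the app port used earlier
    deploy:
      replicas: 1
Save it as docker-cluster.yaml (the filename used in the next step). docker stack deploy reads this Compose-format file and turns each top-level service into a swarm service.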
14- Next, we run the command below to deploy the cluster file:
sudo docker stack deploy -c docker-cluster.yaml stack-name
15- Now we see that all the containers and the MySQL service are running.
1- MySQL is running on the Master server.
2- And django-app is running on the Worker node.
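To confirm this from the manager, you can list the stack's services and tasks (stack-name is whatever name you passed to docker stack deploy):
sudo docker stack services stack-name
sudo docker stack ps stack-name
docker stack ps also shows which node each task is running on.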
16- Next we can create replicas by using the Scale function:
sudo docker service scale service-name=2
1- Use sudo docker service ls
to know the service name.
2- We see that django-app is replicated (scaled) to 2: one on the Master and the 2nd on the Worker node.
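Note that services deployed from a stack are prefixed with the stack name, so for the example above the scale command would look roughly like:
sudo docker service scale stack-name_django-app=2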
17- The Worker node can leave and rejoin the Docker Swarm:
1- sudo docker swarm leave
= to leave the swarm
2- sudo docker swarm join --token <token> <manager-ip>:2377
= to join the swarm again
18- To get the Docker Swarm join token, run this command on the Master node, then copy the printed join command and run it with sudo on the Worker node.
sudo docker swarm join-token worker
Github : https://github.com/amitkmr076/Docker-stack.yaml--Docker-swarm--Deployment-project
Thank you for taking the time to read my blog on Docker deployment using Swarm and Cluster. I hope you found it informative and helpful in streamlining your Docker workflow.