"Mastering Microservices with Kubernetes: A Step-by-Step Guide and Troubleshooting"

"Mastering Microservices with Kubernetes: A Step-by-Step Guide and Troubleshooting"

·

4 min read

Kubernetes Microservice Project, Along with Troubleshooting.

Step-1: First, we create two t2.medium EC2 instances for the project: one for the master node and one for the worker node.

Kubernetes is designed to be highly available and resilient, which means it needs multiple nodes for fault tolerance and high availability. When deploying a Kubernetes cluster, it is recommended to use at least two t2.medium instances for the control plane and worker nodes.

Step-2: Connect to both instances and follow the script provided to set up Kubernetes on both servers. If you need help setting up Kubernetes, follow the blog link below:

Script: https://github.com/RishikeshOps/Scripts/blob/main/k8sss.sh

Blog: https://amitblog.hashnode.dev/streamlining-your-infrastructure-a-step-by-step-guide-to-connecting-master-and-worker-nodes-in-kubernetes-and-deploying-a-django-todo-project
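
For orientation, here is a rough sketch of the kind of commands such a setup typically involves on Ubuntu with kubeadm; the linked script is the source of truth and may differ (package repository setup, container runtime choice, and versions are omitted or simplified here):

# Run on BOTH nodes: install a container runtime and the Kubernetes tools
sudo apt-get update
sudo apt-get install -y docker.io
# (Add the Kubernetes apt repository first; see the linked script/blog for details)
sudo apt-get install -y kubeadm kubelet kubectl

# Run on the MASTER node only: initialise the control plane
sudo kubeadm init

# Run on the WORKER node only: join the cluster using the exact command
# that "kubeadm init" printed on the master (token and hash will differ)
# sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>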

Step-3: Clone the project code from GitHub onto your master node.

Git code link: https://github.com/LondheShubham153/django-todo-cicd

As you can see, the code has been cloned, and the repository contains several YAML files used to build the project and run the application.
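
For example (the directory name comes from the repository URL above; the exact files you see depend on the repository's current state):

# On the master node: clone the repository and inspect the manifests
git clone https://github.com/LondheShubham153/django-todo-cicd.git
cd django-todo-cicd
ls    # deployment, service, and MongoDB-related YAML files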

Step-4: First, on the master node, we apply the deployment file, "taskmaster.yml", to create the pods.

We can check the worker node for running containers with the docker ps command.
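
Putting this step into commands (the file name comes from the repository; the -o wide flag is optional but shows which node each pod landed on):

# On the master node: create the deployment and watch the pods come up
kubectl apply -f taskmaster.yml
kubectl get pods -o wide

# On the worker node: the pods show up as Docker containers
docker ps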

To scale out or scale in, we can run the command:

kubectl scale --replicas=2 deployment/<deployment-name>

Or we can change the replicas value in the deployment file and re-apply it.
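
For reference, the replicas value sits in the Deployment spec; the sketch below is illustrative only (the names, labels, and image are placeholders, not the actual contents of taskmaster.yml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: taskmaster                       # placeholder; use the name defined in taskmaster.yml
spec:
  replicas: 2                            # scale in or out by changing this value
  selector:
    matchLabels:
      app: taskmaster
  template:
    metadata:
      labels:
        app: taskmaster
    spec:
      containers:
        - name: taskmaster
          image: example/django-todo:latest   # placeholder image
          ports:
            - containerPort: 5000             # application port (placeholder)

After editing the file, run kubectl apply -f taskmaster.yml again for the change to take effect.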

Step-5: Next, we apply the service file, the "taskmaster-svc" YAML file, so the pods are exposed on the network.

Now, we open port 30012 on the master node. Because we launched both servers together, opening the port once applies to both; otherwise, we would have to open the port on the worker node as well. Why port 30012 and not 5000? Because 30012 is the NodePort we define in the service YAML file.
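
For context, this is roughly what a NodePort service looks like; only nodePort 30012 and the 5000 application port come from this post, while the names, labels, and service port are placeholders rather than the actual contents of the repository's service file:

apiVersion: v1
kind: Service
metadata:
  name: taskmaster-svc
spec:
  type: NodePort
  selector:
    app: taskmaster            # must match the pod labels from the deployment
  ports:
    - port: 80                 # cluster-internal service port (placeholder)
      targetPort: 5000         # container port the application listens on
      nodePort: 30012          # port opened on every node of the cluster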

Copy the worker node's IP address, because the containers run on the worker node.

Open a new browser tab and paste the IP address followed by port 30012.
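
Equivalent commands (assuming the service manifest is named taskmaster-svc.yml; substitute your own worker node IP):

# On the master node: create the service and confirm the NodePort
kubectl apply -f taskmaster-svc.yml
kubectl get svc                      # PORT(S) should show something like 80:30012/TCP

# From a browser or terminal, reach the app on the worker node's NodePort
curl http://<worker-node-public-ip>:30012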

Step-6: PVs (Persistent Volumes) and PVCs (Persistent Volume Claims) in Kubernetes provide a way to manage and use persistent storage in a cluster, ensuring that data stored by containers is not lost when a pod is recreated or rescheduled.

Now we apply the persistent volume YAML file to create the storage where the data will be kept.

Apply the mongo-pv.yml file (the persistent volume file); the PV shows a status of "Available" and no claim yet.

Next, we apply a persistent volume claim file to claim the storage we created with the persistent volume.

Apply the persistent volume claim YAML file, mongo-pvc.yml.

Now, if we check the status with kubectl get pvc mongo-pvc, we can see that the status is "Bound".
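
The same step as commands (the file names and the PVC name mongo-pvc are taken from this post):

# Create the PersistentVolume; it starts out with STATUS "Available"
kubectl apply -f mongo-pv.yml
kubectl get pv

# Create the PersistentVolumeClaim; once it matches the PV, both show "Bound"
kubectl apply -f mongo-pvc.yml
kubectl get pvc mongo-pvc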

Step-7: Next, we apply the deployment file for MongoDB, "mongo.yml". Below, the MongoDB deployment file is shown using the cat command.

The MongoDB pod is created from the master node.

On the worker node, run docker ps to see the MongoDB container.

To check that MongoDB is running, we do docker exec -it container-ID bash.
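
In command form (the container ID comes from the docker ps output; which Mongo shell is available depends on the image version):

# On the master node: create the MongoDB deployment and check the pod
kubectl apply -f mongo.yml
kubectl get pods

# On the worker node: find the MongoDB container and open a shell inside it
docker ps
docker exec -it <container-ID> bash
# Inside the container, the Mongo shell (mongosh, or mongo on older images)
# can be used to confirm the database responds.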

Next, we apply the service YAML file for MongoDB, "mongo-svc.yml".

Apply the service file and check the service status with kubectl get svc; the mongo service has been created. ("svc" and "service" mean the same thing.)
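
In command form (the file name is from this post; the service name shown in the output depends on what mongo-svc.yml defines):

kubectl apply -f mongo-svc.yml
kubectl get svc        # the MongoDB service should now be listed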

Step-8: Let's do some small troubleshooting. On the worker node, run docker ps to list the containers; if we now check the taskmaster container's logs, we see an error.

The error looks like this: error code 400.

Sorry, I forgot to capture a screenshot of the error, but it looks something like this when we run docker logs container-ID.

A 400 error indicates that there is an issue with the request sent by the client and it cannot be processed by the server. Clients should review their request and ensure that it adheres to the correct syntax and format specified by the HTTP protocol.

Step-9: The simple solution is to add an "environment" section for MongoDB in the taskmaster deployment file.
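
A rough sketch of what that env section can look like in the pod template; the variable names and values below are placeholders (the names the application actually expects come from the Django todo code, and the value must point at the MongoDB service created by mongo-svc.yml in Step-7):

spec:                                   # pod template spec inside taskmaster.yml
  containers:
    - name: taskmaster
      image: example/django-todo:latest # placeholder image
      ports:
        - containerPort: 5000
      env:
        - name: MONGO_HOST              # placeholder; use the variable name the app expects
          value: <mongo-service-name>   # DNS name of the service created by mongo-svc.yml
        - name: MONGO_PORT
          value: "27017"                # default MongoDB port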

Again, check the docker logs of both taskmaster replica containers on the worker node. As we can see, it now shows "running in all addresses".
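
To roll the fix out and verify (the container IDs come from the docker ps output on the worker node):

# On the master node: re-apply the updated deployment so new pods pick up the env variables
kubectl apply -f taskmaster.yml
kubectl get pods

# On the worker node: check the logs of both replica containers
docker ps
docker logs <container-ID-of-replica-1>
docker logs <container-ID-of-replica-2>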

I appreciate you taking the time to read my blog. Thank you!