"Streamlining Your Infrastructure: A Step-by-Step Guide to Connecting Master and Worker Nodes in Kubernetes and Deploying a Django-ToDo Project"

"Streamlining Your Infrastructure: A Step-by-Step Guide to Connecting Master and Worker Nodes in Kubernetes and Deploying a Django-ToDo Project"

·

5 min read

Set Up the Master and Worker Nodes (Servers):

Step-1:

Follow this script for installation commands and setup:

Link: https://github.com/RishikeshOps/Scripts/blob/main/k8sss.sh

Step-2: Launch two servers: one Master and one Worker.

The Master server should be at least a t2.medium instance because the control plane needs more CPU and memory.

Step-3: On both servers (Master and Worker nodes), follow the commands in the script.

Step-4: Now, on the Master node, become the root user with sudo su, since the setup commands need root privileges.

Run kubeadm init and follow the other Master-node commands from the script.

Open port 6443 in the Master node's security group; kubeadm init generates a join token pointing at this IP and port, which is what connects the Worker to the Master.
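On the Master, those commands look roughly like this (a minimal sketch; the exact flags may differ in the linked script, and the kubectl setup lines are the ones kubeadm prints in its own output):

sudo su                                            # become root
kubeadm init                                       # initialise the control plane

mkdir -p $HOME/.kube                               # let kubectl talk to the new cluster
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config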

Step-5: Now, on the Worker node, follow the commands from the script.

Become the root user with sudo su.

Run the kubeadm join command printed by the Master, adding --v=5 at the end for verbose output.
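The join command looks roughly like this (the IP, token, and hash are placeholders; use the exact command printed by kubeadm init on your Master):

kubeadm join <master-private-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --v=5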

Step-6: On the Master node, run kubectl get nodes; as we can see, the master and worker nodes are connected.

Step-7: Run Nginx from its image with kubectl - you can get the command by simply searching Google for "kubectl run nginx" and copying it.
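For example, a minimal version of that command is:

kubectl run nginx --image=nginx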

A pod is created.

On the Worker node, run docker ps to see the containers; Nginx is running.

We cannot permanently kill, stop, or delete the Docker container, because Kubernetes will auto-heal it (recreate it).

To stop the containers for good, we have to delete the pod from the Master node.

TASK - Deploy the Django-ToDo Project:

Step-1: First, we create a Namespace so that our pods and all related resources are built inside it, in isolation.

kubectl create namespace my-namespace (namespace names may only contain lowercase letters, numbers, and hyphens, so we use a hyphen rather than an underscore)

And to list namespaces:

kubectl get namespace

Make a directory/folder for managing and storing pod files for deployment.

For the pod YAML file, we can get the syntax from the official Kubernetes documentation.

Step-2: Next, create a pod.yaml file, paste in the syntax, and edit it to match the pod's requirements: add the name, namespace, Docker image, and port. A minimal example is sketched below.
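A minimal pod.yaml could look roughly like this (the image name and container port are assumptions; use your own Django-ToDo image):

apiVersion: v1
kind: Pod
metadata:
  name: django-todo            # pod name
  namespace: my-namespace      # namespace created earlier
spec:
  containers:
  - name: django-todo
    image: <your-dockerhub-user>/django-todo:latest   # assumed image name
    ports:
    - containerPort: 8000      # Django's default development port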

Apply the pod file: kubectl apply -f pod.yaml

Check the Worker node for running containers - the pod's container has been scheduled and started on the Worker node.

On the Master node, check the pods in the namespace:

kubectl get pods -n my-namespace

On the Worker node, even if we kill the container with the docker kill command, Kubernetes auto-heals it, because the pod object still exists on the Master node.

To actually kill/stop the container, we have to delete the pod from the Master node.

Step-3: Next, we create a deployment.yaml file; we can get the syntax from the official Kubernetes documentation. Deployments give more control over managing pods than deploying with a plain pod YAML file.

Create the deployment.yaml file using Vim, paste in the syntax, and add the necessary configuration: name, namespace, labels, replicas, image, and port. A minimal example is sketched below.
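A minimal deployment.yaml could look roughly like this (the image name and port are assumptions, as above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-todo-deployment
  namespace: my-namespace
  labels:
    app: django-todo
spec:
  replicas: 2                            # number of pod copies to keep running
  selector:
    matchLabels:
      app: django-todo
  template:                              # pod template the deployment manages
    metadata:
      labels:
        app: django-todo
    spec:
      containers:
      - name: django-todo
        image: <your-dockerhub-user>/django-todo:latest   # assumed image name
        ports:
        - containerPort: 8000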

Apply the deployment file

Check pods in the Master node

We can check the running containers in the Worker node - docker ps

For scaling, we can change the replicas value in the deployment YAML file and apply the deployment file again.

After changing replicas from 2 to 1, the deployment scales down to a single pod.

watch kubectl get pods -n my-namespace - this command lets you watch pods being terminated or created live; it refreshes every 2 seconds.

Step-4: Next, we create a Service YAML file to expose the application's network outside the cluster, because by default the containers are only reachable on the cluster's internal network.

Create a service.yaml file, paste in the syntax, and edit it to add the kind, name, namespace, and type; select a nodePort between 30000 and 32767, because that is the allowed range. A minimal example is sketched below.
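A minimal NodePort service.yaml could look roughly like this (the names and ports follow the examples above; 30011 is the nodePort used later in this post):

apiVersion: v1
kind: Service
metadata:
  name: django-todo-service
  namespace: my-namespace
spec:
  type: NodePort               # exposes the service on a port of every node
  selector:
    app: django-todo           # matches the pod labels from the deployment
  ports:
  - port: 8000                 # service port inside the cluster
    targetPort: 8000           # container port
    nodePort: 30011            # must be in the 30000-32767 range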

Apply the Service yaml file

kubectl get services -n my-namespace - to list the services

Open the port given in the service YAML file (30011) in the Worker node's security group, since the containers are running on the Worker node.

Copy the Worker node's public IP, append :30011, and open it in a new browser tab - the application is running.

We can get a description of the deployment with:

kubectl describe deployment.apps/deployment-name -n my-namespace

Step-5: Next, we create a Secret file for deploying MySQL, since it needs a password; we base64-encode the password (base64 is encoding, not encryption) and add the encoded value to the Secret file.
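For example, to produce the encoded value (the password here is just an example):

echo -n 'mypassword' | base64    # prints bXlwYXNzd29yZA==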

We also create a new namespace for MySQL so that its resources are built inside their own namespace.

In the Secret YAML, we add the kind, name, namespace, type, and the base64-encoded password. A minimal example is sketched below.
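A minimal secret.yaml could look roughly like this (the secret name, namespace name, and encoded value are just examples):

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret           # example secret name
  namespace: mysql             # example name for the new MySQL namespace
type: Opaque
data:
  password: bXlwYXNzd29yZA==   # base64-encoded password from the step above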

Apply the Secret YAML file and list the secrets with:

kubectl apply -f secret.yaml

kubectl get secret -n my-namespace

We can also create a ConfigMap YAML file - ConfigMaps are used for non-sensitive configuration data, while Secrets are used for sensitive data such as passwords. A minimal example is sketched below.
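A minimal configmap.yaml could look roughly like this (the ConfigMap name and database name are just examples):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config           # example ConfigMap name
  namespace: mysql             # same example namespace as the secret
data:
  MYSQL_DATABASE: todo_db      # plain-text, non-sensitive configuration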

Now we create a MySQL deployment file, the same as any other deployment file, except we add the environment variables for MySQL: the environment variable name, the name of the Secret, and the key (password). A sketch is shown below.
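A sketch of that MySQL deployment, wired to the example secret above (the image tag and names are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  namespace: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD      # environment variable MySQL reads for the root password
          valueFrom:
            secretKeyRef:
              name: mysql-secret         # name of the example secret above
              key: password              # key inside the secret's data section
        ports:
        - containerPort: 3306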

Now apply all of them: the Secret, the ConfigMap, and the deployment file.

Check the pods - they are created and running.

kubectl get pods -n my-namespace

That's how we use Kubernetes to create pods and get containers running. It allows us to easily manage and scale our containerized applications, providing a more efficient and reliable way to run applications in production.

Thank you for taking the time to read my blog. I hope you found the information helpful and informative. If you have any feedback or suggestions, please feel free to reach out to me.