How To Create Ansible Role for Launching Kubernetes Multi-Node Cluster
Hey guys, hope you are all doing well! I am finally here with an Ansible project to create a multi-node Kubernetes cluster. Today we are going to create two roles: one for setting up the master node and the other for setting up the worker node.
Let's get started by creating the Ansible role for setting up the master node.
ansible-galaxy init role_name => creates an Ansible role
The above command creates an empty role. Head over to its tasks folder, open the main.yml file in your favorite editor, and put the code below in it.
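For reference, assuming you name the role k8s_master, the skeleton that ansible-galaxy init generates looks like this:

```
k8s_master/
├── README.md
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml      <- our code goes here
├── templates/
├── tests/
└── vars/main.yml
```

Files we copy to the managed nodes (like daemon.json) conventionally live in the files/ directory of the role.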
- name: installing docker
  package:
    name: docker
    state: present

- name: adding daemon to docker
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: enabling docker
  service:
    name: docker
    state: restarted
    enabled: yes

- name: adding kube repo
  copy:
    src: /k8s/kubernetes.repo
    dest: /etc/yum.repos.d/kubernetes.repo

- name: installing master software
  package:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present

- name: enabling kubelet
  service:
    name: kubelet
    state: started
    enabled: yes

- name: config image
  command:
    cmd: "kubeadm config images pull"

- name: installing iproute
  package:
    name: iproute-tc
    state: present

- name: config basic network
  shell: 'echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables'

- name: config master
  shell: "kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"

- name: config master to work
  shell: mkdir -p $HOME/.kube

- name: config master to work
  shell: sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

- name: config master to work
  shell: sudo chown $(id -u):$(id -g) $HOME/.kube/config

- name: token creation
  shell: kubeadm token create --print-join-command
  register: token

- debug:
    var: token.stdout

- name: saving output
  local_action: copy content="{{ token.stdout }}" dest="/k8s/newtoken"

- name: config Networking
  shell: 'kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml'
Note: GitHub links for all the code files are provided below; if you copy from above, you may get an indentation error.
After putting the above code in the main.yml file, just save and close it. That's it, your first role for configuring the master node is done.
Let's understand, step by step, what this role is doing.
Step 1: We have to install Docker on the managed node. For that we are using:
- name: installing docker
  package:
    name: docker
    state: present
Step 2: We have to change the cgroup driver of the Docker daemon. For this we are using the task below.
In the code below, daemon.json is a file on my local system (where Ansible is running) that I am copying to the managed node; the same file is provided in the GitHub repo linked below.
- name: adding daemon to docker
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json
(The Docker daemon exposes a kind of API that serves requests and manages images, containers, etc.)
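The exact daemon.json I used is in the GitHub repo linked below; as a sketch, a common daemon.json for kubeadm setups simply switches Docker's cgroup driver to systemd so it matches the kubelet:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```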
Step 3: We have to start the Docker service and enable it, which is achieved by:
- name: enabling docker
  service:
    name: docker
    state: restarted
    enabled: yes
Step 4: Once all this is done, we have to configure the yum repo to download the software necessary for configuring the master node.
In the code below, kubernetes.repo is a file on my local system (where Ansible is running) that I am copying to the managed node; the same file is provided in the GitHub repo linked below.
- name: adding kube repo
  copy:
    src: /k8s/kubernetes.repo
    dest: /etc/yum.repos.d/kubernetes.repo
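The exact kubernetes.repo file is in the GitHub repo linked below; at the time of writing, the typical file for this setup looked roughly like this (repo URL and keys are the upstream Kubernetes yum repo defaults, shown here only as an illustration):

```ini
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
```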
Step 5: After configuring the yum repo, we install the software required for configuring the master node.
- name: installing master software
  package:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present
Step 6: After installing the software, we have to start and enable the kubelet service.
- name: enabling kubelet
  service:
    name: kubelet
    state: started
    enabled: yes
Step 7: After enabling kubelet, we have to pull all the Docker images required for configuring the master node; kubeadm provides a single command that fetches them all.
- name: config image
  command:
    cmd: "kubeadm config images pull"
Step 8: We also need a package called iproute-tc for configuring the master node.
- name: installing iproute
  package:
    name: iproute-tc
    state: present
Step 9: Now we have to run the final command to initialize the master node for Kubernetes.
- name: config master
  shell: "kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
Note: the above command must be run strictly once, otherwise it will cause an error; this is why it is suggested to run this role against the master node only once.
Step 10: Once step 9 is done, we have to run three commands to make the kubectl command work on the master, so you can check the number of pods and launch pods from the master itself.
- name: config master to work
  shell: mkdir -p $HOME/.kube
- name: config master to work
  shell: sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- name: config master to work
  shell: sudo chown $(id -u):$(id -g) $HOME/.kube/config
Step 11: Once all the above steps are done, you can go to the master and run kubectl commands to perform operations. But if you want to connect a worker node to the master node, you need a token provided by the master. For this, we create a token and save it in a file on the local system where Ansible runs.
- name: token creation
  shell: kubeadm token create --print-join-command
  register: token
- debug:
    var: token.stdout
- name: saving output
  local_action: copy content="{{ token.stdout }}" dest="/k8s/newtoken"
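The registered token.stdout holds the full join command printed by kubeadm; it looks roughly like the line below (the IP address, token, and hash here are placeholders, the real values come from your master):

```
kubeadm join 172.31.5.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>
```

This is exactly the command the worker role will run later via the token variable.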
Step 12: Now we have to configure the Flannel network, which enables the connection between the master and worker nodes.
- name: config Networking
  shell: 'kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml'
That's it, we are done with the configuration of the master node.
Now you can use your role to configure the Kubernetes master node.
To use the role, we have to create a playbook and pass in the role we want to use, as shown below.
- hosts: master
  roles:
    - k8s_master
Here I am using AWS EC2 instances; to SSH into them I used an inventory file.
Note: while launching an instance in AWS or any other cloud, allow all traffic in the security group/firewall.
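The inventory itself is not shown in this article; a typical INI-style inventory for EC2 might look like the sketch below (the IPs, user, key path, and group names are hypothetical placeholders, substitute your own):

```ini
[master]
13.233.10.11 ansible_user=ec2-user ansible_ssh_private_key_file=/root/mykey.pem

[worker]
13.233.10.12 ansible_user=ec2-user ansible_ssh_private_key_file=/root/mykey.pem
```

The group names here must match the hosts values used in the playbooks.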
Once the master is configured, we are ready to configure the worker node. While configuring the worker, we have to supply the token that we saved to a file while configuring the master node.
Configuring the master node and the worker node involves similar steps, but I preferred to create a separate role for each.
First, let's create one more role for configuring the worker node; inside it, go to the tasks folder, open main.yml, and put in the code below.
- name: installing docker
  package:
    name: docker
    state: present

- name: adding daemon
  copy:
    src: daemon.json
    dest: /etc/docker/daemon.json

- name: enabling docker
  service:
    name: docker
    state: restarted
    enabled: yes

- name: adding kube repo
  copy:
    src: /k8s/kubernetes.repo
    dest: /etc/yum.repos.d/kubernetes.repo

- name: installing kubelet
  package:
    name:
      - kubelet
      - kubeadm
      - kubectl
    state: present

- name: enabling kubelet
  service:
    name: kubelet
    state: started
    enabled: yes

- name: installing iproute-tc
  package:
    name: iproute-tc
    state: present

- name: config basic requirement
  shell: 'echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf'

- name: running token
  shell: "{{ token }}"
As you can see, the code above is similar to the code we used for configuring the master node; only a few new tasks have been added.
The task below configures the basic networking requirements (the bridge sysctl settings) on the worker node.
- name: config basic requirement
  shell: 'echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf'
The task below is the final one: it runs the join command/token provided by the master node. Here we take the token from the user when they run the playbook.
- name: running token
  shell: "{{ token }}"
That's it, we are done with the configuration of the worker node.
Now you can use your role to configure the Kubernetes worker node.
To use the role, we have to create a playbook and pass in the role we want to use. In the playbook below, we take the token as input from the user and save it in a variable called token.
- hosts: worker
  vars_prompt:
    - name: token
      prompt: enter token
      private: no
  roles:
    - k8s_workernode
Once you have created all the roles, you can run them with a playbook as shown above and enjoy the power of automation.
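Assuming the two playbooks are saved as master.yml and worker.yml (hypothetical file names) next to an inventory file named inventory, running them looks like this:

```
ansible-playbook -i inventory master.yml
ansible-playbook -i inventory worker.yml
```

Run the master playbook first; the worker playbook will then prompt you for the join token saved by the master run.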
If you want to download the above roles directly, you can use the commands below.
ansible-galaxy install guptaadi123.k8s_master
ansible-galaxy install guptaadi123.k8s_workernode
Once we have created a role, we can also push it to the Ansible Galaxy workspace; how to push it there will be covered in the next article.
For the playbooks and the supporting daemon file and kube repo file, follow the GitHub links below.
Git repo: https://github.com/guptaadi123/k8s_multinode-cluster.git
Git repo for the k8s master role: https://github.com/guptaadi123/K8s_master.git
Git repo for the k8s worker role: https://github.com/guptaadi123/k8s_workernode.git
Guys, here we come to the end of this blog. I hope you all liked it and found it informative. If you have any query, feel free to reach out to me :)
Follow me for more such blogs, and if you have any feedback, please let me know; I will keep those points in mind while writing future blogs. If you want to read more such blogs or know more about me, here is my website: https://sites.google.com/view/adityvgupta/home. Please do not hesitate to keep 👏👏👏👏👏 for it (an open secret: you can clap up to 50 times for a post, and it won't cost you anything), and feel free to share it across. It really means a lot to me.