How to Create a Kubernetes Multi-Node Cluster Over Multi-Cloud

Gupta Aditya
4 min readMay 20, 2021

Hey guys, hope you are all doing well. In today's article we are going to set up our own Kubernetes multi-node cluster over multi-cloud (one node on AWS and one node on Azure). I am assuming you have read my previous article on how to create a Kubernetes cluster on AWS; if not, please go and read it first (link given below).

We are going to set up the Kubernetes cluster in such a way that our master and one worker node will be in AWS and the other worker node will be in the Azure cloud; you could launch more nodes in different clouds as per your requirement. We only have to add one extra option while launching our master, and all the other steps we did in the previous article stay the same.

kubeadm init --pod-network-cidr={{ pod_network_cidr }} --control-plane-endpoint={{ control_plane_endpoint_ip }}:6443 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

In the above command you can see the extra option --control-plane-endpoint={{ control_plane_endpoint_ip }}:6443. What is it doing? By default, kubeadm builds the join command around the master's private IP, which is reachable only from inside the same network, so worker nodes sitting in other clouds cannot reach it. By passing the master's public IP as the control-plane endpoint, the join command points at an address that is reachable over the internet, so nodes in any cloud can join easily.

For launching the worker node in AWS, all steps are the same as before. Please follow the steps below to launch the worker node in Azure.

For setting up the worker node in Azure, create one virtual machine using a RedHat image. After creating the virtual machine, follow the steps below. The same steps can also be used while launching a node in GCP with a RedHat image.

1. Install Docker

Create a yum repository for docker.

$ vim /etc/yum.repos.d/docker.repo

and add the following content to the docker.repo file (the baseurl below is the standard Docker CE repository for CentOS/RHEL):

[docker]
baseurl = https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck = 0
name = Yum repo for docker
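If you prefer not to open vim for this, the same repo file can be written in one shot with a heredoc. A minimal sketch, using /tmp/docker.repo for illustration (on the real node the destination is /etc/yum.repos.d/docker.repo; the baseurl is the standard Docker CE repository for CentOS/RHEL):

```shell
# Sketch: write the Docker yum repo file non-interactively.
# /tmp/docker.repo is used here for illustration; on the real node
# the destination is /etc/yum.repos.d/docker.repo.
cat > /tmp/docker.repo <<'EOF'
[docker]
baseurl = https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck = 0
name = Yum repo for docker
EOF
# Show what was written so it can be eyeballed before running yum.
cat /tmp/docker.repo
```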

Run the following command to install Docker

$ yum install docker-ce --nobest

2. Install Python and required docker dependencies

$ yum install python3
$ pip3 install docker

3. Configure docker and start docker services

Configure the cgroup driver for Docker to systemd.

$ mkdir /etc/docker
$ vim /etc/docker/daemon.json

Add this content to the /etc/docker/daemon.json file

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Start and enable docker services

$ systemctl start docker
$ systemctl enable docker
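A missing brace or stray comma in daemon.json will prevent the Docker daemon from starting, so it is worth checking that the file is valid JSON before restarting. A small sketch, using /tmp/daemon.json for illustration (the real file is /etc/docker/daemon.json):

```shell
# Sketch: write the daemon.json content, then run it through Python's JSON
# parser; json.tool exits non-zero and prints the error if it is malformed.
# /tmp/daemon.json is used for illustration; the real path is /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json
```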

4. Install Kubernetes

Create a yum repo for Kubernetes

$ vim /etc/yum.repos.d/kubernetes.repo

Add the following content to the /etc/yum.repos.d/kubernetes.repo file (the baseurl below is the Google-hosted Kubernetes yum repository used at the time of writing):

[kubernetes]
baseurl = https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
gpgcheck = 0
name = Yum repo for Kubernetes

Install Kubelet, Kubeadm, and Kubectl

$ yum install kubelet kubeadm kubectl -y

Start and enable kubelet

$ systemctl start kubelet
$ systemctl enable kubelet

5. Install iproute-tc package

$ yum install iproute-tc

Create k8s.conf file for bridging

$ vim /etc/sysctl.d/k8s.conf

Add the following content to /etc/sysctl.d/k8s.conf file

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Enable bridging in the kernel on the slave node. (If sysctl --system reports these as unknown keys, the br_netfilter kernel module may need to be loaded first with modprobe br_netfilter.)

$ sysctl --system

6. Joining the slave node

Finally, run the kubeadm join command (the one printed by kubeadm init on the master) to join this slave node to the master node.
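The exact command is printed at the end of kubeadm init on the master; its general shape looks like this (the token and hash below are placeholders, not real values):

```
kubeadm join {{ control_plane_endpoint_ip }}:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

If the init output was lost or the token has expired, running kubeadm token create --print-join-command on the master prints a fresh join command.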

After this, you can run the kubectl get nodes command on the master to see the nodes connected to it.

As you can see above, one node is from AWS and the other node (named name-new) is from Azure, so we have successfully set up our Kubernetes multi-node cluster over multi-cloud.

Git repo for configuring k8s master role:-

Git repo for configuring k8s worker role(AWS):-

Guys, here we come to the end of this blog. I hope you all liked it and found it informative. If you have any query, feel free to reach me :)

Guys, follow me for more such amazing blogs, and if you have any review then please let me know; I will keep those points in mind next time while writing blogs. If you want to read more such blogs or know more about me, here is my website link. Please do not hesitate to keep 👏👏👏👏👏 for it (an open secret: you can clap up to 50 times for a post, and the best part is, it wouldn't cost you anything). Also feel free to share it across. This really means a lot to me.