nginx LoadBalancer 10.27.245.187 <pending> 80:31621/TCP 11s

esschtolts@cloudshell:~ (essch)$ sleep 60;

esschtolts@cloudshell:~ (essch)$ kubectl get svc --selector=run=nginx

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

nginx LoadBalancer 10.27.245.187 35.228.212.163 80:31621/TCP 1m
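Instead of guessing the wait with sleep, the allocation of the external IP can be followed live; a small sketch using kubectl's --watch flag (Ctrl-C stops it once the EXTERNAL-IP column is filled in):

kubectl get svc --selector=run=nginx --watch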

Let's check that it works:

esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 2> /dev/null | grep h1

<h1>Welcome to nginx!</h1>

In order not to copy the full names every time, let's save them in variables (more about the JSONPath format in the Go template documentation: https://golang.org/pkg/text/template/#pkg-overview):

esschtolts@cloudshell:~ (essch)$ pod1=$(kubectl get pods -o jsonpath={.items[0].metadata.name});

esschtolts@cloudshell:~ (essch)$ pod2=$(kubectl get pods -o jsonpath={.items[1].metadata.name});

esschtolts@cloudshell:~ (essch)$ pod3=$(kubectl get pods -o jsonpath={.items[2].metadata.name});

esschtolts@cloudshell:~ (essch)$ echo $pod1 $pod2 $pod3

nginx-65899c769f-9whdx nginx-65899c769f-szwtd nginx-65899c769f-zs6g5
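The same JSONPath syntax can fetch all three names in a single call; a sketch, assuming Cloud Shell's bash and the same run=nginx label:

# collect all matching POD names into a bash array
pods=($(kubectl get pods --selector=run=nginx -o jsonpath='{.items[*].metadata.name}'))
echo ${pods[0]} ${pods[1]} ${pods[2]}

The balancer's address can be captured the same way, for example IP=$(kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'), so it does not have to be copied by hand either.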

Let's change the page in each POD by copying a unique page to each replica, and check the balancing by looking at the distribution of requests across the PODs:

esschtolts@cloudshell:~ (essch)$ echo 1 > test.html;

esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod1}:/usr/share/nginx/html/index.html

esschtolts@cloudshell:~ (essch)$ echo 2 > test.html;

esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod2}:/usr/share/nginx/html/index.html

esschtolts@cloudshell:~ (essch)$ echo 3 > test.html;

esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod3}:/usr/share/nginx/html/index.html
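Since the three copy steps follow one pattern, they could just as well be written as a loop; a sketch that reuses the pod1..pod3 variables from above:

# write a unique page into each replica
i=1
for pod in $pod1 $pod2 $pod3; do
  echo $i > test.html
  kubectl cp test.html ${pod}:/usr/share/nginx/html/index.html
  i=$((i + 1))
done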

esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80

3

2

1

esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80

3

1

1
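Three requests at a time give only a rough picture; a larger sample makes the distribution visible, for example (a sketch; the counts will differ from run to run):

# send 30 requests and count how many landed on each replica
for i in $(seq 1 30); do curl -s 35.228.212.163; done | sort | uniq -c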

Let's check the failover of the cluster by deleting one POD:

esschtolts@cloudshell:~ (essch)$ kubectl delete pod ${pod1} && kubectl get pods && sleep 10 && kubectl get pods

pod "Nginx-65899c769f-9whdx" deleted

NAME READY STATUS RESTARTS AGE

nginx-65899c769f-42rd5 0/1 ContainerCreating 0 1s

nginx-65899c769f-9whdx 0/1 Terminating 0 54m

nginx-65899c769f-szwtd 1/1 Running 0 54m

nginx-65899c769f-zs6g5 1/1 Running 0 54m

NAME READY STATUS RESTARTS AGE

nginx-65899c769f-42rd5 1/1 Running 0 12s

nginx-65899c769f-szwtd 1/1 Running 0 55m

nginx-65899c769f-zs6g5 1/1 Running 0 55m
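The recreation happens because the Deployment's ReplicaSet keeps the actual number of PODs equal to the declared replica count. To observe it live and confirm the desired count, one could run (assuming the Deployment is named nginx, as created by the run command):

kubectl get pods --watch
kubectl get deployment nginx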

As we can see, as soon as the POD became unavailable (its deletion began), a replacement started being created, and soon the cluster fully restored its structure. Now that we have finished our experiments, let's remove the virtual machines together with the cluster:

esschtolts@cloudshell:~ (essch)$ gcloud container clusters delete mycluster --zone europe-north1-a;

The following clusters will be deleted.

- [mycluster] in [europe-north1-a]

Do you want to continue (Y/n)? Y

Deleting cluster mycluster...done.

Deleted [https://container.googleapis.com/v1/projects/essch/zones/europe-north1-a/clusters/mycluster].

esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

To summarize: we created a cluster and a load balancer with just two commands, run and expose, and we can now go to the balancer's IP address and see the NGINX welcome page in the browser. The cluster also heals itself: we emulated a POD failure by deleting it, and it was recreated automatically.