In this post, we will discuss how to set up a web server on Kubernetes with nginx. We explain how to create a Go server and nginx, and also how to load balance the web server.
Read More: How to setup Kubernetes on Ubuntu
When should you consider Kubernetes?
A typical website runs on a server, but if there is only one server, problems like these arise.
- During a campaign, access suddenly spiked, the server could not endure the load, and the whole website went down
- A strange error appeared when updating the server's libraries
The countermeasure is to run multiple servers, so that a single server failure is no longer a problem; this is called redundancy. I will skip the round-robin story, but these days it is common to put a load balancer, such as the ones offered by AWS and GCP, in front to distribute access across several servers.
If you are using a cloud service, you can automatically increase or decrease the number of servers with a feature called auto scaling. However, new demands come up for this redundant configuration as well.
- If you scale the number of servers down, you cannot add servers fast enough when traffic comes in suddenly, and eventually the service goes down
- However, if you prepare a lot of servers, money is wasted when there is little access
- Even with Immutable Infrastructure, it takes time to replace servers on every deploy
This is where Kubernetes (hereafter k8s) comes in. k8s offers the following advantages for these problems.
- Multiple applications can be started regardless of the number of servers (computer resources can be used optimally)
- The number of instances of each application can be increased or decreased automatically depending on load
- You can deploy an application without replacing servers, in only a few seconds if the image is light
To use k8s yourself, you prepare a master machine called the master and slave servers called nodes, install k8s on each, and configure communication between them... which is troublesome.
But if you use a managed k8s service like GCP's GKE, the master and nodes are prepared for you in a few minutes, unhealthy nodes are automatically replaced, and nodes can easily be added and removed.
That introduction ran long, so let's get hands-on with GKE from here.
Preparation
First comes the GCP setup. Since there is a free trial period, register by following an introductory article. After finishing the setup of gcloud and kubectl, including authentication tokens, run the following command.
In about the time it takes to make coffee, the master and nodes (3 by default) described above come up. A node corresponds to a VM in GCP or an instance in AWS. On GKE, you basically never touch the master.
# gcloud container clusters create k0
Creating cluster k0 ... done.
Created [https://conf...]
kubeconfig entry generated for k0.
NAME  ZONE          MASTER_VERSION  MASTER_IP  MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
k0    asia-east1-a  1.4.6           x.x.x.x    n1-standard-1  1.4.6         3          RUNNING
Introduction
Let’s skip the details and move it for the time being.
# kubectl run nginx --image=nginx:1.11.3
As for what happened: a pod started running inside the nodes we created earlier.
What is pod?
Think of a pod as a group of Docker containers running on a node. In k8s, many pods are launched using the combined capacity of the nodes as a resource pool.
The interesting part is that pods share the nodes as a whole: one node may run one podA and one podB, another node may run three podBs, and so on.
You can pack any number of Docker containers into one pod: nginx, a web application, and redis, for example. One internal IP is assigned per pod, and any number of volumes can be shared between its containers.
The argument specifies the official nginx container from Docker Hub. In other words, the command above means "make a pod using the nginx image".
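To make the "several containers in one pod" idea concrete, here is a minimal hand-written pod manifest. The names, the busybox sidecar, and the shared emptyDir volume are illustrative assumptions, not something created by the command above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}              # scratch volume shared by both containers
  containers:
  - name: nginx
    image: nginx:1.11.3
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Both containers share one internal IP and the same volume, so the app container can write files that nginx serves.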
# kubectl get pods
NAME                   READY  STATUS   RESTARTS  AGE
nginx-527997929-1m6nj  1/1    Running  0         22s
# kubectl get pods <name>
But at this point we cannot access it from outside, because the container is just running inside the nodes. For external access, type the following command.
# kubectl expose deployment nginx --port 80 --type LoadBalancer
Now a service and a load balancer are created.
What is service?
It is like a stable entry path for communicating with pods (a collection of pods) from the outside. If one pod hangs or stops, requests still arrive at the service, which routes them to the next available pod.
Since we specified the LoadBalancer type this time, a load balancer with an external IP is created on GCP, and access to that external IP is forwarded to the service.
With this, the nginx pod created earlier is attached to an external IP. Let's check the service.
# kubectl get services
NAME        CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
kubernetes  z.z.z.z     <none>       443/TCP   6m
nginx       y.y.y.y     x.x.x.x     80/TCP    1m
This time, a service was created with the name nginx. The kubernetes service alongside it is for communication from the master to the nodes. Let's access the external IP x.x.x.x of the created load balancer.
Make a customized pod and service
We used the default nginx image a little while ago; now let's customize it and make a pod with an nginx + web application composition.
# git clone https://github.com/foxutech/k8s-go-nginx.git
# cd k8s-go-nginx
# cd k8s
Here, deployment.yml holds the settings for the pod, and service.yml holds the settings for the service.
What is deployment?
A Deployment manages both the Replication Controller (the controller that manages pods in a batch) and the pods themselves. Since k8s moves fast and you would otherwise often have to create the Replication Controller and the pods separately, it is definitely easier to use a Deployment.
Let’s see the contents of the setting.
# deployment.yml
...
  strategy:
    type: RollingUpdate
...
      containers:
      - name: go-server
        image: motoskia/go-server
...
        readinessProbe:
          httpGet:
            path: /readiness.html
...
Because of the RollingUpdate strategy above, when the image specified under containers changes, the pods running the old image are replaced one by one with pods running the new image. This makes deploys safe.
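The pace of that one-by-one replacement can be tuned. As a sketch, with the values below (illustrative assumptions, not taken from the sample repo) k8s keeps at most one pod out of service and starts at most one extra pod during the rollout:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # at most one pod may be down during the update
    maxSurge: 1         # at most one extra pod above the desired count
```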
# service.yml
spec:
  selector:
    name: web-server
Here the label of the target pod is specified: by matching the pod's name label, the service and the pod are associated.
So what is readinessProbe? Here is a brief overview of the health monitoring mechanism.
How health monitoring works
k8s internally monitors whether each pod is alive, much like the health checks of a conventional load balancer. On each node a process called kubelet runs, periodically asking each pod "How are you feeling?". If a pod is dead, the deployment (replication controller) replaces it, or scales pods up or down.
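That periodic check is configured on the probe itself. A sketch of the relevant fields (the interval and threshold values here are illustrative assumptions, not from the sample repo):

```yaml
readinessProbe:
  httpGet:
    path: /readiness.html
    port: 80
  initialDelaySeconds: 5   # wait before the first check
  periodSeconds: 10        # kubelet asks "How are you feeling?" every 10s
  failureThreshold: 3      # after 3 failed checks the pod stops receiving traffic
```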
In the sample, nginx is configured to return 200 when /readiness.html is requested, so the probe succeeds as long as nginx is serving.
Actually running it
Let’s create these with the following command.
# kubectl apply -f k8s/deployment.yml
# kubectl apply -f k8s/service.yml
This generates the pods at the same time as the deployment, and also generates the service. To check the details of everything that was created, use:
# kubectl describe {pods | services | deployments} <name>
describe is like get, but shows more detail. Did it all get created correctly? Let's access the service... hmm, where should we access it?
# service.yml
spec:
  selector:
    name: web-server
  type: LoadBalancer
In the introduction we exposed the service and linked it to a load balancer afterwards; this time, thanks to the type: LoadBalancer setting, the load balancer was built at the same time the service was created.
Let’s check and access the external IP of service just as before.
We successfully accessed the web server via nginx! This is the configuration we finally built.
When you want to delete what you created
# kubectl delete <type> <name>
For example, kubectl delete pods --all makes all pods disappear. But as long as the deployment exists, the pods it manages are revived endlessly; it will keep recreating them until you run kubectl delete deployments web-server.
In the next post, we will look at each Kubernetes component more closely.