We have summarized the reasons to choose Kubernetes into 12 items from the viewpoint of practical value. Some items reflect our own opinion, so we attach supporting links where possible. We believe Kubernetes will grow into a truly excellent next-generation IT infrastructure.
Execution Foundation that enables frequent application releases
Deployments automate delicate tasks such as frequent new-feature releases and urgent bug-fix replacements, so automatic rollout and rollback of applications can be executed safely and smoothly.
- Provides a function to switch to revised containers, such as new features or defect fixes, without stopping the application while it is serving production traffic.
- The container replacement strategy can be configured to avoid service disruption during the switch and service outages caused by a faulty application.
- If the application behaves unstably, the rollout can be stopped automatically, and an operator can roll it back to the previous revision.
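As a sketch, the rollout behavior above is controlled in a Deployment manifest; the names, image tag, and probe endpoint below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                # hypothetical application name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # at most one pod down during the switch
      maxSurge: 1              # at most one extra pod created while replacing
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example.com/web-app:1.1   # revised container image
        readinessProbe:                  # gate traffic until the new pod is healthy
          httpGet:
            path: /healthz
            port: 8080
```

If the new revision misbehaves, `kubectl rollout undo deployment/web-app` returns to the previous revision.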
A high-availability foundation suitable for services that must not stop
With the Replication Controller realizing self-healing (self-recovery), we can build services that do not stop.
- A set of closely cooperating containers is grouped into a unit called a pod, and pods are arranged into a horizontally distributed cluster that is load-balanced by the internal proxy.
- The number of pod replicas needed to handle the expected request volume is set in advance; when running pods are lost due to a node failure, the necessary number of pods is started automatically to maintain capacity.
- Newly started pods are automatically registered with the internal proxy and begin receiving requests, so capacity is maintained.
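A minimal sketch of this self-healing setup, assuming a hypothetical `web-app` image: the Service is the internal proxy entry point, and the ReplicaSet keeps the declared pod count running even after node failures.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app          # any running pod with this label receives traffic
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-app
spec:
  replicas: 3             # desired pod count; restored automatically on failure
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example.com/web-app:1.0
        ports:
        - containerPort: 8080
```

In practice a Deployment is usually created instead, and it manages the ReplicaSet for you.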
An expandable foundation that follows the business situation
With Horizontal Pod Autoscaling, the number of pods can be adjusted to an appropriate scale in line with business volume, helping to eliminate unnecessary cost.
- When the pods' CPU utilization exceeds a threshold, processing capacity is improved by automatically increasing the number of pods in the horizontally distributed cluster.
- When requests from clients decrease and CPU utilization drops, the number of pods is automatically reduced and processing capacity is adjusted downward.
- Tasks such as assigning IP addresses to new pods and scheduling pods onto nodes as the pod count grows are executed automatically, without human intervention.
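The CPU-based scaling described above can be declared with a HorizontalPodAutoscaler; the target name and thresholds here are illustrative (older clusters use the `autoscaling/v1` API with `targetCPUUtilizationPercentage` instead):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:              # the workload whose replica count is adjusted
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods above 70% average CPU, remove below
```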
Flexible operational infrastructure capable of hybrid configuration with virtual servers and bare metal
If you want to keep applications that are not suited to running in containers on bare metal or virtual servers as they are, or while you are in a transitional state of modernizing applications, you can configure a hybrid system with Kubernetes by using headless Services and publishing Services (Services without selectors).
- Provides service discovery for servers outside the k8s cluster, for cases where you want to use bare metal, such as fast database I/O processing or machine learning with GPGPUs.
- Because it connects on-premises servers and cloud virtual servers with pods in the k8s cluster, you can define a Service that abstracts external servers.
- Provides published Services so that microservices running on the k8s cluster can be called from on-premises or cloud virtual servers.
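One way to abstract an external server behind a Service is a Service without a selector plus a manually managed Endpoints object; the name, IP, and port below are hypothetical stand-ins for an external database:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db      # pods resolve this name via cluster DNS
spec:
  ports:                 # no selector: endpoints are supplied manually
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db      # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.42        # bare-metal or VM address outside the cluster
  ports:
  - port: 5432
```

Pods then connect to `external-db:5432` exactly as they would to an in-cluster service.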
Infrastructure suitable for production operation of applications developed with containers
After developing and testing an application, you can deploy it to the Kubernetes cluster by specifying the repository address and image name of the registered container. Since all the connection information that differs between the development and production environments is set in the k8s cluster environment, the container can be started in production as-is, without changing the application.
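A sketch of this pattern, with a hypothetical registry address and a ConfigMap named `app-config` assumed to hold the per-environment connection settings:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: registry.example.com/team/web-app:1.0  # repository address + image name
        envFrom:
        - configMapRef:
            name: app-config   # connection settings live in the cluster, not the image
```

The same image runs unchanged in development and production; only the `app-config` contents differ between clusters.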
An operational foundation available both on-premises and in the cloud
Software products and supporting hardware products to run Kubernetes are available, and Kubernetes services are offered by the mega cloud vendors, so you will not be locked into a specific place.
Public cloud Kubernetes service
- Amazon Elastic Container Service for Kubernetes (Amazon EKS)
- Microsoft Azure Container Service
- Google Cloud Platform Kubernetes Engine
- IBM Cloud Container Service
- Alibaba Cloud Container Service
Operational Foundation with Orchestration
With Kubernetes, load balancing, horizontally distributed clustering, storage management, internal service discovery, and domain registration can all be implemented through YAML file settings, greatly improving operational productivity.
- Service discovery and load balancing: Services
- Hiding passwords and other sensitive settings: Secrets
- Per-environment configuration: ConfigMaps
- Scalable batch job management: Jobs
- Storage management: Persistent Volumes
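As a minimal sketch of the Secret and ConfigMap items above (all names and values hypothetical), a pod can consume both as environment variables:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: s3cr3t           # stored base64-encoded by the API server
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: db.internal       # per-environment setting
---
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: web-app
    image: example.com/web-app:1.0
    env:
    - name: DB_PASSWORD      # injected from the Secret, kept out of the image
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    envFrom:
    - configMapRef:          # all ConfigMap keys become environment variables
        name: app-config
```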
Standard technology not controlled by a specific company
Although Kubernetes was originally a Google project, it has been donated to the CNCF, the project is run as open source, and development proceeds as a technology that does not depend on any single company.
Lightweight and highly efficient container technology
The container technology underlying Kubernetes starts up far more lightly and quickly than a virtual machine that emulates hardware. A container packages the stack of software the application depends on, such as OS libraries, middleware, and development-language libraries, into a single image with excellent portability. This minimizes differences between the development and production environments, contributing greatly to productivity in application development and operation.
Active development continues
The major companies in the industry, namely the mega cloud providers and software and hardware manufacturers, participate actively, and development continues.
Use as a path to multi-cloud
Compatibility with other vendors' k8s environments is guaranteed by using k8s infrastructure certified under the CNCF Certified Kubernetes Conformance Program. This makes it possible to use multiple k8s environments at the same time, enhancing business continuity and cost competitiveness by eliminating reliance on a specific company.
Elimination of waste through high-density use of servers
Since Kubernetes is a container-based technology, it is not locked into a specific hardware or software configuration. Pods, each a set of containers, are guaranteed to be able to communicate and are subject to load balancing as long as they are within a k8s cluster, even on different servers. The work involved in relocating pods is handled by k8s, and the number of pods during a migration is controlled by a strategy that maintains performance, so you can relocate them even in production. This makes it possible to operate highly efficient infrastructure without waste.
- Managing Compute Resources lets the scheduler automatically solve the bin-packing problem.
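Bin-packing is driven by per-container resource declarations; a sketch with hypothetical values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: web-app
    image: example.com/web-app:1.0
    resources:
      requests:            # what the scheduler uses to pack pods onto nodes
        cpu: 250m          # a quarter of one CPU core
        memory: 256Mi
      limits:              # hard caps enforced at runtime
        cpu: 500m
        memory: 512Mi
```

The scheduler places each pod on a node with enough unreserved capacity for its requests, packing workloads densely across the cluster.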
Start learning Kubernetes with FoxuTech. Click here for K8s posts.