When we discuss the carbon impact of Kubernetes, we should also discuss its cost. As we are aware, elasticity, scalability and resiliency are some of the key enablers for hosting microservices on a Kubernetes platform. At the same time, many organizations today are seeing their cloud costs increase after migrating. In this post, let's understand how to reduce Kubernetes cost.
Introduction
The advantages of Kubernetes infrastructure, like portability, scalability, its open-source base and the ability to increase developers' productivity, have made container technologies a popular choice for many companies, and Kubernetes has become the standard for running container-based apps across clouds. More than 80% of companies today run containers in production, and 78% of them use Kubernetes services.
As containerized infrastructure gains widespread adoption and Kubernetes technologies gain momentum, it is becoming crucial to understand how to get a clear picture of spending on K8s resources, act on cost-optimization opportunities and enhance Kubernetes performance.
In reality, simply using Kubernetes is not enough to get the best value from public clouds. According to a recent StackRox report, about 70% of companies have detected misconfigurations in their Kubernetes environments.
A containerized structure creates significant difficulties with cloud cost transparency, allocation and performance, which in turn cause challenges in resource management and optimization.
This post covers the top management challenges of Kubernetes performance and offers recommendations and technical tips for achieving K8s cluster transparency and overcoming cost management and optimization issues.
It will help you build a solid management strategy for your Kubernetes environment, make a giant leap forward in improving application performance and reduce its infrastructure cost.
Challenges of analyzing Kubernetes costs
Let’s understand why Kubernetes is so complex when it comes to cost optimization:
- Shared compute resources – Applications are packaged in pods and run on shared compute resources. The monthly bill from the cloud provider gives no visibility at the pod level, so per-application cost becomes a black box.
- Right-sizing of pods – If pods are not sized with the right requests and limits, nodes end up underutilized and you pay for unused resources.
- Right-sizing of persistent volumes – Costs increase if persistent volumes are over-provisioned.
- Orphan resources – Over time, many orphan volumes and other resources accumulate in the cluster, burning cost without being used.
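As a sketch of what right-sizing looks like in practice, the snippet below sets explicit requests and limits on a container. The workload name, image and values are illustrative assumptions, not recommendations; the right numbers come from observing actual usage:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api               # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-api
  template:
    metadata:
      labels:
        app: demo-api
    spec:
      containers:
        - name: demo-api
          image: demo/api:1.0  # placeholder image
          resources:
            requests:          # what the scheduler reserves on a node
              cpu: 100m
              memory: 128Mi
            limits:            # hard cap; keep close to observed usage
              cpu: 250m
              memory: 256Mi
```

Requests drive bin-packing (and therefore node count), so requests far above actual usage are the most common source of underutilized, paid-for capacity.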
How to Control Kubernetes Cost?
There is no direct solution available to control Kubernetes costs, but there are tools that can help us analyze and control them.
Understand What and Where You Are Spending
Before we jump into any major cost-control analysis, we need enough data to understand how much we are spending and on which resources, along with best practices or recommendations that help us control the cost. As platforms grow wider, analyzing this step by step becomes tough. This is where Kubecost and OpenCost help: they show our spending and also provide recommendations on where we can save costs.
Why Kubecost/OpenCost?
- Provides complete visibility of Kubernetes spend through cost allocation, optimization recommendations and governance.
- Provides estimated savings against key recommendations.
- Fully on-premise deployment; doesn't require egressing any data to a remote service.
Together, these can help save roughly 10-25% of your Kubernetes costs.
Optimize your Non-Production Environment
With Kubecost, once you have right-sized the resources and cleaned up the orphans, there is still a lot to optimize by running non-prod Kubernetes clusters with zero to minimal load during non-business hours.
How can we achieve this? We can ask the project team to shut down resources when they are not in use, but human actions are always challenging and consistency varies. To achieve this automatically, we need a solution that scales down the pods in a cluster so that the cluster autoscaler reduces the number of worker nodes during non-business hours, and then scales the pods back up, bringing the worker nodes back to their original state during business hours.
With kube-green, we were able to suspend pods and scale down our Kubernetes cluster during non-business hours.
Kube-green, to summarize, is a Kubernetes controller that defines a Custom Resource Definition called SleepInfo, which decides when to stop and start the pods in a namespace. You can, for example, suspend pods for 8 hours on weekdays (non-business hours) and for full days on weekends.
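For illustration, a minimal SleepInfo resource along the lines of the kube-green documentation might look like the following; the namespace and timezone here are assumptions you would adapt to your own non-prod clusters:

```yaml
apiVersion: kube-green.dev/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: my-team-dev   # hypothetical non-prod namespace
spec:
  weekdays: "1-5"          # Monday to Friday
  sleepAt: "20:00"         # scale the namespace's deployments to zero
  wakeUpAt: "08:00"        # restore the original replica counts
  timeZone: "Europe/Rome"  # adjust to your business hours
```

Once the deployments in the namespace are scaled to zero, the cluster autoscaler can remove the now-empty worker nodes, which is where the actual cost saving comes from.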
With this you are not only saving cost but also CO2, which is good for the organization as well as the environment.
If you would like to understand more about kube-green, check out our recent post: Understand About Kube-Green for Sustainability – FoxuTech.