
What Happens When Kubernetes Pod CPU & Memory Run High

Kubernetes, the container orchestration platform of choice for many modern applications, thrives on efficient resource management. But what happens when pods, the fundamental units of deployment, start demanding more CPU or memory than they should? Buckle up, because we’re about to dive into the consequences of resource-hungry pods in the Kubernetes world.

The Two Biggies: CPU Throttling and OOM Killer

When a pod’s CPU usage breaches its limits, Kubernetes steps in with CPU throttling. This essentially acts like a dimmer switch, slowing down the container’s processes to ensure other pods get their fair share of CPU cycles. Imagine your pod struggling to run a marathon while wearing heavy boots; that’s CPU throttling in action.

Memory issues present a different beast. If a pod tries to gobble up more memory than its limit, the OOM killer swings into action. This ruthless entity identifies the most memory-hungry container and brutally terminates it, freeing up resources for other pods. Think of it as the eviction notice your landlord serves when you throw one too many epic house parties.

Let’s understand CPU throttling and memory OOM in a bit more detail.

Kubernetes CPU throttling

As mentioned, CPU throttling is a behavior where processes are slowed down when they are about to reach certain resource limits.

Similar to the memory case, these limits could be:

  • A Kubernetes Limit set on the container.
  • A Kubernetes ResourceQuota set on the namespace.
  • The node’s actual CPU capacity.

Think of the following analogy. We have a highway with some traffic where:

  • CPU is the road.
  • Vehicles represent the processes, each one with a different size.
  • Multiple lanes represent having several cores.
  • A request would be an exclusive road, like a bike lane.

Throttling here is represented as a traffic jam: eventually, all processes will run, but everything will be slower.

CPU process in Kubernetes

CPU is handled in Kubernetes with shares. Each CPU core is divided into 1,024 shares, which are then distributed among all running processes using the cgroups (control groups) feature of the Linux kernel.

If the CPU can handle all current processes, no action is needed. If processes demand more than 100% of the available CPU, shares come into play. Like any system running the Linux kernel, Kubernetes relies on the CFS (Completely Fair Scheduler), so processes with more shares get more CPU time.
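
For example, here is a minimal Pod sketch (the name and image are placeholders) showing how a CPU request and limit translate into shares and CFS quota under the cgroup v1 CPU controller:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-shares-demo      # hypothetical name
spec:
  containers:
    - name: app
      image: nginx           # any image works for the illustration
      resources:
        requests:
          cpu: "250m"        # roughly 250/1000 * 1024 = 256 cpu.shares
        limits:
          cpu: "500m"        # CFS quota of 50ms per 100ms period; throttled beyond that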

Unlike memory, Kubernetes won’t kill Pods because of throttling.

CPU overcommitment

As we saw in the limits and requests article, it’s important to set limits or requests when we want to restrict the resource consumption of our processes. Because requests are guaranteed, the scheduler will never place Pods whose combined CPU requests exceed the node’s capacity. Limits, however, are not guaranteed, so the sum of all limits on a node can be larger than the CPU that is actually available. This is CPU overcommitment: if several containers try to use their full limit at the same time, there is not enough CPU to go around and they get throttled, as illustrated below.
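
To make this concrete, here is a hypothetical resources fragment (the numbers are made up) used by three different Deployments whose Pods all land on a node with 4 allocatable CPUs:

# The same resources block in three Deployments on a 4-CPU node:
resources:
  requests:
    cpu: "1"    # 3 x 1 CPU requested = 3 CPUs, so all three Pods schedule
  limits:
    cpu: "2"    # 3 x 2 CPUs of limits = 6 CPUs, more than the node has

If all three containers try to use their full limit at the same time, the node cannot satisfy them and the CFS throttles them.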

Monitoring Kubernetes CPU throttling

You can check how close a process is to its Kubernetes CPU limit using the following PromQL query:

(sum by (namespace,pod,container)(rate(container_cpu_usage_seconds_total{container!=""}[5m])) / sum by (namespace,pod,container)(kube_pod_container_resource_limits{resource="cpu"})) > 0.8

In case we want to track the amount of throttling happening in our cluster, cAdvisor provides container_cpu_cfs_throttled_periods_total and container_cpu_cfs_periods_total. With these two, you can easily calculate the percentage of throttled CPU periods, as shown below.
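
For example, a query along these lines (a sketch combining those two counters) returns, per container, the fraction of CPU periods in which throttling occurred over the last five minutes:

sum by (namespace,pod,container)(rate(container_cpu_cfs_throttled_periods_total{container!=""}[5m])) / sum by (namespace,pod,container)(rate(container_cpu_cfs_periods_total{container!=""}[5m]))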

Kubernetes OOM

Every container in a Pod needs memory to run. Kubernetes limits are set per container in either a Pod definition or a Deployment definition.
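
For example, a minimal sketch of such a per-container memory setting (the Pod name and image are placeholders) looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: memory-demo          # hypothetical name
spec:
  containers:
    - name: app
      image: nginx           # any image works for the illustration
      resources:
        requests:
          memory: "128Mi"    # what the scheduler reserves for this container
        limits:
          memory: "256Mi"    # going above this gets the container OOMKilled (exit code 137)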

All modern Unix systems have a way to kill processes when they need to reclaim memory. In Kubernetes this shows up as exit code 137, or OOMKilled: the process used more memory than the allowed amount and had to be terminated.

This is a feature of the Linux kernel, which assigns an oom_score value to each process running in the system. It also allows setting a value called oom_score_adj, which Kubernetes uses to implement Quality of Service (QoS) classes. The kernel’s OOM Killer reviews processes and terminates those that are using more memory than they should.

Note that in Kubernetes, a process can reach any of these limits:

  • A Kubernetes Limit set on the container.
  • A Kubernetes ResourceQuota set on the namespace.
  • The node’s actual Memory size.

As a use case, assume a Pod that is part of a Deployment sees its memory usage climb past the container’s limit: the kernel OOM-kills the container, the kubelet restarts it according to the Pod’s restartPolicy, and the Pod reports the OOMKilled reason with exit code 137. If the container keeps exceeding its limit, the repeated restarts eventually put the Pod into CrashLoopBackOff while the Deployment keeps it scheduled.
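
In that situation, the Pod’s status (trimmed to the relevant fields, with a hypothetical container name) typically looks something like this when inspected with kubectl get pod -o yaml:

status:
  containerStatuses:
    - name: app              # hypothetical container name
      restartCount: 3
      lastState:
        terminated:
          reason: OOMKilled
          exitCode: 137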

Memory overcommitment

Limits can be higher than requests, so the sum of all limits can be higher than node capacity. This is called overcommit and it is very common. In practice, if all containers use more memory than requested, it can exhaust the memory in the node. This usually causes the death of some pods in order to free some memory.

Monitoring Kubernetes OOM

When using node exporter in Prometheus, there’s a metric called node_vmstat_oom_kill that counts OOM kill events on each node. Tracking it tells you when an OOM kill has already happened, but ideally you want visibility before it happens.
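
As a reactive safety net, an alert expression along these lines (a minimal sketch, assuming node exporter’s default metric name) fires whenever at least one OOM kill has happened on a node in the last ten minutes:

increase(node_vmstat_oom_kill[10m]) > 0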

To get ahead of the event, you can instead check how close each container is to its Kubernetes memory limit:

(sum by (namespace,pod,container) (container_memory_working_set_bytes{container!=""}) / sum by (namespace,pod,container) (kube_pod_container_resource_limits{resource="memory"})) > 0.8

Consequences of Resource Overload:

High CPU usage, high memory usage, or both in your pods leads to a chain reaction of unpleasantness:

  • Performance Degradation: Sluggish pods, like athletes running on fumes, struggle to fulfil their tasks, leading to delayed responses, increased latency, and ultimately, unhappy users.
  • Instability and Crashes: Throttled pods become vulnerable to crashes, especially under peak loads. Imagine trying to lift a barbell after running a marathon; things are bound to get wobbly.
  • Resource Competition and Evictions: Resource-hungry pods hog the spotlight, starving others of vital resources. The OOM killer’s merciless hand can then claim innocent bystanders, causing cascading failures.
  • Costly Inefficiencies: Overprovisioning resources to avoid throttling and evictions wastes precious resources and increases infrastructure costs. Think of renting a mansion just to avoid sharing a kitchen; it’s not exactly budget-friendly.

Diagnosing the Root Cause:

  • Resource Spikes vs. Sustained Usage: Distinguish between temporary resource spikes and sustained high consumption. Spikes might indicate workload bursts, while sustained usage could signal inefficient code, configuration issues, or even malware. Think of the difference between a sudden burst of traffic on a highway and a chronically congested road.
  • Bottleneck Identification: Analyze resource utilization across CPU, memory, and disk I/O. Bottlenecks in one area might not necessarily translate to high CPU or memory usage. Think of a car stuck in traffic due to a roadblock, not engine limitations.
  • Log Analysis: Logs can provide valuable clues about pod behavior and resource consumption. Look for errors, warnings, and abnormal resource usage patterns to pinpoint the root cause. Think of a ship’s logbook revealing clues about the cause of rough seas.

Taming the Resource Monsters:

So, how do we keep our pods from turning into resource-guzzling gremlins? Here are some strategies:

  • Resource Requests and Limits: Set realistic resource requests (desired minimum) and limits (maximum allowed) for your pods based on their actual needs. Don’t be a party animal asking for unlimited pizza!
  • Monitoring and Alerting: Continuously monitor pod resource usage and set up alerts for anomalies. Early detection prevents full-blown resource meltdowns.
  • Horizontal/Vertical Pod Autoscaling (HPA/VPA): Let HPA dynamically adjust the number of pods, and VPA their resource requests, based on demand. This ensures efficient resource utilization without overprovisioning (see the HPA sketch after this list).
  • Optimize Your Containers: Analyze and optimize your container images to minimize resource consumption. Think of it as fine-tuning your athlete’s training regime for peak performance.
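
As an illustration of the autoscaling point above, here is a minimal HPA sketch (the names are placeholders) that scales a Deployment between 2 and 10 replicas to keep average CPU utilization around 70% of the requested CPU:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70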

Proactive Strategies:

  • Predictive Scaling: Employ machine learning-powered tools to predict future resource demands and proactively scale pods based on anticipated workload. This can prevent resource spikes and ensure smooth operation even during peak traffic times. Imagine a weather forecasting system for resource consumption, allowing the captain to prepare for incoming storms.
  • Chaos Engineering: Introduce controlled failures and resource constraints to test cluster resilience and identify potential vulnerabilities. This proactive approach helps prevent real-world disruptions by simulating and mitigating issues before they occur. Think of running fire drills on a ship to ensure everyone is prepared for an actual fire.
  • Container Security: High resource usage can sometimes be a symptom of malicious activity within containers. Deploying security tools and implementing best practices can protect against vulnerabilities and unauthorized resource consumption. Think of fortifying the ship’s defenses to prevent pirates from siphoning off resources.

Beyond Efficiency:

  • Performance Tuning: Optimize pod configurations and application code to reduce resource consumption without compromising performance. This can involve code profiling, memory allocation adjustments, and container image optimization. Think of fine-tuning the engine and sails of the ship for optimal speed and fuel efficiency.
  • Microservices Architecture: Breaking down applications into smaller, independent microservices can improve resource utilization and isolation. This allows for scaling individual services independently based on needs, preventing resource waste for underutilized modules. Think of dividing the cargo into smaller, manageable units instead of carrying everything in one giant container.
  • Resource Sharing and Isolation: Utilize Kubernetes features like resource quotas and namespaces to share resources efficiently and prevent single pods from monopolizing available resources. This creates a more equitable and predictable resource allocation environment (see the quota sketch after this list). Think of establishing fair trade agreements and designated zones within the port to optimize resource utilization and prevent chaos.
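
As a sketch of that resource-sharing idea (the namespace name and numbers are made up), a ResourceQuota caps the total CPU and memory that all pods in a namespace can request and be limited to:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi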

By incorporating these deeper insights and proactive strategies, we can transform our Kubernetes clusters from reactive resource battlegrounds into proactive, efficient, and cost-effective ecosystems. Remember, the journey toward optimal resource management is continuous, requiring constant monitoring, adaptation, and innovation.

Conclusion

High CPU and memory usage in Kubernetes pods is a recipe for disaster. By understanding the consequences and implementing proper resource management techniques, you can keep your cluster running smoothly and avoid the wrath of the OOM killer. Remember, in the Kubernetes ecosystem, balanced resources are key to healthy and happy pods!
