
Understanding the Carbon Impact of Kubernetes

Sustainability is one of the most important topics in business today. But what are we really talking about? The longer we keep the lights on or the water running, the more resources we consume, and the bigger our carbon footprint becomes. Is physical consumption the only source of that footprint? No: the same is true for running compute-heavy software. The more computing power and digital storage we need, the more electricity we use.

We should acknowledge and accept that there is a problem. IT is a large and growing part of the global warming problem; by 2030 it is predicted that 21% of all the energy consumed in the world will be consumed by IT, energy that is still, for the most part, generated from fossil fuels. The internet already accounts for 3.7% of global carbon emissions, slightly more than aviation, and we all know we need to fly less. So what will we do about IT? Stop building or using the technology? Obviously not, since it is part of how we grow and work.

Considering these facts, it is striking that we rarely think about environmental sustainability when we think about Kubernetes. Yet pursuing sustainability within cloud computing is an effective way to reduce your environmental footprint while also cutting cloud costs.

As the adoption of Kubernetes continues to grow, understanding how this open-source orchestration platform can help reduce the carbon (CO2) impact of our digital lives should be top of mind.

One big reason is newly adopted regulation. The US and the European Union have issued carbon emissions regulations and now have goals to reduce output by 40% and 55% respectively by 2030. Reporting requirements are also real, and companies can be penalized depending on whether they exceed or stay under their emission quotas. Building efficient infrastructure has even started showing up in areas like the Well-Architected Framework, where practitioners are now asked to report on infrastructure efficiency, and more specific requirements may follow.

CO2 in the cloud-first world

The carbon intensity of electricity is the amount of CO2 emitted per kilowatt-hour (kWh) of electricity produced, usually expressed in grams of CO2 per kWh. Different forms of energy production have very different carbon intensities: a coal-fired plant emits far more CO2 per kWh than a wind or solar farm.

Carbon intensity offers a useful way to examine the climate impact of power and how cloud customers can reduce their emissions. Data from the International Energy Agency puts the global average power mix at 545 grams/kWh, while the average carbon intensity of the AWS power mix is about 393 grams/kWh. Figured this way, it is easy to see that large-scale cloud providers use a power mix (a combination of fuel sources) that is roughly 28% less carbon intensive than the global average.

The study also shows that a cloud workload needs only a fraction of the energy (about 16%) of its on-premises equivalent, and that the cloud power mix has only about 72% of the carbon intensity; multiplying the two leaves roughly 12% of the original carbon emissions. This math yields an estimated 88% reduction in carbon footprint for customers using a managed cloud service like AWS instead of an on-premises data center.

Image from accenture.com
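A quick back-of-the-envelope calculation makes these figures concrete. The sketch below simply multiplies the numbers quoted above (545 and 393 grams/kWh, and the 16% energy fraction); it is an illustration, not a measurement of any particular workload.

```python
# Back-of-the-envelope cloud vs. on-premises carbon comparison,
# using only the figures quoted above.

GLOBAL_INTENSITY = 545    # gCO2/kWh, global average power mix (IEA figure above)
AWS_INTENSITY = 393       # gCO2/kWh, approximate AWS power mix
ENERGY_FRACTION = 0.16    # cloud workload uses ~16% of the on-premises energy

intensity_ratio = AWS_INTENSITY / GLOBAL_INTENSITY       # ~0.72, i.e. ~28% less carbon intensive
emissions_fraction = ENERGY_FRACTION * intensity_ratio   # ~0.12, i.e. ~12% of the emissions

print(f"Cloud power mix is {1 - intensity_ratio:.0%} less carbon intensive")
print(f"Cloud emissions are ~{emissions_fraction:.0%} of on-premises, "
      f"an ~{1 - emissions_fraction:.0%} reduction")
```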

Even though cloud providers such as AWS, Azure, and Google have increased the share of renewables in their energy mix, evolving cloud technologies demand more and more compute power, and therefore electricity. Data centers already release enough greenhouse gases to be considered direct contributors to climate change: a single large facility can house tens of thousands of pieces of hardware and draw up to 100 megawatts of electricity.

These numbers remind us that adding a pillar of sustainability to general cloud best practices, as well as Kubernetes-based ones, can help organizations measure, manage, improve, and budget properly for their containerized workloads.

The Impact of Kubernetes on CO2

The introduction of Kubernetes into cloud environments has implications for two of the key variables behind these emissions: server utilization and emissions intensity. We will examine both in the following paragraphs.

The Natural Resources Defense Council considers server utilization the single most important factor in determining efficiency, and therefore carbon footprint. Higher utilization means less wasted capacity, fewer machines, a smaller infrastructure footprint, and less power required to run it. This, in turn, reduces the carbon emissions associated with operating the infrastructure.

Kubernetes has become a popular option for enterprises, which often adopt it for its efficiency and speed; in fact, efficiency is one of the top features that attracts them to the platform.

A clear illustration of this comes from the retailer Nordstrom and its use of Amazon Web Services. Before switching to Kubernetes, Nordstrom ran thousands of virtual machines on AWS at an average of only 8% CPU utilization. After the move to Kubernetes, average utilization rose to about 70%.

This higher utilization means Nordstrom can run the same workloads on roughly a tenth of the VMs it needed pre-Kubernetes, allowing it to scale down its infrastructure. That does wonders for Nordstrom's carbon footprint: the company can now report a roughly 90% reduction in the carbon emissions associated with its use of AWS VMs.
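The arithmetic behind that reduction is simple bin packing. The sketch below uses the 8% and 70% utilization figures quoted above; the fleet size, cores per VM, and the assumption that CPU is the only sizing constraint are hypothetical simplifications.

```python
import math

# How many VMs are needed to serve the same total CPU demand at different
# average utilization levels? Assumes CPU is the only sizing constraint.

def vms_needed(total_demand_cores: float, cores_per_vm: float, avg_utilization: float) -> int:
    usable_cores_per_vm = cores_per_vm * avg_utilization
    return math.ceil(total_demand_cores / usable_cores_per_vm)

# Hypothetical fleet: 1,000 cores of real demand running on 4-core VMs.
before = vms_needed(1000, 4, 0.08)   # ~8% utilization pre-Kubernetes  -> 3,125 VMs
after = vms_needed(1000, 4, 0.70)    # ~70% utilization post-Kubernetes ->   358 VMs

print(f"{before} VMs before, {after} VMs after: {1 - after / before:.0%} fewer")
```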

Another variable associated with carbon emissions that Kubernetes can potentially impact is emissions intensity. Since Kubernetes is inherently portable, it allows much greater control over decisions about workload placement.

The Low-CO2 K8 Scheduler is worth looking into for this purpose, as it makes scaling decisions based on the CO2 emissions intensity of different regions. There are plenty of reasons to favor low-carbon regions and carbon-neutral providers: scaling up workloads in a region powered largely by renewables adds far less to your climate impact, and such regions can also offer modern, high-performance servers.
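The core idea of carbon-aware placement fits in a few lines. To be clear, this is not the Low-CO2 K8 Scheduler's actual interface; the region names and intensity figures below are hypothetical placeholders, and a real setup would pull live data from a grid-carbon API.

```python
# Toy carbon-aware placement: schedule new workloads in the region whose grid
# currently has the lowest carbon intensity. All figures are hypothetical.

region_carbon_intensity = {      # gCO2 per kWh
    "eu-north-1": 35,            # hydro/wind-heavy grid
    "us-east-1": 380,
    "ap-south-1": 630,
}

def greenest_region(intensities: dict[str, float]) -> str:
    """Return the region with the lowest current carbon intensity."""
    return min(intensities, key=intensities.get)

print("Schedule new workloads in:", greenest_region(region_carbon_intensity))
```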

Think about it this way. The average car produces about 404 grams of CO2 per mile driven. For clusters with hundreds of worker nodes, a common assumption is that at least 30% of nodes are wasted, meaning they are unused and/or could safely be powered off. Powering down just 15 unused servers (Kubernetes worker nodes) can avoid roughly 1,000 kg (2,200 pounds) of CO2 emissions each month, the equivalent of around 2,500 miles (4,000 km) of driving.
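A rough calculation shows where a number like that can come from. The per-node power draw and grid carbon intensity below are assumptions chosen for illustration; only the 404 g/mile car figure comes from the text above.

```python
# Rough monthly CO2 saving from powering off idle Kubernetes worker nodes.
# Power draw and grid intensity are illustrative assumptions, not measurements.

IDLE_NODES = 15
WATTS_PER_NODE = 200               # assumed draw of an idle but powered-on server
HOURS_PER_MONTH = 730
GRID_KG_CO2_PER_KWH = 0.45         # assumed grid carbon intensity
CAR_KG_CO2_PER_MILE = 0.404        # ~404 g CO2 per mile, as quoted above

energy_kwh = IDLE_NODES * WATTS_PER_NODE * HOURS_PER_MONTH / 1000   # ~2,190 kWh/month
co2_kg = energy_kwh * GRID_KG_CO2_PER_KWH                           # ~985 kg/month
car_miles = co2_kg / CAR_KG_CO2_PER_MILE                            # ~2,440 miles

print(f"~{co2_kg:.0f} kg CO2 avoided per month, about {car_miles:.0f} car-miles")
```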

To achieve Kubernetes capacity optimization, practitioners need to understand the critical nature of right-sizing and proper resource provisioning. Proper tooling can help optimize resource provisioning without jeopardizing performance or resilience, helping companies reduce both their costs and their carbon footprint.
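In practice, right-sizing means setting container resource requests close to what the workload actually uses instead of generous guesses. The sketch below illustrates one simple heuristic (a high percentile of observed usage plus headroom); the usage samples, headroom factor, and starting request are all hypothetical, and in real clusters tools such as the Vertical Pod Autoscaler automate this kind of recommendation.

```python
import statistics

# Simple right-sizing heuristic: recommend a CPU request (in millicores) from
# observed usage, using roughly the 95th percentile plus some headroom.

def recommend_cpu_request(usage_millicores: list[int], headroom: float = 1.2) -> int:
    p95 = statistics.quantiles(usage_millicores, n=20, method="inclusive")[-1]
    return round(p95 * headroom)

# Hypothetical per-minute CPU usage samples for one container (millicores).
samples = [120, 140, 95, 210, 180, 160, 130, 150, 175, 220, 110, 145]

current_request = 1000           # the over-provisioned request we started with
recommended = recommend_cpu_request(samples)

print(f"current request: {current_request}m, recommended: ~{recommended}m")
```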

That said, merely moving to Kubernetes does not by itself reduce carbon emissions; you may also need to optimize your microservices for sustainability. For more details, read our previous post: How to optimize your Microservice Architecture for sustainability (foxutech.com).
