
Kubernetes Best Practices to Consider in 2023

Kubernetes (k8s) is one of the most popular container orchestration tools, and its adoption is growing rapidly. Why? One of the main reasons is automation. Beyond that, Kubernetes provides a wide range of advantages, such as workload discovery, self-healing, containerized application scaling, and, increasingly, infrastructure-as-code (IaC) workflows.

But how many application teams actually run Kubernetes in production? The count is not as high as the adoption ratio would suggest, and security remains one of the biggest concerns delaying it. How can we fix this? Many best practices need to be introduced at each layer of the implementation, not only to cover security but also to give Kubernetes deployments stability alongside it.

In this article we have identified some critical Kubernetes best practices that you can build into your implementation planning to improve your Kubernetes security, performance, and costs.

Best Practices

  • Always run with Stable Version

The very first best practice for any software tool is to run a stable version of the package, and the same applies to Kubernetes. What is the advantage of running a stable Kubernetes version? It is the most thoroughly patched for security and performance issues, and there will almost certainly be more community-based or vendor-provided support available as well. Following this K8s best practice helps you avoid security, performance, and cost anomalies that could jeopardize your service delivery.

  • Yamllint

If you are a developer trying to deploy a manifest, you may have experienced the pain: YAML can consume more time than planned and feel difficult to work with. A linter such as yamllint, which can handle multiple documents in a single file, helps here. There are also Kubernetes-specific linters that you can add to your IDE, pipeline, or CLI, which saves time.

You can lint your manifests and check them against best practices with kube-score. Kubeval will also lint your manifests, although it only checks for validity. The --dry-run option on kubectl (server-side dry run arrived around Kubernetes 1.13) allows Kubernetes to inspect but not apply your manifests, which is another way to validate your YAML files for K8s.
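
For instance, a minimal `.yamllint` configuration placed at the repository root might look like this (the rule values here are illustrative, not recommendations):

```yaml
# .yamllint: example configuration (rule values are illustrative)
extends: default

rules:
  line-length:
    max: 120               # allow longer lines than the default 80
  indentation:
    spaces: 2              # two-space indentation, common in Kubernetes manifests
  document-start: disable  # don't require a leading "---"
```

Running `yamllint .` then lints every YAML file in the repository.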

  • Versioning the config/manifests

As we all slowly move towards GitOps, it is best to keep all config files, such as deployments, services, and ingresses, in version control. Git is the most popular open-source, distributed version control system, and hosting platforms for it include GitHub, GitLab, Bitbucket, and Gogs.

This lets you check and track what was changed and who changed it, and in critical situations it helps you roll back a change, re-create resources, or restore your cluster to ensure stability and security.

Also make sure to write declarative YAML files instead of using imperative kubectl commands like kubectl run. A declarative approach lets you specify the desired state and leaves Kubernetes to figure out how to get there, and it is what makes versioning the configuration possible in the first place.
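
As a sketch, the declarative equivalent of `kubectl run nginx --image=nginx` is a manifest you can commit and apply with `kubectl apply -f` (the names and image tag are illustrative):

```yaml
# nginx-deployment.yaml: declarative equivalent of "kubectl run"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # pin a tag rather than relying on "latest"
```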

  • A GitOps Workflow

GitOps is an excellent model for automating all tasks, including CI/CD pipelines, with Git serving as the single source of truth. A GitOps framework can assist you in the following ways, in addition to increasing productivity:

  1. Accelerate deployments
  2. Enhance error tracking
  3. Automate your CI/CD workflows

Finally, using the GitOps approach simplifies cluster management and speeds up app development.

  • Set Resource Requests and Limits

When resources are scarce, production clusters can fail if resource limits and requests are not in place. Pods in a cluster can consume excess resources, increasing your Kubernetes costs. Furthermore, nodes can crash if pods consume too much CPU or memory and the scheduler can no longer place new pods. To avoid this, define resource requests and limits.
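
A minimal sketch of container-level requests and limits (the values are placeholders; size them from real usage data):

```yaml
# Illustrative container-level requests and limits; values depend on your workload
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myregistry/app:1.0   # placeholder image
      resources:
        requests:
          cpu: "250m"      # guaranteed minimum, used for scheduling decisions
          memory: "256Mi"
        limits:
          cpu: "500m"      # CPU is throttled above this
          memory: "512Mi"  # the container is OOM-killed above this
```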

  • Use Namespaces

It is always recommended to use namespaces, as they help teams logically partition a cluster into sub-clusters. This is especially useful when you want to share a Kubernetes cluster among multiple projects or teams at the same time. Namespaces allow development, testing, and production teams to collaborate within the same cluster without overwriting or interfering with each other's projects.

Kubernetes starts with initial namespaces such as default, kube-system, and kube-public. A cluster can support multiple namespaces that are logically separate but can communicate with one another. You can also define resource quotas at the namespace level, which further helps with resource allocation and management.
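
For illustration, a namespace paired with a ResourceQuota might look like the following (the names and quota values are assumptions):

```yaml
# A team namespace with a quota capping its share of cluster resources
apiVersion: v1
kind: Namespace
metadata:
  name: team-a        # illustrative name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"      # total CPU the namespace's pods may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"             # cap on the number of pods in the namespace
```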

  • Always use Pods with Deployments, ReplicaSets, And Jobs

As much as possible, avoid using naked pods. Naked pods cannot be rescheduled in the event of a node failure because they are not bound to a Deployment or ReplicaSet.

A deployment achieves two goals:

  1. Creates a ReplicaSet to keep the desired number of pods.
  2. Defines a replacement strategy for pods, such as a RollingUpdate.

Unless you have a strict `restartPolicy: Never` use case, using a Deployment is almost always more effective than creating pods directly.
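
A Deployment with an explicit RollingUpdate strategy could be sketched as follows (the names, image, and replica count are illustrative):

```yaml
# Deployment wrapping a ReplicaSet, with a RollingUpdate replacement strategy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # the ReplicaSet keeps three pods running
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # at most one pod down during an update
      maxSurge: 1              # at most one extra pod during an update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myregistry/web:2.3   # placeholder image
```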

  • Label your Kubernetes resources

Labels are key/value pairs that help you identify the characteristics of a specific resource in a Kubernetes cluster. They also let you filter and select objects with kubectl, so you can quickly find objects with a given characteristic.

Even if you don't think you'll use them right away, labelling your objects is a good idea. Use as many descriptive labels as you need to differentiate between the resources your team works on. Objects can be labelled by owner, version, instance, component, managed-by, project, team, confidentiality level, compliance aspects, and other organization-level criteria. This also simplifies cost management for the organization.
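
As an example, here is a metadata snippet using the well-known `app.kubernetes.io/*` label keys from the Kubernetes documentation, plus a couple of hypothetical organizational labels:

```yaml
# Labels on any resource's metadata (values are placeholders)
metadata:
  labels:
    app.kubernetes.io/name: payments
    app.kubernetes.io/version: "1.4.2"
    app.kubernetes.io/component: backend
    app.kubernetes.io/managed-by: helm
    team: billing            # custom organizational label
    cost-center: cc-1234     # custom label for cost allocation
```

You can then filter with, for example, `kubectl get pods -l team=billing`.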

  • Use Liveness, Readiness, and Startup Probes

Liveness probes check the health of long-lived pods on a regular basis, while readiness probes prevent Kubernetes from routing traffic to unhealthy pods. The kubelet restarts containers that fail a liveness check, helping to ensure your app's availability.

A probe checks whether the pod responds to a health check. No response indicates that your app is not running, or is unhealthy, on that pod, and the kubelet then restarts the container so the application can recover.

One more point: a startup probe, the third type of probe, tells K8s when a pod's start-up sequence is complete. Until a pod's startup probe succeeds, the liveness and readiness probes do not target it.
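
The three probe types can be sketched on a single container like this (the paths, port, and timings are assumptions):

```yaml
# Startup, liveness, and readiness probes on one container
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: myregistry/app:1.0   # placeholder image
      startupProbe:               # liveness/readiness wait until this succeeds
        httpGet:
          path: /healthz
          port: 8080
        failureThreshold: 30      # allow up to 30 * 5s for a slow start-up
        periodSeconds: 5
      livenessProbe:              # a failing container is restarted by the kubelet
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
      readinessProbe:             # a failing pod is removed from Service endpoints
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
```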

  • Keep It Stateless

Stateless apps are generally easier to manage than stateful apps, though this is changing as Kubernetes Operators gain popularity.

A stateless backend eliminates the need for teams new to Kubernetes to maintain long-running connections that limit scalability.

Stateless apps also make it easier to migrate and scale on demand. Just one more thing. Keeping workloads stateless allows you to use spot instances.

Here's the deal: one disadvantage of using spot instances is that providers such as AWS and Azure frequently reclaim the cheap compute resources on short notice, which can disrupt your workload. Making your application stateless lets you tolerate these interruptions.

  • Define Network Policies and Setup firewall

A network policy in Kubernetes helps to specify which traffic is allowed and which shouldn’t. It’s similar to putting firewalls between pods in a Kubernetes cluster. Regardless of how traffic moves between pods in your environment, it will only be permitted if your network policies allow it.

Before you can create a network policy, you must define the authorized connections and specify which pods the policy should apply to. This ensures that only permitted traffic can flow.
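
For example, a NetworkPolicy that only lets pods labelled `app=frontend` reach pods labelled `app=backend` on port 8080 might look like this (the labels, namespace, and port are illustrative):

```yaml
# Allow only frontend pods to reach backend pods on TCP 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: prod              # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: backend             # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # the only permitted source pods
      ports:
        - protocol: TCP
          port: 8080
```

Note that a network plugin that enforces NetworkPolicy (such as Calico or Cilium) must be installed for this to take effect.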

Also, set up a firewall in front of the cluster to limit external requests from reaching the API server, in addition to the network policies that control internal traffic within your cluster. This can be accomplished through regular or port-based firewall rules.

Additionally, make sure that IP addresses are whitelisted and that open ports are restricted.

  • Set Up Role Based Access Controls

Control who can access the Kubernetes API and what permissions they have with Role-Based Access Control (RBAC). RBAC is usually enabled by default in Kubernetes 1.6 and beyond (later for some managed providers), but if you have upgraded since then and haven't changed your configuration, you'll want to double-check your settings. Because of the way Kubernetes authorization controllers are combined, you must both enable RBAC and disable legacy Attribute-Based Access Control (ABAC).

Once RBAC is being enforced, you still need to use it effectively. Cluster-wide permissions should generally be avoided in favor of namespace-specific permissions. Avoid giving anyone cluster-admin privileges, even for debugging; it is much more secure to grant access only as needed on a case-by-case basis.
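
As a sketch, here is a namespace-scoped, read-only grant instead of a cluster-wide one (the namespace, user, and resource list are placeholders):

```yaml
# A Role limited to reading pods in one namespace, bound to a single user
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev               # illustrative namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]   # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane                 # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```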

You can explore the cluster role bindings and role bindings using `kubectl get clusterrolebinding` or `kubectl get rolebinding --all-namespaces`, and quickly check who is granted the special `cluster-admin` role.

If your application needs access to the Kubernetes API, create service accounts individually and give them the smallest set of permissions needed at each use site. This is better than granting overly broad permissions to the default account for a namespace.

Most applications don't need to access the API at all; `automountServiceAccountToken` can be set to `false` for these.
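
For instance, a dedicated service account that opts out of token automounting (the names are placeholders):

```yaml
# Service account whose pods get no API token mounted by default
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-api-access          # illustrative name
  namespace: prod
automountServiceAccountToken: false
```

The same field can also be set per pod in `spec.automountServiceAccountToken`.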

  • Encryption

"Secrets" in Kubernetes are objects used to hold sensitive information such as passwords, keys, and tokens. Using them instead of baking credentials into images limits the exploitable attack surface, and they give pods flexible access to sensitive data over their life cycle. Secrets are namespaced objects (maximum size: 1 MiB) that are mounted into pods via tmpfs on the nodes rather than written to disk.

By default, the API server stores secrets in etcd unencrypted (only base64-encoded), which is why it is necessary to enable encryption at rest in the API server configuration. With encryption enabled, even an attacker who obtains the etcd data cannot read the secrets without the encryption key, which is held locally on the API server.
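
A minimal `EncryptionConfiguration`, passed to the API server via `--encryption-provider-config`, might look like this (the key material shown is a placeholder; generate your own random 32-byte key):

```yaml
# Encrypt Secrets at rest with AES-CBC; "identity" remains as a read fallback
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>  # placeholder; never commit real keys
      - identity: {}   # allows reading data written before encryption was enabled
```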

For Kubernetes to function correctly, the organization must configure firewalls and open certain ports. For example, specific ports must be open on the control plane, including 6443 (API server), 2379-2380 (etcd), 10250 (kubelet), 8472 (e.g., Flannel VXLAN), and others. Several ports must also be open on worker nodes, including 10250, 10255, and 8472.

  • Harden node security

You can follow these three steps to improve the security posture on your nodes:

  1. Ensure the host is secure and configured correctly. One way to do so is to check your configuration against CIS Benchmarks; many products feature an auto checker that will assess conformance with these standards automatically.
  2. Control network access to sensitive ports. Make sure that your network blocks access to ports used by kubelet, including 10250 and 10255. Consider limiting access to the Kubernetes API server except from trusted networks. Malicious users have abused access to these ports to run cryptocurrency miners in clusters that are not configured to require authentication and authorization on the kubelet API server.
  3. Minimize administrative access to Kubernetes nodes. Access to the nodes in your cluster should generally be restricted — debugging and other tasks can usually be handled without direct access to the node.
  • Turn on audit logging

Make sure you have audit logs enabled and are monitoring them for anomalous or unwanted API calls, especially any authorization failures — these log entries will have a status message “Forbidden.” Authorization failures could mean that an attacker is trying to abuse stolen credentials. Managed Kubernetes providers, including GKE, provide access to this data in their cloud console and may allow you to set up alerts on authorization failures.
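
On self-managed control planes, a minimal audit policy file (passed via `--audit-policy-file`) that records metadata for every request might be sketched as:

```yaml
# Minimal audit policy: record metadata (user, verb, resource, response code)
# for every request; authorization failures appear as "Forbidden" / 403 entries
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata   # no request/response bodies, so secret data is never logged
```

Metadata level is a deliberately conservative choice here: logging request bodies would risk writing secret contents into the audit log.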

Final Thought

Follow these recommendations for a more secure Kubernetes cluster. Remember, even after you follow these tips to configure your Kubernetes cluster securely, you will still need to build security into other aspects of your container configurations and their runtime operations. As you improve the security of your tech stack, look for tools that provide a central point of governance for your container deployments and deliver continuous monitoring and protection for your containers and cloud-native applications.
