
Kubernetes StatefulSets – How to Troubleshoot and Best Practices

Are you planning to deploy a database in a Kubernetes cluster? If so, there is a lot to consider, and you should have a proper strategy. Why? Because a database is critical to any business. As we know, Kubernetes is a container orchestration tool that uses many controllers to run applications as containers (Pods). One of these controllers is the StatefulSet, which is used to run stateful applications, and it is the one we need to deploy a database on Kubernetes.

Deploying stateful applications in a Kubernetes cluster can be a complex task, because a stateful application typically expects a primary-replica architecture and fixed Pod names. How do you achieve this in Kubernetes? The StatefulSet controller addresses this problem when deploying stateful applications in the Kubernetes cluster.

In this article, you will learn what StatefulSets are, when to use them, and how to deploy a stateful application using the StatefulSet controller, with a step-by-step example.

What Are Stateful Applications?

Stateful applications are applications that store data and keep track of it. All databases, such as MySQL, Oracle, and PostgreSQL, are examples of stateful applications. Stateless applications, on the other hand, do not keep data; Nginx is one example. For each request, a stateless application receives new data and processes it.

In a modern web application, stateless applications connect with stateful applications to serve the user's request. A Node.js application is a stateless application that receives new data on each request from the user. It is then connected to a stateful application, such as a database (for example, MySQL or PostgreSQL), to process the data. The database stores the data and keeps updating it based on the user's requests.

Let's look in detail, with some examples, at StatefulSets in the Kubernetes cluster: how to create them, how to troubleshoot them, and best practices.

What Is a Kubernetes StatefulSet?

A StatefulSet is a set of pods with a unique, persistent hostname and ID. StatefulSets are designed to run stateful applications in Kubernetes with dedicated persistent storage. When pods run as part of a StatefulSet, Kubernetes keeps state data in the persistent storage volumes of the StatefulSet, even if the pods shut down.

StatefulSets are commonly used to run replicated databases with a unique persistent ID for each pod. Even if the pod is rescheduled to another machine, or moved to an entirely different data center, its identity is preserved. Persistent IDs allow you to associate specific storage volumes with pods throughout their lifecycle.

Kubernetes Statefulset

A feature introduced in Kubernetes 1.14 that is beneficial to StatefulSets is local persistent volumes. A local persistent volume is a local disk attached directly to a single Kubernetes node, which acts as a persistent storage resource for that node. Because the data lives on one specific node, pods that use a local persistent volume are always scheduled back onto that node; the benefit is fast, durable storage without depending on a remote storage service.

You can check out more of our articles on Kubernetes troubleshooting.

StatefulSet vs. DaemonSet vs. Deployment

StatefulSets, DaemonSets, and Deployments are different ways to deploy pods in Kubernetes. All three of these are defined via YAML configuration. When you apply this configuration in your cluster, an object is created, which is then managed by the relevant Kubernetes controller.

The key differences between these three objects can be described as follows:

  • StatefulSets run one or more pods with a persistent ID and persistent volumes, which is suitable for running stateful applications.
  • DaemonSets run one or more pods across the entire cluster or a certain set of nodes. This can be used to run administrative workloads such as logging and monitoring components.
  • Deployments run one or more pods, allowing you to define how many replicas of the pods need to run, on which types of nodes, and which deployment strategy should be used (for example, a Rolling deployment which replaces pods with a new version one by one, to prevent downtime).

When to Use StatefulSets

There are several reasons to consider using StatefulSets. Here are a few examples:

  1. Assume you deploy a MySQL database in the Kubernetes cluster and scale it to three replicas, and a frontend application wants to access the MySQL cluster to read and write data. Read requests can be served by any of the three Pods, but write requests are forwarded only to the first (primary) Pod, and the data is then synced with the other Pods. You can achieve this with StatefulSets because each Pod has a stable, predictable identity.
  2. Deleting or scaling down a StatefulSet will not delete the volumes associated with the stateful application. This keeps your data safe: if you delete the MySQL Pod or the Pod restarts, you still have access to the data in the same volume.
  3. A Redis Pod that has access to a volume needs to maintain access to the same volume even if it is redeployed or restarted.
  4. A Cassandra cluster needs each node to maintain access to its own data.
  5. A web app needs to communicate with its replicas using known, predefined network identifiers.
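As a concrete sketch of points 1 and 2 above, assume a StatefulSet named mysql already exists (the name is illustrative). Scaling it up creates Pods with fixed ordinal names (mysql-0, mysql-1, mysql-2), and scaling it back down removes the extra Pods but leaves their PersistentVolumeClaims bound:

# kubectl scale statefulset mysql --replicas=3
# kubectl scale statefulset mysql --replicas=1
# kubectl get pvc

Scaling back up later reattaches each Pod to its original volume, so the replicas resume with their previous data.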

Creating StatefulSets

Let’s create a StatefulSet manifest file and deploy it using kubectl apply. After you create a StatefulSet, its controller continuously monitors the cluster and makes sure that the specified number of pods are running and available.

When a StatefulSet detects a pod that failed or was evicted from its node, it automatically deploys a new pod with the same unique ID, connected to the same persistent storage, and with the same configuration as the original pod (for example, resource requests and limits). This ensures that clients that were previously served by the failed pod can resume their transactions.

The following example describes a manifest file for a StatefulSet. Typically, a StatefulSet is defined together with a Service object, which receives traffic and forwards it to the StatefulSet.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

A few important points about the StatefulSet manifest:

  • The code above creates a StatefulSet called web, containing two pods running an NGINX container image.
  • The spec.selector.matchLabels.app field must match spec.template.metadata.labels (both are set to app: nginx). This ensures that the StatefulSet can correctly identify the pods it manages.
  • The pod exposes a port named web, defined as port 80.
  • volumeClaimTemplates provides storage using a PersistentVolumeClaim template called www. Each pod receives its own volume with ReadWriteOnce access and 1Gi of requested storage.
  • A mount path is specified under the container’s volumeMounts and references the www claim. It indicates that the storage volume should be mounted in the /usr/share/nginx/html folder within the container.
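The manifest above sets serviceName: "nginx", so it expects a headless Service with that name to exist alongside the StatefulSet; a minimal sketch of such a Service might look like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - port: 80
    name: web

Setting clusterIP: None makes the Service headless: instead of a single virtual IP, DNS resolves to the individual pod addresses, which is what gives each pod a stable network identity such as web-0.nginx.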

How to Debug a StatefulSet

If you are experiencing issues with a StatefulSet, take the following steps to debug it:

Step 1: List Pods in the StatefulSet

List all pods belonging to a StatefulSet using this command. Be sure to define the label specified in your StatefulSet manifest (substitute it for myapp below):

# kubectl get pods -l app=myapp

The following pod statuses indicate a problem with the pod:

  • Failed – all containers in the pod terminated, and at least one container exited with a non-zero status or was forcibly terminated by Kubernetes.
  • Unknown – the pod’s status could not be retrieved by Kubernetes, typically due to a communication error with the node.

You can run kubectl describe pod [pod-name] to get more information about pods that appear to be malfunctioning.
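Beyond kubectl describe, a few other commands often help narrow down StatefulSet issues; the pod name and label below are illustrative:

# kubectl get events --sort-by=.metadata.creationTimestamp
# kubectl get pvc -l app=myapp
# kubectl logs myapp-0 --previous

The events list surfaces scheduling and volume-attachment errors, the PVC list shows whether each pod’s claim is Bound, and --previous prints the logs of the last terminated container in a crashing pod.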

Step 2: Debug Individual Pods

Once you identify a problem with a pod, you may find it difficult to debug, because the StatefulSet automatically replaces malfunctioning pods. To enable debugging, StatefulSets provide a special annotation you can use to suspend all controller actions on a pod, in particular scaling operations, allowing you to debug it.

Use this command to set the initialized="false" annotation and prevent the StatefulSet from scaling the problematic pod:

# kubectl annotate pods [pod-name] pod.alpha.kubernetes.io/initialized="false" --overwrite

This will pause all operations of the StatefulSet on the pod and will prevent the StatefulSet from scaling down (deleting) the pod. You can then set a debug hook and execute commands within the pod’s containers, without interference from scaling operations.

Note that when initialized is set to "false", the entire StatefulSet can become unresponsive if the pod is unhealthy or unavailable. When you are done debugging, run the same command and set the annotation back to "true". Also be aware that pod.alpha.kubernetes.io/initialized is an alpha-era annotation: Kubernetes 1.8 and later ignore it, and ordered startup is instead controlled by the spec.podManagementPolicy field.

Step 3: Step-wise Initialization

If you didn’t succeed in debugging the pod using the above technique, this could mean there are race conditions when the StatefulSet is bootstrapped by Kubernetes. To overcome this, you can set the initialized annotation to "false" in the StatefulSet manifest, and then create it in the cluster with that annotation in place.

Here is how to add the annotation directly to the manifest:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: "my-app"
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        pod.alpha.kubernetes.io/initialized: "false"
...

Now, when you use kubectl apply to create the StatefulSet, the following process will take place:

  • The StatefulSet loads the first pod and waits.
  • You can debug the pod to make sure it initializes correctly.
  • When you are done examining the pod, set the annotation initialized="true" on the latest pod that was created:

# kubectl annotate pods [pod-name] pod.alpha.kubernetes.io/initialized="true" --overwrite

The StatefulSet will now load the next pod and wait, allowing you to debug it. Repeat the last two steps until you have examined all pods in the StatefulSet.

How to Delete a StatefulSet

Normally, when using a StatefulSet you do not need to manually remove StatefulSet pods. The StatefulSet controller is responsible for creating, resizing, and removing members of the StatefulSet, to make sure that the specified number of pods are ready to receive requests. A StatefulSet guarantees that, at most, one pod with a particular ID is running in the cluster at any given time (this is called the “at most one” semantic).

When debugging a StatefulSet, you might need to manually force a delete of pods. But be very careful when doing this because it can violate the “at most one” semantic. StatefulSets are used to run distributed applications that require reliable network identity and storage. Having multiple members with the same ID can cause system failure and data loss (for example it can create a split-brain scenario).

To delete a StatefulSet and all its pods, run this command:

# kubectl delete statefulsets [statefulset-name]

After removing the StatefulSet itself you may need to remove the associated Service object:

# kubectl delete service [service-name]

If you need to delete the StatefulSet object but keep its pods, run this command instead:

# kubectl delete -f [manifest-file] --cascade=orphan

Later, to delete the individual pods, use this command (substituting myapp with the label used by your pods):

# kubectl delete pods -l app=myapp

These steps will help you identify basic issues with StatefulSets and resolve them. However, in many real-life scenarios, troubleshooting will be more complex. You will need to consider multiple aspects of the Kubernetes environment and diagnose issues in multiple moving parts.

Best Practices

If you are planning to deploy stateful applications, such as Oracle, MySQL, Elasticsearch, and MongoDB, then using StatefulSets is a great option. The following points need to be considered while creating stateful applications:

  1. Create a separate namespace for databases.
  2. Place all the components the stateful application needs, such as ConfigMaps, Secrets, and Services, in that namespace.
  3. Put your custom scripts in ConfigMaps.
  4. Use a headless Service instead of a load balancer Service when creating Service objects.
  5. Use a secrets manager such as HashiCorp Vault for storing your Secrets.
  6. Use persistent volume storage for the data, so it won’t be deleted even if the Pod dies or crashes.
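As a sketch of the first and fourth points, a dedicated namespace and a headless Service for a database might be declared as follows (all names here are illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: databases
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: databases
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - port: 3306
    name: mysql

Keeping the Service, ConfigMaps, and Secrets in the same namespace as the StatefulSet makes the deployment self-contained and easier to manage.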

Deployment objects are the most commonly used controllers for creating Pods in Kubernetes. You can easily scale these Pods by specifying a replica count in the manifest file. For stateless applications, Deployment objects are most suitable. For example, assume you plan to deploy your Node.js application and want to scale it to five replicas; in this case, the Deployment object is well suited.

The diagram below shows how Deployment and StatefulSets assign names to the Pods.

deployment vs statefulsets

StatefulSets create ordinal service endpoints for each Pod created using the replicas option. The diagram below shows how the stateful Pod endpoints are created with ordinal numbering and how they communicate with each other.

statefulset application
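For the two-replica web StatefulSet created earlier, the ordinal endpoints resolve through the headless nginx Service; assuming the default namespace, the stable per-pod DNS names look like this:

web-0.nginx.default.svc.cluster.local
web-1.nginx.default.svc.cluster.local

You can verify this from inside the cluster, for example:

# kubectl exec web-0 -- nslookup web-1.nginx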

Hope this article helped you understand StatefulSets. Keep following to learn more about Kubernetes and DevOps tools.
