
How to Fix Kubernetes Objects Stuck in Terminating state – Kubernetes Troubleshooting

When working with Kubernetes, we sometimes need to clean up Kubernetes objects. Occasionally an object's deletion gets stuck, and we need to take additional steps to delete it forcefully. In this article we will see how to remove finalizers and clean up stuck Kubernetes objects.

You may have experienced, or heard about, Kubernetes objects such as namespaces getting stuck in deletion. I personally ran into this while learning Kubernetes. Troubleshooting it was complete trial and error: there were multiple suggestions floating around, and it took time to fix the issue, let alone understand it.

This article was prepared to help you understand the issue and how to fix it.

Let's discuss the following topics today:

  • Why does a Kubernetes object stay in the Terminating state indefinitely?
  • Why were the deployments not deleted even after the k8s namespace was deleted?
  • What are finalizers and owner references in k8s?
  • How to delete objects that are stuck in the Terminating state.

How does delete work?

Kubernetes has its own way of managing memory and resources, including its own garbage collection system: a systematic way to remove unused and unreferenced objects. Programming languages like Java and Go, and the servers built on them, all have such a process to ensure memory is managed optimally.

Kubernetes, being a modern platform, likewise manages its resources and performs garbage collection when resources are deleted, as well as in various other contexts.

Now let's come back to kubectl delete and why we are talking about garbage collection.

Kubernetes adds a special key to a resource's metadata, called a finalizer, when it creates resources that have dependencies. Here is the definition of finalizers from the Kubernetes documentation:

Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up resources the deleted object owned.

In other words, finalizers tell the Kubernetes API that certain resources must be deleted or otherwise handled before this particular resource can be removed.

For example, when you try to delete a PersistentVolume, the associated PersistentVolumeClaims and the pods bound to it would be affected, so you must follow the proper order.

Similarly, when you try to delete an Ingress, it might be associated with infrastructure items such as load balancers and target groups, which need to be deleted before the Ingress itself.
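As a concrete example of such protection, every PersistentVolumeClaim carries the built-in `kubernetes.io/pvc-protection` finalizer, which keeps the claim in the Terminating state until no pod is using it (manifest trimmed for illustration):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
  finalizers:
  - kubernetes.io/pvc-protection   # removed by the controller once no pod mounts the claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

If you delete this claim while a pod still mounts it, it stays in Terminating until the pod is gone.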

For a better understanding, let's create a namespace and a deployment with finalizers.

Create the NameSpace and add the Finalizer

Here is the command to create a namespace called demo-namespace:

# kubectl create ns demo-namespace

This command lets you view and edit the namespace manifest; add the finalizer entry under metadata as shown:

# kubectl edit ns demo-namespace
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2022-12-10T02:13:53Z"
  labels:
    kubernetes.io/metadata.name: demo-namespace
  name: demo-namespace
  finalizers:
  - kubernetes

Create a test deployment and add a finalizer

Let's create a deployment called demo in the namespace demo-namespace:

# kubectl create deployment demo --image=nginx -n demo-namespace

Here is the manifest for the deployment; edit it to add the finalizer entry under metadata:

# kubectl edit deployment demo -n demo-namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-12-10T03:03:18Z"
  finalizers:
  - kubernetes
  generation: 1
  labels:
    app: demo
  name: demo

Issue 1: The NameSpace was stuck in Terminating state

To understand the problem, let's delete the namespace we created:

# kubectl delete ns demo-namespace
namespace "demo-namespace" deleted

It will show the deleted message but won't exit, because the deletion is waiting on the finalizers.

# kubectl get ns demo-namespace
NAME              STATUS        AGE
demo-namespace   Terminating   10m56s

As it's in the Terminating state, we cannot perform any edit operations on this object's manifest; it is effectively read-only. To fix this, we need to remove the finalizer entries from the manifest. Here is how to do that.

Solution 1: Dump the current config, modify, and apply

Here we take the manifest output and remove the finalizer entries:

# kubectl get ns demo-namespace -o json > demo-namespace.json

The dumped file looks like this; edit it to remove both finalizer arrays (under metadata and under spec):

# cat demo-namespace.json
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2022-12-10T03:03:18Z",
        "deletionTimestamp": "2022-12-10T03:43:42Z",
        "finalizers": [
            "kubernetes"
        ],
        "labels": {
            "kubernetes.io/metadata.name": "demo-namespace"
        },
        "name": "demo-namespace",
        "resourceVersion": "725110",
        "uid": "a3yc4w03-2343-4136-f3a3-t7z2324b9n1v"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}
After modifying the file (removing both finalizer arrays), execute the following:

# kubectl replace --raw "/api/v1/namespaces/demo-namespace/finalize" -f ./demo-namespace.json

After running the above command, the namespace should now be absent from your k8s cluster.
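Hand-editing the dump is error-prone, so the same edit can be scripted. Here is a minimal Python sketch; the `strip_finalizers` helper and the sample input are illustrative, not part of kubectl:

```python
import json

def strip_finalizers(manifest: dict) -> dict:
    """Drop the finalizer lists from a namespace dump so the
    /finalize call can complete. Namespaces carry finalizers in
    two places: metadata.finalizers (like any object) and
    spec.finalizers (namespace-specific)."""
    manifest.get("metadata", {}).pop("finalizers", None)
    manifest.get("spec", {}).pop("finalizers", None)
    return manifest

# Shaped like the `kubectl get ns -o json` dump above (trimmed).
dumped = {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {"name": "demo-namespace", "finalizers": ["kubernetes"]},
    "spec": {"finalizers": ["kubernetes"]},
}
print(json.dumps(strip_finalizers(dumped), indent=4))
```

To use it on the real dump, load demo-namespace.json, pass the parsed object through `strip_finalizers`, and write it back before running the kubectl replace command.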

Solution 2: Using kubectl patch

Alternatively, you can use a single patch command to remove the finalizers from the manifest:

# kubectl patch ns/demo-namespace --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'

For more details, you can refer: https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/

Issue 2: Orphan deployments in deleted NameSpace

The deployments were not deleted even after the namespace was deleted. Normally, deleting a namespace cleans up the objects it contains, and if that cleanup fails because of finalizers, the namespace stays in the Terminating state. In this case, though, we force-removed the namespace's finalizers, leaving the deployment orphaned.

As the deployment is also in the Terminating state (read-only), we cannot edit its manifest. To get this deployment deleted, we can use the patch command.

# kubectl get ns demo-namespace
Error from server (NotFound): namespaces "demo-namespace" not found
# kubectl get all -n demo-namespace
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo     0/1     0            0           59m

In this case we cannot delete the deployment using kubectl delete. Use the following command instead:

# kubectl patch deployment.apps/demo -n demo-namespace --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'

You can apply the same patch to other object types, such as Pods, Jobs, Ingresses, and PVCs:

# kubectl patch <pod|job|ingress|pvc> <name-of-resource> \
  -p '{"metadata":{"finalizers":[]}}' --type=merge
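Note that the two patch styles behave slightly differently: `--type json` with an `op: remove` deletes the finalizers key outright, while `--type=merge` replaces the whole list with the empty list you supply. Either satisfies the API server. A small Python sketch of the two semantics; both helpers are deliberate simplifications of RFC 6902 (JSON Patch) and RFC 7386 (merge patch), not full implementations:

```python
import copy

def json_patch_remove(manifest: dict, path: str) -> dict:
    """JSON Patch 'remove': delete the key named by a simple
    two-segment path such as /metadata/finalizers."""
    out = copy.deepcopy(manifest)
    parent, key = path.lstrip("/").split("/")
    del out[parent][key]
    return out

def merge_patch(manifest: dict, patch: dict) -> dict:
    """Merge patch: dicts merge recursively, but lists are
    replaced wholesale, so patching with [] empties the list."""
    out = copy.deepcopy(manifest)
    for k, v in patch.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = merge_patch(out[k], v)
        else:
            out[k] = v
    return out

obj = {"metadata": {"name": "demo", "finalizers": ["kubernetes"]}}
print(json_patch_remove(obj, "/metadata/finalizers"))   # key removed
print(merge_patch(obj, {"metadata": {"finalizers": []}}))  # key set to []
```

Either way, once the finalizers are gone the API server lets the deletion proceed.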

If you have more objects that need a finalizer removed, do it in a loop, with file.txt containing one <kind>/<name> entry per line:

for i in $(cat file.txt); do kubectl patch $i -n demo-namespace --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'; done

If you are interested in how to gracefully shut down a pod, you can refer to Kubernetes Pod Graceful Shutdown – How?
