
How to Restart Kubernetes Pods using Kubectl

Kubernetes pods should ideally be running at all times, but sometimes we have to restart them to apply changes or to debug issues such as OOM errors. In this post we will see how to restart Kubernetes pods using kubectl.

What is a pod?

Pods are the smallest deployable units of computing that we can create and manage in Kubernetes. A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. As well as application containers, a Pod can contain init containers that run during Pod startup. You can also inject ephemeral containers for debugging if your cluster offers this.
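
If you want to inspect these Pod spec fields, or try an ephemeral debug container, the built-in kubectl commands below can be used; the debug example is a minimal sketch that assumes your cluster supports ephemeral containers and that busybox is an acceptable debug image:

# kubectl explain pod.spec.containers
# kubectl explain pod.spec.initContainers
# kubectl debug -it <pod_name> -n <namespace> --image=busybox --target=<container_name>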

Introduction

There is no kubectl restart [podname] command for use with K8S (with Docker you can use docker restart [container_id]), so we have to perform a restart with different commands.

Why do we need to restart a pod

Before we see how to restart, let's understand the possible reasons we might need to restart a pod.

  • Sometimes an application fails due to a resource issue, most often OOM (see the check after this list).
  • A pod is stuck in a terminating state. You can spot these as pods whose containers have all terminated, yet the pod object still shows as running. This usually happens when a cluster node is taken out of service unexpectedly, and the cluster scheduler and controller-manager cannot clean up the pods on that node.
  • An application error that can't be recovered without a restart.
  • Timeouts.
  • Mistaken deployments.
  • Requesting persistent volumes that are not available.
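
To confirm an OOM kill before restarting, you can inspect the last terminated state of the pod's containers; a minimal check, assuming the pod object still exists:

# kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'

If this prints OOMKilled, the container was killed for exceeding its memory limit, and a restart alone may not fix it without raising that limit.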

Pod Status

The status of a pod tells you which stage of its lifecycle it is currently in. There are five stages in the lifecycle of a pod:

  • Pending: This state shows at least one container within the pod has not yet been created.
  • Running: All containers have been created, and the pod has been bound to a Node. At this point, the containers are running, or are being started or restarted.
  • Succeeded: All containers in the pod have been successfully terminated and will not be restarted.
  • Failed: All containers have been terminated, and at least one container has failed, i.e. it exited with a non-zero status.
  • Unknown: The status of the pod cannot be obtained.
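
You can read the phase field directly instead of scanning the full pod listing; a small sketch using jsonpath:

# kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.status.phase}'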

If you notice a pod in an undesirable state where the status shows 'Error', you might try a restart as part of your troubleshooting to get things back to normal operations. You may also see the status 'CrashLoopBackOff', which means a container keeps crashing and Kubernetes is backing off between its automatic restart attempts.
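
Before restarting, it is worth checking why the pod is failing; these commands show recent events and the logs of the previous (crashed) container instance:

# kubectl describe pod <pod_name> -n <namespace>
# kubectl logs <pod_name> -n <namespace> --previous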

Restart a pod

You can use the following methods to 'restart' a pod with kubectl. Once the new pods are created, they will have different names from the old ones. Run the kubectl get pods command to check the new pod details.
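
To watch the replacement pods come up in real time during any of the methods below, add the -w (watch) flag:

# kubectl get pods -n <namespace> -w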

Method 1: kubectl rollout restart

This is the recommended method, as it will not bring the application down completely: pods keep functioning while they are replaced one at a time.

The controller kills one pod at a time, relying on the ReplicaSet to scale up new pods until all of them are newer than the moment the restart was triggered. A rolling restart is the ideal approach because your application will not be affected or go down.

For rolling out a restart, use the following command:

# kubectl rollout restart deployment <deployment_name> -n <namespace>
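
You can then follow the progress of the restart; kubectl rollout status waits until the rollout completes or fails:

# kubectl rollout status deployment <deployment_name> -n <namespace>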

Method 2: kubectl scale

This method is not always recommended, as it brings your application down completely while the replica count is zero. If downtime is acceptable, it can be a quicker alternative to the kubectl rollout restart method.

If there is no YAML file associated with the deployment, you can still set the number of replicas to 0 directly:

# kubectl scale deployment <deployment name> -n <namespace> --replicas=0
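
Tip: if you want to restore the exact previous replica count later, capture it before scaling down; a small sketch, assuming a POSIX shell:

# REPLICAS=$(kubectl get deployment <deployment name> -n <namespace> -o jsonpath='{.spec.replicas}')

You can then pass --replicas=$REPLICAS in the scale-up command below instead of a hard-coded number.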

This terminates the pods. Once scaling is complete the replicas can be scaled back up as needed (to at least 1):

# kubectl scale deployment <deployment name> -n <namespace> --replicas=5

Check that the pods have started using:

# kubectl get pods -n <namespace>

Method 3: kubectl delete pod and kubectl delete replicaset

Each pod can be deleted individually if required:

# kubectl delete pod <pod_name> -n <namespace>

Doing this will cause the pod to be recreated, provided it is managed by a controller such as a ReplicaSet or Deployment: because Kubernetes is declarative, the controller will create a new pod to match the specified configuration.
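
To check whether a pod is controller-managed before deleting it, you can look at its owner references; a quick check, assuming the pod exists:

# kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.metadata.ownerReferences[0].kind}'

If this prints ReplicaSet (or another controller kind), deleting the pod will trigger recreation; empty output means the pod is unmanaged and will stay deleted.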

Deleting pods one by one can be time consuming. If your pods carry labels, you can delete several at once by selecting on a label:

# kubectl delete pod -l app=myapp -n <namespace>

Another approach, if there are lots of pods, is to delete the ReplicaSet instead:

# kubectl delete replicaset <name> -n <namespace>
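
If you don't know the ReplicaSet name, list the ReplicaSets in the namespace first. Note that when the ReplicaSet is owned by a Deployment, the Deployment will recreate it, and its pods, automatically:

# kubectl get replicaset -n <namespace>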

Method 4: kubectl get pod | kubectl replace

The pod to be replaced can be retrieved using kubectl get pod to fetch the YAML of the currently running pod, which is then piped to the kubectl replace command with the --force flag to achieve a restart. This is useful when the pod is running but no YAML file is available for it.

# kubectl get pod <pod_name> -n <namespace> -o yaml | kubectl replace --force -f -

Method 5: kubectl set env

Setting or changing an environment variable associated with the deployment will cause its pods to restart to pick up the change. The example below sets the environment variable DEPLOY_DATE to the current date, triggering a restart of the pods.

# kubectl set env deployment <deployment name> -n <namespace> DEPLOY_DATE="$(date)"
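
You can confirm the variable was set, and therefore that a new rollout was triggered, by listing the environment variables on the deployment:

# kubectl set env deployment <deployment name> -n <namespace> --list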

This should help you understand how we can restart pods, but please note that a restart is not always the solution. Sometimes we need to invest more time in analyzing the issue and fixing the root cause.

You can check our other Kubernetes troubleshooting documents on https://foxutech.com/category/kubernetes/k8s-troubleshooting/
