
Kubernetes Pods

So far in this series we have looked at Kubernetes in general; from this post onward we will walk through the individual Kubernetes objects. Let's start with Pods: this article explains what a Pod is, how it works, and how to create and delete one.

What is a Pod?

Pods are the smallest deployable computing units that Kubernetes allows you to create and manage.

A Pod is a group of one or more containers with shared storage and network resources, as well as a specification for how the containers should be run. The contents of a Pod are always co-located, co-scheduled, and run in a shared context. A Pod represents an application-specific “logical host”: it contains one or more coupled application containers.

A Pod is an abstraction that wraps one or more containers in your Kubernetes cluster. Each container in a Pod shares the Pod's resources, such as its IP address and storage.

Pod usage patterns

Pods can be used in two main ways:

  • Pods that run a single container. The simplest and most common Pod pattern is a single container per pod, where the single container represents an entire application. In this case, you can think of a Pod as a wrapper.
  • Pods that run multiple containers that need to work together. Pods with multiple containers are primarily used to support co-located, co-managed programs that need to share resources. These co-located containers might form a single cohesive unit of service—one container serving files from a shared volume while another container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.

Each Pod is meant to run a single instance of a given application. If you want to run multiple instances, you should use one Pod for each instance of the application. This is generally referred to as replication. Replicated Pods are created and managed as a group by a controller, such as a Deployment.

How to create a Pod?

Using the Pod Object

Create a YAML file named demo.yaml and paste the code below:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - image: demo/demo
    name: demo
    ports:
    - containerPort: 8080

Run the command below in your terminal to create the Pod in your cluster.

# kubectl apply -f demo.yaml
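
Once the command completes, you can confirm the Pod was created and inspect its details, assuming the Pod name demo from the manifest above:

# kubectl get pod demo
# kubectl describe pod demo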

Using the ReplicaSet

Create a YAML file named demo_rs.yaml and paste the code below:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo
  labels:
    app: demoApp
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: demo
        image: demo/demo

Run the command below in your terminal to create the ReplicaSet in your cluster. The ReplicaSet ensures that the specified number of demo pods is always running.

# kubectl apply -f demo_rs.yaml
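
To verify that the ReplicaSet created the expected number of Pods, you can list the ReplicaSet and the Pods matching its tier: backend selector:

# kubectl get rs demo
# kubectl get pods -l tier=backend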

Using Deployments

Create a YAML file named demo_deployment.yaml and paste the code below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demoApp
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: demo
        image: demo/demo

Run the following command in your terminal to create the Deployment in your cluster.

# kubectl apply -f demo_deployment.yaml

Deployment works similarly to ReplicaSet in that it ensures that a certain number of pods are available in the cluster. However, Deployment includes the option of rolling updates. This ensures that when you update your Deployment file, it reaches the desired state without causing downtime. For example, if you replace your container image with a new version in your YAML file, Deployment updates all the pods in your cluster one by one, so that if there is a bug with the new image, you can quickly roll back without causing downtime.
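
As a rough sketch of such a rolling update, assuming a new image tag demo/demo:v2 exists (it is not defined anywhere above), you could update the Deployment, watch the rollout, and roll back if something goes wrong:

# kubectl set image deployment/demo demo=demo/demo:v2
# kubectl rollout status deployment/demo
# kubectl rollout undo deployment/demo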

Essential features provided by pods to Kubernetes users:

Disposable

Pods in Kubernetes can be created and deleted as needed. For example, you can add pods to handle a spike in application traffic and then remove them when the traffic returns to normal.

Your Kubernetes controller can delete pods after they have completed their process execution, as well as when your cluster does not have enough resources to run the pod.

Run multiple containers

A Pod can wrap multiple containers that share the Pod's resources, such as its network and storage. Although a one-to-one relationship between a pod and a container is the most common pattern, there are times when it is necessary to run multiple containers in a pod.

Assume you deployed an application on your Kubernetes cluster and set up volume storage to save your application logs. However, viewing and analysing your application logs from your command-line terminal becomes difficult. Rather than manually exporting these logs to a log analyser tool, you can create a separate service that extracts logs from storage and sends them to a central log system for proper visualization and analysis. This can be accomplished with a multi-container pod and the shared volume storage and network feature, which allows multiple containers in the same pod to share the same storage and network.
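
A minimal sketch of such a multi-container Pod is shown below: the application writes logs to a shared emptyDir volume and a sidecar container reads them. The log-shipper image name and the mount path are placeholders for illustration, not part of the manifests above.

apiVersion: v1
kind: Pod
metadata:
  name: demo-with-logger
spec:
  volumes:
  - name: logs
    emptyDir: {}                  # shared scratch space that lives as long as the Pod
  containers:
  - name: demo
    image: demo/demo
    volumeMounts:
    - name: logs
      mountPath: /var/log/demo    # assumed log directory of the application
  - name: log-shipper
    image: demo/log-shipper       # hypothetical sidecar image
    volumeMounts:
    - name: logs
      mountPath: /var/log/demo
      readOnly: true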

Run in a node environment

Kubernetes pods are tightly coupled to the node environment: if the node fails, so do all the pods running on it. A node may also evict pods if they consume more memory than they are allowed, and if a pod tries to use more CPU than it is permitted, the node throttles that pod's CPU consumption. This is done to keep the node healthy for as long as possible.
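
You can influence this behaviour by declaring resource requests and limits on each container. The values below are purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: demo/demo
    resources:
      requests:
        memory: "128Mi"   # what the scheduler reserves on the node
        cpu: "250m"
      limits:
        memory: "256Mi"   # exceeding this can get the container killed
        cpu: "500m"       # exceeding this gets the container throttled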

Pods and Nodes Working Together

Nodes are either physical or virtual computing machines that run the processes of your pods. Managing your applications on multiple nodes independently can be a herculean task. Pods are an abstraction of your containerized applications that run on Kubernetes cluster nodes. To improve efficiency and performance, Kubernetes uses the control plane to distribute workloads to available nodes in the cluster.

Pods and Containers Working Together

Containers are applications that come with their execution environments packaged together. They bundle all of the dependencies, libraries, and binaries required for your application to function, abstracting away the differences in operating system distribution and the underlying infrastructure.

Pods were introduced in part so that Kubernetes would not be tied to a single container vendor. They abstract the containers in the cluster, allowing you to package your application with any container runtime supported by Kubernetes.

Pods Networking

Every pod in the Kubernetes cluster has its own IP address. The containers in a pod share network resources such as IP addresses and ports, and they can communicate with one another via localhost. However, if a container needs to communicate with another container in a different pod, it is best to use Kubernetes services.
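
For example, a minimal ClusterIP Service, assuming the tier: backend label and containerPort 8080 used in the manifests above, would give the demo Pods a stable address:

apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    tier: backend
  ports:
  - protocol: TCP
    port: 80           # port exposed by the Service
    targetPort: 8080   # containerPort of the demo Pods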

Storage

A container's file system is ephemeral, which means that anything written to it is only available while the container is running. If you stop and restart your container, everything you've saved will be lost. For storage that does not depend on the container's lifecycle, use volumes.

Some volume types are available only until the Pod dies, whereas others persist beyond it. Kubernetes supports a variety of volume types.
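
As a sketch, a Pod can mount a volume backed by a PersistentVolumeClaim so that its data survives container restarts. The claim name demo-pvc and the mount path below are hypothetical and would need to exist in your cluster:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: demo/demo
    volumeMounts:
    - name: data
      mountPath: /var/lib/demo      # assumed data directory
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc           # hypothetical pre-created claim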

Pod lifecycle

Pods are ephemeral. They are not designed to run forever, and when a Pod is terminated it cannot be brought back. In general, Pods do not disappear until they are deleted by a user or by a controller.


Pods do not “heal” or repair themselves. For example, if a Pod is scheduled on a node which later fails, the Pod is deleted. Similarly, if a Pod is evicted from a node for any reason, the Pod does not replace itself.

Each Pod has a PodStatus API object, represented by the Pod's status field. Pods publish their phase in the status.phase field. The phase is a high-level summary of the Pod's current state.

When you run kubectl get pod to inspect a Pod running on your cluster, the Pod will be in one of the following phases (you can also query the phase directly, as shown in the example after this list):

  • Pending: Pod has been created and accepted by the cluster, but one or more of its containers are not yet running. This phase includes time spent being scheduled on a node and downloading images.
  • Running: Pod has been bound to a node, and all of the containers have been created. At least one container is running, is in the process of starting, or is restarting.
  • Succeeded: All containers in the Pod have terminated successfully. Terminated Pods do not restart.
  • Failed: All containers in the Pod have terminated, and at least one container has terminated in failure. A container “fails” if it exits with a non-zero status.
  • Unknown: The state of the Pod cannot be determined.

Additionally, PodStatus contains an array called PodConditions, represented in the Pod manifest as conditions. Each condition has a type field and a status field; the conditions indicate more specifically what within the Pod is contributing to its current status (an example of inspecting them follows the list below).

The type field can contain PodScheduled, Ready, Initialized, and Unschedulable. The status field corresponds with the type field, and can contain True, False, or Unknown.

Conditions

  1. PodScheduled: This means the pod has been scheduled for a node.
  2. Initialized: This means that init containers have been completed successfully. These are containers that run before the main app containers. They must run completely before the next container can run. If the init container fails, it’s restarted until it succeeds.
  3. ContainersReady: This shows that all the containers are in a ready state.
  4. Ready: The pod is ready to receive and process requests.
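
The conditions array can be inspected in the same way as the phase; for the demo Pod this would be:

# kubectl get pod demo -o jsonpath='{.status.conditions}'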

You can read more about Kubernetes pod creation from our previous article.

Container probes

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet can invoke one of the following actions (a minimal example follows the list below):

  • ExecAction (performed with the help of the container runtime)
  • TCPSocketAction (checked directly by the kubelet)
  • HTTPGetAction (checked directly by the kubelet)
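
As a sketch, a liveness probe using HTTPGetAction on the demo container might look like this; the /healthz path and the timing values are assumptions, not something defined earlier in this post:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: demo/demo
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz        # assumed health endpoint of the application
        port: 8080
      initialDelaySeconds: 5  # wait before the first probe
      periodSeconds: 10       # probe every 10 seconds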

You can read more about probes in the Pod Lifecycle documentation.

Termination

It is preferable to gracefully terminate a pod when it is no longer required rather than abruptly terminate it. This gives the pod time to clean up before it is completely deleted. By default, Kubernetes provides a graceful termination period of thirty seconds from the time the termination request is initiated; you can override it with the --grace-period flag:

# kubectl delete pod my-app --grace-period=45
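
Alternatively, the grace period can be set in the Pod spec itself; this snippet is illustrative and would be added to the spec of any of the manifests above:

spec:
  terminationGracePeriodSeconds: 45   # overrides the default 30 seconds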

You can read more about pod restart from our previous post.

I hope this helps; we will cover other Kubernetes objects in upcoming posts. You can also read about Kubernetes probes and Kubernetes requests and limits.
