Friday, March 29, 2024

Kubernetes Pod Creation – What Happens When We Create a Pod?

Recently we have been discussing how to troubleshoot issues on Kubernetes. As part of that series, today we will walk through the complete set of operations that happen during Kubernetes pod creation, to understand what happens when we deploy a pod on its own or behind a Service. This is one of the most important topics to understand when troubleshooting or identifying what went wrong in your cluster.

When do pods get created? Whenever we execute a rolling update, scale a deployment, roll out a new release, or run a Job or CronJob. Have you ever thought about, or been asked, what happens when you hit kubectl apply? Well, let's see what happens, with some scenarios along the way.

For that, let's take one sample pod.yaml for this tutorial and walk through it.

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
spec:
  containers:
    - name: webserver
      image: nginx
      ports:
        - name: webserver
          containerPort: 80

As we know, we can apply this YAML with the following command:

kubectl apply -f pod.yaml

What happens now? Let's see.

You can also watch this post as a video.

Store the state in etcd

When you apply the file, the definition is received and inspected by the API server, which stores it in etcd at the same time. The pod is also added to the scheduler's queue.

Once it is in the queue, kube-scheduler inspects the pod definition and collects the details specified in it, such as resource requests. Based on those, it picks the best node to run the pod, using filtering (predicates) and scoring (priorities).
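The scheduler's two phases can be sketched as a simple filter-then-score loop. Note that the node names, capacities, and scoring rule below are made up for illustration; they are not the real kube-scheduler plugins:

```python
# Illustrative sketch of kube-scheduler's filter-then-score cycle.
# Node data and the scoring function are invented for this example.

nodes = {
    "node-a": {"free_cpu_millicores": 500,  "free_memory_mib": 256},
    "node-b": {"free_cpu_millicores": 2000, "free_memory_mib": 4096},
    "node-c": {"free_cpu_millicores": 1500, "free_memory_mib": 2048},
}

pod_request = {"cpu_millicores": 1000, "memory_mib": 512}

def filter_nodes(nodes, request):
    """Filtering ("predicates"): drop nodes that cannot fit the pod."""
    return {
        name: caps for name, caps in nodes.items()
        if caps["free_cpu_millicores"] >= request["cpu_millicores"]
        and caps["free_memory_mib"] >= request["memory_mib"]
    }

def score_node(caps):
    """Scoring ("priorities"): here, simply prefer the most free CPU."""
    return caps["free_cpu_millicores"]

feasible = filter_nodes(nodes, pod_request)
best = max(feasible, key=lambda name: score_node(feasible[name]))
print(best)  # node-b: it passes the filter and has the most free CPU
```

The real scheduler runs many such filter and score plugins (taints, affinity, spreading, and so on), but the shape of the decision is the same.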

At last, the pod is marked as scheduled, a node is assigned, and the state is stored in etcd. This is not the end; we have only crossed phase 1. So far, everything has happened on the control plane (master node), and only the desired state has been stored in the database.

Update pod details in etcd

Well, what happens next?

The kubelet: the Kubernetes agent

As we all know, the kubelet's job is to poll the control plane (master node) for updates.

If there is a pod to create on its node, it collects the details and creates it.

Again, we are not done yet.
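The kubelet's polling behavior can be sketched roughly like this; `fake_api_server`, `create_pod`, and the pod data are stand-ins invented for this example, not real kubelet code:

```python
# Rough sketch of the kubelet's loop: ask the control plane for pods,
# and create any pod bound to this node that is not running yet.
# fake_api_server and create_pod are invented stand-ins.

def fake_api_server(node_name):
    """Pretend API server: returns the current pod bindings."""
    return [
        {"name": "pod-demo",  "node": "node-b", "phase": "Pending"},
        {"name": "other-pod", "node": "node-c", "phase": "Pending"},
    ]

running = set()

def create_pod(pod):
    # In reality this step is delegated to the CRI, CNI, and CSI.
    running.add(pod["name"])

def sync_node(node_name):
    for pod in fake_api_server(node_name):
        if pod["node"] == node_name and pod["name"] not in running:
            create_pod(pod)

sync_node("node-b")
print(sorted(running))  # ['pod-demo']
```

The real kubelet uses a watch on the API server rather than a naive loop, but the effect is the same: it only acts on pods bound to its own node.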

The kubelet doesn't create the Pod by itself. Instead, it delegates the work to three other components:

  1. The Container Runtime Interface (CRI) – the component that creates the containers for the Pod.
  2. The Container Network Interface (CNI) – the component that connects the containers to the cluster network and assigns IP addresses.
  3. The Container Storage Interface (CSI) – the component that mounts volumes in your containers.

If you have worked with Docker, you may be aware of how containers get created; the same is done via the Container Runtime Interface (CRI), much like the command below:

docker run -d <image-name>

The Container Network Interface (CNI) is always interesting because it is in charge of:

  1. Generating a valid IP address for the Pod.
  2. Connecting the container to the network.

When the Container Network Interface finishes its job, the Pod is connected to the network with a valid IP address assigned.

Good, is this the end? The pod has been created and an IP assigned, so you may expect traffic to be served now, but there is a trick. These two operations were done on the node, on the kubelet's side; as far as the control plane (master node) knows, the pod is still being created. So far, all the details are known only to the kubelet, so no traffic will be routed by the control plane.

It is the kubelet's role to collect all the details of the Pod, such as the IP address, and report them back to the control plane, where they are stored in etcd.

If you query etcd, it will now show the status of the pod and its IP address details.

kubelet to control plane

Good. The Pod is created and ready to use.

Again, this is the end of the journey if the pod is not part of any Service. If the pod is part of a Service, there are still some additional steps to complete. What are they? Let's see.

Services

Before we proceed, let's understand a bit about Services. When we create a Service, we are mainly concerned about the following two pieces of information:

  1. The selector, which is used to specify the Pods that will receive the traffic.
  2. The targetPort – the port used by the Pods to receive traffic.

A typical YAML definition for the Service looks like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - port: 80
    targetPort: 81
  selector:
    name: app-demo

When you apply the Service to the cluster with kubectl apply, Kubernetes finds all the Pods that have the same label as the selector (name: app-demo) and collects their IP addresses, but only if they have passed the readiness probe.
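Because only Pods that pass the readiness probe become endpoints, it is worth defining one explicitly. A minimal sketch for the nginx pod from earlier might look like this (the probe path and timings are illustrative choices, not required values):

```yaml
# Illustrative readinessProbe for the pod-demo container above.
spec:
  containers:
    - name: webserver
      image: nginx
      ports:
        - name: webserver
          containerPort: 80
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```

Until this probe succeeds, the Pod's IP is not added to the Service's endpoints and it receives no traffic.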

Once it has the IP details, it stores each one in etcd as an endpoint by concatenating the IP address and the port.

Let's assume the IP is 192.0.0.1 and the targetPort is 81. Kubernetes concatenates the two values and calls the result an endpoint: 192.0.0.1:81.
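The endpoint construction can be sketched as: take each ready Pod that matches the selector, and join its IP with the targetPort. The pod list below is invented for illustration:

```python
# Sketch of how endpoints are derived from ready, matching Pods.
# The pod list is made up; the targetPort matches the Service above.

pods = [
    {"ip": "192.0.0.1", "labels": {"name": "app-demo"}, "ready": True},
    {"ip": "192.0.0.2", "labels": {"name": "app-demo"}, "ready": False},
    {"ip": "192.0.0.3", "labels": {"name": "other"},    "ready": True},
]

selector = {"name": "app-demo"}
target_port = 81

endpoints = [
    f'{pod["ip"]}:{target_port}'
    for pod in pods
    if pod["ready"]
    and all(pod["labels"].get(k) == v for k, v in selector.items())
]
print(endpoints)  # ['192.0.0.1:81'] - only the ready, matching Pod
```

Note how the not-ready Pod and the Pod with a different label are both excluded, exactly as described above.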

The Endpoints object is a real object in Kubernetes, and for every Service, Kubernetes automatically creates one. The Endpoints object collects all the IP addresses and ports from the Pods.

The Endpoints object is updated with a new list of endpoints when:

  1. A Pod is created.
  2. A Pod is deleted.
  3. A label is modified on the Pod.

So it is the kubelet's job to report every change to the control plane, and Kubernetes updates all the endpoints to reflect the change.

service details on etcd

Are you ready to start using your Pod?

Definitely not.

Consuming endpoints

Endpoints are used by several components in Kubernetes. For example, kube-proxy uses the endpoints to set up iptables rules on the nodes, and it repeats this task whenever there are new changes.
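As a rough illustration of what those iptables rules do: for n endpoints, kube-proxy's iptables mode emits a chain of rules where rule i matches with probability 1/(n - i), so each backend ends up with an equal 1/n share of the traffic. A small sketch of that arithmetic (numbers only, not real iptables output):

```python
# For n backends, iptables-mode kube-proxy chains rules so that rule i
# fires with probability 1/(n - i). This computes those probabilities.

def rule_probabilities(n):
    return [1 / (n - i) for i in range(n)]

probs = rule_probabilities(3)
print(probs)  # roughly [0.33, 0.5, 1.0]
# Rule 0 catches 1/3 of traffic; of the remaining 2/3, rule 1 catches
# half (another 1/3); rule 2 takes the rest, so each backend gets 1/3.
```

This is why adding or removing a single endpoint forces kube-proxy to rewrite the whole chain of rules for that Service.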

Like kube-proxy, the Ingress controller is another component that uses the same list of endpoints.

You can see in the Ingress manifest below that we specify the Service as the destination:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-demo
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: service-demo
            port:
              number: 80
        path: /
        pathType: Prefix

In reality, the traffic is not routed to the Service.

Instead, the Ingress controller sets up a subscription to be notified every time the endpoints for that Service change. With that, the Ingress routes the traffic directly to the Pods, skipping the Service.

As you can imagine, every time there is a change to an Endpoints object, the Ingress controller retrieves the new list of IP addresses and ports and reconfigures itself to include the new Pods.
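The subscription pattern can be sketched as a tiny observer: components register a callback, and every endpoint change triggers a reconfiguration. All the names here are invented for illustration:

```python
# Minimal observer sketch: the "store" notifies subscribers (e.g. an
# ingress controller) whenever a Service's endpoint list changes.
# EndpointStore and the callback below are invented for illustration.

class EndpointStore:
    def __init__(self):
        self.endpoints = {}    # service name -> list of "ip:port"
        self.subscribers = []  # callbacks invoked on every change

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, service, endpoints):
        self.endpoints[service] = endpoints
        for callback in self.subscribers:
            callback(service, endpoints)

backends = {}  # what the "ingress controller" currently routes to

store = EndpointStore()
store.subscribe(lambda svc, eps: backends.update({svc: eps}))

store.update("service-demo", ["192.0.0.1:81"])
store.update("service-demo", ["192.0.0.1:81", "192.0.0.4:81"])
print(backends["service-demo"])  # ['192.0.0.1:81', '192.0.0.4:81']
```

In a real cluster this is done with a watch on the API server, but the shape is the same: no polling by the consumer, just notifications on change.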

ingress reads the endpoint

CoreDNS, the DNS component in the cluster, is another example.

If you use a headless Service, CoreDNS has to subscribe to changes to the endpoints and reconfigure itself every time an endpoint is added or removed.
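A headless Service is simply one with clusterIP set to None. A minimal sketch, reusing the selector from the Service above (the name is an invented example):

```yaml
# Illustrative headless Service: no cluster IP is allocated, so DNS
# queries for this name return the individual Pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None
  ports:
  - port: 80
    targetPort: 81
  selector:
    name: app-demo
```

Because there is no virtual IP to load-balance through, DNS is the only way clients discover the Pods, which is why CoreDNS must track the endpoints so closely.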

The same endpoints are consumed by service meshes such as Istio or Linkerd, by cloud providers to create Services of type LoadBalancer, and by countless operators.

You must remember that several components subscribe to changes to endpoints, and they might receive notifications about endpoint updates at different times.

Well, now we are really done. And yes, all of this happens within about a second of applying the manifest and seeing the pod running.


You can follow us on social media to get regular updates.
