A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.
A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
Docker is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well.
Pods in a Kubernetes cluster can be used in two main ways:
- Pods that run a single container. The “one-container-per-Pod” model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly.
- Pods that run multiple containers that need to work together. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service: one container serving files from a shared volume to the public, while a separate “sidecar” container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity (see the sketch below).
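To make the sidecar pattern concrete, here is a minimal sketch of such a pod. The pod name web-sidecar-demo, the container names web and refresher, and the shared emptyDir volume are hypothetical, chosen only for illustration:
apiVersion: v1
kind: Pod
metadata:
  name: web-sidecar-demo      # hypothetical name
spec:
  volumes:
  - name: shared-data         # volume both containers mount
    emptyDir: {}
  containers:
  - name: web                 # serves the shared files to the public
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: refresher           # sidecar that periodically rewrites the files
    image: centos:7
    command: ["/bin/bash", "-c", "while true; do date > /data/index.html; sleep 60; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data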
Why a Pod Instead of a Single Container?
While it would seem simpler to just deploy a single container directly, there are good reasons to add the layer of abstraction represented by the Pod. A container is an existing entity that refers to a specific thing. That specific thing might be a Docker container, but it might also be a rkt container or a VM managed by Virtlet, and each of these has different requirements.
What’s more, to manage a container, Kubernetes needs additional information, such as a restart policy, which defines what to do with a container when it terminates, or a liveness probe, which defines an action to detect if a process in a container is still alive from the application’s perspective, such as a web server responding to HTTP requests.
Instead of overloading the existing “thing” with additional properties, the Kubernetes architects decided to introduce a new entity, the Pod, that logically contains (wraps) one or more containers that should be managed as a single entity.
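As an example of what this extra information looks like, here is a minimal sketch of a Pod spec carrying both a restart policy and a liveness probe. The name liveness-demo is hypothetical; nginx answers HTTP on / out of the box, so the probe can target that path:
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo         # hypothetical name
spec:
  restartPolicy: Always       # what to do when a container terminates
  containers:
  - name: app
    image: nginx
    livenessProbe:            # detects whether the web server still answers HTTP
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10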
How Does Kubernetes Allow More Than One Container in a Pod?
Containers in a Pod run on a “logical host”: they use the same network namespace (in other words, the same IP address and port space) and the same IPC namespace, and they can also share volumes. These properties make it possible for the containers to communicate efficiently and ensure data locality. Pods also enable you to manage several tightly coupled application containers as a single unit.
So if an application needs several containers running on the same host, why not just build a single container with everything you need? First, you are likely to violate the “one process per container” principle. This matters because with multiple processes in the same container it is harder to troubleshoot: logs from different processes are mixed together, and it is harder to manage the processes’ lifecycle, for example taking care of “zombie” processes when their parent process dies. Second, using several containers for an application is simpler and more transparent, and it decouples software dependencies. Finally, more granular containers can be reused between teams.
To launch a pod using the container image nginx, serving HTTP on port 80, execute:
# kubectl run nginx-app --image=nginx --port=80
We can now see that the pod is running:
# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginx-app-6cc6d7964d-rvkwr   1/1     Running   0          1m
# kubectl describe pod nginx-app-6cc6d7964d-rvkwr | grep IP:
IP:           172.17.0.3
From within the cluster this pod is accessible via the pod IP 172.17.0.3, which we’ve learned from the kubectl describe command above:
[cluster] $ curl 172.17.0.3:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Note that kubectl run creates a deployment, so in order to get rid of the pod you have to execute
# kubectl delete deployment nginx-app
Using a configuration file
You can also create a pod from a configuration file. In this case the pod runs the familiar nginx image from above along with a generic CentOS container, in a pod named testkubee:
# vi pods.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testkubee
spec:
  containers:
  - name: test
    image: nginx
    ports:
    - containerPort: 80
  - name: shell
    image: centos:7
    command:
    - "/bin/bash"
    - "-c"
    - "sleep 1000"
# kubectl create -f pods.yaml
# kubectl get pods
NAME READY STATUS RESTARTS AGE
testkubee 2/2 Running 0 14m
Now we can exec into the CentOS container and access the nginx on localhost:
# kubectl exec testkubee -c shell -i -t -- bash
[root@testkubee /]# curl -s localhost:80 | grep title
<title>Welcome to nginx!</title>
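Since both containers share the pod’s network namespace, the shell container should report the very same IP address that Kubernetes assigned to the pod. A quick sanity check (the actual address will differ in your cluster):
# kubectl get pod testkubee -o wide
# kubectl exec testkubee -c shell -- hostname -i
Both commands print the same pod IP.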
Specify the resources field in the pod to influence how much CPU and/or RAM a container in a pod can use (here: 64 MiB of RAM and half a CPU core):
# vim pods_cpu.yaml
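The file could look like the following minimal sketch; the container name constrained and the nginx image are placeholders, while constraintpod matches the describe command below:
apiVersion: v1
kind: Pod
metadata:
  name: constraintpod
spec:
  containers:
  - name: constrained        # hypothetical name
    image: nginx
    resources:
      limits:
        memory: "64Mi"       # 64 MiB of RAM
        cpu: "500m"          # half a CPU core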
# kubectl create -f pods_cpu.yaml
# kubectl describe pod constraintpod
Pods give Kubernetes a great deal of flexibility for orchestrating how containers behave and how they communicate with each other: they can share file volumes, communicate over the network, and even communicate using IPC.