
Kubernetes Interview Questions & Answers

Here are some Kubernetes Interview Questions and Answers.

1. What is etcd, and what role does it play in a Kubernetes cluster?

etcd is a distributed key-value store that holds a Kubernetes cluster's configuration data and state, serving as the API server's reliable source of truth. In a production environment, it is recommended to run an etcd cluster with at least three nodes for high availability.

2. What is the difference between Kubernetes deployment and Kubernetes StatefulSets?

A Kubernetes Deployment is suitable for stateless applications, while a StatefulSet is intended for stateful applications such as databases. A Deployment handles simple scaling and zero-downtime rolling updates, whereas a StatefulSet adds guarantees around pod ordering and uniqueness, stable network identities, and persistent storage.

3. What is a Kubernetes ingress? How is it used?

A Kubernetes ingress is an API object that allows external traffic to be routed to the appropriate Kubernetes services based on the incoming request’s URL or host. It is used to expose HTTP and HTTPS routes to the Kubernetes cluster.
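
As a rough illustration, here is a minimal Ingress sketch. The host shop.example.com, the /api path, and the backend Service api-service are hypothetical placeholders, and an Ingress controller (for example NGINX) must be running in the cluster for any Ingress to take effect:

```yaml
# Hypothetical Ingress: routes HTTP requests for shop.example.com/api
# to an assumed backend Service named api-service on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # illustrative name
spec:
  rules:
  - host: shop.example.com         # assumed hostname
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service      # assumed backend Service
            port:
              number: 80
```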

4. How do you handle secrets and configuration management in Kubernetes?

Kubernetes provides a built-in Secrets API resource for securely storing sensitive information such as passwords, API keys, and other confidential data. Kubernetes also supports configuration management through ConfigMaps, which store non-sensitive configuration data as key-value pairs.
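
As a minimal sketch (names and values are illustrative), a Secret and a ConfigMap can be declared like this; note that Secret values are base64-encoded, not encrypted, by default:

```yaml
# Hypothetical Secret holding a database password (base64-encoded value).
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials             # illustrative name
type: Opaque
data:
  password: cGFzc3dvcmQ=           # base64 of "password"
---
# Hypothetical ConfigMap holding non-sensitive settings as key-value pairs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # illustrative name
data:
  LOG_LEVEL: info
  FEATURE_FLAG: "true"
```

Both can be consumed by pods as environment variables or mounted as files.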

5. What is a Kubernetes network policy? How does it work?

A Kubernetes network policy is a specification that defines how groups of pods can communicate with each other and with the outside world. It is used to enforce network traffic rules that restrict access to pods based on their labels or namespaces. Network policies use selectors and rules to allow or deny incoming or outgoing traffic between pods.
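
For example, a minimal sketch (the app=frontend and app=backend labels and the port are assumptions) that only allows frontend pods to reach backend pods on TCP 8080 could look like this; a CNI plugin that supports network policies (such as Calico or Cilium) is required for it to be enforced:

```yaml
# Hypothetical NetworkPolicy: only pods labeled app=frontend may reach
# pods labeled app=backend, and only on TCP port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend  # illustrative name
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```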

6. How do you automate Kubernetes deployments?

Kubernetes deployments can be automated using various tools such as Helm, Kubernetes Operators, or GitOps workflows. Helm is a package manager for Kubernetes that allows users to define, install, and upgrade Kubernetes applications. While Kubernetes Operators are a Kubernetes-native way of automating application management, GitOps relies on Git as the source of truth for defining and deploying Kubernetes applications.

7. How do you scale Kubernetes applications horizontally and vertically?

Scaling Kubernetes applications can be done horizontally or vertically. Horizontal scaling adds more replicas of the application, while vertical scaling increases the resources available to each individual pod. Kubernetes supports both, and either can be achieved by modifying the replica count or the resource requests and limits of a Deployment or StatefulSet.
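
As a sketch (the name, image, and numbers are illustrative), the same Deployment shows both dimensions: replicas controls horizontal scale, while the resources block sizes each pod vertically:

```yaml
# Hypothetical Deployment: replicas = horizontal scaling (more identical
# pods); the resources block = vertical scaling (bigger or smaller pods).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # illustrative name
spec:
  replicas: 5                      # horizontal: run five copies of the pod
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25          # assumed image
        resources:                 # vertical: per-pod CPU/memory sizing
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
```

Horizontal scaling can also be done imperatively with kubectl scale or automatically with the Horizontal Pod Autoscaler.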

8. What is the difference between a Kubernetes namespace and a label?

A Kubernetes namespace is a way to divide cluster resources between multiple users or teams. It provides a way to isolate resources and prevent naming conflicts. On the other hand, Kubernetes labels are key-value pairs attached to Kubernetes objects to help identify and organize them.

9. What is the difference between a Kubernetes Daemonset and a Kubernetes Statefulset?

Both Kubernetes DaemonSets and StatefulSets manage pods, but they serve different use cases. A DaemonSet runs a copy of a pod on every node (or on a selected set of nodes) in the cluster, while a StatefulSet deploys pods in a stable order with unique, persistent network identities and storage.

10. What is a storage class in Kubernetes? How is it used?

Answer: A storage class is a Kubernetes object that defines a class of storage, its provisioner and parameters, that a persistent volume claim (PVC) can request. Storage classes are used to dynamically provision storage resources based on the requirements of the application.
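
A minimal sketch, assuming a cluster on AWS with the EBS CSI driver installed (the provisioner name and parameters differ on other platforms), might look like this:

```yaml
# Hypothetical StorageClass backed by the AWS EBS CSI driver, plus a PVC
# that dynamically provisions a 10Gi volume from it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                   # illustrative name
provisioner: ebs.csi.aws.com       # assumed CSI driver
parameters:
  type: gp3
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                   # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi
```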

11. How do you handle storage in Kubernetes? What are the various types of storage you can use?

Answer: Kubernetes provides several options for handling storage, including local storage, hostPath volumes, network-attached storage (NAS), and cloud-based storage. Each option has its pros and cons depending on the specific use case.

12. What is a Kubernetes controller? Name a few different types of controllers.

Answer: A Kubernetes controller is a control loop that watches the desired state of a Kubernetes object and takes action to make the current state match it. Common controllers include the ReplicaSet, Deployment, StatefulSet, and DaemonSet controllers.

13. How does Kubernetes handle load balancing and network traffic routing?

Answer: Kubernetes uses a Service object to handle load balancing and network traffic routing. A Service provides a single stable IP address and DNS name for a set of pods and routes traffic to those pods based on a label selector.
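
As a minimal sketch (names and ports are illustrative), a ClusterIP Service that load-balances across all pods labeled app=web could look like this:

```yaml
# Hypothetical ClusterIP Service: a stable virtual IP and DNS name that
# distributes traffic across all pods matching the selector app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-service                # illustrative name
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80                       # port clients connect to
    targetPort: 8080               # port the pods listen on
```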

14. What is a Kubernetes secret, and how is it different from a Kubernetes configuration map?

Answer: A Kubernetes Secret is an object used to store sensitive information, such as a password or API key. A ConfigMap, on the other hand, is used to store non-sensitive configuration data that a pod or container can consume.

Vice-Versa: What is a Kubernetes ConfigMap? How is it different from a Kubernetes Secret?

Answer: A Kubernetes ConfigMap is an object used to store configuration data that a pod or container can consume. On the other hand, a Kubernetes Secret is used to store sensitive information, such as a password or API key.

15. How do you deploy a stateful application in Kubernetes?

Answer: Deploying a stateful application in Kubernetes typically means using a StatefulSet, which provides guarantees around the ordering and uniqueness of pod startup and termination, stable network identities, and per-pod persistent storage via volumeClaimTemplates.
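
A minimal sketch, assuming a hypothetical PostgreSQL workload (the image, storage size, and names are placeholders, and a matching headless Service named db-headless is assumed to exist):

```yaml
# Hypothetical StatefulSet: stable pod names (db-0, db-1, db-2), ordered
# startup, and one persistent volume per pod via volumeClaimTemplates.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                         # illustrative name
spec:
  serviceName: db-headless         # assumed headless Service
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16         # assumed image
        env:
        - name: POSTGRES_PASSWORD  # required by this image; use a Secret in practice
          value: example
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```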

16. What is a Kubernetes deployment rollout strategy? Name a few different types of strategies.

Answer: A Kubernetes Deployment rollout strategy controls how a deployed application is updated to a new version. The built-in strategies are RollingUpdate (the default) and Recreate; patterns such as Blue/Green and Canary are built on top of them using separate Deployments, Services, or additional tooling.
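
As a sketch (names and numbers are illustrative), the built-in RollingUpdate strategy is configured directly on the Deployment:

```yaml
# Hypothetical Deployment using RollingUpdate: during an update, at most
# one extra pod is created and at most one pod is unavailable at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # illustrative name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate            # the other built-in type is Recreate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25          # assumed image
```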

17. How does Kubernetes handle security and access control? What are some best practices for securing a Kubernetes cluster?

Answer: Kubernetes provides several built-in security features, such as role-based access control (RBAC), Pod Security Admission (the successor to the deprecated PodSecurityPolicy), and network policies. Best practices for securing a Kubernetes cluster include applying security updates regularly, using strong authentication and least-privilege access controls, and using network segmentation to separate resources.
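
As a minimal RBAC sketch (the namespace team-a and the user jane are assumptions), a read-only Role bound to a single user could look like this:

```yaml
# Hypothetical Role: read-only access to pods in the team-a namespace,
# granted to an assumed user via a RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a                # illustrative namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane                       # assumed user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```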

18. What is a Kubernetes Operator, and how is it used?

Answer: A Kubernetes Operator is a method for packaging, deploying, and managing Kubernetes-native applications. An Operator defines a set of custom resources and controllers to automate the management of complex applications.

19. What is the role of a Kubernetes Service Mesh, and why would you use one?

Answer: A Service Mesh is a dedicated infrastructure layer designed to manage service-to-service communication within a Kubernetes cluster. Service Meshes provide authentication, authorization, and observability features for distributed systems.

20. How do you manage resource requests and limits in Kubernetes?

Answer: Kubernetes provides several mechanisms for managing resources: per-container requests and limits in the pod spec, namespace-level LimitRange and ResourceQuota objects, and the Horizontal Pod Autoscaler for adjusting replica counts based on observed usage.
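
Requests and limits are set per container in the pod spec (see the scaling sketch earlier). For autoscaling, a minimal HorizontalPodAutoscaler sketch targeting an assumed Deployment named web might look like this (a metrics source such as metrics-server must be installed):

```yaml
# Hypothetical HPA: scales the assumed Deployment "web" between 2 and 10
# replicas to keep average CPU utilization around 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```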

21. What is a Kubernetes Helm Chart, and how can it help with application deployment?

Answer: A Kubernetes Helm Chart is a collection of templated Kubernetes manifests packaged together with metadata and default values. Helm Charts simplify application deployment and management by providing a standard way to package, version, and configure applications.
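
For orientation, the heart of a chart is its Chart.yaml plus the templated manifests; a minimal, hypothetical Chart.yaml might look like this (the name and versions are placeholders):

```yaml
# Hypothetical Chart.yaml for a Helm 3 chart; templates/ would hold the
# templated Kubernetes manifests and values.yaml the default settings.
apiVersion: v2
name: my-app                       # illustrative chart name
description: Example chart packaging a web application
type: application
version: 0.1.0                     # chart version
appVersion: "1.0.0"                # version of the packaged app
```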

22. What is the difference between a Kubernetes deployment and a Kubernetes Daemonset?

Answer: A Kubernetes Deployment manages a set of identical replicas of a defined application instance. It ensures that the desired number of replicas are running and monitors their health. Deployments manage the creation, update, and scaling of pods, which are the basic units in Kubernetes.

On the other hand, a Kubernetes DaemonSet ensures that all the nodes in a cluster run a copy of a specific pod. The DaemonSet controller creates a pod on each node in the cluster and then monitors it to ensure it stays healthy. DaemonSets are useful for deploying cluster-level applications such as log collectors and monitoring agents.

In summary, a Kubernetes Deployment is used to manage multiple identical replica pods, while a Kubernetes DaemonSet is used to ensure that a specific pod runs on all nodes in a cluster.
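
A minimal DaemonSet sketch (the name and image are illustrative) for a per-node log collector could look like this:

```yaml
# Hypothetical DaemonSet: runs one log-collector pod on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector              # illustrative name
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: agent
        image: fluentd:v1.16       # assumed log-collector image
```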

23. What is a Kubernetes CRD (Custom Resource Definition), and how can you use it to extend Kubernetes functionality?

Answer: A Custom Resource Definition (CRD) is used to add new resource types to the Kubernetes API that are not available in the core. It extends Kubernetes functionality by defining custom resources that can then be created and managed through the API just like built-in objects such as pods, services, and deployments.

Custom resources can represent any domain-specific object, and custom controllers can be written to manage these resources programmatically. For example, you could create a CRD for a custom application load balancer and then use a custom controller to manage load balancers declared that way.

In summary, a Kubernetes CRD allows you to create custom resources that extend Kubernetes functionality beyond its core features. You can use custom resources to create custom controllers that manage these resources programmatically.
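
As a sketch of the load-balancer example above (the group example.com, the kind AppLoadBalancer, and the spec fields are all hypothetical), a CRD could be declared like this; a custom controller watching these objects would then do the actual work:

```yaml
# Hypothetical CRD: adds a namespaced AppLoadBalancer resource to the API.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: apploadbalancers.example.com
spec:
  group: example.com               # assumed API group
  scope: Namespaced
  names:
    plural: apploadbalancers
    singular: apploadbalancer
    kind: AppLoadBalancer
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              targetService:       # illustrative custom field
                type: string
```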

24. What is PDB (Pod Disruption Budget)?

Answer: A Kubernetes administrator can create an object of kind: PodDisruptionBudget to protect an application's availability. It ensures that the minimum number of running pods specified by the minAvailable attribute in the spec is respected. This is useful during a node drain: the drain halts until the PDB can be satisfied, preserving the high availability (HA) of the application. A representative spec with minAvailable set to 2, meaning at least two pods must remain available (for example so a leader election can still succeed), is sketched below.
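
The original spec file is not reproduced here, so this is a reconstruction; the name and the app=zookeeper label are assumptions:

```yaml
# Hypothetical PodDisruptionBudget: at least 2 pods matching app=zookeeper
# must remain available during voluntary disruptions such as a node drain.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb                     # illustrative name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper               # assumed pod label
```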

25. What is an init container, and when can it be used?

Answer: Init containers run to completion before the main application containers in a pod start, letting you set the stage for the actual workload. Common uses include the following (a sketch of such a pod appears after this list):

Waiting for a dependency or simply for some time before starting the app container, for example with a command like sleep 60.

Cloning a Git repository into a shared volume so the app container can use it.
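
A minimal sketch of the first case (the images and the 60-second wait are illustrative):

```yaml
# Hypothetical pod: the init container runs and exits before the main
# application container starts.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init              # illustrative name
spec:
  initContainers:
  - name: wait-for-dependency
    image: busybox:1.36            # assumed utility image
    command: ["sh", "-c", "sleep 60"]
  containers:
  - name: app
    image: nginx:1.25              # assumed application image
```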

26. How do you troubleshoot a pod that is not getting scheduled?

Answer: In Kubernetes, the scheduler is responsible for placing pods onto nodes. Many factors can leave a pod unable to start; the most common is running out of resources on the available nodes. Use kubectl describe pod <pod-name> -n <namespace> to see why the pod has not started, and keep an eye on kubectl get events to see all events coming from the cluster.

27. How do you run a pod on a particular node?

Answer: Several methods are available; a manifest sketch follows the list below.

nodeName: specify the name of a node directly in the pod spec; the pod is placed on that exact node, bypassing the scheduler.

nodeSelector: assign a label to the node that has the special resources and reference the same label in the pod spec, so the pod is scheduled only onto matching nodes.

Node affinity: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution express hard and soft requirements, respectively, for running the pod on specific nodes, based on node labels. Node affinity is a more expressive alternative to nodeSelector.
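
A sketch combining the last two approaches (the disktype=ssd label and the names are assumptions; when both are specified, both conditions must hold):

```yaml
# Hypothetical pod restricted to nodes labeled disktype=ssd, expressed as
# both a simple nodeSelector and an equivalent hard node-affinity rule.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod                 # illustrative name
spec:
  nodeSelector:
    disktype: ssd                  # assumed node label
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
  containers:
  - name: app
    image: nginx:1.25              # assumed image
```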

28. Why should namespaces be used? How does using the default namespace cause problems?

Answer: Over time, relying on the default namespace alone becomes difficult, because you cannot get a good overview of all the applications you manage within the cluster as a whole. Namespaces allow applications to be organized into groups that make sense, such as a namespace for all monitoring applications and another for all security applications.

Additionally, namespaces can be used to manage Blue/Green environments, where each namespace contains its own version of an app while sharing resources with other namespaces (such as logging or monitoring). Namespaces also make it possible for multiple teams to use one cluster. Without them, teams sharing the same cluster can conflict: if two teams create an app with the same name, one team will override the other's, because Kubernetes prohibits two objects of the same kind with the same name within the same namespace.

29. Suppose a company built on a monolithic architecture handles numerous products. As the company expands in today's rapidly scaling industry, its monolithic architecture starts causing problems.

How do you think the company should shift from the monolith to microservices and deploy its services in containers?

Answer: Since the company's goal is to shift from its monolithic application to microservices, it can rebuild the application piece by piece, running the new services in parallel and switching configuration over in the background. Each of these microservices is then deployed on the Kubernetes platform. The company can start by migrating one or two services and monitoring them to make sure everything runs stably; once confident, it can migrate the rest of the application into the Kubernetes cluster.

30. Suppose a company wants to revise its deployment methods and build a platform that is much more scalable and responsive.

How do you think this company can achieve this to satisfy its customers?

Answer: To give millions of clients the digital experience they expect, the company needs a platform that is scalable and responsive, so that data reaches the client-facing website quickly. To achieve this, the company should move from its private data centers (if it uses any) to a cloud environment such as AWS and adopt a microservice architecture so it can start using Docker containers. With that base framework in place, it can adopt Kubernetes as the orchestration platform, enabling teams to build and deliver applications autonomously and quickly.

31. Suppose a company wants to optimize the distribution of its workloads by adopting new technologies.

How can the company achieve this distribution of resources efficiently?

Answer: The solution here is Kubernetes. Kubernetes schedules workloads so that cluster resources are used efficiently, and each application consumes only the resources it actually needs. With this container orchestration in place, the company can distribute its resources efficiently.

32. Consider a carpooling company that wants to increase its number of servers while simultaneously scaling its platform.

How do you think the company will deal with the servers and their installation?

Answer: The company can adopt containerization. Once it packages its applications into containers, it can use Kubernetes for orchestration and container monitoring tools like Prometheus to observe what happens inside the containers. Using containers this way also gives better capacity planning in the data center, because the abstraction between the services and the hardware they run on removes many of the previous constraints.

33. If a pod exceeds its memory limit, what signal is sent to the process?

Answer: SIGKILL: the container is terminated immediately and a new one is spawned, with the pod reporting an OOMKilled status. With a cgroup-based runtime (Docker, containerd, etc.), the operating system's OOM killer does the killing; Kubernetes only sets the cgroup limits and is not itself responsible for terminating the process. By contrast, during a normal pod shutdown `SIGTERM` is sent to PID 1 and Kubernetes waits for `terminationGracePeriodSeconds` (30 seconds by default) before sending `SIGKILL`; you can change that window by setting terminationGracePeriodSeconds in the pod spec. If your container eventually exits on its own, a long grace period is fine; if you want a graceful restart, it has to be handled inside the pod. If you do not want the process OOM-killed, do not set a memory `limit` on the pod; there is no way to disable this behavior for the whole node. Similarly, when the liveness probe fails, the container receives SIGTERM and then SIGKILL after the grace period. The sketch below shows the relevant fields.
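
A pod-spec sketch of the fields discussed (names, images, and sizes are illustrative):

```yaml
# Hypothetical pod: the memory limit triggers the kernel OOM killer
# (SIGKILL) if exceeded; terminationGracePeriodSeconds only governs how
# long Kubernetes waits after SIGTERM during a normal shutdown.
apiVersion: v1
kind: Pod
metadata:
  name: memory-limited-pod         # illustrative name
spec:
  terminationGracePeriodSeconds: 60   # default is 30 seconds
  containers:
  - name: app
    image: nginx:1.25              # assumed image
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 256Mi              # exceeding this => OOMKilled
```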

This question, along with the others above, is a regular feature in Kubernetes interviews, so be ready to tackle it with the approach described.

Check out our previous post on DevOps Interview Questions and Answers – FoxuTech.
