Missing required field “selector” in Kubernetes - Troubleshooting

If you are using a recent version of Kubernetes and your manifests were created before Kubernetes 1.16, you may run into the error missing required field “selector”. This mostly happens on a Kubernetes Deployment, DaemonSet, or other workload resources when you move from an old version to a newer one.

To fix this issue, add the spec.selector field to your YAML if it is not present, or provide a proper value if it is empty.
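
Before editing the manifest, you can confirm that the field is required by your cluster’s API schema using kubectl explain (the exact output wording may vary between versions):

kubectl explain deployment.spec.selector

In recent versions, running kubectl explain deployment.spec also marks selector as required.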

Let’s take an example to understand this. Below is an old YAML file which used to work fine in older Kubernetes versions, because back then a default value was automatically set for the spec.selector field. That is no longer the case: spec.selector no longer defaults to .spec.template.metadata.labels and needs to be set explicitly.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: web-server
        tier: backend

As you can see, we have not used the spec.selector field in the above YAML file. We are also using the extensions/v1beta1 API version, which is no longer served in recent versions of Kubernetes (we will create a separate post on how to fix that).
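
For reference, if you only bump the apiVersion to apps/v1 but keep the rest of this manifest unchanged, kubectl apply rejects it with a validation error along these lines (the exact wording differs between versions):

error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec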

Let’s focus on the spec.selector field. The above YAML file creates a Kubernetes Deployment. You will see this issue most often with a Deployment, but the solution is simple.

We will change the above YAML to:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 2
  selector:
    matchLabels:
      name: web-server
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: web-server
        tier: backend
...
...

Notice the following section added to the spec field in the above YAML:

  selector:
    matchLabels:
      name: web-server

That is all that is required to solve this issue. The matchLabels field must contain the key-value pair that we specify in the template field. If you label your Pods differently, say component: serviceName or k8s-app: serviceName, then that same key-value pair should be provided under matchLabels in the spec.selector field.
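
For instance, here is a hypothetical fragment for a service labelled with k8s-app instead of name; note that the same key-value pair appears in both places:

spec:
  selector:
    matchLabels:
      k8s-app: web-server
  template:
    metadata:
      labels:
        k8s-app: web-server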

The selector field defines how the DaemonSet or Deployment finds which Pods to manage. In the above YAML, we simply used a label that is defined in the Pod template (name: web-server). But more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule, as shown in the sketch below.
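
For example, a selector can use matchExpressions instead of (or alongside) matchLabels. Below is a sketch of a rule set that the Pod template above (name: web-server, tier: backend) would still satisfy:

  selector:
    matchExpressions:
      - key: name
        operator: In
        values:
          - web-server
      - key: tier
        operator: Exists

Also keep in mind that the selector of an apps/v1 Deployment is immutable once the object is created, so choose these rules carefully.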

Please note that as Kubernetes is still evolving, we may see similar issues here and there. We should keep track of the changes and modify the fields according to the Kubernetes version we are using. Another recommendation is to upgrade Kubernetes as early as possible, both to stay aligned with security compliance and so that changes like this can be handled as soon as they appear. Otherwise, we may have to do a lot of work and perform many changes at once.
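
One tool that can help with such migrations is the kubectl convert plugin (installed separately from kubectl itself), which rewrites a manifest to a newer API version. The file name below is just a placeholder for your own manifest; review the converted output before applying it:

kubectl convert -f web-server-old.yaml --output-version apps/v1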

Keep following us; we are preparing a series of Kubernetes troubleshooting guides and will keep posting regularly to help everyone.

Check out our Kubernetes Troubleshooting Series at https://foxutech.com/category/kubernetes/k8s-troubleshooting/

You can follow us on social media to get short tips regularly.
