So far, we have seen how to set up an Azure Kubernetes Service cluster and deploy SonarQube on AKS, and we have started exploring GitOps with AKS. Before we go further into GitOps and other application deployments, let's discuss monitoring. For this, we will take one of the best and most popular open-source metrics monitoring solutions and see how to set up Prometheus on Azure Kubernetes Service.
Prometheus
Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
Prometheus is a time-series database with a UI and a sophisticated query language (PromQL). Prometheus can scrape metrics — counters, gauges and histograms — over HTTP using a plaintext format or a more efficient protocol.
Prometheus’ main distinguishing features as compared to other monitoring systems are:
- a multi-dimensional data model (time series defined by metric name and a set of key/value dimensions)
- a flexible query language to leverage this dimensionality
- no dependency on distributed storage; single server nodes are autonomous
- time-series collection happens via a pull model over HTTP
- pushing time series is supported via an intermediary gateway
- targets are discovered via service discovery or static configuration
- multiple modes of graphing and dashboarding support
- support for hierarchical and horizontal federation
When the /metrics endpoint is embedded within an existing application, it's referred to as instrumentation; when the /metrics endpoint is served by a stand-alone process, the Prometheus project calls that an exporter.
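To make this concrete, a scrape of a /metrics endpoint returns plaintext exposition output like the following sketch (metric names and values here are illustrative):

```
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="400"} 3

# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 2.62144e+07
```

Each line is a sample for a time series: the metric name, an optional set of key/value labels, and the current value.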
Main components of Prometheus
- Prometheus server
- Client libraries
- Push gateway
- Exporters
- Alert manager
- Various support tools
Operators and Custom Resource Definitions
When you install the Prometheus operator with Helm, you get the operator and a set of custom resource definitions (CRDs).
An Operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling.
To be able to make the most of Kubernetes, you need a set of cohesive APIs to extend in order to service and manage your applications that run on Kubernetes. You can think of Operators as the runtime that manages this type of application on Kubernetes.
Prometheus operator
The Prometheus operator uses three CRDs to greatly simplify the configuration required to run Prometheus in your Kubernetes clusters. These three types are:
- Prometheus, which defines a desired Prometheus deployment. The Operator ensures at all times that a deployment matching the resource definition is running.
- ServiceMonitor, which declaratively specifies how groups of services should be monitored. The Operator automatically generates Prometheus scrape configuration based on the definition.
- Alertmanager, which defines a desired Alertmanager deployment. The Operator ensures at all times that a deployment matching the resource definition is running.
When you deploy a Prometheus resource, the Prometheus operator ensures a new Prometheus server instance is made available in your cluster. The Prometheus resource definition has a serviceMonitorSelector that specifies which ServiceMonitor resources should be used by that instance of the Prometheus server. A ServiceMonitor specifies how the Prometheus server should monitor a service or a group of services; the Prometheus operator generates and applies the required configuration to the Prometheus server.
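As an illustration, a minimal sketch of the two resources and how the selector links them might look like this (all names and labels below are hypothetical):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
  namespace: monitoring
spec:
  # Pick up every ServiceMonitor labelled team: frontend
  serviceMonitorSelector:
    matchLabels:
      team: frontend
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: monitoring
  labels:
    team: frontend        # matched by the selector above
spec:
  # Monitor services labelled app: example-app
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web             # named port on the Service
    interval: 30s         # scrape every 30 seconds
```

The operator watches both resources and renders them into the Prometheus scrape configuration, so you never edit prometheus.yml by hand.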
When you install the Prometheus operator in your cluster you get the operator and the above-mentioned CRDs, but you will not get any Prometheus server or ServiceMonitor instances by default. To start monitoring, all you have to do is deploy a Prometheus resource with the right serviceMonitorSelector and deploy a ServiceMonitor resource.
Prerequisites
Before we proceed, this post assumes the following items are in place. If not, please use the reference links to set them up.
- An AKS cluster (if you don't have a cluster, you need Terraform and the Azure CLI to create one; please refer to Create Azure Kubernetes Service using Terraform)
- Helm 3 installed – follow this page for more details: Helm 3 – FoxuTech
Let’s Start Deploying on AKS
Before we deploy, let’s create a namespace so that Prometheus runs separately. Run the following command:
# kubectl create namespace monitoring
Once the namespace is ready, run the following commands to add the public Prometheus community repo to Helm and update it.
# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# helm repo update
To install, run the following command:
# helm install prometheus-foxutech prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
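If you want to customize the installation, the chart accepts a values file. A minimal sketch of some common overrides might look like this (the keys below are standard kube-prometheus-stack values, but verify them against your chart version):

```yaml
# values.yaml – example overrides for kube-prometheus-stack
grafana:
  adminPassword: "changeme"   # override the default Grafana admin password
prometheus:
  prometheusSpec:
    retention: 7d             # keep metrics for 7 days instead of the default
```

You would then pass this file to the install command with `--values values.yaml`.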
Once the installation is complete, run the following command to check the resources that are part of the release:
# kubectl -n monitoring get all -l "release=prometheus-foxutech"
You can use the following commands to check the resource types individually.
# kubectl get pods -n monitoring
# kubectl get prometheus -n monitoring
# kubectl get prometheusrules -n monitoring
# kubectl get servicemonitor -n monitoring
# kubectl get cm -n monitoring
# kubectl get secrets -n monitoring
Expose Prometheus
There are two ways to achieve this: port-forwarding, or changing the service type from ClusterIP to LoadBalancer. Please note, the secure and preferable solution is port-forwarding, so make sure you use port-forward (or a properly secured Ingress) for production use.
In this example, let’s see how to do both, starting with the LoadBalancer approach.
Expose Services to Public IP’s
Get the list of associated services using the following command.
# kubectl get svc -n monitoring
In my case the following are the service names; please change the service names to match your release.
Edit the Prometheus service and change the service type from ClusterIP to LoadBalancer:
# kubectl edit svc prometheus-foxutech-kube-p-prometheus -n monitoring
...
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.0.228.101
  clusterIPs:
  - 10.0.228.101
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http-web
    nodePort: 30974
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app.kubernetes.io/name: prometheus
    prometheus: prometheus-foxutech-kube-p-prometheus
  sessionAffinity: None
  type: LoadBalancer
...
Follow the same instructions for the Alertmanager and Grafana services.
# kubectl edit svc prometheus-foxutech-kube-p-alertmanager -n monitoring
# kubectl edit svc prometheus-foxutech-grafana -n monitoring
Note: These settings should not generally be used in production. The endpoints should be secured behind an Ingress.
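For production, instead of a public LoadBalancer per service, you could expose Prometheus behind an Ingress. A minimal sketch might look like the following (the hostname and ingress class are placeholders, and you should add TLS and authentication on top):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus
  namespace: monitoring
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller is installed
  rules:
  - host: prometheus.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-foxutech-kube-p-prometheus
            port:
              number: 9090
```

The service name must match the one created by your Helm release, so adjust it to your own release name.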
Once completed, you should see an external IP on all the services. You can then access Prometheus on port 9090, Alertmanager on port 9093 and Grafana on port 80.
Prometheus:
Alertmanager:
Grafana:
Log in to the Grafana dashboard using “admin” as the username and “prom-operator” as the default password.
Port-Forward
By default, all the monitoring options for Prometheus will be enabled. Let’s leave it this way for now.
Prometheus
Create a port forward to access the Prometheus query interface.
# kubectl port-forward --namespace monitoring svc/prometheus-foxutech-kube-p-prometheus 9090
Open http://localhost:9090 in your web browser and explore the UI to see the raw metrics inside Prometheus.
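For example, you can try a few simple PromQL queries in the expression browser. The metric names below come from the node exporter and kube-state-metrics, which the kube-prometheus-stack chart installs by default, but confirm they exist in your cluster:

```
# Scrape targets currently up (1 = up, 0 = down)
up

# Per-node CPU usage rate over the last 5 minutes, excluding idle time
rate(node_cpu_seconds_total{mode!="idle"}[5m])

# Number of pods per namespace
count(kube_pod_info) by (namespace)
```

Each query can be rendered as a table or a graph directly in the Prometheus UI.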
Alertmanager
Create a port forward to access the Alertmanager interface.
# kubectl port-forward --namespace monitoring svc/prometheus-foxutech-kube-p-alertmanager 9093
Grafana
Create a port forward to access the Grafana dashboard.
# kubectl port-forward --namespace monitoring svc/prometheus-foxutech-grafana 8080:80
You will need to stop the previous port forward command, or run this in another terminal if you would like to run them side by side.
Open http://localhost:8080 in your web browser.
Now our monitoring setup is ready. In a coming post, let’s explore some metrics configuration and basic alerts.