Installing Prometheus and Grafana on a Kubernetes cluster using Helm


Below are some quick notes on how I set up Prometheus and Grafana on a Kubernetes cluster using Helm.

[+] Have a K8S cluster already.

$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE   VERSION   OS-IMAGE           KERNEL-VERSION       CONTAINER-RUNTIME
kmaster2   Ready    control-plane,master   9d    v1.22.2   Ubuntu 18.04 LTS   4.15.0-161-generic   docker://20.10.9
knode3     Ready    <none>                 9d    v1.22.2   Ubuntu 18.04 LTS   4.15.0-161-generic   docker://20.10.9
knode4     Ready    <none>                 9d    v1.22.2   Ubuntu 18.04 LTS   4.15.0-161-generic   docker://20.10.9

All three nodes have the below OS details:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04 LTS
Release:        18.04
Codename:       bionic

[+] Install Helm on kmaster2. I preferred using apt-get:

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

[+] Then add the prometheus-community Helm repo:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

[+] Create a namespace for keeping the charts in their own namespace:

kubectl create ns prometheus

[+] Install prometheus-community/kube-prometheus-stack:

helm install prometheus prometheus-community/kube-prometheus-stack -n prometheus

$ helm list -n prometheus
NAME         NAMESPACE    REVISION   UPDATED                                   STATUS     CHART                          APP VERSION
prometheus   prometheus   1          2021-10-22 10:07:13.399835228 -0700 PDT   deployed   kube-prometheus-stack-19.2.2   0.50.0

[+] Check all the objects created:

$ kubectl get all -n prometheus
NAME                                                         READY   STATUS    RESTARTS   AGE
pod/alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running   0          44s
pod/prometheus-grafana-b8cd4d67-4t9wb                        2/2     Running   0          3m39s
pod/prometheus-kube-prometheus-operator-bcdfdbc79-cf8cc      1/1     Running   0          3m39s
pod/prometheus-kube-state-metrics-58c5cd6ddb-9xtmt           1/1     Running   0          3m39s
pod/prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running   0          44s
pod/prometheus-prometheus-node-exporter-46f6g                1/1     Running   0          3m41s
pod/prometheus-prometheus-node-exporter-sc6c7                1/1     Running   0          3m41s
pod/prometheus-prometheus-node-exporter-zzq2q                1/1     Running   0          3m41s

NAME                                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-operated                     ClusterIP   None         <none>        9093/TCP,9094/TCP,9094/UDP   3m14s
service/prometheus-grafana                        ClusterIP                <none>        80/TCP                       3m44s
service/prometheus-kube-prometheus-alertmanager   ClusterIP                <none>        9093/TCP                     3m45s
service/prometheus-kube-prometheus-operator       ClusterIP                <none>        443/TCP                      3m48s
service/prometheus-kube-prometheus-prometheus     ClusterIP                <none>        9090/TCP                     3m50s
service/prometheus-kube-state-metrics             ClusterIP                <none>        8080/TCP                     3m50s
service/prometheus-operated                       ClusterIP   None         <none>        9090/TCP                     3m11s
service/prometheus-prometheus-node-exporter       ClusterIP                <none>        9100/TCP                     3m47s

NAME                                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-prometheus-node-exporter   3         3         3       3            3           <none>          3m44s

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-grafana                    1/1     1            1           3m43s
deployment.apps/prometheus-kube-prometheus-operator   1/1     1            1           3m43s
deployment.apps/prometheus-kube-state-metrics         1/1     1            1           3m43s

NAME                                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-grafana-b8cd4d67                     1         1         1       3m42s
replicaset.apps/prometheus-kube-prometheus-operator-bcdfdbc79   1         1         1       3m42s
replicaset.apps/prometheus-kube-state-metrics-58c5cd6ddb        1         1         1       3m42s

NAME                                                                    READY   AGE
statefulset.apps/alertmanager-prometheus-kube-prometheus-alertmanager   1/1     3m13s
statefulset.apps/prometheus-prometheus-kube-prometheus-prometheus       1/1     3m11s

[+] Get the dashboard port information and default user created:

$ kubectl get pods -o=custom-columns=NameSpace:.metadata.namespace,NAME:.metadata.name,CONTAINERS:.spec.containers[*].name -n prometheus
NameSpace    NAME                                                     CONTAINERS
prometheus   alertmanager-prometheus-kube-prometheus-alertmanager-0   alertmanager,config-reloader
prometheus   prometheus-grafana-b8cd4d67-4t9wb                        grafana-sc-dashboard,grafana
prometheus   prometheus-kube-prometheus-operator-bcdfdbc79-cf8cc      kube-prometheus-stack
prometheus   prometheus-kube-state-metrics-58c5cd6ddb-9xtmt           kube-state-metrics
prometheus   prometheus-prometheus-kube-prometheus-prometheus-0       prometheus,config-reloader
prometheus   prometheus-prometheus-node-exporter-46f6g                node-exporter
prometheus   prometheus-prometheus-node-exporter-sc6c7                node-exporter
prometheus   prometheus-prometheus-node-exporter-zzq2q                node-exporter

Note from the above that the pod of interest is "prometheus-grafana-b8cd4d67-4t9wb" and the container of interest is "grafana".

[+] Get the HTTP port number and user info:

$ kubectl logs prometheus-grafana-b8cd4d67-4t9wb -c grafana -n prometheus | grep -E "Listen|default admin"
t=2021-10-22T17:09:45+0000 lvl=info msg="Created default admin" logger=sqlstore user=admin
t=2021-10-22T17:09:46+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=http subUrl= socket=

[+] The password for Grafana is "prom-operator"; it is the default grafana.adminPassword in the kube-prometheus-stack chart's values.

[+] Review the Grafana dashboard by just using the pod (port-forward):

$ kubectl port-forward -n prometheus pod/prometheus-grafana-b8cd4d67-4t9wb 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

Go to http://localhost:3000 ( admin / prom-operator ).

[+] Review the Prometheus dashboard by just using the pod & container logs (port-forward):

$ kubectl logs prometheus-prometheus-kube-prometheus-prometheus-0 -n prometheus -c prometheus | grep -i 9090
level=info ts=2021-10-22T17:38:39.008Z caller=web.go:541 component=web msg="Start listening for connections" address=0.0.0.0:9090

$ kubectl port-forward -n prometheus prometheus-prometheus-kube-prometheus-prometheus-0 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

[+] Create a quick SVC to use the Grafana deployment on a NodePort:

$ kubectl get pod -n prometheus -l app.kubernetes.io/name=grafana
NAME                                READY   STATUS    RESTARTS      AGE
prometheus-grafana-b8cd4d67-4t9wb   2/2     Running   2 (54m ago)   80m

$ kubectl get deployment -n prometheus -l app.kubernetes.io/name=grafana
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
prometheus-grafana   1/1     1            1           80m

$ kubectl expose deployment prometheus-grafana -n prometheus --name=prometheus-svc --port=3000 --type=NodePort
service/prometheus-svc exposed

$ kubectl get svc -n prometheus | grep -i prometheus-svc
prometheus-svc   NodePort   3000:30371/TCP   73s

Now in a browser go to any of the cluster nodes' IP on port 30371 to get into the Grafana dashboard. In my cluster I went to one of the node IPs on port 30371 ( admin / prom-operator ).

[+] For a constant service you can do:

$ kubectl expose deployment prometheus-grafana -n prometheus --name=prometheus-svc --port=3000 --type=NodePort --dry-run=client -o yaml > grafana.yaml

Edit the YAML file to add "nodePort: 30000":

$ cat grafana.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 8.2.1
    helm.sh/chart: grafana-6.17.2
  name: prometheus-svc
  namespace: prometheus
spec:
  ports:
  - port: 3000
    nodePort: 30000
    protocol: TCP
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/name: grafana
  type: NodePort
status:
  loadBalancer: {}

$ kubectl apply -f grafana.yaml
service/prometheus-svc created

$ kubectl get svc -n prometheus | grep -i prometheus-svc
prometheus-svc   NodePort   3000:30000/TCP   20s

$ kubectl describe svc prometheus-grafana -n prometheus
Name:              prometheus-grafana
Namespace:         prometheus
Labels:            app.kubernetes.io/instance=prometheus
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=grafana
Annotations:       meta.helm.sh/release-name: prometheus
                   meta.helm.sh/release-namespace: prometheus
Selector:          app.kubernetes.io/instance=prometheus,app.kubernetes.io/name=grafana
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:
IPs:
Port:              service  80/TCP
TargetPort:        3000/TCP
Endpoints:
Session Affinity:  None
Events:            <none>

Now in a browser go to any of the cluster nodes' IP on port 30000 to get into the Grafana dashboard. In my cluster I went to one of the node IPs on port 30000 ( admin / prom-operator ) and it works :)
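[+] Instead of creating the NodePort Service by hand with kubectl expose, the same result can be made part of the Helm release itself. A sketch, assuming the bundled Grafana subchart honours the service.type and service.nodePort values (confirm the keys against your chart version's values.yaml before relying on them):

```yaml
# values-grafana-nodeport.yaml (hypothetical file name)
# Assumes the Grafana subchart's service.type / service.nodePort values.
grafana:
  service:
    type: NodePort
    nodePort: 30000
```

Apply it with: helm upgrade prometheus prometheus-community/kube-prometheus-stack -n prometheus -f values-grafana-nodeport.yaml. This keeps the Service managed by Helm, so it survives future helm upgrade runs and is cleaned up on helm uninstall.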
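[+] Rather than hard-coding "prom-operator", you can read the Grafana credentials back from the Secret the chart creates. A minimal sketch, assuming the Secret is named "prometheus-grafana" after the release used above (verify the exact name with kubectl get secrets -n prometheus):

```shell
# Read the Grafana admin credentials from the chart-created Secret.
# NOTE: the Secret name "prometheus-grafana" is an assumption based on the
# release name "prometheus" -- check: kubectl get secrets -n prometheus
kubectl get secret prometheus-grafana -n prometheus \
  -o jsonpath='{.data.admin-user}' | base64 --decode; echo
kubectl get secret prometheus-grafana -n prometheus \
  -o jsonpath='{.data.admin-password}' | base64 --decode; echo
```

Keep in mind Secret data is only base64-encoded, not encrypted, so anyone with read access to the namespace can recover the password this way.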

