krr vs kube-state-metrics

| | krr | kube-state-metrics |
|---|---|---|
| Mentions | 5 | 33 |
| Stars | 2,311 | 5,137 |
| Growth | 13.8% | 1.8% |
| Activity | 9.2 | 9.1 |
| Latest commit | 7 days ago | 4 days ago |
| Language | Python | Go |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
krr
- What is the role of QoS for Pods?
Thanks, buddy. I recently saw a tool by Robusta, but I'm not sure if it's helpful or not. I haven't tried it yet. https://github.com/robusta-dev/krr
- Preventing wasted resources
To calculate optimal values, you will need historical metrics from a timeframe with representative load flowing through the pods. Then you can check the CPU/memory usage metrics to see typical and peak usage. There are also various tools that can help you with recommendations, but they will most likely require metrics history too, e.g. https://github.com/robusta-dev/krr
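Tools like krr derive recommendations from exactly this kind of historical data. As a rough illustration (not krr's actual algorithm), a percentile-based recommendation from usage samples could be sketched in Python:

```python
def recommend_cpu_request(samples: list[float], percentile: float = 0.95) -> float:
    """Pick a high percentile of observed CPU usage (in cores) as the request."""
    ordered = sorted(samples)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def recommend_memory_limit(samples: list[float], buffer: float = 0.15) -> float:
    """Use max observed memory usage (in bytes) plus a safety buffer as the limit."""
    return max(samples) * (1 + buffer)

# Hypothetical usage samples for one pod: CPU in cores, memory in bytes
print(recommend_cpu_request([0.05, 0.07, 0.06, 0.30, 0.08, 0.09, 0.07, 0.06, 0.05, 0.10]))
print(recommend_memory_limit([180e6, 200e6, 210e6, 195e6]))
```

The idea is that a high percentile absorbs transient spikes for requests, while memory limits are sized off the observed maximum because exceeding them gets the container OOM-killed.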
- Who is using VerticalPodAutoscaler?
VPA is ok. I thought this project, https://github.com/robusta-dev/krr, from Robusta looked promising (for sizing containers). I'd also look at continuous profiling solutions like Prodfiler or Parca.
- KRR - accurate resource recommendations based on historical Prometheus data
- Show HN: Prometheus-based resource recommendations for Kubernetes
kube-state-metrics
- Do we have any Prometheus metric to get the kubernetes cluster-level CPU/Memory requests/limits?
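For reference, kube-state-metrics does expose per-container request and limit metrics (KSM v2 metric names) that can be summed cluster-wide, e.g.:

```promql
# Total CPU requested across the cluster (cores)
sum(kube_pod_container_resource_requests{resource="cpu"})

# Total memory limits across the cluster (bytes)
sum(kube_pod_container_resource_limits{resource="memory"})
```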
- 10 Kubernetes Visualization Tool that You Can't Afford to Miss
git clone https://github.com/kubernetes/kube-state-metrics.git
- Why is the Prometheus metric 'kube_pod_completion_time' returning empty query results?
https://github.com/kubernetes/kube-state-metrics/blob/main/docs/pod-metrics.md According to this GitHub repo, the completion metric corresponds to the pod's termination time, if I understood correctly.
- Google Kubernetes Engine's metrics vs Self-managed
kube-state-metrics
- Prometheus node exporter and cadvisor to send metrics to central prometheus cluster
Those are entirely different types of data. You can get that from something like kube-state-metrics.
- Scaling kube-state-metrics in large cluster
I've never run a cluster of that size, so take this with a grain of salt, but maybe you could try purpose-based sharding? KSM has allowlist and denylist config flags for configuring which metrics it exposes: https://github.com/kubernetes/kube-state-metrics/blob/main/docs/cli-arguments.md
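As an illustration of that idea, one KSM instance could be scoped to a subset of resources and metrics via those flags. A sketch of a Deployment container spec (the image tag and metric selection here are just examples):

```yaml
containers:
  - name: kube-state-metrics-pods
    image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.12.0
    args:
      # Only run the pod collector in this instance...
      - --resources=pods
      # ...and only expose the metrics this shard is responsible for.
      - --metric-allowlist=kube_pod_status_phase,kube_pod_container_resource_requests
```

A second instance would carry a different `--resources`/`--metric-allowlist` combination, splitting the scrape load by purpose rather than by hash.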
- Questions about Kubernetes
Kubernetes itself will not notify you. The way I've seen people do this is to use something like kube-state-metrics or node_exporter, export that to Prometheus (or preferably VictoriaMetrics, because Prometheus is terrible IMO), and then set up alerts on that with Alertmanager or equivalent, or just look at dashboards regularly in Grafana. Realistically, I recommend only setting alerts on disk usage and application/database latency. CPU and memory utilization often aren't great metrics to alert on.
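As an illustration of that setup, a disk-usage alerting rule on node_exporter metrics might look roughly like this (the threshold, duration, and labels are arbitrary examples):

```yaml
groups:
  - name: disk
    rules:
      - alert: DiskAlmostFull
        # Fire when a real filesystem has less than 10% space available
        expr: |
          node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
            / node_filesystem_size_bytes < 0.10
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Filesystem on {{ $labels.instance }} is below 10% free space"
```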
- EKS scalability best practices
Another tip that you could consider spelling out a little more is to monitor the number of resources created per Kind. This is somewhat mentioned for jobs and services, but any Kind with thousands of resources will put stress on the control plane. The total number of resources per namespace/cluster can be monitored with kube-state-metrics. KSM does not emit metrics for resources created from CRDs out of the box; these can be implemented with KSM's custom resource state metrics: https://github.com/kubernetes/kube-state-metrics/blob/main/docs/customresourcestate-metrics.md
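A minimal sketch of such a config, assuming a hypothetical `Foo` CRD in group `myteam.io` (the file is passed to KSM via its `--custom-resource-state-config-file` flag):

```yaml
kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: myteam.io   # hypothetical CRD group
        version: "v1"
        kind: Foo
      metrics:
        - name: "active_count"
          help: "Number of active objects reported in Foo status"
          each:
            type: Gauge
            gauge:
              # Read the gauge value from .status.active on each Foo object
              path: [status, active]
```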
- Observability-Landscape-as-Code in Practice
We then have various other metrics called Kubernetes Workload Metrics. These are the dashboards with names that start with “Kubernetes / Compute Resources / Workload”. These dashboards are specific to the services you are running. They take into account the Kubernetes Workloads in your various namespaces, using kube-state-metrics. For a closer look, check out otel_demo_app_k8s_dashboard.tf.
- Kubernetes Costs: Effective Cost Optimization Strategies To Reduce Your k8s Bill
The first step to optimizing costs is gaining visibility into your costs using tools. Kubernetes provides a Metrics Server and kube-state-metrics that can give you the overall picture of resource utilization by your cluster. There are more tools that provide more granular breakdowns and provide dashboards with business metrics, infra cost, and alerting functionalities. Here are some strategies to optimize your resource utilization and cloud bills on k8s.
What are some alternatives?
kptop - CLI tool for Kubernetes that provides pretty monitoring for Nodes, Pods, Containers, and PVCs resources on the terminal through Prometheus metrics
cadvisor - Analyzes resource usage and performance characteristics of running containers.
optscale - FinOps and MLOps platform to run ML/AI and regular cloud workloads with optimal performance and cost.
metrics-server - Scalable and efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
prometheus-enhanced-snmp-exporter - Enhanced Prometheus SNMP exporter with multithreading support and a variable SNMP polling interval
php-fpm_exporter - A prometheus exporter for PHP-FPM.
prom2teams - An HTTP server built with Python that receives alert notifications from a previously configured Prometheus Alertmanager instance and forwards them to Microsoft Teams using defined connectors
k3s - Lightweight Kubernetes
hitron-exporter - Hitron CGN series Prometheus exporter
kubespray - Deploy a Production Ready Kubernetes Cluster
django-prometheus - Export Django monitoring metrics for Prometheus.io
kube-metrics-adapter - General purpose metrics adapter for Kubernetes HPA metrics