kube-state-metrics
rancher
| | kube-state-metrics | rancher |
|---|---|---|
| Mentions | 33 | 89 |
| Stars | 5,086 | 22,517 |
| Growth | 2.1% | 0.8% |
| Activity | 8.9 | 9.9 |
| Latest commit | 7 days ago | 5 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kube-state-metrics
- Do we have any Prometheus metric to get the kubernetes cluster-level CPU/Memory requests/limits?
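One answer, hedged: kube-state-metrics exposes per-container requests and limits that can be summed cluster-wide with PromQL; a sketch, with metric and label names as in the KSM v2.x docs:

```promql
# Total CPU requested across the cluster, in cores
sum(kube_pod_container_resource_requests{resource="cpu"})

# Total memory limits across the cluster, in bytes
sum(kube_pod_container_resource_limits{resource="memory"})

# Requested CPU as a fraction of what the nodes can allocate
sum(kube_pod_container_resource_requests{resource="cpu"})
  / sum(kube_node_status_allocatable{resource="cpu"})
```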
- 10 Kubernetes Visualization Tools that You Can't Afford to Miss
git clone https://github.com/kubernetes/kube-state-metrics.git
- Why is the Prometheus metric 'kube_pod_completion_time' returning empty query results?
https://github.com/kubernetes/kube-state-metrics/blob/main/docs/pod-metrics.md According to this GitHub doc, the completion metric reports the pod's termination time, if I understood correctly.
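A likely explanation, per the linked docs: kube_pod_completion_time is only emitted once every container in a pod has terminated (typically finished Job pods), so querying it against running workloads returns nothing. A sketch:

```promql
# Returns samples only for pods whose containers have all terminated,
# e.g. completed Job pods; empty for running workloads
kube_pod_completion_time

# Rough job duration in seconds (both metrics are Unix timestamps)
kube_pod_completion_time - on (namespace, pod) kube_pod_start_time
```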
- Google Kubernetes Engine's metrics vs Self-managed
kube-state-metrics
- Prometheus node exporter and cadvisor to send metrics to central prometheus cluster
Those are entirely different types of data. You can get that from something like kube-state-metrics
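The division of labor between the three exporters can be sketched as a Prometheus scrape config; the job names and targets below are placeholders:

```yaml
scrape_configs:
  - job_name: node-exporter        # host-level: node CPU, memory, disk, network
    static_configs:
      - targets: ["node1:9100"]
  - job_name: cadvisor             # container-level: per-container resource usage
    static_configs:
      - targets: ["node1:8080"]
  - job_name: kube-state-metrics   # object-level: Deployments, Pods, desired vs. ready
    static_configs:
      - targets: ["kube-state-metrics.kube-system:8080"]
```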
- Scaling kube-state-metrics in large cluster
I've never had a cluster of that size, so take it with a grain of salt - but maybe you could try purpose-based sharding? KSM has allowlist and denylist config flags, for configuring which metrics it exposes https://github.com/kubernetes/kube-state-metrics/blob/main/docs/cli-arguments.md
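Besides the allowlist/denylist flags, KSM also ships built-in horizontal sharding (--shard/--total-shards). A purpose-based shard might look like the Deployment args fragment below; the metric patterns are illustrative, with flag names per the linked CLI docs:

```yaml
# A shard serving only frequently-scraped pod/deployment state;
# a second deployment would carry the remaining resources
args:
  - --resources=pods,deployments
  - --metric-allowlist=kube_pod_status_phase,kube_deployment_status_replicas.*
```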
- Questions about Kubernetes
Kubernetes itself will not notify you. The way I've seen people do this is to use something like kube-state-metrics or node_exporter, export that to Prometheus (or preferably VictoriaMetrics, because Prometheus is terrible IMO), and then set up alerts on that with Alertmanager or equivalent, or just look at dashboards regularly with Grafana. Realistically, I recommend only setting alerts on disk usage and application/database latency. CPU and memory utilization aren't great metrics to alert on a lot of the time.
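A disk-usage alert of the kind described might look like this Prometheus rule; the node_filesystem_* metrics come from node_exporter, and the 15% threshold and filesystem filter are assumptions:

```yaml
groups:
  - name: disk
    rules:
      - alert: DiskAlmostFull
        expr: |
          node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
            / node_filesystem_size_bytes < 0.15
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.instance }}:{{ $labels.mountpoint }} has less than 15% free"
```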
- EKS scalability best practices
Another tip you could consider spelling out a little more is to monitor the number of resources created per Kind. This is somewhat mentioned for Jobs and Services, but any Kind of which thousands of resources are created will put stress on the control plane. The total number of resources per namespace/cluster can be monitored with kube-state-metrics. KSM does not emit metrics for resources created from CRDs by default; these can be implemented with KSM's custom resource state metrics: https://github.com/kubernetes/kube-state-metrics/blob/main/docs/customresourcestate-metrics.md
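The linked custom resource state feature takes a config along these lines; the Widget CRD, its group, and its spec.replicas field are hypothetical examples:

```yaml
kind: CustomResourceStateMetrics
spec:
  resources:
    - groupVersionKind:
        group: example.com   # hypothetical CRD group
        version: v1
        kind: Widget
      metrics:
        - name: widget_replicas
          help: "Replicas requested by each Widget"
          each:
            type: Gauge
            gauge:
              path: [spec, replicas]
```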
- Observability-Landscape-as-Code in Practice
We then have various other Metrics called Kubernetes Workload Metrics. These are the dashboards with names that start with “Kubernetes / Compute Resources / Workload”. These dashboards are specific to the services you are running. They take into account the Kubernetes Workloads in your various namespaces, using kube-state-metrics. For a closer look, check out otel_demo_app_k8s_dashboard.tf.
- Kubernetes Costs: Effective Cost Optimization Strategies To Reduce Your k8s Bill
The first step to optimizing costs is gaining visibility into your costs using tools. Kubernetes provides a Metrics Server and kube-state-metrics that can give you the overall picture of resource utilization by your cluster. There are more tools that provide more granular breakdowns and provide dashboards with business metrics, infra cost, and alerting functionalities. Here are some strategies to optimize your resource utilization and cloud bills on k8s.
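As a starting point for that visibility, requested-vs-used comparisons can be written directly against kube-state-metrics plus cAdvisor metrics (assuming both are scraped into the same Prometheus); for example:

```promql
# CPU cores requested but not actually used, per namespace -
# a rough signal of over-provisioned (and thus over-billed) workloads
sum by (namespace) (kube_pod_container_resource_requests{resource="cpu"})
  - sum by (namespace) (rate(container_cpu_usage_seconds_total[5m]))
```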
rancher
- OpenTF Announces Fork of Terraform
Did something happen to the Apache 2 rancher? https://github.com/rancher/rancher/blob/v2.7.5/LICENSE RKE2 is similarly Apache 2: https://github.com/rancher/rke2/blob/v1.26.7%2Brke2r1/LICENS...
- Kubernetes / Rancher 2, mongo-replicaset with Local Storage Volume deployment
I follow the four steps (A-D) below, but the first pod deployment never ends. What's wrong with it? Logs and result screens are at the end. Detailed configuration can be found here.
- Trouble with RKE2 HA Setup: Part 2
- Critical vulnerability (CVE-2023-22651) in Rancher 2.7.2 - Update to 2.7.3
CVE-2023-22651 is rated 9.9/10 : https://github.com/rancher/rancher/security/advisories/GHSA-6m9f-pj6w-w87g
- What's your take if DevOps colleague always got new initiative / idea?
Depends. When I came into my last company I immediately noticed the lack of reproducible environments. Brought this up a few times and was met with some resistance because "we didn't have the capacity"... Until prod went down and it took us 23 hours to bring it back up due to spaghetti terraform.
- Questions about Rancher Launched/imported AKS
For the latest releases of rancher: https://github.com/rancher/rancher/releases
When is Rancher 2.7.1 going to be released? The Rancher support matrix for 2.7.1 shows k8s v1.24.6 as the highest supported version, and Azure will drop AKS v1.24 in a few months... Should this be a concern for us? What could happen if we create our cluster with Rancher for an unsupported K8s version, 1.25 for example?
- Rancher 2.7.2 just got released, including support for 1.25. I have tested running unsupported versions before; unless there are major deprecations in the Kubernetes API, it is fine in my experience.
If we move to AKS imported clusters, and we add node pools and upgrade the cluster, will those changes be reflected in the Rancher platform?
- Yep!
If we face issues running an unsupported K8s version on Rancher-launched K8s clusters, is it possible to remove the cluster from Rancher, do the work we need, and then import it back into the platform?
- Yes, but be careful and test before doing this in prod. From the top of my mind: remove the cluster from Rancher (if imported); if Rancher created it, you might want to revoke Rancher's SA key for the cluster first (so it can't remove it). Delete the cattle-system namespace, and any other cattle-* namespaces you don't want to keep. Then do your thing.
It looks like AKS is faster than Rancher regarding supported Kubernetes versions... We would like to know if Rancher will always stay on track with AKS regarding the removal of K8s version support and new versions.
- In my experience, yes. (I've been using Rancher on all three clouds for 4 years now.)
What exactly are the big differences between imported AKS and Rancher-launched AKS? What should we look at, and what issues can we face when using one or the other?
- The main difference is that Rancher will not be able to upgrade an imported cluster for you. You will have to do that yourself.
- rancher2_bootstrap.admin resource fails after Kubernetes v1.23.15
variable "rancher" {
  type = object({
    namespace = string
    version   = string
    branch    = string
    chart_set = list(object({
      name  = string
      value = string
    }))
  })
  default = {
    namespace = "cattle-system"
    # There is a bug with destroying the cloud credentials in versions 2.6.9 through 2.7.1;
    # it will be fixed in the next release, 2.7.2.
    # See https://github.com/rancher/rancher/issues/39300
    version = "2.7.0"
    branch  = "stable"
    chart_set = [
      {
        name  = "replicas"
        value = "3" # must be a string to match the declared type
      },
      {
        name  = "ingress.ingressClassName"
        value = "nginx-external"
      },
      {
        name  = "ingress.tls.source"
        value = "rancher"
      },
      # There is a bug with the uninstallation of Rancher due to the missing
      # priorityClassName of rancher-webhook; priorityClassName needs to be set.
      # See https://github.com/rancher/rancher/issues/40935
      {
        name  = "priorityClassName"
        value = "system-node-critical"
      }
    ]
  }
  description = "Rancher Helm chart properties."
}
- Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow
When I searched DuckDuckGo instead, the 12th link actually had the real answer. It's in this issue on Rancher's GitHub. Turns out the Rancher admin needs to be in all of the Keycloak groups they want to have show up in the auto-populated picklist in Rancher. Being a Keycloak admin, and even creating the groups, isn't good enough. Frustratingly, the "caveat" note the Rancher maintainer points to appears only in the guide to setting up Keycloak for SAML, but apparently it also applies to OIDC.
- How to enable TLS 1.3 protocol
Explicitly set TLS 1.3 in Rancher, though it could be a bug in Rancher: https://github.com/rancher/rancher/issues/35654
- Rancher deployment, hanging on login and setup pages
Thanks. Yeah looks like this might work: https://github.com/rancher/rancher/releases/tag/v2.7.2-rc3
What are some alternatives?
cadvisor - Analyzes resource usage and performance characteristics of running containers.
podman - Podman: A tool for managing OCI containers and pods.
metrics-server - Scalable and efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
lens - The way the world runs Kubernetes
php-fpm_exporter - A Prometheus exporter for PHP-FPM.
microk8s - MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
k3s - Lightweight Kubernetes
kubesphere - The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️
kubespray - Deploy a Production Ready Kubernetes Cluster
cluster-api - Home for Cluster API, a subproject of sig-cluster-lifecycle
kube-metrics-adapter - General purpose metrics adapter for Kubernetes HPA metrics