descheduler
kube-prometheus
|  | descheduler | kube-prometheus |
|---|---|---|
| Mentions | 27 | 41 |
| Stars | 4,058 | 6,270 |
| Stars growth | 2.4% | 2.7% |
| Activity | 0.0 | 8.8 |
| Last commit | 3 days ago | 5 days ago |
| Language | Go | Jsonnet |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
descheduler
- Any advice on rebalancing and reallocating pods to spread them among low-usage nodes with an existing deployment?
- What Wishlist Features Would You Like To See From K8s?
- Schedule on Least Utilized Node
maybe descheduler can help? https://github.com/kubernetes-sigs/descheduler
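For illustration, a minimal descheduler policy enabling the LowNodeUtilization strategy might look like this (the threshold percentages are example values, not recommendations):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below all of these are considered underutilized
        thresholds:
          cpu: 20
          memory: 20
          pods: 20
        # Nodes above any of these are candidates to evict from
        targetThresholds:
          cpu: 50
          memory: 50
          pods: 50
```

Evicted pods are then rescheduled by the normal kube-scheduler, which is what spreads load onto the underutilized nodes.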
- I have 3 nodes. One of the nodes suddenly went down. How do I make the pods spread evenly across the other nodes?
Surprised this wasn't suggested yet: you can also use software like the k8s Descheduler, which executes periodically to rebalance your workloads across the existing nodes.
- Leader Election In Kubernetes
Here's an example of the coordination API in Go: https://github.com/kubernetes-sigs/descheduler/commit/3cbae5e72ba53447a609e6001755ff395e6eeceb https://github.com/kubernetes-sigs/descheduler/commit/0a52af9ab82a52fd8c864a81f4033736f11aab34
- Ask HN: Who else is working/on call over Christmas?
This is something a (now former) colleague of mine pointed out: the Kubernetes descheduler can enforce a maximum pod lifetime[0], which in effect forces continual restarts. So if your system cannot tolerate running for a long time continuously, this is one method to gracefully restart long-running pods.
[0]: https://github.com/kubernetes-sigs/descheduler#podlifetime
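The PodLifeTime strategy mentioned above is enabled in the descheduler policy file; a minimal sketch (the 24-hour limit is an example value):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "PodLifeTime":
    enabled: true
    params:
      podLifeTime:
        # Evict pods older than 24 hours (86400 seconds)
        maxPodLifeTimeSeconds: 86400
```

Eviction respects PodDisruptionBudgets, so pods are recycled gracefully rather than all at once.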
- Cluster auto heal?
- K8s Operators - How do you reserve resources on every node for system DaemonSets?
No, it does not... that's why tools like https://github.com/kubernetes-sigs/descheduler exist.
- Kubernetes Descheduler
- Kubernetes Cordon: How It Works and When to Use It
You might want to take a look at descheduler: https://github.com/kubernetes-sigs/descheduler
kube-prometheus
- Upgrading Hundreds of Kubernetes Clusters
The last one is mostly an observability stack with Prometheus, Metrics Server, and Prometheus Adapter, giving excellent insight into what is happening on the cluster. You can reuse the same stack for autoscaling by repurposing the data collected for monitoring.
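As a sketch of the autoscaling reuse described above: once Prometheus Adapter exposes application metrics through the custom metrics API, an HPA can scale on them. The workload name, metric name, and target value below are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web                  # hypothetical workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          # Hypothetical metric, assumed to be exposed by Prometheus Adapter
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "100"
```

The same Prometheus data thus drives both dashboards and scaling decisions, which is the "repurposing" the quote refers to.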
- Unfork with ArgoCD
kustomize Kube Prometheus
- Smart-Cash project - Adding monitoring to EKS using Prometheus operator
On the other hand, the kube-prometheus project provides documentation and scripts to run end-to-end Kubernetes cluster monitoring with the Prometheus Operator, making it easier to monitor the cluster.
- Scaling Temporal: The Basics
For our load testing we’ve deployed Temporal on Kubernetes, and we’re using MySQL for the persistence backend. The MySQL instance has 4 CPU cores and 32GB RAM, and each Temporal service (Frontend, History, Matching, and Worker) has 2 pods, with requests for 1 CPU core and 1GB RAM as a starting point. We’re not setting CPU limits for our pods—see our upcoming Temporal on Kubernetes post for more details on why. For monitoring we’ll use Prometheus and Grafana, installed via the kube-prometheus stack, giving us some useful Kubernetes metrics.
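The resource settings described in that setup would translate to a container spec roughly like this (a sketch, not their actual manifest):

```yaml
resources:
  requests:
    cpu: "1"       # 1 CPU core per pod
    memory: 1Gi
  # No CPU limit is set, deliberately; the post defers the
  # reasoning to a follow-up article.
```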
- How do you set up Grafana alerts for your cluster? Which mixins library?
The two most common approaches I have seen are kube-prometheus-stack and kube-prometheus.
- Issues with "victoria-metrics-k8s-stack", monitoring k8s targets
- I'm missing a lot of the Grafana dashboards that are provisioned during the deployment (not sure why, as it has worked before) and wanted to add them after install. I believe they live in different ConfigMaps, like the ones in kube-prometheus, but is there a way to force provisioning them all again at once (multiple k8s, node_exporter, vm, etc.)?
- What metrics are most important for checking Kubernetes cluster health?
Check out the kube-prometheus project -- https://github.com/prometheus-operator/kube-prometheus It's a bit heavy, but the included recording rules and dashboards give you a great start at understanding your cluster.
- Easy Prometheus/Grafana Setup With Dashboards Repo
The actual link to the Prometheus/Grafana bundle: https://github.com/prometheus-operator/kube-prometheus
- How To Configure Kube-Prometheus
Here’s a list of what’s installed: https://github.com/prometheus-operator/kube-prometheus/tree/main/manifests
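For reference, the kube-prometheus README's quickstart applies those manifests in two phases, CRDs and namespace first, then the stack itself (paths assume the repository layout linked above):

```shell
git clone https://github.com/prometheus-operator/kube-prometheus
cd kube-prometheus
# Phase 1: create the monitoring namespace and the Operator's CRDs
kubectl apply --server-side -f manifests/setup
kubectl wait --for condition=Established --all CustomResourceDefinition --namespace=monitoring
# Phase 2: deploy the monitoring stack
kubectl apply -f manifests/
```

The two-phase apply matters: the stack's resources are custom resources, so their definitions must be established before the second apply can succeed.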
- How to install a user-managed Prometheus and Grafana instance on OpenShift 4?
What are some alternatives?
autoscaler - Autoscaling components for Kubernetes
metrics-server - Scalable and efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
pod-reaper - Rule based pod killing kubernetes controller
helm-charts - Prometheus community Helm charts
nfs-subdir-external-provisioner - Dynamic sub-dir volume provisioner on a remote NFS server.
prometheus-operator - Prometheus Operator creates/configures/manages Prometheus clusters atop Kubernetes
aws-ebs-csi-driver - CSI driver for Amazon EBS https://aws.amazon.com/ebs/
kube-thanos - Kubernetes specific configuration for deploying Thanos.
kube-scheduler-simulator - The simulator for the Kubernetes scheduler
sloth - 🦥 Easy and simple Prometheus SLO (service level objectives) generator
threaded-cron-task-engine - A multi-threaded cron/supervisord replacement that offers a bit more and is dead simple
ansible-prometheus - Deploy Prometheus monitoring system