enhancements vs k8s-prometheus-adapter

| | enhancements | k8s-prometheus-adapter |
|---|---|---|
| Mentions | 58 | 13 |
| Stars | 3,257 | 1,824 |
| Growth | 0.7% | 0.5% |
| Activity | 9.7 | 6.2 |
| Last commit | 6 days ago | 12 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
enhancements
-
IBM to buy HashiCorp in $6.4B deal
> was always told early on that although they supported Vault on Kubernetes via a Helm chart, they did not recommend using it on anything but EC2 instances (because of "security", though their reasoning never really made sense).
The reasoning is basically that there are some security and isolation guarantees you don't get in Kubernetes that you do get on bare metal or (to a somewhat lesser extent) in VMs.
In particular, on Kubernetes, Vault wants to run as a non-root user and set the IPC_LOCK capability at startup to prevent its memory from being swapped to disk. In Docker you can enable this directly by adding capabilities when you launch the container, but Kubernetes has an issue with the way it handles non-root container users specified in a pod manifest, detailed in a (long-dormant) KEP: https://github.com/kubernetes/enhancements/blob/master/keps/... (tl;dr: Kubernetes starts the container process as root with the specified capabilities added, then switches it to the non-root UID, which causes the explicitly-added capabilities to be dropped).
You can work around this by rebuilding the container and setting the capability directly on the binary, but neither the upstream build of the binary nor the one in the container image comes with it set: the user is expected to set it at runtime when running the container image directly, and the systemd unit sets it when running Vault as a systemd service, so there's no need for it except to work around Kubernetes' ambient-capability issue.
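For illustration, a minimal sketch of the manifest-level approach, with hypothetical names; per the KEP behavior above, the explicitly-added capability may still be dropped for the non-root user, which is exactly why the setcap-on-the-binary rebuild is the usual workaround:

```yaml
# Hypothetical pod: request IPC_LOCK for a non-root Vault container.
# Caveat (see above): Kubernetes may drop this explicitly-added capability
# when it switches the process to the non-root UID.
apiVersion: v1
kind: Pod
metadata:
  name: vault-example          # hypothetical name
spec:
  containers:
    - name: vault
      image: hashicorp/vault   # tag intentionally omitted
      securityContext:
        runAsNonRoot: true
        runAsUser: 100         # vault user in the official image (assumption)
        capabilities:
          add: ["IPC_LOCK"]    # mlock support, to keep secrets out of swap
```

The rebuild alternative amounts to running something like `setcap cap_ipc_lock=+ep /bin/vault` in the image build, so the file capability applies regardless of how Kubernetes handles ambient capabilities.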
> It always surprised me how these conversations went. "Well we don't really recommend kubernetes so we won't support (feature)."
-
Exploring cgroups v2 and MemoryQoS With EKS and Bottlerocket
0 is not the request we've defined. And that makes sense: Memory QoS has been in alpha since Kubernetes 1.22 (August 2021) and, according to the KEP, was still in alpha as of 1.27.
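A hedged sketch of what's being inspected here, with illustrative names: with the MemoryQoS feature gate enabled on a cgroup v2 node, the kubelet is expected to propagate the container's memory request into the cgroup's memory.min, so a pod like this should produce a non-zero value instead of the 0 observed above:

```yaml
# Illustrative pod for checking MemoryQoS behavior on a cgroup v2 node.
apiVersion: v1
kind: Pod
metadata:
  name: memoryqos-demo               # illustrative name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
      resources:
        requests:
          memory: "256Mi"            # with MemoryQoS: expected in memory.min
        limits:
          memory: "512Mi"            # enforced via memory.max on cgroup v2
```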
-
Jenkins Agents On Kubernetes
Note: There's actually a Structured Authentication Config established via KEP-3331. It's in v1.28 as an option gated behind a feature flag and removes the limitation of only having one OIDC provider. I may look into doing an article on it, but for now I'll deal with the issue in a manner that should work even with somewhat older versions of Kubernetes.
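For context, a rough sketch of what such a config looks like; the schema below reflects the v1.28 alpha API as I understand it, so treat the field names as assumptions and check KEP-3331 for your version:

```yaml
# Hypothetical AuthenticationConfiguration with two OIDC/JWT issuers --
# something the single --oidc-issuer-url flag could not express.
# Alpha in v1.28 behind the StructuredAuthenticationConfiguration feature gate.
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://issuer-one.example.com    # hypothetical issuer
      audiences: ["kubernetes"]
    claimMappings:
      username:
        claim: sub
        prefix: "one:"
  - issuer:
      url: https://issuer-two.example.com    # hypothetical issuer
      audiences: ["kubernetes"]
    claimMappings:
      username:
        claim: email
        prefix: "two:"
```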
-
Isn't the release cycle becoming a bit crazy with monthly releases and deprecations?
Kubernetes supports a skew policy of n+2 between API server and kubelet. This means if your CP and DP are both on 1.20, you could upgrade your control plane twice (1.20 -> 1.21 -> 1.22) before you need to upgrade your data plane. And when it comes time to upgrade your data plane you can jump from 1.20 to 1.22 to minimize update churn. In the future, this skew will be opened to n+3 https://github.com/kubernetes/enhancements/tree/master/keps/sig-architecture/3935-oldest-node-newest-control-plane
-
Kubernetes SidecarContainers feature is merged
The KEP (Kubernetes Enhancement Proposal) is linked to in the PR [1]. From the summary:
> Sidecar containers are a new type of containers that start among the Init containers, run through the lifecycle of the Pod and don’t block pod termination. Kubelet makes a best effort to keep them alive and running while other containers are running.
[1] https://github.com/kubernetes/enhancements/tree/master/keps/...
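As the feature eventually surfaced in the API (alpha in v1.28 behind the SidecarContainers feature gate), a sidecar is expressed as an init container with restartPolicy: Always. A minimal sketch, with illustrative names and images:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar        # illustrative name
spec:
  initContainers:
    - name: log-shipper         # the sidecar: starts with the init containers,
      image: fluent/fluent-bit  # keeps running for the pod's whole lifetime,
      restartPolicy: Always     # and doesn't block pod termination
  containers:
    - name: app
      image: nginx
```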
-
What's there in K8s 1.27
This is where the new feature of mutable scheduling directives for Jobs comes into play. It lets you update a Job's scheduling directives before the Job starts; essentially, custom queue controllers can influence pod placement without having to assign pods to nodes themselves. To learn more, check out Kubernetes Enhancement Proposal 2926.
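A hedged sketch of the mechanism, with illustrative names: the Job is created suspended, a queue controller patches the pod template's scheduling fields (here, nodeSelector), and then flips suspend back to false:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: training-job     # illustrative name
spec:
  suspend: true          # while suspended (and not yet started), the
                         # template's scheduling directives stay mutable
  template:
    spec:
      nodeSelector:
        pool: default    # a queue controller may patch this before unsuspending
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo working"]
```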
-
Dependencies between Services
What you're asking for is a (vanilla) Kubernetes non-goal; others have mentioned fluxcd and other add-ons that provide primitives for dependency-aware deployments. The problem space is so large that it's unreasonable to address these concerns in Kubernetes itself; instead, make it extensible... Look at this KEP for example: https://github.com/kubernetes/enhancements/issues/753 Sidecar containers have existed, and been named as such, since WAY before that KEP's inception; defining what these things should and shouldn't do is largely arbitrary. Aka: your use-case is niche; if you don't like the behavior, use Flux or Argo, or write something yourself.
- When you learn the Sidecar Container KEP got dropped from the Kubernetes release. Again.
-
Kubernetes 1.27 will be out next week! - Learn what's new and what's deprecated - Group volume snapshots - Pod resource updates - kubectl subcommands … And more!
If you're further interested, I recommend checking out the KEP. I love how they document the decision making and all these edge cases :).
-
How can I force assign an IP to my Load Balancer ingress in “status.loadBalancer”?
See https://kubernetes.io/docs/reference/kubectl/conventions/#subresources and https://github.com/kubernetes/enhancements/issues/2590
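If I read those links right, the pattern they point at is kubectl's --subresource flag (availability depends on your kubectl version; it started out as an alpha flag). A hedged example, with a placeholder service name and IP, and with the caveat that whatever controller owns the Service may immediately reconcile the status back:

```yaml
# Shape of the merge patch applied to the Service's status subresource:
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10   # placeholder IP
```

Applied with something along the lines of `kubectl patch service my-lb --subresource=status --type=merge -p '{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.10"}]}}}'`.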
k8s-prometheus-adapter
-
Upgrading Hundreds of Kubernetes Clusters
The last one is mostly an observability stack with Prometheus, Metrics Server, and Prometheus adapter, to give excellent insight into what is happening on the cluster. You can reuse the same stack for autoscaling by repurposing all the data collected for monitoring.
-
Helm: Is there a way to access templates of a sibling subchart
I'm deploying kube-prometheus-stack along with prometheus-adapter in my monitoring stack for custom metrics.
-
Deploy prometheus-adapter with kube-prometheus-stack monitoring stack?
I would like to see if anyone has deployed prometheus-adapter and kube-prometheus-stack together for monitoring.
-
Horizontal Pod Autoscale
For us it is saturation of CPU and the thread pool. It's implemented by exposing thread-pool metrics to Prometheus and turning them into a custom metric. We're looking at scaling based on job queue length next.
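To make that concrete, here is a hedged sketch of the prometheus-adapter rule such a setup might use; all metric and label names are hypothetical:

```yaml
# prometheus-adapter config: turn a hypothetical thread-pool gauge into a
# per-pod custom metric ("threadpool_saturation") that an HPA can target.
rules:
  - seriesQuery: 'threadpool_busy_threads{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      matches: "^threadpool_busy_threads$"
      as: "threadpool_saturation"
    # Busy / max threads, averaged per pod: a 0..1 saturation ratio.
    metricsQuery: >-
      avg(threadpool_busy_threads{<<.LabelMatchers>>}
          / threadpool_max_threads{<<.LabelMatchers>>})
      by (<<.GroupBy>>)
```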
-
Steps to write own adaptor
If you are using Prometheus or kube-prometheus-stack, you will need https://github.com/kubernetes-sigs/prometheus-adapter We are using it to scale our Pods based on the number of messages in a RabbitMQ queue. There is also a walkthrough at https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/walkthrough.md
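For the queue-length case, the consuming side is an HPA targeting an external metric. A minimal sketch, assuming the adapter has been configured to expose a RabbitMQ queue-depth series under the name used below (all names here are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker            # illustrative Deployment
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: rabbitmq_queue_messages_ready   # assumed adapter metric name
          selector:
            matchLabels:
              queue: jobs                       # hypothetical queue label
        target:
          type: AverageValue
          averageValue: "100"   # aim for ~100 ready messages per replica
```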
-
Monitoring Your Spacelift Account via Prometheus
A prometheus-adapter installation.
-
Advanced Features of Kubernetes' Horizontal Pod Autoscaler
Prometheus adapter to get custom/external metrics from Prometheus instance into Kubernetes API.
-
Pod spread by percentage
I never tested this, but you have the custom metrics API; if the percentage value is available, it should work from my point of view. Check this here: https://github.com/kubernetes-sigs/prometheus-adapter
-
Practical Introduction to Kubernetes Autoscaling Tools with Linode Kubernetes Engine
CPU and memory might not be the right metrics for your application to make scaling decisions. In such cases, you can use HPA (or VPA) with custom metrics as an alternative. To use custom metrics for autoscaling, you can use a custom metrics adapter instead of the Kubernetes Metrics Server. Popular custom metrics adapters are the Prometheus adapter and Kubernetes Event-Driven Autoscaler (KEDA).
- How to scale containers that are unrelated to physical traits like CPU or Memory?
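On that last question: KEDA is the usual answer when the scaling signal isn't CPU or memory. A minimal ScaledObject sketch; the target name, Prometheus address, and query are all assumptions:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler       # illustrative name
spec:
  scaleTargetRef:
    name: worker            # Deployment to scale (assumption)
  minReplicaCount: 0        # KEDA can scale to zero, unlike a bare HPA
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090     # assumed address
        query: sum(rate(http_requests_total{app="worker"}[2m]))  # hypothetical query
        threshold: "50"     # target value per replica
```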
What are some alternatives?
kubeconform - A FAST Kubernetes manifests validator, with support for Custom Resources!
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
spark-operator - Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
metrics-server - Scalable and efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
kubernetes-json-schema - Schemas for every version of every object in every version of Kubernetes
k9s - 🐶 Kubernetes CLI To Manage Your Clusters In Style!
klipper-lb - Embedded service load balancer in Klipper
cluster-proportional-autoscaler - Kubernetes Cluster Proportional Autoscaler Container
Hey - HTTP load generator, ApacheBench (ab) replacement
spring-auto-scaling-k8
connaisseur - An admission controller that integrates Container Image Signature Verification into a Kubernetes cluster
prometheus - The Prometheus monitoring system and time series database.