autoscaler
conduit
| | autoscaler | conduit |
|---|---|---|
| Mentions | 89 | 33 |
| Stars | 7,617 | 10,330 |
| Growth | 1.6% | 1.1% |
| Activity | 9.5 | 9.9 |
| Last commit | about 9 hours ago | 5 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
autoscaler
-
Upgrading Hundreds of Kubernetes Clusters
We use Cluster Autoscaler to automatically adjust the number of nodes (cluster size) based on your actual usage to ensure efficiency. Additionally, we deploy Vertical and Horizontal Pod Autoscalers to automatically scale your applications' resources as their needs change.
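As a rough illustration of the pod-level half of that setup, here is a minimal HorizontalPodAutoscaler sketch; the Deployment name `web` and the 70% CPU target are assumptions, not values from the article.

```yaml
# Hypothetical HPA: keeps a Deployment named "web" between 2 and 10 replicas,
# targeting roughly 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```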
-
Not Everything Is Google's Fault (Just Most Things)
> * Hetzner: cheap, good service, the finest pets in the world, no cattle
You can absolutely do cattle with Hetzner. They support imaging and immutable infrastructure. They don't have a native auto scaling equivalent, but if you're using Kubernetes, they have a cluster autoscaler: https://github.com/kubernetes/autoscaler/blob/master/cluster...
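A rough sketch of what running cluster-autoscaler against Hetzner can look like, assuming the Hetzner cloudprovider's flag and token conventions; the exact `--nodes` node-pool format here is an assumption, so check the linked README for the current syntax.

```yaml
# Fragment of a cluster-autoscaler Deployment's pod spec (illustrative only).
containers:
- name: cluster-autoscaler
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
  command:
  - ./cluster-autoscaler
  - --cloud-provider=hetzner
  # min:max:server-type:location:pool-name (assumed format; verify against the README)
  - --nodes=1:5:CPX31:FSN1:pool1
  env:
  - name: HCLOUD_TOKEN              # Hetzner Cloud API token
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: token
```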
-
Kubernetes (K8s) Autoscaler — a detailed look at the design and implementation of VPA
Here we take the VPA as a starting point and analyze its design and implementation principles within Autoscaler. The source code for this article is based on Autoscaler HEAD fbe25e1.
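For orientation, the object the recommender ultimately serves looks like this; a minimal sketch with an assumed target Deployment. The recommender computes the resource targets, while the updater and admission controller apply them according to `updateMode`.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Auto"   # "Off" records recommendations without applying them
```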
-
Scaling with Karpenter and Empty Pods (a.k.a. Overprovisioning)
-
Reducing Cloud Costs on Kubernetes Dev Envs
Autoscaling over EKS can be accomplished using either the cluster-autoscaler project or Karpenter. If you want to use Spot instances, consider using Karpenter, as it has better integrations with AWS for optimizing spot pricing and availability, minimizing interruptions, and falling back to on-demand nodes if no spot instances are available.
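A hedged sketch of the spot-with-fallback idea in Karpenter terms, using the v1beta1 NodePool schema; the names, limits, and node class are placeholders, and the current API may differ.

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
      # Allow both capacity types; Karpenter prefers Spot and can fall back
      # to on-demand when no Spot capacity is available.
      - key: karpenter.sh/capacity-type
        operator: In
        values: ["spot", "on-demand"]
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "100"   # cap total provisioned CPU for this pool
```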
-
☸️ Managed Kubernetes: Our dev is on AWS, our prod is on OVH
Autoscaling is already provided on OVH, but we don't use it for now. The Autoscaler has to be installed manually on the AWS/EKS cluster.
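For reference, the manual install on EKS mostly comes down to pointing cluster-autoscaler at the right Auto Scaling groups. A sketch of the flags commonly set (the cluster name `my-cluster` is a placeholder; the ASGs must carry the matching tags for auto-discovery to find them):

```yaml
# Container command fragment from a typical cluster-autoscaler Deployment on EKS.
command:
- ./cluster-autoscaler
- --cloud-provider=aws
- --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster
- --balance-similar-node-groups
- --skip-nodes-with-system-pods=false
```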
-
relevant way of scaling pods
do you mean this: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/recommender/README.md
-
Kubernetes Cluster Maintenance
Read more about this scaler in detail here!
-
Anyone running Windows nodes in your clusters?
We have a default node group of Linux hosts, but there's a secondary nodegroup of Windows hosts that is typically scaled down to 0. When a team's build runs, a pod is scheduled based on their definition. Cluster-autoscaler will check the nodeSelector and automatically spin up a node from that nodegroup if necessary.
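A minimal sketch of that pattern with placeholder names: the `nodeSelector` keeps the build pod off the Linux hosts, and a pending pod like this is what makes cluster-autoscaler scale the Windows node group up from zero.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: team-build
spec:
  nodeSelector:
    kubernetes.io/os: windows   # only schedulable on Windows nodes
  containers:
  - name: build
    image: mcr.microsoft.com/windows/servercore:ltsc2022
    command: ["cmd", "/c", "echo running build..."]
```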
-
How to make sure the Kubernetes autoscaler does not delete nodes that run a specific pod
I am running a Kubernetes cluster (an AWS EKS one) with the Autoscaler pod, so that the cluster autoscales according to the resource requests within the cluster.
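One common approach (a sketch, not necessarily the answer accepted in that thread) is to annotate the pod so cluster-autoscaler will not scale down the node it is running on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-job
  annotations:
    # Tells cluster-autoscaler this pod must not be evicted for scale-down.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "3600"]
```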
conduit
-
Optimal JMX Exposure Strategy for Kubernetes Multi-Node Architecture
Leverage a service mesh like Istio or Linkerd to manage communication between microservices within the Kubernetes cluster. These service meshes can be configured to intercept JMX traffic and enforce access control policies.
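One hedged illustration of that idea with Istio (the workload labels, namespace, service account, and JMX port are all assumptions): an AuthorizationPolicy that admits only a monitoring service account on the JMX port. With Istio's ALLOW semantics, other requests to the selected workload are denied unless another policy permits them.

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-jmx-from-monitoring
  namespace: apps
spec:
  selector:
    matchLabels:
      app: my-java-service       # hypothetical JMX-exposing workload
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/monitoring/sa/jmx-scraper"]
    to:
    - operation:
        ports: ["9010"]          # assumed JMX port
```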
-
Linkerd no longer shipping open source, stable releases
Looks like the CNCF waved them through Graduation anyway; let's look at the policies from July 28, 2021, when the project was deemed "Graduated".
All maintainers of the Linkerd project had @buoyant.io email addresses. [0] They do list 4 other members of a "Steering Committee", but Linkerd's GOVERNANCE.md gives all of the power to maintainers: [1]
> Ideally, all project decisions are resolved by maintainer consensus. If this is not possible, maintainers may call a vote. The voting process is a simple majority in which each maintainer receives one vote.
And CNCF Graduation policy says a project must "Have committers from at least two organizations" [2]. So it appears that the CNCF accepted the "Steering Committee" as an acceptable 2nd committer, even though the Governance policy still gave the maintainers all of the power.
I would like to know whether the Steering Committee voted to remove stable releases from an unbiased position, acting in the best interest of the project, or whether they were simply ignored or not even advised of the decision.
I'm all for Buoyant doing what they need to do to make money and survive as a company. But at that point my opinion is that they should withdraw the project from the CNCF and stop pretending that the foundation has any influence on the project's governance.
[0] https://github.com/linkerd/linkerd2/blob/489ca1e3189b6a5289d...
-
Ultimate EKS Baseline Cluster: Part 1 - Provision EKS
From here, we can explore other developments and tutorials on Kubernetes, such as o11y or observability (PLG, ELK, ELF, TICK, Jaeger, Pyroscope), service mesh (Linkerd, Istio, NSM, Consul Connect, Cilium), and progressive delivery (ArgoCD, FluxCD, Spinnaker).
-
Istio moved to CNCF Graduation stage
https://linkerd.io/ is a much lighter-weight alternative, but you do still get some of the fancy things like mTLS without needing any manual configuration. Install it, label your namespaces, and let it do its thing!
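Strictly speaking, Linkerd's automatic proxy injection is driven by an annotation rather than a label; a minimal sketch, with the namespace name assumed:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    linkerd.io/inject: enabled   # new pods in this namespace get the proxy sidecar
```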
-
Custom Authorization
Would it be possible to create a custom extension with code that authorizes traffic based on my custom access token?
-
API release strategies with API Gateway
Open source API Gateway (Apache APISIX and Traefik) and Service Mesh (Istio and Linkerd) solutions are capable of traffic splitting and can implement patterns like Canary Release and Blue-Green deployment. With canary testing, you can critically examine a new release of an API by exposing it to only a small portion of your user base. We will cover canary releases in the next section.
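In a mesh like Linkerd, the canary split itself can be expressed with an SMI TrafficSplit resource; the apiVersion and service names below are assumptions, and gateways such as APISIX or Traefik express the same idea in their own route configuration.

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: web-canary
spec:
  service: web          # apex service that clients call
  backends:
  - service: web-stable
    weight: 90          # ~90% of requests
  - service: web-canary
    weight: 10          # ~10% go to the new release
```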
-
GKE with Consul Service Mesh
I have experimented with other service meshes and I was able to get up to speed quickly: Linkerd = 1 day, Istio = 3 days, NGINX Service Mesh = 5 days, but Consul Connect service mesh took at least 11 days to get off the ground. This is by far the most complex solution available.
-
How is a service mesh implemented on low level?
https://github.com/linkerd/linkerd2 (random example)
-
Kubernetes operator written in Rust
-
What is a service mesh?
Out of the number of service mesh solutions that exist, the most popular open source ones are: Linkerd, Istio, and Consul. Here at Koyeb, we are using Kuma.
What are some alternatives?
karpenter-provider-aws - Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
Zone of Control - ⬡ Zone of Control is a hexagonal turn-based strategy game written in Rust. [DISCONTINUED]
cluster-proportional-autoscaler - Kubernetes Cluster Proportional Autoscaler Container
Parallel
aws-ebs-csi-driver - CSI driver for Amazon EBS https://aws.amazon.com/ebs/
Fractalide - Reusable Reproducible Composable Software
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
descheduler - Descheduler for Kubernetes
istio - Connect, secure, control, and observe services.
k3s-aws-terraform-cluster - Deploy a highly available K3s cluster on Amazon AWS
traefik - The Cloud Native Application Proxy