autoscaler
aws-ebs-csi-driver
| | autoscaler | aws-ebs-csi-driver |
|---|---|---|
| Mentions | 89 | 13 |
| Stars | 7,617 | 915 |
| Growth | 1.6% | 2.5% |
| Activity | 9.5 | 9.4 |
| Latest commit | about 16 hours ago | about 9 hours ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
autoscaler
-
Upgrading Hundreds of Kubernetes Clusters
We use Cluster Autoscaler to automatically adjust the number of nodes (cluster size) based on actual usage to ensure efficiency. Additionally, we deploy Vertical and Horizontal Pod Autoscalers to automatically scale your applications' resources as their needs change.
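The Horizontal Pod Autoscaler mentioned above is configured per workload. A minimal sketch, assuming a Deployment named `web` (the name and the 70% CPU target are illustrative, not taken from the excerpt):

```yaml
# Hypothetical HPA: keeps a "web" Deployment between 2 and 10 replicas,
# scaling on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When the HPA adds replicas that no longer fit on existing nodes, Cluster Autoscaler sees the unschedulable pods and adds nodes, which is how the two autoscalers compose.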
-
Not Everything Is Google's Fault (Just Most Things)
> * Hetzner: cheap, good service, the finest pets in the world, no cattle
You can absolutely do cattle with Hetzner. They support imaging and immutable infrastructure. They don't have a native autoscaling equivalent, but if you're using Kubernetes, they have a cluster autoscaler: https://github.com/kubernetes/autoscaler/blob/master/cluster...
-
Kubernetes (K8s) Autoscaler — a detailed look at the design and implementation of VPA
Here we take the VPA as a starting point and analyze its design and implementation principles within Autoscaler. The source code for this article is based on Autoscaler HEAD fbe25e1.
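For context on what the VPA analyzed above actually consumes, a minimal manifest looks roughly like this (the Deployment name `web` is illustrative):

```yaml
# Hypothetical VPA: the recommender computes resource targets for the
# pods of the "web" Deployment; updateMode "Auto" lets the updater
# evict pods so they restart with the recommended requests.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Auto"
```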
- Scaling with Karpenter and Empty Pods (a.k.a. Overprovisioning)
-
Reducing Cloud Costs on Kubernetes Dev Envs
Autoscaling over EKS can be accomplished using either the cluster-autoscaler project or Karpenter. If you want to use Spot instances, consider using Karpenter, as it has better integrations with AWS for optimizing spot pricing and availability, minimizing interruptions, and falling back to on-demand nodes if no spot instances are available.
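The Spot-first behavior described above is expressed in Karpenter through a NodePool's requirements. A sketch, assuming a recent Karpenter release and an `EC2NodeClass` named `default` (exact API versions and field names vary between Karpenter releases, so treat this as illustrative):

```yaml
# Hypothetical Karpenter NodePool: allowing both capacity types lets
# Karpenter prefer Spot and fall back to On-Demand when none is available.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
```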
-
☸️ Managed Kubernetes : Our dev is on AWS, our prod is on OVH
Autoscaling is already provided on OVH, but we don't use it for now. Autoscaler has to be manually installed on the AWS/EKS cluster.
-
relevant way of scaling pods
do you mean this: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/recommender/README.md
-
Kubernetes Cluster Maintenance
Read more about this scaler in detail here!
-
Anyone running Windows nodes in your clusters?
We have a default node group of Linux hosts, but there's a secondary node group of Windows hosts that is typically scaled down to 0. When a team's build runs, a pod is scheduled based on their definition. Cluster-autoscaler will check the nodeSelector and automatically spin up a node from that node group if necessary.
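The nodeSelector that triggers the scale-from-zero described above might look like this sketch (image, names, and the taint/toleration pair are assumptions, not details from the comment):

```yaml
# Hypothetical build pod: the nodeSelector makes it unschedulable on
# Linux nodes, so Cluster Autoscaler scales the Windows node group up.
apiVersion: v1
kind: Pod
metadata:
  name: windows-build
spec:
  nodeSelector:
    kubernetes.io/os: windows
  tolerations:            # assuming the Windows node group is tainted
    - key: "os"
      value: "windows"
      effect: NoSchedule
  containers:
    - name: build
      image: mcr.microsoft.com/windows/servercore:ltsc2022
      command: ["cmd", "/c", "echo build"]
```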
-
How to make sure the Kubernetes autoscaler does not delete nodes running a specific pod
I am running a Kubernetes cluster (an AWS EKS one) with the Autoscaler pod, so that the cluster autoscales according to resource requests within the cluster.
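One answer to the question above is Cluster Autoscaler's `safe-to-evict` annotation, which marks a pod as blocking scale-down of its node. A minimal sketch (pod name and container are illustrative):

```yaml
# Hypothetical pod: the annotation tells Cluster Autoscaler not to
# remove the node this pod is running on during scale-down.
apiVersion: v1
kind: Pod
metadata:
  name: important-job
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
    - name: work
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
```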
aws-ebs-csi-driver
-
AWS EBS CSI driver
The AWS EBS CSI Driver relies on IAM permissions to communicate with Amazon EBS for volume management on behalf of the user. The example policy can be used to define the required permissions. Additionally, AWS provides a managed policy at ARN arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy.
-
PV/PVC Not working after k8s upgrade to 1.25
It looks like the driver's permissions to invoke the EBS APIs were revoked and/or changed. When you install the EBS CSI add-on, you can either inherit permissions from the worker node or choose an IRSA role (preferred). If you use IRSA, the service account that the EBS CSI driver uses should have an annotation referencing the ARN of the IAM role you selected, e.g. eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role. You can see an example of the IAM policy the driver needs here: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/fb6d456558fb291b13f855454c1525c7acaf7046/docs/example-iam-policy.json.
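The IRSA annotation described above lives on the driver's service account. A sketch, using the example ARN from the comment (the service account name `ebs-csi-controller-sa` is an assumption based on the driver's usual defaults):

```yaml
# Hypothetical service account for the EBS CSI controller: the
# annotation binds it to an IAM role via IRSA.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role
```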
- Confused about kubernetes storage
-
Unable to Access AWS EKS Cluster after Creating It Using Terraform
I know it's possible to write Terraform code that exhibits that issue, but that's not the case in my experience. I'm using Helm to deploy AWS's EBS CSI driver in the above setup. As you mentioned, if the EKS cluster were destroyed before the Helm provider attempted to use its API to destroy the Helm deployment, it would cause problems. But I don't run into that issue. It's not luck of timing, either: I also have a CI process that deploys all of this, tests it, and deletes it all, and it has succeeded hundreds of times.
-
Introduction to Day 2 Kubernetes
Any Kubernetes cluster requires persistent storage, whether organizations begin with an on-premise Kubernetes cluster and migrate to the public cloud, or provision a Kubernetes cluster using a managed service in the cloud. Kubernetes supports multiple types of persistent storage: object storage (such as Azure Blob Storage or Google Cloud Storage), block storage (such as Amazon EBS, Azure Disk, or Google Persistent Disk), and file-sharing storage (such as Amazon EFS, Azure Files, or Google Cloud Filestore). The fact that each cloud provider has its own implementation of persistent storage adds to the complexity of storage management, not to mention scenarios where an organization provisions Kubernetes clusters across several cloud providers. Managing Kubernetes clusters successfully over the long term, and knowing which storage type to use for each scenario, requires storage expertise.
-
Dealing with EC2 Instance volume limits in EKS
Lots of info in this issue: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1163
-
Help me understand real use cases of k8s, I can’t wrap my head around it
aws-ebs-csi-driver
- How is a PersistentVolumeClaim consistent?
-
EKS PVC <-> EBS volume associations after cluster recreation
Hello, we are running an EKS cluster (1.20) with aws-ebs-csi-driver (1.4.0). After recreating our whole cluster we can observe that the EBS volumes from our PVCs still exist but the "mapping" to the PVCs is gone.
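For the lost PVC-to-volume mapping described above, one common recovery pattern is to statically provision a PV that points at the surviving EBS volume and pre-bind it to a PVC. A sketch (names, size, and the volume ID are placeholders, not values from the post):

```yaml
# Hypothetical static PV: volumeHandle points at an existing EBS volume;
# claimRef pre-binds it to a specific PVC so the data is reattached.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: recovered-data
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""        # must match the PVC for static binding
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # placeholder EBS volume ID
    fsType: ext4
  claimRef:
    name: data-pvc
    namespace: default
```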
-
A PVC Operator which Uploads Data to S3 on Delete and Downloads on Create
OP could probably just layer their own CSI driver on top of an existing one (a la aws-ebs-csi-driver), but there are still several problems:
What are some alternatives?
karpenter-provider-aws - Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
ceph-csi - CSI driver for Ceph
cluster-proportional-autoscaler - Kubernetes Cluster Proportional Autoscaler Container
aws-efs-csi-driver - CSI Driver for Amazon EFS https://aws.amazon.com/efs/
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers
descheduler - Descheduler for Kubernetes
topolvm - Capacity-aware CSI plugin for Kubernetes
k3s-aws-terraform-cluster - Deploy a highly available K3s cluster on Amazon AWS
aws-node-termination-handler - Gracefully handle EC2 instance shutdown within Kubernetes
aws-iam-authenticator - A tool to use AWS IAM credentials to authenticate to a Kubernetes cluster