| | karpenter-provider-aws | rancher |
|---|---|---|
| Mentions | 47 | 89 |
| Stars | 5,902 | 22,559 |
| Stars growth | 3.1% | 0.6% |
| Activity | 9.9 | 9.9 |
| Latest commit | 3 days ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
karpenter-provider-aws
- Karpenter
- Stress testing Karpenter with EKS and Qovery
If you’re not familiar with Karpenter — watch my quick intro. But in a nutshell, Karpenter is a better node autoscaler for Kubernetes (say goodbye to wasted compute resources). It is open-source and built by the AWS team. Qovery is an Internal Developer Platform (I’m a co-founder) that we’ll use to spin up our EKS cluster with Karpenter.
- Tortoise: Shell-Shockingly-Good Kubernetes Autoscaling
- Five tools to add to your K8s cluster
Karpenter
- Architecting for Resilience: Crafting Opinionated EKS Clusters with Karpenter & Cilium Cluster Mesh — Part 1
Here are a few reference links about the previous services and tools: What is Amazon EKS? Cluster Mesh Karpenter
- Scaling with Karpenter and Empty Pods (a.k.a. Overprovisioning)
- Reducing Cloud Costs on Kubernetes Dev Envs
Autoscaling over EKS can be accomplished using either the cluster-autoscaler project or Karpenter. If you want to use Spot instances, consider using Karpenter, as it has better integrations with AWS for optimizing spot pricing and availability, minimizing interruptions, and falling back to on-demand nodes if no spot instances are available.
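The spot-with-on-demand-fallback behavior described above can be sketched as a Karpenter NodePool. This is a minimal illustration, not a complete manifest: the NodePool name and the referenced EC2NodeClass name are assumptions, and the rest of the provisioning config is omitted.

```yaml
# Minimal sketch: a Karpenter NodePool (v1beta1 API) that allows both Spot
# and On-Demand capacity. With both values listed, Karpenter prefers the
# cheaper Spot capacity and falls back to On-Demand when Spot is unavailable.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: spot-first          # illustrative name
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:
        name: default        # assumes an EC2NodeClass named "default" exists
```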
- Help required
Kubernetes has its own learning curve, but when tools like Karpenter exist it's kinda hard to beat for "auto-scaled compute" that is vendor agnostic. We leverage Karpenter for burst in our vSphere environment as well as our EC2 environment. Karpenter is invoking roughly the same Terraform code in both cases, just using different modules for the particular virtualization. Say we want to go to Azure and GCP -- we add an Azure and GCP module to the same Terraform codebase, and not much else needs to change from the "scale up / scale down" perspective.
- Workload Operator. What do you think?
Also https://github.com/aws/karpenter/issues/331
- Running Airflow task-intensive DAGs on Fargate
Why don't you stick to the KubernetesPodOperator though? I fail to see a benefit in using the ECS operator considering you're already running Airflow in EKS. You can look into something like karpenter to manage your nodes.
rancher
- OpenTF Announces Fork of Terraform
Did something happen to the Apache 2-licensed Rancher? https://github.com/rancher/rancher/blob/v2.7.5/LICENSE RKE2 is similarly Apache 2: https://github.com/rancher/rke2/blob/v1.26.7%2Brke2r1/LICENS...
- Kubernetes / Rancher 2, mongo-replicaset with Local Storage Volume deployment
I followed the four steps (A-D) below, but the first pod deployment never completes. What's wrong with it? Logs and result screens are at the end. Detailed configuration can be found here.
- Trouble with RKE2 HA Setup: Part 2
- Critical vulnerability (CVE-2023-22651) in Rancher 2.7.2 - Update to 2.7.3
CVE-2023-22651 is rated 9.9/10: https://github.com/rancher/rancher/security/advisories/GHSA-6m9f-pj6w-w87g
- What's your take if DevOps colleague always got new initiative / idea?
Depends. When I came into my last company I immediately noticed the lack of reproducible environments. Brought this up a few times and was met with some resistance because "we didn't have the capacity"... Until prod went down and it took us 23 hours to bring it back up due to spaghetti terraform.
- Questions about Rancher Launched/imported AKS
For the latest releases of Rancher: https://github.com/rancher/rancher/releases

When is Rancher 2.7.1 going to be released? The Rancher support matrix for 2.7.1 shows k8s v1.24.6 as the highest supported version and Azure will drop AKS v1.24 in a few months... Should this be a concern for us? What could happen if we create our cluster with Rancher for an unsupported K8s version, 1.25 for example?
- Rancher 2.7.2 just got released, including support for 1.25. I have, however, tested running unsupported versions before; unless there are major deprecations in the Kubernetes API, it is fine in my experience.

If we move to AKS imported clusters, and we add node pools and upgrade the cluster, will those changes be reflected in the Rancher platform?
- Yep!

If we face some issues by running an unsupported K8s version on Rancher Launched K8s clusters, is it possible to remove it from Rancher, do the stuff we need, and then import it back into the platform?
- Yes, but be careful and do testing before doing it in prod. Off the top of my mind: remove the cluster from Rancher (if imported); if Rancher created it, you might want to revoke Rancher's SA key for the cluster first (so it can't remove it). Delete the cattle-system namespace, and any other cattle-* namespaces you don't want to keep. Then do your thing.

It looks like AKS is faster than Rancher regarding supported Kubernetes versions... We would like to know if Rancher will always be on track with AKS regarding the removal of K8s version support and new versions.
- In my experience, yes. (Been using Rancher on all three clouds for four years now.)

What exactly are the big differences between imported AKS and Rancher-launched AKS? What should we look at, and what issues can we face when using one or the other?
- The main difference is that Rancher will not be able to upgrade an imported cluster for you. You will have to do that yourself.
- rancher2_bootstrap.admin resource fails after Kubernetes v1.23.15
```hcl
variable "rancher" {
  type = object({
    namespace = string
    version   = string
    branch    = string
    chart_set = list(object({
      name  = string
      value = string
    }))
  })
  default = {
    namespace = "cattle-system"
    # There is a bug with destroying the cloud credentials in versions 2.6.9
    # through 2.7.1; it will be fixed in the next release, 2.7.2.
    # See https://github.com/rancher/rancher/issues/39300
    version = "2.7.0"
    branch  = "stable"
    chart_set = [
      {
        name  = "replicas"
        value = "3" # quoted to match the declared string type
      },
      {
        name  = "ingress.ingressClassName"
        value = "nginx-external"
      },
      {
        name  = "ingress.tls.source"
        value = "rancher"
      },
      # There is a bug with the uninstallation of Rancher due to the missing
      # priorityClassName of rancher-webhook; priorityClassName needs to be set.
      # See https://github.com/rancher/rancher/issues/40935
      {
        name  = "priorityClassName"
        value = "system-node-critical"
      }
    ]
  }
  description = "Rancher Helm chart properties."
}
```
- Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow
When I searched DuckDuckGo instead, the 12th link actually had the real answer. It's in this issue on Rancher's GitHub. It turns out the Rancher admin needs to be in all of the Keycloak groups they want to show up in the auto-populated picklist in Rancher; being a Keycloak admin, and even creating the groups, isn't good enough. Frustratingly, the "caveat" note the Rancher maintainer points to appears only in the guide to setting up Keycloak for SAML, but apparently it also applies to OIDC.
- How to enable TLS 1.3 protocol
Explicitly set TLS 1.3 in Rancher, though it could be a bug in Rancher: https://github.com/rancher/rancher/issues/35654
- Rancher deployment, hanging on login and setup pages
Thanks. Yeah looks like this might work: https://github.com/rancher/rancher/releases/tag/v2.7.2-rc3
What are some alternatives?
keda - KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes
podman - Podman: A tool for managing OCI containers and pods.
autoscaler - Autoscaling components for Kubernetes
lens - Lens - The way the world runs Kubernetes
bedrock - Automation for Production Kubernetes Clusters with a GitOps Workflow
microk8s - MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
karpenterwebsite
kubesphere - The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️
dapr - Dapr is a portable, event-driven, runtime for building distributed applications across cloud and edge.
cluster-api - Home for Cluster API, a subproject of sig-cluster-lifecycle
camel-k - Apache Camel K is a lightweight integration platform, born on Kubernetes, with serverless superpowers
kubespray - Deploy a Production Ready Kubernetes Cluster