| | aws-ebs-csi-driver | external-dns |
|---|---|---|
| Mentions | 13 | 79 |
| Stars | 920 | 7,266 |
| Growth | 1.7% | 0.8% |
| Activity | 9.4 | 9.6 |
| Latest commit | 7 days ago | 1 day ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aws-ebs-csi-driver
- AWS EBS CSI driver
The AWS EBS CSI Driver relies on IAM permissions to communicate with Amazon EBS for volume management on behalf of the user. The example policy can be used to define the required permissions. Additionally, AWS provides a managed policy at ARN arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy.
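As a rough sketch of attaching that managed policy, assuming eksctl and placeholder cluster details (the cluster name, region, and service account name below are illustrative, not from the original post):

```yaml
# Hedged sketch: eksctl ClusterConfig excerpt creating an IRSA role for the
# EBS CSI controller with AWS's managed policy attached.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
iam:
  withOIDC: true          # IRSA requires the cluster's OIDC provider
  serviceAccounts:
    - metadata:
        name: ebs-csi-controller-sa   # assumed controller SA name
        namespace: kube-system
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
```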
- PV/PVC Not working after k8s upgrade to 1.25
It looks like the driver's permissions to invoke the EBS APIs were revoked and/or changed. When you install the EBS CSI addon, you can either inherit permissions from the worker node or choose an IRSA role (preferred). If you use IRSA, the service account that the EBS CSI driver uses should have an annotation that references the ARN of the IAM role you selected, e.g. eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role. You can see an example of the IAM policy the driver needs here: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/fb6d456558fb291b13f855454c1525c7acaf7046/docs/example-iam-policy.json.
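A minimal sketch of what that annotation looks like on the service account (the role ARN is the example from the comment above; the service account name assumes the driver's usual controller SA):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa   # assumed default controller SA name
  namespace: kube-system
  annotations:
    # IRSA: pods using this service account assume this IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role
```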
- Confused about kubernetes storage
- Unable to Access AWS EKS Cluster after creating it using Terraform
I know it's possible to write Terraform code that exhibits that issue, but that hasn't been the case in my experience. I'm using Helm to deploy AWS's EBS CSI driver in the above setup. As you mentioned, if the EKS cluster were destroyed before the Helm provider attempted to use its API to destroy the Helm deployment, it would cause problems. But I don't run into that issue. It's not down to lucky timing, either: I also have a CI process that deploys all of this, tests it, and deletes it all, and it has succeeded hundreds of times.
- Introduction to Day 2 Kubernetes
Any Kubernetes cluster requires persistent storage, whether organizations begin with an on-premises Kubernetes cluster and migrate to the public cloud, or provision a Kubernetes cluster using a managed cloud service. Kubernetes supports multiple types of persistent storage: object storage (such as Azure Blob Storage or Google Cloud Storage), block storage (such as Amazon EBS, Azure Disk, or Google Persistent Disk), and file-sharing storage (such as Amazon EFS, Azure Files, or Google Cloud Filestore). The fact that each cloud provider has its own implementation of persistent storage adds to the complexity of storage management, not to mention scenarios where an organization provisions Kubernetes clusters across several cloud providers. Managing Kubernetes clusters successfully over the long term, and knowing which storage type to use in each scenario, requires storage expertise.
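For the EBS case specifically, wiring a cluster to block storage comes down to a StorageClass that points at the CSI driver; a minimal sketch (the class name and gp3 volume type are illustrative choices):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3                            # illustrative name
provisioner: ebs.csi.aws.com               # the AWS EBS CSI driver
parameters:
  type: gp3                                # EBS volume type
volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod is scheduled
```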
- Dealing with EC2 Instance volume limits in EKS
Lots of info in this issue: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1163
- Help me understand real use cases of k8s, I can’t wrap my head around it
aws-ebs-csi-driver
- How is a PersistentVolumeClaim consistent?
- EKS PVC <-> EBS volume associations after cluster recreation
Hello, we are running an EKS cluster (1.20) with aws-ebs-csi-driver (1.4.0). After recreating our whole cluster, we observe that the EBS volumes from our PVCs still exist, but the "mapping" to the PVCs is gone.
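In that situation the usual recovery is static provisioning: hand-create a PersistentVolume that points at the surviving EBS volume ID, then bind a claim to it. A hedged sketch, with placeholder names, size, and volume ID:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: recovered-data                    # placeholder
spec:
  capacity:
    storage: 50Gi                         # must match the EBS volume's size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # don't delete the EBS volume on release
  storageClassName: ""                    # empty class keeps dynamic provisioning out of the way
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0abc123de456f7890   # placeholder: the surviving EBS volume ID
```

A PVC with storageClassName: "" and a matching size will then bind to this volume.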
- A PVC Operator which Uploads Data to S3 on Delete and Downloads on Create
OP could probably just layer their own CSI driver on top of an existing one (à la aws-ebs-csi-driver), but there are still several problems:
external-dns
- Upgrading Hundreds of Kubernetes Clusters
The second one is a combination of tools: External DNS, cert-manager, and NGINX ingress. Using these as a stack, you can quickly deploy an application and make it available at a DNS name with TLS, with little effort beyond a few simple annotations. When I first discovered External DNS, I was amazed at its quality.
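A hedged sketch of what "simple annotations" means in practice; the hostname, issuer name, and backend service below are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    # external-dns creates the DNS record for this hostname
    external-dns.alpha.kubernetes.io/hostname: demo.example.com
    # cert-manager issues the TLS certificate via this (assumed) ClusterIssuer
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - demo.example.com
      secretName: demo-tls        # cert-manager stores the issued cert here
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo        # placeholder backend service
                port:
                  number: 80
```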
- Kubernetes External DNS provider for Hetzner
One of the reasons why I chose Hetzner was that it WAS supported by the ExternalDNS project. I didn't quite understand why the Hetzner provider was pulled, but I saw that an attempt to re-add it was refused, on the grounds that the upcoming webhook architecture would allow providers to be maintained more easily.
- Istio Multi-Cluster Setup
Write a custom controller for the external DNS controller, or set up some form of ArgoCD app/ApplicationSet templating.
- Looking for an ExternalDNS alternative for a non-k8s environment
So I am looking for an automated way for new routers registered in Traefik to also have the corresponding DNS entry added to my Pi-hole instance, similar to external-dns; but obviously that is exclusive to Ingress on k8s environments. My current setup is Traefik in a container on Unraid.
- Is a Load Balancer necessary for an HA Cluster?
You technically don’t need to run a load balancer or have a virtual IP for your control plane. If you control your DNS, you can add A records pointing to the IPs of all your control plane nodes. That won’t load-balance your traffic, but combined with something like External DNS it gives you HA for the control plane.
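One way to express that with External DNS is its DNSEndpoint CRD, assuming external-dns is running with the crd source enabled; a sketch with a placeholder name and IPs:

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: control-plane
spec:
  endpoints:
    - dnsName: api.example.com   # placeholder API endpoint name
      recordType: A
      recordTTL: 60              # low TTL so a dead node drops out quickly
      targets:                   # one A record per control plane node
        - 10.0.0.10
        - 10.0.0.11
        - 10.0.0.12
```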
- How can I assign an EIP to a Kubernetes deployment?
I normally deploy external-dns, which automatically updates DNS with the ingress controller's external IP address.
- Registering DNS with Windows Domain DNS
Background: having a look around, I can see this: https://github.com/kubernetes-sigs/external-dns
- Cluster nodes on different networks
3) Use Kubernetes External-DNS. I've never used this, but I'm assuming it can update DNS for each pod/app to point to the correct node (it would need to update my homelab DNS running on Windows Server).
- I am stuck on learning how to provision K8s in AWS. Security groups? ALB? ACM? R53?
So here’s the solution I have taken for our current stack. EKS and its dependencies are created through Terraform using the eks module, which also provisions a Route53 subdomain and a wildcard cert. Once we have that created, I install this deployment into the cluster via the helm module: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/. This allows me to use Kubernetes resources (load balancers or Ingress objects), and it handles all the provisioning of load balancers and security groups for me, based on my application YAML and annotations. We also use https://github.com/kubernetes-sigs/external-dns to manage all of our application-specific host names through annotations. So, generally put: Terraform manages our Kubernetes clusters, and Kubernetes manages the deployment of anything the application needs, including volumes, load balancers, and hostnames, through Kubernetes system deployments.
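A hedged sketch of the annotation-driven half of that flow (the hostname and service are placeholders): an Ingress handled by the AWS Load Balancer Controller provisions the ALB and security groups, while external-dns creates the Route53 record:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public ALB
    alb.ingress.kubernetes.io/target-type: ip           # route straight to pod IPs
    # external-dns creates the Route53 record for this hostname
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app       # placeholder backend service
                port:
                  number: 80
```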
- How to expose services/apps to my home network with custom DNS names
MetalLB for your load balancer (layer 2 mode will do); NGINX ingress will be spot on for internal home apps; external-dns to publish your DNS records to your DNS server at home: https://github.com/kubernetes-sigs/external-dns
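A minimal sketch of the MetalLB half of that stack (the pool name and address range are placeholders for a free slice of your home LAN):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range on the home LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - home-pool                     # announce the pool in layer 2 mode
```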
What are some alternatives?
autoscaler - Autoscaling components for Kubernetes
metallb - A network load-balancer implementation for Kubernetes using standard routing protocols
ceph-csi - CSI driver for Ceph
cloudflare-ingress-controller - A Kubernetes ingress controller for Cloudflare's Argo Tunnels
aws-efs-csi-driver - CSI Driver for Amazon EFS https://aws.amazon.com/efs/
ingress-nginx - Ingress-NGINX Controller for Kubernetes
aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers
crossplane - The Cloud Native Control Plane
topolvm - Capacity-aware CSI plugin for Kubernetes
PowerDNS - PowerDNS Authoritative, PowerDNS Recursor, dnsdist
descheduler - Descheduler for Kubernetes
awx-operator - An Ansible AWX operator for Kubernetes built with Operator SDK and Ansible. 🤖