aws-ebs-csi-driver
Kyverno
| | aws-ebs-csi-driver | Kyverno |
|---|---|---|
| Mentions | 13 | 35 |
| Stars | 923 | 5,119 |
| Growth | 1.7% | 1.6% |
| Activity | 9.4 | 9.9 |
| Last commit | 1 day ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aws-ebs-csi-driver
-
AWS EBS CSI driver
The AWS EBS CSI Driver relies on IAM permissions to communicate with Amazon EBS to manage volumes on behalf of the user. The example policy can be used to define the required permissions. Additionally, AWS provides a managed policy at the ARN arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy.
-
PV/PVC Not working after k8s upgrade to 1.25
It looks like the driver's permissions to invoke the EBS APIs were revoked and/or changed. When you install the EBS CSI addon you can either inherit permissions from the worker node or you can choose an IRSA role (preferred). If you use IRSA, the service account that the EBS CSI driver uses should have an annotation that references the ARN of the IAM role you selected, e.g. eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role. You can see an example of the IAM policy the driver needs here: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/fb6d456558fb291b13f855454c1525c7acaf7046/docs/example-iam-policy.json.
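For reference, an IRSA-enabled service account for the driver's controller looks roughly like the following sketch. The role ARN is the placeholder value from the comment above; substitute the ARN of your own IAM role:

```yaml
# Sketch: annotate the EBS CSI controller's service account with an IRSA role.
# The role ARN below is a placeholder; use the ARN of your own IAM role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role
```

When the annotation is present, pods using this service account receive temporary credentials for the referenced role instead of inheriting the worker node's instance profile.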
- Confused about kubernetes storage
-
Unable to Access AWS EKS Cluster after creating using Terraform
I know it's possible to write terraform code that exhibits that issue, but that's not the case in my experience. I'm using helm to deploy aws's ebs csi driver in the above setup. As you mentioned, if the eks cluster was destroyed before the helm provider attempted to use its API to destroy the helm deployment, it would cause problems. And I don't run into that issue. It's not luck of timing, either - I also have a CI process that deploys all of this, tests, and deletes it all that has succeeded hundreds of times.
-
Introduction to Day 2 Kubernetes
Any Kubernetes cluster requires persistent storage, whether organizations begin with an on-premises Kubernetes cluster and migrate to the public cloud, or provision a Kubernetes cluster using a managed cloud service. Kubernetes supports multiple types of persistent storage: object storage (such as Azure Blob Storage or Google Cloud Storage), block storage (such as Amazon EBS, Azure Disk, or Google Persistent Disk), and file sharing storage (such as Amazon EFS, Azure Files, or Google Cloud Filestore). The fact that each cloud provider has its own implementation of persistent storage adds to the complexity of storage management, not to mention scenarios where an organization provisions Kubernetes clusters across several cloud providers. Managing Kubernetes clusters successfully over the long term, and knowing which storage type to use for each scenario, requires storage expertise.
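As a concrete illustration of block storage on EKS, a StorageClass backed by the EBS CSI driver plus a claim against it could look like the sketch below. The class name, volume type, and size are arbitrary examples:

```yaml
# Sketch: dynamic provisioning of an EBS-backed volume via the CSI driver.
# Names, the gp3 type, and the 10Gi size are illustrative choices.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer   # provision in the pod's AZ
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi
```

The equivalent manifests for Azure Disk or Google Persistent Disk differ mainly in the `provisioner` field and the `parameters` block, which is part of the per-provider complexity the paragraph above describes.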
-
Dealing with EC2 Instance volume limits in EKS
Lots of info in this issue: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1163
-
Help me understand real use cases of k8s, I can’t wrap my head around it
aws-ebs-csi-driver
- How is a PersistentVolumeClaim consistent?
-
EKS PVC <-> EBS volume associations after cluster recreation
Hello, we are running an EKS cluster (1.20) with aws-ebs-csi-driver (1.4.0). After recreating our whole cluster we can observe that the EBS volumes from our PVCs still exist but the "mapping" to the PVCs is gone.
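One common way to re-attach surviving EBS volumes after a cluster rebuild is static provisioning: create a PersistentVolume that points at the existing EBS volume ID and a claim that binds to it by name. A hedged sketch, where the volume ID, names, and size are hypothetical:

```yaml
# Sketch: statically re-bind an existing EBS volume to a new PVC.
# The volumeHandle is a hypothetical EBS volume ID; use your real one.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-data
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain   # don't delete the EBS volume with the PV
  storageClassName: ""                    # empty class disables dynamic provisioning
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data
spec:
  storageClassName: ""
  volumeName: restored-data               # bind to the PV above by name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```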
-
A PVC Operator which Uploads Data to S3 on Delete and Downloads on Create
OP could probably just layer their own CSI driver on top of an existing one (a la aws-ebs-csi-driver), but there's still several problems:
Kyverno
-
Stop 'k rollout restart deploy' from restarting everything?
Anyway, I haven’t checked for sure as I’m away from laptop but it should be possible to use something like Kyverno to block that operation. We had to do similar in the past to hotfix a bug in our CLI tool. I wrote a blog post about it that might give you an idea: https://www.giantswarm.io/blog/restricting-cluster-admin-permissions
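As a sketch of the idea: `kubectl rollout restart` works by patching the `kubectl.kubernetes.io/restartedAt` annotation into the pod template, so a Kyverno validate policy that rejects that annotation should block the operation. This is an untested sketch, not a verified policy:

```yaml
# Sketch (untested): block `kubectl rollout restart` on Deployments by
# rejecting the restartedAt annotation it patches into the pod template.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-rollout-restart
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: deny-restarted-at
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "kubectl rollout restart is not allowed on this Deployment."
        pattern:
          spec:
            template:
              metadata:
                =(annotations):
                  X(kubectl.kubernetes.io/restartedAt): "null"
```

The `X()` negation anchor means the annotation cannot be present; scoping the rule further (by namespace or label) would let you block restarts only where needed.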
-
An Overview of Kubernetes Security Projects at KubeCon Europe 2023
Cosign is used for signing containers through a variety of different methods. It has strong integration with other open source tools, such as Kyverno.
- Kyverno
-
container signing and verification using cosign and kyverno
cosign: https://docs.sigstore.dev/cosign/overview/ kyverno: https://kyverno.io/
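A minimal Kyverno `verifyImages` rule that checks cosign signatures against a public key could look like the following sketch; the registry pattern and key material are placeholders:

```yaml
# Sketch: require cosign signatures on images from a (hypothetical) registry.
# Replace the image pattern and public key with your own values.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: check-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```

With this in place, pods whose images fail cosign verification are rejected at admission.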
-
Introduction to Day 2 Kubernetes
Kyverno - Kubernetes Native Policy Management
-
Admission controller to mutate cpu requests?
You could use a policy tool like kyverno or OPA.
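With Kyverno, mutating CPU requests can be done with a mutate rule using the add-if-not-present (`+()`) anchor; a sketch, where the 100m default is an arbitrary example:

```yaml
# Sketch: add a default CPU request to containers that don't set one.
# The 100m value is an arbitrary example default.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-cpu-request
spec:
  rules:
    - name: add-default-cpu-request
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              - (name): "*"          # applies to every container
                resources:
                  requests:
                    +(cpu): 100m     # only added when no cpu request exists
```

The `+()` anchor leaves explicitly set requests untouched, so the policy supplies a floor rather than overriding user values.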
-
Multi-tenancy with ProjectSveltos
Kyverno is present in the management cluster;
-
Did I miss something here, regarding network policies and helm templates? (Slightly ranty)
You do still have to create a policy for every namespace, but don't have to worry about labeling individual pods. We're starting to move to Helm/kustomize for our namespaces to deploy default things like network policies to each one, and we're also starting to use kyverno more, which I think is a little more purpose built for this type of thing than metacontroller is.
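Kyverno's generate rules can stamp a default NetworkPolicy into every new namespace, which is the kind of purpose-built behavior described above; a sketch, using default-deny ingress as the example policy:

```yaml
# Sketch: generate a default-deny-ingress NetworkPolicy in each new namespace.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-deny-ingress
spec:
  rules:
    - name: generate-default-networkpolicy
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny-ingress
        namespace: "{{request.object.metadata.name}}"
        synchronize: true    # keep the generated policy in sync if edited
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
```

This removes the per-namespace boilerplate from Helm/kustomize: the policy engine creates (and with `synchronize: true`, repairs) the NetworkPolicy automatically.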
-
kubernetes provider resources v1 vs non-v1 is it just me or is this dumb?
I knew it was unsupported so about 6 months ago I had started an effort to switch to Kyverno, which is far better and actually supported. The version of Kyverno I was using had a v1beta1 AdmissionController. Fortunately that was in a helm chart so easily caught by pluto before my upgrade.
-
Kyverno Policy As Code Using CDK8S
Kyverno is a policy engine designed for Kubernetes. Kyverno policies can validate, mutate, and generate Kubernetes resources, plus ensure OCI image supply chain security.
What are some alternatives?
autoscaler - Autoscaling components for Kubernetes
falco - Cloud Native Runtime Security
ceph-csi - CSI driver for Ceph
gatekeeper - 🐊 Gatekeeper - Policy Controller for Kubernetes
aws-efs-csi-driver - CSI Driver for Amazon EFS https://aws.amazon.com/efs/
Kubewarden - Kubewarden is a policy engine for Kubernetes. It helps with keeping your Kubernetes clusters secure and compliant. Kubewarden policies can be written using regular programming languages or Domain Specific Languages (DSL) such as Rego. Policies are compiled into WebAssembly modules that are then distributed using traditional container registries.
aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers
OPA (Open Policy Agent) - Open Policy Agent (OPA) is an open source, general-purpose policy engine.
topolvm - Capacity-aware CSI plugin for Kubernetes
k-rail - Kubernetes security tool for policy enforcement
descheduler - Descheduler for Kubernetes
checkov - Prevent cloud misconfigurations and find vulnerabilities during build-time in infrastructure as code, container images and open source packages with Checkov by Bridgecrew.