secrets-store-csi-driver-provider-gcp vs aws-efs-csi-driver

| | secrets-store-csi-driver-provider-gcp | aws-efs-csi-driver |
|---|---|---|
| Mentions | 6 | 11 |
| Stars | 224 | 683 |
| Growth | -0.4% | 0.4% |
| Activity | 6.8 | 8.5 |
| Last commit | 3 days ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
secrets-store-csi-driver-provider-gcp
- Bridging the Gap: Leveraging Secret Store CSI Drivers to Access Secrets from Google Secret Manager in GKE Cluster
-
Shhhh... Kubernetes Secrets Are Not Really Secret!
The driver can also sync changes to secrets. The driver currently supports Vault, AWS, Azure, and GCP providers. Secrets Store CSI Driver can also sync provider secrets as Kubernetes secrets; if required, this behavior needs to be explicitly enabled during installation.
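The sync-to-Kubernetes-Secret behavior mentioned above is opt-in: the Secrets Store CSI Driver Helm chart must be installed with `--set syncSecret.enabled=true`, and the `SecretProviderClass` needs a `secretObjects` section. A minimal sketch for the GCP provider (the project ID, secret name, and resource names here are hypothetical):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets            # hypothetical name
spec:
  provider: gcp
  parameters:
    secrets: |
      # hypothetical Secret Manager resource
      - resourceName: "projects/my-project/secrets/db-password/versions/latest"
        path: "db-password"
  secretObjects:               # mirrors the mounted file into a native Kubernetes Secret
    - secretName: app-secrets
      type: Opaque
      data:
        - objectName: "db-password"
          key: password
```

The mirrored Secret only exists while at least one pod mounts the corresponding CSI volume; it is garbage-collected when the last pod referencing it is deleted.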
-
A better way to manage secrets: reference an external secret defined in the cloud provider environment (please support the idea or give your feedback)
GCP SS-CSI driver
-
How to Inject Secret From Google Secret Manager into GKE Cluster using Helm Chart?
That's interesting actually, Google provides their own provider for the Secrets Store CSI Driver: https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp
-
Has anyone here used Secret Manager before?
Consider: if you have a tool like Terraform managing your infra components, including your data layer, you likely want to manage those resources in a different lifecycle from your application code. Applications are also likely managed using a different toolset (kubectl, helm, skaffold, etc.). In this case, Secret Manager acts as the secure configuration bridge between the tools, keeping the secrets out of human hands. As certs and passwords are generated on the infra side, those values can be stored as secrets in SM. Application workloads - backed by service accounts having access to read the secret - can decrypt during launch and use the secret as needed. You can use common patterns in both GKE (via the Secrets Store CSI Driver) and Cloud Run for consuming secrets in this way.
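The GKE consumption pattern described above amounts to mounting a CSI volume that references a `SecretProviderClass`. A minimal pod sketch, assuming a `SecretProviderClass` named `app-secrets` already exists and the Kubernetes service account is bound (via Workload Identity) to a GCP service account with `secretmanager.secretAccessor` on the secret — all names here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  serviceAccountName: app-sa          # hypothetical; Workload Identity-bound
  containers:
    - name: app
      image: gcr.io/my-project/app:latest   # hypothetical image
      volumeMounts:
        - name: secrets
          mountPath: /var/secrets     # secret appears as a file here
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets
```

The application then reads the secret from the mounted file rather than from an environment variable, which keeps it out of `kubectl describe` output.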
-
How to access secrets in GCP secret manager from PODs
I prefer https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp
aws-efs-csi-driver
-
Implementing AWS EKS with EFS for dynamic volume provisioning using Terraform. Kubernetes Series - Episode 5
In the past I had problems with the GID allocator, something related to this problem.
-
AWS EFS CSI: Mount Target vs Access Point
However, the docs (https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/dynamic_provisioning/README.md) are telling me to create EFS Mount Targets in the EKS subnets. That's fine.
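For context on the mount-target vs access-point distinction: mount targets are the per-subnet network endpoints every mount needs, while access points are what the driver creates per volume when dynamic provisioning is enabled. A minimal `StorageClass` sketch for dynamic provisioning (the file system ID is a hypothetical placeholder):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap              # driver creates one EFS Access Point per PV
  fileSystemId: fs-0123456789abcdef0    # hypothetical EFS file system ID
  directoryPerms: "700"                 # permissions for the access point's root directory
```

Each `PersistentVolumeClaim` against this class gets its own access point (and root directory) on the shared file system, which is why mount targets must still exist in the cluster's subnets.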
-
EKS Fargate supports additional Ephemeral Storage
Fargate storage A Pod running on Fargate automatically mounts an Amazon EFS file system. You can't use dynamic persistent volume provisioning with Fargate nodes, but you can use static provisioning. For more information, see Amazon EFS CSI Driver on GitHub.
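Since Fargate only supports static provisioning, the PV must reference an existing file system directly. A minimal sketch, assuming the EFS file system and its mount targets already exist (the file system ID is a hypothetical placeholder):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi                        # required by the API; EFS ignores it
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""                  # disable dynamic provisioning for this claim
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0  # hypothetical EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 5Gi
```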
-
EFS CSI - Dynamic Provisioning and Disaster Recovery?
I guess something like this might go a long way to solve the problem https://github.com/kubernetes-sigs/aws-efs-csi-driver/pull/640 ? Though I see it isn't merged yet
-
Mounting EFS in EKS cluster: example deployment fails
I am currently trying to create an EFS for use within an EKS cluster. I've followed all the instructions, and everything seems to be working for the most part. However, when trying to apply the multiple_pods example deployment from here, the pods cannot successfully mount the file system. The PV and PVC are both bound and look good, however the pods do not start and yield the following error message:
-
How can 2 deployments using aws-efs-csi-provider share data on the same mount?
In each namespace, create a PV/PVC using the same fixed volume path. See "Volume Path in EKS CSI Driver". To make this work, however, you MUST pre-create this volume path on your EFS (I usually just have an EC2 instance with it mounted to work on). From the docs above: "Note: this feature requires the sub directory to mount precreated on EFS before consuming the volume from"
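The fixed-volume-path approach above relies on the driver's `volumeHandle` accepting an optional subpath after the file system ID. A sketch of one such PV (create an identically-pathed PV/PVC pair in each namespace; the file system ID and `/shared` directory are hypothetical, and `/shared` must be pre-created on the EFS):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-shared-ns1                  # hypothetical; one PV per namespace
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  csi:
    driver: efs.csi.aws.com
    # [FileSystemId]:[Subpath] — both deployments mount the same directory
    volumeHandle: fs-0123456789abcdef0:/shared
```

Because both namespaces' PVs point at the same directory on the same file system, pods in either deployment read and write the same files.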
- Confused about kubernetes storage
-
Confused about EKS gp2 default storage class - can I use it or not?
```hcl
resource "aws_iam_policy" "eks_efs_csi_driver_policy" {
  # https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/iam-policy-example.json
  policy = file("./6.AWSEFSpolicy.json")
  name   = "aws-efs-csi-policy"
}
```
- How is a PersistentVolumeClaim consistent?
-
EKS IAM Deep Dive
efs - IAM Policy for AWS EFS CSI Driver.
What are some alternatives?
secrets-store-csi-driver - Secrets Store CSI driver for Kubernetes secrets - Integrates secrets stores with Kubernetes via a CSI volume.
ceph-csi - CSI driver for Ceph
Reloader - A Kubernetes controller to watch changes in ConfigMap and Secrets and do rolling upgrades on Pods with their associated Deployment, StatefulSet, DaemonSet and DeploymentConfig – [✩Star] if you're using it!
vault-csi-provider - HashiCorp Vault Provider for Secret Store CSI Driver
csi-gcs - Kubernetes CSI driver for Google Cloud Storage
aws-ebs-csi-driver - CSI driver for Amazon EBS https://aws.amazon.com/ebs/
smcache - golang autocert cache implementation for GCP Secret Manager
kiam - Integrate AWS IAM with Kubernetes
berglas - A tool for managing secrets on Google Cloud
aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers
secrets-store-csi-driver-provider-aws - The AWS provider for the Secrets Store CSI Driver allows you to fetch secrets from AWS Secrets Manager and AWS Systems Manager Parameter Store, and mount them into Kubernetes pods.
amazon-cloudwatch-agent - CloudWatch Agent enables you to collect and export host-level metrics and logs on instances running Linux or Windows server.