| | aws-ebs-csi-driver | seaweedfs |
|---|---|---|
| Mentions | 13 | 34 |
| Stars | 920 | 21,076 |
| Growth | 1.7% | 1.0% |
| Activity | 9.4 | 9.9 |
| Latest commit | 7 days ago | 6 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
aws-ebs-csi-driver
- AWS EBS CSI driver
The AWS EBS CSI Driver relies on IAM permissions to communicate with Amazon EBS for volume management on behalf of the user. The example policy can be used to define the required permissions. Additionally, AWS provides a managed policy at ARN arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
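A minimal sketch of such a policy is below. This is illustrative only - the actions shown are a subset of what the driver's full example policy grants, and the real policy scopes several of these actions with conditions rather than `"Resource": "*"`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVolume",
        "ec2:DeleteVolume",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:CreateSnapshot",
        "ec2:DeleteSnapshot",
        "ec2:DescribeVolumes",
        "ec2:DescribeInstances",
        "ec2:DescribeSnapshots",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
```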
- PV/PVC Not working after k8s upgrade to 1.25
It looks like the driver's permissions to invoke the EBS APIs were revoked and/or changed. When you install the EBS CSI addon you can either inherit permissions from the worker node or choose an IRSA role (preferred). If you use IRSA, the service account that the EBS CSI driver uses should have an annotation that references the ARN of the IAM role you selected, e.g. eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role. You can see an example of the IAM policy the driver needs here: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/fb6d456558fb291b13f855454c1525c7acaf7046/docs/example-iam-policy.json.
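In manifest form, the IRSA wiring described above looks roughly like this (a sketch; the account ID and role name are placeholders, and `ebs-csi-controller-sa` is the service account name the driver's Helm chart uses by default):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa   # service account used by the driver's controller pods
  namespace: kube-system
  annotations:
    # IRSA: the referenced role must trust the cluster's OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-role
```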
- Confused about kubernetes storage
- Unable to Access AWS EKS Cluster after creating it using Terraform
I know it's possible to write Terraform code that exhibits that issue, but that's not the case in my experience. I'm using Helm to deploy AWS's EBS CSI driver in the above setup. As you mentioned, if the EKS cluster were destroyed before the Helm provider attempted to use its API to destroy the Helm deployment, it would cause problems. But I don't run into that issue. It's not luck of timing, either - I also have a CI process that deploys all of this, tests it, and deletes it all, and it has succeeded hundreds of times.
- Introduction to Day 2 Kubernetes
Any Kubernetes cluster requires persistent storage - whether organizations begin with an on-premises Kubernetes cluster and later migrate to the public cloud, or provision a Kubernetes cluster using a managed service in the cloud. Kubernetes supports multiple types of persistent storage: object storage (such as Azure Blob Storage or Google Cloud Storage), block storage (such as Amazon EBS, Azure Disk, or Google Persistent Disk), and file-sharing storage (such as Amazon EFS, Azure Files, or Google Cloud Filestore). The fact that each cloud provider has its own implementation of persistent storage adds to the complexity of storage management, not to mention scenarios where an organization provisions Kubernetes clusters across several cloud providers. Managing Kubernetes clusters successfully over the long term, and knowing which storage type to use in each scenario, requires storage expertise.
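As a concrete example of the block-storage case, dynamic provisioning on EKS through the EBS CSI driver is just a StorageClass plus a PersistentVolumeClaim (a sketch; the names and sizes are illustrative, `ebs.csi.aws.com` is the driver's provisioner name):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com       # the AWS EBS CSI driver
parameters:
  type: gp3                        # EBS volume type
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]   # EBS volumes attach to a single node
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 20Gi
```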
- Dealing with EC2 Instance volume limits in EKS
Lots of info in this issue: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1163
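One workaround discussed in that issue is capping the attach limit the driver advertises, so the scheduler stops placing pods whose volumes can't attach. With the driver's Helm chart, that is (assuming the chart's `node.volumeAttachLimit` value, which maps to the driver's `--volume-attach-limit` flag):

```yaml
# values.yaml for the aws-ebs-csi-driver Helm chart
node:
  volumeAttachLimit: 20   # max EBS volumes the scheduler will place per node
```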
- Help me understand real use cases of k8s, I can’t wrap my head around it
- How is a PersistentVolumeClaim consistent?
- EKS PVC <-> EBS volume associations after cluster recreation
Hello, we are running an EKS cluster (1.20) with aws-ebs-csi-driver (1.4.0). After recreating our whole cluster we can observe that the EBS volumes from our PVCs still exist but the "mapping" to the PVCs is gone.
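One way to re-attach such orphaned volumes is static provisioning: create a PersistentVolume whose `volumeHandle` points at the surviving EBS volume, then bind a PVC to it by name (a sketch; the volume ID, names, and sizes are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-pv
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain   # don't delete the EBS volume on release
  storageClassName: ebs-sc
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # the surviving EBS volume's ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: ebs-sc
  volumeName: restored-pv                 # bind directly to the PV above
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
```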
- A PVC Operator which Uploads Data to S3 on Delete and Downloads on Create
OP could probably just layer their own CSI driver on top of an existing one (a la aws-ebs-csi-driver), but there are still several problems.
seaweedfs
- DwarFS – The Deduplicating Warp-Speed Advanced Read-Only File System
Whoops, WebDAV: https://news.ycombinator.com/item?id=39417503
SeaweedFS supports WebDAV. https://github.com/seaweedfs/seaweedfs/wiki/WebDAV
I'm not able to find whether both/restic support mounting backups as WebDAV, but in theory there's nothing stopping you.
It's 100% user space (it exposes a REST service) and supported by a bunch of file browsers, with a bit of a network-aware component to it as well.
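Per the linked wiki page, the WebDAV gateway is a single command pointed at a running filer (a sketch; 8888 and 7333 are the default filer and WebDAV ports):

```shell
# start master + volume server + filer in one process
weed server -filer

# expose the filer over WebDAV (listens on port 7333 by default)
weed webdav -filer=localhost:8888
```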
- Billion File Filesystem
If you want/need to take the metadata out, there are some nice solutions for that: https://github.com/seaweedfs/seaweedfs
- SeaweedFS fast distributed storage system for blobs, objects, files and datalake
I posted this on https://github.com/seaweedfs/seaweedfs/discussions/5290
- DuckDB + dbt for a serverless event correlation pipeline?
I like the idea of using SeaweedFS as an intermediate layer with object write notifications going to SQS, RabbitMQ, or a local file, which could also allow me to observe the changes to different files through a metric collection layer like Prometheus and Grafana.
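The write notifications mentioned here come from the filer's notification config (a sketch of `notification.toml`, assuming the `aws_sqs` backend; the queue name and region are placeholders, and `weed scaffold -config=notification` prints the full template):

```toml
[notification.aws_sqs]
enabled = true
aws_access_key_id = ""       # leave empty to use the default credential chain
aws_secret_access_key = ""
region = "us-east-1"
sqs_queue_name = "seaweedfs-events"   # queue that receives file-change events
```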
- Show HN: OpenSign – The open source alternative to DocuSign
> Theoretically they could swap with minio but last time we used it it was not a drop-in replacement yet.
Depends on whether AGPL v3 works for you or not (or whether you decide to pay them), I guess: https://min.io/pricing
I've actually been looking for more open alternatives, but haven't found much.
Zenko CloudServer seemed somewhat promising, but doesn't appear to be maintained very actively: https://github.com/scality/cloudserver/issues/4986 (their Docker images on Docker Hub - which is what the homepage links to - were last updated 10 months ago; the blog doesn't seem to have been active since 2019, and the forums don't have much going on, despite some activity on GitHub).
There was also Garage, but that one is also AGPL v3: https://garagehq.deuxfleurs.fr/
The closest I got was discovering that SeaweedFS has an S3 compatible mode: https://github.com/seaweedfs/seaweedfs
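Standing that S3-compatible mode up locally is also a single command (a sketch; 8333 is the default S3 gateway port, and the `aws` CLI line assumes you have it installed with some credentials configured):

```shell
# all-in-one: master, volume server, filer, and S3 gateway
weed server -s3

# then point any S3 client at the local endpoint, e.g.:
aws --endpoint-url http://localhost:8333 s3 ls
```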
- The Tailscale Universal Docker Mod
- SeaweedFS
- Google Cloud Storage FUSE
- Experience running rook-ceph in production/large clusters
- First Homelab as a 19yr old Software Developer
SeaweedFS S3 Gateway for Joplin notes
What are some alternatives?
autoscaler - Autoscaling components for Kubernetes
minio - The Object Store for AI Data Infrastructure
ceph-csi - CSI driver for Ceph
Ceph - Ceph is a distributed object, block, and file storage platform
aws-efs-csi-driver - CSI Driver for Amazon EFS https://aws.amazon.com/efs/
garage - (Mirror) S3-compatible object store for small self-hosted geo-distributed deployments. Main repo: https://git.deuxfleurs.fr/Deuxfleurs/garage
aws-load-balancer-controller - A Kubernetes controller for Elastic Load Balancers
cubefs - cloud-native file store
topolvm - Capacity-aware CSI plugin for Kubernetes
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017
descheduler - Descheduler for Kubernetes
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)