gcsfuse
gcp-filestore-csi-driver
| | gcsfuse | gcp-filestore-csi-driver |
|---|---|---|
| Mentions | 31 | 2 |
| Stars | 1,977 | 81 |
| Growth | 1.5% | - |
| Activity | 9.7 | 8.8 |
| Latest commit | about 13 hours ago | 4 days ago |
| Language | Go | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gcsfuse
-
Gcsfuse: A user-space file system for interacting with Google Cloud Storage
It uses FUSE, and there are three types of kernel cache you can use with FUSE (although it seems gcsfuse exposes only one):
1. Caching of file attributes in the kernel (controlled by the "stat-cache-ttl" value - https://github.com/GoogleCloudPlatform/gcsfuse/blob/7dc5c7ff...)
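As a rough sketch of the flag discussed above: the bucket name and mount point below are placeholders, and `--stat-cache-ttl` is the attribute-cache flag from the linked source (newer gcsfuse releases may steer you toward other cache-tuning flags instead).

```shell
# Mount a bucket (hypothetical name "my-bucket") with a longer attribute-cache TTL.
# --stat-cache-ttl controls how long file attributes (stat results) are cached;
# a longer TTL means fewer metadata requests to GCS, at the cost of staler attributes.
mkdir -p /mnt/my-bucket
gcsfuse --stat-cache-ttl 10m my-bucket /mnt/my-bucket

# Unmount when finished:
fusermount -u /mnt/my-bucket
```

The trade-off is consistency vs. request volume: with a 10-minute TTL, changes made to objects outside the mount may not be visible for up to 10 minutes.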
-
Does Cloud Filestore have a lifecycle management feature?
You're looking at Filestore because your software can only write to a mounted file system? If so, you could mount a Google Cloud Storage bucket with FUSE. I haven't used FUSE in production myself, but it may be worth trying out for your workload.
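A minimal sketch of the suggestion above, assuming gcsfuse is installed and the VM's service account can access the bucket (bucket name and path are placeholders):

```shell
# Expose a GCS bucket as a local directory so software that can only
# write to a mounted file system can use object storage transparently.
mkdir -p /mnt/data
# --implicit-dirs makes directory-like object prefixes appear as directories.
gcsfuse --implicit-dirs my-bucket /mnt/data

# The application can now write ordinary files; each file becomes a GCS object.
echo "hello" > /mnt/data/output.txt
```

Note that FUSE-backed mounts are not fully POSIX-compliant (e.g. no atomic rename across directories, weaker locking semantics), so this works best for workloads that write whole files sequentially.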
-
Google Cloud Storage FUSE
Is this the same gcsfuse that's been around for years, only now with official Google support?
https://github.com/GoogleCloudPlatform/gcsfuse
-
Suggestions on data transfer between VM instances
-
RANT: MICROSOFT'S INABILITY TO SUPPORT THEIR OWN HARDWARE IS GOING TO KILL ME
I'm pretty sure the storage cost for a stopped VM vs. a disk image will be the same. It may be cheaper if you can store your data in a GCS bucket. Take a look at gcsfuse to mount storage buckets into a VM.
-
Ongoing Incident in Google Cloud
Currently being tracked here: https://github.com/GoogleCloudPlatform/gcsfuse/issues/961
gcp-filestore-csi-driver
-
Google Cloud Storage FUSE
Hi Ofek,
I am a contributor who works on the Google Cloud Storage FUSE CSI Driver project. The project is partially inspired by your CSI implementation. Thank you so much for the contribution to the Kubernetes community. However, I would like to clarify a few things regarding your post.
The Cloud Storage FUSE CSI Driver project does not have "in large part copied code" from your implementation. The initial commit you referred to in the post was based on a fork of another open source project: https://github.com/kubernetes-sigs/gcp-filestore-csi-driver. If you compare the Google Cloud Storage FUSE CSI Driver repo with the Google Cloud Filestore CSI Driver repo, you will notice the obvious similarities in code structure, the Dockerfile, the usage of Kustomize, and the way the CSI is implemented. Moreover, the design of the Google Cloud Storage FUSE CSI Driver initially included a proxy server and then evolved to a sidecar-container mode, both of which are significantly different from your implementation.
As for the Dockerfile annotations you pointed out in the initial commit, I did follow the pattern in your repo because I thought it was the standard way to declare the copyright. However, it didn't take me too long to realize that the Dockerfile annotations are not required, so I removed them.
Thank you again for your contribution to the open source community. I have included your project link on the README page. I take copyright very seriously, so please feel free to directly create issues or PRs on the Cloud Storage FUSE CSI Driver GitHub project page if I missed any other copyright information.
-
Introduction to Day 2 Kubernetes
Any Kubernetes cluster requires persistent storage, whether organizations begin with an on-premises Kubernetes cluster and later migrate to the public cloud, or provision a Kubernetes cluster using a managed cloud service. Kubernetes supports multiple types of persistent storage: object storage (such as Azure Blob Storage or Google Cloud Storage), block storage (such as Amazon EBS, Azure Disk, or Google Persistent Disk), and file-sharing storage (such as Amazon EFS, Azure Files, or Google Cloud Filestore). The fact that each cloud provider has its own implementation of persistent storage adds to the complexity of storage management, not to mention scenarios where an organization provisions Kubernetes clusters across several cloud providers. Succeeding at managing Kubernetes clusters over the long term, and knowing which storage type to use in each scenario, requires storage expertise.
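As an illustration of the file-sharing storage type mentioned above, here is a hedged sketch of requesting a shared volume on GKE through the Filestore CSI driver. It assumes the driver is enabled on the cluster and that the `standard-rwx` StorageClass it installs is available; class names and minimum sizes vary by provider and tier.

```shell
# Request a ReadWriteMany volume backed by Cloud Filestore via its CSI driver.
# Unlike block storage (typically ReadWriteOnce), file storage can be mounted
# read-write by many pods at once.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # many pods may mount this volume read-write
  storageClassName: standard-rwx   # StorageClass assumed to come from the Filestore CSI driver
  resources:
    requests:
      storage: 1Ti             # Filestore's basic tier has a large minimum capacity
EOF
```

The point is that the CSI abstraction keeps the PVC spec portable; only the `storageClassName` (and each provider's capacity constraints) changes when moving between clouds.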
What are some alternatives?
google-drive-ocamlfuse - FUSE filesystem over Google Drive
gcs-fuse-csi-driver - The Google Cloud Storage FUSE Container Storage Interface (CSI) Plugin.
goofys - a high-performance, POSIX-ish Amazon S3 file system written in Go
gcp-compute-persistent-disk-csi-driver - The Google Compute Engine Persistent Disk (GCE PD) Container Storage Interface (CSI) Storage Plugin.
juicefs - JuiceFS is a distributed POSIX file system built on top of Redis and S3.
blob-csi-driver - Azure Blob Storage CSI driver
afero - A FileSystem Abstraction System for Go
geesefs - Finally, a good FUSE FS implementation over S3
fsnotify - Cross-platform file system notifications for Go.
curve - Curve is a sandbox project hosted by the CNCF. It's cloud-native, high-performance, and easy to operate. Curve is an open-source distributed storage system for block and shared file storage.
go-systemd - Go bindings to systemd socket activation, journal, D-Bus, and unit files