Seaweed File System
autotier
| | Seaweed File System | autotier |
|---|---|---|
| Mentions | 49 | 7 |
| Stars | 14,960 | 226 |
| Growth | - | 10.6% |
| Activity | 9.9 | 2.0 |
| Last commit | over 1 year ago | 3 months ago |
| Language | Go | C++ |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Seaweed File System
- An open-source distributed object storage service
-
Moving to github.com/seaweedfs/seaweedfs
FYI: Planning to move from github.com/chrislusf/seaweedfs to github.com/seaweedfs/seaweedfs in the coming days. It may cause some problems for package references, builds, documentation, and links. Sorry for the change!
-
S3 Isn't Getting Cheaper
Besides storage itself, S3 API access costs can be high if data is frequently accessed. And latency is unpredictable.
You can use the SeaweedFS Remote Object Store Gateway to cache S3 (or any S3-compatible vendor) on local servers, access the data at local network speed, and asynchronously sync changes back to S3.
https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remot...
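A rough sketch of the caching setup described above. The `remote.configure`/`remote.mount` commands and the `weed filer.remote.sync` process come from the SeaweedFS wiki, but flag names vary between releases, so treat this as an outline and verify against your version's help output; the remote name, bucket, and credentials are placeholders.

```shell
# Sketch, not a verified recipe: register the S3 remote inside the
# SeaweedFS shell (credentials/region are placeholders)
echo 'remote.configure -name=cloud1 -type=s3 -s3.access_key=KEY -s3.secret_key=SECRET -s3.region=us-east-2' | weed shell

# Mount a bucket into the filer namespace; reads get cached on local volumes
echo 'remote.mount -dir=/buckets/mybucket -remote=cloud1/mybucket' | weed shell

# Run the sync process so local writes propagate back to S3 asynchronously
weed filer.remote.sync -dir=/buckets/mybucket
```

After this, clients hit the local filer (POSIX mount, S3 API, WebDAV) at LAN speed while the sync process trickles changes back to the remote bucket.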
- Release 3.12 · chrislusf/seaweedfs
-
Minio in production
If you are looking at MinIO you might find SeaweedFS interesting as well.
- SeaweedFS and YDB
-
Cost effective managed key-value store?
I believe what you want is a horizontally scalable object store with tiered storage. SeaweedFS is free / open source https://github.com/chrislusf/seaweedfs
- A way to store and query large (up to 1GB) user defined objects.
-
Question: does anyone know Storage Provider with S3 as persistence layer?
I don't know if it fits all of your requirements, but you can take a look at SeaweedFS, which is pretty good.
-
Introducing Garage, our self-hosted distributed object storage solution
Seaweedfs deserves a mention here for comparison as well.
autotier
-
Is it okay to mismatch NVMe SSDs for the ZFS mirror root pool?
maybe tiered storage for the HDD pool (if I find a good solution - I was thinking about 45Drives autotier, but there isn't much info about it)
-
Trying to achieve something dumb. Questions regarding running two layers of LVM cache or other ridiculous storage tiering.
If using LVM isn't an option for something like this I did find autotier which appears to implement everything I need under a FUSE but I'm concerned about the performance overhead of that. Since the entire point of storage tiering would be improving performance would a FUSE system be practical?
-
Have folks here ditched ZFS for UNRAID? Looking for your thoughts
My workload is primarily a Plex media server over NFS/SMB. Going to UNRAID for everything could work, but I could also do an UNRAID hybrid setup with a 2-disk ZFS mirror for my documents and photos while leaving the movie + TV files on other disks (this would increase my HDD count from 4 to 5, though, due to parity needs)... A third option (probably very time-consuming) is openmediavault + https://github.com/45Drives/autotier + ZFS (on each disk, simply to get the ability to 'zfs send/receive' to back up to another ZFS server nightly).
-
Kicking the tires on UNRAID trial - some quick questions from someone coming from ZFS land...
I just installed UNRAID to test and see if I should be moving all my files to it from ZFS for my home media server (movies, TV, large backups, etc.). I think the UNRAID feature I most want to exploit is the ability to use an NVMe cache drive and then purge data to disks after XX days or so - this basically feels like https://github.com/45Drives/autotier in a way.
-
Autotier or similar on NixOS
Is anyone using Autotier (https://github.com/45Drives/autotier) or similar on NixOS? How did you set it up?
-
Any news on Qtier or similar reaching QuTS Hero?
Given current pricing, larger SSDs around 1 TB aren't actually twice the price of 512 GB ones, which in turn aren't twice as expensive as 256 GB units, so the larger sizes seem to be the better buys. At the same time, going all-SSD has benefits for certain capacities and workloads, while in many cases a correctly functioning tiering system is more beneficial. Qnap's Qtier, however, relies on dm-thin/mdadm features that are not available by design in QuTS Hero, so the regular QTS Qtier feature is unavailable on the ZFS-based variant. SSDs can only be used for SSD pools or caches (ZIL for writes, L2ARC for reads, a special vdev for metadata and deduplication tables, or a mix of those), similar to QES.

Contrary to the QES target audience, many ZFS-powered units are reaching a customer base that is more home, SOHO, or even light on budget, so going with huge (in disk quantity and overall capacity), resilient all-flash pools is merely a dream, and the caches are more or less the go-to option. As stated earlier, this often leads to either overexpenditure or underutilisation of disk resources. Still, Qnap has been quite tight-lipped for a long time about any possibility of a Qtier-like solution in the QuTS Hero stack (presumably either because they don't want to cannibalise remaining QTS-only or QES sales, or due to their inability to come up with a solution). Given that the underlying system is a tuned Linux, I wonder why an autotier-like (https://github.com/45Drives/autotier) FUSE filesystem has not been prepared as a replacement/alternative.

Sure, block-level monitoring/tiering would have been even better, but the nature of COW/ZFS does not play well with regular block-monitoring tools (one could splice something together, but that would require yet another huge table tracking block counts to combine the spinning rust and silicon tiers, persistent and ephemeral, into one global storage map). However, autotier's FUSE filesystem deals in file-level analysis, which would greatly increase the usefulness of a mixed SSD and HDD appliance, possibly without incurring much of a performance impact. Qnap certainly has the manpower and funds to prepare a solution optimised for their devices; while not as good at moving hot data around as a block-level approach, it would be the next best thing without too much hassle, and it could still juggle data when required.

The question then remains: why hasn't Qnap gone through with releasing such a solution - or maybe there is one in the works that Qnap plans to add as a showcase for QuTS Hero v5? It could greatly improve the QuTS proposition on devices such as the TS-673A (where one could then get a 3-tier solution with 2x NVMe SSDs for the hottest files or caches, and the 6 drive bays populated with 3x HDDs for cold and 3x SATA SSDs for warm data) or its 8-bay counterpart, or the TS-h973AX (where we get 3 tiers by design, with 2x U.2 for hot data or caches + 2x SATA SSDs for warm data + 5x HDDs for cold storage), as example units at the lower end of the ZFS-based range. This could certainly boost sales of the ZFS units. Or maybe I've just been deaf and blind to corporate PR to the degree that I've missed Qnap officially unveiling plans for a future showcase piece? Or maybe such features have crept silently into some beta releases already and been dropped due to issues?
-
Hot-Warm-Cold Caching/Storage on Linux with Two SSDs and an HDD
I also found autotier, which implements a sort-of storage tiering solution on top of ZFS.
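For context on what the autotier setups discussed above involve: autotier is configured with an INI-style file listing tiers in order of preference, then mounted as a FUSE filesystem. The sketch below follows the layout shown in the 45Drives README, but the exact option names and defaults may differ between versions, and the paths and quotas here are hypothetical.

```ini
; Sketch of an /etc/autotier.conf, assuming the INI layout from the
; 45Drives README -- verify option names against your installed version.
[Global]
Log Level = 1
Tier Period = 1000        ; seconds between tiering passes

[Fast Tier]
Path = /mnt/nvme          ; hypothetical SSD mount
Quota = 100 GiB           ; fill limit before files demote

[Slow Tier]
Path = /mnt/hdd           ; hypothetical HDD mount
Quota = 8 TiB
```

The combined view is then mounted with something like `autotierfs /mnt/storage -o allow_other,default_permissions`, and autotier periodically promotes hot files to the fast tier and demotes cold ones, which is the behavior the commenters above are weighing against the FUSE overhead.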
What are some alternatives?
minio - The Object Store for AI Data Infrastructure
seaweedfs - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding.
Ceph - Ceph is a distributed object, block, and file storage platform
react-native-mmkv-storage - An ultra fast (0.0002s read/write), small & encrypted mobile key-value storage framework for React Native written in C++ using JSI
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017
knoxite - A data storage & backup system
Apache Hadoop - Apache Hadoop
openHistorian - The Open Source Time-Series Data Historian
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
Java-SerialX - Store Java objects into JSON or any format you want! SerialX is a powerful lightweight utility library to serialize Java objects programmatically via tweakable recursive descent parser for custom domain-specific languages!
lizardfs - LizardFS is an Open Source Distributed File System licensed under GPLv3.
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]