seaweedfs
autotier
| | seaweedfs | autotier |
|---|---|---|
| Mentions | 34 | 7 |
| Stars | 21,013 | 226 |
| Growth | 2.3% | 10.6% |
| Activity | 9.9 | 2.0 |
| Last commit | 6 days ago | 2 months ago |
| Language | Go | C++ |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
seaweedfs
-
DwarFS – The Deduplicating Warp-Speed Advanced Read-Only File System
Whoops: WebDAV:
https://news.ycombinator.com/item?id=39417503
SeaweedFS supports WebDAV. https://github.com/seaweedfs/seaweedfs/wiki/WebDAV
I'm not able to find whether borg/restic support mounting backups as WebDAV, but in theory there's nothing stopping you.
It's 100% user space (it exposes a REST service) and is supported by a bunch of file browsers, with a bit of a network-aware component to it as well.
-
Billion File Filesystem
If you want/need to take out the metadata, there are some nice solutions for that: https://github.com/seaweedfs/seaweedfs
-
SeaweedFS fast distributed storage system for blobs, objects, files and datalake
I posted this on https://github.com/seaweedfs/seaweedfs/discussions/5290
-
DuckDB + dbt for a serverless event correlation pipeline?
I like the idea of using SeaweedFS as an intermediate layer, with object write notifications going to SQS, RabbitMQ, or a local file. That would also let me observe changes to different files through a metrics layer like Prometheus and Grafana.
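A notification consumer for that kind of pipeline can start very small. The sketch below reads a JSON-lines log of write events and aggregates counts per bucket, the sort of number you would then export to Prometheus. The record shape here is a made-up stand-in, not SeaweedFS's actual notification schema.

```python
import json
import tempfile
from collections import Counter
from pathlib import Path

def count_events_by_bucket(log_path):
    """Tally write notifications per bucket from a JSON-lines log.

    The {"bucket": ..., "key": ...} record shape is a hypothetical
    example, not SeaweedFS's real notification format.
    """
    counts = Counter()
    for line in Path(log_path).read_text().splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        counts[event["bucket"]] += 1
    return counts

# Simulate a notification log with three write events.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"bucket": "photos", "key": "a.jpg"}\n')
    f.write('{"bucket": "photos", "key": "b.jpg"}\n')
    f.write('{"bucket": "docs", "key": "notes.md"}\n')
    log = f.name

print(count_events_by_bucket(log))  # Counter({'photos': 2, 'docs': 1})
```

In a real setup the same aggregation loop would sit behind an SQS or RabbitMQ consumer instead of a file reader; only the event source changes.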
-
Show HN: OpenSign – The open source alternative to DocuSign
> Theoretically they could swap with minio but last time we used it it was not a drop-in replacement yet.
Depends on whether AGPL v3 works for you or not (or whether you decide to pay them), I guess: https://min.io/pricing
I've actually been looking for more open alternatives, but haven't found much.
Zenko CloudServer seemed somewhat promising, but doesn't appear to be maintained very actively: https://github.com/scality/cloudserver/issues/4986 (their Docker images on Docker Hub, which the homepage links to, were last updated 10 months ago; the blog has been quiet since 2019 and the forums don't have much going on, despite some activity on GitHub).
There was also Garage, but that one is also AGPL v3: https://garagehq.deuxfleurs.fr/
The closest I got was discovering that SeaweedFS has an S3 compatible mode: https://github.com/seaweedfs/seaweedfs
- The Tailscale Universal Docker Mod
- SeaweedFS
- Google Cloud Storage FUSE
- Experience running rook-ceph in production/large clusters
-
First Homelab as a 19yr old Software Developer
SeaweedFS S3 Gateway for Joplin notes
autotier
-
Is it okay to mismatch NVMe SSDs for the ZFS mirror root pool?
maybe tiered storage for the HDD pool (if I find a good solution; I was thinking about 45Drives' autotier, but there isn't much info about it)
-
Trying to achieve something dumb. Questions regarding running two layers of LVM cache or other ridiculous storage tiering.
If using LVM isn't an option for something like this, I did find autotier, which appears to implement everything I need via FUSE, but I'm concerned about the performance overhead of that. Since the entire point of storage tiering would be improving performance, would a FUSE-based system be practical?
-
Have folks here ditched ZFS for UNRAID? Looking for your thoughts
My workload is primarily a Plex media server plus NFS/SMB. Going to UNRAID for everything could work, but I could also do a hybrid UNRAID setup: a 2-disk ZFS mirror for my documents and photos, while leaving the movie and TV files on other disks (this would increase my HDD count from 4 to 5, though, due to parity needs). A third option (probably very time-consuming) is openmediavault + https://github.com/45Drives/autotier + ZFS (on each disk, simply to get the ability to 'zfs send/receive' to back up to another ZFS server nightly).
-
Kicking the tires on UNRAID trial - some quick questions from someone coming from ZFS land...
I just installed UNRAID to test whether I should move all my files to it from ZFS for my home media server (movies, TV, large backups, etc.). The UNRAID feature I most want to exploit is the ability to use an NVMe cache drive and then purge data to disks after XX days or so; this basically feels like https://github.com/45Drives/autotier in a way.
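The "NVMe cache, then purge to disks after XX days" policy is easy to prototype at the file level. Below is a minimal sketch, assuming a cache directory and a cold directory on separate pools; the directory names and 2-day threshold are illustrative, not anything UNRAID or autotier actually uses.

```python
import os
import shutil
import tempfile
import time
from pathlib import Path

def demote_stale_files(cache_dir, cold_dir, max_age_seconds):
    """Move files not modified within max_age_seconds from the cache
    tier to the cold tier. A toy version of a 'mover' job; the policy
    and names are made up for illustration."""
    moved = []
    now = time.time()
    for path in Path(cache_dir).iterdir():
        if path.is_file() and now - path.stat().st_mtime > max_age_seconds:
            shutil.move(str(path), str(Path(cold_dir) / path.name))
            moved.append(path.name)
    return sorted(moved)

# Demo: one fresh file stays on the cache tier, one stale file is demoted.
cache = tempfile.mkdtemp()
cold = tempfile.mkdtemp()
(Path(cache) / "fresh.mkv").write_text("x")
stale = Path(cache) / "stale.mkv"
stale.write_text("x")
week_ago = time.time() - 7 * 86400
os.utime(stale, (week_ago, week_ago))  # pretend it was written a week ago

result = demote_stale_files(cache, cold, max_age_seconds=2 * 86400)
print(result)  # ['stale.mkv']
```

A real mover would also need to handle open files, subdirectories, and crash-safe moves across filesystems, which is a large part of what makes autotier and UNRAID's mover non-trivial.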
-
Autotier or similar on NixOS
Is anyone using Autotier (https://github.com/45Drives/autotier) or similar on NixOS? How did you set it up?
-
Any news on Qtier or similar reaching QuTS Hero?
Given current pricing, larger SSDs seem to be the better buy: 1 TB drives aren't actually twice the price of 512 GB ones, which in turn aren't twice as expensive as 256 GB units. And while there are benefits to going all-SSD for certain capacities and workloads, in many cases a correctly functioning tiering system is more beneficial. Qnap's Qtier, however, uses dm-thin/mdadm features that are by design not available in QuTS Hero, so the Qtier feature of regular QTS is unavailable on the ZFS-based variant. SSDs can only be used for SSD pools or caches (ZIL for writes, L2ARC for reads, a metadata vdev for metadata and deduplication tables, or a mix of those), similar to QES.
Contrary to the QES target audience, though, many of the ZFS-powered units are reaching a customer base that is more home, SOHO, or even light on budget, for whom "huge" (in disk quantity and overall capacity) resilient all-flash pools are merely a dream, so caches are more or less the go-to option. As stated earlier, this often leads to either overexpenditure or underutilisation of disk resources. Still, Qnap has been quite tight-lipped about any possibility of a Qtier-like solution in the QuTS Hero stack for a long time (presumably either because they don't want to cannibalise the remaining QTS-only or QES sales, or because they haven't been able to come up with one). Given that the underlying system is a tuned Linux, I wonder why an autotier-like (https://github.com/45Drives/autotier) FUSE filesystem has not been prepared as a replacement/alternative.
Sure, block-level monitoring/tiering would have been even better, but the COW nature of ZFS does not play well with regular block-monitoring tools (one could splice something together, but that would require yet another huge table tracking block counts to combine the spinning rust and the silicon, across persistent and ephemeral levels, into one global map of the storage). A FUSE filesystem like autotier, however, works on file-level analysis, which would greatly increase the usefulness of mixed SSD and HDD appliances, possibly without incurring much of a performance impact. Qnap certainly has the manpower and funds to prepare a solution optimised for their devices; while not as good at moving hot data around as a block-level solution, it would be the next best thing without too much hassle, and it could still juggle data around when required.
The question then remains: why hasn't Qnap released such a solution, or is one perhaps in the works as a showcase for QuTS Hero v5? It could greatly improve the QuTS proposition on devices such as the TS-673A (where one could then get a 3-tier solution with 2x NVMe SSDs for the hottest data or caches, and the 6 drive bays populated with 3x HDDs for cold and 3x SATA SSDs for warm data) or its 8-bay counterpart, or the TS-h973AX (where we get 3 tiers by design, with 2x U.2 for hot data or caches, 2x SATA SSDs for warm data, and 5x HDDs for cold storage), as example units at the lower-priced end of the ZFS-based range. This could certainly boost sales of the ZFS units. Or maybe I've been deaf and blind to the corporate PR to the degree that I've missed an official Qnap unveiling of such plans? Or maybe such features have already crept silently into some beta releases and been dropped due to issues?
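The file-level analysis described above boils down to a heat-ranking pass: order files by last access time and greedily promote the hottest ones that still fit on the SSD tier. The sketch below illustrates that general idea only; it is not autotier's actual algorithm, and the names and capacity are made up.

```python
import os
import tempfile
import time
from pathlib import Path

def pick_files_to_promote(hdd_dir, ssd_capacity_bytes):
    """Rank files by access time (hottest first) and greedily select
    those that fit within the SSD tier's capacity. An illustration of
    file-level tiering in the spirit of autotier, not its real logic."""
    files = sorted(
        (p for p in Path(hdd_dir).iterdir() if p.is_file()),
        key=lambda p: p.stat().st_atime,
        reverse=True,                      # most recently accessed first
    )
    chosen, used = [], 0
    for p in files:
        size = p.stat().st_size
        if used + size <= ssd_capacity_bytes:
            chosen.append(p.name)
            used += size
    return chosen

# Demo: three 100-byte files with different last-access times,
# and an SSD tier with room for only two of them.
tier = tempfile.mkdtemp()
now = time.time()
for name, age in [("hot.bin", 0), ("warm.bin", 3600), ("cold.bin", 86400)]:
    p = Path(tier) / name
    p.write_bytes(b"\0" * 100)
    os.utime(p, (now - age, now - age))

promoted = pick_files_to_promote(tier, ssd_capacity_bytes=250)
print(promoted)  # ['hot.bin', 'warm.bin']
```

A block-level tier would instead have to track per-block access counts, which is exactly the "yet another huge table" problem mentioned above; the file-level pass needs only one `stat()` per file.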
-
Hot-Warm-Cold Caching/Storage on Linux with Two SSDs and an HDD
I also found autotier, which implements a sort of storage-tiering solution on top of ZFS.
What are some alternatives?
minio - The Object Store for AI Data Infrastructure
Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]
Ceph - Ceph is a distributed object, block, and file storage platform
react-native-mmkv-storage - An ultra fast (0.0002s read/write), small & encrypted mobile key-value storage framework for React Native written in C++ using JSI
garage - (Mirror) S3-compatible object store for small self-hosted geo-distributed deployments. Main repo: https://git.deuxfleurs.fr/Deuxfleurs/garage
knoxite - A data storage & backup system
cubefs - cloud-native file store
openHistorian - The Open Source Time-Series Data Historian
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017
Java-SerialX - Store Java objects into JSON or any format you want! SerialX is a powerful lightweight utility library to serialize Java objects programmatically via tweakable recursive descent parser for custom domain-specific languages!
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
sanoid - These are policy-driven snapshot management and replication tools which use OpenZFS for underlying next-gen storage. (Btrfs support plans are shelved unless and until btrfs becomes reliable.)