sandcrawler vs Seaweed File System

| | sandcrawler | Seaweed File System |
|---|---|---|
| Mentions | 2 | 49 |
| Stars | 23 | 14,960 |
| Growth | - | - |
| Activity | 0.0 | 9.9 |
| Last Commit | over 1 year ago | almost 2 years ago |
| Language | HTML | Go |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sandcrawler
- Internet Archive Infrastructure
Check out this experiment they did: https://github.com/internetarchive/sandcrawler/blob/master/p... This was just one part of the infrastructure.
The point is that Ceph & friends have a lot of overhead. An example in Ceph: by default, a file in the S3 layer is split into 4MB chunks, and each of those chunks is replicated or erasure-coded. Using the same erasure coding as Wasabi or B2 Cloud, which is 16+4=20 (or 17+3=20), each of those 4MB chunks is split into 20 shards of ~250KB each (16 data shards of 4MB/16 = 256KB, plus 4 parity shards of the same size). Each of those shards ends up carrying ~512B to 4KB of metadata.
So that's 10KB to 80KB of metadata for a single 4MB chunk.
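The arithmetic above can be sketched as a quick back-of-the-envelope calculation (assuming the 16+4 erasure-coding scheme and the per-shard metadata range mentioned in the comment):

```python
# Back-of-the-envelope Ceph erasure-coding overhead, per the numbers above.
CHUNK = 4 * 1024 * 1024               # a 4MB chunk in the S3 layer
DATA_SHARDS, PARITY_SHARDS = 16, 4    # 16+4=20 erasure coding
TOTAL_SHARDS = DATA_SHARDS + PARITY_SHARDS

shard_size = CHUNK // DATA_SHARDS     # each data shard holds 1/16 of the chunk
meta_low, meta_high = 512, 4 * 1024   # ~512B to ~4KB of metadata per shard

print(f"shard size: {shard_size // 1024}KB")  # 256KB per shard
print(f"metadata per 4MB chunk: {TOTAL_SHARDS * meta_low // 1024}KB"
      f" to {TOTAL_SHARDS * meta_high // 1024}KB")  # 10KB to 80KB
```

With 17+3 coding the shard size drops to ~241KB, but the 20-shard metadata total stays the same.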
- Fast and easily scalable self-hosted storage solution
Seaweed File System
- An open-source distributed object storage service
- Moving to github.com/seaweedfs/seaweedfs
FYI: Planning to move from github.com/chrislusf/seaweedfs to github.com/seaweedfs/seaweedfs in the coming days. It may cause some problems with package references, builds, documentation, and links. Sorry for the change!
- S3 Isn't Getting Cheaper
Besides storage itself, S3 API access costs can be high for frequently accessed data, and latency is unpredictable.
You can use the SeaweedFS Remote Object Store Gateway to cache S3 (or any S3-API-compatible vendor) on local servers, access the data at local network speed, and asynchronously sync changes back to S3.
https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remot...
- Release 3.12 · chrislusf/seaweedfs
- Minio in production
If you are looking at MinIO you might find SeaweedFS interesting as well.
- SeaweedFS and YDB
- Cost effective managed key-value store?
I believe what you want is a horizontally scalable object store with tiered storage. SeaweedFS is free / open source: https://github.com/chrislusf/seaweedfs
- A way to store and query large (up to 1GB) user defined objects.
- Question: does anyone know Storage Provider with S3 as persistence layer?
I don't know if it fits all of your requirements, but you can take a look at SeaweedFS, which is pretty good.
- Introducing Garage, our self-hosted distributed object storage solution
Seaweedfs deserves a mention here for comparison as well.
What are some alternatives?
pywb - Core Python Web Archiving Toolkit for replay and recording of web archives
minio - The Object Store for AI Data Infrastructure
ArchiveBox - Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
Ceph - Ceph is a distributed object, block, and file storage platform
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017
Apache Hadoop - Apache Hadoop
MooseFS - Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
lizardfs - LizardFS is an Open Source Distributed File System licensed under GPLv3.
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
Docker Notary - A project that allows anyone to have trust over arbitrary collections of data
autotier - A passthrough FUSE filesystem that intelligently moves files between storage tiers based on frequency of use, file age, and tier fullness.
Alluxio (formerly Tachyon) - Alluxio, data orchestration for analytics and machine learning in the cloud