wg-meshconf vs. Seaweed File System

| | wg-meshconf | Seaweed File System |
|---|---|---|
| Mentions | 6 | 49 |
| Stars | 882 | 14,960 |
| Growth | - | - |
| Activity | 0.0 | 9.9 |
| Last commit | 24 days ago | almost 2 years ago |
| Language | Python | Go |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wg-meshconf
- WireGuard mesh between 4 PCs, similar to Tailscale
-
Updated MinIO NVMe Benchmarks: 2.6Tbps on Get and 1.6 on Put
My experience: I don't know if this is comparable, and I haven't kept notes, but from memory, I tried MinIO in December and switched to SeaweedFS a few weeks ago. My use case was transitioning from local file storage to a DFS, and also enabling our developers to move from the local filesystem to S3. Since my resources are limited (vSphere VMs, 3 hosts, different disks), I wanted an easy-to-set-up system that supports S3, and after researching different options (Ceph, longhorn.io, ...), I first built a 3-VM cluster with MinIO. I relied a lot on what other people had measured and chose MinIO first because it supported mounting via S3.

Then I tried to copy over about 34 million files (mostly a few bytes each, but some up to 1 GB), roughly 4.2 TB in total. I tried different methods (rsync, cp, cp with parallelism, ...) and at best it took me about 3 days to copy over 300 GB of data. I also found that it was impossible to list files: we have a single folder with over 300k project directories (GUIDs) beneath it, and growing. After that I gave SeaweedFS a shot. The reason I didn't use it in the first place was that its documentation was a bit confusing and didn't answer my questions as quickly as MinIO's did.
Now, my SeaweedFS setup is a 3-VM cluster with 3 disks (1 TB each) per VM. I configured a WireGuard mesh (https://github.com/k4yt3x/wg-meshconf) between the VMs and set up the master and volume servers to talk to each other securely over the WireGuard IPs. I also configured ufw to only allow communication between the HTTP/gRPC ports. The filer (using leveldb3) likewise reaches the masters and volume servers over the WireGuard IPs, and ufw lets it communicate with a few specific servers on the outside.
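For context, the mesh part of this setup can be sketched roughly as follows with wg-meshconf. This is only an illustration: the node names, addresses, and endpoints are made up, not taken from the original post.

```
# create the peer database and add one entry per VM
wg-meshconf init
wg-meshconf addpeer vm1 --address 10.0.0.1/24 --endpoint vm1.example.com
wg-meshconf addpeer vm2 --address 10.0.0.2/24 --endpoint vm2.example.com
wg-meshconf addpeer vm3 --address 10.0.0.3/24 --endpoint vm3.example.com

# generate one WireGuard .conf per peer (to be copied to each VM)
wg-meshconf genconfig
```

The SeaweedFS services are then bound to the 10.0.0.x WireGuard addresses so that master/volume/filer traffic stays inside the encrypted mesh.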
After that, I mounted the filer via weed mount on that specific server and tried to copy over the same files and folders. After 2 days I had copied about 1.5 TB of the data via rsync. There was also no problem with listing files or accessing the filer from different machines while uploading. There is some overhead when reading and creating lots of small files, but file listing is even faster than listing on a local btrfs filesystem.
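The mount-then-copy step above looks roughly like this; the filer address and paths are illustrative, not from the post.

```
# expose the filer as a local FUSE mount (address/paths are examples)
weed mount -filer=10.0.0.1:8888 -dir=/mnt/seaweedfs

# then migrate data with ordinary filesystem tools
rsync -a /local/projects/ /mnt/seaweedfs/projects/
```

Because the mount behaves like a normal directory, other machines can read and list through the filer while the rsync is still running.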
Chris (the SeaweedFS author) is also very responsive and quick to fix bugs.
-
Connect to wireguard server over a wireguard server -> client connection
Hey, you should post your wg0.conf. If you would like to build a WireGuard mesh, try this: https://github.com/k4yt3x/wg-meshconf
-
How to add new client to wireguard in VPS without getting public IP changed on the client?
There are two factors at play here. The client's public IP depends on the gateway it uses to access the internet. You can disable routing, in which case your clients keep their own public IPs because general internet traffic won't go through the VPS. However, if you also want the traffic between clients to skip the VPS, then you want a mesh network; wesher and wg-meshconf can help you configure one.
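The routing distinction above comes down to the peer's `AllowedIPs` in the client's WireGuard config. A minimal sketch, with illustrative addresses and subnet:

```
# Hub peer entry in a client's wg0.conf

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820

# Full tunnel: all traffic, including general internet traffic,
# is routed through the VPS, so the client's public IP becomes the VPS's.
AllowedIPs = 0.0.0.0/0

# VPN-only alternative: only the WireGuard subnet goes through the VPS;
# regular internet traffic keeps using the client's own gateway and public IP.
# AllowedIPs = 10.0.0.0/24
```

In a full mesh, each client additionally lists every other client as a peer, so client-to-client traffic never touches the VPS at all.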
-
Wiretrustee: WireGuard-Based Mesh Network
Looks great!
I've been using wg-meshconf[1] to help set up WireGuard mesh networks on Linux for a while, and it works amazingly well!
A major use case is setting up Kubernetes clusters, where network encryption is extremely important.
[1]: https://github.com/k4yt3x/wg-meshconf
- WireGuard full mesh configuration generator
Seaweed File System
- An open-source distributed object storage service
-
Moving to github.com/seaweedfs/seaweedfs
FYI: Planning to move from github.com/chrislusf/seaweedfs to github.com/seaweedfs/seaweedfs in the coming days. It may cause some problems for package references, builds, documentation, and links. Sorry for the change!
-
S3 Isn't Getting Cheaper
Besides storage itself, S3 API access costs can be high if data is frequently accessed, and latency is unpredictable.
You can use the SeaweedFS Remote Object Store Gateway to cache S3 (or any S3 API-compatible vendor) on local servers, access the data at local network speed, and asynchronously sync changes back to S3.
https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remot...
- Release 3.12 · chrislusf/seaweedfs
-
Minio in production
If you are looking at MinIO you might find SeaweedFS interesting as well.
- SeaweedFS and YDB
-
Cost effective managed key-value store?
I believe what you want is a horizontally scalable object store with tiered storage. SeaweedFS is free / open source https://github.com/chrislusf/seaweedfs
- A way to store and query large (up to 1GB) user defined objects.
-
Question: does anyone know Storage Provider with S3 as persistence layer?
I don't know if it fits all of your requirements, but you can take a look at SeaweedFS, which is pretty good.
-
Introducing Garage, our self-hosted distributed object storage solution
Seaweedfs deserves a mention here for comparison as well.
What are some alternatives?
wesher - wireguard overlay mesh network manager
minio - The Object Store for AI Data Infrastructure
headscale - An open source, self-hosted implementation of the Tailscale control server
Ceph - Ceph is a distributed object, block, and file storage platform
tinc - a VPN daemon
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017
cjdns - An encrypted IPv6 network using public-key cryptography for address allocation and a distributed hash table for routing.
Apache Hadoop - Apache Hadoop
netbird - Connect your devices into a single secure private WireGuard®-based mesh network with SSO/MFA and simple access controls.
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
Netmaker - Netmaker makes networks with WireGuard. Netmaker automates fast, secure, and distributed virtual networks.
lizardfs - LizardFS is an Open Source Distributed File System licensed under GPLv3.