Tahoe-LAFS
Apache Hadoop
| | Tahoe-LAFS | Apache Hadoop |
|---|---|---|
| Mentions | 9 | 26 |
| Stars | 1,276 | 14,316 |
| Growth | 0.4% | 0.9% |
| Activity | 9.6 | 9.9 |
| Latest commit | about 1 month ago | 2 days ago |
| Language | Python | Java |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Tahoe-LAFS
-
Distributed Network File System
You could also look at Tahoe-LAFS which I keep meaning to try: https://tahoe-lafs.org/
-
Merging with diff3: the “three-way merge”
Then there are Darcs and Pijul, which use a theory of patches.
So Pijul manages to have lossless merges by actually storing a directed graph (though of course, you will still need to decide how to flatten that into a displayed file):
https://jneem.github.io/pijul/
And because it uses more information about the history, it is able to do smarter merges (if I am not mistaken, even compared to the OP?):
https://tahoe-lafs.org/~zooko/badmerge/simple.html
https://pijul.org/faq
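For context on what Darcs and Pijul improve on, the classic three-way merge can be sketched in a few lines. This is a toy, line-for-line version (real diff3 also aligns insertions and deletions, and the badmerge link above shows where its heuristics go wrong):

```python
def merge3(base, ours, theirs):
    # Toy three-way merge: assumes line-for-line edits (no insertions/deletions).
    # Take whichever side changed a line; if both changed it differently, conflict.
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o == t:
            merged.append(o)   # same on both sides (or unchanged)
        elif o == b:
            merged.append(t)   # only "theirs" changed this line
        elif t == b:
            merged.append(o)   # only "ours" changed this line
        else:
            raise ValueError(f"conflict on line: {o!r} vs {t!r}")
    return merged

base   = ["one", "two", "three"]
ours   = ["ONE", "two", "three"]   # we edited line 1
theirs = ["one", "two", "THREE"]   # they edited line 3
print(merge3(base, ours, theirs))  # ['ONE', 'two', 'THREE']
```

The ancestor is what lets the merge tell "we changed it" apart from "they changed it"; patch-theory systems like Pijul keep even more of that history around.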
- The Tahoe-LAFS decentralized secure filesystem Version 1.17.0
-
The Underwhelming Impact of Software Engineering Research (April 2022)
Good news for you: I'm well on the way to solving the problem of better code merging. Specifically, the algorithms I am developing appear to be able to do a correct merge on both [1] and [2]. They also appear capable of merging binary data.
The tradeoff is that people need to write some code to tell the VCS about the format of each binary file type or semantics of each programming language.
The biggest problem is that, like Rust, a new VCS has to be well-executed to make its innovation stick. We'll see if I succeed.
[1]: https://tahoe-lafs.org/~zooko/badmerge/simple.html
[2]: https://tahoe-lafs.org/~zooko/badmerge/concrete-bad-semantic...
- Anybody know of self-hosted server software that can unify or pool multiple cloud storage accounts?
-
Nextcloud listened to Linus' "Unraid Friends" idea (maybe) and implemented P2P backup in Nextcloud Hub II!
u/nextale shared a couple of options: Tahoe-LAFS, Duplicati and Retroshare
-
Anything similar to StorJ? For self hosted purposes?
Only thing that comes close is https://tahoe-lafs.org
-
About Linus' WAN show notes about backups and losing data: I think there does exist something like what he describes that fits the bill
There is Tahoe-LAFS, which is decentralized open-source software where you can add remote storage servers (for example, on a friend's server) to store your data, but the servers do not have the encryption keys. Your data is encrypted before it leaves your computer (they call it the "Least Authority File System", or LAFS, because only you hold the keys; the storage servers just store the data). The data is encrypted in transit and at rest, and the system supports multiple nodes, so even if one of the servers burns down you still have the same data elsewhere. I believe they offer a commercial storage solution, but you and your friends could install it yourselves and run a closed network.
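The encrypt-before-upload idea in that comment can be sketched as follows. This is a toy illustration only, not Tahoe's actual cipher (Tahoe-LAFS uses AES plus erasure coding; a SHA-256 counter-mode keystream stands in here so the sketch needs only the standard library). The point it shows: the storage server only ever receives the ciphertext blob, never the key.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 over key || nonce || counter. Illustration only,
    # not a vetted cipher -- real systems use AES or ChaCha20.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

key = secrets.token_bytes(32)          # never leaves the client
blob = encrypt(key, b"family photos")  # this blob is all the server stores
assert decrypt(key, blob) == b"family photos"
```

Replicating that opaque blob to several friends' servers gives the "one server burns down" resilience without any of them being able to read the data.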
-
DEFFS - my custom FUSE filesystem
Do you know https://tahoe-lafs.org? Your goals sound similar.
Apache Hadoop
-
Getting thousands of files of output back from a container
Did you check out tools like https://hadoop.apache.org/ ?
-
Trying to run hadoop using docker
Check out the various Dockerfiles bundled with Hadoop on GitHub. You can point to them from within docker-compose. They haven't been updated in a couple of years, though.
- Unveiling the Analytics Industry in Bangalore
-
5 Best Practices For Data Integration To Boost ROI And Efficiency
There are different ways to implement parallel dataflows, such as using parallel data processing frameworks like Apache Hadoop, Apache Spark, and Apache Flink, or using cloud-based services like Amazon EMR and Google Cloud Dataflow. It is also possible to use parallel dataflow frameworks to handle big data and distributed computing, like Apache Nifi and Apache Kafka.
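The split all of these frameworks exploit can be shown in miniature: mappers compute partial results per shard independently, and an associative reduce merges them. A hypothetical in-process sketch (the frameworks distribute this same shape across machines):

```python
from collections import Counter
from functools import reduce

def map_shard(lines):
    # "Mapper": count words in one shard of the input, independently.
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def merge_counts(acc, partial):
    # "Reducer": merging partial counts is associative, so it can run
    # in any order, in parallel, across the cluster.
    acc.update(partial)
    return acc

shards = [["big data big"], ["data flows"], ["big flows"]]
partials = [map_shard(s) for s in shards]  # each would run on its own node
total = reduce(merge_counts, partials, Counter())
print(total["big"])  # 3
```

Because the per-shard work shares no state, adding machines scales the map phase almost linearly, which is the ROI argument for parallel dataflows in the first place.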
- Hadoop or Spark?
-
Data Engineering and DataOps: A Beginner's Guide to Building Data Solutions and Solving Real-World Challenges
There are several frameworks available for batch processing, such as Hadoop, Apache Storm, and DataTorrent RTS.
-
Effortlessly Set Up a Hadoop Multi-Node Cluster on Windows Machines with Our Step-by-Step Guide
A copy of Hadoop installed on each of these machines. You can download Hadoop from the Apache website, or you can use a distribution like Cloudera or Hortonworks.
-
In One Minute : Hadoop
The Apache™ Hadoop™ project develops open-source software for reliable, scalable, distributed computing.
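Hadoop's classic programming model is MapReduce, and with Hadoop Streaming the mapper and reducer can be plain scripts reading and writing "key\tvalue" lines. A minimal word-count sketch, with the shuffle/sort phase simulated locally (on a real cluster, Hadoop performs the sort between the two phases):

```python
from itertools import groupby

def mapper(lines):
    # Emit one "word\t1" line per word, as a Hadoop Streaming mapper would.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(sorted_lines):
    # Input arrives sorted by key, so equal words are adjacent;
    # sum the counts for each run of identical keys.
    pairs = (line.split("\t") for line in sorted_lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

# Simulate the shuffle/sort between map and reduce on a tiny input:
mapped = sorted(mapper(["reliable scalable distributed", "scalable distributed"]))
print(list(reducer(mapped)))
# ['distributed\t2', 'reliable\t1', 'scalable\t2']
```

The same two functions, reading stdin and writing stdout, could be handed to Hadoop Streaming to run over terabytes; the framework handles the splitting, sorting, and fault tolerance.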
-
Elon Musk dissolves Twitter's board of directors
So, clearly with your AP CS class and PLC logic knowledge, if you were dumped into a codebase like Hadoop, QT, or TensorFlow you'd be able to quickly and competently analyze what is going on with that code, understand all the libraries used, know the reasons why certain compromises were made, and be able to make suggestions on how to restructure the code in a different way? Because I've been programming for coming up on two decades and unless a system is within the domains that I have experience in, I would not be able to provide any useful information without a massive onboarding timeline, and definitely wouldn't be able to help redesign anything until actually coding within the system for a significant amount of time.
-
A peek into Location Data Science at Ola
This requires the use of distributed computation tools such as Spark, Hadoop, Flink, and Kafka. But for occasional experimentation, Pandas, GeoPandas, and Dask are some of the commonly used tools.
What are some alternatives?
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
GlusterFS - Gluster Filesystem : Build your distributed storage in minutes
Ceph - Ceph is a distributed object, block, and file storage platform
Seaweed File System - SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lake, for billions of files! Blob store has O(1) disk seek, cloud tiering. Filer supports Cloud Drive, cross-DC active-active replication, Kubernetes, POSIX FUSE mount, S3 API, S3 Gateway, Hadoop, WebDAV, encryption, Erasure Coding. [Moved to: https://github.com/seaweedfs/seaweedfs]
Weka
Nextcloud - ☁️ Nextcloud server, a safe home for all your data
MooseFS - MooseFS – Open Source, Petabyte, Fault-Tolerant, Highly Performing, Scalable Network Distributed File System (Software-Defined Storage)
Camlistore - Perkeep (née Camlistore) is your personal storage system for life: a way of storing, syncing, sharing, modelling and backing up content.
GlusterFS - Web Content for gluster.org -- Deprecated as of September 2017