ratarmount VS stealth

Compare ratarmount vs stealth and see what their differences are.

ratarmount

Access large archives as a filesystem efficiently, e.g., TAR, RAR, ZIP, GZ, BZ2, XZ, ZSTD archives (by mxmlnkn)

stealth

:rocket: Stealth - Secure, Peer-to-Peer, Private and Automateable Web Browser/Scraper/Proxy (by tholian-network)
                 ratarmount     stealth
Mentions         10             26
Stars            640            997
Growth           -              1.1%
Activity         9.1            0.0
Last commit      23 days ago    8 months ago
Language         Python         JavaScript
License          MIT License    GNU General Public License v3.0 only
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

ratarmount

Posts with mentions or reviews of ratarmount. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-09-04.
  • Ratarmount: Access large archives as a filesystem efficiently
    1 project | news.ycombinator.com | 10 Apr 2024
  • Show HN: Rapidgzip – Parallel Gzip Decompressing with 10 GB/S
    3 projects | news.ycombinator.com | 4 Sep 2023
  • Ratarmount: Random Access Tar Mount
    1 project | news.ycombinator.com | 14 May 2023
  • Ask HN: Most interesting tech you built for just yourself?
    149 projects | news.ycombinator.com | 27 Apr 2023
    This is basically the same reason why I started with ratarmount (https://github.com/mxmlnkn/ratarmount), but the focus was more on runtime performance and random access, and, as the name suggests, it started out with access to recursive TAR archives. The current version should also work for your use case with recursive ZIPs.
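
    A minimal sketch of that recursive use case (hypothetical file names; this assumes ratarmount is installed and uses its --recursive flag, which also mounts archives found inside the outer archive):

        import os
        import subprocess

        os.makedirs("mnt", exist_ok=True)

        # Mount the outer archive; nested archives then appear as directories.
        subprocess.run(["ratarmount", "--recursive", "outer.tar", "mnt"], check=True)
        print(os.listdir("mnt"))

        # fusermount -u is the usual FUSE unmount helper on Linux.
        subprocess.run(["fusermount", "-u", "mnt"], check=True)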
  • Looking for advice uploading data while at uni. I need to split the data i need to upload to carry it with me
    2 projects | /r/DataHoarder | 11 Oct 2022
    As an added complication, this would need to work under Windows (I need OneNote and that's Windows-only :/ ); this alone makes the majority of solutions that I came up with impossible. One way could've been splitting the data into various TAR files and then mounting those with ratarmount, but... Linux only :( .
  • How Much Faster Is Making a Tar Archive Without Gzip?
    8 projects | news.ycombinator.com | 10 Oct 2022
    Pragzip actually decompresses in parallel and also offers random access. I did a Show HN here: https://news.ycombinator.com/item?id=32366959

    indexed_gzip https://github.com/pauldmccarthy/indexed_gzip can also do random access but is not parallel.

    Both have to do a linear scan first, though. The implementations, however, can do the linear scan on demand, i.e., they scan only as far as needed.
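
    For illustration, a minimal sketch of such on-demand random access with indexed_gzip (the file name is hypothetical):

        import indexed_gzip as igzip

        # The seek-point index is built lazily: a seek only scans the stream
        # up to the requested offset, not all the way to the end.
        with igzip.IndexedGzipFile("large_file.gz") as f:
            f.seek(512 * 1024 * 1024)  # offset in the *decompressed* stream
            chunk = f.read(4096)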

    bzip2 works very well with this approach. xz only works with it when compressed with multiple blocks. The same is true for zstd.

    For zstd, there also exists a seekable variant, which stores the block index at the end as metadata to avoid the linear scan. indexed_zstd offers random access to those files: https://github.com/martinellimarco/indexed_zstd
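
    Usage mirrors indexed_gzip; a sketch, assuming the IndexedZstdFile class name from that project's README:

        from indexed_zstd import IndexedZstdFile

        # Cheap seeks on seekable/multi-frame zstd files; otherwise a linear
        # scan has to build the frame index first.
        f = IndexedZstdFile("data.zst")
        f.seek(1024 * 1024)
        chunk = f.read(4096)
        f.close()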

    I wrote pragzip and also combined all of the other random-access compression backends in ratarmount to offer random access to TAR files that is orders of magnitude faster than archivemount: https://github.com/mxmlnkn/ratarmount

  • Ratarmount – Fast transparent access to archives through FUSE
    2 projects | news.ycombinator.com | 10 Mar 2022
    Or via the experimental AppImage I created this week:

        wget -O ratarmount 'https://github.com/mxmlnkn/ratarmount/releases/download/v0.10.0/ratarmount-manylinux2014_x86_64.AppImage'
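        # AppImages must be made executable before first use; the archive
        # name and mount point below are hypothetical.
        chmod u+x ratarmount
        ./ratarmount archive.tar mnt/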
  • Hop: 25x faster than unzip and 10x faster than tar at reading individual files
    10 projects | news.ycombinator.com | 10 Nov 2021
    I've recently been looking into this same issue because I analyse a lot of data like sosreports or other tar'ed/compressed data from customer systems. Currently I untar these onto my ZFS filesystem, which works out OK because it has zstd compression enabled, but I end up decompressing and recompressing, which is quite expensive, as the files are often GBs or more compressed.

    But I've started using a tool called "ratarmount" (https://github.com/mxmlnkn/ratarmount) which creates an index once (something I could automate our upload system to generate in advance, but you can also just process it locally) and then lets you FUSE-mount the file. This works pretty well, with the only exception that I can't create scratch files inside the directory layout, which in the past I'd wanted to do.

    I was surprised how hard a problem it is to get a bundle file format that is indexable and compressed with a good and fast compression algorithm, which mostly boils down to zstd at this point.

    While it works quite well, especially with gzip and bzip2, sadly zstd and xz (and some other compression formats) don't allow decompressing only parts of a file by default, even though it's possible; the default tools just aren't doing it. The nitty-gritty details are summarised here:

  • Is there a way to accelerate extracting .tar contents?
    1 project | /r/linuxquestions | 29 Jun 2021
    Well, you could try to skip extraction and access the tar archive using ratarmount, and stack overlayfs on top to allow writing, but that will have an impact on compilation time.
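
    A minimal sketch of that suggestion, with hypothetical paths (the overlayfs mount itself needs root):

        import os
        import subprocess

        for d in ("lower", "upper", "work", "merged"):
            os.makedirs(d, exist_ok=True)

        # Read-only view of the archive via ratarmount (FUSE).
        subprocess.run(["ratarmount", "source.tar", "lower"], check=True)

        # Writable overlay on top: reads fall through to the archive,
        # writes land in the "upper" directory.
        subprocess.run([
            "sudo", "mount", "-t", "overlay", "overlay",
            "-o", "lowerdir=lower,upperdir=upper,workdir=work",
            "merged",
        ], check=True)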

stealth

Posts with mentions or reviews of stealth. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-27.
  • Ask HN: Most interesting tech you built for just yourself?
    149 projects | news.ycombinator.com | 27 Apr 2023
    Two years ago I decided to build my own web browser, with the underlying idea of using the internet more efficiently (and of force-caching everything).

    It took a while to find the right architecture, but it's still an unfinished, ambitious project. You can probably spend forever working on HTML and CSS fixes alone...

    [1] https://github.com/tholian-network/stealth

  • The FBI Identified a Tor User
    3 projects | news.ycombinator.com | 17 Jan 2023
    From a technological point of view, Tor still has a couple of flaws which make it vulnerable to the metadata-logging systems of ISPs:

    - it needs a trailing non-zero buffer, randomized by the size of the payload, so that stream sizes and durations don't match (a toy sketch of this follows after this list)

    - it needs a request scattering feature, so that the requests for a specific website don't get proxied through the same nodes/paths

    - it needs a failsafe browser engine, which doesn't give a flying damn about WebRTC and decides to actively drop features.

    - it needs to stop monkey-patching out ("stubbing") the APIs that are compromising user privacy, and start removing those features.
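
    To illustrate the first point, here is a toy sketch of payload padding in Python (not Tor's or Stealth's actual code; a real protocol would also need length framing so the receiver can strip the trailer again):

        import secrets

        def pad(payload: bytes) -> bytes:
            """Append a random-length, non-zero trailer so that the size on
            the wire no longer reveals the true payload size."""
            trailer_len = secrets.randbelow(len(payload) + 1)
            trailer = bytes(1 + secrets.randbelow(255) for _ in range(trailer_len))
            return payload + trailer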

    I myself started a WebKit fork a while ago but eventually had to give up due to the sheer amount of work required to maintain such an engine project. I called it RetroKit [1], and I documented what kind of features in WebKit were already usable for tracking and had to be removed.

    I'm sorry to be blunt here, but all that user-privacy-valuing Electron bullshit that uses embedded Chrome in the background doesn't cut it anymore. And neither does Firefox, which literally goes rogue in an endless loop of requests when you block their tracking domains. The config settings in Firefox don't change shit anymore; it will keep requesting the tracking domains. It does the same in Librefox and all the *wolf profile variants; just use a local eBPF firewall to verify. I added my incomplete opensnitch ruleset to my dotfiles for others to try out. [3]

    If I were to rewrite a browser engine today, I'd probably go for golang. But golang probably makes handling arbitrary network data a huge pain, so it's kind of useless for failsafe HTML5 parsing.

    [1] https://github.com/tholian-network/retrokit

    [2] (the browser using retrokit) https://github.com/tholian-network/stealth

    [3] https://github.com/cookiengineer/dotfiles/tree/master/softwa...

  • The Iran Firewall: A preliminary report
    3 projects | news.ycombinator.com | 28 Oct 2022
    Most of the things you mentioned are implemented in the "Browser" that I've built. It uses multicast DNS to discover neighbouring running instances and it has an offline-cache-first mentality, which means that e.g. download streams are shared among local peers.
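
    As an illustration of the multicast-DNS discovery idea (Stealth itself is JavaScript; this Python sketch uses the python-zeroconf library, and the service type is made up):

        from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

        class PeerListener(ServiceListener):
            def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
                info = zc.get_service_info(type_, name)
                if info:
                    print("peer found:", name, info.parsed_addresses(), info.port)

            def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
                print("peer gone:", name)

            def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
                pass

        zc = Zeroconf()
        # "_stealth._tcp.local." is a hypothetical service type for this sketch.
        browser = ServiceBrowser(zc, "_stealth._tcp.local.", PeerListener())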

    Global peer discovery is solved by mapping identifiers via the reserved TLD, and by mutual TLS for identification and verification. So peers are basically pinned client certificates in your local settings.

    It works for most cases; I had to implement a couple of breakout tunnel protocols, though, so that peer discovery still works when known IPs/ASNs are blocked.

    Relaying and scattering traffic works automatically, so that no correlation of IPs to scraped websites can be done by a MITM. Tunnel protocols are all generically implemented; DNS exfiltration, HTTPS smuggling, ICMP tunnels, and pwnat already work pretty reliably.

    Lots of work to be done though, and I had to focus on a couple of other things first before I can get back to the project.

    [1] https://github.com/tholian-network/stealth

  • There are no Internet Browsers that cannot be tracked, or are there?
    3 projects | /r/hacking | 17 Sep 2022
    I'm trying to go a different route with Stealth, my programmable peer-to-peer web browser that can offload and relay traffic intelligently - and with RetroKit, my WebKit fork that aims to remove all JavaScript APIs that can be used for fingerprinting and/or tracking.
  • Ask HN: How you would redesign a web browser?
    1 project | news.ycombinator.com | 14 Feb 2022
    I think that in order to increase privacy and - more importantly - reduce the attack surface of a Web Browser more efficiently, there will have to be two modes of web browsing.

    Regular browsing - in my opinion - should default to privacy and security first, whereas trust in web apps should be granted on a per-domain basis. This is basically what I'm doing in a crappy manner, where I have all my browser extensions in regular browsing mode with uBlock Origin, Cookie AutoDelete and whatnot... and where I use Incognito Mode to use web apps.

    In the future I believe that a decentralized Web Browser has an almost infinite number of advantages when it comes to bypassing censorship, increasing trust, and the ledging aspect of (temporary) online resources.

    Currently, my idea of a sane Web Browser architecture is that the Browser itself is actually a locally running peer-to-peer web-scraper service, and the "frontend or GUI" is a bundled webview that points to localhost:someport. Web apps can then be used by spawning a new webview instance that's sandboxed, with its profile in a temporary folder, so it cannot infect/spread across the regular profile folder that's used for the "regular private browsing" mode.

    This architecture allows all kinds of benefits, as everything can be filtered, cleaned, verified (and shared with other peers) at the network level - whereas browser extensions currently cannot filter any HTTP responses because there's no API for that.

    AdBlockers are currently based on a disallow-list concept, which means the advantage is always on the advertising side; by default nothing is filtered, and scammers/blackhats always have the advantage. By the time something gets added to a filter list, lots of people's machines have already been compromised. But what if AdBlockers changed instead to an allow-list concept - meaning that the Browser maintains a list of resources that are allowed to load per domain, with the default being just text and images?
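
    A toy sketch of that allow-list idea (the table and resource-type names are hypothetical; real browsers categorize requests similarly):

        from urllib.parse import urlparse

        # Default: only the document itself, text, and images may load.
        DEFAULT_ALLOWED = {"document", "text", "image"}

        # Per-domain opt-ins maintained by the user/browser.
        PER_DOMAIN = {
            "example.com": DEFAULT_ALLOWED | {"stylesheet", "script"},
        }

        def should_load(page_url: str, resource_type: str) -> bool:
            domain = urlparse(page_url).hostname or ""
            return resource_type in PER_DOMAIN.get(domain, DEFAULT_ALLOWED)

        assert should_load("https://example.com/a", "script")
        assert not should_load("https://other.org/b", "script")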

    If you want to take a look at where it's at right now [1] [2], my Browser is open source; and I hope to fund development via access fees for a peer-to-peer "Knowledge Tracker" that allows sharing automations for the web with other peers, aka macros, reader-mode-like extraction beacons, and other awesome treats (p2p search and recommendations are basically included in this concept).

    [1] https://github.com/tholian-network/stealth

    [2] https://github.com/tholian-network/retrokit

  • No-JavaScript Fingerprinting
    4 projects | news.ycombinator.com | 6 Feb 2022
    Note that among a sea of tracked browsers, the untrackable browser shines like a bright star.

    Statistical analysis of these values over time (matched with client hints, ETags, If-Modified-Since, and IPs) will make most browsers uniquely identifiable.

    If the malicious vendor is good, they even correlate the size and order of requests, because that's unique as well and can identify Tor browsers pretty easily.

    It's like saying "I can't be tracked, because I use Linux". Guess what: as long as nobody else in your town uses Linux, you are the most trackable person.

    I decided to go with "behave as the statistical norm expects you to behave", and created my browser/scraper [1] and forked WebKit into a webview [2] that doesn't support anything that can be used for tracking; the idea being that those tracking features can be shimmed and faked.

    I personally think this is the only way to be untrackable these days. Because let's be honest, nobody uses Firefox with ETP in my town anymore :(

    WebKit was a good starting point for this because at least some of the features were implemented behind compiler flags... whereas all other browsers and engines can't be built without, say, WebRTC support, or without Audio Worklets, which by themselves are enough to uniquely identify a browser.

    [1] https://github.com/tholian-network/stealth

    [2] https://github.com/tholian-network/retrokit

    (both WIP)

  • We Have A Browser Monopoly Again and Firefox is The Only Alternative Out There
    6 projects | /r/programming | 1 Jan 2022
    Currently my primary motivating factor is my own browser, Stealth, that I'm building; that, and the lack of alternatives.
  • Tholian® Stealth - Secure, Peer-to-Peer, Private and Automatable Web Browser/Scraper/Proxy for the Web of Truth and Knowledge. Goals: increased Privacy, increased Automation, adaptive Semantic Understanding. Web Scraper + Web Service + Web Proxy
    1 project | /r/AltTech | 21 Oct 2021
  • Pirate Party member: GDPR-compliant Whois will lead to 'doxxing and death lists'
    3 projects | news.ycombinator.com | 17 Oct 2021
    I'm building a peer-to-peer Browser network that relies on trust ratios/factors in order to find out the seed/leech ratio of sharing content, producing content, etc.

    The problem I'm currently trying to solve: I had the idea of a vendor profile that contains the necessary information for IP ranges (ASN, organization, region, country, ISP/NAT, etc.) so that the discovery service doesn't have to look this up.

    It's the basic idea of an offline "map of the internet" that approximates who does what with which amount of data (e.g. data-center IPs aren't trustworthy, or IPs behind the same ISP NAT could be censored the same way when it comes to blocked websites, etc.).

    At this point it's a big experiment and I'm not sure whether I'm fundamentally wrong about this as I don't have any data to back it up.

    If you're curious, it's part of the Stealth Browser I'm building [1] and [2]

    [1] https://github.com/tholian-network/stealth

    [2] https://github.com/tholian-network/stealth-vendor

  • A climate activist arrested after ProtonMail provided his IP address
    3 projects | news.ycombinator.com | 5 Sep 2021
    > Does anyone here have a feasible way to solve this?

    With current solutions like Tor, I2P, VPNs and/or mobile proxy services, it's just a matter of time and legality until they become obsolete.

    Tor and I2P aren't worth a shit if everybody knows it was a Tor exit node, and Cloudflare shows you tracking captchas anyway.

    Same for VPNs and mobile proxies; most are known due to their static IP ranges. Note that most mobile proxy services actually use malware installed on smartphones, so technically you're helping the blackhats by using them, and technically, if the federal agencies find out, you'll probably end up in some lawsuit, filed as an anonymous party that helped them DDoS a victim.

    I am convinced that the only way to solve this is by simply not downloading the website from its origin. The origin tracks you, so don't talk to them. Talk to your peers and receive a ledged copy of it instead.

    The only problem is that this contradicts all that came after Web 2.0, because every website _wants_ unique identities for every person visiting them; including ETag-based tracking mechanisms of CDNs.

    I don't think it's possible while supporting Web Browser APIs the same way in JavaScript (as of now, due to fetch and XHR and how WebSockets are abused for HDCP/DRM to prevent caching), but I think that a static-website-delivering network with a trustless, cryptography-based, peer-to-peer, end-to-end-encrypted, statistically correct cache is certainly feasible. I believe that because that's exactly what I've been building for the last two years [1].

    [1] https://github.com/tholian-network/stealth

What are some alternatives?

When comparing ratarmount and stealth you can also consider the following projects:

tarindexer - python module for indexing tar files for fast access

Holy-Unblocker - Holy Unblocker is a web proxy service that helps you access websites that may be blocked by your network or browser. It does this securely and with additional features.

asar - Simple extensive tar-like archive format with indexing

nyxt - Nyxt - the hacker's browser.

PyFilesystem2 - Python's Filesystem abstraction layer

cname-trackers - This repository contains a list of popular CNAME trackers

pixz - Parallel, indexed xz compressor

ClearURLs-Addon - ClearURLs is an add-on based on the new WebExtensions technology and will automatically remove tracking elements from URLs to help protect your privacy.

InstaPy - 📷 Instagram Bot - Tool for automated Instagram interactions

FTL - The Pi-hole FTL engine

icoextract - Extract icons from Windows PE files (.exe/.dll)

brotab - Control your browser's tabs from the command line