scripts VS stealth

Compare scripts and stealth to see how they differ.

scripts

Various scripts I wrote when using FreeBSD/Linux/UNIX systems for 15+ years. (by vermaden)

stealth

:rocket: Stealth - Secure, Peer-to-Peer, Private and Automateable Web Browser/Scraper/Proxy (by tholian-network)
                scripts               stealth
Mentions        16                    26
Stars           139                   992
Growth          -                     0.4%
Activity        7.7                   0.0
Last commit     about 2 months ago    7 months ago
Language        Shell                 JavaScript
License         -                     GNU General Public License v3.0 only
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

scripts

Posts with mentions or reviews of scripts. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.
  • Anyone here daily drive FreeBSD as their operating system?
    2 projects | /r/freebsd | 10 Dec 2023
    Check out Vermaden's site: https://vermaden.wordpress.com/
  • Ask HN: Most interesting tech you built for just yourself?
    149 projects | news.ycombinator.com | 27 Apr 2023
    I mostly do interesting stuff on FreeBSD and it's all documented in as detailed a form as possible here:

    - https://vermaden.wordpress.com/

    Regards,

  • Problems that i encountered on FreeBSD and solution
    1 project | /r/freebsd | 10 Apr 2023
    You might want to check out Vermaden (https://vermaden.wordpress.com/) and RoboNuggie (https://youtube.com/@RoboNuggie) - both excellent resources on how to get things done on the desktop in FreeBSD. Salute
  • FreeBSD Desktop Users: Suggestions for a New User?
    1 project | /r/freebsd | 16 Feb 2023
  • Should I just migrate to *BSD?
    1 project | /r/freebsd | 28 Jan 2023
  • Ask HN: How do people find your blog?
    3 projects | news.ycombinator.com | 18 Dec 2022
    I always wanted to start and write a blog - just to share some things that others may find useful.

    I started with something simple - entirely preloaded (all howtos) and static:

    1. http://www.strony.toya.net.pl/~vermaden/links.htm

    I assume no one ever visited it ... besides me, of course.

    Then some time later I thought that having that 'static' links site was pointless - let's start a 'proper' blog. This time I chose Google Blogspot.

    2. https://vermaden.blogspot.com/

    ... and after several posts I generally abandoned it.

    Several years later I made a decision to make another blog ... but this time with some strategy behind.

    3. https://vermaden.wordpress.com/

    This (3rd) attempt was 'successful' and people sometimes actually visit my blog - sometimes they even comment. In March of 2023 I will 'celebrate' the 5th year of that blog. I have made about 100 posts there and it gets about 100,000+ views per year:

    - https://i.imgur.com/raWvrZj.png

    What is the secret of [3.] being successful and [1.] and [2.] definitely not? Sharing.

    I do not know what blog (subject matter) you are trying to share - but for IT/UNIX/BSD/Linux related blogs (as mine) you need to share each post on these mediums:

    - mastodon

    - twitter

    - lobsters

    - hacker news

    - FreeBSD forums

    - reddit (r/BSD)

    - reddit (r/FreeBSD)

    - reddit (r/unix)

    - reddit (r/linux)

    - linkedin

    Not sure about Facebook/Meta as their 'ecosystem' definitely does not suit my needs.

    You need to ask yourself where and how people would try to find your content. They would definitely not browse a catalog of blogs. Maybe they will try a search engine ... but search engines only pick up sites that are somewhat popular. They omit pages/blogs that are 'unknown'. How do blogs become known? By many links pointing to them.

    In other words - if you do not share your work/posts on all 'relevant' platforms - then you will 'die' in a 'non-known' hell.

    If you believe your work - and it is work, you 'waste' your time to write/share these things you do - is valuable, then share it in all possible media. If your content is good - you have to do nothing else. If your content is crap - you will immediately get feedback about it :D

    One of the things that I really appreciate was the feedback I got. I often assumed that I knew a lot about topic 'X' - just to change my mind several comments later and provide an UPDATE to my blog post :)

    I do not know what more I should add here, so I will end my comment - but feel free to ask if you have any questions.

    Regards,

  • Desktop friendly forks
    2 projects | /r/freebsd | 24 Nov 2022
  • I want to switch to BSD
    3 projects | /r/freebsd | 30 Sep 2022
    After you get it all running using the cooltrainer site, then go to https://vermaden.wordpress.com/ which has some most excellent tasty config changes to make your boot time shorter and your desktop work better - along with tons of configs to help you set up different desktops and desktop apps.
  • Ask HN: Can I see your scripts?
    73 projects | news.ycombinator.com | 15 Aug 2022
  • Resume
    1 project | /r/freebsd | 11 Jun 2022
    Any suggestions on how to get resume to work after suspending on a Thinkpad X260? I have read https://vermaden.wordpress.com/ and got suspend to work, but on resume it just locks and I have to reboot.

stealth

Posts with mentions or reviews of stealth. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-27.
  • Ask HN: Most interesting tech you built for just yourself?
    149 projects | news.ycombinator.com | 27 Apr 2023
    Two years ago I decided to build my own web browser, with the underlying idea of using the internet more efficiently (and to force-cache everything).

    Took a while to find the architecture, but it's still an unfinished ambitious project. You can probably spend forever working on HTML and CSS fixes alone...

    [1] https://github.com/tholian-network/stealth

  • The FBI Identified a Tor User
    3 projects | news.ycombinator.com | 17 Jan 2023
    From a technological point of view, TOR still has a couple of flaws which make it vulnerable to the metadata logging systems of ISPs:

    - it needs a trailing non-zero buffer, randomized by the size of the payload, so that stream sizes and durations don't match

    - it needs a request scattering feature, so that the requests for a specific website don't get proxied through the same nodes/paths

    - it needs a failsafe browser engine, which doesn't give a flying damn about WebRTC and decides to actively drop features.

    - it needs to stop monkey-patching out ("stubbing") the APIs that are compromising user privacy, and start removing those features.

    I myself started a WebKit fork a while ago but eventually had to give up due to the sheer amount of work required to maintain such an engine project. I called it RetroKit [1], and I documented what kind of features in WebKit were already usable for tracking and had to be removed.

    I'm sorry to be blunt here, but all that user-privacy-valuing electron bullshit that uses embedded chrome in the background doesn't cut it anymore. And neither does Firefox, which literally goes rogue in an endless loop of requests when you block their tracking domains. The config settings in Firefox don't change shit anymore, and it will keep requesting the tracking domains. It does it also in Librefox and all the *wolf profile variants - just use a local eBPF firewall to verify. I added my non-complete opensnitch ruleset to my dotfiles for others to try out. [3]

    If I were to rewrite a browser engine today, I'd probably go for golang. But golang probably makes handling arbitrary network data a huge pain, so it's kind of useless for failsafe HTML5 parsing.

    [1] https://github.com/tholian-network/retrokit

    [2] (the browser using retrokit) https://github.com/tholian-network/stealth

    [3] https://github.com/cookiengineer/dotfiles/tree/master/softwa...

  • The Iran Firewall: A preliminary report
    3 projects | news.ycombinator.com | 28 Oct 2022
    Most of the things you mentioned are implemented in the "Browser" that I've built. It's using multicast DNS to discover neighboring running instances and it has an offline cache first mentality, which means that e.g. download streams are shared among local peers.

    Global peer discovery is solved via mapping of identifiers via the reserved TLD, and via mutual TLS for identification and verification. So peers are basically pinned client certificates in your local settings.

    Works for most cases, had to implement a couple of breakout tunnel protocols though, so that peer discovery works failsafe when known IPs/ASNs are blocked.

    Relaying and scattering traffic works automatically, so that no correlation of IPs to scraped websites can be done by an MITM. Tunnel protocols are all generically implemented, DNS exfiltration, HTTPS smuggling, ICMP tunnels, and pwnat work already pretty failsafe.

    Lots of work to be done though, and I had to focus on a couple of other things first before I can get back to the project.

    [1] https://github.com/tholian-network/stealth

  • There are no Internet Browsers that cannot be tracked, or are there?
    3 projects | /r/hacking | 17 Sep 2022
    I'm trying to go a different route with Stealth, my programmable peer-to-peer web browser that can offload and relay traffic intelligently - and with RetroKit, my WebKit fork that aims to remove all JavaScript APIs that can be used for fingerprinting and/or tracking.
  • Ask HN: How you would redesign a web browser?
    1 project | news.ycombinator.com | 14 Feb 2022
    I think that in order to increase privacy and - more importantly - reduce the attack surface of a Web Browser more efficiently, there will have to be two modes of web browsing.

    Regular browsing - in my opinion - should default to privacy and security first, whereas trust to web apps should be granted on a per-domain basis. This is basically what I'm doing in a crappy manner, where I have all my Browser Extensions in regular browsing mode with uBlock Origin, Cookie Autodelete and whatnot... and where I use Incognito Mode to use Web Apps.

    In the future I believe that a Web Browser that's decentralized has an almost infinite amount of advantages when it comes to bypassing censorship, increasing trust and the ledging aspect of (temporary) online resources.

    Currently, my idea of building a sane architecture of a Web Browser is that the Browser itself is actually a locally running peer-to-peer web scraper service, and the "frontend or GUI" is a bundled webview that's pointing to localhost:someport. Web Apps can then be used by spawning a new webview instance that's sandboxed with its profile in a temporary folder, so it cannot infect/spread across the regular profile folder that's being used for the "regular private browsing" mode.

    This architecture allows all kinds of benefits, as everything can be filtered, cleaned, verified (, and shared with other peers) at the network level - whereas Browser Extensions currently cannot filter any HTTP responses because there's no API for that.

    AdBlockers currently are based on a disallow-list based concept, which means the advantage is always on the advertising side, and by default nothing is filtered; and scammers/blackhats have always the advantage. Once you add it to a filter list, lots of people's machines have been compromised already. But what if AdBlockers change instead to an allow-list based concept - meaning that the Browser maintains a list of resources that are allowed to load per-domain, and the default being just text and images?
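    The allow-list concept described above can be sketched as a tiny policy check. The names, resource types, and defaults here are illustrative, not Stealth's implementation:

```javascript
// Sketch: instead of blocking known-bad resources (disallow-list), only
// resource types explicitly allowed for a domain may load at all.
const DEFAULT_ALLOWED = new Set(['text', 'image']);

// Per-domain overrides granted by the user.
const SITE_RULES = new Map([
	['example.com', new Set(['text', 'image', 'style'])]
]);

function isAllowed(domain, resourceType) {
	const rules = SITE_RULES.get(domain) || DEFAULT_ALLOWED;
	return rules.has(resourceType);
}
```

    The key inversion is the default: an unknown script or frame is denied on every domain until the user grants it, rather than loaded until a filter list catches up.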

    If you want to take a look at where it's at right now [1] [2], my Browser is open source; and I hope to fund development via access fees for a peer-to-peer "Knowledge Tracker" that allows sharing automations for the web with other peers - aka macros, reader-mode-like extraction beacons, and other awesome treats (p2p search and recommendations are basically included in this concept).

    [1] https://github.com/tholian-network/stealth

    [2] https://github.com/tholian-network/retrokit

  • No-JavaScript Fingerprinting
    4 projects | news.ycombinator.com | 6 Feb 2022
    Note that among a sea of tracked browsers, the untrackable browser shines like a bright star.

    Statistical analysis of these values over time (matched with client hints, ETags, If-Modified-Since, and IPs) will make most browsers uniquely identifiable.

    If the malicious vendor is good, they even correlate the size and order of requests. Because that's unique as well and can identify TOR browsers pretty easily.

    It's like saying "I can't be tracked, because I use Linux". Guess what, as long as nobody in your town uses Linux, you are the most trackable person.

    I decided to go with the "behave as the statistical norm expects you to behave" and created my browser/scraper [1] and forked WebKit into a webview [2] that doesn't support anything that can be used for tracking; with the idea that those tracking features can be shimmed and faked.

    I personally think this is the only way to be untrackable these days. Because let's be honest, nobody uses Firefox with ETP in my town anymore :(

    WebKit was a good start of this because at least some of the features were implemented behind compiler flags...whereas all other browsers and engines can't be built without say, WebRTC support, or say, without Audio Worklets which are for themselves enough to be uniquely identified.

    [1] https://github.com/tholian-network/stealth

    [2] https://github.com/tholian-network/retrokit

    (both WIP)

  • We Have A Browser Monopoly Again and Firefox is The Only Alternative Out There
    6 projects | /r/programming | 1 Jan 2022
    Currently my primary motivation factor is my own Browser Stealth that I'm building; and due to lack of alternatives.
  • Tholian® Stealth - Secure, Peer-to-Peer, Private and Automatable Web Browser/Scraper/Proxy for the Web of Truth and Knowledge. Goals: increased Privacy, increased Automation, adaptive Semantic Understanding. Web Scraper + Web Service + Web Proxy
    1 project | /r/AltTech | 21 Oct 2021
  • Pirate Party member: GDPR-compliant Whois will lead to 'doxxing and death lists'
    3 projects | news.ycombinator.com | 17 Oct 2021
    I'm building a peer to peer Browser network that relies on trust ratios/factor in order to find out the seed/leech ratio of sharing content, producing content etc.

    The problem I'm currently trying to solve is that I had the idea to have a vendor profile that contains the necessary information for IP ranges (ASN, organization, region, country, ISP/NAT etc) so that the discovery service for that doesn't have to do this.

    It's like the basic idea of an offline "map of the internet" that should be an approximation of who does what in which amount of data (e.g. data center IPs aren't trustworthy or same ISP-NATed IP could be censored the same when it comes to blocked websites etc).

    At this point it's a big experiment and I'm not sure whether I'm fundamentally wrong about this as I don't have any data to back it up.

    If you're curious, it's part of the Stealth Browser I'm building [1] and [2]

    [1] https://github.com/tholian-network/stealth

    [2] https://github.com/tholian-network/stealth-vendor

  • A climate activist arrested after ProtonMail provided his IP address
    3 projects | news.ycombinator.com | 5 Sep 2021
    > Does anyone here have a feasible way to solve this?

    Current solutions like TOR, I2P, VPNs and/or mobile proxy services are just a matter of time and legality until they become obsolete.

    TOR and I2P aren't worth a shit if everybody knows it was a TOR exit node, and Cloudflare shows you tracking captchas anyway.

    Same for VPNs and mobile proxies, most are known due to their static IP ranges. Note that most mobile proxy services actually use malware installed on smartphones, so technically you're helping the blackhats by using them, and technically if the federal agencies find out you are probably in some lawsuits filed as an anonymous party that helped them DDoS a victim party.

    I am convinced that the only way to solve this is by simply not downloading the website from its origin. The origin tracks you, so don't talk to them. Talk to your peers and receive a ledged copy of it instead.

    The only problem is that this contradicts all that came after Web 2.0, because every website _wants_ unique identities for every person visiting them; including ETag-based tracking mechanisms of CDNs.

    I think it's not possible with supporting Web Browser APIs the same way in JavaScript (as of now, due to fetch and XHR and how WebSockets are abused for HDCP/DRM to prevent caching), but I think that a static website delivering network with a trustless cryptography based peer-to-peer end-to-end encrypted statistically-correct cache is certainly feasible. I believe that because that's exactly what I'm building for the last two years [1].

    [1] https://github.com/tholian-network/stealth

What are some alternatives?

When comparing scripts and stealth you can also consider the following projects:

freshports - The website part of FreshPorts

Holy-Unblocker - Holy Unblocker is a web proxy service that helps you access websites that may be blocked by your network or browser. It does this securely and with additional features.

mergerfs - a featureful union filesystem

nyxt - Nyxt - the hacker's browser.

snapraid - A backup program for disk arrays. It stores parity information of your data and it recovers from up to six disk failures

cname-trackers - This repository contains a list of popular CNAME trackers

autobots - ⚡️ Scripts & dotfiles for automation and/or bootstrapping new system setup

ClearURLs-Addon - ClearURLs is an add-on based on the new WebExtensions technology and will automatically remove tracking elements from URLs to help protect your privacy.

malten - Anonymous ephemeral messaging

FTL - The Pi-hole FTL engine

exhibitor - Snappy and delightful React component workshop

brotab - Control your browser's tabs from the command line