Holy-Unblocker VS stealth

Compare Holy-Unblocker vs stealth and see how they differ.

Holy-Unblocker

Holy Unblocker is a secure web proxy service that supports numerous sites while paying close attention to design, mechanics, and features. It bypasses web filters regardless of whether they are extension- or network-based. (by QuiteAFancyEmerald)

stealth

:rocket: Stealth - Secure, Peer-to-Peer, Private and Automatable Web Browser/Scraper/Proxy (by tholian-network)
              Holy-Unblocker   stealth
Mentions      2                22
Stars         225              817
Star growth   11.1%            1.3%
Activity      8.7              8.3
Last commit   21 days ago      3 months ago
Language      JavaScript       JavaScript
License       MIT License      GNU General Public License v3.0 only
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Holy-Unblocker

Posts with mentions or reviews of Holy-Unblocker. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-01-12.

stealth

Posts with mentions or reviews of stealth. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-02-06.
  • Ask HN: How would you redesign a web browser?
    1 project | news.ycombinator.com | 14 Feb 2022
    I think that in order to increase privacy and - more importantly - reduce the attack surface of a Web Browser more efficiently, there will have to be two modes of web browsing.

    Regular browsing - in my opinion - should default to privacy and security first, whereas trust to web apps should be granted on a per-domain basis. This is basically what I'm doing in a crappy manner, where I have all my Browser Extensions in regular browsing mode with uBlock Origin, Cookie Autodelete and whatnot... and where I use Incognito Mode to use Web Apps.

    In the future I believe that a decentralized Web Browser has an almost infinite number of advantages when it comes to bypassing censorship, increasing trust, and the ledging aspect of (temporary) online resources.

    Currently, my idea of building a sane architecture of a Web Browser is that the Browser itself is actually a locally running peer-to-peer web scraper service, and the "frontend or GUI" is a bundled webview that's pointing to localhost:someport. Web Apps can then be used by spawning a new webview instance that's sandboxed with its profile in a temporary folder, so it cannot infect/spread across the regular profile folder that's being used for the "regular private browsing" mode.
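    A minimal sketch of that architecture, assuming Node.js and a Chromium-style webview binary (all names here are hypothetical illustrations, not Stealth's actual code):

    ```javascript
    // Hypothetical sketch of the "local service + webview" split described above.
    import http from 'node:http';
    import { spawn } from 'node:child_process';
    import { mkdtempSync } from 'node:fs';
    import { tmpdir } from 'node:os';
    import { join } from 'node:path';

    const PORT = 65432; // stands in for "localhost:someport"

    // The browser "backend": a locally running service that fetches, filters
    // and caches resources before the GUI ever sees them.
    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end('<h1>The bundled webview frontend would be served from here</h1>');
    }).listen(PORT, '127.0.0.1');

    // Spawning a sandboxed webview for a Web App: a fresh temporary profile per
    // instance, so nothing can infect or spread across the regular profile folder.
    function spawnSandboxedWebview(url) {
      const profile = mkdtempSync(join(tmpdir(), 'webapp-profile-'));
      // Any webview binary works here; the flags shown are Chromium-style.
      return spawn('chromium', [`--user-data-dir=${profile}`, `--app=${url}`]);
    }

    spawnSandboxedWebview(`http://127.0.0.1:${PORT}/`);
    ```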

    This architecture allows all kinds of benefits, as everything can be filtered, cleaned, verified (, and shared with other peers) at the network level - whereas Browser Extensions currently cannot filter any HTTP responses because there's no API for that.

    AdBlockers are currently based on a disallow-list concept, which means nothing is filtered by default, so the advantage always lies with the advertising side and with scammers/blackhats. By the time a resource is added to a filter list, lots of people's machines have already been compromised. But what if AdBlockers changed instead to an allow-list concept - meaning that the Browser maintains a per-domain list of resources that are allowed to load, with the default being just text and images?
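    As a rough sketch of that allow-list idea (hypothetical, not an actual Stealth API), with text and images as the only defaults:

    ```javascript
    // Resource types allowed for every domain unless the user grants more.
    const DEFAULT_ALLOWED = new Set(['document', 'image']);

    // Per-domain grants, maintained by the user or the browser.
    const ALLOW_LIST = new Map([
      ['example.com', new Set(['document', 'image', 'stylesheet', 'script'])],
    ]);

    function isAllowed(domain, resourceType) {
      const allowed = ALLOW_LIST.get(domain) ?? DEFAULT_ALLOWED;
      return allowed.has(resourceType);
    }

    console.log(isAllowed('example.com', 'script')); // true  (explicitly granted)
    console.log(isAllowed('tracker.net', 'script')); // false (default: text and images only)
    ```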

    If you want to take a look at where it's at right now [1] [2], my Browser is open source; and I hope to fund development via access fees for a peer-to-peer "Knowledge Tracker" that lets peers share automations for the web - macros, reader-mode-like extraction beacons, and other awesome treats (p2p search and recommendations are basically included in this concept).

    [1] https://github.com/tholian-network/stealth

    [2] https://github.com/tholian-network/retrokit

  • No-JavaScript Fingerprinting
    4 projects | news.ycombinator.com | 6 Feb 2022
    Note that among a sea of tracked browsers, the untrackable browser shines like a bright star.

    Statistical analysis of these values over time (matched with client hints, ETags, If-Modified-Since, and IPs) will make most browsers uniquely identifiable.

    If the malicious vendor is good, they even correlate the size and order of requests, because that's unique as well and can identify TOR browsers pretty easily.
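    To illustrate how little is needed server-side (the header names are real; their combination into an identifier is made up for this sketch):

    ```javascript
    // Rough illustration of passive correlation: even without JavaScript, a
    // handful of request headers plus the IP hash into a fairly stable identifier.
    import { createHash } from 'node:crypto';

    function passiveFingerprint(req) {
      const signals = [
        req.socket.remoteAddress,
        req.headers['user-agent'],
        req.headers['accept-language'],
        req.headers['accept-encoding'],
        req.headers['if-modified-since'],
        req.headers['if-none-match'],      // the ETag the client's cache echoes back
        req.headers['sec-ch-ua'],          // client hints
        req.headers['sec-ch-ua-platform'],
      ];
      return createHash('sha256').update(signals.join('|')).digest('hex');
    }
    ```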

    It's like saying "I can't be tracked, because I use Linux". Guess what, as long as nobody in your town uses Linux, you are the most trackable person.

    I decided to go with the "behave as the statistical norm expects you to behave" approach and created my browser/scraper [1] and forked WebKit into a webview [2] that doesn't support anything that can be used for tracking; the idea being that those tracking features can be shimmed and faked.
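    In page-level JavaScript, such a shim could look like this (a sketch of the general idea only; the comment above describes removing the features from the engine itself, which is stronger than a script-level shim):

    ```javascript
    // Pin fingerprintable values to the statistical norm instead of exposing
    // the real hardware.
    Object.defineProperty(navigator, 'hardwareConcurrency', { value: 8 });
    Object.defineProperty(screen, 'width',  { value: 1920 });
    Object.defineProperty(screen, 'height', { value: 1080 });

    // APIs that mostly exist to be fingerprinted can simply be absent.
    delete window.RTCPeerConnection;
    delete window.AudioWorklet;
    ```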

    I personally think this is the only way to be untrackable these days. Because let's be honest, nobody uses Firefox with ETP in my town anymore :(

    WebKit was a good starting point for this because at least some of the features were implemented behind compiler flags... whereas all other browsers and engines can't be built without, say, WebRTC support, or without Audio Worklets, which by themselves are enough to uniquely identify a browser.

    [1] https://github.com/tholian-network/stealth

    [2] https://github.com/tholian-network/retrokit

    (both WIP)

  • We Have A Browser Monopoly Again and Firefox is The Only Alternative Out There
    6 projects | reddit.com/r/programming | 1 Jan 2022
    Currently my primary motivating factor is my own Browser, Stealth, that I'm building - and the lack of alternatives.
  • Tholian® Stealth - Secure, Peer-to-Peer, Private and Automatable Web Browser/Scraper/Proxy for the Web of Truth and Knowledge. Goals: increased Privacy, increased Automation, adaptive Semantic Understanding. Web Scraper + Web Service + Web Proxy
    1 project | reddit.com/r/AltTech | 21 Oct 2021
  • Pirate Party member: GDPR-compliant Whois will lead to 'doxxing and death lists'
    3 projects | news.ycombinator.com | 17 Oct 2021
    I'm building a peer-to-peer Browser network that relies on trust ratios/factors in order to find out the seed/leech ratio of sharing content, producing content, etc.

    The problem I'm currently trying to solve is that I had the idea of a vendor profile that contains the necessary information about IP ranges (ASN, organization, region, country, ISP/NAT, etc.) so that the discovery service doesn't have to look this up itself.

    It's like the basic idea of an offline "map of the internet": an approximation of who does what with which amount of data (e.g. data-center IPs aren't trustworthy, or IPs behind the same ISP NAT are likely censored the same way when it comes to blocked websites, etc.).
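    One entry in such a vendor profile might look like this (an illustrative, hypothetical shape; see tholian-network/stealth-vendor for the real data):

    ```javascript
    const entry = {
      range:        '203.0.113.0/24',       // documentation range, standing in for a real block
      asn:          64496,
      organization: 'Example Carrier GmbH',
      region:       'Berlin',
      country:      'DE',
      type:         'isp-nat',              // e.g. 'datacenter', 'isp-nat', 'mobile'
      trust:        0.6,                    // data-center ranges would score lower
    };
    ```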

    At this point it's a big experiment and I'm not sure whether I'm fundamentally wrong about this as I don't have any data to back it up.

    If you're curious, it's part of the Stealth Browser I'm building [1] and [2]

    [1] https://github.com/tholian-network/stealth

    [2] https://github.com/tholian-network/stealth-vendor

  • A climate activist arrested after ProtonMail provided his IP address
    3 projects | news.ycombinator.com | 5 Sep 2021
    > Does anyone here have a feasible way to solve this?

    With current solutions like TOR, I2P, VPNs and/or mobile proxy services, it's just a matter of time and legality until they become obsolete.

    TOR and I2P aren't worth a shit when everybody knows an IP was a TOR exit node, and Cloudflare shows you tracking captchas anyways.

    Same for VPNs and mobile proxies; most are known due to their static IP ranges. Note that most mobile proxy services actually use malware installed on smartphones, so technically you're helping the blackhats by using them - and if the federal agencies find out, you'll probably be named in some lawsuits as an anonymous party that helped them DDoS a victim.

    I am convinced that the only way to solve this is by simply not downloading the website from its origin. The origin tracks you, so don't talk to them. Talk to your peers and receive a ledged copy of it instead.

    The only problem is that this contradicts all that came after Web 2.0, because every website _wants_ unique identities for every person visiting them; including ETag-based tracking mechanisms of CDNs.

    I don't think it's possible while supporting Web Browser APIs the same way in JavaScript (as of now, due to fetch and XHR and how WebSockets are abused for HDCP/DRM to prevent caching), but I think that a static-website-delivering network with a trustless, cryptography-based, peer-to-peer, end-to-end encrypted, statistically correct cache is certainly feasible. I believe that because that's exactly what I've been building for the last two years [1].

    [1] https://github.com/tholian-network/stealth
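    A toy sketch of that "ask your peers instead of the origin" idea (hypothetical endpoint and protocol, not Stealth's actual one; error handling omitted):

    ```javascript
    import { createHash } from 'node:crypto';

    // Fetch a URL from trusted peers and accept the copy only when enough of
    // them agree on the same content hash - the "statistically correct" cache.
    async function fetchFromPeers(url, peers, quorum = 2) {
      const copies = await Promise.all(peers.map(async (peer) => {
        const res  = await fetch(`${peer}/cache?url=${encodeURIComponent(url)}`);
        const body = Buffer.from(await res.arrayBuffer());
        return { body, hash: createHash('sha256').update(body).digest('hex') };
      }));

      // The hash most peers agree on wins; below the quorum, trust nothing.
      const votes = new Map();
      for (const c of copies) votes.set(c.hash, (votes.get(c.hash) ?? 0) + 1);
      const [bestHash, count] = [...votes.entries()].sort((a, b) => b[1] - a[1])[0];
      if (count < quorum) throw new Error('no quorum among peers');
      return copies.find((c) => c.hash === bestHash).body;
    }
    ```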

  • Request for Feedback on Network Concept
    1 project | reddit.com/r/hacking | 1 Sep 2021
    I wanted to ask whether you could provide feedback on a networking concept that I'm building for my peer-to-peer Web Browser. I wrote down the mechanics here and wanted to ask for feedback - to check whether or not I'm wrong or overlooking something critical.
  • How can information warfare be remediated without censoring citizens?
    1 project | reddit.com/r/cybersecurity | 28 Aug 2021
    Disclaimer: I am building a peer-to-peer Web Browser to fix exactly this, based on Compositional Game Theory ideas combined with locally diverse learning agents. It's pretty much the identical problem as "sorting the most efficient peers in your bucket" when you think about it, but with different criteria, such as bias in different topics.
  • A plan to rescue the Web from the Internet
    1 project | news.ycombinator.com | 18 Jul 2021
    Honestly, I've thought about the very same problems a lot - and not only from a user's privacy perspective, but also from a browser's perspective and from the god's-eye view that big tech companies have.

    Google's or Cloudflare's DNS alone probably has so much data that any ISP pales in comparison.

    The web is broken and it gets more and more rotten over time. Google found out they don't need cookies to track people, so they invented FLoC and decided to actively not give a damn about any privacy regulation on the planet - activating the tracking mechanism for everyone without any real influence or decision on the user's side.

    I think the web3 idea will never take off as long as we try to reinvent the wheel. New protocols and new ports can easily be blocked; at some point they're just portrayed like TOR, as a tool for evil rather than good. DNS over TLS just gets its port 853 blocked by most ISPs, so it's useless as a tool to shift the means of control here; only DNS over HTTPS makes a real difference.
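    The difference is easy to see in practice: a DNS-over-HTTPS lookup is just an ordinary HTTPS request on port 443. For example, against Cloudflare's public JSON endpoint:

    ```javascript
    // Resolve example.com over DoH; on the wire this is indistinguishable
    // from regular web traffic (Node.js 18+, ESM for top-level await).
    const res = await fetch(
      'https://cloudflare-dns.com/dns-query?name=example.com&type=A',
      { headers: { accept: 'application/dns-json' } }
    );
    const answer = await res.json();
    console.log(answer.Answer); // the resolved A records
    ```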

    Given the nature of how the web works, how it's transported, and how dynamically its resources change over time, I think the more important idea is statistical truth in the moment when I was using a resource.

    I don't have the bandwidth to download gigabytes of blockchains for a single website. I don't know how hashes work, but I know how URLs work.

    The strength of a peer to peer system is not ledging, it's offloading. Every cached download that's offloaded via a closeby peer's cache is another tracking prevented. Every request that looks and behaves like Chrome's rendering engine on Windows/MacOS is another user saved from network fingerprint identification.

    Privacy is not keeping things to yourself. Real Privacy is looking like everybody else. And that is not only a user-agent string, but things you download, assets that are requested, hardware you use, and web fonts you have installed as well.

    For my own Browser I've decided to push the boundaries a little by implementing those features with the primary goal of being able to make the web work when you're offline or just connected to trusted peers (that have the URLs already cached). [1]

    You don't need custom protocols there. You don't need to roll your own crypto.

    HTTP, WebSockets and TLS are already peer to peer.

    [1] https://github.com/tholian-network/stealth (highly alpha)

  • I bought ISO 8601-1:2019 and 8601-2:2019. Ask me anything.
    1 project | reddit.com/r/ISO8601 | 2 Apr 2021

What are some alternatives?

When comparing Holy-Unblocker and stealth you can also consider the following projects:

stealth - :rocket: Stealth - Secure, Peer-to-Peer, Private and Automatable Web Browser/Scraper/Proxy [Moved to: https://github.com/tholian-network/stealth]

PHP-Proxy - Proxy Application built on php-proxy library ready to be installed on your server

ergo - The management of multiple apps running over different ports made easy

mathgames

Ultraviolet - Highly sophisticated proxy used for evading internet censorship or accessing websites in a controlled sandbox using the power of service-workers and more!

cname-trackers - This repository contains a list of popular CNAME trackers

ClearURLs-Addon - ClearURLs is an add-on based on the new WebExtensions technology and will automatically remove tracking elements from URLs to help protect your privacy.

nyxt - Nyxt - the hacker's power-browser.

brotab - Control your browser's tabs from the command line

web-bugs - A place to report bugs on websites.

exwm - Emacs X Window Manager

bypass-paywalls-firefox-clean