tabist
stealth
| | tabist | stealth |
|---|---|---|
| Mentions | 4 | 26 |
| Stars | 44 | 988 |
| Growth | - | 2.2% |
| Activity | 0.0 | 0.0 |
| Last commit | over 5 years ago | 7 months ago |
| Language | JavaScript | JavaScript |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tabist
-
TabFS – a browser extension that mounts the browser tabs as a filesystem
I have an extension that does almost exactly what you're looking for... it lists every page you have open, but doesn't show the URL. There is an unreleased version I use that can dump the tabs to JSON, which I use for exactly that session-restore reason.
Maybe I'll finish the updated version and release it soon.
feel free to check it out: https://github.com/fiveNinePlusR/tabist
https://addons.mozilla.org/en-US/firefox/addon/tabist/
https://chrome.google.com/webstore/detail/tabist/hdjegjggiog...
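A minimal sketch of what such a tab dump to JSON could look like with the WebExtensions tabs API (the grouping by window and the field choices are assumptions for illustration, not tabist's actual format):

```javascript
// Serialize tab objects (as returned by browser.tabs.query) into a
// session-restore JSON payload, grouped by window id.
function dumpTabs(tabs) {
  const byWindow = {};
  for (const tab of tabs) {
    if (!byWindow[tab.windowId]) byWindow[tab.windowId] = [];
    byWindow[tab.windowId].push({ title: tab.title, url: tab.url });
  }
  return JSON.stringify(byWindow, null, 2);
}

// In an extension background script (requires the "tabs" permission):
// browser.tabs.query({}).then(tabs => console.log(dumpTabs(tabs)));
```

Restoring a session is then just iterating the parsed JSON and calling `browser.tabs.create({ url })` per entry.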
-
My Bad Habit of Hoarding Information
I wrote a little webext to help me find tabs visually, grouped by window. Middle click closes a tab, and left click brings the clicked tab to the forefront. It's simple, but something I use many times every day.
Feel free to try it out:
https://addons.mozilla.org/en-US/firefox/addon/tabist/
https://chrome.google.com/webstore/detail/tabist/hdjegjggiog...
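The click behavior described above could be wired up roughly like this (the tabs/windows APIs are injected as parameters so the logic is testable outside a browser; in an extension you'd pass `browser.tabs` and `browser.windows` — this is a sketch, not tabist's actual code):

```javascript
// Middle click (button 1) closes the tab; left click (button 0) activates
// the tab and then focuses the window it lives in.
function onTabClick(event, tabId, tabs, windows) {
  if (event.button === 1) {
    // middle click: close the tab
    return tabs.remove(tabId);
  }
  if (event.button === 0) {
    // left click: activate the tab, then bring its window to the front
    return tabs.update(tabId, { active: true })
      .then(() => tabs.get(tabId))
      .then(tab => windows.update(tab.windowId, { focused: true }));
  }
}
```

Each rendered tab entry would carry its tab id and call `onTabClick` from an `auxclick`/`click` listener.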
-
Overcoming Tab Overload
For a lot of people, you search for something to solve a problem, for instance debugging an issue. You middle-click a bunch of promising tabs and then go through them. If a page has some useful information you leave it open, but it's rare that it's the only thing you need to solve your problem. Another use case is some API you need: you open up a bunch of tabs on the functions you're exploring to figure out how to use them.
I also separate issues by window, and use tabs and windows as temporary bookmarks, really: not worthy of a full bookmark, but not finished with either.
I created a webextension to deal with handling those tabs, because having a bunch of tabs across a bunch of windows is not very ergonomic without one. Might be useful for someone here, I suppose: https://github.com/fiveNinePlusR/tabist
https://chrome.google.com/webstore/detail/tabist/hdjegjggiog...
-
TabFS: Mount the Browser Tabs as a Filesystem
I am like your friend... basically tabs are a "working memory" that you don't want to store permanently in bookmarks. Each window, or set of windows, is typically a different topic being researched, with a bunch of middle-clicks to open tabs. I have so many open that I wrote a small webext that shows a page of all your tabs; clicking an entry navigates to that tab. Just a nicer interface to see all the open windows and tabs. https://github.com/fiveNinePlusR/tabist
stealth
-
Ask HN: Most interesting tech you built for just yourself?
Two years ago I decided to build my own web browser, with the underlying idea of using the internet more efficiently (and force-caching everything).
It took a while to find the right architecture, and it's still an unfinished, ambitious project. You could probably spend forever on HTML and CSS fixes alone...
-
The FBI Identified a Tor User
From a technological point of view, TOR still has a couple of flaws that make it vulnerable to the metadata-logging systems of ISPs:
- it needs a trailing non-zero padding buffer, with its size randomized relative to the payload, so that stream sizes and durations don't correlate
- it needs a request-scattering feature, so that the requests for a specific website don't get proxied through the same nodes/paths
- it needs a failsafe browser engine, one that doesn't give a flying damn about WebRTC and decides to actively drop features
- it needs to stop monkey-patching out ("stubbing") the APIs that compromise user privacy, and start removing those features entirely.
I myself started a WebKit fork a while ago but eventually had to give up due to the sheer amount of work required to maintain such an engine project. I called it RetroKit [1], and I documented what kind of features in WebKit were already usable for tracking and had to be removed.
I'm sorry to be blunt here, but all that privacy-valuing Electron bullshit that embeds Chrome in the background doesn't cut it anymore. Neither does Firefox, which literally goes rogue in an endless loop of requests when you block its tracking domains. The config settings in Firefox don't change anything anymore; it keeps requesting the tracking domains. It does this in Librefox and all the *wolf profile variants too; just use a local eBPF firewall to verify. I added my incomplete opensnitch ruleset to my dotfiles for others to try out. [3]
If I were to rewrite a browser engine today, I'd probably go for golang. But golang makes handling arbitrary network data a huge pain, so it's kind of useless for failsafe HTML5 parsing.
[1] https://github.com/tholian-network/retrokit
[2] (the browser using retrokit) https://github.com/tholian-network/stealth
[3] https://github.com/cookiengineer/dotfiles/tree/master/softwa...
-
The Iran Firewall: A preliminary report
Most of the things you mentioned are implemented in the "Browser" that I've built. It's using multicast DNS to discover neighboring running instances and it has an offline cache first mentality, which means that e.g. download streams are shared among local peers.
Global peer discovery is solved via mapping of identifiers via the reserved TLD, and via mutual TLS for identification and verification. So peers are basically pinned client certificates in your local settings.
It works for most cases, though I had to implement a couple of breakout tunnel protocols so that peer discovery still works when known IPs/ASNs are blocked.
Relaying and scattering traffic works automatically, so that no correlation of IPs to scraped websites can be done by an MITM. Tunnel protocols are all generically implemented, DNS exfiltration, HTTPS smuggling, ICMP tunnels, and pwnat work already pretty failsafe.
Lots of work to be done though; I had to focus on a couple of other things first before I can get back to the project.
-
There are no Internet Browsers that cannot be tracked, or are there?
I'm trying to go a different route with Stealth, my programmable peer-to-peer web browser that can offload and relay traffic intelligently - and with RetroKit, my WebKit fork that aims to remove all JavaScript APIs that can be used for fingerprinting and/or tracking.
-
No-JavaScript Fingerprinting
Note that among a sea of tracked browsers, the untrackable browser shines like a bright star.
Statistical analysis of these values over time (matched with client hints, ETags, If-Modified-Since, and IPs) will make most browsers uniquely identifiable.
If the malicious vendor is good, they even correlate the size and order of requests. Because that's unique as well and can identify TOR browsers pretty easily.
It's like saying "I can't be tracked, because I use Linux". Guess what: as long as nobody else in your town uses Linux, you are the most trackable person.
I decided to go with a "behave as the statistical norm expects you to behave" approach and created my browser/scraper [1], and forked WebKit into a webview [2] that doesn't support anything that can be used for tracking, with the idea that those tracking features can be shimmed and faked.
I personally think this is the only way to be untrackable these days. Because let's be honest, nobody uses Firefox with ETP in my town anymore :(
WebKit was a good starting point for this because at least some of its features were implemented behind compiler flags, whereas all other browsers and engines can't be built without, say, WebRTC support, or Audio Worklets, which are by themselves enough to uniquely identify a browser.
[1] https://github.com/tholian-network/stealth
[2] https://github.com/tholian-network/retrokit
(both WIP)
-
We Have A Browser Monopoly Again and Firefox is The Only Alternative Out There
Currently my primary motivation factor is my own Browser Stealth that I'm building; and due to lack of alternatives.
-
Pirate Party member: GDPR-compliant Whois will lead to 'doxxing and death lists'
I'm building a peer-to-peer Browser network that relies on trust ratios/factors in order to determine the seed/leech ratio of sharing content, producing content, etc.
The problem I'm currently trying to solve: I had the idea of a vendor profile that contains the necessary information for IP ranges (ASN, organization, region, country, ISP/NAT, etc.) so that the discovery service doesn't have to work this out itself.
It's the basic idea of an offline "map of the internet" that approximates who does what, and in which amounts of data (e.g. data center IPs aren't trustworthy, or IPs behind the same ISP NAT are probably censored the same way when it comes to blocked websites, etc.).
At this point it's a big experiment and I'm not sure whether I'm fundamentally wrong about this as I don't have any data to back it up.
If you're curious, it's part of the Stealth Browser I'm building [1] and [2]
-
A climate activist arrested after ProtonMail provided his IP address
> Does anyone here have a feasible way to solve this?
For current solutions like TOR, I2P, VPNs, and/or mobile proxy services, it's just a matter of time and legality before they become obsolete.
TOR and I2P aren't worth much when everybody knows an IP was a TOR exit node, and Cloudflare shows you tracking captchas anyway.
Same for VPNs and mobile proxies: most are known by their static IP ranges. Note that most mobile proxy services actually run on malware installed on smartphones, so technically you're helping the blackhats by using them, and if the federal agencies find out, you may end up named in lawsuits as an anonymous party that helped them DDoS a victim.
I am convinced that the only way to solve this is by simply not downloading the website from its origin. The origin tracks you, so don't talk to it. Talk to your peers and receive a ledgered copy of it instead.
The only problem is that this contradicts all that came after Web 2.0, because every website _wants_ unique identities for every person visiting them; including ETag-based tracking mechanisms of CDNs.
I don't think it's possible while supporting the Web Browser APIs the same way in JavaScript (as of now, due to fetch and XHR, and due to how WebSockets are abused for HDCP/DRM to prevent caching). But I do think that a static-website-delivering network with a trustless, cryptography-based, peer-to-peer, end-to-end encrypted, statistically-correct cache is certainly feasible. I believe that because that's exactly what I've been building for the last two years [1].
-
Google Removed ClearURLs Extension from Chrome Web Store
I agree with you there. For my Stealth browser I decided to go with a different, JSON-based format [1] that can rewrite URL parameters via wildcards (with * at the start and/or end of both key and value).
The idea is that you can audit a website and list only the allowed parameters, so that a website's search, sorting order, or filters can still work.
I built my browser on an allowlist-based concept because it seemed impossible to maintain a list of all the bad URLs, domains, and parameters on the web. Most websites have more tracking than content in them, so I decided on maintaining lists that select the content rather than the ads and trackers.
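A rough sketch of how such a wildcard parameter allowlist could work (this is an illustration of the concept, not Stealth's actual profile format or matching code):

```javascript
// Match a value against an allowlist pattern with optional leading and/or
// trailing '*' wildcards, e.g. 'sort_*', '*_id', '*token*'.
function matches(pattern, value) {
  const starStart = pattern.startsWith('*');
  const starEnd = pattern.endsWith('*');
  const core = pattern.replace(/^\*|\*$/g, '');
  if (starStart && starEnd) return value.includes(core);
  if (starStart) return value.endsWith(core);
  if (starEnd) return value.startsWith(core);
  return value === pattern;
}

// Drop every query parameter whose key is not covered by the allowlist.
function filterParams(url, allowedKeys) {
  const u = new URL(url);
  for (const key of [...u.searchParams.keys()]) {
    if (!allowedKeys.some(pattern => matches(pattern, key))) {
      u.searchParams.delete(key);
    }
  }
  return u.toString();
}
```

So an audited profile listing `['q', 'page', 'sort_*']` would keep search, pagination, and sorting working while stripping `utm_source` and friends.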
[1] https://github.com/tholian-network/stealth/blob/X0/profile/p...
-
AdGuard publishes a list of 6K+ trackers abusing the CNAME cloaking technique
Maybe you want to take a look at Stealth [1], as you seem to have understood the real strength of a browser with a decentralized (and delegatable) request system.
Personally I don't think the strength of p2p is only trust, it's delegation. If you have peer to peer encryption running, the possibilities are endless.
What are some alternatives?
Holy-Unblocker - Holy Unblocker is a web proxy service that helps you access websites that may be blocked by your network or browser. It does this securely and with additional features. [MOVED TO A NEW REPO]
nyxt - Nyxt - the hacker's browser.
cname-trackers - This repository contains a list of popular CNAME trackers
ClearURLs-Addon - ClearURLs is an add-on based on the new WebExtensions technology and will automatically remove tracking elements from URLs to help protect your privacy.
brotab - Control your browser's tabs from the command line
FTL - The Pi-hole FTL engine
bypass-paywalls-firefox-clean
floc - This proposal has been replaced by the Topics API.
web-bugs - A place to report bugs on websites.
auto-tab-discard - Use native tab discarding method to automatically reduce memory usage of inactive tabs
exwm - Emacs X Window Manager