browsertrix-crawler vs monolith
| | browsertrix-crawler | monolith |
|---|---|---|
| Mentions | 13 | 23 |
| Stars | 552 | 10,086 |
| Growth | 4.9% | 25.3% |
| Activity | 9.1 | 7.2 |
| Last commit | 6 days ago | 5 days ago |
| Language | TypeScript | Rust |
| License | GNU Affero General Public License v3.0 | Creative Commons Zero v1.0 Universal |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
browsertrix-crawler
-
Webrecorder: Capture interactive websites and replay them at a later time
(Disclaimer: I work at Webrecorder)
Our automated crawler browsertrix-crawler (https://github.com/webrecorder/browsertrix-crawler) uses Puppeteer to run the browsers that we archive in: it loads pages, runs behaviors such as auto-scroll, and records the request/response traffic. We have custom behaviors for some social media and video sites to make sure that content is appropriately captured. It is a bit of a cat-and-mouse game, as we have to keep updating these behaviors as sites change, but for the most part it works pretty well.
The trickier part is replaying the archived websites, as a certain amount of rewriting has to happen to make sure the HTML and JS work with archived assets rather than the live web. One implementation of this is replayweb.page (https://github.com/webrecorder/replayweb.page), which does all of the rewriting client-side in the browser. This lets you interact with archived websites in WARC or WACZ format as if you were interacting with the original site.
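For reference, a typical invocation looks roughly like the Docker command below. Treat it as a sketch based on the project's documented usage; the exact flags (--url, --generateWACZ, --collection) and output paths can change between versions, so check the README.

```console
# crawl a single site and package the capture as a WACZ file
$ docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler crawl \
    --url https://example.com/ \
    --generateWACZ \
    --collection example
```

The resulting WACZ under ./crawls/ can then be loaded into replayweb.page for client-side replay.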
-
Come back, c2.com, we still need you
I use browsertrix-crawler[0] for crawling and it does well on JS heavy sites since it uses a real browser to request pages. Even has options to load browser profiles so you can crawl while being authenticated on sites.
[0] https://github.com/webrecorder/browsertrix-crawler
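For the authenticated-crawling case, the docs describe a create-login-profile helper; something like the two-step sketch below, where the port and profile path are assumptions to verify against the current README:

```console
# 1. log in interactively once and save the browser profile (port and path are illustrative)
$ docker run -p 9222:9222 -v $PWD/crawls/profiles:/crawls/profiles/ -it \
    webrecorder/browsertrix-crawler create-login-profile --url https://example.com/login

# 2. crawl with that profile so pages are fetched as the logged-in user
$ docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler crawl \
    --url https://example.com/ --profile /crawls/profiles/profile.tar.gz --generateWACZ
```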
-
Alternative to HTTrack (website copier) as of 2023?
I have started using the tools from https://webrecorder.net like Browsertrix Crawler and they have been working great. The web archive format is open source and very portable. The crawler even crawls and saves YouTube videos embedded on pages which is awesome.
-
Halomaps, which has been the main hub for Halo modding content for almost 20 years, is having its forums shut down on Feb 1st. A massive amount of content will be lost if it's not archived.
This looks like a good candidate for https://github.com/webrecorder/browsertrix-crawler.
- Offline Internet Archive
- Options to backup https://trythatsoap.com/?
- How to Download All of Wikipedia onto a USB Flash Drive
-
Ask HN: Best approaches to archiving interactive web journalism/writing
I just learned about this organization, Saving Ukrainian Cultural Heritage Online (SUCHO): https://www.sucho.org/
They seem to be using various tools, like Browsertrix: https://github.com/webrecorder/browsertrix-crawler
It sounds promising for interactive sites:
> Support for custom browser behaviors, using Browsertrix Behaviors including autoscroll, video autoplay and site-specific behaviors
Browsertrix links to https://replayweb.page/ for a way to view an archived site.
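If I read the docs correctly, those behaviors are selected with a --behaviors flag, roughly as below; the flag name and value list are assumptions to check against the README:

```console
# enable autoscroll alongside the default autoplay/autofetch/site-specific behaviors
$ docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler crawl \
    --url https://example.com/ --behaviors autoscroll,autoplay,autofetch,siteSpecific --generateWACZ
```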
-
How is ArchiveBox?
If you need more advanced recursive spider/crawling ability beyond --depth=1, check out Browsertrix, Photon, or Scrapy and pipe the outputted URLs into ArchiveBox.
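A minimal sketch of that pipeline, assuming a urls.txt with one URL per line from whichever crawler you prefer (archivebox add reads URLs from stdin):

```console
# feed discovered URLs into an existing ArchiveBox collection
$ cat urls.txt | archivebox add
```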
-
Looking for suggestions for archiving Google Groups
I recommend this: https://github.com/webrecorder/browsertrix-crawler
monolith
-
🛠️Non-AI Open Source Projects that are 🔥
Monolith is a CLI tool for saving complete web pages as a single HTML file.
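Basic usage is a single command; a sketch, with -o naming the output file (check monolith --help for the current options):

```console
# fetch a page and inline its CSS, JS, images, and fonts into one HTML file
$ monolith https://example.com -o example.html
```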
-
An Introduction to the WARC File
I have never used monolith, so I can't say anything with certainty, but two things in your description are worth highlighting about the difference between the goals of WARC and the umpteen bazillion "save this one page I'm looking at as a single file" type projects:
1. WARC is designed, as a goal, to archive the request-response handshake. It does not get into the business of trying to make it easy for a browser to subsequently display that content, since that's a browser's problem.
2. Using your cited project specifically, observe the number of "well, save it but ..." options <https://github.com/Y2Z/monolith#options>, which is in stark contrast to the archiving goals I just spoke about. It's not a good snapshot of history if the server responded with `content-type: text/html;charset=iso-8859-1` back in the 90s but "modern tools" want everything to be UTF-8, so we'll just convert it, shall we? Bah, I don't like JavaScript, so we'll just toss that out, shall we? And so on.
For 100% clarity: monolith, and similar tools, may work fantastically for any individual's workflow, and I'm not here to yuck anyone's yum; but I do want to highlight that, all things being equal, it should always be possible to derive monolith files from WARC files, because WARC files are (or at least aim to be) a perfect-fidelity record of what the exchange was. I would guess only pcap files would be of higher fidelity, but they also carry a lot more extraneous or potentially privacy-violating detail.
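To make that point concrete, here is the kind of opinionated, lossy snapshot those options produce; the short flags (-j drop JavaScript, -F drop fonts, -I cut the saved page off from the network) are taken from the options page linked above, but verify them against your installed version:

```console
# deliberately strip JavaScript and fonts, and isolate the saved page from the live web
$ monolith -j -F -I https://example.com -o snapshot.html
```

By contrast, a WARC capture would record the untouched responses and leave any such transformations to replay time.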
- Reddit limits the use of the API to 1000. Let's work together to save the content of the StableDiffusion subreddit as a team
-
nix-init: Create Nix packages with just the URL, with support for dependency inference, license detection, hash prefetching, and more
```console
$ nix-init default.nix -u https://github.com/Y2Z/monolith
[...]
(press enter to select the defaults)
$ nix-build -E "(import <nixpkgs> { }).callPackage ./. { }"
[...]
$ result/bin/monolith --version
monolith 2.7.0
```
-
What is the best free, least likely to be discontinued, high-data-allowance app/service for saving articles/webpages permanently?
For example, here’s a command-line tool to save webpages as HTML files: https://github.com/Y2Z/monolith
- Offline Internet Archive
-
Rust Easy! Modern Cross-platform Command Line Tools to Supercharge Your Terminal
monolith: Convert any webpage into a single HTML file with all assets inlined.
-
Is there a way to (bulk) save all tabs as a pdf document in a quick way?
There is also a program (monolith: https://github.com/Y2Z/monolith) that does the same
-
Is there a good list of up-to-date data archiving tools for different websites?
besides wget, for single pages I use monolith https://github.com/Y2Z/monolith
-
Ask HN: Full-text browser history search forever?
You can pipe the URLs through something like monolith[1].
[1] https://github.com/Y2Z/monolith
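A rough sketch of that pipeline, assuming history-urls.txt holds one URL per line exported from the browser (the file name and the md5-based naming scheme are illustrative):

```console
# save each history URL as a self-contained HTML file named by the URL's md5 hash
$ while read -r url; do monolith "$url" -o "$(printf '%s' "$url" | md5sum | cut -d' ' -f1).html"; done < history-urls.txt
```

Full-text search then reduces to grepping or indexing the resulting files.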
What are some alternatives?
grab-site - The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns
SingleFile - Web Extension for saving a faithful copy of a complete web page in a single HTML file
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
Photon - Incredibly fast crawler designed for OSINT.
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
remodeling - The original wiki rewritten as a single page application
shrface - Extend eww/nov with org-mode features, archive web pages to org files with shr.
replayweb.page - Serverless replay of web archives directly in the browser
archivy - Archivy is a self-hostable knowledge repository that allows you to learn and retain information in your own personal and extensible wiki.
wiktextract - Wiktionary dump file parser and multilingual data extractor
Wallabag - wallabag is a self-hostable application for saving web pages: Save and classify articles. Read them later. Freely.