awesome-web-archiving vs browsertrix-crawler

| | awesome-web-archiving | browsertrix-crawler |
|---|---|---|
| Mentions | 13 | 13 |
| Stars | 1,811 | 540 |
| Growth | 1.7% | 2.8% |
| Activity | 5.2 | 9.1 |
| Latest commit | 8 days ago | 8 days ago |
| Language | - | TypeScript |
| License | Creative Commons Zero v1.0 Universal | GNU Affero General Public License v3.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
awesome-web-archiving
- Show HN: OpenAPI DevTools – Chrome ext. that generates an API spec as you browse
https://github.com/iipc/awesome-web-archiving/blob/main/READ...
- DPReview.com is going down effective April 10.
People have been pasting this around: https://github.com/iipc/awesome-web-archiving. You could probably do it with wget if you had enough time (a sketch follows below).
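A minimal sketch of the wget approach floated above, using standard GNU wget flags; `--warc-file` also writes a WARC archive alongside the mirror (the wait time is a placeholder, and a full mirror of a site this size would take a very long time):

```
wget --mirror --page-requisites --adjust-extension --convert-links \
     --wait=1 --warc-file=dpreview https://www.dpreview.com/
```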
- DPReview.com to close on April 10 after 25 years of operation
- This Layoff Does Not Exist: tech layoff announcements but weird
Maybe something on this list can help you: https://github.com/iipc/awesome-web-archiving
- Software to keep Website pages "alive"?
Awesome Web Archiving has a longer list of tools and software.
- How to Download All of Wikipedia onto a USB Flash Drive
Not related to the OP topic or zim, but I was looking into archiving my bookmarks and other content like documentation sites and wikis. I'll list some of the things I ended up using.
ArchiveBox[1]: Pretty much a self-hosted Wayback Machine. It can save websites as plain HTML, screenshots, text, and some other formats. I have my bookmarks archived in it and use a bookmarklet to easily add new websites. If you use the docker-compose setup, you can enable a full-text search backend for easy searching.
WebRecorder[2]: A browser extension that creates WACZ archives directly in the browser, capturing exactly the content you load. I use it on sites with annoying dynamic content that services like the Wayback Machine and ArchiveBox wouldn't be able to copy.
ReplayWeb[3]: An interface for browsing archive formats like WARC, WACZ, and HAR; it feels just like browsing in your browser. It can be self-hosted as well for the full offline experience.
browsertrix-crawler[4]: A CLI tool to scrape websites and output WACZ files. It's super easy to run with Docker, and I use it to scrape entire blogs and docs for offline use. It uses Chrome to load webpages and has some extra features like custom browser profiles, interactive login, and autoscroll/autoplay. I use the `--generateWACZ` parameter so I can browse the final output in ReplayWeb (a sketch follows the links below).
For bookmark and miscellaneous webpage archiving, ArchiveBox should be more than enough. Check out this repo for an amazing list of tools and resources: https://github.com/iipc/awesome-web-archiving
[1] https://github.com/ArchiveBox/ArchiveBox
[3] https://github.com/webrecorder/replayweb.page
[4] https://github.com/webrecorder/browsertrix-crawler
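A minimal sketch of the ArchiveBox and browsertrix-crawler usage described above (the URLs and collection name are placeholders; the flags follow each project's documented CLI, but are worth double-checking against the current docs):

```
# ArchiveBox: set up a collection, then add pages to it
archivebox init
archivebox add 'https://example.com/some-bookmark'

# browsertrix-crawler: crawl a site into a WACZ, then open it in ReplayWeb
docker run -v "$PWD/crawls:/crawls/" webrecorder/browsertrix-crawler \
  crawl --url https://blog.example.com/ --generateWACZ --collection my-blog
```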
- Self Hosted Roundup #14
- SingleFile: Save a Complete Web Page into a Single HTML File
- [HELP] Starting Out for a Beginner
- Reflections as the Internet Archive turns 25
browsertrix-crawler
- Webrecorder: Capture interactive websites and replay them at a later time
(Disclaimer: I work at Webrecorder)
Our automated crawler, browsertrix-crawler (https://github.com/webrecorder/browsertrix-crawler), uses Puppeteer to run the browsers we archive in: it loads pages, runs behaviors such as auto-scroll, and records the request/response traffic. We have custom behaviors for some social media and video sites to make sure that content is appropriately captured. It is a bit of a cat-and-mouse game, since we have to keep updating these behaviors as sites change, but for the most part it works pretty well (a minimal crawl invocation is sketched below).
The trickier part is replaying the archived websites, as a certain amount of rewriting has to happen to make the HTML and JS work with archived assets rather than the live web. One implementation of this is replayweb.page (https://github.com/webrecorder/replayweb.page), which does all of the rewriting client-side in the browser. This lets you interact with archived websites in WARC or WACZ format as if you were interacting with the original site.
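A minimal sketch of a behavior-enabled crawl, based on the project's README (the URL is a placeholder, and the `--behaviors` values are assumptions worth checking against the crawler's --help output):

```
docker run -v "$PWD/crawls:/crawls/" webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ \
  --behaviors autoscroll,autoplay,siteSpecific \
  --generateWACZ --collection example
```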
- Come back, c2.com, we still need you
I use browsertrix-crawler[0] for crawling, and it does well on JS-heavy sites since it uses a real browser to request pages. It even has options to load browser profiles so you can crawl while authenticated on sites (the profile workflow is sketched below).
[0] https://github.com/webrecorder/browsertrix-crawler
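A rough sketch of that profile workflow, loosely following the project's README (the login URL and paths are placeholders; the exposed ports are for the interactive login screen, and the `create-login-profile` command and `--profile` flag are worth verifying against current docs):

```
# Log in interactively once and save the browser profile
docker run -p 6080:6080 -p 9223:9223 \
  -v "$PWD/crawls/profiles:/crawls/profiles/" \
  -it webrecorder/browsertrix-crawler create-login-profile \
  --url https://example.com/login

# Reuse the saved profile for an authenticated crawl
docker run -v "$PWD/crawls:/crawls/" webrecorder/browsertrix-crawler \
  crawl --url https://example.com/members/ \
  --profile /crawls/profiles/profile.tar.gz --generateWACZ
```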
- Alternative to HTTrack (website copier) as of 2023?
I have started using the tools from https://webrecorder.net, like Browsertrix Crawler, and they have been working great. The web archive format is open and very portable. The crawler even crawls and saves YouTube videos embedded on pages, which is awesome.
- Halomaps, which has been the main hub for Halo modding content for almost 20 years, is having its forums shut down on Feb 1st. A massive amount of content will be lost if it's not archived.
This looks like a good candidate for https://github.com/webrecorder/browsertrix-crawler.
- Offline Internet Archive
- Options to backup https://trythatsoap.com/?
- How to Download All of Wikipedia onto a USB Flash Drive
- Ask HN: Best approaches to archiving interactive web journalism/writing
I just learned about this organization, Saving Ukrainian Cultural Heritage Online (SUCHO): https://www.sucho.org/
They seem to be using various tools, like Browsertrix: https://github.com/webrecorder/browsertrix-crawler
It sounds promising for interactive sites:
> Support for custom browser behaviors, using Browsertrix Behaviors, including autoscroll, video autoplay, and site-specific behaviors
Browsertrix links to https://replayweb.page/ for a way to view an archived site.
- How is ArchiveBox?
If you need more advanced recursive spidering/crawling beyond --depth=1, check out Browsertrix, Photon, or Scrapy, and pipe the resulting URLs into ArchiveBox (a sketch follows below).
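A minimal sketch of that pipeline, assuming a hypothetical urls.txt (one URL per line) produced by whichever crawler you choose; ArchiveBox accepts URLs on stdin:

```
# urls.txt is a stand-in for your crawler's output
cat urls.txt | archivebox add
```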
- Looking for suggestions for archiving Google Groups
I recommend this: https://github.com/webrecorder/browsertrix-crawler
What are some alternatives?
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
grab-site - The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
obelisk - Go package and CLI tool for saving web page as single HTML file
Photon - Incredibly fast crawler designed for OSINT.
SingleFile-MV3 - SingleFile version compatible with Manifest V3. The future, right now!
remodeling - The original wiki rewritten as a single page application
firefox-scrapbook - ScrapBook X – a legacy Firefox add-on that captures web pages to a local device for future retrieval, organization, annotation, and editing.
replayweb.page - Serverless replay of web archives directly in the browser
youtube-dl - Command-line program to download videos from YouTube.com and other video sites
wiktextract - Wiktionary dump file parser and multilingual data extractor