grab-site vs browsertrix-crawler
| | grab-site | browsertrix-crawler |
|---|---|---|
| Mentions | 30 | 13 |
| Stars | 1,247 | 516 |
| Growth | 9.3% | 7.0% |
| Activity | 3.8 | 9.0 |
| Latest commit | 7 days ago | 5 days ago |
| Language | Python | TypeScript |
| License | GNU General Public License v3.0 or later | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
grab-site
- Ask HN: How can I back up an old vBulletin forum without admin access?
The format you want is WARC. Even the Library of Congress uses it. There are many, many WARC scrapers. I'd look at what the Internet Archive recommends. A quick search turned up this from Archive Team and Jason Scott: https://github.com/ArchiveTeam/grab-site (https://wiki.archiveteam.org/index.php/Who_We_Are), but I found that in less than 15 seconds of searching, so do your own due diligence.
- How are you archiving websites you visit?
After a lot of searching for a similar topic, this is a tool I found which works pretty well: https://github.com/ArchiveTeam/grab-site
- Help building or mirroring docs.microsoft.com
Crawling is of course the other option. I've seen https://github.com/ArchiveTeam/grab-site in the wiki, but I'm unsure how to host the resulting .warc archives.
- How to mirror multiple websites correctly?
It's a completely different tool, but I like using grab-site: https://github.com/archiveteam/grab-site. Try --wpull-args=--span-hosts='' or something to make it mirror all subdomains. It outputs in WARC format, which can be read with a site like https://replayweb.page.
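As a rough sketch of that suggestion: grab-site's --wpull-args pass-through is documented in its README, but the exact span-hosts flag is the commenter's guess, so verify it against the wpull docs; example.com is a placeholder.

```bash
# Start a crawl, passing extra flags straight through to the underlying
# wpull crawler; --span-hosts='' here follows the comment's suggestion
# for pulling in subdomains as well.
grab-site --wpull-args=--span-hosts='' 'https://example.com/'

# Output is one or more .warc.gz files in a new timestamped directory,
# which can then be loaded into https://replayweb.page for browsing.
```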
- Stack Overflow Developer Story Data Dump (10 whole MB!)
Thus, as a bit of a statement, here's your "I will do it myself even if I have to bash my head against the wall" collection of the Developer Story for the 10-20 top users. I know there are some blogs on old web design; perhaps it might be worth their while as a memento of a bygone era. As for myself, I am looking into setting up a dedicated server for either grab-site or ArchiveBox. Possibly both!
- Need Local Website Archiver Recommendation
https://github.com/ArchiveTeam/grab-site is easy to use and records in the WARC container format.
- How to scrape an entire website/all of its content?
Take a look at grab-site by ArchiveTeam; it's a very powerful tool for mirroring websites.
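For reference, basic usage is a single command. A minimal sketch, assuming grab-site is installed per its README; example.com is a placeholder, and the option names are worth confirming against the current README:

```bash
# Recursively crawl a site; WARC output lands in a new timestamped
# directory named after the site, under the current working directory.
grab-site 'https://example.com/'

# Two README-documented options worth knowing about:
#   --1                  archive only the given page (plus its requisites)
#   --no-offsite-links   do not follow links to other hosts
grab-site --no-offsite-links 'https://example.com/'
```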
- How to archive a website that's shutting down soon
- I have a list of Reddit posts I want to save on my hard drive. What's the easiest way?
Try using a tool such as grab-site. https://github.com/archiveteam/grab-site
browsertrix-crawler
- Webrecorder: Capture interactive websites and replay them at a later time
(Disclaimer: I work at Webrecorder)
Our automated crawler, browsertrix-crawler (https://github.com/webrecorder/browsertrix-crawler), uses Puppeteer to run the browsers we archive in: it loads pages, runs behaviors such as auto-scroll, and records the request/response traffic. We have custom behaviors for some social media and video sites to make sure that content is appropriately captured. It is a bit of a cat-and-mouse game, as we have to keep updating these behaviors as sites change, but for the most part it works pretty well.
The trickier part is replaying the archived websites, as a certain amount of rewriting has to happen to make sure the HTML and JS work with archived assets rather than the live web. One implementation of this is replayweb.page (https://github.com/webrecorder/replayweb.page), which does all of the rewriting client-side in the browser. This lets you interact with archived websites in WARC or WACZ format as if interacting with the original site.
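For a concrete starting point, a crawl along the lines of the project README looks roughly like this. This is a sketch: example.com and the collection name are placeholders, and flag spellings should be double-checked against the current browsertrix-crawler docs.

```bash
# Run a crawl in Docker; output lands in ./crawls on the host.
# --generateWACZ packages the crawl into a single .wacz file, and
# --collection names the output under the crawls directory.
docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ --generateWACZ --collection example

# The resulting .wacz file can be opened in https://replayweb.page to
# browse the archive entirely client-side, as described above.
```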
- Come back, c2.com, we still need you
I use browsertrix-crawler[0] for crawling, and it does well on JS-heavy sites since it uses a real browser to request pages. It even has options to load browser profiles so you can crawl while authenticated on sites.
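The profile workflow mentioned there is a two-step process in the browsertrix-crawler docs. The sketch below uses placeholder URLs, and the exact create-login-profile invocation has changed between releases, so treat it as an outline to check against the docs:

```bash
# 1. Log in through a captive browser session; the resulting profile
#    is saved as a tarball under ./crawls/profiles/ on the host.
docker run -v $PWD/crawls/profiles:/crawls/profiles/ \
  -it webrecorder/browsertrix-crawler create-login-profile \
  --url 'https://example.com/login'

# 2. Crawl with that profile so requests carry the logged-in session.
docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ --profile /crawls/profiles/profile.tar.gz
```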
- Alternative to HTTrack (website copier) as of 2023?
I have started using the tools from https://webrecorder.net like Browsertrix Crawler and they have been working great. The web archive format is open source and very portable. The crawler even crawls and saves YouTube videos embedded on pages which is awesome.
- Offline Internet Archive
- Options to backup https://trythatsoap.com/?
- How to Download All of Wikipedia onto a USB Flash Drive
- Ask HN: Best approaches to archiving interactive web journalism/writing
I just learned about this organization, Saving Ukrainian Cultural Heritage Online (SUCHO): https://www.sucho.org/
They seem to be using various tools, like Browsertrix: https://github.com/webrecorder/browsertrix-crawler
It sounds promising for interactive sites:
> Support for custom browser behaviors, using Browsertrix Behaviors, including autoscroll, video autoplay and site-specific behaviors
Browsertrix links to https://replayweb.page/ for a way to view an archived site.
- How is ArchiveBox?
If you need more advanced recursive spider/crawling ability beyond --depth=1, check out Browsertrix, Photon, or Scrapy and pipe the outputted URLs into ArchiveBox.
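A minimal sketch of that pipeline, assuming a hypothetical urls.txt with one URL per line produced by whichever crawler you ran (archivebox add accepts URLs on stdin):

```bash
# Feed crawler-discovered URLs into an existing ArchiveBox collection.
# urls.txt is a hypothetical one-URL-per-line file from Browsertrix,
# Photon, or Scrapy.
cat urls.txt | archivebox add

# URLs can also be passed directly as arguments:
archivebox add 'https://example.com/some/page'
```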
- How to save/copy/archive a website that is going to be closed down?
I'm looking for the same tool. https://github.com/webrecorder/browsertrix-crawler claims to do the job but it doesn't scale well, runs only on a single machine, doesn't support resumes, etc.
- Saving entire Websites
What are some alternatives?
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
docker-swag - Nginx webserver and reverse proxy with php support and a built-in Certbot (Let's Encrypt) client. It also contains fail2ban for intrusion prevention.
awesome-datahoarding - List of data-hoarding related tools
wpull - Wget-compatible web downloader and crawler.
replayweb.page - Serverless replay of web archives directly in the browser
Photon - Incredibly fast crawler designed for OSINT.
docker-templates
remodeling - The original wiki rewritten as a single page application
win32 - Public mirror for win32-pr
wiktextract - Wiktionary dump file parser and multilingual data extractor