wget-lua
Wget-AT is a modern Wget with Lua hooks, Zstandard (+dictionary) WARC compression and URL-agnostic deduplication. (by ArchiveTeam)
grab-site
The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns (by ArchiveTeam)
| | wget-lua | grab-site |
|---|---|---|
| Mentions | 2 | 30 |
| Stars | 81 | 1,261 |
| Growth | - | 3.6% |
| Activity | 6.1 | 3.8 |
| Last commit | 3 months ago | about 1 month ago |
| Language | C | Python |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars is the number of stars a project has on GitHub. Growth is month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
wget-lua
Posts with mentions or reviews of wget-lua. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-10.
- Alternative to HTTrack (website copier) as of 2023?
You're using it wrong; RTFM. wget is still the standard. It's also extensible beyond the base feature set: take, for example, wget-lua, ArchiveTeam's well-maintained go-to for nearly all of the group's scraping projects.
- Kiwix - Access Wikipedia (And More) With no Internet
There are updates (some have changed names), but still use the more frequent updates rather than the dumps to get started. I know there are Kiwix and XOWA. One could probably build it up to current and use wget-at to scrape Wikipedia solo. If you want it in HTML, it'll probably only be 100 TB, give or take. I'm wondering if any of the groups are still active on IRC; I saw mentions of a few, but I lost my place in all the mobile Chrome tabs.
grab-site
Posts with mentions or reviews of grab-site. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-29.
- Ask HN: How can I back up an old vBulletin forum without admin access?
The format you want is WARC. Even the Library of Congress uses it. There are many, many WARC scrapers. I'd look at what the Internet Archive recommends. A quick search turned up this from Archive Team and Jason Scott: https://github.com/ArchiveTeam/grab-site (https://wiki.archiveteam.org/index.php/Who_We_Are), but I found that in less than 15 seconds of searching, so do your own due diligence.
- struggling to download websites
You can use grab-site with --no-offsite-links and --igsets=mediawiki.
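As a concrete illustration of that suggestion, here is a minimal sketch; the wiki URL is a placeholder, and both flags are taken directly from the post above:

```sh
# Minimal sketch; the URL is a placeholder.
# --no-offsite-links  keeps the crawl from following links to other hosts
# --igsets=mediawiki  applies grab-site's MediaWiki ignore patterns
grab-site --no-offsite-links --igsets=mediawiki 'https://wiki.example.org/'
```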
- Internet Archive Down, will be up and running soon (I hope).
- best tool for downloading forum posts in real-time?
Does the forum provide real-time notification for new posts? Like maybe an RSS feed, or a 'New' section? If so, some scripting around grab-site or httrack could grab them quickly; see the sketch below.
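One hedged sketch of that scripting idea: a shell loop that polls a hypothetical RSS feed, remembers which post links it has already seen, and hands each new one to grab-site. The feed URL and the naive link extraction are assumptions; --1 restricts grab-site to crawling just the one page.

```sh
#!/bin/sh
# Sketch only: FEED is a hypothetical RSS endpoint, and the <link> extraction
# assumes plain RSS 2.0 XML with simple <link>...</link> elements.
FEED='https://forum.example.com/rss'
SEEN=seen-urls.txt
touch "$SEEN"
while true; do
    curl -s "$FEED" |
        grep -oE '<link>[^<]+</link>' |
        sed -e 's|<link>||' -e 's|</link>||' |
        while read -r url; do
            grep -qxF "$url" "$SEEN" && continue   # skip already-archived posts
            echo "$url" >> "$SEEN"
            grab-site --1 "$url"                   # crawl just this page, no recursion
        done
    sleep 300    # poll every five minutes
done
```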
- How are you archiving websites you visit?
After a lot of searching for a similar topic, this is a tool I found which works pretty well: https://github.com/ArchiveTeam/grab-site
- Help building or mirroring docs.microsoft.com
Crawling is of course the other option. I've seen https://github.com/ArchiveTeam/grab-site in the wiki, but I'm unsure how to host the resulting .warc archives.
- grab-site: The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns
- Data hoarders, start backing up government websites and news articles as well
- How to mirror multiple websites correctly?
It's a completely different tool, but I like using grab-site: https://github.com/archiveteam/grab-site. Try --wpull-args=--span-hosts='' or something to make it mirror all subdomains. It outputs in WARC format, which can be read with a site like https://replayweb.page.
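A sketch of that invocation with a placeholder URL; note that the --span-hosts value is the poster's own guess ("or something"), passed straight through to the underlying crawler:

```sh
# Sketch only; the URL is a placeholder.
# --wpull-args forwards extra options to the underlying wpull crawler;
# --span-hosts (as suggested in the post) lets the crawl cross subdomains.
grab-site --wpull-args=--span-hosts='' 'https://example.org/'
# Replay the resulting WARC output in a browser at https://replayweb.page
```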
- Stack Overflow Developer Story Data Dump (10 whole MB !)
Thus, as a bit of a statement, here's your "I will do it myself even if I have to bash my head against the wall" collection of the Developer Story for 10-20 top users. I know there are some blogs on old web design; perhaps it might be worth their while as a memento of a bygone era. As for myself, I am looking into setting up a dedicated server for either grab-site or ArchiveBox. Possibly both!