detectorist-scraper vs grab-site

| | detectorist-scraper | grab-site |
|---|---|---|
| Mentions | 1 | 30 |
| Stars | 30 | 1,272 |
| Stars growth | - | 1.7% |
| Activity | 10.0 | 3.8 |
| Latest commit | over 8 years ago | 2 months ago |
| Language | Python | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
detectorist-scraper
- Ask HN: How can I back up an old vBulletin forum without admin access?
> I'm just not sure what the intermediate steps would be to get something usable like a vBulletin…
Once you have a crawl, you'll likely want to convert that unstructured data to structured data. For example, if I look at https://www.vbulletin.org/forum/portal.php, the thread title and hierarchy are in one set of elements, posts are in another, etc. I see an old project (https://github.com/IanLondon/detectorist-scraper) that did this and may be a useful place to start, and I imagine there are others. Once you have the structured data, you can decide whether to use it to build a static site, to import it into another forum, etc.
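The extraction step described in that comment can be sketched with Python's standard-library `html.parser`. Note the `threadtitle` class name and the sample markup below are made-up placeholders for illustration, not vBulletin's real templates:

```python
from html.parser import HTMLParser

# Minimal sketch: pull thread titles out of vBulletin-style markup.
# The "threadtitle" class is a hypothetical example, not the real
# vBulletin template structure.
class ThreadTitleParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == "a" and ("class", "threadtitle") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())

sample = '<a class="threadtitle" href="/t/1">Metal detecting finds</a>'
parser = ThreadTitleParser()
parser.feed(sample)
print(parser.titles)  # ['Metal detecting finds']
```

For a real forum you would first inspect the crawled pages to find the actual class names and nesting, then point a parser like this (or a proper library such as BeautifulSoup) at them.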
grab-site
- Ask HN: How can I back up an old vBulletin forum without admin access?
The format you want is WARC. Even the Library of Congress uses it. There are many, many WARC scrapers. I'd look at what the Internet Archive recommends. A quick search turned up this from the Archive Team and Jason Scott: https://github.com/ArchiveTeam/grab-site (https://wiki.archiveteam.org/index.php/Who_We_Are), but I found that in less than 15 seconds of searching, so do your own due diligence.
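To make the WARC recommendation concrete, here is a minimal sketch of what a record looks like on disk: a version line, named header fields, a blank line, then the payload. Real crawls should use a dedicated library (e.g. warcio); the record below is a simplified, hand-built example, not a complete spec-conformant one:

```python
# A stripped-down WARC-style record: version line, header fields,
# blank line (\r\n\r\n), then Content-Length bytes of payload.
record = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: https://example.com/\r\n"
    "Content-Length: 12\r\n"
    "\r\n"
    "Hello, WARC!"
)

# Split the header block from the payload at the first blank line.
header, _, payload = record.partition("\r\n\r\n")
lines = header.split("\r\n")
version = lines[0]
fields = dict(line.split(": ", 1) for line in lines[1:])

print(version)              # WARC/1.0
print(fields["WARC-Type"])  # response
print(payload)              # Hello, WARC!
```

Because every record is self-describing like this, WARC files from different crawlers can be concatenated and replayed by the same tooling.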
- struggling to download websites
You can use grab-site with --no-offsite-links and --igsets=mediawiki.
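The invocation that comment describes can be sketched as follows; `--no-offsite-links` and `--igsets` are the options named in the comment, while the wiki URL is a placeholder:

```python
import shlex

# Sketch of the grab-site invocation described above.
# The target URL is a made-up placeholder.
cmd = [
    "grab-site",
    "--no-offsite-links",   # do not follow links to other hosts
    "--igsets=mediawiki",   # apply the MediaWiki ignore set
    "https://wiki.example.org/",
]
print(shlex.join(cmd))

# To actually run the crawl (requires grab-site to be installed):
#   subprocess.run(cmd, check=True)
```

Ignore sets (`--igsets`) bundle URL patterns that are known to cause crawler traps on a given platform, such as MediaWiki's endless diff and history links.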
- Internet Archive Down, will be up and running soon (i hope).
- best tool for downloading forum posts in real-time?
Does the forum provide real-time notification for new posts? Like maybe an RSS feed, or a 'New' section? If so, some scripting around grab-site or httrack could grab them quickly.
- How are you archiving websites you visit?
After a lot of searching for a similar topic, this is a tool I found which works pretty well: https://github.com/ArchiveTeam/grab-site
- Help building or mirroring docs.microsoft.com
Crawling is of course the other option. I've seen https://github.com/ArchiveTeam/grab-site in the wiki, but I'm unsure how to host the resulting .warc archives.
- grab-site: The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns
- Data hoarders, start backing up government websites and news articles as well
- How to mirror multiple websites correctly?
It's a completely different tool, but I like using grab-site (https://github.com/archiveteam/grab-site). Try --wpull-args=--span-hosts='' or something to make it mirror all subdomains. It outputs WARC format, which can be read with a site like https://replayweb.page.
- Stack Overflow Developer Story Data Dump (10 whole MB!)
Thusly, as a bit of a statement, here's your "I will do it myself even if I have to bash my head against the wall" collection of the Developer Story on 10-20 top users. I know there are some blogs on old web design, perhaps it might be worth their while as a memento of an era bygone. And as for myself, I am looking into setting up a dedicated server for either grab-site or ArchiveBox. Possibly both!
What are some alternatives?
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
browsertrix-crawler - Run a high-fidelity browser-based crawler in a single Docker container
docker-swag - Nginx webserver and reverse proxy with php support and a built-in Certbot (Let's Encrypt) client. It also contains fail2ban for intrusion prevention.
awesome-datahoarding - List of data-hoarding related tools
wpull - Wget-compatible web downloader and crawler.
replayweb.page - Serverless replay of web archives directly in the browser
docker-templates
win32 - Public mirror for win32-pr
briefkasten - 📮 Self hosted bookmarking app
Collect - A server to collect & archive websites that also supports video downloads
collect - ODK Collect is an Android app for filling out forms. It's been used to collect billions of data points in challenging environments around the world. Contribute and make the world a better place! ✨📋✨
bitextor - Bitextor generates translation memories from multilingual websites