grab-site
awesome-datahoarding
| | grab-site | awesome-datahoarding |
|---|---|---|
| Mentions | 30 | 6 |
| Stars | 1,258 | 1,001 |
| Growth | 3.3% | - |
| Activity | 3.8 | 4.9 |
| Latest commit | 28 days ago | 7 months ago |
| Language | Python | - |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
grab-site
- Ask HN: How can I back up an old vBulletin forum without admin access?
The format you want is WARC. Even the Library of Congress uses it. There are many, many WARC scrapers. I'd look at what the Internet Archive recommends. A quick search turned up this from the Archive Team and Jason Scott: https://github.com/ArchiveTeam/grab-site (https://wiki.archiveteam.org/index.php/Who_We_Are), but I found that in less than 15 seconds of searching, so do your own due diligence.
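For reference, a minimal grab-site run against a forum might look like the sketch below; the URL is a placeholder, and `--igsets=forums` just applies grab-site's stock forum ignore patterns.

```sh
# Hypothetical example: crawl one forum into a WARC with grab-site.
grab-site --igsets=forums https://forum.example.com/
```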
- struggling to download websites
You can use grab-site with --no-offsite-links and --igsets=mediawiki.
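Put together, that advice amounts to something like this sketch (the wiki URL is a placeholder):

```sh
# Stay on the starting host and apply the stock MediaWiki ignore set,
# which skips crawl churn like edit/history/diff URLs.
grab-site --no-offsite-links --igsets=mediawiki https://wiki.example.org/
```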
- Internet Archive Down, will be up and running soon (i hope).
- best tool for downloading forum posts in real-time?
Does the forum provide real-time notification for new posts? Like maybe an RSS feed, or a 'New' section? If so, some scripting around grab-site or httrack could grab them quickly.
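As a rough illustration of that kind of scripting, here is a sketch that polls a hypothetical RSS feed and hands unseen post URLs to grab-site; the feed URL, state file, and crude `<link>` extraction are all assumptions, not anything from the thread.

```sh
#!/bin/sh
# Hypothetical polling loop: fetch the feed, extract <link> entries,
# and crawl each new URL as a single page with grab-site.
FEED='https://forum.example.com/index.rss'   # assumed feed URL
SEEN=seen-urls.txt
touch "$SEEN"
while true; do
    curl -s "$FEED" |
        grep -o '<link>[^<]*</link>' |
        sed -e 's/<link>//' -e 's,</link>,,' |
        while read -r url; do
            grep -qxF "$url" "$SEEN" && continue    # already crawled
            printf '%s\n' "$url" >> "$SEEN"
            grab-site --1 --no-offsite-links "$url" # --1 = this page only
        done
    sleep 300   # poll every five minutes
done
```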
- How are you archiving websites you visit?
After a lot of searching for a similar topic, this is a tool I found which works pretty well: https://github.com/ArchiveTeam/grab-site
- Help building or mirroring docs.microsoft.com
Crawling is of course the other option. I've seen https://github.com/ArchiveTeam/grab-site in the wiki, but I'm unsure how to host the resulting .warc archives.
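One way to host them, as a sketch rather than anything the commenter settled on, is pywb, which can index and replay WARCs locally; the collection name here is arbitrary.

```sh
pip install pywb
wb-manager init msdocs                # create a collection
wb-manager add msdocs crawl.warc.gz   # index the WARC(s) from the crawl
wayback                               # replay at http://localhost:8080/msdocs/
```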
- grab-site: The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns
- Data hoarders, start backing up government websites and news articles as well
- How to mirror multiple websites correctly?
It's a completely different tool, but I like using grab-site https://github.com/archiveteam/grab-site. Try --wpull-args=--span-hosts='' or something to make it mirror all subdomains. It outputs in WARC format, which can be read with a site like https://replayweb.page.
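Spelled out, that suggestion looks like the sketch below; whether the empty --span-hosts value actually makes wpull span subdomains is untested here, so treat the flag value as an assumption.

```sh
# Pass the flag through to wpull; the crawl output (including the
# .warc.gz) lands in the timestamped directory grab-site creates.
grab-site --wpull-args=--span-hosts='' https://example.com/
# Load the finished .warc.gz at https://replayweb.page to browse it.
```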
- Stack Overflow Developer Story Data Dump (10 whole MB!)
Thus, as a bit of a statement, here's your "I will do it myself even if I have to bash my head against the wall" collection of the Developer Story for 10-20 top users. I know there are some blogs on old web design; perhaps it might be worth their while as a memento of a bygone era. As for myself, I am looking into setting up a dedicated server for either grab-site or ArchiveBox. Possibly both!
awesome-datahoarding
- All my life I was a bloody leecher. Now I have a VPN and wanna pay you all back - but how?
Maybe you can check this sub or this GitHub.
- Ask HN: Looking for a great tool to archive websites
- need some guidance
Welcome! You are clearly in the right place. If I can give any advice, it would be to take a look at these two links: Awesome-DataHoarding and the wiki of this subreddit. I wish I had both of these resources when I started.
- How to get started?
I have some stuff in mind, but I'm looking for tools to download it. I found a list, https://github.com/simon987/awesome-datahoarding, so that mostly answers my own question. I'm just looking for some tips on how to store my data now.
- Looking for open source software to scrape webpages but also make them searchable with a webui. (locally hosted)
You might also be interested in this list; the alternatives listed there are really great, and some of them support the WARC format (which my program doesn't).
- Trying to find a Github containing list of tool projects for backing up (discord, other places)
What are some alternatives?
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
Collect - A server to collect & archive websites that also supports video downloads
browsertrix-crawler - Run a high-fidelity browser-based crawler in a single Docker container
SingleFile - Web Extension for saving a faithful copy of a complete web page in a single HTML file
docker-swag - Nginx webserver and reverse proxy with php support and a built-in Certbot (Let's Encrypt) client. It also contains fail2ban for intrusion prevention.
wpull - Wget-compatible web downloader and crawler.
replayweb.page - Serverless replay of web archives directly in the browser
docker-templates
win32 - Public mirror for win32-pr
briefkasten - 📮 Self hosted bookmarking app
collect - ODK Collect is an Android app for filling out forms. It's been used to collect billions of data points in challenging environments around the world. Contribute and make the world a better place! ✨📋✨