grab-site
briefkasten
| | grab-site | briefkasten |
|---|---|---|
| Mentions | 30 | 7 |
| Stars | 1,260 | 744 |
| Growth | 3.5% | - |
| Activity | 3.8 | 5.4 |
| Last commit | about 1 month ago | about 1 month ago |
| Language | Python | JavaScript |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
grab-site
- Ask HN: How can I back up an old vBulletin forum without admin access?
The format you want is WARC; even the Library of Congress uses it. There are many, many WARC scrapers, so I'd look at what the Internet Archive recommends. A quick search turned up this from the Archive Team and Jason Scott: https://github.com/ArchiveTeam/grab-site (https://wiki.archiveteam.org/index.php/Who_We_Are). But I found that in less than 15 seconds of searching, so do your own due diligence.
- Struggling to download websites
You can use grab-site with --no-offsite-links and --igsets=mediawiki.
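As a sketch, that invocation could look like the following (the wiki URL is a placeholder, and the command is echoed for review rather than run directly):

```shell
# Build the grab-site command described above: --no-offsite-links keeps
# the crawl on the starting host, --igsets=mediawiki applies the bundled
# MediaWiki ignore set. The URL is a placeholder.
url='https://wiki.example.org/'
cmd="grab-site $url --no-offsite-links --igsets=mediawiki"
echo "$cmd"    # review, then run with: eval "$cmd"
```

The resulting crawl lands in a timestamped directory containing the `.warc.gz` output.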
- Internet Archive down, will be up and running soon (I hope)
- Best tool for downloading forum posts in real-time?
Does the forum provide real-time notification of new posts, like an RSS feed or a 'New' section? If so, some scripting around grab-site or httrack could grab them quickly.
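If such a feed exists, the scripting the commenter has in mind might look like this sketch (the sed-based RSS parsing and the `feed.xml` input file are assumptions; a real feed would warrant a proper XML parser):

```shell
# Pull <link> URLs out of a saved RSS feed and print the grab-site
# command for each new post. feed.xml is a placeholder input file;
# commands are echoed for review rather than run directly.
sed -n 's|.*<link>\(.*\)</link>.*|\1|p' feed.xml |
while read -r url; do
    echo "grab-site $url --no-offsite-links"    # review, then eval
done
```

A cron job could fetch the feed, skip URLs it has already archived, and hand each new link to grab-site.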
- How are you archiving websites you visit?
After a lot of searching for a similar topic, this is a tool I found which works pretty well: https://github.com/ArchiveTeam/grab-site
- Help building or mirroring docs.microsoft.com
Crawling is of course the other option. I've seen https://github.com/ArchiveTeam/grab-site in the wiki, but I'm unsure how to host the resulting .warc archives.
- grab-site: The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns
- Data hoarders, start backing up government websites and news articles as well
- How to mirror multiple websites correctly?
It's a completely different tool, but I like using grab-site (https://github.com/archiveteam/grab-site). Try --wpull-args=--span-hosts='' or similar to make it mirror all subdomains. It outputs WARC, which can be read with a site like https://replayweb.page.
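A minimal sketch of that suggestion, with a placeholder URL (the empty --span-hosts value is the commenter's workaround, not a documented recipe, and the command is echoed for review first):

```shell
# Assemble the mirroring command; --wpull-args passes flags straight
# through to the underlying wpull crawler. The URL is a placeholder.
url='https://example.com/'
cmd="grab-site $url --wpull-args=--span-hosts=''"
echo "$cmd"    # review, then run with: eval "$cmd"
# The finished .warc.gz can then be opened in https://replayweb.page
```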
- Stack Overflow Developer Story Data Dump (10 whole MB!)
Thus, as a bit of a statement, here's your "I will do it myself even if I have to bash my head against the wall" collection of the Developer Story for 10-20 top users. I know there are some blogs on old web design; perhaps it might be worth their while as a memento of a bygone era. As for myself, I am looking into setting up a dedicated server for either grab-site or ArchiveBox. Possibly both!
briefkasten
- Grimoire: Open-Source bookmark manager with extra features
- Alternative to Raindrop.io?
- How are you archiving websites you visit?
Some others I looked at: https://github.com/Kovah/LinkAce/ (PWA) https://github.com/sissbruecker/linkding https://github.com/ndom91/briefkasten (PWA) https://github.com/Daniel31x13/link-warden (PDF)
- LinkAce is dead simple to install
In that case, I have an awesome project for you: Briefkasten, https://github.com/ndom91/briefkasten
- Help needed deploying this amazing Docker
I need help deploying this Docker container on my home server. Briefkasten is a self-hosted bookmarking application (Demo). It has a lot of nice features I would like to host on my network, but I'm not experienced with building my own Dockerfiles, especially ones with a lot of dependencies.
- Bookmarks manager similar to Google Bookmarks
"Briefkasten": https://github.com/ndom91/briefkasten
What are some alternatives?
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
LinkAce - LinkAce is a self-hosted archive to collect links of your favorite websites.
browsertrix-crawler - Run a high-fidelity browser-based crawler in a single Docker container
nextacular - An open-source starter kit that will help you build full-stack multi-tenant SaaS platforms efficiently and help you focus on developing your core SaaS features. Built on top of popular and modern technologies such as Next.js, Tailwind, Prisma, and Stripe.
docker-swag - Nginx webserver and reverse proxy with php support and a built-in Certbot (Let's Encrypt) client. It also contains fail2ban for intrusion prevention.
linkwarden - ⚡️⚡️⚡️Self-hosted collaborative bookmark manager to collect, organize, and preserve webpages and articles.
awesome-datahoarding - List of data-hoarding related tools
platforms - A full-stack Next.js app with multi-tenancy and custom domain support. Built with Next.js App Router and the Vercel Domains API.
wpull - Wget-compatible web downloader and crawler.
portfolia - My personal website
replayweb.page - Serverless replay of web archives directly in the browser
nexum - Starter for Fullstack Applications based on Next.js, Prisma & GraphQL.