zim-tools vs awesome-web-archiving

| | zim-tools | awesome-web-archiving |
|---|---|---|
| Mentions | 4 | 13 |
| Stars | 111 | 1,830 |
| Growth | 1.8% | 2.7% |
| Activity | 7.9 | 5.2 |
| Latest commit | 14 days ago | 8 days ago |
| Language | C++ | - |
| License | GNU General Public License v3.0 only | Creative Commons Zero v1.0 Universal |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zim-tools
- Trying to Install wget-2-zim on Linux Mint 18.1
- How to Download All of Wikipedia onto a USB Flash Drive
It looks like Kiwix uses the ZIM file format, which appears to have diffing support [0] (see zimdiff and zimpatch). That said, it doesn't look like Kiwix actually publishes those diffs.
[0] https://github.com/openzim/zim-tools/tree/master/src
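As a rough illustration of how those two tools fit together, the invocations below are a sketch only: the positional-argument order is assumed from the zim-tools sources, and every file name is hypothetical.

```shell
# Create a diff between two ZIM snapshots (file names are hypothetical).
zimdiff wikipedia_2023-01.zim wikipedia_2023-02.zim changes.zim

# Apply the diff to the old snapshot to reconstruct the new one.
zimpatch wikipedia_2023-01.zim changes.zim wikipedia_2023-02_rebuilt.zim
```

In principle this would let mirrors ship a small `changes.zim` instead of a full multi-gigabyte snapshot, but as noted above, Kiwix doesn't appear to publish such diffs.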
- I found a way to read the Pali Canon offline on my smartphone, for free
One remaining problem is converting the Access to Insight website to the ".zim" format that the Kiwix app can read. The only way to do that seems to be a tool called "zimwriterfs", found in another open-source project that defines the offline reader format and is, I believe, used by Kiwix: https://github.com/openzim/zim-tools ...this was no problem for me, as the tool is easily available in standard Linux distributions like Ubuntu. Creating the .zim file from the unzipped Access to Insight offline version is a single command line, after preparing a little icon that will be used for the site in Kiwix. There seems to be no Windows version of the tool available, though...
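That single command line might look something like the sketch below. It assumes common zimwriterfs options (older releases used `--favicon`; newer ones have renamed some flags), and the directory, metadata values, and icon path are made up for illustration:

```shell
# Package an unzipped static HTML mirror into a ZIM file readable by Kiwix.
# Directory name, metadata values, and icon file are hypothetical.
zimwriterfs --welcome=index.html \
            --favicon=favicon.png \
            --language=eng \
            --title="Access to Insight" \
            --description="Readings in Theravada Buddhism" \
            --creator="Access to Insight" \
            --publisher="me" \
            ./accesstoinsight/ accesstoinsight.zim
```

The `--welcome` page and the icon are the only pieces that need preparing by hand; everything else comes straight from the unzipped site.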
- Zim-Tools 3.0.0 is out!
More info at https://github.com/openzim/zim-tools
awesome-web-archiving
- Show HN: OpenAPI DevTools – Chrome ext. that generates an API spec as you browse
https://github.com/iipc/awesome-web-archiving/blob/main/READ...
- DPReview.com is going down effective April 10.
People have been passing this around: https://github.com/iipc/awesome-web-archiving. You could probably do it with wget if you had enough time.
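For a mostly static site, a recursive wget mirror is the usual starting point. A sketch, using standard wget options; the URL is a placeholder, and wait times and robots handling would need tuning per site:

```shell
# Mirror a site for offline viewing: follow links recursively,
# rewrite them to work locally, fetch page assets (CSS, images),
# and add .html extensions where needed. The URL is a placeholder.
wget --mirror \
     --convert-links \
     --page-requisites \
     --adjust-extension \
     --no-parent \
     --wait=1 \
     https://example.com/
```

For a heavily dynamic site, though, a browser-based crawler from the awesome-web-archiving list would capture far more faithfully than wget can.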
- DPReview.com to close on April 10 after 25 years of operation
- This Layoff Does Not Exist: tech layoff announcements but weird
Maybe something on this list can help you https://github.com/iipc/awesome-web-archiving
- Software to keep Website pages "alive"?
Awesome Web Archiving has a longer list of tools and software.
- How to Download All of Wikipedia onto a USB Flash Drive
Not related to the OP topic or ZIM, but I was looking into archiving my bookmarks and other content like documentation sites and wikis. I'll list some of the things I ended up using.
ArchiveBox[1]: Pretty much a self-hosted Wayback Machine. It can save websites as plain HTML, screenshots, text, and some other formats. I have my bookmarks archived in it and have a bookmarklet to easily add new websites. If you use the docker-compose setup, you can enable a full-text search backend for an easy search setup.
WebRecorder[2]: A browser extension that creates WACZ archives directly in the browser, capturing exactly what content you load. I use it on sites with annoying dynamic content that services like the Wayback Machine and ArchiveBox wouldn't be able to copy.
ReplayWeb[3]: An interface for browsing archive formats like WARC, WACZ, and HAR. Browsing through it feels just like using your browser. It can be self-hosted as well for the full offline experience.
browsertrix-crawler[4]: A CLI tool that scrapes websites and outputs WACZ. It's super easy to run with Docker, and I use it to scrape entire blogs and docs for offline use. It uses Chrome to load webpages and has some extra features like custom browser profiles, interactive login, and autoscroll/autoplay. I use the `--generateWACZ` parameter so I can use ReplayWeb to easily browse through the final output.
For bookmark and miscellaneous webpage archiving, ArchiveBox should be more than enough. Check out this repo for an amazing list of tools and resources: https://github.com/iipc/awesome-web-archiving
[1] https://github.com/ArchiveBox/ArchiveBox
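As a concrete illustration of the browsertrix-crawler workflow described above, a run might look like the sketch below. The URL and collection name are placeholders, and image tag and option names may differ between releases:

```shell
# Crawl a site with browsertrix-crawler inside Docker and emit a WACZ
# archive that ReplayWeb can open. URL and collection name are placeholders.
docker run -v "$PWD/crawls:/crawls/" -it webrecorder/browsertrix-crawler \
    crawl --url https://example.com/ \
          --generateWACZ \
          --collection example-site
```

The resulting WACZ should end up under `./crawls/collections/`, ready to load into a self-hosted ReplayWeb instance or drag into replayweb.page.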
- Self Hosted Roundup #14
- SingleFile: Save a Complete Web Page into a Single HTML File
- [HELP] Starting Out for a Beginner
- Reflections as the Internet Archive turns 25
What are some alternatives?
kiwix-tools - Command line Kiwix tools: kiwix-serve, kiwix-manage, ...
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
libzim - Reference implementation of the ZIM specification
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
F3D - Fast and minimalist 3D viewer.
obelisk - Go package and CLI tool for saving web page as single HTML file
wiktextract - Wiktionary dump file parser and multilingual data extractor
SingleFile-MV3 - SingleFile version compatible with Manifest V3. The future, right now!
browsertrix-crawler - Run a high-fidelity browser-based crawler in a single Docker container
firefox-scrapbook - ScrapBook X – a legacy Firefox add-on that captures web pages to local device for future retrieval, organization, annotation, and edit.
zim-requests - Want a new ZIM file? Propose ZIM content improvements or fixes? Here you are!
youtube-dl - Command-line program to download videos from YouTube.com and other video sites