percollate
zimit
| | percollate | zimit |
|---|---|---|
| Mentions | 14 | 9 |
| Stars | 4,108 | 231 |
| Growth | - | 8.7% |
| Activity | 5.9 | 7.9 |
| Latest commit | 3 months ago | 7 days ago |
| Language | JavaScript | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
percollate
- The Case Against AI Everything, Everywhere, All at Once
You can still choose automation. The easier route for me is to use wallabag to save the article. Then on my reMarkable tablet I can grab a very readable document with https://github.com/koreader/koreader.
The other option is to use https://github.com/danburzo/percollate to convert a webpage to a nice document directly. I use both tools depending on my needs.
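percollate's subcommands map directly onto its output formats. A few hedged invocations (the URL and file names here are placeholders; check `percollate --help` for the full option list):

```shell
# percollate <format> --output <file> <url>
percollate pdf  --output article.pdf  https://example.com/article
percollate epub --output article.epub https://example.com/article
percollate html --output article.html https://example.com/article
```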
- Share my down(load) function!
This function is just a simple combination of yt-dlp and percollate.
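The original function isn't shown, but a minimal sketch of such a combination might dispatch on the URL: video links go to yt-dlp, everything else to percollate. The `dl_tool`/`dl` names and the URL patterns are assumptions, not the poster's actual code:

```shell
#!/bin/sh
# Hypothetical down(load) dispatcher: yt-dlp for video URLs,
# percollate for articles. Names and patterns are illustrative.
dl_tool() {
  case "$1" in
    *youtube.com*|*youtu.be*) echo "yt-dlp" ;;
    *)                        echo "percollate" ;;
  esac
}

dl() {
  if [ "$(dl_tool "$1")" = "yt-dlp" ]; then
    yt-dlp "$1"                               # download the video
  else
    percollate pdf --output article.pdf "$1"  # render the page as a PDF
  fi
}
```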
- Selfhosted service to screenshot websites - but I'm not finding the options I need
- Reverse Engineering or Recreating the Chrome Extension?
If someone hasn't already done this and I can't figure out how they are converting HTML, I have also considered using Percollate to convert, then sending to ReMarkable via rmapi.
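A hedged sketch of that pipeline: `percollate pdf` and `rmapi put` are the two tools' real subcommands, while the `slug` helper (to derive a filesystem-safe name from the URL) is our own addition:

```shell
#!/bin/sh
# Hypothetical convert-then-upload pipeline for the reMarkable.
slug() {
  # turn a URL into a filesystem-safe name (illustrative helper)
  printf '%s' "$1" | sed -E 's#^https?://##; s#[^A-Za-z0-9._-]#_#g'
}

send_to_remarkable() {
  out="$(slug "$1").pdf"
  percollate pdf --output "$out" "$1" && rmapi put "$out"
}
```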
- ArchiveBox Alternative
The CLI tool percollate offers a different approach, but is also very good: https://github.com/danburzo/percollate
- Reading web articles on the reMarkable
- Is there a command line program to convert web pages into readable markdown/htm/pdf format? preferably markdown
Concerning PDF, there is the well-known wkhtmltopdf, but let me say that I love the not-so-well-known percollate
- CLI to turn web pages into beautiful, readable PDF, ePub, or HTML docs
- Show HN: Lurnby, a tool for better learning, is now open source
Since I'm working on a similar project, this is how I am planning to pull content from the web: using percollate[1] to get the HTML content. I haven't written any implementation for this in Python yet.
If you don't mind me asking, how were you going to implement spaced repetition? Since the Incremental Reading algorithm has never been published as far as I know.
[1]: https://github.com/danburzo/percollate
- What Are The Best Linux Apps?
zimit
- Zim vs WARC?
There are clearly similarities between the two, given that Kiwix put resources into making WARC content available in ZIM archives (i.e. Zimit-style ZIMs, created with the Zimit scraper and warc2zim backend). But as u/IMayBeABitShy said, the ZIM specification focuses on providing a highly compressed container that is readable on-the-fly (i.e. by decompressing only the needed content to show an article), whereas WARC, or rather the compressed version WACZ, is merely a zipped version of the WARC data (request headers and responses). It is also readable on-the-fly, but compression will not be as optimal as the zstandard compression used by modern ZIM archives.
- What's the "best" way to make your own ZIMs (in docker)?
I'm looking at making my own ZIM though not sure the best way to go about it. I've seen zimit on Github and the mwoffliner on Github too.
- How do I zimit listings with slideshows?
You would have a better chance of getting a technical reply to your technical issue by raising it at https://github.com/openzim/zimit/issues, I believe.
- Openzim/zimit using docker on Windows 10; mounting the volume with the complete .zim file works how exactly?
The zimit readme says it uses /output as the default directory, so we can use that as the name for our docker's volume.
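Following the readme's /output convention, an invocation might look like the sketch below. The `openzim/zimit` image is the one linked elsewhere on this page, but flag names such as `--url` and `--name` have changed across zimit versions, so check `zimit --help` before relying on them:

```shell
# Hedged sketch: mount a host directory as /output so the finished
# .zim file lands on the host; flags may differ by zimit version.
docker run -v "$PWD/zim-output:/output" openzim/zimit \
    zimit --url https://example.com/ --name example
```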
- Prepping for the end of the internet.
Zimit. This tool allows you to convert an existing website into an offline ZIM archive. https://hub.docker.com/r/openzim/zimit
- Reading from the web offline and distraction-free
which worked quite well for most sites, but still very far from a general-purpose solution.
There is also more powerful/general-purpose scraper that generates a ZIM file here: https://github.com/openzim/zimit
It would be really nice to have a "common" scraper code base that takes care of scraping (possibly with a real headless browser) and outputs all assets as files, plus their info as JSON. This common code base could then be used by all kinds of programs to package the content as standalone HTML zip files, ePub, ZIM, or even PDF for crazy people like me who like to print things ;)
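The "common scraper" idea above could be sketched without a headless browser by leaning on wget's mirroring flags and writing a small provenance file for downstream packagers. The `scrape` function name and the JSON layout are assumptions; only the wget flags are real:

```shell
#!/bin/sh
# Hypothetical common-scraper sketch: mirror a page plus its assets,
# then record basic info as JSON for packaging (zip/ePub/ZIM/PDF).
scrape() {
  url=$1; outdir=$2
  mkdir -p "$outdir"
  # fetch the page and every asset it needs for offline display
  wget --page-requisites --convert-links --no-host-directories \
       --directory-prefix="$outdir" "$url"
  # record provenance for the packaging step (illustrative format)
  printf '{"url":"%s","fetched":"%s"}\n' "$url" "$(date -u +%FT%TZ)" \
    > "$outdir/info.json"
}
```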
- You can now create your own zim files!
There is a limit of 1,000 items for each zim because we don't want to DDoS unsuspecting websites with requests, and we also would not be able to afford the bill if it becomes as popular as we think it will be. But since this is free software, you can obviously cut out the middleman by copying, studying, modifying and redistributing the code that can be found here: https://github.com/openzim/zimit; or contact us directly and get the full thing for a small fee (tbd, but this should not be a blocker for legitimate uses).
- We Developed A Tool To Make A Copy Of Most
Documentation is available at github.com/openzim/zimit and github.com/kiwix (there's a repo for each platform, kiwix-serve and android are the ones to look at atm for integration of service workers)
What are some alternatives?
rdrview - Firefox Reader View as a command line tool
koodo-reader - A modern ebook manager and reader with sync and backup capacities for Windows, macOS, Linux and Web
instascrape - Powerful and flexible Instagram scraping library for Python, providing easy-to-use and expressive tools for accessing data programmatically
SingleFile - Web Extension for saving a faithful copy of a complete web page in a single HTML file
zim-plugin-instantsearch - Search as you type in Zim, in similar manner to OneNote Ctrl+E.
monolith-of-web - A chrome extension to make a single static HTML file of the web page using a WebAssembly port of monolith CLI
gazpacho - 🥫 The simple, fast, and modern web scraping library
BasicCrawler - Basic web crawler that automates website exploration and producing web resource trees.
nautilus - Turns a collection of documents into a browsable ZIM file
parser - 📜 Extract meaningful content from the chaos of a web page
mwoffliner - Mediawiki scraper: all your wiki articles in one highly compressed ZIM file