percollate vs monolith

| | percollate | monolith |
|---|---|---|
| Mentions | 14 | 23 |
| Stars | 4,122 | 9,972 |
| Growth | - | 24.4% |
| Activity | 5.7 | 7.2 |
| Last Commit | 6 days ago | about 1 month ago |
| Language | JavaScript | Rust |
| License | MIT License | Creative Commons Zero v1.0 Universal |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
percollate
- The Case Against AI Everything, Everywhere, All at Once
You can still choose automation. The easier route for me is to use wallabag to save the article; then, on my reMarkable tablet, I can grab a very readable document with https://github.com/koreader/koreader.
The other option is to use https://github.com/danburzo/percollate to convert a webpage to a nice document directly. I use both tools depending on my needs.
- Share my down(load) function!
This function is just a simple combination of yt-dlp and percollate.
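The original function wasn't shared, but a minimal sketch of such a wrapper might look like this (the `down` name, subcommands, and output filename are assumptions; `yt-dlp` handles media and `percollate pdf` handles articles):

```sh
# Hypothetical "down" helper combining yt-dlp and percollate.
down() {
  case "$1" in
    video)   shift; yt-dlp "$@" ;;                               # media via yt-dlp
    article) shift; percollate pdf --output article.pdf "$@" ;;  # readable PDF via percollate
    *)       echo "usage: down video|article URL" >&2; return 1 ;;
  esac
}
```

Usage would then be `down article https://example.com/post` or `down video https://example.com/watch`.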
- Selfhosted service to screenshot websites - but I'm not finding the options I need
- Reverse Engineering or Recreating the Chrome Extension?
If someone hasn't already done this and I can't figure out how they are converting HTML, I have also considered using percollate to convert, then sending to the reMarkable via rmapi.
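That pipeline can be sketched in two commands (a sketch, not the poster's actual setup; the filename and rmapi destination folder are placeholders):

```console
$ percollate epub --output article.epub https://example.com/some-article
$ rmapi put article.epub /Articles
```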
- ArchiveBox Alternative
The CLI tool percollate offers a different approach, but is also very good: https://github.com/danburzo/percollate
- Reading web articles on the reMarkable
- Is there a command line program to convert web pages into readable markdown/html/pdf format? preferably markdown
Concerning PDF there is the well-known wkhtmltopdf, but let me say that I love the not-so-well-known percollate.
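For comparison, the two tools are invoked like this (URLs and filenames are placeholders; the `md` subcommand for Markdown output exists only in newer percollate releases, so treat that line as version-dependent):

```console
$ wkhtmltopdf https://example.com/article article.pdf              # faithful full-page PDF render
$ percollate pdf --output article.pdf https://example.com/article  # cleaned-up, readable PDF
$ percollate md --output article.md https://example.com/article    # Markdown, in newer releases
```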
- CLI to turn web pages into beautiful, readable PDF, ePub, or HTML docs
- Show HN: Lurnby, a tool for better learning, is now open source
Since I'm working on a similar project, this is how I'm planning to pull content from the web: using percollate [1] to get the HTML content. I haven't written an implementation for this in Python yet.
If you don't mind me asking, how were you going to implement spaced repetition? As far as I know, the Incremental Reading algorithm has never been published.
[1]: https://github.com/danburzo/percollate
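The extraction step such a project would shell out to is a single command (a sketch; the URL and filename are placeholders), which a Python wrapper could invoke via `subprocess`:

```console
$ percollate html --output article.html https://example.com/post
```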
- What Are The Best Linux Apps?
monolith
- 🛠️Non-AI Open Source Projects that are 🔥
Monolith is a CLI tool for saving complete web pages as a single HTML file.
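Basic usage is a single command (URL and output name are placeholders):

```console
$ monolith https://example.com/page -o page.html
```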
- An Introduction to the WARC File
I have never used monolith, so I can't say with any certainty, but two things in your description are worth highlighting about the goals of WARC versus the umpteen bazillion "save this one page I'm looking at as a single file" projects:
1. WARC is designed, as a goal, to archive the request-response handshake. It does not get into the business of trying to make it easy for a browser to subsequently display that content, since that's the browser's problem.
2. Using your cited project specifically, observe the number of "well, save it but ..." options <https://github.com/Y2Z/monolith#options>, which stands in stark contrast to the archiving goals just described. It's not a good snapshot of history if the server responded with `content-type: text/html;charset=iso-8859-1` back in the 90s, but "modern tools" want everything to be UTF-8, so we'll just convert it, shall we? Bah, I don't like JavaScript, so we'll just toss that out, shall we? And so on.
For 100% clarity: monolith and similar tools may work fantastically for an individual's workflow, and I'm not here to yuck anyone's yum; but I do want to highlight that, all things being equal, it should always be possible to derive monolith files from WARC files, because WARC files are (or at least aim to be) a perfect-fidelity record of the exchange. I would guess only pcap files would be of higher fidelity, but those also capture a lot more extraneous or potentially privacy-violating detail.
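The contrast shows up directly on the command line: wget can record the raw request/response exchange into a WARC, while monolith's switches deliberately transform the page (`-j` and `-I` are real monolith options; the URL is a placeholder):

```console
$ wget --warc-file=snapshot --page-requisites https://example.com/   # archive the raw exchange as WARC
$ monolith -j -I https://example.com/ -o snapshot.html               # single file, JavaScript stripped, isolated
```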
- Reddit limits the use of API to 1000, let's work together to save the content of StableDiffusion Subreddit as a team
- nix-init: Create Nix packages with just the URL, with support for dependency inference, license detection, hash prefetching, and more
```console
$ nix-init default.nix -u https://github.com/Y2Z/monolith
[...] (press enter to select the defaults)
$ nix-build -E "(import <nixpkgs> { }).callPackage ./. { }"
[...]
$ result/bin/monolith --version
monolith 2.7.0
```
- What is the best free, least likely to discontinue, high data allowance app/service for saving articles/webpages permanently?
For example, here’s a command-line tool to save webpages as HTML files: https://github.com/Y2Z/monolith
- Offline Internet Archive
- Rust Easy! Modern Cross-platform Command Line Tools to Supercharge Your Terminal
monolith: Convert any webpage into a single HTML file with all assets inlined.
- Is there a way to (bulk) save all tabs as a pdf document in a quick way?
There is also a program (monolith: https://github.com/Y2Z/monolith) that does the same.
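monolith produces HTML rather than PDF, but given the tab URLs exported one per line, a bulk save is a short shell loop (a sketch; `tabs.txt` and the naming scheme are assumptions):

```sh
# Number the URLs in tabs.txt and save each as a single-file HTML snapshot.
cat -n tabs.txt | while read -r n url; do
  monolith "$url" -o "tab-$n.html"
done
```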
- Is there a good list of up-to-date data archiving tools for different websites?
Besides wget, for single pages I use monolith: https://github.com/Y2Z/monolith
- Ask HN: Full-text browser history search forever?
You can pipe the URLs through something like monolith [1].
[1]: https://github.com/Y2Z/monolith
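A sketch of that idea: snapshot each history URL with monolith, then grep the snapshots for full-text search (the export file, directory layout, and hash-based naming are assumptions):

```sh
# Snapshot each URL from a browser-history export, then search the archive.
mkdir -p snapshots
while IFS= read -r url; do
  name=$(printf '%s' "$url" | md5sum | cut -d' ' -f1)   # stable filename per URL
  monolith "$url" -o "snapshots/$name.html"
done < history-urls.txt
grep -ril 'search term' snapshots/
```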
What are some alternatives?
rdrview - Firefox Reader View as a command line tool
SingleFile - Web Extension for saving a faithful copy of a complete web page in a single HTML file
koodo-reader - A modern ebook manager and reader with sync and backup capabilities for Windows, macOS, Linux and Web
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
zimit - Make a ZIM file from any Web site and surf offline!
shrface - Extend eww/nov with org-mode features, archive web pages to org files with shr.
monolith-of-web - A chrome extension to make a single static HTML file of the web page using a WebAssembly port of monolith CLI
archivy - Archivy is a self-hostable knowledge repository that allows you to learn and retain information in your own personal and extensible wiki.
BasicCrawler - Basic web crawler that automates website exploration and producing web resource trees.
Wallabag - wallabag is a self-hostable application for saving web pages: Save and classify articles. Read them later. Freely.