| | hyperfiler | monolith |
|---|---|---|
| Mentions | 5 | 23 |
| Stars | 48 | 9,972 |
| Growth | - | 24.4% |
| Activity | 1.1 | 7.2 |
| Latest commit | almost 3 years ago | about 1 month ago |
| Language | TypeScript | Rust |
| License | GNU Affero General Public License v3.0 | Creative Commons Zero v1.0 Universal |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
hyperfiler
- We are drowning in churn and noise. I am fighting by switching this site to PDF
> HTML can easily be made offline-able. Base64 your images or use SVG, put your CSS in the HTML page, remove all two-way data interaction; basically, reduce HTML to the same performance as PDF and allow it to be downloaded.
I built a tool for this exact purpose[0], since the HTML specification and modern browsers have a lot of nice features for creating and reading documents compared to PDF (reflow and responsive page scaling, accessibility, easy sharing, lots of styling options that are easy to use, the ability for the user to modify the document or change the style, integration with existing web technologies, etc.). In general I would rather read an HTML document than a PDF document, since I like to modify the styling in various ways (dark-theme extensions in the browser, for example), which may be hard to do with a PDF, but it's more of a personal preference. Some people will prefer that the document adjusts to the screen size of the device (many HTML pages), and others will prefer the exact same or similar rendering regardless of the screen size (PDF).
Either way, it's kind of a fun idea, making a website using just PDFs. Not the most practical choice, but fun nonetheless.
[0] https://github.com/chowderman/hyperfiler
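As a rough idea of the single-file bundling workflow described above, an invocation might look something like the sketch below. This is an assumption, not taken from the HyperFiler docs: the positional URL and output-filename arguments are guesses based on the project description.

```console
# Assumed invocation (unverified): bundle a page and its resources
# (images, CSS, scripts) into one self-contained HTML file.
$ hyperfiler https://example.com/article.html bundled-article.html
```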
- HTTrack Website Copier – Free Software Offline Browser (GNU GPL)
There is also a similar program called HyperFiler[0]* that bundles web pages into single HTML files, with a few more options such as a headless Chromium transport option, built-in minifiers, page sanitizers, and an option to grayscale the output pages. It's TypeScript-based and has a programmatic API to customize the bundling process as well.
[0] https://github.com/chowderman/hyperfiler
* disclaimer: I created HyperFiler
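To illustrate the kinds of options mentioned in that comment (headless transport, minification, grayscaling), here is a sketch of what such an invocation could look like. Every flag name below is a hypothetical stand-in; none have been verified against HyperFiler's actual CLI.

```console
# Hypothetical flag spellings for the features described above;
# check `hyperfiler --help` for the real option names.
$ hyperfiler --headless --minify-html --grayscale-images \
    https://example.com/ out.html
```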
- HyperFiler: Archive web pages by bundling them into single HTML files
- HyperFiler: Bundle web pages into hyper minified, single HTML files
monolith
- 🛠️ Non-AI Open Source Projects that are 🔥
Monolith is a CLI tool for saving complete web pages as a single HTML file.
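For reference, a minimal monolith invocation looks roughly like this; the `-o` output flag matches the project's README as I recall it, but treat the exact flags as an assumption and consult `monolith --help`.

```console
# Save a page with its assets (CSS, images, fonts) embedded
# into a single self-contained HTML file.
$ monolith https://example.com/ -o example.html
```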
- An Introduction to the WARC File
I have never used monolith, so I can't say anything with certainty, but two things in your description are worth highlighting about the goals of WARC versus the umpteen bazillion "save this one page I'm looking at as a single file" type projects:
1. WARC is designed, as a goal, to archive the request-response handshake. It does not get into the business of trying to make it easy for a browser to subsequently display that content, since that's the browser's problem.
2. Using your cited project specifically, observe the number of "well, save it, but ..." options <https://github.com/Y2Z/monolith#options>, which stand in stark contrast to the archiving goals I just described. It's not a good snapshot of history if the server responded with `content-type: text/html;charset=iso-8859-1` back in the 90s, but "modern tools" want everything to be UTF-8, so we'll just convert it, shall we? Bah, I don't like JavaScript, so we'll just toss that out, shall we? And so on.
For 100% clarity: monolith and similar tools may work fantastically for any individual's workflow, and I'm not here to yuck anyone's yum; but I do want to highlight that, all things being equal, it should always be possible to derive monolith files from WARC files, because WARC files are (or at least have the goal of being) a perfect-fidelity record of what the exchange was. I would guess only pcap files would be of higher fidelity, but they also carry a lot more extraneous or potentially privacy-violating detail.
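The WARC-first approach that commenter describes can be tried with stock wget, which can record the raw request/response exchanges alongside the fetched files; a minimal sketch using wget's documented `--warc-file` and `--page-requisites` options (the example URL is a placeholder):

```console
# Fetch a page plus its prerequisites while recording the raw HTTP
# exchanges; wget writes snapshot.warc.gz next to the saved files.
$ wget --page-requisites --warc-file=snapshot https://example.com/
# A single-file rendering (e.g. via monolith) can then be derived
# later from the archived responses, not the other way around.
```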
- Reddit limits the use of the API to 1000; let's work together as a team to save the content of the StableDiffusion subreddit
- nix-init: Create Nix packages with just the URL, with support for dependency inference, license detection, hash prefetching, and more
```console
$ nix-init default.nix -u https://github.com/Y2Z/monolith
[...] (press enter to select the defaults)
$ nix-build -E "(import <nixpkgs> { }).callPackage ./. { }"
[...]
$ result/bin/monolith --version
monolith 2.7.0
```
- What is the best free, least likely to discontinue, high data allowance app/service for saving articles/webpages permanently?
For example, here’s a command-line tool to save webpages as HTML files: https://github.com/Y2Z/monolith
- Offline Internet Archive
- Rust Easy! Modern Cross-platform Command Line Tools to Supercharge Your Terminal
monolith: Convert any webpage into a single HTML file with all assets inlined.
- Is there a way to (bulk) save all tabs as a pdf document in a quick way?
There is also a program (monolith: https://github.com/Y2Z/monolith) that does the same.
- Is there a good list of up-to-date data archiving tools for different websites?
Besides wget, for single pages I use monolith: https://github.com/Y2Z/monolith
- Ask HN: Full-text browser history search forever?
You can pipe the URLs through something like monolith[1].
[1] https://github.com/Y2Z/monolith
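A minimal sketch of that piping idea, assuming a plain-text file of history URLs (`urls.txt` is a hypothetical input, and the `-o` flag is the same assumption as above):

```console
# Bundle each URL from the list into its own numbered HTML file.
$ i=0; while read -r url; do monolith "$url" -o "page_$((++i)).html"; done < urls.txt
```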