monolith vs readability

| | monolith | readability |
|---|---|---|
| Mentions | 23 | 52 |
| Stars | 10,149 | 8,204 |
| Growth | 2.2% | 1.3% |
| Activity | 7.2 | 6.3 |
| Latest commit | 15 days ago | 14 days ago |
| Language | Rust | JavaScript |
| License | Creative Commons Zero v1.0 Universal | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
monolith
-
🛠️Non-AI Open Source Projects that are 🔥
Monolith is a CLI tool for saving complete web pages as a single HTML file.
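For orientation, basic usage is a single command; the URL and output filename below are placeholders (see `monolith --help` or the project README for the full option list):

```console
$ monolith https://example.com -o example.html
```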
-
An Introduction to the WARC File
I have never used monolith, so I can't say anything with certainty, but two things in your description are worth highlighting about the difference between the goals of WARC and the umpteen bazillion "save this one page I'm looking at as a single file" type projects:
1. WARC is designed, as a goal, to archive the request-response handshake. It does not get into the business of trying to make it easy for a browser to subsequently display that content, since that's a browser's problem
2. Using your cited project specifically, observe the number of "well, save it but ..." options <https://github.com/Y2Z/monolith#options>, which stands in stark contrast to the archiving goals I just described. It's not a good snapshot of history if the server responded with `content-type: text/html;charset=iso-8859-1` back in the 90s but "modern tools" want everything to be UTF-8, so we'll just convert it, shall we? Bah, I don't like JavaScript, so we'll just toss that out, shall we? And so on
For 100% clarity: monolith, and similar tools, may work fantastically for any individual's workflow, and I'm not here to yuck anyone's yum; but I do want to highlight that, all things being equal, it should always be possible to derive monolith files from WARC files, because WARC files are (or at least aim to be) a perfect-fidelity record of what the exchange was. I would guess only pcap files would be of higher fidelity, but they also carry a lot more extraneous or potentially privacy-violating detail
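For a concrete sense of the "save it but ..." options the comment above is referring to, monolith exposes flags that deliberately drop parts of the page. A hedged sketch (the short flag letters may differ between versions, so check `monolith --help`):

```console
$ monolith https://example.com -j -i -c -o example-stripped.html   # roughly: no JavaScript, no images, no CSS
```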
- Reddit limits the use of its API to 1000; let's work together to save the content of the StableDiffusion subreddit as a team
-
nix-init: Create Nix packages with just the URL, with support for dependency inference, license detection, hash prefetching, and more
```console
$ nix-init default.nix -u https://github.com/Y2Z/monolith
[...] (press enter to select the defaults)
$ nix-build -E "(import <nixpkgs> { }).callPackage ./. { }"
[...]
$ result/bin/monolith --version
monolith 2.7.0
```
-
What is the best free, least likely to discontinue, high data allowance app/service for saving articles/webpages permanently?
For example, here’s a command-line tool to save webpages as HTML files: https://github.com/Y2Z/monolith
- Offline Internet Archive
-
Rust Easy! Modern Cross-platform Command Line Tools to Supercharge Your Terminal
monolith: Convert any webpage into a single HTML file with all assets inlined.
-
Is there a way to (bulk) save all tabs as a pdf document in a quick way?
There is also a program (monolith: https://github.com/Y2Z/monolith) that does the same
-
Is there a good list of up-to-date data archiving tools for different websites?
besides wget, for single pages I use monolith https://github.com/Y2Z/monolith
-
Ask HN: Full-text browser history search forever?
You can pipe the URLs through something like monolith[1].
https://github.com/Y2Z/monolith
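A minimal sketch of that piping idea, assuming a `urls.txt` with one history URL per line (the file name and output naming scheme are illustrative, not from the original comment):

```console
$ n=0; while read -r url; do n=$((n+1)); monolith "$url" -o "page-$n.html"; done < urls.txt
```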
readability
-
2markdown – Transform Websites into Markdown
Why not just use something like https://github.com/mozilla/readability
And not pay $0.01 per request?
There’s a node version too https://www.npmjs.com/package/@mozilla/readability
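A minimal Node sketch of using that package, assuming Node 18+ (for the global `fetch`) and `npm install @mozilla/readability jsdom`; jsdom is my assumption for supplying the DOM that Readability expects outside a browser:

```javascript
// Fetch a page, parse it into a DOM, and let Readability pull out the article.
const { Readability } = require('@mozilla/readability');
const { JSDOM } = require('jsdom');

async function extract(url) {
  const html = await (await fetch(url)).text();
  const dom = new JSDOM(html, { url }); // passing the URL keeps relative links resolvable
  return new Readability(dom.window.document).parse(); // { title, byline, content, textContent, ... }
}

extract('https://example.com/some-article').then((article) => {
  console.log(article.title);
  console.log(article.textContent.slice(0, 300));
});
```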
- Mozilla: Readability.js
-
CSS for readability
I'm working with Mozilla's readability library https://github.com/mozilla/readability to get the "readable" text from articles, and now I want to style the extracted text in a readable way.
-
Building a Serverless Reader View with Lambda and Chrome
Do you remember the Firefox Reader View? It's a feature that removes all unnecessary components like buttons, menus, images, and so on, from a website, focusing on the readable content of the page. The library powering this feature is called Readability.js, which is open source.
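In the browser itself the core call is tiny; a sketch that mirrors the bookmarklet quoted further down this page, assuming Readability.js is already loaded:

```javascript
// Clone the document first so Readability's mutations don't alter the live page.
const article = new Readability(document.cloneNode(true)).parse();
console.log(article.title, article.excerpt);
```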
-
Webrecorder: Capture interactive websites and replay them at a later time
I wonder if Firefox "reader mode as a utility" might be a viable alternative for Pinboard-like "content oriented" archiving?
https://github.com/mozilla/readability
-
Creating an advanced search engine with PostgreSQL
Depending upon the type of content, one might want to look into using Readability (the browser's reader view) to parse the webpage. It will give you all the useful info without the junk. Then you can put it in the DB as needed; a sketch of that follows after the links below.
https://github.com/mozilla/readability
Btw, readability is also available in a few other languages, like Kotlin:
https://github.com/dankito/Readability4J
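A sketch of that extract-then-store idea, assuming Node with `@mozilla/readability`, `jsdom`, and `pg` installed; the table and column names are made up for illustration:

```javascript
const { Readability } = require('@mozilla/readability');
const { JSDOM } = require('jsdom');
const { Client } = require('pg');

async function indexPage(url) {
  const html = await (await fetch(url)).text();
  const doc = new JSDOM(html, { url }).window.document;
  const article = new Readability(doc).parse();

  const db = new Client(); // connection details come from the usual PG* environment variables
  await db.connect();
  // body_tsv is a tsvector column, so Postgres full-text search can query it directly
  await db.query(
    `INSERT INTO pages (url, title, body, body_tsv)
     VALUES ($1, $2, $3, to_tsvector('english', $3))`,
    [url, article.title, article.textContent]
  );
  await db.end();
}

indexPage('https://example.com/some-article').catch(console.error);
```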
-
Seeking a tool or method to convert webpages into Q&A format using NLP
Use Mozilla's Readability to extract that sweet, sweet text content from webpages.
-
I built a free prompt managing tool - Knit
Same as above but the ability to grab the entire article text (you can use the Readability library for that: https://github.com/mozilla/readability)
-
I need automatic source URLs when I paste any text onto a card or note, like on OneNote.
```javascript
// Original script
// https://gist.github.com/kepano/90c05f162c37cf730abb8ff027987ca3
// Bookmarklet Converter
// https://caiorss.github.io/bookmarklet-maker/
// Libraries
// https://github.com/mixmark-io/turndown
// https://github.com/mozilla/readability
javascript: Promise.all([
  import('https://unpkg.com/[email protected]?module'),
  import('https://unpkg.com/@tehshrike/[email protected]'),
]).then(async ([{ default: Turndown }, { default: Readability }]) => {
  /* Optional vault name */
  const vault = "";
  /* Optional folder name such as "Clippings/" */
  const folder = "Clippings/";
  /* Optional tags */
  const tags = "";

  function getSelectionHtml() {
    var html = "";
    if (typeof window.getSelection != "undefined") {
      var sel = window.getSelection();
      if (sel.rangeCount) {
        var container = document.createElement("div");
        for (var i = 0, len = sel.rangeCount; i < len; ++i) {
          container.appendChild(sel.getRangeAt(i).cloneContents());
        }
        html = container.innerHTML;
      }
    } else if (typeof document.selection != "undefined") {
      if (document.selection.type == "Text") {
        html = document.selection.createRange().htmlText;
      }
    }
    return html;
  }

  const selection = getSelectionHtml();
  const { title, byline, content } = new Readability(document.cloneNode(true)).parse();

  function getFileName(fileName) {
    var userAgent = window.navigator.userAgent,
      platform = window.navigator.platform,
      windowsPlatforms = ['Win32', 'Win64', 'Windows', 'WinCE'];
    if (windowsPlatforms.indexOf(platform) !== -1) {
      fileName = fileName.replace(':', '').replace(/[/\\?%*|"<>]/g, '-');
    } else {
      fileName = fileName.replace(':', '').replace(/\//g, '-').replace(/\\/g, '-');
    }
    return fileName;
  }
  const fileName = getFileName(title);

  if (selection) {
    var markdownify = selection;
  } else {
    var markdownify = content;
  }

  if (vault) {
    var vaultName = '&vault=' + encodeURIComponent(`${vault}`);
  } else {
    var vaultName = '';
  }

  const markdownBody = new Turndown({
    headingStyle: 'atx',
    hr: '---',
    bulletListMarker: '-',
    codeBlockStyle: 'fenced',
    emDelimiter: '*',
  }).turndown(markdownify);

  var date = new Date();

  function convertDate(date) {
    var yyyy = date.getFullYear().toString();
    var mm = (date.getMonth() + 1).toString();
    var dd = date.getDate().toString();
    var mmChars = mm.split('');
    var ddChars = dd.split('');
    return yyyy + '-' + (mmChars[1] ? mm : "0" + mmChars[0]) + '-' + (ddChars[1] ? dd : "0" + ddChars[0]);
  }
  const today = convertDate(date);

  // This is the output template
  // It is similar to an Obsidian core template
  // except to insert a value we use: ${value} instead of {{value}}
  const fileContent = `---
type: clipping
date_added: ${today}
aliases: []
tags: [${tags}]
---
author:: ${byline.toString().split('\n')[0].trim()}
source:: [${title}](${document.URL})

${markdownBody}
`;

  // This copies your text to the clipboard
  navigator.clipboard.writeText(fileContent);

  // This creates a new document in Obsidian containing your clipping
  // I commented it out as this isn't what you asked for
  /* document.location.href = "obsidian://new?"
       + "file=" + encodeURIComponent(folder + fileName)
       + "&content=" + encodeURIComponent(fileContent)
       + vaultName; */
})
```
- Any JS packages to only scrape relevant content from a webpage?
What are some alternatives?
SingleFile - Web Extension for saving a faithful copy of a complete web page in a single HTML file
parser - 📜 Extract meaningful content from the chaos of a web page
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
koreader - An ebook reader application supporting PDF, DjVu, EPUB, FB2 and many more formats, running on Cervantes, Kindle, Kobo, PocketBook and Android devices
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
hn-search - Hacker News Search
shrface - Extend eww/nov with org-mode features, archive web pages to org files with shr.
readability.php - PHP port of Mozilla's Readability.js
archivy - Archivy is a self-hostable knowledge repository that allows you to learn and retain information in your own personal and extensible wiki.
rssguard - Feed reader (and podcast player) which supports RSS/ATOM/JSON and many web-based feed services.
Wallabag - wallabag is a self-hostable application for saving web pages: Save and classify articles. Read them later. Freely.
SponsorBlock - Skip YouTube video sponsors (browser extension)