zimit vs parser
| | zimit | parser |
|---|---|---|
| Mentions | 9 | 12 |
| Stars | 233 | 5,254 |
| Stars growth | 5.2% | 1.8% |
| Activity | 7.6 | 1.1 |
| Last commit | 16 days ago | 7 months ago |
| Language | Python | JavaScript |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
zimit
-
Zim vs WARC?
There are clear similarities between the two, given that Kiwix has put resources into making WARC content available in ZIM archives (i.e. Zimit-style ZIMs, created with the Zimit scraper and the warc2zim backend). But as u/IMayBeABitShy said, the ZIM specification focuses on providing a highly compressed container that is readable on the fly (i.e. by decompressing only the content needed to show an article), whereas WARC, or rather its packaged form WACZ, is essentially a zipped collection of WARC data (request headers and responses). It is also readable on the fly, but its compression is not as efficient as the Zstandard compression used by modern ZIM archives.
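The on-the-fly point can be sketched in a few lines of Python. This is a toy illustration only, not the real ZIM format (which groups entries into Zstandard-compressed clusters rather than compressing each entry alone): compressing entries separately means a reader can inflate just the article it needs, instead of decompressing a whole archive.

```python
import zlib

# Toy "archive": each entry compressed on its own, so a reader can
# decompress only the requested entry rather than the whole file.
articles = {
    "A/home": b"<html><body>Welcome</body></html>" * 50,
    "A/faq": b"<html><body>Questions</body></html>" * 50,
}

archive = {path: zlib.compress(data) for path, data in articles.items()}

def read_entry(archive, path):
    """Decompress only the requested entry, leaving the rest untouched."""
    return zlib.decompress(archive[path])

# Serving one article touches one compressed blob, nothing else.
page = read_entry(archive, "A/faq")
```

The trade-off the comment thread describes is exactly this: per-entry (or per-cluster) compression buys random access at the cost of a slightly worse compression ratio than compressing the whole stream at once.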
-
What's the "best" way to make your own ZIMs (in docker)?
I'm looking at making my own ZIM, though I'm not sure of the best way to go about it. I've seen zimit on GitHub, and mwoffliner on GitHub too.
-
How do I zimit listings with slideshows?
You would have more chances at getting a technical reply to your technical issue by hitting https://github.com/openzim/zimit/issues I believe
-
Openzim/zimit using docker on Windows 10; mounting the volume with the complete .zim file works how exactly?
The zimit readme says it uses /output as the default directory, so we can use that as the name for our docker's volume.
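Given that default, a hypothetical invocation along these lines mounts a local folder onto the container's /output (the image name and flags here are assumptions from memory and vary between zimit releases; check the zimit README and `zimit --help` for your version):

```shell
# PowerShell on Windows 10: mount .\zim-output as the container's
# /output, where zimit writes the finished .zim file.
docker run -v "${PWD}\zim-output:/output" ghcr.io/openzim/zimit zimit --url https://example.com --name example-site
```

When the crawl finishes, the .zim file appears in the local zim-output folder because it is the same directory the container saw as /output.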
-
Prepping for the end of the internet.
Zimit. This tool allows you to convert an existing website into an offline ZIM archive. https://hub.docker.com/r/openzim/zimit
-
Reading from the web offline and distraction-free
which worked quite well for most sites, but still very far from a general-purpose solution.
There is also a more powerful/general-purpose scraper that generates a ZIM file here: https://github.com/openzim/zimit
It would be really nice to have a "common" scraper code base that takes care of scraping (possibly with a real headless browser) and outputs all assets as files + info as JSON. This common code base could then be used by all kinds of programs to package the content as standalone HTML zip files, ePub, ZIM, or even PDF for crazy people like me who like to print things ;)
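A minimal stdlib-only sketch of that "assets as files + info as JSON" idea, with no headless browser: walk a page's markup, collect asset references, and emit the accompanying metadata as JSON. The `AssetCollector` name and the exact JSON shape are hypothetical, not from any of the projects above.

```python
import json
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    """Collect asset references (img/script/link) and the page title."""
    def __init__(self):
        super().__init__()
        self.assets = []
        self.title = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.assets.append(attrs["src"])
        elif tag == "link" and "href" in attrs:
            self.assets.append(attrs["href"])
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = data.strip()

def scrape_to_json(html):
    """Return the page's metadata and asset list as a JSON string."""
    p = AssetCollector()
    p.feed(html)
    return json.dumps({"title": p.title, "assets": p.assets})

page = ("<html><head><title>Demo</title></head>"
        "<body><img src='a.png'><script src='app.js'></script></body></html>")
info = scrape_to_json(page)
```

A downstream packager (ZIM, ePub, zip, PDF) would then only need to consume the JSON plus the downloaded asset files, which is exactly the separation the comment proposes.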
-
You can now create your own zim files!
There is a limit of 1,000 items for each zim because we don't want to DDoS unsuspecting websites with requests; we also would not be able to afford the bill if it becomes as popular as we think it will be. But since this is free software, you can obviously cut out the middleman by copying, studying, modifying, and redistributing the code that can be found here: https://github.com/openzim/zimit or contact us directly and get the full thing for a small fee (tbd, but this should not be a blocker for legitimate uses).
-
We Developed A Tool To Make A Copy Of Most
Documentation is available at github.com/openzim/zimit and github.com/kiwix (there's a repo for each platform, kiwix-serve and android are the ones to look at atm for integration of service workers)
parser
-
Show HN: I made a tool to clean and convert any webpage to Markdown
Thoroughly scraping is challenging, especially in an environment where you don’t have (or want) a JavaScript runtime.
For content extraction, I found the approach the Postlight library takes quite neat. It scores individual HTML nodes based on some heuristics (text length, link density, CSS classes), then selects the nodes with the highest score. [1] I ported it to Swift for a personal read-later app.
[1] https://github.com/postlight/parser
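A toy version of that scoring idea can be written in a few lines. This is not Postlight's actual algorithm, just an illustration of the heuristics the comment names: reward text length, penalize link density, and nudge the score with class/id hints.

```python
import re

def score_block(html_block):
    """Heuristic content score for one candidate HTML block."""
    text = re.sub(r"<[^>]+>", "", html_block)          # strip tags
    link_text = "".join(re.findall(r"<a\b[^>]*>(.*?)</a>", html_block, re.S))
    if not text.strip():
        return 0.0
    # Fraction of the visible text that lives inside links.
    link_density = len(link_text) / max(len(text), 1)
    score = len(text.split()) * (1.0 - link_density)
    # Class/id hints: boost content-ish names, punish chrome-ish ones.
    if re.search(r'(class|id)="[^"]*(article|content|post)', html_block):
        score *= 1.25
    if re.search(r'(class|id)="[^"]*(nav|footer|sidebar)', html_block):
        score *= 0.25
    return score

blocks = [
    '<div class="nav"><a href="/">Home</a> <a href="/about">About</a></div>',
    '<div class="article-body">Long paragraph of real article text '
    'with many words in it.</div>',
]
best = max(blocks, key=score_block)
```

A navigation bar is almost all link text, so its link density drags its score toward zero, while a paragraph of body copy scores roughly its word count; picking the max then selects the article block.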
-
Trouble Building Chrome Extension to Get News Article Content
I've been working on an enhanced reader mode extension for the last few months. I found that Mercury Reader's parser tool is useful for extracting content. If that's not exactly what you're looking for, readability is another good option. It's the library used inside Firefox's reader mode, and you can use it in any project.
-
What Are The Coolest Virtual Machines You Currently Run 24/7?
I currently have it turned off while I search for better sources, but I have a VM that runs a custom cron script that combines a custom RSS reader, podfox, mercury-parser, and coqui-ai to generate audio podcasts from RSS news feeds. I should probably clean it up and release the script/setup process. With a few tweaks and some AI text-to-speech and a little machine learning audio processing you can get a really good podcast experience from text posts.
-
Extracting Text button no longer works
It looks like Relay could be updated to convert it locally though, since the parser that it uses appears to be open source.
-
Which are some open-source Chrome extensions you want to use on Firefox?
https://github.com/postlight/mercury-parser The only one I need, shit's too good
-
API for getting news fulltext
An alternative would be to extract the plain text from the article's page with either some "readability" API or a library like Mercury Parser: https://github.com/postlight/mercury-parser
-
How does Firefox's Reader View work?
I haven’t directly compared them, but I have also found mercury parser (https://github.com/postlight/mercury-parser) to be very reliable.
Since it turns a website into very plain (X)HTML it‘s fairly easy to use it to make a browsing proxy or automatically produce epub files for e-readers, which is what I do.
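The "plain XHTML to EPUB" step mentioned here is mostly ZIP packaging. A bare-bones sketch of that container, assuming a single-chapter book (real e-readers may expect more metadata, e.g. a dcterms:modified date for EPUB 3):

```python
import zipfile

def make_epub(path, title, xhtml_body):
    """Pack one XHTML page into a minimal EPUB container."""
    container = """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>"""
    opf = f"""<?xml version="1.0"?>
<package version="3.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="id">
  <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:identifier id="id">urn:example:book</dc:identifier>
    <dc:title>{title}</dc:title>
    <dc:language>en</dc:language>
  </metadata>
  <manifest>
    <item id="ch1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
  </manifest>
  <spine><itemref idref="ch1"/></spine>
</package>"""
    page = f"""<?xml version="1.0"?>
<html xmlns="http://www.w3.org/1999/xhtml"><head><title>{title}</title></head>
<body>{xhtml_body}</body></html>"""
    with zipfile.ZipFile(path, "w") as z:
        # The mimetype entry must come first and be stored uncompressed.
        z.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml", container)
        z.writestr("OEBPS/content.opf", opf)
        z.writestr("OEBPS/chapter1.xhtml", page)

make_epub("article.epub", "Saved Article", "<p>Clean article text here.</p>")
```

Feeding the cleaned XHTML that a readability-style parser produces into the body slot is enough for most e-readers to open the result.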
-
Build your self-hosted Evernote
Make sure that at the end of the process you have the node and npm executables installed - the http.webpage integration uses the Mercury Parser API to convert web pages to Markdown.
-
Reading from the web offline and distraction-free
Good luck! Those HTML issues you're coming across are tough and so varied across the web!
I was working with Mercury Parser (pluggable parsing for different sites) in the past.
https://github.com/postlight/mercury-parser
-
The most underused browser feature
What are some alternatives?
rdrview - Firefox Reader View as a command line tool
readability - A standalone version of the readability lib
percollate - A command-line tool to turn web pages into readable PDF, EPUB, HTML, or Markdown docs.
hn-search - Hacker News Search
instascrape - Powerful and flexible Instagram scraping library for Python, providing easy-to-use and expressive tools for accessing data programmatically
Just-Read - A customizable read mode web extension.
zim-plugin-instantsearch - Search as you type in Zim, in similar manner to OneNote Ctrl+E.
FParsec - A parser combinator library for F#
gazpacho - 🥫 The simple, fast, and modern web scraping library
tidy-html5 - The granddaddy of HTML tools, with support for modern standards
nautilus - Turns a collection of documents into a browsable ZIM file