wiktextract
awesome-web-archiving
| | wiktextract | awesome-web-archiving |
|---|---|---|
| Mentions | 7 | 13 |
| Stars | 704 | 1,818 |
| Growth | - | 2.1% |
| Activity | 9.8 | 5.2 |
| Latest commit | 8 days ago | 4 days ago |
| Language | Python | - |
| License | GNU General Public License v3.0 or later | Creative Commons Zero v1.0 Universal |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wiktextract
- Wiktionary dump file parser and multilingual data extractor
- How to Download All of Wikipedia onto a USB Flash Drive
-
I built a dictionary app even with more than 300 apps available on the App Store
Great work
I'm working on a similar dictionary app and found Wiktionary insanely usable as a dictionary source.
Here is one more project aiming to make wiktionary data usable as json data structure: https://github.com/tatuylonen/wiktextract.
It has a link to a site https://kaikki.org/ which hosts dictionary data dumps.
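As a rough sketch of what consuming those dumps looks like, assuming the kaikki.org layout of one wiktextract JSON entry per line with `word`, `pos`, and `senses`/`glosses` fields (check your dump's actual schema):

```python
import json

def load_entries(lines):
    """Parse kaikki.org-style JSONL: one wiktextract entry per line."""
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        # Flatten the nested senses into a simple list of glosses.
        glosses = [
            g
            for sense in entry.get("senses", [])
            for g in sense.get("glosses", [])
        ]
        entries.append(
            {"word": entry.get("word"), "pos": entry.get("pos"), "glosses": glosses}
        )
    return entries

# Tiny inline sample mimicking the dump format; a real dump would be
# read line by line from a (large) file instead.
sample = [
    '{"word": "dictionary", "pos": "noun",'
    ' "senses": [{"glosses": ["A reference work listing words"]}]}',
]
entries = load_entries(sample)
```

Streaming line by line like this matters in practice, since the full dumps are far too large to load into memory at once.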
-
Dynamically generating minimal pair decks for Anki
Hm, that would be a good idea... if I didn't have to download so much data (over 20 GB for just audio?!). But the Python library that produced those dumps (https://github.com/tatuylonen/wiktextract) is more manageable: using it would involve getting the Wikimedia dump file, parsing out the relevant data for every word on the list, and what follows is mostly the same, except I end up with a bunch of cached files.
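For a minimal-pair workflow like this, the pronunciation data is the interesting part. A sketch of pulling it out of one entry, assuming the kaikki.org field names (`sounds`, `ipa`, `ogg_url`) and a hypothetical sample entry:

```python
import json

def pronunciations(entry_line):
    """Extract IPA transcriptions and audio URLs from one wiktextract entry.

    The field names ("sounds", "ipa", "ogg_url") follow the kaikki.org
    JSON dumps as I understand them; verify against your dump's schema.
    """
    entry = json.loads(entry_line)
    sounds = entry.get("sounds", [])
    ipa = [s["ipa"] for s in sounds if "ipa" in s]
    audio = [s["ogg_url"] for s in sounds if "ogg_url" in s]
    return entry.get("word"), ipa, audio

# Hypothetical entry in the one-JSON-object-per-line dump format.
sample = ('{"word": "pat", "sounds": [{"ipa": "/pæt/"},'
          ' {"audio": "en-us-pat.ogg", "ogg_url": "https://example.org/pat.ogg"}]}')
word, ipa, audio = pronunciations(sample)
```

Filtering on the IPA strings (rather than spellings) is what lets you pair words that differ by exactly one phoneme for an Anki deck.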
-
What are some of the best digital free dictionaries available online (even for commercial use)?
Many parsers are available. https://github.com/tatuylonen/wiktextract
-
Best Approach to importing a languages dictionary
I'd probably try pulling from Wiktionary, there looks to be a Python package that can do it here.
-
This is not perfect but it's a start
And the JSON is built with https://github.com/tatuylonen/wiktextract, which I have to thank
awesome-web-archiving
-
Show HN: OpenAPI DevTools – Chrome ext. that generates an API spec as you browse
https://github.com/iipc/awesome-web-archiving/blob/main/READ...
-
DPReview.com is going down effective April 10.
People have pasted this around: https://github.com/iipc/awesome-web-archiving. You could probably do it with wget if you had enough time.
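For the wget route, a basic mirroring invocation could look like the sketch below. The flags are standard GNU wget options; the wrapper function and destination directory are just for illustration, and a site that size would also want rate limiting and retry tuning.

```python
import subprocess

def wget_mirror_cmd(url, dest="archive"):
    """Build a wget command for a basic site mirror (standard GNU wget flags)."""
    return [
        "wget",
        "--mirror",            # recursive download with timestamping
        "--convert-links",     # rewrite links so the copy browses locally
        "--adjust-extension",  # append .html where the server omits it
        "--page-requisites",   # fetch the CSS/JS/images each page needs
        "--no-parent",         # never ascend above the starting URL
        "--directory-prefix", dest,
        url,
    ]

cmd = wget_mirror_cmd("https://www.dpreview.com/")
# To actually run it: subprocess.run(cmd, check=True)
```

This only captures static content; pages built dynamically in the browser are better served by the WARC/WACZ tools on the list above.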
- DPReview.com to close on April 10 after 25 years of operation
-
This Layoff Does Not Exist: tech layoff announcements but weird
Maybe something on this list can help you https://github.com/iipc/awesome-web-archiving
-
Software to keep Website pages "alive"?
Awesome Web Archiving has a longer list of tools and software
-
How to Download All of Wikipedia onto a USB Flash Drive
Not related to the OP topic or zim but I was looking into archiving my bookmarks and other content like documentation sites and wikis. I'll list some of the things I ended up using.
ArchiveBox[1]: Pretty much a self-hosted wayback machine. It can save websites as plain html, screenshot, text, and some other formats. I have my bookmarks archived in it and have a bookmarklet to easily add new websites to it. If you use the docker-compose you can enable a full-text search backend for an easy search setup.
WebRecorder[2]: A browser extension that creates WACZ archives directly in the browser, capturing exactly what content you load. I use it on sites with annoying dynamic content that tools like the Wayback Machine and ArchiveBox wouldn't be able to copy.
ReplayWeb[3]: An interface to browse archive types like WARC, WACZ, and HAR. The interface is just like browsing through your browser. It can be self-hosted as well for the full offline experience.
browsertrix-crawler[4]: A CLI tool to scrape websites and output to WACZ. It's super easy to run with Docker and I use it to scrape entire blogs and docs for offline use. It uses Chrome to load webpages and has some extra features like custom browser profiles, interactive login, and autoscroll/autoplay. I use the `--generateWACZ` parameter so I can use ReplayWeb to easily browse through the final output.
For bookmark and misc webpage archiving then ArchiveBox should be more than enough. Check out this repo for an amazing list of tools and resources https://github.com/iipc/awesome-web-archiving
[1] https://github.com/ArchiveBox/ArchiveBox
- Self Hosted Roundup #14
- SingleFile: Save a Complete Web Page into a Single HTML File
- [HELP] Starting Out for a Beginner
- Reflections as the Internet Archive turns 25
What are some alternatives?
WiktionaryParser - A Python Wiktionary Parser
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
Maat - Validation and transformation library powered by deductive ascending parser. Made to be extended for any kind of project.
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
trankit - Trankit is a Light-Weight Transformer-based Python Toolkit for Multilingual Natural Language Processing
obelisk - Go package and CLI tool for saving web page as single HTML file
laserembeddings - LASER multilingual sentence embeddings as a pip package
SingleFile-MV3 - SingleFile version compatible with Manifest V3. The future, right now!
zim-tools - Various ZIM command line tools
firefox-scrapbook - ScrapBook X – a legacy Firefox add-on that captures web pages to local device for future retrieval, organization, annotation, and edit.
Kotoba - Quickly search the built-in iOS dictionary to see definitions of words. Collect words you want to remember.
youtube-dl - Command-line program to download videos from YouTube.com and other video sites