mwparserfromhell vs WiktionaryParser

| | mwparserfromhell | WiktionaryParser |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 705 | 355 |
| Growth | - | - |
| Activity | 6.6 | 0.0 |
| Latest commit | 5 days ago | 4 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects being tracked.
mwparserfromhell
- FLaNK AI Weekly for 29 April 2024
- Processing Wikipedia Dumps With Python
  There's also https://github.com/earwig/mwparserfromhell, if you don't want to roll your own.
- [Python] How can I clean up Wikipedia's XML backup dump to create dictionaries of commonly used words for multiple languages?
  In particular, what you're looking at is not XML but wikitext. I found a discussion on Stack Overflow about solving the same problem of extracting text from wikitext. Since you already have the dump, the most promising solution in Python seems to be to run each page through mwparserfromhell. According to the top Stack Overflow answer, you could use something like the snippet sketched after these posts.
- How can I clean up Wikipedia's XML backup dump to create dictionaries of commonly used words for multiple languages?
  Thank you so much! I was actually talking about the markup language within the text. Turns out it's MediaWiki's own markup, and user lowerthansound kindly suggested I use this: https://github.com/earwig/mwparserfromhell
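Putting those suggestions together, here is a minimal sketch of that approach, aimed at the thread's goal of building a word-frequency dictionary. The wikitext fragment is invented for the example; in practice each page body would come from the XML dump.

```python
import mwparserfromhell
from collections import Counter

# Invented fragment of wikitext, standing in for one page body from the dump.
wikitext = "'''Berlin''' is the [[capital city|capital]] of [[Germany]].{{citation needed}}"

# Parse the markup into a Wikicode tree, then strip templates, links, and formatting.
wikicode = mwparserfromhell.parse(wikitext)
plain_text = wikicode.strip_code()
print(plain_text)  # "Berlin is the capital of Germany."

# Tally words for a frequency dictionary, as in the original question.
word_counts = Counter(plain_text.lower().split())
print(word_counts.most_common(3))
```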
WiktionaryParser
- I spent 2 weeks building a complex data parsing program for a data project and today I found out that such a library already exists.
  Today I just randomly searched "Wiktionary parser" and, lo and behold, a perfectly working library appeared in the search results. Yeah, all my problems are solved, but I'm empty inside. I could've just Googled it and saved my 2 weeks' worth of effort for the rest of the project.
- [UPDATE] Here's the transcript of the 1781 most-used German nouns according to a 4.2-million-word corpus study performed by Routledge
  Haha, no, doing them by hand would be way too tedious. I used this library to scrape Wiktionary and queried every word in your noun list (see the sketch below). Note that some items, like "die Leute", only exist in the plural.
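As a rough sketch of that WiktionaryParser workflow: the two words here are made-up stand-ins for the full noun list, the calls follow the library's documented fetch/set_default_language interface, and fetching requires network access to Wiktionary.

```python
from wiktionaryparser import WiktionaryParser

parser = WiktionaryParser()
parser.set_default_language('german')  # look up German words on English Wiktionary

# Two nouns standing in for the full list from the post.
for word in ['Leute', 'Haus']:
    for entry in parser.fetch(word):              # one dict per etymology section
        for definition in entry.get('definitions', []):
            # Each definition carries a part of speech and a list of sense strings.
            print(word, definition.get('partOfSpeech'), definition.get('text', [])[:2])
```

From the returned definitions one can then pull gender and plural information where the entry provides it; plural-only nouns like "Leute" simply list no singular form.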
What are some alternatives?
wikitextparser - A Python library to parse MediaWiki WikiText
wiktextract - Wiktionary dump file parser and multilingual data extractor
archwiki - MediaWiki used on Arch Linux websites (read-only mirror)
Mediawiker - A plugin for the Sublime Text editor that makes it possible to use it as a wiki editor on MediaWiki-based sites like Wikipedia and many others.
wikiteam - Tools for downloading and preserving wikis. We archive wikis, from Wikipedia to tiniest wikis. As of 2023, WikiTeam has preserved more than 350,000 wikis.
pywikibot - A Python library that interfaces with the MediaWiki API (read-only mirror of gerrit.wikimedia.org; see https://www.mediawiki.org/wiki/Developer_account for contributing).
MediaWiki-Tools - Tools for getting data from MediaWiki websites
isbntools - Python app/framework for 'all things ISBN' including metadata, descriptions, covers...
wiki_dump - A library that assists in traversing and downloading from Wikimedia Data Dumps and their mirrors.
pydantic - Data validation using Python type hints