mwparserfromhell
A Python parser for MediaWiki wikicode (by earwig)
wikitextparser
A Python library to parse MediaWiki WikiText (by 5j9)
| | mwparserfromhell | wikitextparser |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 784 | 301 |
| Growth | 1.3% | 0.3% |
| Activity | 3.2 | 8.6 |
| Latest Commit | 2 months ago | 5 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mwparserfromhell
Posts with mentions or reviews of mwparserfromhell. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-29.
- FLaNK AI Weekly for 29 April 2024
- Processing Wikipedia Dumps With Python

  There's also https://github.com/earwig/mwparserfromhell, if you don't want to roll your own.
- [Python] How can I clean up Wikipedia's XML backup dump to create dictionaries of commonly used words for multiple languages?

  In particular, what you're looking at is not XML but wikitext. I found a discussion on Stack Overflow about solving the same problem of getting text out of wikitext. The most promising solution in Python, since you already have the dump, is to run each page through mwparserfromhell. According to the top Stack Overflow answer, you could use something like the sketch shown after this list.
- How can I clean up Wikipedia's XML backup dump to create dictionaries of commonly used words for multiple languages?

  Thank you so much! I was actually talking about the markup language within the text. Turns out it's proprietary to WikiMedia, and user lowerthansound kindly suggested I use this: https://github.com/earwig/mwparserfromhell
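The approach referenced in the posts above boils down to parsing each page's wikitext and calling strip_code() on the result. Here is a minimal sketch with made-up sample markup; it is not the exact snippet from the original Stack Overflow answer, just an illustration of the same idea:

```python
import mwparserfromhell

# Made-up sample wikitext, standing in for one page taken from a dump.
sample = "'''Hello''' [[World|world]]! {{citation needed}} See [https://example.org a link]."

# Parse the markup and strip templates, formatting, and link syntax,
# leaving roughly "Hello world! See a link."
wikicode = mwparserfromhell.parse(sample)
print(wikicode.strip_code())
```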
wikitextparser
Posts with mentions or reviews of wikitextparser. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-04-06.
- Updated: I've saved all of Wikipedia into a SQLITE database!

  The use of regex seems inefficient; is there any reason why you didn't start with lxml or a purpose-built parser like wikitextparser?
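For comparison with the mwparserfromhell sketch above, here is a similar sketch using wikitextparser. The sample markup is invented, and the plain_text() call assumes a reasonably recent release of the library:

```python
import wikitextparser as wtp

# The same kind of made-up sample markup used in the earlier sketch.
sample = "'''Hello''' [[World|world]]! {{citation needed}}"

parsed = wtp.parse(sample)
print([t.name.strip() for t in parsed.templates])  # template names found in the markup
print([w.title for w in parsed.wikilinks])         # wikilink targets
print(parsed.plain_text())                         # text with the markup stripped
```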
What are some alternatives?
When comparing mwparserfromhell and wikitextparser you can also consider the following projects:
wikiteam - Tools for downloading and preserving wikis. We archive wikis, from Wikipedia to the tiniest wikis. As of 2025, WikiTeam has preserved more than 600,000 wikis.
MediaWiki-Tools - Tools for getting data from MediaWiki websites
WiktionaryParser - A Python Wiktionary Parser
Mediawiker - A plugin for the Sublime Text editor that adds the ability to use it as a wiki editor on MediaWiki-based sites like Wikipedia and many others.
pywikibot - A Python library that interfaces with the MediaWiki API. This is a mirror from gerrit.wikimedia.org. Do not submit any patches here. See https://www.mediawiki.org/wiki/Developer_account for contributing.
PlainTextWikipedia - Convert Wikipedia database dumps into plaintext files