wikiextractor VS pudzu-packages

Compare wikiextractor vs pudzu-packages and see what their differences are.

wikiextractor

A tool for extracting plain text from Wikipedia dumps (by attardi)

pudzu-packages

Various Python packages, mostly geared towards dataviz. (by Udzu)
                wikiextractor                           pudzu-packages
Mentions        3                                       6
Stars           3,630                                   1
Growth          -                                       -
Activity        0.0                                     7.9
Last commit     3 months ago                            about 2 months ago
Language        Python                                  Python
License         GNU Affero General Public License v3.0  MIT License
Mentions - the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

wikiextractor

Posts with mentions or reviews of wikiextractor. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-13.
  • Letter and next letter frequencies for 24 languages (see comments for non-English plots) [OC]
    3 projects | /r/dataisbeautiful | 13 Jun 2022
    Larger text corpora: each plot is generated from around 450MB of Wikipedia article text (or as much as is available), extracted using wikiextractor.
  • Most similar language to each European language, based purely on letter distribution [OC]
    1 project | /r/dataisbeautiful | 8 Jun 2022
    Methodology: extracted 100MB of article texts from each of the different Wikipedias using https://github.com/attardi/wikiextractor, and counted the character prevalences using Python. The similarity measure is just the sum of the absolute differences in character prevalences (so a lower score means more similar): e.g. if language A has distribution {A: 0.5, B: 0.3, C: 0.2} and language B has distribution {A: 0.8, B: 0.2} then their similarity is |0.5-0.8|+|0.3-0.2|+|0.2-0.0|=0.6. The final chart was generated using graphviz and pillar.
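The similarity measure described in that post can be sketched in a few lines of Python. The distributions below are the toy example from the post, not real frequency data:

```python
def letter_similarity(dist_a, dist_b):
    """Sum of absolute differences in character prevalences.

    Distributions are dicts mapping characters to relative
    frequencies; a lower score means the languages are more similar.
    """
    chars = set(dist_a) | set(dist_b)
    return sum(abs(dist_a.get(c, 0.0) - dist_b.get(c, 0.0)) for c in chars)

# Toy example from the post:
lang_a = {"A": 0.5, "B": 0.3, "C": 0.2}
lang_b = {"A": 0.8, "B": 0.2}
print(round(letter_similarity(lang_a, lang_b), 6))  # 0.6
```

Characters missing from one distribution contribute their full prevalence in the other, which is why the union of keys is taken before summing.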
  • Finding an English Wikipedia dump
    1 project | /r/LanguageTechnology | 5 Aug 2021
    With the help of wikiextractor, I was able to query it and process the dump. However, when I started inspecting it, some articles are empty. For example, AccessibleComputing should not be empty, but the dump gave:
    ```
    AccessibleComputing 0 10 854851586 2021-01-23T15:15:01Z Elli shel wikitext text/x-wiki #REDIRECT [[Computer accessibility]]
    ```
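The "empty" article in that post is a redirect page: its entire wikitext is a `#REDIRECT [[...]]` directive, so there is no body text to extract. A minimal sketch (standard library only, not part of wikiextractor's API) for recognizing such pages:

```python
import re

# MediaWiki redirect pages contain "#REDIRECT [[Target]]" as their
# wikitext, which is why the extracted article body appears empty.
REDIRECT_RE = re.compile(r"^\s*#REDIRECT\s*\[\[(?P<target>[^\]|]+)", re.IGNORECASE)

def redirect_target(wikitext):
    """Return the redirect target of a page's wikitext, or None."""
    m = REDIRECT_RE.match(wikitext)
    return m.group("target").strip() if m else None

print(redirect_target("#REDIRECT [[Computer accessibility]]"))  # Computer accessibility
print(redirect_target("Some ordinary article text"))            # None
```

Filtering out pages whose wikitext matches this pattern before extraction avoids the confusing empty-article output.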

pudzu-packages

Posts with mentions or reviews of pudzu-packages. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-13.

What are some alternatives?

When comparing wikiextractor and pudzu-packages you can also consider the following projects:

hangul-jamo - A library to compose and decompose Hangul syllables using Hangul jamo characters

colorgram.py - A Python module for extracting colors from images. Get a palette of any picture!

twemoji-parser - A Python module made on top of PIL that draws twemoji from text to image.

pudzu - Various Python scripts, mostly geared towards dataviz.