Jieba vs python-pinyin-jyutping-sentence

| | Jieba | python-pinyin-jyutping-sentence |
|---|---|---|
| Mentions | 6 | 1 |
| Stars | 32,442 | 50 |
| Growth | - | - |
| Activity | 0.0 | 1.2 |
| Latest commit | about 2 months ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Jieba mentions:

- [OC] How Many Chinese Characters You Need to Learn to Read Chinese!
  "jieba to do Chinese word segmentation"
- Sentence parser for Mandarin?
  "Jieba: Chinese text segmenter"
- How many in here use Google Sheets to keep track of their Chinese vocabulary? (2 pics) - More info in the comments
  "If you know some Python you can use a popular library called Jieba 结巴 to automatically get pinyin for every word. (Jieba has actually been ported to many languages.) You can also use it to break a Chinese text into a set of unique words for easy addition to your spreadsheet."
- Where can I download a database of Chinese word classifications (noun, verb, etc.)?
- Learn vocabulary effortlessly while browsing the web [FR,EN,DE,PT,ES]
  "Since you're saying the main issue is segmentation, there are libraries to help with that. jieba is fantastic if you have a Python backend, nodejieba (50k downloads/week) if it's more JS-side."
- I'm looking for a specific vocab list
  "https://github.com/fxsjy/jieba/ (has some good word frequency data)"
python-pinyin-jyutping-sentence mentions:

- Sentence parser for Mandarin?
  "Pinyin/Jyutping Generator, Reddit thread here"
What are some alternatives?
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
annotator-js
SnowNLP - Python library for processing Chinese text
g2pC - g2pC: A Context-aware Grapheme-to-Phoneme Conversion module for Chinese
NLTK - NLTK Source
zhvocab - Chinese vocab database, tagged by category
pkuseg-python - pkuseg多领域中文分词工具; The pkuseg toolkit for multi-domain Chinese word segmentation
wordfreq - Access a database of word frequencies, in various natural languages. [Moved to: https://github.com/rspeer/wordfreq]
Stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
OpenCC - Conversion between Traditional and Simplified Chinese
TextBlob - Simple, Pythonic, text processing--Sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.
zhtext - Tools for analyzing Chinese texts