simplemma vs Jieba

| | simplemma | Jieba |
|---|---|---|
| Mentions | - | 6 |
| Stars | 125 | 32,442 |
| Growth | - | - |
| Activity | 5.2 | 0.0 |
| Latest commit | 9 days ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
simplemma
We haven't tracked posts mentioning simplemma yet.
Tracking mentions began in Dec 2020.
Jieba
- [OC] How Many Chinese Characters You Need to Learn to Read Chinese!
  > jieba to do Chinese word segmentation
- Sentence parser for Mandarin?
  > Jieba: Chinese text segmenter
- How many in here use Google Sheets to keep track of their Chinese vocabulary? (2 pics) - More info in the comments
  > If you know some Python, you can use a popular library called Jieba (结巴) to automatically get pinyin for every word. (Jieba has actually been ported to many languages.) You can also use it to break a Chinese text into a set of unique words for easy addition to your spreadsheet.
- Where can I download a database of Chinese word classifications (noun, verb, etc.)?
- Learn vocabulary effortlessly while browsing the web [FR, EN, DE, PT, ES]
  > Since you're saying the main issue is segmentation, there are libraries to help with that: jieba is fantastic if you have a Python backend, nodejieba (50k downloads/week) if it's more JS-side.
- I'm looking for a specific vocab list
  > https://github.com/fxsjy/jieba/ (has some good word frequency data)
What are some alternatives?
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
textacy - NLP, before and after spaCy
SnowNLP - Python library for processing Chinese text
Lineflow - ⚡ A lightweight NLP data loader for all deep learning frameworks in Python
NLTK - NLTK Source
pkuseg-python - The pkuseg toolkit for multi-domain Chinese word segmentation
Stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
PyTorch-NLP - Basic Utilities for PyTorch Natural Language Processing (NLP)
TextBlob - Simple, Pythonic text processing: sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.