Jieba
结巴中文分词 (Jieba Chinese word segmentation) (by fxsjy)
Stanza
Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages (by stanfordnlp)
| | Jieba | Stanza |
|---|---|---|
| Mentions | 6 | 8 |
| Stars | 32,442 | 7,053 |
| Growth | - | 0.6% |
| Activity | 0.0 | 9.8 |
| Latest commit | about 2 months ago | 2 days ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Jieba
Posts with mentions or reviews of Jieba.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-14.
- [OC] How Many Chinese Characters You Need to Learn to Read Chinese!
  "jieba to do Chinese word segmentation"
- Sentence parser for Mandarin?
  "Jieba: Chinese text segmenter"
- How many in here use Google Sheets to keep track of their Chinese vocabulary? (2 pics) - More info in the comments
  "If you know some Python, you can use a popular library called Jieba 结巴 to automatically get pinyin for every word. (Jieba has actually been ported to many languages.) You can also use it to break a Chinese text into a set of unique words for easy addition to your spreadsheet."
- Where can I download a database of Chinese word classifications (noun, verb, etc)
- Learn vocabulary effortlessly while browsing the web [FR,EN,DE,PT,ES]
  "Since you're saying the main issue is segmentation, there are libraries to help with that. jieba is fantastic if you have a Python backend; nodejieba (50k downloads/week) if it's more on the JS side."
- I'm looking for a specific vocab list
  https://github.com/fxsjy/jieba/ (has some good word frequency data)
Stanza
Posts with mentions or reviews of Stanza.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-01-06.
- Down and Out in the Magic Kingdom
- Parts of speech tagged for German
  "I use Python's spacy library: https://spacy.io/models/de or stanza: https://stanfordnlp.github.io/stanza/ each with their respective language models."
- Off the shelf sentence parsers?
  "stanza has a constituency parser. There's a model compatible with the dev branch with an accuracy of 95.8 on PTB, using RoBERTa as a bottom layer, so it's pretty decent at this point. (The currently released model is not as accurate, but the better model is easy to obtain.) There's also Tregex as a Java add-on, which can very easily search for the noun phrase highest up in the tree: NP !>> NP will match a noun phrase that is not dominated by any higher-up noun phrase."
- The Spacy NER model for Spanish is terrible
- Spacy vs NLTK for Spanish Language Statistical Tasks
- Stanza not tokenising sentences as expected
  "I am using Stanza to tokenise the sentences."
- Stanza – A Python NLP Package for Many Human Languages
What are some alternatives?
When comparing Jieba and Stanza you can also consider the following projects:
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
SnowNLP - Python library for processing Chinese text
NLTK - NLTK Source
BERT-NER - Pytorch-Named-Entity-Recognition-with-BERT
pkuseg-python - pkuseg多领域中文分词工具; The pkuseg toolkit for multi-domain Chinese word segmentation
flair - A very simple framework for state-of-the-art Natural Language Processing (NLP)
TextBlob - Simple, Pythonic, text processing--Sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.
pytext - A natural language modeling framework based on PyTorch
textacy - NLP, before and after spaCy
polyglot - Multilingual text (NLP) processing toolkit