Stanza
Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages (by stanfordnlp)
Jieba
Jieba Chinese word segmentation (by fxsjy)
| | Stanza | Jieba |
|---|---|---|
| Mentions | 8 | 7 |
| Stars | 7,337 | 33,599 |
| Growth | 0.4% | 0.5% |
| Activity | 9.7 | 0.0 |
| Latest commit | 5 days ago | 5 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Stanza
Posts with mentions or reviews of Stanza.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-01-06.
- Down and Out in the Magic Kingdom
- Parts of speech tagged for German
I use Python's spacy library: https://spacy.io/models/de or stanza: https://stanfordnlp.github.io/stanza/ each with their respective language models.
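A minimal sketch of that setup (the example sentence and model choices are illustrative, not taken from the post): part-of-speech tagging the same German sentence with spaCy's small German model and with Stanza's German pipeline.

```python
# Hedged sketch: German POS tagging with spaCy and Stanza.
# Assumes the spaCy model was installed via:  python -m spacy download de_core_news_sm
import spacy
import stanza

text = "Die Katze schläft auf dem Sofa."

# spaCy
nlp_spacy = spacy.load("de_core_news_sm")
for token in nlp_spacy(text):
    print(token.text, token.pos_)

# Stanza (downloads the German models on first run)
stanza.download("de")
nlp_stanza = stanza.Pipeline("de", processors="tokenize,mwt,pos")
for sent in nlp_stanza(text).sentences:
    for word in sent.words:
        print(word.text, word.upos)
```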
- Off the shelf sentence parsers?
Stanza has a constituency parser. There's a model compatible with the dev branch with an accuracy of 95.8 on PTB, using RoBERTa as the bottom layer, so it's pretty decent at this point. (The currently released model is not as accurate, but it's easy to get the better model to you.) There's also Tregex as a Java add-on, which can very easily search for the noun phrase highest up in the tree: NP !>> NP will match a noun phrase that is not dominated by any higher noun phrase.
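A hedged sketch of how the constituency parser is invoked with the released English model (the sentence is illustrative; the accuracy figure above refers to a dev-branch model):

```python
# Hedged sketch: constituency parsing with Stanza's released English model.
import stanza

stanza.download("en")
nlp = stanza.Pipeline("en", processors="tokenize,pos,constituency")
doc = nlp("The quick brown fox jumps over the lazy dog.")
for sentence in doc.sentences:
    # Prints a bracketed parse tree, e.g. (ROOT (S (NP ...) (VP ...)))
    print(sentence.constituency)
```

The NP !>> NP pattern mentioned above would then be run over the resulting trees with Tregex on the Java side.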
- The Spacy NER model for Spanish is terrible
- Spacy vs NLTK for Spanish Language Statistical Tasks
- Stanza not tokenising sentences as expected
I am using Stanza to tokenise the sentences:
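The poster's code isn't shown; below is a minimal sketch of what a Stanza tokenization call looks like (the text is illustrative):

```python
# Hedged sketch: sentence segmentation and tokenization with Stanza.
import stanza

stanza.download("en")
nlp = stanza.Pipeline("en", processors="tokenize")
doc = nlp("Dr. Smith went to Washington. He arrived on Monday.")
for i, sentence in enumerate(doc.sentences, 1):
    print(f"Sentence {i}:", [token.text for token in sentence.tokens])
```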
- Stanza – A Python NLP Package for Many Human Languages
Jieba
Posts with mentions or reviews of Jieba.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-06-13.
- PostgreSQL Full-Text Search in a Nutshell
Let's continue with jieba as an example. This is the main program logic behind pg_jieba; jieba is also available as a Python package, so let's use Python for the example.
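The article's pg_jieba code isn't reproduced here; the following is a hedged sketch of the core jieba calls such an example builds on (the input sentence is made up):

```python
# Hedged sketch: basic jieba segmentation modes.
import jieba

text = "自然语言处理很有趣"  # "Natural language processing is fun"

print("/".join(jieba.cut(text)))             # accurate mode (default)
print("/".join(jieba.cut_for_search(text)))  # search-engine mode, finer-grained splits
```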
- [OC] How Many Chinese Characters You Need to Learn to Read Chinese!
jieba to do Chinese word segmentation
- Sentence parser for Mandarin?
Jieba: Chinese text segmenter
- How many in here use google sheets to keep track on their Chinese vocabulary? (2 pics) - More info in the comments
If you know some Python you can use a popular library called Jieba 结巴 to automatically get pinyin for every word. (Jieba has actually been ported to many languages.) You can also use it to break a Chinese text into a set of unique words for easy addition to your spreadsheet.
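A hedged sketch of that workflow: jieba handles the segmentation into unique words, while the pinyin shown here comes from the separate pypinyin package (the post credits Jieba for the pinyin, but the romanization step is usually done by a companion library); the sample text is made up:

```python
# Hedged sketch: segment a text into unique words with jieba,
# then romanize each word with pypinyin (a separate library).
import jieba
from pypinyin import lazy_pinyin

text = "我每天用电子表格记录新的中文词汇"  # "I log new Chinese vocabulary in a spreadsheet every day"
unique_words = sorted(set(jieba.cut(text)))
for word in unique_words:
    print(word, " ".join(lazy_pinyin(word)))
```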
- Where can I download a database of Chinese word classifications (noun, verb, etc)
- Learn vocabulary effortlessly while browsing the web [FR,EN,DE,PT,ES]
Since you're saying the main issue is segmentation, there are libraries to help with that. jieba is fantastic if you have a Python backend, nodejieba (50k downloads/week) if it's more on the JS side.
- I'm looking for a specific vocab list
https://github.com/fxsjy/jieba/ (has some good word frequency data)
What are some alternatives?
When comparing Stanza and Jieba you can also consider the following projects:
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
NLTK - NLTK Source
SnowNLP - Python library for processing Chinese text
pytext - A natural language modeling framework based on PyTorch
pkuseg-python - pkuseg多领域中文分词工具; The pkuseg toolkit for multi-domain Chinese word segmentation
stanfordnlp - [Deprecated] This library has been renamed to "Stanza". Latest development at: https://github.com/stanfordnlp/stanza
polyglot - Multilingual text (NLP) processing toolkit
TextBlob - Simple, Pythonic, text processing--Sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.
BERT-NER - Pytorch-Named-Entity-Recognition-with-BERT
textacy - NLP, before and after spaCy