| | cjkvi-ids | Jieba |
|---|---|---|
| Mentions | 4 | 7 |
| Stars | 399 | 33,159 |
| Growth | 0.0% | - |
| Activity | 0.0 | 0.0 |
| Last commit | over 1 year ago | about 2 months ago |
| Language | - | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
cjkvi-ids
-
Where can I find a list of all Chinese characters grouped by graphic origin? (pictogram, ideogrammic compound, phono-semantic compound, etc.)
Here's a comprehensive list from the CJKVi Ideographic Database. If you read Japanese, you can read how the list was designed here. They appear to have made liushu 六書 judgements mainly based on Shuowen jiezi 說文解字 entries.
-
I'm making the kanji learning app that I wish existed.
CJKV-IDS: https://github.com/cjkvi/cjkvi-ids
A list of the CDP references used within CJKV-IDS: http://en.glyphwiki.org/wiki/Group:CDP%e5%a4%96%e5%ad%97-ALL
-
Kanji Club: Search Kanji by Parts with Instant Feedback
> What character decomposition database is this using?
From the about page, it seems that it's using Wikimedia data: https://commons.wikimedia.org/wiki/Commons:Chinese_character...
To add to your comment, there's also RADKFILE/KRADFILE, which is used by a lot of Japanese dictionaries (including jisho.org), and also IDS (Ideographic Description Sequence) data: https://github.com/cjkvi/cjkvi-ids. The latter, I believe, is not meant for general lookup, but it can nonetheless be quite informative, for example in identifying semantic/phonetic components.
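As a rough illustration of how the cjkvi-ids data can be consumed, here is a minimal Python sketch. It assumes the tab-separated layout of the project's ids.txt (codepoint, character, then one or more IDS fields, some carrying regional source tags like [GTJKV]); the exact format is an assumption to verify against the file itself.

```python
# Minimal sketch: index IDS decompositions by character so you can look up
# a character's components. Assumes ids.txt lines look like
# "U+4E2D<TAB>中<TAB>⿻口丨" (format assumed from cjkvi-ids; verify locally).
import re

def load_ids(path="ids.txt"):
    table = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.startswith(";;"):  # header/comment lines (assumed ';;' prefix)
                continue
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 3:
                continue  # skip anything not matching the assumed layout
            codepoint, char, *ids_fields = fields
            # Strip regional source tags such as [GTJKV], if present
            table[char] = [re.sub(r"\[[A-Z]+\]", "", s) for s in ids_fields]
    return table

ids = load_ids()
print(ids.get("中"))  # e.g. ['⿻口丨']
```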
- I'm looking for a specific vocab list
Jieba
-
PostgreSQL Full-Text Search in a Nutshell
Let's continue with jieba as an example; its segmentation logic is what powers pg_jieba. jieba also ships as a Python package, so let's use Python for the example.
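A minimal sketch of what that looks like, assuming the `jieba` package is installed (`pip install jieba`):

```python
import jieba

# Default (accurate) mode picks the most probable segmentation
print(jieba.lcut("我来到北京清华大学"))
# ['我', '来到', '北京', '清华大学']

# Full mode lists every word the dictionary can find in the text
print(jieba.lcut("我来到北京清华大学", cut_all=True))
# ['我', '来到', '北京', '清华', '清华大学', '华大', '大学']
```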
-
[OC] How Many Chinese Characters You Need to Learn to Read Chinese!
jieba to do Chinese word segmentation
-
Sentence parser for Mandarin?
Jieba: Chinese text segmenter
-
How many in here use Google Sheets to keep track of their Chinese vocabulary? (2 pics) - More info in the comments
If you know some Python, you can use a popular library called Jieba 结巴 to automatically get pinyin for every word. (Jieba has actually been ported to many languages.) You can also use it to break a Chinese text into a set of unique words for easy addition to your spreadsheet.
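A minimal sketch of that workflow, with one caveat: jieba itself only segments, so the pinyin here comes from the separate pypinyin library (`pip install jieba pypinyin`), which is an assumption about tooling rather than something the comment names.

```python
import jieba
from pypinyin import pinyin  # pypinyin supplies the readings; jieba only segments

text = "我喜欢在网上学习中文词汇"

# Segment, then keep one copy of each word for the spreadsheet
unique_words = sorted(set(jieba.lcut(text)))

for word in unique_words:
    # pinyin() returns one [syllable] list per character, tone-marked by default
    reading = " ".join(s[0] for s in pinyin(word))
    print(f"{word}\t{reading}")
```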
- Where can I download a database of Chinese word classifications (noun, verb, etc.)
-
Learn vocabulary effortlessly while browsing the web [FR,EN,DE,PT,ES]
Since you're saying the main issue is segmentation, there are libraries to help with that. jieba is fantastic if you have a Python backend; nodejieba (50k downloads/week) works if your stack is more JS-side.
-
I'm looking for a specific vocab list
https://github.com/fxsjy/jieba/ (has some good word frequency data)
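For a sense of that frequency data: jieba ships a dictionary file (dict.txt) whose lines are, as I understand the format, space-separated `word frequency part-of-speech` triples. A minimal sketch for pulling the most frequent entries, with the path and format treated as assumptions to check against the file:

```python
# Minimal sketch: list the highest-frequency words from jieba's bundled
# dictionary. Assumes each line of dict.txt reads "word frequency pos-tag"
# (space-separated); adjust the path to wherever jieba is installed.
entries = []
with open("dict.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip anything that doesn't match the assumed layout
        word, freq = parts[0], int(parts[1])
        entries.append((freq, word))

for freq, word in sorted(entries, reverse=True)[:10]:
    print(word, freq)
```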
What are some alternatives?
topokanji - Topologically ordered lists of kanji for effective learning
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
kanjium - The ultimate kanji resource
SnowNLP - Python library for processing Chinese text
kanji-data - A JSON kanji dataset with updated JLPT levels and WaniKani information
NLTK - NLTK Source
kanji-graph - Visualize connections between Japanese words
pkuseg-python - pkuseg多领域中文分词工具; The pkuseg toolkit for multi-domain Chinese word segmentation
Stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
TextBlob - Simple, Pythonic, text processing--Sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.
textacy - NLP, before and after spaCy
IEPY - Information Extraction in Python