|  | Jieba | OpenCC |
|---|---|---|
| Mentions | 6 | 7 |
| Stars | 32,442 | 8,045 |
| Growth | - | - |
| Activity | 0.0 | 4.0 |
| Latest commit | about 2 months ago | 7 days ago |
| Language | Python | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Jieba
-
[OC] How Many Chinese Characters You Need to Learn to Read Chinese!
jieba to do Chinese word segmentation
-
Sentence parser for Mandarin?
Jieba: Chinese text segmenter
-
How many in here use google sheets to keep track on their Chinese vocabulary? (2 pics) - More info in the comments
If you know some Python you can use a popular library called Jieba 结巴 to automatically get pinyin for every word. (Jieba has actually been ported to many languages.) You can also use it to break a Chinese text into a set of unique words for easy addition to your spreadsheet.
- Where can I download a database of Chinese word classifications (noun, verb, etc)
-
Learn vocabulary effortlessly while browsing the web [FR,EN,DE,PT,ES]
Since you're saying the main issue is segmentation, there are libraries to help with that: jieba is fantastic if you have a Python backend, or nodejieba (50k downloads/week) if it's more on the JS side.
-
I'm looking for a specific vocab list
https://github.com/fxsjy/jieba/ (has some good word frequency data)
OpenCC
-
Converting ebook files between simplified/traditional?
Try Open Chinese Convert (OpenCC); you can also install it via Homebrew. macOS can convert between the two character sets as well, although it's not as smart as OpenCC (for example, macOS will convert 「才」 to 「纔」, whereas OpenCC won't).
- Friends, is there a plugin that can block Simplified Chinese?
-
Sentence parser for Mandarin?
OpenCC: convert between traditional and simplified Chinese, see also http://opencc.byvoid.com/
- Shouldn't conversion between Simplified and Traditional Chinese be based on phrases?
-
Does there exist a full list of the exclusively traditional characters that are never used alongside the simplified character set?
There's also an online demo where you can test out the conversions. Note that some entries in those files may look blank or show up as empty rectangles, because they are very rare Unicode characters; you can view them with special fonts or at GlyphWiki (example).
-
Trad Chinese Subtitle Sources
Many popular TV shows and movies have both traditional and simplified subtitles, often with dual English/Chinese subtitles in both character sets. However, if traditional subtitles are not available, your only recourse is to convert a simplified subtitle to traditional. macOS has this functionality built in, or you can try OpenCC (see the GitHub repository; a web-based version is also available).
What are some alternatives?
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
netflix-to-srt - Rip, extract and convert subtitles to .srt closed captions from .xml/dfxp/ttml and .vtt/WebVTT (e.g. Netflix, YouTube)
SnowNLP - Python library for processing Chinese text
fastlangid - The only language identification package that supports Cantonese (zh-yue), Simplified Chinese (zh-hans), and Traditional Chinese (zh-hant)
NLTK - NLTK Source
Aegisub - Cross-platform advanced subtitle editor
pkuseg-python - The pkuseg toolkit for multi-domain Chinese word segmentation
awesome-malware-analysis - A curated list of awesome malware analysis tools and resources
Stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
annotator-js
TextBlob - Simple, Pythonic, text processing--Sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.
dragonmapper - Identification and conversion functions for Chinese text processing