| | langid.py | Jieba |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 2,242 | 32,442 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last Commit | over 4 years ago | about 2 months ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
langid.py

- Curator v0.1.0: Auto-organize large movie collections (AI language detection + sync)

  Right now it's in early stages: it can detect languages from audio and subtitles (Whisper + LangID) with good results so far; of the 52 movies tried here, it failed on just one, which was silent. I'm currently working on synchronization: hopefully subtitle timestamps and audio sound effects will suffice for cross-correlation. After that, I'll work on the TUI (and maybe add a proper GUI too) to improve UX.

- Announcing Lingua 1.0.0: The most accurate natural language detection library for Python, suitable for long and short text alike

  Python is widely used in natural language processing, so there are several comprehensive open-source libraries for this task, such as Google's CLD 2 and CLD 3, langid, and langdetect. Unfortunately, except for the last one, they have two major drawbacks:
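Both posts above treat langid.py as a building block, so a quick look at its API may help: it is essentially a single classify() call. Below is a minimal sketch in the spirit of the Curator post's subtitle-detection use case; the sample lines and the restricted language set are invented for illustration.

```python
# Minimal sketch: identify the language of short, subtitle-like text
# with langid.py. The subtitle lines are invented sample data.
import langid

# Optionally restrict the candidate languages; langid.py supports
# narrowing its pre-trained 97-language model to a subset.
langid.set_languages(['en', 'fr', 'de', 'es'])

subtitle_lines = [
    "I'll be back before midnight.",
    "Je reviendrai avant minuit.",
]

for line in subtitle_lines:
    lang, score = langid.classify(line)  # (ISO 639-1 code, score)
    print(lang, round(score, 2), line)
```

By default the score is an unnormalized log-probability; langid.py's LanguageIdentifier class can be constructed with norm_probs=True if you want values that read as probabilities instead.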
Jieba

- [OC] How Many Chinese Characters You Need to Learn to Read Chinese!

  jieba to do Chinese word segmentation

- Sentence parser for Mandarin?

  Jieba: Chinese text segmenter

- How many in here use Google Sheets to keep track of their Chinese vocabulary? (2 pics) - More info in the comments

  If you know some Python, you can use a popular library called Jieba (结巴) to automatically get pinyin for every word. (Jieba has actually been ported to many languages.) You can also use it to break a Chinese text into a set of unique words for easy addition to your spreadsheet.

- Where can I download a database of Chinese word classifications (noun, verb, etc.)?

- Learn vocabulary effortlessly while browsing the web [FR, EN, DE, PT, ES]

  Since you're saying the main issue is segmentation, there are libraries to help out with that. jieba is fantastic if you have a Python backend, nodejieba (50k downloads/week) if it's more JS-side.

- I'm looking for a specific vocab list

  https://github.com/fxsjy/jieba/ (has some good word frequency data)
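Several of the comments above describe the same workflow: segment a text with jieba, collect the unique words, and attach pinyin. Here is a minimal sketch of that pipeline; note that jieba itself only segments, so the romanization assumes the separate pypinyin package, and the sample sentence is invented.

```python
# Minimal sketch: segment Chinese text with jieba, deduplicate the
# words, and romanize them. jieba only segments; pinyin comes from
# the separate pypinyin package (pip install jieba pypinyin).
import jieba
from pypinyin import lazy_pinyin

text = "我昨天在图书馆看了一本关于中国历史的书"  # invented sample sentence

# jieba.cut() yields the segmented words lazily (default "accurate mode")
words = list(jieba.cut(text))

# Unique words with pinyin, ready to paste into a vocabulary spreadsheet
for word in sorted(set(words)):
    print(word, " ".join(lazy_pinyin(word)))
```

If accurate mode's defaults don't fit, jieba also offers jieba.cut(text, cut_all=True) for full mode and jieba.cut_for_search() for finer-grained splits.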
What are some alternatives?
polyglot - Multilingual text (NLP) processing toolkit
spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python
TextBlob - Simple, Pythonic text processing: sentiment analysis, part-of-speech tagging, noun phrase extraction, translation, and more.
SnowNLP - Python library for processing Chinese text
py3langid - Faster, modernized fork of the language identification tool langid.py
NLTK - NLTK Source
pkuseg-python - The pkuseg toolkit for multi-domain Chinese word segmentation
Stanza - Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages
stanfordnlp - [Deprecated] This library has been renamed to "Stanza". Latest development at: https://github.com/stanfordnlp/stanza