unihan-etl vs uniunihan-db

| | unihan-etl | uniunihan-db |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 51 | 4 |
| Growth | - | - |
| Activity | 9.5 | 4.7 |
| Last commit | 5 days ago | 2 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects being tracked.
unihan-etl
Using Mypy in Production
I am moving all my open source projects to `mypy --strict`. Here are the diffs from adding basic and `--strict` mypy types:
libvcs: https://github.com/vcs-python/libvcs/pull/362/files, https://github.com/vcs-python/libvcs/pull/390/files
libtmux: https://github.com/tmux-python/libtmux/pull/382/files, https://github.com/tmux-python/libtmux/pull/383/files
unihan-etl: https://github.com/cihai/unihan-etl/pull/255/files, https://github.com/cihai/unihan-etl/pull/257/files
As for return on investment: I'm not sure yet. What I like about it:
- completions (through annotating)
- typings can be used downstream (since the above are all now typed python libraries)
- maintainability and bug finding. Easy to wire into CI and run locally.
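As a rough sketch (not taken from the linked PRs), this is the shape of code `--strict` asks for: every parameter and return value annotated, with no implicit `Any`:

```python
# Hypothetical example of a function annotated strictly enough to pass
# `mypy --strict`: explicit parameter, return, and local variable types.

def count_tokens(lines: list[str], sep: str = " ") -> dict[str, int]:
    """Count sep-separated tokens across all lines."""
    counts: dict[str, int] = {}
    for line in lines:
        for token in line.split(sep):
            if token:
                counts[token] = counts.get(token, 0) + 1
    return counts

print(count_tokens(["a b", "b c"]))  # {'a': 1, 'b': 2, 'c': 1}
```

The annotations are what make editor completions and downstream type checking possible once the library ships a `py.typed` marker.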
There's a mypy thread, "--strict is too strict to be useful" (https://github.com/python/mypy/issues/7767), but I'm not sure I walked away with that impression. If I have a function that could potentially return `None` (`Optional[str]` or `str | None`), it makes sense for the caller to handle that case. They could:
`assert response is not None`
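A minimal, self-contained sketch of that pattern (the `find_role` function and its data are hypothetical, not from the projects above) — mypy treats the `assert` as a type narrowing:

```python
from typing import Optional

def find_role(name: str) -> Optional[str]:
    """Return a user's role, or None if the user is unknown."""
    roles = {"alice": "admin"}
    return roles.get(name)

role = find_role("alice")
# Without narrowing, calling `role.upper()` is rejected under --strict:
#   error: Item "None" of "Optional[str]" has no attribute "upper"
assert role is not None  # mypy narrows Optional[str] to str from here on
print(role.upper())  # ADMIN
```

An `if role is None: ...` early return narrows the type the same way, without the runtime `AssertionError`.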
uniunihan-db
Office of the President of Mongolia (top to bottom text on the web)
I loved learning to read Japanese through the second volume of Heisig's _Remembering the Kanji_. Volume 1, which teaches only meanings, is a slog, but volume 2, which teaches the Sino-Japanese readings, is a beautiful example of organizing material to minimize entropy and deliver the most memorization benefit as early as possible. Unfortunately, he never put together a volume 2 for a Chinese language. I haven't worked on it in a while, but I have a project where I attempt to re-create the book for Japanese as well as Mandarin, Korean, and Vietnamese: https://nateglenn.com/uniunihan-db/ (repo: https://github.com/garfieldnate/uniunihan-db).
The "pure groups" are the ones where the presence of a specific radical guarantees a specific pronunciation (within the list of character/pronunciation pairs you're trying to learn). Of the 4800 characters I used for the volume, only 290 are in the chapter on pure groups. The rest are either in semi-regular groups with varying numbers of exceptions, or in completely irregular groups with no discernible patterns.
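The purity check can be sketched in a few lines. The data below is a toy example with hypothetical mappings (the real project derives readings and components from Unihan data): a component forms a pure group when every character containing it shares exactly one reading.

```python
from collections import defaultdict

# Toy data: Sino-Japanese (on'yomi) readings and phonetic components.
# In the real project these come from the Unihan database.
readings = {
    "銅": "ドウ", "洞": "ドウ", "胴": "ドウ",    # all contain 同
    "清": "セイ", "請": "セイ", "情": "ジョウ",  # contain 青; 情 is an exception
}
components = {"銅": "同", "洞": "同", "胴": "同",
              "清": "青", "請": "青", "情": "青"}

# Collect the set of readings seen under each phonetic component.
groups: defaultdict[str, set[str]] = defaultdict(set)
for char, reading in readings.items():
    groups[components[char]].add(reading)

# A pure group has exactly one reading across all its characters.
pure = {comp for comp, rs in groups.items() if len(rs) == 1}
print(pure)  # {'同'}
```

With real data, 青 would land in a semi-regular group (mostly セイ with exceptions like 情), which is why the pure chapter stays so small.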
The characters were created continuously over a period starting thousands of years ago, and the phonetic parts were sometimes exact matches and sometimes just clues: similar sounds or rhymes to give the reader a hint. Chinese pronunciation has changed beyond recognition since ancient times, so it makes perfect sense that the pronunciations wouldn't be regular anymore.
Mainland China uses a "simplified" character set, which did not affect literacy but, in my opinion, is a bit more difficult to read: they reduced the number of strokes so that more characters look samey, and they merged many (Mandarin) homonyms (https://en.wikipedia.org/wiki/Simplified_Chinese_characters#...), removing the meaning portion of characters that would have distinguished them. The simplification did not apply to all characters, so to achieve a high level of literacy you need to know traditional forms anyway.
It would be interesting to see someone try to actually remodel hanzi from scratch for a specific dialect of Chinese, using 100% regular phonetic components and no variants; multiple pronunciations of a character in the current system would be required to be written differently. An interesting example of this would be certain Korean gukja, where they've combined a Chinese character with a phonetic hangeul (example: https://en.wiktionary.org/wiki/%E3%AB%87). This would be a truly simplified Chinese character set... but all of the culture's history that gets built into spelling over time would be completely lost, which is why I always prefer conservative spelling systems.
What are some alternatives?
libtmux - ⚙️ Python API / wrapper for tmux
kengdic - Joe Speigle's Korean/English dictionary database
flakeheaven - flakeheaven is a python linter built around flake8 to enable inheritable and complex toml configuration.
pykakasi - Lightweight converter from Japanese Kana-kanji sentences into Kana-Roman.
pyright - Static Type Checker for Python
buondua-downloader - :ribbon: NSFW. Album downloader for https://buondua.com.
mypy - Optional static typing for Python
python-jamo - Hangul syllable decomposition and synthesis using jamo.
ark-pixel-font - Open source Pan-CJK pixel font / 开源的泛中日韩像素字体
pydantic - Data validation using Python type hints