| | scenic | manga-ocr |
|---|---|---|
| Mentions | 5 | 31 |
| Stars | 3,010 | 1,382 |
| Growth | 3.9% | - |
| Activity | 8.6 | 5.8 |
| Last commit | 7 days ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
scenic
-
Vid2Seq: A pretrained visual language model for describing multi-event videos
Anyone figured out how to run this against a video?
https://github.com/google-research/scenic/tree/main/scenic/p... has an example showing how to "train Vid2Seq on YouCook2" using "python -m scenic.projects.vid2seq.main", but I couldn't see the recipe for using it against a video to return a description.
-
[D] SE for machine learning research
There are a few libraries/frameworks you can use that let you reuse the same code for datasets, logging, the training loop, etc., e.g. Lightning or Scenic. Maybe you can use one of these, or at least get some inspiration for your own code.
-
Google Research Proposes an Artificial Intelligence (AI) Model to Utilize Vision Transformers on Videos
Quick Read: https://www.marktechpost.com/2022/11/25/google-research-proposes-an-artificial-intelligence-ai-model-to-utilize-vision-transformers-on-videos/ Paper: https://openaccess.thecvf.com/content/ICCV2021/papers/Arnab_ViViT_A_Video_Vision_Transformer_ICCV_2021_paper.pdf GitHub link: https://github.com/google-research/scenic/tree/main/scenic/projects/vivit
-
Google Research Introduces ‘SCENIC’: An Open-Source JAX Library For Computer Vision Research
GitHub: https://github.com/google-research/scenic
-
[R] Google Open-Sources SCENIC: A JAX Library for Rapid Computer Vision Model Prototyping and Cutting-Edge Research
The SCENIC code, etc., has been open-sourced on the project’s GitHub. The paper SCENIC: A JAX Library for Computer Vision Research and Beyond is on arXiv.
manga-ocr
-
Any way to extract characters from images, or are there any apps/ tools that allow you to handwrite the characters?
I use manga-ocr on PC
-
Do you guys know where I can read the translated version of Isekai Joshi Kangoku?
-
How do you read Japanese?
I usually read manga, and use yomichan and manga_ocr. Initially I'll try to read the sentence by itself to see if I understand it. If I don't, I'll take a screenshot of the sentence so manga_ocr parses the text, and then I'll paste it somewhere in my browser and use yomichan to check unknown vocabulary. I've already done Genki I and II so I almost never have to look up grammar. When I do find grammar I don't understand, a quick Google search, or just referring back to Tae Kim or Genki, will do. I never translate the full sentence, I just check the meaning of unknown words and try to understand the sentence in Japanese.
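The screenshot-to-lookup loop described above can be sketched in a few lines with the manga-ocr package (`pip install manga-ocr`). This is a minimal sketch, not the commenter's exact setup: the file name `bubble.png` and the jisho.org lookup URL are illustrative assumptions; only the `MangaOcr` class itself comes from the library.

```python
# Sketch: OCR a screenshot of a speech bubble, then build a dictionary
# lookup URL for checking unknown vocabulary in the browser.
from pathlib import Path
from urllib.parse import quote

def lookup_url(text: str) -> str:
    """Build a jisho.org search URL (illustrative choice) for parsed text."""
    return "https://jisho.org/search/" + quote(text)

if __name__ == "__main__":
    # "bubble.png" is a hypothetical screenshot of one sentence.
    if Path("bubble.png").exists():
        from manga_ocr import MangaOcr  # heavy import: loads a vision model
        mocr = MangaOcr()               # downloads weights on first run
        text = mocr("bubble.png")       # accepts a file path or a PIL image
        print(text)
        print(lookup_url(text))
```

In practice people paste the OCR output into a browser tab and let Yomichan do the lookup, as the comment describes; the URL helper is just one way to automate that step.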
-
Looking for a program for quick word extraction WITHOUT leaving the screen?
If on browser, use Yomichan. Otherwise, set up a screenshot tool like ShareX with https://github.com/kha-white/manga-ocr
-
easy manga that writes left to right (horizontal) and uses kanji with furigana
manga-ocr, which you will probably want to use for convenience anyway, converts the text to horizontal, and it will automatically show up this way in the Yomichan clipboard monitor.
-
What is the most accurate Windows OCR
MangaOCR on GitHub is really good. There are two main GUIs for it that I know of. The first is a whole GUI reader called Poricom. The second lets you OCR anything on the screen by pressing Alt+Q and is called Cloe. Here are the links: https://github.com/kha-white/manga-ocr https://github.com/blueaxis/Poricom https://github.com/blueaxis/Cloe
-
Reading Manga with Hiragana
I'm going to go against what everybody is saying and say not to rely on furigana. There was another comment saying to set up mokuro to use with Yomichan. Imo, that would be the more ideal setup: you not only get access to a lot more manga (that don't use furigana), but being overly reliant on furigana may prevent you from building up your intuition for figuring out which kanji reading to use in which moment. Part of building comprehension/intuition is trying to figure out which kanji reading/word meaning you should use in each differing context. Furthermore, if you can't set up mokuro, look into setting up an OCR and a texthooker. https://github.com/kha-white/manga-ocr The one above would be the one I would recommend the most.
-
How do you (personally) read manga at i+1?
They are probably referring to manga-ocr. Great thing: it reads an image from the clipboard and puts the text from the image back.
-
I have a question
-
Need help translating a picture.
If you have more of these, manga-ocr can read it without any issues.
What are some alternatives?
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch
mokuro - Read Japanese manga inside browser with selectable text.
jax-resnet - Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax).
Poricom - Optical character recognition in manga images. Manga OCR desktop application
EasyCV - An all-in-one toolkit for computer vision
EasyOCR - Ready-to-use OCR with 80+ supported languages and all popular writing scripts, including Latin, Chinese, Arabic, Devanagari, Cyrillic, etc.
elegy - A High Level API for Deep Learning in JAX
Kaku - 画 - Japanese OCR Dictionary
asreview - Active learning for systematic reviews
jidoujisho - A full-featured immersion language learning suite for mobile.
LFattNet - Attention-based View Selection Networks for Light-field Disparity Estimation
mahjong - Implementation of riichi mahjong related stuff (hand cost, shanten, agari end, etc.)