| | donut | unilm |
|---|---|---|
| Mentions | 19 | 40 |
| Stars | 5,312 | 18,358 |
| Growth | 2.9% | 1.7% |
| Activity | 3.6 | 9.0 |
| Latest commit | 6 months ago | 8 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
donut
- Ask HN: Why are all OCR outputs so raw?
Maybe this is better? https://github.com/clovaai/donut
I'm not sure.
- Show HN: BetterOCR combines and corrects multiple OCR engines with an LLM
Yup! But I'm still exploring options (any recommendations would be welcome!). Here are some candidates I'm considering:
- https://github.com/mindee/doctr
- https://github.com/open-mmlab/mmocr
- https://github.com/PaddlePaddle/PaddleOCR (honestly I don't know Mandarin, so I'm a bit stuck)
- https://github.com/clovaai/donut - While it's primarily an "OCR-free document understanding transformer," I think it's worth experimenting with. I think I can sort this out by letting the LLM reason through it multiple times (although this will impact performance); see the usage sketch after this list.
- Yesterday I got a suggestion to consider https://github.com/kakaobrain/pororo - I don't think development is still active, but the results are pretty great on Korean text.
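For anyone wanting to try the Donut candidate mentioned above, a minimal sketch using the public receipt-parsing checkpoint via Hugging Face transformers looks roughly like this (the image path is a placeholder, and the checkpoint choice is just an example, not anything BetterOCR itself ships with):

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Public receipt-parsing checkpoint; swap in another fine-tuned Donut model
# for other document types.
ckpt = "naver-clova-ix/donut-base-finetuned-cord-v2"
processor = DonutProcessor.from_pretrained(ckpt)
model = VisionEncoderDecoderModel.from_pretrained(ckpt)

image = Image.open("receipt.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut is prompted with a task token instead of running OCR first.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.decoder.config.max_position_embeddings,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
        use_cache=True,
        bad_words_ids=[[processor.tokenizer.unk_token_id]],
        return_dict_in_generate=True,
    )

sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))  # structured fields instead of raw text
```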
- New to ML, looking for some GPU and learning material info
I'm also interested in experimenting with something like Donut (https://github.com/clovaai/donut), but I've never seen anything on what the VRAM expectations are for a model like this. Does anyone know if there are newer, better models for document parsing, or what the VRAM requirements for something like this tend to be?
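No authoritative number was given in the thread, but a back-of-the-envelope estimate is easy to sketch. The ~200M parameter figure for donut-base below is an assumption worth checking against the actual checkpoint:

```python
# Rough VRAM estimate: weights only, ignoring activations and framework
# overhead (inference overhead often adds another 1-2 GB on top of this).
def rough_vram_gb(num_params: float, bytes_per_param: int) -> float:
    return num_params * bytes_per_param / 1024**3

donut_base_params = 200e6  # assumption: donut-base is on the order of 200M parameters
print(f"fp32 weights: ~{rough_vram_gb(donut_base_params, 4):.1f} GB")
print(f"fp16 weights: ~{rough_vram_gb(donut_base_params, 2):.1f} GB")
```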
- [D] Is there a good AI model for image-to-text where the images are diagrams and screenshots of interfaces?
Here are a few useful resources you could start with: [Pix2Struct by Google Research](https://github.com/google-research/pix2struct) might be a valuable tool, although it will most likely need some fine-tuning to fit your specifics. You can also find some fine-tuned models on HuggingFace by searching 'pix2struct'. Another option worth considering is [Donut](https://github.com/clovaai/donut); like Pix2Struct, it will likely need fine-tuning to meet your requirements. Tesseract OCR is another alternative, particularly for handling text. It's primarily designed for pages of text (think books), but with some tweaking and specific flags it can process tables as well as text chunks in regions of a screenshot. A bit too much tweaking for my taste. I'm also in search of OCR tools for UI and chart screenshots, so please share if you find something else.
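As a concrete example of the "specific flags" mentioned above, here is a small pytesseract sketch that OCRs one cropped region of a screenshot with a page-segmentation mode suited to a single text block; the file name and crop coordinates are placeholders:

```python
import pytesseract
from PIL import Image

# Crop the region of interest first; the coordinates here are placeholders.
screenshot = Image.open("screenshot.png")
region = screenshot.crop((100, 200, 600, 400))

# --psm 6 tells Tesseract to treat the region as a single uniform block of text,
# which tends to behave better on UI chunks and simple tables than the default.
text = pytesseract.image_to_string(region, config="--psm 6")
print(text)
```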
- How to Automate Document Extraction from Insurance Documents
- FLaNK Stack Weekly 29 May 2023
- Donut: OCR-Free Document Understanding Transformer
unilm
- The Era of 1-Bit LLMs: Training Tips, Code and FAQ [pdf]
- The Era of 1-Bit LLMs: Training Tips, Code and FAQ
- The Era of 1-bit LLMs: ternary parameters for cost-effective computing
+1 on this; the real proof would have been testing both models side-by-side.
It seems that it may be published on GitHub [1] according to HuggingFace [2].
[1] https://github.com/microsoft/unilm/tree/master/bitnet
[2] https://huggingface.co/papers/2402.17764
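For readers wondering what "ternary parameters" means in practice, here is a small PyTorch sketch of the absmean-style weight quantization described in the BitNet b1.58 paper; it illustrates the idea and is not code from the microsoft/unilm repository:

```python
import torch

def absmean_ternarize(w: torch.Tensor, eps: float = 1e-5):
    """Scale by the mean absolute value, then round and clip to {-1, 0, +1},
    keeping the scale so quantized weights can be rescaled at matmul time."""
    scale = w.abs().mean().clamp(min=eps)
    w_ternary = (w / scale).round().clamp(-1, 1)
    return w_ternary, scale

w = torch.randn(4, 4)
w_ternary, scale = absmean_ternarize(w)
print(w_ternary)          # entries are in {-1.0, 0.0, 1.0}
print(w_ternary * scale)  # dequantized approximation of w
```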
- I'm an Old Fart and AI Makes Me Sad
- On building a semantic search engine
e5-mistral is essentially a distillation from GPT-4 to a smaller model. You can see here https://github.com/microsoft/unilm/blob/16da2f193b9c1dab0a69... that they actually have custom prompts for each dataset being tested.
The question would be: if you haven't seen the task before, what is a good prompt to prepend for your task?
IMO e5-mistral is overfit to MTEB
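To make the "prompt to prepend" point concrete: e5-mistral-7b-instruct expects an instruction prefix on the query side only, so a reasonable default for an unseen task is a generic retrieval instruction. A hedged sketch with sentence-transformers (the task wording and example texts are just illustrations):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")

# Queries get an instruction prefix; documents are embedded as-is.
task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [f"Instruct: {task}\nQuery: how do OCR-free document transformers work?"]
docs = ["Donut is an OCR-free document understanding transformer from Clova AI."]

q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(docs, normalize_embeddings=True)
print(q_emb @ d_emb.T)  # cosine similarity, since embeddings are normalized
```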
- Leveraging GPT-4 for PDF Data Extraction: A Comprehensive Guide
LayoutLM v1, v2, and v3 models [GitHub]; DocBERT [GitHub]
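A minimal sketch of running LayoutLMv3 for token classification through transformers, assuming Tesseract is installed so the processor can run its built-in OCR; the label count and image path are placeholders for whatever field schema and documents you fine-tune on:

```python
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# apply_ocr=True (the default) makes the processor run Tesseract to get words and boxes.
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base",
    num_labels=7,  # placeholder label count; set this to your own schema
)

image = Image.open("invoice.png").convert("RGB")  # placeholder document image
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
print(outputs.logits.shape)  # (batch, sequence_length, num_labels)
```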
- Microsoft Publishes LongNet: Scaling Transformers to 1,000,000,000 Tokens
The repository is available at https://github.com/microsoft/unilm/.
- Recommended open LLMs with image input modality?
It is missing Kosmos-2. I remember its image captioning was really good (the demo is currently down), and it's almost as fast as LLaVA and LaVIN.
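For reference, Kosmos-2 is available through transformers; a minimal captioning sketch, roughly following the model card and with the image path as a placeholder, might look like this:

```python
from PIL import Image
from transformers import AutoProcessor, Kosmos2ForConditionalGeneration

ckpt = "microsoft/kosmos-2-patch14-224"
processor = AutoProcessor.from_pretrained(ckpt)
model = Kosmos2ForConditionalGeneration.from_pretrained(ckpt)

image = Image.open("photo.jpg")  # placeholder image
prompt = "<grounding>An image of"
inputs = processor(text=prompt, images=image, return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=64)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Strip grounding tokens and pull out the referenced entities with bounding boxes.
caption, entities = processor.post_process_generation(generated_text)
print(caption)
print(entities)
```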
- LongNet: Scaling Transformers to 1,000,000,000 Tokens
Should be this: https://github.com/microsoft/unilm/
- [R] LongNet: Scaling Transformers to 1,000,000,000 Tokens
This is from Microsoft Research (Asia). https://aka.ms/GeneralAI
What are some alternatives?
PaddleOCR - Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra lightweight OCR system, support 80+ languages recognition, provide data annotation and synthesis tools, support training and deployment among server, mobile, embedded and IoT devices)
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
image-to-sound-python- - A Python project for converting an image into audible sound using OCR and speech synthesis
ERNIE - Official implementations for various pre-training models of ERNIE-family, covering topics of Language Understanding & Generation, Multimodal Understanding & Generation, and beyond.
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
involution - [CVPR 2021] Involution: Inverting the Inherence of Convolution for Visual Recognition, a brand new neural operator
CascadeTabNet - This repository contains the code and implementation details of the CascadeTabNet paper "CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents"
gensim - Topic Modelling for Humans
Multi-Type-TD-TSR - Extracting Tables from Document Images using a Multi-stage Pipeline for Table Detection and Table Structure Recognition
maelstrom - A workbench for writing toy implementations of distributed systems.
deepdoctection - A Repo For Document AI
rasa - 💬 Open source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants