LLaVA
MiniGPT-4
| | LLaVA | MiniGPT-4 |
|---|---|---|
| Mentions | 20 | 37 |
| Stars | 16,101 | 24,859 |
| Growth | - | 1.2% |
| Activity | 9.4 | 9.4 |
| Last commit | 6 days ago | 7 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLaVA
-
Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real
Update: For anyone else facing the commercial use question on LLaVA - it is licensed under Apache 2.0. Can be used commercially with attribution: https://github.com/haotian-liu/LLaVA/blob/main/LICENSE
-
Image-to-Caption Generator
https://github.com/haotian-liu/LLaVA (fairly established and well supported)
-
Llamafile lets you distribute and run LLMs with a single file
That's not a llamafile thing, that's a llava-v1.5-7b-q4 thing - you're running the LLaVA 1.5 model at a 7 billion parameter size further quantized to 4 bits (the q4).
GPT-4 Vision is running a MUCH larger model than the tiny 7B 4GB LLaVA file in this example.
LLaVA has a 13B model available which might do better, though there's no chance it will be anywhere near as good as GPT-4 Vision. https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZO...
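As a rough sanity check on those numbers, some back-of-the-envelope arithmetic (not tied to any particular quantization format, which adds some overhead) shows why a 7B model at 4 bits lands near the 4 GB mark:

```python
def approx_weight_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough size of the weights alone, ignoring file metadata and runtime memory."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(approx_weight_size_gb(7, 16))  # ~14 GB in fp16, too big for many consumer machines
print(approx_weight_size_gb(7, 4))   # ~3.5 GB, which is why the q4 file is roughly 4 GB
print(approx_weight_size_gb(13, 4))  # ~6.5 GB for the 13B model mentioned above
```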
- FLaNK Stack Weekly for 27 November 2023
-
Using GPT-4 Vision with Vimium to browse the web
There are open source models such as https://github.com/THUDM/CogVLM and https://github.com/haotian-liu/LLaVA.
-
Is supervised learning dead for computer vision?
Hey Everyone,
I’ve been diving deep into the world of computer vision recently, and I’ve gotta say, things are getting pretty exciting! I stumbled upon this vision-language model called LLaVA (https://github.com/haotian-liu/LLaVA), and it’s been nothing short of impressive.
In the past, if you wanted to teach a model to recognize the color of your car in an image, you’d have to go through the tedious process of training it from scratch. But now, with models like LLaVA, all you need to do is prompt it with a question like “What’s the color of the car?” and bam – you get your answer, zero-shot style.
It’s kind of like what we’ve seen in the NLP world. People aren’t training language models from the ground up anymore; they’re taking pre-trained models and fine-tuning them for their specific needs. And it looks like we’re headed in the same direction with computer vision.
Imagine being able to extract insights from images with just a simple text prompt. Need to step it up a notch? A bit of fine-tuning can do wonders, and from my experiments, it can even outperform models trained from scratch. It’s like getting the best of both worlds!
But here’s the real kicker: these foundational models, thanks to their extensive training on massive datasets, have an incredible grasp of image representations. This means you can fine-tune them with just a handful of examples, saving you the trouble of collecting thousands of images. Indeed, they can even learn from a single example (https://www.fast.ai/posts/2023-09-04-learning-jumps).
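To make the "just prompt it" point concrete, here is a minimal sketch of zero-shot visual question answering, assuming the Hugging Face-converted llava-hf/llava-1.5-7b-hf checkpoint and the transformers LLaVA classes (car.jpg is just a placeholder path):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumption: the HF-converted LLaVA-1.5 weights
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("car.jpg")  # placeholder image path
# LLaVA-1.5 chat format: the <image> token marks where the vision features are inserted.
prompt = "USER: <image>\nWhat's the color of the car? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```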
-
Adept Open Sources 8B Multimodal Model
Fuyu is not open source. At best, it is source-available. It's also not the only one.
A few other multimodal models that you can run locally include IDEFICS[0][1], LLaVA[2], and CogVLM[3]. I believe all of these have better licenses than Fuyu.
[0]: https://huggingface.co/blog/idefics
[1]: https://huggingface.co/HuggingFaceM4/idefics-80b-instruct
[2]: https://github.com/haotian-liu/LLaVA
[3]: https://github.com/THUDM/CogVLM
-
AI — weekly megathread!
Researchers released LLaVA-1.5. LLaVA (Large Language and Vision Assistant) is an open-source large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. LLaVA-1.5 achieved SoTA on 11 benchmarks with only simple modifications to the original LLaVA, and completed training in about one day on a single node with 8 A100 GPUs [Demo | Paper | GitHub].
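The "vision encoder + Vicuna" recipe boils down to a small projection between feature spaces. A rough PyTorch-style sketch of the idea (dimensions assume a CLIP ViT-L/14 encoder and a Vicuna-7B backbone; this is an illustration, not the actual LLaVA code):

```python
import torch.nn as nn

class LlavaStyleConnector(nn.Module):
    """Illustrative LLaVA-1.5-style connector: a small MLP that maps frozen
    vision-encoder features into the language model's embedding space."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # LLaVA-1.5 swaps the original single linear projection for a 2-layer MLP.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_features):
        # image_features: (batch, num_patches, vision_dim) from e.g. CLIP ViT-L/14
        # returns:        (batch, num_patches, llm_dim), prepended to the text tokens
        return self.proj(image_features)
```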
- LLaVA: Visual Instruction Tuning: Large Language-and-Vision Assistant
-
LLaVA gguf/ggml version
Hi all, I’m wondering if there is a version of LLaVA https://github.com/haotian-liu/LLaVA that works with gguf and ggml models? I know there is one for MiniGPT-4, but it just doesn’t seem as reliable as LLaVA, and by the looks of it you need at least 24 GB of VRAM to run LLaVA locally. The 4-bit version still requires 12 GB of VRAM.
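There is a GGUF route via llama.cpp's LLaVA support. Here's a sketch using the llama-cpp-python bindings, assuming you have the quantized LLaVA GGUF weights plus the matching mmproj (CLIP projector) file; the file names below are placeholders:

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Placeholder paths: the quantized LLaVA weights and the mmproj/CLIP projector file.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="llava-v1.5-7b.Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,  # image tokens eat context, so keep this reasonably large
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/image.jpg"}},
            {"type": "text", "text": "Describe this image."},
        ]},
    ]
)
print(response["choices"][0]["message"]["content"])
```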
MiniGPT-4
-
"Building Machines That Learn and Think Like People", 7 Years Later
I just think the tech has been out for so long it's not as big of a deal. MiniGPT-4 has been out for 6 months! Of course the descriptions aren't exactly GPT-4 grade, but with Mistral 7B being used as the language model instead of LLaMA 7B, the reasoning ability will improve noticeably.
[1] https://github.com/Vision-CAIR/MiniGPT-4
- MiniGPT-4 Inference on CPU
-
Multimodal LLM for infographics images
Aren't there only two open multimodal LLMs, LLaVA and MiniGPT-4?
-
Ai trained on photos
For LLM visual instruction, you can use LLaVA, LaVIN, or MiniGPT-4.
- CLIP and DeepDanbooru Alternatives For Prompt Generation [Relevant Self-Promotion]
-
Looking for a pre trained food recognition model
Please read the rules before posting. If you want a model for visual instruction, use LLaVA, LaVIN, or MiniGPT-4.
- Minigpt-4 (Vicuna 13B + images)
-
Upload a photo of your meal and get roasted by ChatGPT
So we use MiniGPT-4 for image parsing, and yep it does return a pretty detailed (albeit not always accurate) description of the photo. You can actually play around with it on Huggingface here.
We use MiniGPT-4 first to interpret the image and then pass the results onto GPT-4. Hopefully, once GPT-4 makes its multi-modal functionality available, we can do it all in one request.
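A rough sketch of that two-step pipeline: describe_image is a hypothetical stand-in for however the MiniGPT-4 deployment gets called, and the second step uses the standard OpenAI chat API.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_image(image_path: str) -> str:
    """Hypothetical placeholder for the MiniGPT-4 step that turns a photo into text."""
    raise NotImplementedError("call your MiniGPT-4 deployment / Hugging Face Space here")

def roast_meal(image_path: str) -> str:
    # Step 1: get a textual description of the photo from MiniGPT-4.
    description = describe_image(image_path)
    # Step 2: pass the text-only description on to GPT-4 for the actual roast.
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You playfully roast meals based on a description."},
            {"role": "user", "content": f"Here is what's in the photo: {description}"},
        ],
    )
    return completion.choices[0].message.content
```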
-
Give some love to multimodal models trained on censored llama-based models
But I would like to bring up that there are some multimodal models (LLaVA, MiniGPT-4) that are built on censored LLaMA-based models like Vicuna. I tried several multimodal models, including LLaVA, MiniGPT-4, and BLIP-2. LLaVA has very good captioning and question-answering abilities and is also much faster than the others (basically real time), though it has some hallucination issues.
What are some alternatives?
CogVLM - a state-of-the-art open visual language model | multimodal pre-trained model
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
AutoGPT - AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
llama.cpp - LLM inference in C/C++
BooruDatasetTagManager
image2dsl - This repository contains the implementation of an Image to DSL (Domain Specific Language) model. The model uses a pre-trained Vision Transformer (ViT) as an encoder to extract image features and a custom Transformer Decoder to generate DSL code from the extracted features.
bark - 🔊 Text-Prompted Generative Audio Model
llamafile - Distribute and run LLMs with a single file.
mini-agi - MiniAGI is a simple general-purpose autonomous agent based on the OpenAI API.