LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. (by haotian-liu)

LLaVA Alternatives

Similar projects and alternatives to LLaVA

NOTE: The number of mentions on this list counts appearances in common posts plus user-suggested alternatives, so a higher count suggests a more similar or more popular LLaVA alternative.

LLaVA reviews and mentions

Posts with mentions or reviews of LLaVA. We have used some of these posts to build our list of alternatives and similar projects. The most recent mention was on 2023-12-10.
  • Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real
    4 projects | news.ycombinator.com | 10 Dec 2023
    Thank you for creating this demo. This was the point I was trying to make when the Gemini launch happened. All that hoopla for no reason.

    Yes - GPT-4V is a beast. I'd even encourage anyone who cares about vision or multi-modality to give LLaVA a serious shot (https://github.com/haotian-liu/LLaVA). I have been playing with the 7B q5_k variant for the last couple of days and I am seriously impressed with it. Impressed enough to build a demo app/proof-of-concept for my employer (I will have to check the license first, or I might only use it for an internal demo to drive the point). A quick local-inference sketch follows this item.

    4 projects | news.ycombinator.com | 10 Dec 2023
    Update: For anyone else facing the commercial use question on LLaVA - it is licensed under Apache 2.0. Can be used commercially with attribution: https://github.com/haotian-liu/LLaVA/blob/main/LICENSE
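
    The q5_k variant mentioned above is a llama.cpp-style GGUF quantization, and one common way to run such a build locally is through llama-cpp-python. The sketch below assumes that route; the model and mmproj file names are placeholders for whichever LLaVA GGUF checkpoint and matching CLIP projector file you download.

    ```python
    # Minimal sketch: running a quantized LLaVA locally via llama-cpp-python.
    # File names below are placeholders, not files shipped with the repo.
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    # The mmproj file carries the CLIP vision encoder + projector weights.
    chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")

    llm = Llama(
        model_path="llava-v1.5-7b.Q5_K_M.gguf",  # placeholder checkpoint name
        chat_handler=chat_handler,
        n_ctx=2048,  # leave room for the image tokens plus the reply
    )

    response = llm.create_chat_completion(
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///tmp/photo.jpg"}},
                {"type": "text", "text": "Describe this image in one sentence."},
            ],
        }]
    )
    print(response["choices"][0]["message"]["content"])
    ```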
  • Image-to-Caption Generator
    3 projects | /r/computervision | 7 Dec 2023
    https://github.com/haotian-liu/LLaVA (fairly established and well supported)
  • Llamafile lets you distribute and run LLMs with a single file
    12 projects | news.ycombinator.com | 29 Nov 2023
    That's not a llamafile thing, that's a llava-v1.5-7b-q4 thing - you're running the LLaVA 1.5 model at a 7 billion parameter size further quantized to 4 bits (the q4).

    GPT-4 Vision is running a MUCH larger model than the tiny 7B 4GB LLaVA file in this example.

    LLaVA has a 13B model available which might do better, though there's no chance it will be anywhere near as good as GPT-4 Vision (rough size arithmetic below). https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZO...
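
    The sizes quoted above follow from simple arithmetic: effective bits per weight for llama.cpp quant formats land near 4.5 for q4 variants and around 5.5 for q5_k once quantization scales are included (approximate averages, not exact format specs).

    ```python
    # Back-of-envelope GGUF file sizes for quantized LLaVA checkpoints.
    # Bits-per-weight figures are approximate averages, not exact specs.
    def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
        return n_params * bits_per_weight / 8 / 1e9  # bytes -> decimal GB

    for n_params, label in [(7e9, "7B"), (13e9, "13B")]:
        for bits, fmt in [(4.5, "~q4"), (5.5, "~q5_k")]:
            print(f"{label} {fmt}: ~{gguf_size_gb(n_params, bits):.1f} GB")
    # 7B ~q4 comes out near 3.9 GB, matching the "4GB file" above.
    ```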

  • FLaNK Stack Weekly for 27 November 2023
    28 projects | dev.to | 27 Nov 2023
  • Using GPT-4 Vision with Vimium to browse the web
    9 projects | news.ycombinator.com | 8 Nov 2023
    There are open source models such as https://github.com/THUDM/CogVLM and https://github.com/haotian-liu/LLaVA.
  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    Hey Everyone,

    I’ve been diving deep into the world of computer vision recently, and I’ve gotta say, things are getting pretty exciting! I stumbled upon this vision-language model called LLaVA (https://github.com/haotian-liu/LLaVA), and it’s been nothing short of impressive.

    In the past, if you wanted to teach a model to recognize the color of your car in an image, you'd have to go through the tedious process of training it from scratch. But now, with models like LLaVA, all you need to do is prompt it with a question like "What's the color of the car?" and bam - you get your answer, zero-shot style (see the sketch after this post).

    It’s kind of like what we’ve seen in the NLP world. People aren’t training language models from the ground up anymore; they’re taking pre-trained models and fine-tuning them for their specific needs. And it looks like we’re headed in the same direction with computer vision.

    Imagine being able to extract insights from images with just a simple text prompt. Need to step it up a notch? A bit of fine-tuning can do wonders, and from my experiments, it can even outperform models trained from scratch. It’s like getting the best of both worlds!

    But here’s the real kicker: these foundational models, thanks to their extensive training on massive datasets, have an incredible grasp of image representations. This means you can fine-tune them with just a handful of examples, saving you the trouble of collecting thousands of images. Indeed, they can even learn with a single example (https://www.fast.ai/posts/2023-09-04-learning-jumps)
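
    As a concrete illustration of that zero-shot workflow, here is a minimal sketch using the community-converted LLaVA-1.5 weights on Hugging Face. The "llava-hf/llava-1.5-7b-hf" checkpoint, the prompt template, and the hardware note are assumptions here; the upstream repo also ships its own CLI.

    ```python
    # Minimal zero-shot VQA sketch with a converted LLaVA-1.5 checkpoint.
    # Assumes transformers with LLaVA support and a GPU with ~16 GB for fp16.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    model_id = "llava-hf/llava-1.5-7b-hf"  # community conversion (assumption)
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    # LLaVA-1.5 chat format: <image> marks where patch embeddings are spliced in.
    prompt = "USER: <image>\nWhat's the color of the car? ASSISTANT:"
    image = Image.open("car.jpg")

    inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=50)
    print(processor.decode(output[0], skip_special_tokens=True))
    ```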

  • Adept Open Sources 8B Multimodal Model
    6 projects | news.ycombinator.com | 18 Oct 2023
    Fuyu is not open source. At best, it is source-available. It's also not the only multimodal model you can run locally.

    A few other multimodal models that you can run locally include IDEFICS[0][1], LLaVA[2], and CogVLM[3]. I believe all of these have better licenses than Fuyu.

    [0]: https://huggingface.co/blog/idefics

    [1]: https://huggingface.co/HuggingFaceM4/idefics-80b-instruct

    [2]: https://github.com/haotian-liu/LLaVA

    [3]: https://github.com/THUDM/CogVLM

    6 projects | news.ycombinator.com | 18 Oct 2023
    I too would like to know about the training dataset, as I just took a look at the one for LLaVA[0] and found out that they used a pretty large amount of BLIP auto-generated captions.

    This seemed a bit surreal to me, like trying to train an LLM on the outputs of a smaller, worse-performing LLM.

    [0] https://github.com/haotian-liu/LLaVA/blob/main/docs/Data.md#...

  • AI — weekly megathread!
    2 projects | /r/artificial | 15 Oct 2023
    Researchers released LLaVA-1.5. LLaVA (Large Language and Vision Assistant) is an open-source large multimodal model that combines a vision encoder with Vicuna for general-purpose visual and language understanding. LLaVA-1.5 achieved SoTA on 11 benchmarks with only simple modifications to the original LLaVA, and completed training in ~1 day on a single 8×A100 node (a sketch of the core architecture follows below) [Demo | Paper | GitHub].
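
    The "vision encoder plus Vicuna" wiring comes down to projecting frozen CLIP patch features into the LLM's token-embedding space; LLaVA-1.5's notable architectural tweak was replacing the original single linear projection with a two-layer MLP. The sketch below is illustrative, with dimensions matching CLIP ViT-L/14 (1024) and Vicuna-7B (4096) as assumptions, not a copy of the repo's code.

    ```python
    # Illustrative LLaVA-style projector: maps vision-encoder patch features
    # into the LLM embedding space, where they are prepended to text tokens.
    import torch
    import torch.nn as nn

    class VisionProjector(nn.Module):
        def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
            super().__init__()
            # LLaVA-1.5 swaps the original single nn.Linear for a two-layer MLP.
            self.mlp = nn.Sequential(
                nn.Linear(vision_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )

        def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
            # (batch, num_patches, vision_dim) -> (batch, num_patches, llm_dim)
            return self.mlp(patch_features)

    projector = VisionProjector()
    visual_tokens = projector(torch.randn(1, 576, 1024))  # 576 = 24x24 patches
    print(visual_tokens.shape)  # torch.Size([1, 576, 4096])
    ```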

Stats

Basic LLaVA repo stats
Mentions: 20
Stars: 15,271
Activity: 9.4
Last commit: 1 day ago

haotian-liu/LLaVA is an open source project licensed under the Apache License 2.0, which is an OSI-approved license.

The primary programming language of LLaVA is Python.
