LLaVA vs InternVideo

Compare LLaVA and InternVideo and see what their differences are.

                 LLaVA               InternVideo
Mentions         20                  3
Stars            16,101              909
Stars growth     -                   14.4%
Activity         9.4                 8.0
Last commit      6 days ago          7 days ago
Language         Python              Python
License          Apache License 2.0  Apache License 2.0
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

LLaVA

Posts with mentions or reviews of LLaVA. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.
  • Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real
    4 projects | news.ycombinator.com | 10 Dec 2023
    Update: For anyone else facing the commercial use question on LLaVA - it is licensed under Apache 2.0. Can be used commercially with attribution: https://github.com/haotian-liu/LLaVA/blob/main/LICENSE
  • Image-to-Caption Generator
    3 projects | /r/computervision | 7 Dec 2023
    https://github.com/haotian-liu/LLaVA (fairly established and well supported)
  • Llamafile lets you distribute and run LLMs with a single file
    12 projects | news.ycombinator.com | 29 Nov 2023
    That's not a llamafile thing, that's a llava-v1.5-7b-q4 thing - you're running the LLaVA 1.5 model at a 7 billion parameter size further quantized to 4 bits (the q4).

    GPT4-Vision is running a MUCH larger model than the tiny 7B 4GB LLaVA file in this example.

LLaVA has a 13B model available which might do better, though there's no chance it will be anywhere near as good as GPT-4 Vision. https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZO...
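As a rough sanity check on the "tiny 7B 4GB" figure in the comment above: 7 billion weights at 4 bits each come to about 3.5 GB before quantization metadata and the vision projector are added, which is where the roughly 4 GB file size comes from. The snippet below is just that back-of-envelope arithmetic.

    # Back-of-envelope size of a 7B-parameter model quantized to 4 bits per weight.
    # Real q4 GGUF files add quantization scales and a CLIP projector, so they land
    # somewhat above this lower bound (around 4 GB for llava-v1.5-7b-q4).
    params = 7e9                # 7 billion weights
    bits_per_weight = 4         # the "q4" in the file name
    print(f"~{params * bits_per_weight / 8 / 1e9:.1f} GB")   # ~3.5 GB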

  • FLaNK Stack Weekly for 27 November 2023
    28 projects | dev.to | 27 Nov 2023
  • Using GPT-4 Vision with Vimium to browse the web
    9 projects | news.ycombinator.com | 8 Nov 2023
    There are open source models such as https://github.com/THUDM/CogVLM and https://github.com/haotian-liu/LLaVA.
  • Is supervised learning dead for computer vision?
    9 projects | news.ycombinator.com | 28 Oct 2023
    Hey Everyone,

    I’ve been diving deep into the world of computer vision recently, and I’ve gotta say, things are getting pretty exciting! I stumbled upon this vision-language model called LLaVA (https://github.com/haotian-liu/LLaVA), and it’s been nothing short of impressive.

    In the past, if you wanted to teach a model to recognize the color of your car in an image, you’d have to go through the tedious process of training it from scratch. But now, with models like LLaVA, all you need to do is prompt it with a question like “What’s the color of the car?” and bam – you get your answer, zero-shot style.

    It’s kind of like what we’ve seen in the NLP world. People aren’t training language models from the ground up anymore; they’re taking pre-trained models and fine-tuning them for their specific needs. And it looks like we’re headed in the same direction with computer vision.

    Imagine being able to extract insights from images with just a simple text prompt. Need to step it up a notch? A bit of fine-tuning can do wonders, and from my experiments, it can even outperform models trained from scratch. It’s like getting the best of both worlds!

    But here’s the real kicker: these foundational models, thanks to their extensive training on massive datasets, have an incredible grasp of image representations. This means you can fine-tune them with just a handful of examples, saving you the trouble of collecting thousands of images. Indeed, they can even learn with a single example (https://www.fast.ai/posts/2023-09-04-learning-jumps)
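The zero-shot prompting workflow described in this post can be sketched in a few lines. The example below is illustrative only: it assumes the Hugging Face Transformers port of LLaVA-1.5 (llava-hf/llava-1.5-7b-hf) rather than the original repository's CLI, and "car.jpg" is a placeholder path.

    # Minimal zero-shot VQA sketch using the Transformers port of LLaVA-1.5.
    # Assumes: pip install torch transformers pillow, and a GPU with enough VRAM.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    model_id = "llava-hf/llava-1.5-7b-hf"      # HF port of LLaVA-1.5 (assumption)
    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    image = Image.open("car.jpg")              # placeholder image path
    prompt = "USER: <image>\nWhat's the color of the car? ASSISTANT:"

    inputs = processor(text=prompt, images=image, return_tensors="pt").to(
        model.device, torch.float16
    )
    output = model.generate(**inputs, max_new_tokens=50)
    print(processor.decode(output[0], skip_special_tokens=True))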

  • Adept Open Sources 8B Multimodal Model
    6 projects | news.ycombinator.com | 18 Oct 2023
    Fuyu is not open source. At best, it is source-available. It's also not the only one.

    A few other multimodal models that you can run locally include IDEFICS[0][1], LLaVA[2], and CogVLM[3]. I believe all of these have better licenses than Fuyu.

    [0]: https://huggingface.co/blog/idefics

    [1]: https://huggingface.co/HuggingFaceM4/idefics-80b-instruct

    [2]: https://github.com/haotian-liu/LLaVA

    [3]: https://github.com/THUDM/CogVLM

  • AI — weekly megathread!
    2 projects | /r/artificial | 15 Oct 2023
    Researchers released LLaVA-1.5. LLaVA (Large Language and Vision Assistant) is an open-source large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. LLaVA-1.5 achieved SoTA on 11 benchmarks, with just simple modifications to the original LLaVA and completed training in ~1 day on a single 8-A100 node [Demo | Paper | GitHub].
  • LLaVA: Visual Instruction Tuning: Large Language-and-Vision Assistant
    1 project | news.ycombinator.com | 11 Oct 2023
  • LLaVA gguf/ggml version
    1 project | /r/LocalLLaMA | 19 Sep 2023
    Hi all, I’m wondering if there is a version of LLaVA https://github.com/haotian-liu/LLaVA that works with gguf and ggml models? I know there is one for miniGPT-4, but it just doesn’t seem as reliable as LLaVA, and by the looks of it you need at least 24 GB of VRAM to run LLaVA locally. The 4-bit version still requires 12 GB of VRAM.
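On the GGUF question in this last post: one way to run a 4-bit GGUF conversion of LLaVA-1.5 locally is through llama.cpp's Python bindings, which include a LLaVA-1.5 chat handler. The sketch below is an assumption-laden illustration; the model and "mmproj" (CLIP projector) file names and the image URL are placeholders for whatever converted weights you actually use.

    # Sketch: 4-bit GGUF LLaVA-1.5 via llama-cpp-python (pip install llama-cpp-python).
    # File names and the image URL are placeholders.
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
    llm = Llama(
        model_path="llava-v1.5-7b.Q4_K_M.gguf",  # 4-bit quantized language model
        chat_handler=chat_handler,
        n_ctx=2048,          # enlarged context so the image embedding fits
        logits_all=True,     # required by the LLaVA chat handler in older releases
    )

    result = llm.create_chat_completion(
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/car.jpg"}},
                {"type": "text", "text": "What's the color of the car?"},
            ],
        }]
    )
    print(result["choices"][0]["message"]["content"])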

InternVideo

Posts with mentions or reviews of InternVideo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-19.
  • [Demo] Watch Videos with ChatGPT
    7 projects | /r/ChatGPT | 19 Apr 2023
    Thanks for your interest! If you had any ideas to make the given demo more user-friendly, please do not hesitate to share them with us. We are open to discussing relevant ideas about video foundation models or other topics. We made some progress in these areas (InternVideo, VideoMAE v2, UMT, and more). We believe that user-level intelligent video understanding is on the horizon with the current LLM, computing power, and video data.
  • [R] InternVideo: General Video Foundation Models via Generative and Discriminative Learning
    1 project | /r/MachineLearning | 10 Apr 2023
    Found relevant code at https://github.com/OpenGVLab/InternVideo + all code implementations here
    2 projects | /r/u_noise_3 | 10 Apr 2023
    The foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models simply focus on image-level pretraining and adaption, which are limited for dynamic and complex video-level understanding tasks. To fill the gap, we present general video foundation models, InternVideo, by taking advantage of both generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives, and selectively coordinates video representations of these two complementary frameworks in a learnable manner to boost various video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets from extensive tasks including video action recognition/detection, video-language alignment, and open-world video applications. Especially, our methods can obtain 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results effectively show the generality of our InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo.
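The "video-language contrastive learning" objective mentioned in the abstract is, at its core, a CLIP-style InfoNCE loss over paired video and text embeddings. The sketch below is a generic illustration of that objective in PyTorch, not InternVideo's actual implementation.

    # Generic video-text contrastive (InfoNCE) loss, illustrating the
    # "video-language contrastive learning" objective; not InternVideo's code.
    import torch
    import torch.nn.functional as F

    def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
        """video_emb, text_emb: (batch, dim) embeddings of paired clips and captions."""
        video_emb = F.normalize(video_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        logits = video_emb @ text_emb.t() / temperature    # (batch, batch) similarities
        targets = torch.arange(len(video_emb), device=logits.device)
        # Matched pairs sit on the diagonal; pull them together, push the rest apart.
        loss_v2t = F.cross_entropy(logits, targets)
        loss_t2v = F.cross_entropy(logits.t(), targets)
        return (loss_v2t + loss_t2v) / 2

    # Example with random embeddings standing in for encoder outputs.
    loss = video_text_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
    print(loss.item())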

What are some alternatives?

When comparing LLaVA and InternVideo you can also consider the following projects:

MiniGPT-4 - Open-sourced code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)

VideoMAEv2 - [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking

CogVLM - a state-of-the-art-level open visual language model | multimodal pretrained model

mmaction2 - OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch

mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model

ego4d-eccv2022-solutions - Champion Solutions for the Ego4D Challenge of ECCV 2022

llama.cpp - LLM inference in C/C++

ALPRO - Align and Prompt: Video-and-Language Pre-training with Entity Prompts

image2dsl - This repository contains the implementation of an Image to DSL (Domain Specific Language) model. The model uses a pre-trained Vision Transformer (ViT) as an encoder to extract image features and a custom Transformer Decoder to generate DSL code from the extracted features.