Recommended open LLMs with image input modality?

This page summarizes the projects mentioned and recommended in the original post on /r/LocalLLaMA

  • Awesome-Multimodal-Large-Language-Models

    Latest Papers and Datasets on Multimodal Large Language Models, and Their Evaluation.

  • https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation is pretty comprehensive. TL;DR: BLIP is probably the best, though I've heard it needs a lot of VRAM. In my experience it's the most responsive to prompt engineering (see the captioning sketch after this list).

  • unilm

    Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities

  • It is missing Kosmos-2. I remember its image captioning was really good (the demo is currently down), and it's almost as fast as LLaVA and LaVIN.

  • instructblip-pipeline

    A multimodal inference pipeline that integrates InstructBLIP with textgen-webui for Vicuna and related models.

  • I've been using it in oobabooga. There's a repo for the extension here: https://github.com/kjerk/instructblip-pipeline/tree/main (a standalone transformers sketch of InstructBLIP follows this list).
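
For trying the prompt sensitivity mentioned above outside a webui, here is a minimal sketch of BLIP captioning via the Hugging Face transformers library. The checkpoint name, prompt prefix, and image path are placeholder assumptions, not taken from the post:

    # Minimal BLIP captioning sketch (Hugging Face transformers).
    # Assumes `transformers`, `torch`, and `Pillow` are installed; the model
    # name, prompt, and image path below are illustrative placeholders.
    from PIL import Image
    from transformers import BlipProcessor, BlipForConditionalGeneration

    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

    image = Image.open("example.jpg").convert("RGB")

    # Conditional captioning: the text acts as a prefix the model completes,
    # so changing it steers the caption (the prompt sensitivity noted above).
    inputs = processor(image, "a photograph of", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    print(processor.decode(out[0], skip_special_tokens=True))

A similar sketch for InstructBLIP, the model the instructblip-pipeline extension wraps; the Vicuna-7B checkpoint and the fp16/GPU settings are assumptions chosen to keep VRAM use manageable:

    # InstructBLIP sketch (Vicuna-7B variant, fp16 on GPU) - illustrative only.
    import torch
    from PIL import Image
    from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

    processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
    model = InstructBlipForConditionalGeneration.from_pretrained(
        "Salesforce/instructblip-vicuna-7b", torch_dtype=torch.float16
    ).to("cuda")

    image = Image.open("example.jpg").convert("RGB")
    inputs = processor(
        images=image, text="Describe this image in detail.", return_tensors="pt"
    ).to("cuda", torch.float16)

    out = model.generate(**inputs, max_new_tokens=100)
    print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())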

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.
