| | OFA | ImageNet21K |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 2,325 | 695 |
| Growth | 1.0% | 2.9% |
| Activity | 2.8 | 10.0 |
| Last commit | 14 days ago | over 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OFA

- [R][P] Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework + VQA Hugging Face Spaces Demo
  github: https://github.com/OFA-Sys/OFA
- OFA: a model that handles text-to-image generation as well as other tasks, from this post:
  [R] Paper: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework. Shocking performance in text-to-image synthesis and open-domain tasks.
ImageNet21K

- Improving Search Quality for Non-English Queries with Fine-tuned Multilingual CLIP Models
  ViT-B/32, using the ImageNet-21k dataset
What are some alternatives?

- GroundingDINO - Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
- vision_transformer
- ONE-PEACE - A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities
- LMOps - General technology for enabling AI capabilities with LLMs and MLLMs
- MAGIC - Language Models Can See: Plugging Visual Controls in Text Generation
- Fashion12K_german_queries
- UPop - [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers
- mPLUG-Owl - mPLUG-Owl & mPLUG-Owl2: Modularized Multimodal Large Language Model
- docarray - Represent, send, store, and search multimodal data