OFA vs MAGIC
| | OFA | MAGIC |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 2,323 | 245 |
| Growth | 2.4% | - |
| Activity | 2.8 | 0.0 |
| Latest commit | 4 days ago | almost 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
OFA
- [R][P] Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework + VQA Hugging Face Spaces Demo
  GitHub: https://github.com/OFA-Sys/OFA
- OFA: a model that does text-to-image as well as other tasks. From: "[R] Paper: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework. Shocking performance in text-to-image synthesis and open-domain tasks."
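A minimal sketch of OFA's core idea: every task (captioning, VQA, text-to-image, visual grounding) is expressed as a plain-text instruction to one sequence-to-sequence model. The template wordings below are illustrative approximations based on the paper's examples, not the exact strings a given OFA checkpoint expects.

```python
def build_instruction(task: str, text: str = "") -> str:
    """Map a task name to the plain-text instruction a unified
    seq2seq model would receive (illustrative templates only)."""
    templates = {
        "caption": "what does the image describe?",
        "vqa": text,  # for VQA, the question itself is the instruction
        "text2image": f'what is the complete image? caption: "{text}"',
        "grounding": f'which region does the text "{text}" describe?',
    }
    if task not in templates:
        raise ValueError(f"unknown task: {task}")
    return templates[task]
```

The single model then generates the answer, whether text tokens or discrete image tokens, from this one instruction interface, which is what lets it cover text-to-image alongside the other tasks.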
MAGIC
- What if TTI were run backwards? Like showing the model an image and asking what prompt and conditions (temperature, top_k, etc.) it would need to generate that image. This might give us a better glimpse of how it wants to receive prompts.
- Yeah, you could force a model to fill in a provided prompt template like that. Check this out: https://github.com/yxuansu/MAGIC
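A hedged sketch of the idea in the comment above: search over prompt-template slots and sampling conditions, scoring each candidate by how closely the generated image matches a target. Here `generate` and `similarity` are hypothetical stand-ins for a real text-to-image model and an image-similarity metric (e.g. a CLIP score); they are not part of any library.

```python
import itertools

def invert_tti(target_image, generate, similarity,
               subjects, styles,
               temperatures=(0.7, 1.0), top_ks=(40, 100)):
    """Grid-search prompts and sampling conditions, keeping whichever
    setting yields the image most similar to the target."""
    best_settings, best_score = None, float("-inf")
    for subject, style, temp, top_k in itertools.product(
            subjects, styles, temperatures, top_ks):
        prompt = f"{subject}, {style}"
        # generate() stands in for any TTI model callable
        image = generate(prompt, temperature=temp, top_k=top_k)
        score = similarity(image, target_image)
        if score > best_score:
            best_settings, best_score = (prompt, temp, top_k), score
    return best_settings, best_score
```

In practice the search space explodes quickly, which is why approaches like MAGIC instead guide generation token by token rather than brute-forcing whole prompts.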
- Cambridge AI Researchers Propose ‘MAGIC’: A Training-Free Framework That Plugs Visual Controls Into The Generation Of A Language Model
  GitHub: https://github.com/yxuansu/magic
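A simplified, hedged sketch of what "plugging visual controls into a language model" can look like at decoding time: each candidate next token is scored by its LM probability plus a weighted visual-relevance term. `lm_topk` and `image_score` are hypothetical stand-ins; the real MAGIC implementation (see the repo) also includes a degeneration penalty and tuned hyperparameters.

```python
def magic_decode(prefix, lm_topk, image_score, image, steps=10, beta=2.0):
    """Greedy decoding where each step picks the candidate token that
    maximizes LM probability plus beta * visual relevance to the image."""
    tokens = list(prefix)
    for _ in range(steps):
        candidates = lm_topk(tokens)  # [(token, lm_prob), ...]
        best_tok, best_score = None, float("-inf")
        for tok, lm_prob in candidates:
            # LM confidence plus how well the continuation matches the image
            score = lm_prob + beta * image_score(tokens + [tok], image)
            if score > best_score:
                best_tok, best_score = tok, score
        tokens.append(best_tok)
    return tokens
```

Because the visual scorer only re-ranks candidates the LM already proposes, the scheme is training-free: no weights of the language model are updated.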
What are some alternatives?
ImageNet21K - Official PyTorch implementation of the paper "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021)
Auto-GPT - Auto-GPT + CLIP vision for stable v0.3.1
GroundingDINO - Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
CLIP-Caption-Reward - PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
ONE-PEACE - A general representation model across vision, audio, language modalities. Paper: ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities
GPT2-Chinese - Chinese version of GPT2 training code, using BERT tokenizer.
UPop - [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers.
cappr - Completion After Prompt Probability. Make your LLM make a choice
CapDec - CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings)
InternChat - InternGPT / InternChat allows you to interact with ChatGPT by clicking, dragging and drawing using a pointing device. [Moved to: https://github.com/OpenGVLab/InternGPT]