MAGIC vs CLIP-Caption-Reward
| | MAGIC | CLIP-Caption-Reward |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 245 | 225 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | almost 2 years ago | almost 2 years ago |
| Language | Python | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MAGIC
-
What if TTI (text-to-image) was run backwards? Like showing an image and asking what prompt and settings (temperature, top_k, etc.) it would need to generate that image. This might give us a better glimpse of how it wants to receive prompts.
Yeah, you could force a model to try to fill in a provided prompt template like that. Check this out: https://github.com/yxuansu/MAGIC
-
Cambridge AI Researchers Propose ‘MAGIC’: A Training-Free Framework That Plugs Visual Controls Into The Generation Of A Language Model
Github: https://github.com/yxuansu/magic
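The core idea behind MAGIC, as described in the paper, is training-free: at each decoding step, the language model's candidate tokens are re-ranked by a score that combines LM probability with CLIP image-text similarity, so the image steers generation without any fine-tuning. A minimal sketch of that re-ranking step, with toy stand-in probabilities instead of real GPT-2/CLIP outputs (function and variable names here are illustrative, not the repo's actual API; the paper's full score also includes a degeneration penalty from contrastive search):

```python
import math

def magic_select(lm_probs, clip_sims, beta=2.0):
    """Pick the next token by LM log-probability plus a weighted CLIP
    image-text similarity bonus (simplified MAGIC-style re-ranking).

    lm_probs:  dict mapping candidate token -> LM probability
    clip_sims: dict mapping candidate token -> CLIP similarity of the
               image with the continuation ending in that token
    beta:      weight of the visual bonus (illustrative value)
    """
    best_tok, best_score = None, -math.inf
    for tok, p in lm_probs.items():
        score = math.log(p) + beta * clip_sims[tok]
        if score > best_score:
            best_tok, best_score = tok, score
    return best_tok

# Toy example: "surfboard" has a lower LM probability than "ball",
# but the image similarity bonus flips the ranking.
lm_probs = {"ball": 0.5, "surfboard": 0.3, "dog": 0.2}
clip_sims = {"ball": 0.1, "surfboard": 0.9, "dog": 0.2}
print(magic_select(lm_probs, clip_sims))  # prints "surfboard"
```

In the real framework this loop runs over the LM's top-k candidates at every step, with CLIP scoring each partial caption against the input image.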
CLIP-Caption-Reward
-
Is there any "image to text" AI?
Look for 'image captioning'. Here's an online example: https://vision-explorer.allenai.org/image_captioning . Here's a recent one that was open-sourced: https://github.com/j-min/CLIP-Caption-Reward
-
Adobe AI Researchers Open-Source Image Captioning AI CLIP-S: An Image-Captioning AI Model That Produces Fine-Grained Descriptions of Images
Continue reading | Check out the paper and GitHub
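CLIP-Caption-Reward's approach is to train the captioning model with CLIP image-text similarity as the reward, using a self-critical policy-gradient update: the sampled caption's reward is compared against a greedy-decoded baseline. A hedged sketch of that loss (names and the scalar interface are illustrative, not the repo's actual code, which operates on batched tensors):

```python
def self_critical_loss(sample_logprob, sample_reward, greedy_reward):
    """Self-critical sequence training step, simplified to scalars.

    advantage > 0 means the sampled caption scored higher (under CLIP)
    than the greedy baseline; minimizing the returned loss then pushes
    probability mass toward that sampled caption.
    """
    advantage = sample_reward - greedy_reward
    return -advantage * sample_logprob

# Sampled caption's CLIP reward (0.8) beats the greedy baseline (0.6),
# so the gradient would raise the sampled caption's log-probability.
print(self_critical_loss(-3.2, 0.8, 0.6))  # ≈ 0.64
```

Using CLIP similarity instead of CIDEr as the reward is what pushes the model toward fine-grained, image-grounded descriptions rather than generic reference-matching captions.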
What are some alternatives?
Auto-GPT - Auto-GPT + CLIP vision for stable v0.3.1
LAVIS - LAVIS - A One-stop Library for Language-Vision Intelligence
OFA - Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Oscar - Oscar and VinVL
GPT2-Chinese - Chinese version of GPT2 training code, using BERT tokenizer.
VLDet - [ICLR 2023] PyTorch implementation of VLDet (https://arxiv.org/abs/2211.14843)
cappr - Completion After Prompt Probability. Make your LLM make a choice
prismer - The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts".
CapDec - CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (findings)
InternChat - InternGPT / InternChat allows you to interact with ChatGPT by clicking, dragging and drawing using a pointing device. [Moved to: https://github.com/OpenGVLab/InternGPT]