clip-glass vs meshed-memory-transformer
| | clip-glass | meshed-memory-transformer |
|---|---|---|
| Mentions | 13 | 2 |
| Stars | 177 | 497 |
| Growth | - | 2.8% |
| Activity | 0.0 | 0.0 |
| Latest commit | over 2 years ago | over 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
clip-glass
- test
  (Added Feb. 5, 2021) CLIP-GLaSS.ipynb - Colaboratory, by Galatolo. Uses BigGAN (default) or StyleGAN to generate images. The GPT2 config is for image-to-text, not text-to-image. GitHub.
- Image to text models
  After a cursory search I found CLIP-GLaSS and CLIP-cap. I have used CLIP-GLaSS in a previous experiment, but found its captions for digital/CG images quite underwhelming. That is understandable, since this is not what the model was trained on, but I would still like to use a better model.
- [R] end-to-end image captioning
  CLIP-GLaSS
- What CLIP-GLaSS thinks Ancient Egyptian computers would look like
- Texttoimage 3 Images For Text Photo Of Donald
  The images were generated using this notebook.
- CLIP-GLaSS prompt: "Screenshot of a video game from the 1930s"
- [P] List of sites/programs/projects that use OpenAI's CLIP neural network for steering image/video creation to match a text description
  The CLIP-GLaSS project has image-to-text functionality (I haven't tried it).
- For educational purposes: Text-to-image (3 runs with no cherry-picking, 6 images each) for the text "Photo of a Lamborghini painted purple and red", generated using CLIP-GLaSS with config=StyleGAN2_car_d, save_each=50, generations=1000
  Link to notebook. (A sketch of the CLIP-guided search behind these runs appears after this list.)
- Sharing CLIP magic based on OpenAI's blog post via a bit more accessible YT medium. Lmk what u think 🙈 ❤️
  CLIP-GLaSS
- [R] [P] Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search. Link to code and Google Colab notebook for project CLIP-GLaSS is in a comment.
  GitHub for CLIP-GLaSS is here.
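
Several of the posts above lean on CLIP-GLaSS ("CLIP-Guided Generative Latent Space Search") without spelling out the underlying loop: CLIP scores how well each generated image matches the text prompt, and the search keeps the latent vectors that score highest. Below is a minimal sketch of that idea, with heavy simplifications: the real project runs a genetic algorithm (NSGA-II) over a BigGAN or StyleGAN latent space, whereas here `generate` is a random stand-in and the 512-dimensional latent is an assumption.

```python
# Minimal sketch of CLIP-guided latent space search. This is a simplification:
# CLIP-GLaSS itself runs a genetic algorithm (NSGA-II) over a BigGAN/StyleGAN
# latent space rather than the plain random search shown here.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

prompt = "Photo of a Lamborghini painted purple and red"

def generate(latent):
    # Stand-in for the GAN generator (BigGAN or StyleGAN in CLIP-GLaSS).
    # Returns a CLIP-sized image batch so the sketch runs end to end.
    return torch.rand(latent.shape[0], 3, 224, 224, device=device)

with torch.no_grad():
    # Embed the prompt once; normalize so dot products are cosine similarities.
    text_features = model.encode_text(clip.tokenize([prompt]).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

    best_score, best_latent = float("-inf"), None
    for _ in range(100):  # cf. generations=1000 in the post above
        latent = torch.randn(1, 512, device=device)  # assumed latent dimension
        image_features = model.encode_image(generate(latent))
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        score = (image_features @ text_features.T).item()  # cosine similarity
        if score > best_score:
            best_score, best_latent = score, latent
```

The only CLIP calls used are `clip.load`, `clip.tokenize`, `encode_text`, and `encode_image`; everything else is scaffolding that a real GAN generator and a proper search strategy would replace.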
meshed-memory-transformer
- [D] Data transfer (image features) between different models in separate Docker containers
- [R] end-to-end image captioning
  I could use some up-to-date models (e.g., this one: https://github.com/aimagelab/meshed-memory-transformer), but all of the ones I looked into require a pre-processing step that generates region features and bounding boxes. The problem is that I can't use an off-the-shelf bounding-box extraction model, as it would not perform well on my dataset (the images are nothing like COCO). So I was wondering whether there is a relatively up-to-date architecture that does not require this processing step, i.e., an implementation that takes only images as input and produces sentences as output. (A sketch of the detector-free alternative is shown after this item.)
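
For context on the pre-processing step the comment describes: meshed-memory-transformer is trained on precomputed region features (one 2048-dimensional vector per detected box, produced by a bottom-up-attention detector), not raw images. One common detector-free alternative is to pool grid features from a plain CNN backbone, which yields the same kind of input: a set of 2048-dim vectors per image. The sketch below illustrates that substitution; it is not the repository's own pipeline, and example.jpg is a hypothetical path.

```python
# Minimal sketch (not the meshed-memory-transformer pipeline itself) of
# replacing detector-based region features with CNN grid features, so no
# bounding-box extraction step is needed.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Backbone without the pooling/classification head; layer4 output is
# (B, 2048, H/32, W/32) for a ResNet-101.
resnet = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
backbone = torch.nn.Sequential(*list(resnet.children())[:-2]).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    fmap = backbone(image)  # (1, 2048, 7, 7)

# Flatten the spatial grid into a sequence of "region-like" features:
# 49 vectors of dim 2048, the same shape a region-based captioner consumes
# (N detections x 2048).
grid_features = fmap.flatten(2).transpose(1, 2)  # (1, 49, 2048)
print(grid_features.shape)
```

Whether grid features match detector region features in caption quality depends on the dataset; the point here is only that the bounding-box stage can be swapped out without changing the captioner's expected input shape.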
What are some alternatives?
a-PyTorch-Tutorial-to-Image-Captioning - Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
deep-daze - Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). Technique was originally created by https://twitter.com/advadnoun
stargan-v2 - StarGAN v2 - Official PyTorch Implementation (CVPR 2020)
aphantasia - CLIP + FFT/DWT/RGB = text to image/video
BLIP - PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
stylized-neural-painting - Official PyTorch implementation of the preprint paper "Stylized Neural Painting" (CVPR 2021).
catr - Image Captioning Using Transformer
StyleCLIP - Using CLIP and StyleGAN to generate faces from prompts.
py-bottom-up-attention - PyTorch bottom-up attention with Detectron2
CLIP-Style-Transfer - Doing style transfer with linguistic features using OpenAI's CLIP.
StyleCLIP - Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)