| | clipping-CLIP-to-GAN | TediGAN |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 40 | 361 |
| Growth | - | 0.0% |
| Activity | 10.0 | 0.0 |
| Last commit | over 3 years ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
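The recency weighting described above can be sketched as an exponentially decaying sum over commit dates. The half-life and formula below are illustrative assumptions, not the site's actual scoring method:

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, now=None, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits count
    more than older ones. Illustrative formula only; the site's real
    activity metric is not published here."""
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for d in commit_dates:
        age_days = max((now - d).total_seconds() / 86400.0, 0.0)
        score += 0.5 ** (age_days / half_life_days)
    return score

# Example: three commits at increasing ages; older commits add less.
now = datetime(2021, 3, 1, tzinfo=timezone.utc)
commits = [now - timedelta(days=n) for n in (1, 30, 300)]
print(round(activity_score(commits, now=now), 2))  # → 1.48
```

With this shape, a repository whose commits are all recent scores close to its raw commit count, while a long-dormant one decays toward zero, matching the 10.0-vs-0.0 contrast in the table above.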
clipping-CLIP-to-GAN
(Added Feb. 24, 2021) clipping-CLIP-to-GAN by cloneofsimo. Uses FastGAN to generate images.
TediGAN
(Added Feb. 23, 2021) TediGAN - Colaboratory by weihaox. Uses StyleGAN to generate images. GitHub. I got the error "No pre-trained weights found for perceptual model!" when I used the Colab notebook; it was fixed when I made the change mentioned here. After this change I still got an error in the cell that displays the images, but the results were written to the remote file system. Use the "Files" icon on the left to browse it.
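When the display cell fails, the generated images can also be collected programmatically instead of browsing by hand. This is a sketch that globs common image extensions under a results directory; the directory path is whatever the notebook actually writes to, and the demo path below is a stand-in, not TediGAN's real output location:

```python
import glob
import os
import tempfile

def list_generated_images(results_dir):
    """Return all .png/.jpg files under results_dir, searching
    subdirectories recursively. results_dir is assumed to be the
    directory the notebook writes its outputs to."""
    found = []
    for pattern in ("*.png", "*.jpg"):
        found.extend(
            glob.glob(os.path.join(results_dir, "**", pattern), recursive=True)
        )
    return sorted(found)

# Demo against a throwaway directory standing in for the Colab filesystem.
demo_dir = tempfile.mkdtemp()
open(os.path.join(demo_dir, "sample_000.png"), "w").close()
print(list_generated_images(demo_dir))
```

In a Colab cell you would point `list_generated_images` at the notebook's output folder and then display or download the returned paths.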
What are some alternatives?
- DALLECLIP
- StyleCLIP - Using CLIP and StyleGAN to generate faces from prompts.
- CLIP-Style-Transfer - Doing style transfer with linguistic features using OpenAI's CLIP.
- Colab-deep-daze - Simple command-line tool for text-to-image generation using OpenAI's CLIP and Siren (implicit neural representation network).
- VectorAscent - Generate vector graphics from a textual caption.
- stylegan2-clip-approach - Navigating StyleGAN2's W latent space using CLIP.
- AuViMi - AuViMi stands for audio-visual mirror. The idea is to have CLIP generate its interpretation of what your webcam sees, combined with the words that are spoken.
- Story2Hallucination
- clip-glass - Repository for "Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search".
- stylized-neural-painting - Official PyTorch implementation of the preprint paper "Stylized Neural Painting" (CVPR 2021).