feed_forward_vqgan_clip
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt (by mehdidc)
VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab. (by nerdyrodent)
| | feed_forward_vqgan_clip | VQGAN-CLIP |
|---|---|---|
| Mentions | 4 | 67 |
| Stars | 136 | 2,563 |
| Growth | - | - |
| Activity | 3.7 | 0.0 |
| Latest commit | 4 months ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
feed_forward_vqgan_clip
Posts with mentions or reviews of feed_forward_vqgan_clip. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-09-11.
- [D] Hosting AI Art Generative ML Model
I suspect WOMBO uses the feed-forward inferential approach to VQGAN+CLIP (instead of fine-tuning, predict the final z latent vector for a given text input), which is why their outputs are less sophisticated. As a result, there are many deployment optimizations you can do to speed that up, which may be complicated.
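The distinction described in this post (optimize a latent per prompt vs. predict it in one forward pass) can be sketched in miniature. Everything below is a hypothetical toy stand-in: `clip_loss`, `optimize_latent`, and `feed_forward_latent` are simplified placeholders for the real CLIP/VQGAN components, not code from either repository.

```python
# Toy contrast between the two approaches. The "latent" is a small list of
# floats, and the loss is a plain quadratic distance standing in for
# "how far is the decoded image from the text prompt under CLIP".

def clip_loss(z, target):
    # Stand-in for the CLIP image/text similarity loss.
    return sum((zi - ti) ** 2 for zi, ti in zip(z, target))

def optimize_latent(target, steps=200, lr=0.1):
    # Iterative VQGAN+CLIP approach: start from a blank latent and run
    # gradient descent once per prompt (expensive at inference time).
    z = [0.0] * len(target)
    for _ in range(steps):
        grad = [2 * (zi - ti) for zi, ti in zip(z, target)]
        z = [zi - lr * g for zi, g in zip(z, grad)]
    return z

def feed_forward_latent(text_embedding, weights):
    # Feed-forward approach: a trained network maps the text embedding to
    # the latent in a single pass. "weights" stands in for learned parameters.
    return [w * x for w, x in zip(weights, text_embedding)]

target = [1.0, -2.0, 0.5]                              # pretend text embedding
z_iter = optimize_latent(target)                       # many steps per prompt
z_ff = feed_forward_latent(target, [1.0, 1.0, 1.0])    # one pass per prompt
```

The deployment advantage mentioned in the post falls out of this shape: the feed-forward model amortizes the optimization into training, so serving a prompt costs one forward pass instead of hundreds of gradient steps.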
- A small experiment on how changes in a text prompt may affect output image in a CLIP-based system
The system used to produce these images is unlike most other VQGAN+CLIP systems because it uses a neural network trained by the developer(s) instead of an iterative process. This system is known to have a "formula" for image layout.
- Get a VQGAN output image for a given text description almost instantly (not including time for one-time setup) using Colab notebook "Feed Forward VQGAN CLIP - Using a pretrained model" from mehdidc. Here are 20 non-cherry-picked images from the notebook. Details in a comment.
Hello, some news. For those who are interested, I released new models (release 0.2) that you could try; you might find them better (depending on the prompt) than the current one(s). The problem mentioned by /u/Wiskkey (object parts appearing systematically in the top-left) is less visible, but still not 100% solved: there is still a common global structure that can be identified, though it is more centered on the image. The Colab notebook was updated to use the new models.
VQGAN-CLIP
Posts with mentions or reviews of VQGAN-CLIP. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-09-23.
- 📚 Tutorials & 🎨 AI Art Generation Tool List Mega Thread
VQGAN-CLIP
- Which is your favorite text to image model overall?
I've messed with many text-to-image models over the past couple of years, and I found that while I currently enjoy Stable Diffusion's coherency, I have a soft spot for the ImageNet model used by default for VQGAN+CLIP. It easily approaches the uncanny valley when generating people or animals, but makes for great abstract backgrounds and wallpapers. I already have nostalgia for generating images with it on my CPU overnight.
- Stable Diffusion Announcement
For someone only tangentially familiar with this space, how is this different than e.g. https://github.com/nerdyrodent/VQGAN-CLIP which you can also run at home? Is it the quality of the generated images?
- Medieval Noir - VQGAN-CLIP - COCO Checkpoint
Used https://github.com/nerdyrodent/VQGAN-CLIP
- Once you have access, do you run it on your computer or over the internet on OpenAI's computers?
- How to get an AI imaging effect in Premiere Pro
- A Guide to Asking Robots to Design Stained Glass Windows
I don't have any of the DALL-Es but I do have a couple from github [1], [2] which gave these outputs[3]
[1] https://github.com/nerdyrodent/VQGAN-CLIP
- How not to waste $1600?
If you want to try your hand at buggering your whole system, try playing with AI image generation, as it uses all possible computer assets :D. There are a lot of forks and installations of those, but I found the VQGANs from GitHub the easiest. The problem is that some require familiarity with the shell and Python, and in some cases you need to enable the Linux subsystem in Windows (is it called a subsystem? it is not exactly a VM). This one is the easiest to install of all I tried. I liked the results of Pixray most, but I wrecked it, so I use this one nowadays.
- Ask HN: Is there a publicly available (not private beta) text-to-image API?
- Got a Machine Learning Algorithm to depict Aphex
For those that are interested, I used VQGAN-CLIP, specifically this GitHub repository