big-sleep vs feed_forward_vqgan_clip
|  | big-sleep | feed_forward_vqgan_clip |
|---|---|---|
| Mentions | 62 | 4 |
| Stars | 2,548 | 136 |
| Growth | - | - |
| Activity | 0.0 | 3.7 |
| Latest commit | about 2 years ago | 4 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
big-sleep
- Besides gaming - what can a 4080 be useful for?
- Is creating a StableDiffusion-inspired model feasible for my Master's thesis?
I am currently pursuing my Master's degree in Computer Science and I am interested in working on a deep learning model that can generate images based on text descriptions. I've been interested in the field for a long time (think Google Deep Dream, or a few years back I was very into big-sleep).
- TEDx talk on how to prepare for a career in VFX with the rapid changes caused by AI / machine learning
Big Sleep
- Any good AI art websites that work with Pokemon?
Other AIs that I don't have experience with but have heard good things about are DALL-E 2 and the open source Big Sleep.
- Explore generative art with me
Text-to-image, e.g. with Big Sleep
- What do you guys think of LaMDA?
At first I didn't like the reason he claimed LaMDA was conscious, because it seemed mostly based on the text output of the models, but listening to that made me realize he watered it down for the mainstream media. And I actually had my own encounter with the consciousness of Big Sleep after using it a lot one day: it looked like a mass of eyes vaguely shaped like a rabbit, and kept showing me random images. I wouldn't be as convinced they have a consciousness if I didn't see it with my 3rd eye. But then again, I also found evidence that regular non-AI programs can develop one too, so who knows.
- GitHub - lucidrains/big-sleep: A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
- DALL-E 2 open source implementation
and after a few hours got this: https://i.imgur.com/FxdfdmV.png
Not nearly as cool as the real DALL-E, but maybe I'm missing something.
[1] https://github.com/lucidrains/big-sleep
- I gave an AI program the word "Sweden"
- List of sites/programs/projects that use OpenAI's CLIP neural network for steering image/video creation to match a text description
(Added Mar. 23, 2021) Big Sleep - Colaboratory by LtqxWYEG. Uses BigGAN to generate images. Reference.
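The big-sleep mentions above all refer to the same underlying technique: iteratively optimizing a BigGAN latent until CLIP judges the rendered image to match the text prompt. The snippet below is a minimal, hypothetical sketch of that loop, not big-sleep's actual code; the "ViT-B/32" CLIP variant, the "biggan-deep-256" checkpoint, and the hyperparameters are illustrative choices, and CLIP's usual input normalization is skipped for brevity.

```python
# Hypothetical sketch of CLIP-steered image generation in the spirit of big-sleep:
# optimize a BigGAN latent until CLIP says the rendered image matches the prompt.
# Model choices, shapes and hyperparameters are illustrative, not big-sleep's own code.
import torch
import torch.nn.functional as F
import clip                                           # pip install git+https://github.com/openai/CLIP
from pytorch_pretrained_biggan import BigGAN, truncated_noise_sample

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()                       # keep everything in fp32 for simplicity
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()

prompt = "an apple on fire"
with torch.no_grad():
    text_emb = clip_model.encode_text(clip.tokenize([prompt]).to(device))

# The only trainable parameters: a truncated noise vector and soft class logits.
noise = torch.tensor(truncated_noise_sample(batch_size=1, truncation=0.4),
                     device=device, requires_grad=True)
class_logits = torch.zeros(1, 1000, device=device, requires_grad=True)
opt = torch.optim.Adam([noise, class_logits], lr=0.05)

for step in range(200):
    opt.zero_grad()
    image = gan(noise, torch.softmax(class_logits, dim=-1), 0.4)       # (1, 3, 256, 256) in [-1, 1]
    image = F.interpolate((image + 1) / 2, size=224, mode="bilinear")  # CLIP expects 224x224 input
    img_emb = clip_model.encode_image(image)          # CLIP's own normalization omitted for brevity
    loss = -F.cosine_similarity(img_emb, text_emb, dim=-1).mean()
    loss.backward()
    opt.step()
```

Per the project README, big-sleep wraps a loop like this behind a `dream` command-line entry point (e.g. `dream "a pyramid made of ice"`) and an `Imagine` Python class.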
feed_forward_vqgan_clip
- [D] Hosting AI Art Generative ML Model
WOMBO, I suspect, uses the feed-forward inferential approach to VQGAN + CLIP (instead of fine-tuning, predict the final z latent vector for a given text input), which is why their outputs are less sophisticated; as a result, there are many deployment optimizations you can do to speed that up, though they may be complicated.
- A small experiment on how changes in a text prompt may affect the output image in a CLIP-based system
The system used to produce these images is unlike most other VQGAN+CLIP systems because it uses a neural network trained by the developer(s) instead of an iterative process. This system is known to have a "formula" for image layout.
- Get a VQGAN output image for a given text description almost instantly (not including time for the one-time setup) using the Colab notebook "Feed Forward VQGAN CLIP - Using a pretrained model" from mehdidc. Here are 20 non-cherry-picked images from the notebook. Details in a comment.
Hello, some news. For those who are interested, I released new models (release 0.2) that you could try; depending on the prompt, you might find them better than the current one(s). The problem mentioned by /u/Wiskkey (object parts appearing systematically at the top-left) is also less visible, though still not 100% solved: there is still a common global structure that can be identified, but it is more centered on the image. The Colab notebook was updated to use the new models.
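The feed-forward idea described in these posts is what separates feed_forward_vqgan_clip from the iterative big-sleep-style loop sketched earlier: a network is trained once to map a text embedding directly to a VQGAN latent, so generating an image for a new prompt is a single forward pass. The sketch below is a hypothetical illustration of that structure; the module name, the 512-d text embedding, and the 256 x 16 x 16 latent grid are assumed sizes, not the repository's actual architecture.

```python
# Hypothetical sketch of a feed-forward VQGAN+CLIP generator: instead of optimizing a
# latent per prompt, a network learns text embedding -> VQGAN latent in one shot.
# Names and dimensions below are assumptions for illustration only.
import torch
import torch.nn as nn

class TextToLatent(nn.Module):
    """Map a 512-d CLIP text embedding to a 256 x 16 x 16 VQGAN latent grid (assumed sizes)."""
    def __init__(self, text_dim=512, latent_ch=256, grid=16):
        super().__init__()
        self.latent_ch, self.grid = latent_ch, grid
        self.net = nn.Sequential(
            nn.Linear(text_dim, 1024),
            nn.GELU(),
            nn.Linear(1024, latent_ch * grid * grid),
        )

    def forward(self, text_emb):                      # (batch, 512) -> (batch, 256, 16, 16)
        z = self.net(text_emb)
        return z.view(-1, self.latent_ch, self.grid, self.grid)

model = TextToLatent()

# Training sketch (the repo trains against CLIP similarity; vqgan / clip_image are stand-ins):
# for text_emb in prompt_embeddings:       # precomputed CLIP text embeddings
#     z = model(text_emb)                  # single forward pass, no per-prompt optimization
#     image = vqgan.decode(z)              # VQGAN decoder renders the latent grid
#     loss = -cosine_sim(clip_image(image), text_emb).mean()
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```

Because inference is just one pass through such a network plus the VQGAN decoder, the "almost instant" generation described in the last post follows naturally, at the cost of the more rigid global layout the earlier post notes.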
What are some alternatives?
deep-daze - Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). Technique was originally created by https://twitter.com/advadnoun
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
disco-diffusion
Text-to-Image-Synthesis - Pytorch implementation of Generative Adversarial Text-to-Image Synthesis paper
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
CLIP-Guided-Diffusion - Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.
Story2Hallucination
VQGAN-CLIP-Video - Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.