| | latent-diffusion | imagen-pytorch |
|---|---|---|
| Mentions | 70 | 47 |
| Stars | 10,622 | 7,787 |
| Growth | 2.8% | - |
| Activity | 0.0 | 6.8 |
| Last commit | 2 months ago | about 1 month ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
latent-diffusion
-
SDXL: The next generation of Stable Diffusion models for text-to-image synthesis
Stable Diffusion XL (SDXL) is the latest text-to-image generation model developed by Stability AI, based on latent diffusion techniques. SDXL has the potential to create highly realistic images for media, entertainment, education, and industry, opening up new practical uses of AI imagery.
-
Is it possible to create a checkpoint from scratch?
Here's a link to the early latent-diffusion repo, which might be able to create a blank model (I haven't tested it): https://github.com/CompVis/latent-diffusion
-
Anything better than pix2pixHD?
Latent diffusion could work for you: https://github.com/CompVis/latent-diffusion (https://arxiv.org/abs/2112.10752)
-
Image Upscaler AI
There are a lot but the one implemented as LDSR in most stable guis is this one. https://github.com/CompVis/latent-diffusion
-
I've been collecting millions of images with only public domain / CC0 licensing. I'd like to train a Stable Diffusion model on the collection. Could someone share their knowledge of what this would take? Otherwise, simply enjoy my library.
CompVis/latent-diffusion: High-Resolution Image Synthesis with Latent Diffusion Models (github.com)
-
Run Clip on iPhone to Search Photos
The "retrieval based model" refers to https://github.com/CompVis/latent-diffusion#retrieval-augmen..., which uses ScaNN to train a knn embedding searcher.
-
Class Action Lawsuit filed against Stable Diffusion and Midjourney.
Stability is basically https://github.com/CompVis/latent-diffusion + training data.
-
[D] Influential papers round-up 2022. What are your favorites?
Found relevant code at https://github.com/CompVis/latent-diffusion + all code implementations here
-
Can anyone explain the differences between sampling methods and their uses to me in simple terms? All the info I've found so far is either contradictory or too complex and goes over my head.
DDIM and PLMS were the original samplers and were part of the Latent Diffusion repository. Their names come from the papers that introduced them: Denoising Diffusion Implicit Models and Pseudo Numerical Methods for Diffusion Models on Manifolds.
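For intuition, a single deterministic DDIM update (eta = 0) looks roughly like this minimal sketch; `model`, the cumulative alpha schedule, and the timestep indices are assumed to come from surrounding diffusion code and are not part of any specific repo:

```python
import torch

def ddim_step(model, x_t, t, t_prev, alpha_bar):
    """One deterministic DDIM step (eta = 0), following the DDIM paper.

    model      -- noise-prediction network, assumed signature model(x, t) -> eps
    x_t        -- current noisy sample
    t, t_prev  -- current and previous timestep indices
    alpha_bar  -- 1-D tensor of cumulative alphas from the noise schedule
    """
    eps = model(x_t, t)                                   # predicted noise
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]

    # Predict x_0 from the current sample and the predicted noise
    x0_pred = (x_t - torch.sqrt(1 - a_t) * eps) / torch.sqrt(a_t)

    # Re-noise the x_0 prediction down to the previous (less noisy) timestep
    return torch.sqrt(a_prev) * x0_pred + torch.sqrt(1 - a_prev) * eps
```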
-
AI art is very dystopian.
yes, https://github.com/CompVis/latent-diffusion
imagen-pytorch
-
Google's StyleDrop can transfer style from a single image
If Google doesn't, someone like lucidrains probably will implement it, just like he did for Imagen and Muse.
- Create a Stable diffusion neural network from scratch.
-
Google just announced an Even better diffusion process.
lucidrains/imagen-pytorch: Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch (github.com)
- Karlo, the first large scale open source DALL-E 2 replication is here
-
training imagen
Hi, can someone guide me a little as to how I can use the LAION dataset to train my Imagen model? How do I download the data, and in what format should it be fed to the https://github.com/lucidrains/imagen-pytorch code?
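A minimal training sketch, loosely following the imagen-pytorch README (argument names may differ between versions, and the image/text batch below is a random placeholder standing in for preprocessed LAION image-caption pairs):

```python
import torch
from imagen_pytorch import Unet, Imagen, ImagenTrainer

# Base 64px U-Net; the README adds a second U-Net for upscaling,
# omitted here for brevity.
unet = Unet(
    dim = 32,
    cond_dim = 512,
    dim_mults = (1, 2, 4, 8),
    num_resnet_blocks = 3,
    layer_attns = (False, True, True, True),
    layer_cross_attns = (False, True, True, True),
)

imagen = Imagen(
    unets = (unet,),
    image_sizes = (64,),
    timesteps = 1000,
    cond_drop_prob = 0.1,
)

trainer = ImagenTrainer(imagen)

# Placeholder batch: in practice these would come from LAION image-text
# pairs, e.g. downloaded with img2dataset and loaded as tensors + captions.
images = torch.randn(4, 3, 64, 64)
texts = ['a photo of a cat', 'a red car', 'a bowl of soup', 'a mountain lake']

loss = trainer(images, texts = texts, unet_number = 1)
trainer.update(unet_number = 1)
```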
-
If everyone in this sub made a donation of $10, then we could train a truly open Stable Diffusion.
If we were to put money into training something, I'd hope we use a better model, like Imagen.
- AI Content Generation, Part 1: Machine Learning Basics
-
DALL-E 2 is switching to a credits system (50 generations for free at first, 15 free per month)
I've been messing around with this open-source implementation. You can get a pretty good idea of the model size just by copying the parameters from the paper.
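If you want to sanity-check a re-implementation against the sizes quoted in a paper, counting the parameters of the instantiated PyTorch modules is enough; a generic sketch (the module here is a toy stand-in, not the actual model):

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters in a PyTorch module."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Toy stand-in module; in practice you would pass the U-Net / text-encoder
# modules built from the hyperparameters listed in the paper.
toy = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
print(f"{count_parameters(toy):,} parameters")
```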
-
Protests erupt outside of DALL-E offices after pricing implementation, press photograph
I'm waiting on this implementation/training of imagen: https://github.com/lucidrains/imagen-pytorch
-
Show HN: Food Does Not Exist
I'm honestly surprised that they trained a StyleGAN. Recently, the Imagen architecture has been shown to be simpler in structure, easier to train, and faster to produce good results. Combined with the "Elucidating" paper by NVIDIA's Tero Karras, you can train a 256px Imagen to tolerable quality within an hour on an RTX 3090.
Here's a PyTorch implementation by the LAION people:
https://github.com/lucidrains/imagen-pytorch
And here are 2 images I sampled after training it for a few hours (about 2 hours for the base model + 4 hours for the upscaler):
https://imgur.com/a/46EZsJo
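The Karras "Elucidating" setup mentioned above is exposed in imagen-pytorch as ElucidatedImagen; a rough configuration sketch based on the README follows (the hyperparameters are illustrative defaults, not the ones used for the food images, and argument names may vary between versions):

```python
from imagen_pytorch import Unet, ElucidatedImagen, ImagenTrainer

unet = Unet(
    dim = 128,
    cond_dim = 512,
    dim_mults = (1, 2, 4, 8),
    num_resnet_blocks = 2,
    layer_attns = (False, False, True, True),
)

# ElucidatedImagen swaps the DDPM objective for the EDM formulation from
# Karras et al., "Elucidating the Design Space of Diffusion-Based
# Generative Models".
imagen = ElucidatedImagen(
    unets = (unet,),
    image_sizes = (256,),
    cond_drop_prob = 0.1,
    num_sample_steps = 32,
    sigma_min = 0.002,
    sigma_max = 80,
    sigma_data = 0.5,
    rho = 7,
)

trainer = ImagenTrainer(imagen)
```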
What are some alternatives?
disco-diffusion
dalle-mini - DALL·E Mini - Generate images from a text prompt
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
hent-AI - Automation of censor bar detection
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
dalle-2-preview
DeepCreamPy - deeppomf's DeepCreamPy + some updates
stable-diffusion
CogVideo - Text-to-video generation. The repo for ICLR2023 paper "CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers"
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality