DALLE-pytorch vs imagen-pytorch


| | DALLE-pytorch | imagen-pytorch |
|---|---|---|
| Mentions | 20 | 47 |
| Stars | 5,598 | 8,169 |
| Growth | 0.1% | 0.4% |
| Activity | 2.5 | 4.8 |
| Last commit | 12 months ago | 4 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DALLE-pytorch
- The Eleuther AI Mafia
- Thoughts on AI image generators from text
Here you go: https://github.com/lucidrains/DALLE-pytorch
- [P] DALL·E Mini & Mega demo and production API
Here are some other implementations of Dalle clones in Pytorch by various authors in the ML and DL community: https://github.com/lucidrains/DALLE-pytorch
- New text-to-image network from Google beats DALL-E
- [Project] DALL-3 - generate better images with fewer tokens through clip guided diffusion
If in general DDPM > GAN > VAE, why do transformer image generators all use VQVAE to decode images? Wouldn't it be better to use a diffusion model? I was wondering about this and started experimenting with different ways to decode vector-quantized embeddings with a diffusion model - see discussion here. After a lot of trial and error I got something that works pretty well.
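For readers unfamiliar with the idea, here is a toy sketch of what "decoding vector-quantized embeddings with a diffusion model" can look like: a small epsilon-prediction network is conditioned on the VQ embedding grid, instead of a VQ-VAE decoder reconstructing the image directly. The module and tensor names are illustrative placeholders, not code from the DALL-3 project.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Predicts the noise added to an image, conditioned on the VQ embedding grid."""
    def __init__(self, embed_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + embed_dim + 1, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, 3, 3, padding=1),
        )

    def forward(self, noisy_images, t, cond):
        # broadcast the normalized timestep as an extra channel
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *noisy_images.shape[2:])
        # upsample the VQ embedding grid to image resolution and concatenate
        cond = F.interpolate(cond, size=noisy_images.shape[2:], mode='nearest')
        return self.net(torch.cat([noisy_images, cond, t_map], dim=1))

# standard DDPM linear beta schedule
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

# stand-ins for a real batch: 32x32 RGB images and an 8x8 grid of VQ embeddings
images = torch.randn(4, 3, 32, 32)
vq_embeddings = torch.randn(4, 64, 8, 8)

# one DDPM training step: noise the image at a random timestep, predict that noise
t = torch.randint(0, T, (images.shape[0],))
noise = torch.randn_like(images)
a = alphas_cumprod[t].view(-1, 1, 1, 1)
noisy = a.sqrt() * images + (1 - a).sqrt() * noise

loss = F.mse_loss(model(noisy, t / T, vq_embeddings), noise)
opt.zero_grad()
loss.backward()
opt.step()
```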
- Still waiting for dall-e
- Ask HN: Computer Vision Project Ideas?
- "Discrete VAE", used as the backbone for OpenAI's DALL-E, reimplimented here (and other places) https://github.com/lucidrains/DALLE-pytorch (code for training a discrete VAE)
- Crawling@Home: Help Build The World's Largest Image-Text Pair Dataset!
Here's the DALLE-pytorch git repo.
- (from the discord stream) I'm so hyped for this game. This generation is really good.
I am very excited. When AI Dungeon was released and I saw them filtering stuff, I thought that one day there would be an open-source version of this without filters; the same goes for any future open-sourced GPT-X. Now if we can also train an open-source DALL-E and integrate it into NovelAI, wouldn't that be even more awesome?
- When was the last time you got as excited about something as a child?
Maybe with https://github.com/lucidrains/DALLE-pytorch and https://github.com/kobiso/DALLE-reproduction
imagen-pytorch
- Google's StyleDrop can transfer style from a single image
If Google doesn't, someone like lucidrains probably will implement it, just like he did for Imagen and Muse.
- Create a Stable diffusion neural network from scratch.
- Google just announced an even better diffusion process.
lucidrains/imagen-pytorch: Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch (github.com)
- Karlo, the first large scale open source DALL-E 2 replication is here
- training imagen
Hi, can someone guide me a little as to how I can use the LAION dataset to train my Imagen model? Like how I can download the data, and in which format it should be fed to the https://github.com/lucidrains/imagen-pytorch code?
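In case it helps, here is a minimal training sketch along the lines of the imagen-pytorch README (exact argument names can differ between versions); the random tensors below stand in for the (image, caption) pairs you would load from LAION with your own DataLoader.

```python
import torch
from imagen_pytorch import Unet, Imagen

# base 64px unet and a 64 -> 256px super-resolution unet
unet1 = Unet(dim = 32, cond_dim = 512, dim_mults = (1, 2, 4, 8),
             num_resnet_blocks = 3,
             layer_attns = (False, True, True, True),
             layer_cross_attns = (False, True, True, True))

unet2 = Unet(dim = 32, cond_dim = 512, dim_mults = (1, 2, 4, 8),
             num_resnet_blocks = (2, 4, 8, 8),
             layer_attns = (False, False, False, True),
             layer_cross_attns = (False, False, False, True))

imagen = Imagen(
    unets = (unet1, unet2),
    image_sizes = (64, 256),   # resolution handled by each unet
    timesteps = 1000,
    cond_drop_prob = 0.1       # classifier-free guidance dropout
)

# each training example is just an RGB image tensor plus its caption string
images = torch.randn(4, 3, 256, 256)
texts = ['a photo of a dog'] * 4

for unet_number in (1, 2):
    loss = imagen(images, texts = texts, unet_number = unet_number)
    loss.backward()

# after training, sample from text
samples = imagen.sample(texts = ['a photo of a dog'], cond_scale = 3.)
```

The key point is the data format: each example is just an image tensor plus its caption string; the library encodes the text with a pretrained T5 by default and handles the base/upscaler split internally.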
- If everyone in this sub makes a donation of $10, then we can train a truly open Stable Diffusion.
If we were to put money into training something, I'd hope we use a better model, like Imagen.
- AI Content Generation, Part 1: Machine Learning Basics
- DALL-E 2 is switching to a credits system (50 generations for free at first, 15 free per month)
I've been messing around with this open-source implementation. You can get a pretty good idea of the model size by just copying the parameters from the paper.
- Protests erupt outside of DALL-E offices after pricing implementation, press photograph
I'm waiting on this implementation/training of imagen: https://github.com/lucidrains/imagen-pytorch
- Show HN: Food Does Not Exist
I'm honestly surprised that they trained a StyleGAN. Recently, the Imagen architecture has been shown to be simpler in structure, easier to train, and even faster at producing good results. Combined with the "Elucidating" paper by NVIDIA's Tero Karras, you can train a 256px Imagen to tolerable quality within an hour on an RTX 3090.
Here's a PyTorch implementation by the LAION people:
https://github.com/lucidrains/imagen-pytorch
And here are 2 images I sampled after training it for some hours, like 2 hours for the base model + 4 hours for the upscaler:
https://imgur.com/a/46EZsJo
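For reference, the elucidated variant the commenter refers to is exposed in imagen-pytorch as ElucidatedImagen. The sketch below follows the README-style example, with hyperparameters taken from the README defaults rather than whatever the commenter actually used.

```python
import torch
from imagen_pytorch import Unet, ElucidatedImagen

unet1 = Unet(dim = 32, cond_dim = 512, dim_mults = (1, 2, 4, 8),
             num_resnet_blocks = 3,
             layer_attns = (False, True, True, True),
             layer_cross_attns = (False, True, True, True))

unet2 = Unet(dim = 32, cond_dim = 512, dim_mults = (1, 2, 4, 8),
             num_resnet_blocks = (2, 4, 8, 8),
             layer_attns = (False, False, False, True),
             layer_cross_attns = (False, False, False, True))

imagen = ElucidatedImagen(
    unets = (unet1, unet2),
    image_sizes = (64, 256),        # base model at 64px, upscaler to 256px
    cond_drop_prob = 0.1,
    num_sample_steps = (64, 32),    # far fewer sampling steps than vanilla DDPM
    sigma_min = 0.002,              # noise-schedule parameters from Karras et al.
    sigma_max = (80, 160),
    sigma_data = 0.5,
    rho = 7,
    P_mean = -1.2,
    P_std = 1.2,
    S_churn = 80,
    S_tmin = 0.05,
    S_tmax = 50,
    S_noise = 1.003,
)

# toy batch; in practice this is your 256px image + caption dataset
images = torch.randn(4, 3, 256, 256)
texts = ['a plate of food that does not exist'] * 4

for unet_number in (1, 2):
    loss = imagen(images, texts = texts, unet_number = unet_number)
    loss.backward()

samples = imagen.sample(texts = ['a plate of food that does not exist'], cond_scale = 3.)
```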
What are some alternatives?
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
min-dalle - min(DALL·E) is a fast, minimal port of DALL·E Mini to PyTorch
DALL-E - PyTorch package for the discrete VAE used for DALL·E.
dalle-mini - DALL·E Mini - Generate images from a text prompt
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
CogVideo - text and image to video generation: CogVideoX (2024) and CogVideo (ICLR 2023)
vit-pytorch - Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
deep-daze - Simple command line tool for text to image generation using OpenAI's CLIP and Siren (Implicit neural representation network). Technique was originally created by https://twitter.com/advadnoun
DeepCreamPy - deeppomf's DeepCreamPy + some updates
open_clip - An open source implementation of CLIP.
hent-AI - Automation of censor bar detection

