disco-diffusion vs shared-tensor

| | disco-diffusion | shared-tensor |
|---|---|---|
| Mentions | 22 | 1 |
| Stars | 7,457 | 32 |
| Growth | 0.1% | - |
| Activity | 0.0 | 10.0 |
| Last commit | 10 months ago | over 8 years ago |
| Language | Jupyter Notebook | C |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
disco-diffusion
- Halloween 2022
Disco Diffusion, a framework like Stable Diffusion, which came out about 13 months ago: https://github.com/alembics/disco-diffusion
- Which is your favorite text to image model overall?
Runners-up are Craiyon (for being more "creative" than SD), Disco Diffusion, minDALL-E, and CLIP Guided Diffusion.
- AI Generated Music Video using Disco Diffusion software
From the Disco Diffusion GitHub: "A frankensteinian amalgamation of notebooks, models and techniques for the generation of AI Art and Animations."
- List of open source machine learning AI image generation/text-to-image libraries that can be installed on an Amazon GPU instance? e.g. MinDall-E, Disco Diffusion, Pixray
- Colab notebook "Disco Diffusion v5.6, Inpainting_mode by cut_pow" by kostarion. From the developer: "Inpainting mode in #DiscoDiffusion! I've finally made the parametrised guided inpainting for disco, and applied it for more stable 2D and 3D animations. In the thread I show what's in there."
- I used an AI to create EVE Online themed Art!
- A good tutorial to get started?
Google Colab is probably the easiest way to run DD. To find the most recent version, go to the GitHub page and open the link to the Colab notebook. Initially you'll probably just want to experiment with the prompts, but Zippy's Disco Diffusion Cheatsheet v0.3 is also a useful place to learn more.
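For reference, a minimal sketch of the kind of prompt cell you edit in the Colab notebook before running it. The variable names (`text_prompts`, `image_prompts`) and the ":weight" suffix follow the Disco Diffusion v5.x notebooks as commonly shared; exact defaults differ between versions, so treat this as illustrative rather than authoritative.

```python
# Illustrative prompt cell for a Disco Diffusion (v5.x-style) Colab notebook.
# Prompts are keyed by animation frame; each string may end in ":weight".
text_prompts = {
    0: [
        "A beautiful painting of a lighthouse on a rocky coast, artstation:2",
        "blurry, low detail:-1",  # negative weight steers the image away from this
    ],
}

image_prompts = {}  # optionally guide with reference images instead of, or alongside, text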
-
Free/open-source AI Text-To-Image Models that can be run on AWS?
You can probably port Disco Diffusion pretty easily. It's already available on Google Colab, so it should be straightforward. Their GitHub is: https://github.com/alembics/disco-diffusion
- Protests erupt outside of DALL-E offices after pricing implementation, press photograph
https://www.reddit.com/r/DiscoDiffusion/, https://github.com/alembics/disco-diffusion. As far as I'm aware, the only way to use this is via Google Colab, which makes it rather difficult to use.
- First nice portrait on 5.6 running locally on 2070 (comparison untouched / GFPGAN)
https://github.com/alembics/disco-diffusion
shared-tensor
- DALL-E 2 open source implementation
This needs distributed training...
Years ago I made a shared tensor library[1] which should allow people to do training in a distributed fashion around the world. Even with relatively slow internet connections, training should still make good use of all the compute available because the whole lot runs asynchronously with highly compressed and approximate updates to shared weights.
The end result is that every bit of computation added has some benefits.
Obviously, for a real large-scale effort, anti-cheat and anti-spam mechanisms would be needed to ensure nodes aren't deliberately sending bad data to hurt the group effort.
[1]: https://github.com/Hello1024/shared-tensor
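The comment stays high level, but the mechanism it describes - each node trains locally and asynchronously exchanges highly compressed, approximate deltas of a shared weight vector - can be sketched in a few lines. This is a toy illustration under assumptions of my own (NumPy arrays, top-k sparsification of the delta); it is not shared-tensor's actual C API.

```python
import numpy as np

def compress(delta, k=100):
    """Keep only the k largest-magnitude entries of a weight delta (lossy compression)."""
    idx = np.argsort(np.abs(delta))[-k:]
    return idx, delta[idx]

def apply_update(weights, idx, values):
    """Apply a sparse, approximate update received from a peer, in place."""
    weights[idx] += values

def local_step(weights):
    """Stand-in for a real gradient step on local data."""
    return weights + 1e-3 * np.random.randn(weights.size).astype(np.float32)

# One worker's loop: train locally, then broadcast the sparsified change in
# weights since the last sync. Peers apply whatever arrives, whenever it
# arrives -- there is no global synchronization barrier.
weights = np.zeros(10_000, dtype=np.float32)
last_synced = weights.copy()

for _ in range(10):
    weights = local_step(weights)

idx, values = compress(weights - last_synced)  # this sparse delta is what gets sent
last_synced = weights.copy()

# On a receiving peer:
peer_weights = np.zeros_like(weights)
apply_update(peer_weights, idx, values)
```

The point of compressing and approximating the updates, as the comment argues, is that even peers on slow connections keep contributing useful work: staleness and approximation are tolerated rather than synchronized away.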
What are some alternatives?
latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models
DALLE2-pytorch - Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch
dalle-2-preview
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
big-sleep - A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
VQGAN-CLIP - Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
artroom-stable-diffusion
guided-diffusion
CLIP-Guided-Diffusion - Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab.