stable-diffusion (DISCONTINUED) vs waifu-diffusion
| | stable-diffusion | waifu-diffusion |
|---|---|---|
| Mentions | 142 | 28 |
| Stars | 2,438 | 1,926 |
| Growth | - | - |
| Activity | 9.8 | 0.0 |
| Last commit | over 1 year ago | about 1 year ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion
- [Machine Learning] [P] Run Stable Diffusion on your M1 Mac's GPU
- It's time!
-
Anybody running SD on a MacBook Pro? What are you using and how did you install it?
Yes, you can install it with Python! https://github.com/lstein/stable-diffusion works with macOS, and you can control all the common parameters via its WebUI or CLI :)
-
How do I save the arguments for images I create when using the terminal? (Apple M1 Pro)
I'm using the lstein fork ("dream"), and when I create an image from the terminal, it also writes back to the terminal like this:
-
I Resurrected “Ugly Sonic” with Stable Diffusion Textual Inversion
Stable Diffusion is wild - the space has been developing quickly, and watching the pace of development makes me reconsider what I consider "staggering". I've been blown away. The accessibility of this technology is even more incredible - there's even a fork that works on M1 Macs (https://github.com/lstein/stable-diffusion)
We are in for some interesting times. Whatever the next iteration of Textual Inversion is will be extremely disruptive, especially if the concepts continue to be developed collectively.
-
AI Seamless Texture Generator Built-In to Blender
Oh, it generates from a text prompt, not a sample texture. I thought this was just a tool to generate wrapped textures from non-wrapped ones.
The licensing is a mess. The Blender plug-in is GPL 3, the stable diffusion code is MIT, and the weights for the model have a very restrictive custom license.[1] Whether the weights, which are program-generated, are copyrightable is a serious legal question.
[1] https://github.com/lstein/stable-diffusion/blob/61f46cac31b5...
> Whenever I ask for something like ‘seamless tiling xxxxxx’ it kinda sorta gets the idea, but the resulting texture doesn’t quite tile right.
Getting seamless tiling requires more than just having "seamless tiling" in the prompt. It also depends on whether the fork you're using has that feature at all.
https://github.com/lstein/stable-diffusion has the feature, but you need to pass it outside the prompt. So if you use the `dream.py` prompt CLI, you can pass it `"Hats on the ground" --seamless` and it should be perfectly tileable.
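The reason a `--seamless` flag has to live outside the prompt is that seamless generation is a model-level change, not a text one: forks typically implement it by switching the convolution layers' border padding to circular ("wrap") mode, so each edge of the image continues from the opposite edge. A toy numpy sketch of that padding idea (the array and values here are illustrative assumptions, not the fork's actual code):

```python
import numpy as np

# Toy "feature map": a 4x4 grid with distinct values.
x = np.arange(16).reshape(4, 4)

# Zero padding (the usual conv-layer default) puts dead borders
# around the image, which is why plain generations don't tile.
zero_padded = np.pad(x, 1, mode="constant")

# Circular ("wrap") padding makes each border continue from the
# opposite edge -- the property seamless/tileable generation relies on.
wrap_padded = np.pad(x, 1, mode="wrap")

print(wrap_padded)
```

With wrap padding, the top border row is a copy of the bottom row of the original array (and likewise for every edge), so convolutions see the texture as if it were already tiled.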
-
Auto SD Workflow - Update 0.2.0 - "Collections", Password Protection, Brand new UI + more
From https://github.com/lstein/stable-diffusion
Yes, works perfectly fine as of 1e8e5245ebca5211e271f35a3a849dee8f4793d2 which contains the performance improvements. Probably works fine with later commits too but I haven't personally tested them so won't vouch for it.
waifu-diffusion
- AI: "All Your Horny Belong to Us"
-
Cool Japan Diffusion 2.1.1 has been released! 🎉
It's slander to say that the WD team is milking donations in the same way UD is. They already have a public working model which they are continuing to train. The WD team has also implemented features like aspect ratio bucketing into their custom trainer. If they were really milking the project for donations, they would have just used the base CompVis trainer it was forked from.
-
- From one of the original DreamBooth authors: Stop using SKS as the initializer word
Use one of the original SD repos, or the code for Waifu Diffusion, or the Smirkingface refactor.
- Stable Diffusion links from around October 4, 2022 that I collected for further processing
-
These images of Senko were Generated by AI (Part 2 - Halloween themed)
My model is based on waifu-diffusion.
- Pasar Malam outside Yishun MRT Station
-
How many public models are there?
I did post a comment on that comment, asking what models they were speaking of. No reply sadly, lmao. u/Rogerooo was helpful enough to direct me to Trinart, though. There's also Waifu Diffusion and Pokemon.
-
Waifu-Diffusion v1-2: An SD 1.4 model fine-tuned on 56k Danbooru images for 5 epochs
Training Code: https://github.com/harubaru/waifu-diffusion
-
"a leopard's head on a headless peacock's body..." Full prompt and more info in the comments
Created with the Gradio UI from harubaru's fork and an earlier version of hlky's. Prompt and input parameters for the first batch. If anyone wants the full prompt(s) for any of the other generations, let me know here.
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
diffusers-uncensored - Uncensored fork of diffusers
txt2imghd - A port of GOBIG for Stable Diffusion
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/sd-webui/stable-diffusion-webui]
merge-models - Merges two latent diffusion models at a user-defined ratio
stable-diffusion-webui - Stable Diffusion web UI [Moved to: https://github.com/Sygil-Dev/sygil-webui]
stable-diffusion-gui - Windows GUI for Stable Diffusion
dream-textures - Stable Diffusion built-in to Blender
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
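The merge-models entry above blends two latent diffusion checkpoints at a user-defined ratio. The core idea is a per-parameter weighted average of the two models' state dicts; a minimal sketch under that assumption (the dict layout and values here are illustrative, not the tool's actual format):

```python
import numpy as np

def merge_state_dicts(a, b, alpha=0.5):
    """Blend two model state dicts at a user-defined ratio.

    alpha=0.0 keeps model a unchanged; alpha=1.0 yields model b.
    Illustrative sketch: real merge tools operate on full
    latent-diffusion checkpoints with matching parameter names.
    """
    return {k: (1.0 - alpha) * a[k] + alpha * b[k] for k in a}

# Toy "models": two small parameter tensors each.
model_a = {"w": np.zeros(3), "b": np.array([1.0])}
model_b = {"w": np.ones(3), "b": np.array([3.0])}

# 25% of model b mixed into model a.
merged = merge_state_dicts(model_a, model_b, alpha=0.25)
```

Because the interpolation is element-wise, both checkpoints must share the same architecture; merging only makes sense between models fine-tuned from a common base.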