| | examples | stable-diffusion-webui |
|---|---|---|
| Mentions | 12 | 2,808 |
| Stars | 789 | 131,121 |
| Growth | 1.0% | - |
| Activity | 7.2 | 9.9 |
| Latest commit | 4 months ago | 5 days ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
examples
- SD 1.4: Switching CLIP with a new encoder
Hello everyone, I am trying to fine-tune a Stable Diffusion 1.4 model to work on specific images that require specific descriptions. I am following this GitHub repo, which is a fork of the original one; I have 12,000 images and I am at the 20th epoch with a loss of 0.199: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning
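For readers wondering what "fine-tuning" means mechanically here: the linked repo uses a CompVis/Lightning training setup, but the core step is the standard text-to-image diffusion objective. Below is a minimal sketch of one training step written against Hugging Face diffusers instead; the model ID, learning rate, and function shape are illustrative assumptions, not the repo's actual code.

```python
# Sketch of one Stable Diffusion fine-tuning step (diffusers-style).
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)  # illustrative LR

def training_step(pixel_values, captions):
    # Encode images to latents and captions to conditioning embeddings.
    latents = vae.encode(pixel_values).latent_dist.sample() * 0.18215
    ids = tokenizer(captions, padding="max_length", truncation=True,
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids
    encoder_hidden_states = text_encoder(ids)[0]

    # Add noise at a random timestep; the UNet learns to predict that noise.
    noise = torch.randn_like(latents)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                      (latents.shape[0],))
    noisy = noise_scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states).sample

    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The "loss of 0.199" quoted above is this MSE between predicted and actual noise, which is why it plateaus rather than dropping toward zero.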
- Custom model training question
It seems there are two ways: 1) use the Dreambooth technique (JoePenna's, Shivam's, or TheLastBen's repos), or 2) train on top of the original stable-diffusion model (as described, for example, at https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning, or in XavierXiao's repo)
- Differences between a hypernetwork, embedding and Dreambooth models?
If you want to make or customize a model:
  - There's fine-tuning a model (not Dreambooth). You're essentially continuing the training process that the SD authors used. It requires professional-grade AI hardware and takes a while; people seem to not even know this exists. You start with some base model (usually plain SD, but it could be any model) and fine-tune it. You should assume the process of fine-tuning will make it unsuitable for anything else -- for instance, if you tune on one person's face, expect it to never generate anyone else's face, and if you fine-tune on one art style, any other art style may suck.
  - Dreambooth is a different method for fine-tuning a model, needing a fraction of the power and time "real" fine-tuning does. But it still takes a lot of power: the most optimized Dreambooth tools take 12GB of VRAM, and most graphics cards don't even have that.
  - There are several competitors to the Dreambooth method, such as EveryDream, which claim better results and sometimes claim to need only one photo. I'm not sure how things have really played out, especially since you can't tell the difference between "this method sucks" and "this method is great but everyone is using it wrong".
  - Hypernetworks take less time and power than Dreambooth; some users report they work better for style training than Dreambooth does.
  - Textual inversions (TIs, i.e. embeddings) take the least time and power; I recently saw a training method for 6GB VRAM cards. (A sketch of why TI is so cheap follows after this list.)
  - Aesthetic gradients don't need training! :)
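The reason textual inversion sits at the cheap end of that spectrum is that it trains only a single new token embedding while every model weight stays frozen. A rough sketch of that idea, assuming the SD 1.x text encoder; real trainers (e.g. the webui's built-in TI tab) add dataloading, prompt templates, and checkpointing on top:

```python
# Textual inversion in miniature: one trainable embedding row, all else frozen.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "openai/clip-vit-large-patch14"  # the SD 1.x text encoder
tokenizer = CLIPTokenizer.from_pretrained(model_id)
text_encoder = CLIPTextModel.from_pretrained(model_id)

# Register a placeholder token and grow the embedding table by one row.
tokenizer.add_tokens(["<my-style>"])
text_encoder.resize_token_embeddings(len(tokenizer))
new_id = tokenizer.convert_tokens_to_ids("<my-style>")

# Freeze everything, then re-enable gradients only for the embedding table.
for p in text_encoder.parameters():
    p.requires_grad_(False)
embeddings = text_encoder.get_input_embeddings()
embeddings.weight.requires_grad_(True)

# Mask gradients so only the new token's row is ever updated.
mask = torch.zeros(len(tokenizer), 1)
mask[new_id] = 1.0
embeddings.weight.register_hook(lambda grad: grad * mask)

optimizer = torch.optim.AdamW([embeddings.weight], lr=5e-3)
# The usual diffusion loss (see the fine-tuning sketch earlier) drives this
# optimizer; the learned row is what gets saved as the tiny embedding file.
```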
- Can't clone from Huggingface?
- Huggingface cloning not working, more info inside
- Was told to crosspost here. My new D&D model!! Trained for 30,000 steps on 2,500 manually labelled images. Questions and advice welcome!
I BLIP-captioned the images to try and retrain using this approach: https://github.com/LambdaLabsML/examples/blob/main/stable-diffusion-finetuning/pokemon_finetune.ipynb. I used the BLIP captions and then put "D&D character {race}" in front, where race was the race I manually annotated. After that, for Dreambooth (I roughly followed this: https://www.youtube.com/watch?v=7bVZDeGPv6I), you don't need to rename the images; just put them in the same folder, which you specify in a JSON file that Dreambooth reads to know how to handle each class
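The captioning-plus-prefix step described there is easy to reproduce. A hedged sketch using the transformers BLIP captioning model; the dataset paths and the `races` annotation mapping are hypothetical stand-ins for the poster's manual labels:

```python
# BLIP-caption a folder of images and prepend a manually annotated race tag.
from pathlib import Path
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

races = {"goblin_01.png": "goblin", "elf_02.png": "elf"}  # manual annotations

for path in Path("dataset").glob("*.png"):
    inputs = processor(images=Image.open(path).convert("RGB"),
                       return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    caption = processor.decode(out[0], skip_special_tokens=True)
    race = races.get(path.name, "human")
    # Prepend the annotated race, as the post describes.
    path.with_suffix(".txt").write_text(f"D&D character {race}, {caption}")
```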
- My new D&D model!! Trained for 30,000 steps on 2,500 manually labelled images
Trained a Dreambooth model from the v1.5 checkpoint. I tried fine-tuning the model using this approach: https://github.com/LambdaLabsML/examples/blob/main/stable-diffusion-finetuning/pokemon_finetune.ipynb, but I didn't achieve results I liked
- How To Fine Tune Stable Diffusion: Naruto Character Edition
Thank you! This model training did not use Dreambooth. Here is the reference repo I used; it is based on the original training repo for Stable Diffusion. Dreambooth is a more sophisticated framework, and I am very interested in doing a side-by-side comparison against this model as a follow-up.
- [D] DreamBooth Stable Diffusion training now possible in 24GB GPUs, and it runs about 2 times faster.
- [P] Stable Diffusion finetuned on Pokemon!
Code and details: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning
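If you just want to try the result rather than retrain it, Lambda published the fine-tuned weights on the Hugging Face Hub. A minimal sketch, assuming the `lambdalabs/sd-pokemon-diffusers` model ID (check the repo's README for the canonical one):

```python
# Generate with the fine-tuned Pokemon model via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "lambdalabs/sd-pokemon-diffusers", torch_dtype=torch.float16
).to("cuda")

# Any short prompt works; the fine-tune renders it in Pokemon style.
image = pipe("Yoda", guidance_scale=7.5).images[0]
image.save("yoda_pokemon.png")
```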
stable-diffusion-webui
- Show HN: I made an app to use local AI as daily driver
* LLaVA model: I'll add more documentation. You are right, LLaVA cannot generate images. I don't have immediate plans for image generation, but check out these projects for running it locally:
- https://diffusionbee.com/
- https://github.com/comfyanonymous/ComfyUI
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
- AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
I would love to have a native Stable Diffusion experience; my RX 580 takes 30s to generate a single image. But it does work after following https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki...
I got this up and running on my windows machine in short order and I don't even know what stable diffusion is.
But again, it would be nice to have first class support to locally participate in the fun.
- Ask HN: What is the state of the art in AI photo enhancement?
In Auto1111, that just uses Image.blend. :)
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob...
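For reference, PIL's `Image.blend` is nothing more than a per-pixel linear interpolation, `out = im1 * (1 - alpha) + im2 * alpha`. A small sketch with placeholder filenames, blending an enhanced version back into the original to tame over-processing:

```python
# Blend an enhanced image back into the original with PIL.
from PIL import Image

original = Image.open("photo.png").convert("RGB")
enhanced = Image.open("photo_enhanced.png").convert("RGB").resize(original.size)

# alpha=0.5 mixes the two equally; smaller alpha keeps more of the original.
blended = Image.blend(original, enhanced, alpha=0.5)
blended.save("photo_blended.png")
```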
- How To Increase Performance Time on MacOS
- Can anyone suggest an AI model that can help me enhance a poorly drawn logo?
I used SDXL in the AUTOMATIC1111 webui for both images. Now that I think about it, the procedure I described was how I made this one, but the one that looks like an illustration was done in two steps. I used the canny ControlNet, as I said, for the outer part of the logo to preserve the shape of the fonts, but I had to turn it off for the boot to give SDXL leeway to add detail and make it look more like a boot.
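The same canny-ControlNet-over-SDXL workflow can be reproduced outside the webui. A hedged sketch with diffusers; the model IDs, filenames, prompt, and conditioning scale are illustrative assumptions:

```python
# Condition SDXL on a canny edge map of the logo via ControlNet.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Build a canny edge map of the logo to condition on.
logo = np.array(Image.open("logo.png").convert("RGB"))
edges = cv2.Canny(logo, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A lower conditioning scale gives SDXL more leeway to invent detail,
# which mirrors the "turn it off for the boot" trick in the post.
image = pipe(
    "a leather boot logo, detailed illustration",
    image=control_image,
    controlnet_conditioning_scale=0.5,
).images[0]
image.save("logo_enhanced.png")
```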
- Seeking out an experienced and empathetic coding buddy.
That said, please do learn coding, and don't get discouraged when somebody says to learn PyTorch or recommends a Jupyter notebook with no further information on how to translate the skill into images. I would highly recommend some short-term goals. Get your feet wet by taking apart the UIs: the Comfy API documentation is here and the A1111 API documentation is here (a minimal A1111 API call is sketched below). There is a difference in completeness -- welcome to programming. Writing nodes or plugins is also a good way to jump into this world. Custom wildcard logic might be very attractive to you if you aren't the type that wants to deal with a nested file structure to simulate logic.
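As a concrete starting point for "taking apart the UIs": the A1111 webui exposes a small REST API when launched with the `--api` flag. A minimal txt2img call looks roughly like this (endpoint and response shape per the webui's built-in `/docs` page; the payload values are illustrative):

```python
# Minimal txt2img request against a locally running A1111 webui (--api).
import base64
import requests

payload = {"prompt": "a watercolor fox", "steps": 20,
           "width": 512, "height": 512}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
                  json=payload, timeout=300)
r.raise_for_status()

# Images come back as base64-encoded PNGs.
for i, b64 in enumerate(r.json()["images"]):
    with open(f"out_{i}.png", "wb") as f:
        f.write(base64.b64decode(b64))
```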
- can't get it working with an AMD gpu
- SD extension that allows for setting override
Possibly Unprompted? https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8094
- Need to write an application to use Stable Diffusion on my desktop PC - which resource should I learn to use?
- 4090 Speed Decrease on each Generation/Iteration
version: v1.6.1 • python: 3.10.13 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2 • checkpoint: 6e8d4871f8
What are some alternatives?
stable-diffusion
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
artbot-for-stable-diffusion - A front-end GUI for interacting with the AI Horde / Stable Diffusion distributed cluster
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
SHARK - SHARK - High Performance Machine Learning Distribution
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
safetensors - Simple, safe way to store and distribute tensors
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
sd-webui-additional-networks
CodeFormer - [NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.